A few weeks ago, Ramon van Handel and I uploaded a paper entitled “Sharp nonasymptotic bounds on the norm of random matrices with independent entries” to the arXiv. In this blog post, I’ll attempt to describe and motivate the results there.
The main motivation comes from the vast number of applications for bounds on the spectral norm of random matrices, some of which I have described on this blog (see here, here, and here). In many (applied) math problems one needs to control the spectral norm of a certain application-specific matrix. A common approach, which has found great success, is to introduce randomness in the models and control this quantity for certain random matrices. Perhaps the simplest example of how randomness can help is the spectral norm of a standard Wigner matrix, an $n \times n$ symmetric matrix with iid standard gaussian entries. As each entry of the matrix is essentially of constant order, the spectral norm of such a matrix could a priori be as large as $n$; however, due to a concentration phenomenon, it is almost always roughly $2\sqrt{n}$. Unfortunately, the random matrices arising in many applications do not correspond to this simple model.
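To make the Wigner concentration concrete, here is a minimal numerical sketch (Python with numpy; the matrix sizes and number of repetitions are arbitrary choices of mine, not from the paper) estimating the expected spectral norm and comparing it to $2\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner(n):
    """Sample an n x n symmetric matrix whose off-diagonal entries are
    standard gaussians (the diagonal has variance 2, which is harmless)."""
    A = rng.standard_normal((n, n))
    return (A + A.T) / np.sqrt(2)

for n in [100, 400, 1600]:
    norms = [np.linalg.norm(wigner(n), 2) for _ in range(10)]
    print(n, round(np.mean(norms), 1), round(2 * np.sqrt(n), 1))
```

Already at these modest sizes the empirical averages hug $2\sqrt{n}$ closely.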
Several fairly general techniques exist to bound the spectral norm of random matrices. A particularly notable one is a non-commutative version of the Khintchine inequality (see here). Recently, there has been some effort in developing an alternative approach to this problem, via “matrix concentration inequalities”, essentially matrix versions of Chernoff-type bounds. The first instance of this is due to Ahlswede and Winter (see here), and it was improved by Oliveira and Tropp. I recommend Tropp’s monograph for a particularly elegant exposition and proof of this type of result, as well as a vast collection of interesting examples.
Although the results mentioned above apply in more general settings, we will restrict our attention to matrices with independent entries. We will focus on bounding the expected value of the spectral norm of matrices with gaussian entries; in fact, it is not difficult to adapt our results to deal with different distributions or to obtain tail bounds (see the paper for more details).
More specifically, consider an $n \times n$ symmetric random matrix $X$ whose entries are independent (up to symmetry) and distributed as

$$X_{ij} = b_{ij}\, g_{ij},$$

where the $g_{ij}$ are iid standard gaussian random variables and the $b_{ij} \geq 0$ are given scalars, so that $b_{ij}^2$ is the variance of the entry $X_{ij}$.
The results described above, when applied to this setting, give the following inequality:

$$\mathbb{E}\|X\| \leq C\,\sigma \sqrt{\log n}, \qquad \text{where } \sigma := \max_i \Big(\sum_j b_{ij}^2\Big)^{1/2},$$

for some universal constant $C$ (the constants tend to be better in the matrix concentration approach).
Unfortunately, this bound fails to capture the right asymptotic order already in the simple Wigner case, where the bound gives $\mathbb{E}\|X\| \lesssim \sqrt{n \log n}$ while $\mathbb{E}\|X\| \asymp \sqrt{n}$. On the other hand, the logarithmic term is needed in some cases: when $X$ is a diagonal matrix ($b_{ij} = \delta_{ij}$), we have $\sigma = 1$ while $\|X\|$ is distributed as the maximum of the absolute values of $n$ iid standard gaussians, and so $\mathbb{E}\|X\| \asymp \sqrt{\log n}$.
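A quick sketch of the diagonal case (same numpy setup as above; the sizes are again my own arbitrary choices), where the spectral norm is simply $\max_i |g_i|$, shows the $\sqrt{2\log n}$ growth:

```python
import numpy as np

rng = np.random.default_rng(1)

for n in [10**2, 10**4, 10**6]:
    # For b_ij = delta_ij, the spectral norm is simply max_i |g_i|.
    samples = [np.abs(rng.standard_normal(n)).max() for _ in range(50)]
    print(n, round(np.mean(samples), 2), round(np.sqrt(2 * np.log(n)), 2))
```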
It is intuitive that if we decrease the variance of some entries of a Wigner matrix, $\mathbb{E}\|X\|$ should not increase. Indeed, such intuition can be made precise and gives the following bound (often attributed to Gordon, see here):

$$\mathbb{E}\|X\| \leq C\,\sigma_* \sqrt{n}, \qquad \text{where } \sigma_* := \max_{ij} b_{ij},$$

for some universal constant $C$. While this explains the Wigner matrix case, it is worse than the bound above in most other situations.
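To see the trade-off between the two bounds, here is a small sketch evaluating $\sigma\sqrt{\log n}$ and $\sigma_*\sqrt{n}$ on the two extreme variance profiles (with all constants set to 1, which is of course only for eyeballing purposes):

```python
import numpy as np

def sigma(b):
    """sigma = max_i sqrt(sum_j b_ij^2): largest Euclidean row norm of (b_ij)."""
    return np.sqrt((b**2).sum(axis=1).max())

n = 1000
profiles = {"Wigner": np.ones((n, n)), "diagonal": np.eye(n)}

for name, b in profiles.items():
    nck = sigma(b) * np.sqrt(np.log(n))   # non-commutative Khintchine scaling
    gordon = b.max() * np.sqrt(n)         # sigma_* sqrt(n), Gordon scaling
    print(f"{name}: sigma*sqrt(log n) = {nck:.1f}, sigma_**sqrt(n) = {gordon:.1f}")
```

Each bound is off by an unbounded factor on one of the two profiles, which is exactly the gap our result closes.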
The purpose of our paper is to provide a general sharp bound for this quantity. To help guide intuition, let us consider an example that interpolates between the two above: given $1 \leq q \leq n$, let us consider a symmetric $n \times n$ matrix $X$ consisting of $n/q$ diagonal blocks of size $q \times q$. On each of the diagonal blocks the entries are iid standard gaussians, and the rest of the matrix is equal to zero. The case $q = n$ corresponds to the Wigner matrix example and $q = 1$ to the diagonal example. It is also easy to see that $\sigma = \sqrt{q}$ and $\sigma_* = 1$. In this family of examples, $\|X\|$ is the maximum spectral norm among $n/q$ independent $q \times q$ standard Wigner matrices. In this case, it is not very hard to show that

$$\mathbb{E}\|X\| \asymp \sqrt{q} + \sqrt{\log n},$$

where the implied constants are universal.
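Here is a Monte Carlo sketch of this interpolating family (sizes and repetitions chosen arbitrarily by me; the comparison with $\sqrt{q} + \sqrt{\log n}$ is only up to the universal constants):

```python
import numpy as np

rng = np.random.default_rng(2)

def block_wigner(n, q):
    """n x n symmetric matrix with n/q diagonal gaussian blocks, zero elsewhere."""
    X = np.zeros((n, n))
    for k in range(n // q):
        A = rng.standard_normal((q, q))
        X[k*q:(k+1)*q, k*q:(k+1)*q] = (A + A.T) / np.sqrt(2)
    return X

n = 1024
for q in [1, 32, 1024]:
    norms = [np.linalg.norm(block_wigner(n, q), 2) for _ in range(5)]
    print(q, round(np.mean(norms), 1), round(np.sqrt(q) + np.sqrt(np.log(n)), 1))
```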
Not only does this observation serve to motivate the inequality that we prove but, in fact, we prove our inequality using a combinatorial comparison that reduces the general problem to this case (see the paper for more details, I really like the proof!). More specifically, we show, for a symmetric $n \times n$ random matrix with gaussian entries as above and for any $0 < \varepsilon \leq 1/2$,

$$\mathbb{E}\|X\| \leq 2(1+\varepsilon)\,\sigma + C(\varepsilon)\,\sigma_* \sqrt{\log n},$$

for some constant $C(\varepsilon)$, depending only on $\varepsilon$.
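As a sanity check, here is a sketch comparing a Monte Carlo estimate of $\mathbb{E}\|X\|$ with the right-hand side for a generic variance profile (I take $\varepsilon = 1/2$ and simply replace $C(\varepsilon)$ by 1 to eyeball the scaling, which the theorem of course does not license):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 500
b = rng.uniform(0, 1, size=(n, n))
b = np.triu(b) + np.triu(b, 1).T           # symmetric variance profile b_ij = b_ji

sigma = np.sqrt((b**2).sum(axis=1).max())  # max_i sqrt(sum_j b_ij^2)
sigma_star = b.max()                       # max_ij b_ij

norms = []
for _ in range(20):
    g = rng.standard_normal((n, n))
    g = np.triu(g) + np.triu(g, 1).T       # symmetric, independent up to symmetry
    norms.append(np.linalg.norm(b * g, 2))

print("Monte Carlo E||X||           ~", round(np.mean(norms), 1))
print("3*sigma + sigma_**sqrt(log n) =",
      round(3 * sigma + sigma_star * np.sqrt(np.log(n)), 1))
```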
This bound is indeed sharp in a very natural sense. As

$$\mathbb{E}\|X\| \geq \max_i \mathbb{E}\|X e_i\| \gtrsim \max_i \Big(\sum_j b_{ij}^2\Big)^{1/2} = \sigma,$$

where $e_i$ is the $i$-th element of the canonical basis, the $\sigma$ term is a lower bound. Also, as long as there are at least $n^{\delta}$ entries (for some fixed $\delta > 0$) with variance comparable to $\sigma_*^2$, the $\sigma_* \sqrt{\log n}$ term is also needed. Also, the gaussian assumption on the entries can be dropped, and it is easy to obtain tail bounds on $\|X\|$ using these bounds on $\mathbb{E}\|X\|$ and standard concentration techniques.
There are many fascinating open problems left unresolved. Perhaps the most important is to provide general sharp bounds for the setting in which the entries of $X$ are correlated. Another interesting question is related to bounded random variables: if instead of gaussian random variables we have Rademacher random variables (weighted by $b_{ij}$, say), then the $\sigma_* \sqrt{\log n}$ term is not always needed. It is known that, even when $d$ is constant, if the $b_{ij}$'s correspond to an adjacency matrix of a $d$-regular graph, then $\mathbb{E}\|X\| \leq C\sqrt{d}$. However, there are examples (such as the block example above) where the logarithmic terms are needed. It would be interesting to understand this phenomenon better; perhaps it is related to combinatorial properties of the underlying graph (described by the $b_{ij}$'s).
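For intuition only, here is a sketch with a circulant $d$-regular graph as a hypothetical stand-in (the construction is mine, and the result alluded to above is more general), checking that $\|X\|/\sqrt{d}$ stays modest as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(4)

def circulant_rademacher(n, d):
    """Adjacency matrix of the circulant d-regular graph (i ~ i +/- 1..d/2 mod n),
    with each edge carrying an independent Rademacher sign."""
    A = np.zeros((n, n))
    for k in range(1, d // 2 + 1):
        for i in range(n):
            A[i, (i + k) % n] = 1.0
    A = A + A.T                              # d-regular, entries in {0, 1}
    eps = np.sign(rng.standard_normal((n, n)))
    eps = np.triu(eps) + np.triu(eps, 1).T   # symmetric Rademacher signs
    return A * eps

d = 10
for n in [200, 800, 3200]:
    X = circulant_rademacher(n, d)
    print(n, round(np.linalg.norm(X, 2) / np.sqrt(d), 2))
```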
Another interesting question, posed by Riemer and Schuett, is whether it is true that, when $X$ has independent gaussian entries,

$$\mathbb{E}\|X\| \asymp \mathbb{E}\max_i \|X e_i\|.$$

Note that our result forces any counterexample to this equality to be very inhomogeneous: whenever there are at least $n^{\delta}$ entries with variance comparable to $\sigma_*^2$, our bound implies this equality.
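A last sketch compares the two sides numerically for a fairly homogeneous random profile (my own choice of profile; by the remark above, the two quantities should then agree up to constants):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 500
b = rng.uniform(0.5, 1.0, size=(n, n))
b = np.triu(b) + np.triu(b, 1).T

spec, rows = [], []
for _ in range(20):
    g = rng.standard_normal((n, n))
    g = np.triu(g) + np.triu(g, 1).T
    X = b * g
    spec.append(np.linalg.norm(X, 2))
    rows.append(np.sqrt((X**2).sum(axis=1)).max())  # max_i ||X e_i||

print("E||X||            ~", round(np.mean(spec), 1))
print("E max_i ||X e_i|| ~", round(np.mean(rows), 1))
```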