A few months ago, while trying to show that the analysis in this paper is optimal (I discussed the paper in a past post), I was faced with the following question:
Given non-negative variance values $\sigma_1^2, \dots, \sigma_n^2$ that sum to $n$, consider an $n \times n$ random matrix whose entries $W_{ij}$ are independent gaussians with mean zero and variance equal to $\sigma_i^2$. For which choice of $\sigma_1^2, \dots, \sigma_n^2$ is the expected value of the average singular value of that matrix maximized?
After thinking about it a bit, I conjectured that this was maximized when $\sigma_1^2, \dots, \sigma_n^2$ were all equal to $1$, and thought this had to have an elementary proof. The result is indeed true, but I wasn't able to come up with an elementary proof for it. I ended up proving it using a non-elementary result that I didn't know about before, the Löwner-Heinz Theorem. The purpose of this blog post is twofold: 1) to challenge the reader to find an elementary proof of this fact, and 2) to briefly present this cool and powerful matrix inequality, by showing how it allowed me to solve the problem above.
Here is a more precise version of the statement:
Theorem 1 Let $G$ be an $n \times n$ random matrix with i.i.d. standard gaussian entries, and let $s_k(M)$ denote the $k$-th singular value of a matrix $M$. Writing $\Sigma = \operatorname{diag}(\sigma_1^2, \dots, \sigma_n^2)$, so that $\Sigma^{1/2} G$ has the distribution of the matrix in the question above, we have

$$\max_{\substack{\Sigma \text{ diagonal},\ \Sigma \succeq 0 \\ \operatorname{Tr}(\Sigma) = n}} \mathbb{E}\left[\frac{1}{n}\sum_{k=1}^{n} s_k\left(\Sigma^{1/2} G\right)\right] = \mathbb{E}\left[\frac{1}{n}\sum_{k=1}^{n} s_k(G)\right].$$
I suggest the reader think about this theorem a bit and try to come up with an elementary proof for it before scrolling down and reading the proof based on the Löwner-Heinz Theorem. If you do come up with an elementary proof, I would love to hear about it.
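If you would like some numerical evidence before investing in a proof attempt, here is a quick Monte Carlo sketch (my own throwaway code, not from the paper; the helper `avg_singular_value` and the chosen profiles are just for illustration) estimating $\mathbb{E}\left[\frac{1}{n}\sum_k s_k(\Sigma^{1/2}G)\right]$ for a few variance profiles summing to $n$:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_singular_value(variances, n_trials=2000):
    """Monte Carlo estimate of E[(1/n) * sum_k s_k(Sigma^{1/2} G)],
    where the i-th row of the matrix has i.i.d. N(0, variances[i]) entries."""
    v = np.asarray(variances, dtype=float)
    n = len(v)
    scale = np.sqrt(v)[:, None]  # Sigma^{1/2}, applied as row scaling
    total = 0.0
    for _ in range(n_trials):
        W = scale * rng.standard_normal((n, n))
        total += np.linalg.svd(W, compute_uv=False).mean()
    return total / n_trials

n = 5
profiles = {
    "uniform": np.ones(n),                          # sigma_i^2 = 1 for all i
    "skewed": np.array([3.0, 1.0, 0.5, 0.3, 0.2]),  # still sums to n
    "spike": np.array([5.0, 0.0, 0.0, 0.0, 0.0]),
}
for name, v in profiles.items():
    assert abs(v.sum() - n) < 1e-12  # feasibility: variances must sum to n
    print(f"{name:>8}: {avg_singular_value(v):.4f}")
```

If Theorem 1 is right, the "uniform" estimate should dominate the other two, up to Monte Carlo error.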
Proof sketch (for the full proof see Theorem 18 in this paper): Recall that we want to show that

$$\mathbb{E}\sum_{k=1}^{n} s_k\left(\Sigma^{1/2} G\right) \le \mathbb{E}\sum_{k=1}^{n} s_k(G), \ \ \ \ (1)$$

for every diagonal $\Sigma \succeq 0$ with $\operatorname{Tr}(\Sigma) = n$, where $G$ is an $n \times n$ matrix with i.i.d. entries $\mathcal{N}(0,1)$. By taking $W = \Sigma^{1/2} G$ and $M = G G^T$, and using the definition of singular value, we can rewrite the left-hand side of (1) as

$$\mathbb{E}\sum_{k=1}^{n} s_k(W) = \mathbb{E}\operatorname{Tr}\left[\left(\Sigma^{1/2} M \Sigma^{1/2}\right)^{1/2}\right] = \mathbb{E}\operatorname{Tr}\left[\left(M^{1/2} \Sigma M^{1/2}\right)^{1/2}\right],$$

where the last equality holds because $\Sigma^{1/2} M \Sigma^{1/2}$ and $M^{1/2} \Sigma M^{1/2}$ have the same eigenvalues. The crucial point of this rewriting is that $\Sigma$ now appears linearly inside the square root.
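This chain of identities is easy to sanity-check numerically. The snippet below (again a sketch of mine, using scipy.linalg.sqrtm; none of these names come from the paper) compares the three expressions on one random instance:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n = 6
G = rng.standard_normal((n, n))
Sigma = np.diag(rng.uniform(0.1, 2.0, size=n))  # a random positive diagonal Sigma
M = G @ G.T

W = sqrtm(Sigma) @ G
lhs = np.linalg.svd(W, compute_uv=False).sum()               # sum_k s_k(W)
mid = np.trace(sqrtm(sqrtm(Sigma) @ M @ sqrtm(Sigma))).real  # Tr[(Sigma^1/2 M Sigma^1/2)^1/2]
rhs = np.trace(sqrtm(sqrtm(M) @ Sigma @ sqrtm(M))).real      # Tr[(M^1/2 Sigma M^1/2)^1/2]

print(lhs, mid, rhs)  # the three values agree up to numerical error
```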
We will use concavity arguments. The idea is to show that if $\Sigma_1$ and $\Sigma_2$ are maximizers, then so needs to be their average $\frac{\Sigma_1 + \Sigma_2}{2}$ (which is still feasible, since the constraint $\operatorname{Tr}(\Sigma) = n$ is linear). After we show this fact the rest of the proof is easy: out of all the maximizers, consider one $\Sigma^\star$ with the smallest Frobenius norm $\|\Sigma^\star\|_F$. If $\Sigma^\star$ has constant diagonal we are done, since then $\Sigma^\star = I$; otherwise we can swap two of its distinct diagonal elements and create $\Sigma'$. By rotation invariance of the Gaussian, $\Sigma'$ has to also be a maximizer, and since $\Sigma^\star$ and $\Sigma'$ are not multiples of each other, their average has strictly smaller Frobenius norm, which, together with the concavity, leads to a contradiction.
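To spell out why the average has strictly smaller norm: writing $\langle \cdot, \cdot \rangle$ for the Frobenius inner product, and noting that $\|\Sigma'\|_F = \|\Sigma^\star\|_F$ (the diagonal of $\Sigma'$ is a permutation of that of $\Sigma^\star$), we have

$$\left\|\frac{\Sigma^\star + \Sigma'}{2}\right\|_F^2 = \frac{\|\Sigma^\star\|_F^2 + 2\langle \Sigma^\star, \Sigma' \rangle + \|\Sigma'\|_F^2}{4} < \|\Sigma^\star\|_F^2,$$

where the strict inequality follows from Cauchy-Schwarz, $\langle \Sigma^\star, \Sigma' \rangle < \|\Sigma^\star\|_F \|\Sigma'\|_F = \|\Sigma^\star\|_F^2$, strict precisely because $\Sigma^\star$ and $\Sigma'$ are not multiples of each other.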
We will proceed by contradiction. Suppose (1) does not hold; since the optimization space is compact and the function continuous, the maximum must be attained by some $\Sigma \neq I$. Out of all maximizers, let $\Sigma^\star$ be one with the smallest possible Frobenius norm. The idea will be to use concavity arguments to build a maximizer with strictly smaller Frobenius norm, arriving at a contradiction and hence showing the theorem.
Note that, by linearity of expectation, what we want to show is that

$$\operatorname{Tr}\left[\left(M^{1/2} \frac{\Sigma_1+\Sigma_2}{2} M^{1/2}\right)^{1/2}\right] - \frac{1}{2}\operatorname{Tr}\left[\left(M^{1/2} \Sigma_1 M^{1/2}\right)^{1/2}\right] - \frac{1}{2}\operatorname{Tr}\left[\left(M^{1/2} \Sigma_2 M^{1/2}\right)^{1/2}\right]$$

has non-negative expectation. Note that, if the exponent $1/2$ were outside of the trace, then concavity of the scalar square-root function would give us what we want. Since the trace is also linear, this is equivalent to showing that the trace of

$$\left(M^{1/2} \frac{\Sigma_1+\Sigma_2}{2} M^{1/2}\right)^{1/2} - \frac{1}{2}\left(M^{1/2} \Sigma_1 M^{1/2}\right)^{1/2} - \frac{1}{2}\left(M^{1/2} \Sigma_2 M^{1/2}\right)^{1/2} \ \ \ \ (2)$$
has non-negative expectation. It turns out that a much stronger statement is true: for any realization of $G$, the matrix in (2) is positive semidefinite, i.e., all its eigenvalues are non-negative (which implies that its trace is always non-negative). In fact, this follows from a property of the square-root function called operator concavity: given two positive semidefinite matrices $A$ and $B$, and any $t \in [0,1]$, the following holds:

$$\left(t A + (1-t) B\right)^{1/2} \succeq t A^{1/2} + (1-t) B^{1/2}.$$

Taking $A = M^{1/2} \Sigma_1 M^{1/2}$, $B = M^{1/2} \Sigma_2 M^{1/2}$, and $t = \frac{1}{2}$ shows precisely that the matrix in (2) is positive semidefinite.
The Löwner-Heinz Theorem (whose proof can be found in these very nice notes) establishes operator concavity (or convexity) of many functions, including the square root.
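Operator concavity is also easy to probe numerically. Here is one last randomized sketch of mine (numpy plus scipy.linalg.sqrtm again; `random_psd` is an ad hoc helper) that checks the smallest eigenvalue of the gap matrix on random positive semidefinite inputs:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)

def random_psd(n):
    X = rng.standard_normal((n, n))
    return X @ X.T  # PSD by construction

# Check (t*A + (1-t)*B)^{1/2} - t*A^{1/2} - (1-t)*B^{1/2} >= 0 on random inputs
for _ in range(1000):
    n = int(rng.integers(2, 8))
    A, B = random_psd(n), random_psd(n)
    t = float(rng.uniform())
    gap = sqrtm(t * A + (1 - t) * B) - (t * sqrtm(A) + (1 - t) * sqrtm(B))
    gap = np.real(gap)  # discard numerical imaginary dust from sqrtm
    # eigvalsh is appropriate: the gap matrix is symmetric (up to fp error);
    # the small negative tolerance allows for sqrtm's numerical slack
    assert np.linalg.eigvalsh(gap).min() >= -1e-6
print("operator concavity held on all random instances")
```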