Last week, Ramon van Handel showed me a really cool open problem that I want to share with you today. The problem was posed by Stephane Mallat and Ofer Zeitouni in a nice short note available here.

The problem is remarkably simple to pose: Let $g$ be a gaussian random vector in $\mathbb{R}^n$ with a known covariance matrix $\Sigma$, and let $1 \le k \le n$. Now, for any orthonormal basis $B = \{v_1, \dots, v_n\}$ of $\mathbb{R}^n$, consider the following random variable: Given a draw of the random vector $g$, consider the norm of the largest projection of $g$ on a subspace generated by $k$ elements of the basis $B$. The question is:
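To make the random variable concrete, here is a minimal Python sketch (the function name and the convention of storing the basis as the columns of a matrix are mine, not from the note):

```python
import numpy as np

def best_k_projection_norm(g, B, k):
    """Norm of the projection of g onto the span of the k elements of the
    orthonormal basis B (stored as columns) with the largest coefficients."""
    coeffs = B.T @ g                   # coordinates of g in the basis B
    top_k = np.sort(coeffs**2)[-k:]   # k largest squared coefficients
    return np.sqrt(top_k.sum())       # norm of the best k-term projection
```

The question then asks: over all orthonormal bases $B$, which one maximizes the expectation of this quantity over draws of $g$?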

*What is the basis that maximizes the expected value of this random variable?*

To help give some intuition, suppose that one had to choose the projection before seeing the outcome of the gaussian vector. In other words, what is the $k$-dimensional subspace that maximizes the expected value of the norm of the projection of the random vector? The answer is easily seen to be the subspace generated by the $k$ leading eigenvectors of $\Sigma$. This means that, in this case, the optimal basis is the basis that diagonalizes $\Sigma$, and this is essentially the premise of principal component analysis.
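One way to see this, working with the squared norm for simplicity (and assuming $g$ is centered), is a short trace computation: for the orthogonal projection $P$ onto the span of orthonormal vectors $v_1, \dots, v_k$,

$$\mathbb{E}\,\|P g\|^2 = \mathbb{E}\,\operatorname{tr}\!\left(P g g^\top P\right) = \operatorname{tr}\!\left(P \Sigma P\right) = \sum_{i=1}^{k} v_i^\top \Sigma v_i \;\le\; \lambda_1 + \cdots + \lambda_k,$$

where $\lambda_1 \ge \cdots \ge \lambda_n$ are the eigenvalues of $\Sigma$. The last inequality is Ky Fan's, with equality when the $v_i$ are the $k$ leading eigenvectors of $\Sigma$.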

The very believable conjecture posed by Mallat and Zeitouni is that this basis is still optimal in the setting of the problem. In fact, they show this to be true for $k = 1$ in the same note, but their argument unfortunately does not seem to generalize to $k > 1$; see the comment added on page 3.
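While a proof seems hard, the conjecture is easy to probe numerically. Here is a small self-contained Monte Carlo experiment (the toy covariance, sample size, and comparison basis are my arbitrary choices) comparing the eigenbasis against a generic orthonormal basis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, n_samples = 4, 1, 20000
Sigma = np.diag([100.0, 1.0, 1.0, 1.0])    # assumed toy covariance (already diagonal)

def estimate(B):
    # Monte Carlo estimate of E[ norm of the best k-term projection in basis B ]
    g = rng.multivariate_normal(np.zeros(n), Sigma, size=n_samples)
    coeffs = g @ B                           # coordinates of each sample in basis B
    top = np.sort(coeffs**2, axis=1)[:, -k:]
    return np.sqrt(top.sum(axis=1)).mean()

eigenbasis = np.eye(n)                       # Sigma is diagonal, so this is its eigenbasis
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a generic orthonormal basis

print(estimate(eigenbasis), estimate(Q))     # the eigenbasis should come out larger
```

Of course, such experiments can only support the conjecture, not settle it.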

I think the problem is super interesting. Such a simple natural question and yet it seems to be quite difficult to resolve.

Looking forward to hearing your thoughts on it!


If $\Sigma$ is the identity, is it obvious? (It is not to me.)

Hi Joel, if $\Sigma$ is the identity then every basis has the same performance: the gaussian becomes orthogonally invariant in that case. Note that one can assume, without loss of generality, that $\Sigma$ is a diagonal matrix.