Deterministic RIP matrices: a new cooperative online project

I started thinking about the problem of building deterministic RIP matrices three years ago when, as a first-year graduate student at Princeton, I was sharing an office with Dustin Mixon. I am probably not exaggerating much by saying that Dustin spent a big part of his PhD obsessed with this problem, and mathematical obsessions (although not necessarily mathematical diseases) are somewhat contagious. Since then I have (together with Dustin and others) intermittently thought about RIP matrices and have had some successes: a promising construction, hardness results for certifying this property, and, more recently, a construction conditioned on a Number Theory conjecture (a paper should appear soon). Still, we have so far failed to achieve the goal: a deterministic construction that compares with random constructions.

I am not going to go into the problem here (you can read all about it in Terry Tao’s blog, our paper, Dustin’s new blog post, Dustin’s old blog post, etc.). The objective is to increase the exponent on the sparsity parameter. It turns out that, due to the techniques normally used, it is particularly difficult to increase the exponent beyond \frac12; in comparison, random constructions achieve an optimal exponent of 1. This phenomenon is often referred to as the square-root bottleneck.
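For concreteness (this is the standard setup, not anything new in this post): an m\times N matrix \Phi satisfies the restricted isometry property (RIP) of order s with constant \delta if

(1-\delta)\|x\|_2^2 \leq \|\Phi x\|_2^2 \leq (1+\delta)\|x\|_2^2 for all s-sparse vectors x.

Random constructions achieve this, with high probability, for sparsity s on the order of m (up to logarithmic factors), i.e., exponent 1 in m, while the usual deterministic constructions only reach s on the order of m^{1/2}, i.e., exponent \frac12.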

In fact, the only deterministic construction known to break the square-root bottleneck is the one proposed in the breakthrough paper

J. Bourgain et al., Explicit constructions of RIP matrices and related problems

Although a breakthrough result, this paper has had a disappointingly small impact on the community and, to the best of my knowledge, no further progress on the problem has been made using similar techniques in the more than three years since. In my opinion, the reason for this is perhaps twofold: 1) the techniques used are involved and rely heavily on Additive Combinatorics (an area that most people interested in RIP matrices are not very familiar with), and 2) the gain over the square-root bottleneck is modest: the exponent obtained is \frac12+\epsilon_0, where \epsilon_0\approx 5\times 10^{-28}.

A few days ago, Dustin called me to say that he was carefully going through Bourgain et al.’s paper and was wondering how far their analysis could be pushed to yield a larger \epsilon_0 (note that the purpose of the paper was simply to break the barrier). Inspired by the Polymath8 phenomenon, we agreed that it would be a good idea to create a cooperative online project to increase \epsilon_0. I am happy to announce that Dustin did exactly that, and there is now a wiki page devoted to it (and also the blog post that started it all). I am excited to see how far we can push \epsilon_0.

The main purpose of this post is now accomplished: advertising the new wiki page. In the rest of the post, I’ll introduce a piece of notation that I think will be helpful and take my own baby step towards increasing \epsilon_0 further.

The n_\circ and n^\circ notation

On many occasions in the analysis in Bourgain et al.’s paper there is the need for a number (most often appearing in an exponent) to be slightly larger (or smaller) than a given integer or real number, say n. As the paper was not concerned with the value of \epsilon_0, and to simplify the analysis, this is often done by simply taking n\pm 1. If our goal is to optimize the analysis, we should instead take n\pm\epsilon for an arbitrarily small \epsilon. To keep the writing light, I suggest we use the notation

n^\circ to mean a number n+\epsilon, for an arbitrarily small (fixed) \epsilon>0, and

n_\circ to mean a number n-\epsilon, for an arbitrarily small (fixed) \epsilon>0.
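As a usage example (the exponents n_1 and n_2 here are generic placeholders, not quantities from the paper): for sufficiently large p,

p^{n_1}+p^{n_2} \leq p^{\max\{n_1,n_2\}^\circ},

since the left-hand side is at most 2\,p^{\max\{n_1,n_2\}} and, for p large enough, the factor of 2 is absorbed by p^{\epsilon}.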

A baby step in the path to a larger \epsilon_0

On page 174 of this version of the paper (two equations above (7.1) in the proof of Lemma 10), the inequality can be made sharper:

Due to the size of m, it is not difficult to see that the second summand is larger than the first (for sufficiently large p) and that, for sufficiently large p, p^{n_1}+p^{n_2}\leq p^{\max\{n_1,n_2\}^\circ}. Since |B|^{-1/2}\leq p^{3\alpha-1/4}, multiplying both sides by |B|^2 p^{1/4} gives |B|^{3/2}p^{1/4}\leq |B|^2 p^{3\alpha}. It is then not difficult to check (essentially, the definition of \alpha in terms of m, together with the value of m, gives that the second exponent is larger) that the 84 in the last upper bound can be replaced by 66^\circ; this will later be divided by 2 and have a positive, arbitrarily small, quantity added (the 84 was giving the 43 as \frac{84}{2}+1). This means that the 43 in the definition of b (see here) can be replaced by 33^\circ, improving the bound on \epsilon_0 by a factor of \big(\frac{43}{33}\big)^2\approx 1.6979, giving

\epsilon_0 \approx 1.8295 \times 10^{-27}.
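As a quick sanity check of the constant-tracking above, here is a minimal Python sketch (the constants 84, 66, 43, and 33 are the ones quoted above; everything else is just arithmetic):

    # Bookkeeping for the improvement on epsilon_0 described above.
    old_b = 84 / 2 + 1   # the 84 in the upper bound gave the 43 in the definition of b
    new_b = 66 / 2       # replacing 84 by 66^o gives 33 (plus an arbitrarily small term)
    factor = (old_b / new_b) ** 2
    print(old_b, new_b)  # 43.0 33.0
    print(factor)        # 1.69788... -- the quoted factor of ~1.6979 on epsilon_0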

I am looking forward to seeing how far we can push this value.
