I started thinking about the problem of building deterministic RIP matrices three years ago when, as a first-year graduate student at Princeton, I was sharing an office with Dustin Mixon. I am probably not exaggerating much by saying that Dustin spent a big part of his PhD obsessed with this problem, and mathematical obsessions (although not necessarily mathematical diseases) are somewhat contagious. Since then I have (together with Dustin and others) intermittently thought about RIP matrices and have had some successes (a promising construction, hardness results for certifying this property and, more recently, a construction conditioned on a Number Theory conjecture – a paper should appear soon). Still, we have so far failed to achieve the goal: a deterministic construction that compares with random constructions.
I am not going to go into the problem here (you can read all about it in Terry Tao's blog, our paper, Dustin's new blog post, Dustin's old blog post, etc.). The objective is to increase the exponent on the sparsity parameter. It turns out that, due to the techniques normally used, it is particularly difficult to increase the exponent beyond $\frac{1}{2}$; in comparison, random constructions achieve an optimal exponent of $1$. This phenomenon is oftentimes referred to as the square-root bottleneck.
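To fix ideas, here is the standard setup in my own notation (a summary, not taken verbatim from any of the references above): an $M \times N$ matrix $A$ satisfies the restricted isometry property (RIP) of order $k$ with constant $\delta$ if

$(1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2$ for every $k$-sparse $x \in \mathbb{R}^N$,

and the exponent in question is the $\alpha$ in $k \sim M^{\alpha}$. Suitably normalized random matrices achieve $\alpha = 1$ (up to logarithmic factors) with high probability. Deterministic constructions, on the other hand, are typically analyzed through the coherence $\mu$ (the largest inner product between distinct unit-norm columns): Gershgorin's circle theorem certifies RIP of order $k$ with $\delta \le (k-1)\mu$, and since the Welch bound forces $\mu \gtrsim M^{-1/2}$, this route cannot certify sparsities beyond $k \sim M^{1/2}$; hence the bottleneck.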
In fact, the only deterministic construction known to break the square-root bottleneck is the one proposed in the breakthrough paper
J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, D. Kutzarova, Explicit constructions of RIP matrices and related problems, Duke Math. J. 159 (2011), 145–185.
Although a breakthrough result, this paper has had a disappointingly small impact on the community and, to the best of my knowledge, no further progress on the problem has since been made using similar techniques (in over three years). In my opinion, the reason for this is perhaps twofold: 1) the techniques used are involved and rely heavily on Additive Combinatorics (an area that most people interested in RIP matrices are not very familiar with), and 2) the gain over the square-root bottleneck is modest: the exponent obtained is $\frac{1}{2} + \varepsilon_0$, where $\varepsilon_0$ is an explicit, but very small, positive constant.
A few days ago, Dustin called me saying that he was carefully going through Bourgain et al.'s paper and was wondering how far their analysis can be pushed to yield a larger $\varepsilon_0$ (note that the purpose of the paper was simply to break the barrier). Inspired by the Polymath8 phenomenon, we agreed that it would be a good idea to try to create a cooperative online project to increase $\varepsilon_0$. I am happy to announce that Dustin did exactly that, and there is now a wiki page devoted to it (and also the blog post that started it all). I am excited to see how far we can push $\varepsilon_0$.
The main purpose of this post is now accomplished: advertising the new wiki page. In the rest of the post, I'll introduce a piece of notation that I think will be helpful and make my own baby step towards increasing $\varepsilon_0$ further.
The $a^+$ and $a^-$ notation
On many occasions in the analysis in Bourgain et al.'s paper there is the need to have numbers (most often, appearing in exponents) be larger (or smaller) than a given integer (or real number), say $a$. As the paper was not concerned with the value of $\varepsilon_0$, and to simplify the analysis, this is often done by simply taking $a+1$ (or $a-1$). If our goal is to optimize the analysis, we should instead take $a+\epsilon$ (or $a-\epsilon$) for an arbitrarily small $\epsilon > 0$. In order to simplify the analysis, I suggest we use the notation $a^+$ to mean a number of the form $a+\epsilon$ for an arbitrarily small (fixed) $\epsilon > 0$, and $a^-$ to mean a number of the form $a-\epsilon$ for an arbitrarily small (fixed) $\epsilon > 0$.
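As a quick illustration of how I intend to use these symbols (the bookkeeping rules below are my own convention, not something taken from the paper, and the $60$ is just an example value): the $+$ behaves naturally under arithmetic,

$a^+ + b^+ = (a+b)^+, \qquad \dfrac{a^+}{2} = \left(\dfrac{a}{2}\right)^+, \qquad b \cdot a^+ = (ab)^+$ for fixed $b > 0$,

so a bound of the form $x \le p^{60^+}$ simply means that, for every fixed $\epsilon > 0$, the bound $x \le p^{60+\epsilon}$ holds in the regime of parameters under consideration.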
A baby step on the path to a larger $\varepsilon_0$
On page 174 of this version of the paper (two equations above (7.1), in the proof of Lemma 10), the inequality there can be made sharper. Due to the size of the quantities involved, it is not difficult to see that the second summand is larger than the first (for arbitrarily large values of the underlying parameter), and therefore dominates the bound. It is then not difficult to check (essentially, the definition of the relevant exponent in terms of the other parameters gives that the second exponent is larger) that the constant in the last upper bound can be replaced by a $60$; this $60$ will later be divided by $2$, and a positive, arbitrarily small, quantity is added (the cruder estimate was giving a slightly larger value). This means that the corresponding constant in the definition of $\varepsilon_0$ (see here) can be replaced by a smaller one, improving the bound on $\varepsilon_0$ by a constant factor.
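To spell out the arithmetic of why such a replacement helps (schematically; I am assuming here, purely for illustration, that the constant enters the definition through a denominator, $\varepsilon_0 = \gamma/c$ for some fixed $\gamma > 0$; the precise dependence is in the paper): sharpening $c$ to some $c' < c$ gives

$\varepsilon_0' = \dfrac{\gamma}{c'} = \dfrac{c}{c'}\,\varepsilon_0$,

so every crude $a+1$ that can be replaced by an $a^+$ translates into a multiplicative gain in $\varepsilon_0$ essentially for free.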
I am looking forward to seeing how far we can push this value.
I think you mean the 60 is later _divided_ by 2.
[Corrected, thanks. B.]