# Tightness of the Angular Synchronization SDP relaxation and rank recovery

A few weeks ago, Nicolas, Amit, and I uploaded our new paper entitled “Tightness of the maximum likelihood semidefinite relaxation for angular synchronization”. I am quite excited about this one, as it is the first instance for which we were able to establish a rank recovery type of result! I will briefly explain the main result and its motivation, and describe some of the main ideas behind the proof.

The motivation arises from trying to understand the rank recovery phenomenon (which I have described in another post). One of the simplest problems for which this phenomenon is observed is the angular synchronization problem (also described previously on this blog). In a nutshell, the problem consists in solving, for a Hermitian matrix ${C\in \mathbb{C}^{n\times n}}$,

$\displaystyle \max\ x^\ast Cx \quad \text{ s.t. } \left|x_i\right|^2=1. \ \ \ \ \ (1)$

This is also known as the little Grothendieck problem (see this post). Note that this optimization problem is equivalent (by taking ${X=xx^\ast}$) to:

$\displaystyle \max\ \mathrm{Tr}\left( CX \right) \quad \text{ s.t. } X_{ii}=1, \ X\succeq 0, \ \mathrm{rank}(X) = 1.$
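This equivalence is easy to check numerically. Here is a minimal sketch (NumPy only, with a random Hermitian ${C}$): any feasible ${x}$ for (1) lifts to a feasible rank-one ${X = xx^\ast}$ with the same objective value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = (G + G.conj().T) / 2                       # a random Hermitian matrix
x = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # feasible for (1): |x_i|^2 = 1
X = np.outer(x, x.conj())                      # the rank-one lift X = x x^*

# X is feasible for the lifted problem and the objectives coincide:
print(np.allclose(np.diag(X), 1))              # the constraint X_ii = 1
print(np.linalg.matrix_rank(X))                # the constraint rank(X) = 1
print(np.isclose(np.vdot(x, C @ x).real,
                 np.trace(C @ X).real))        # x^* C x = Tr(CX)
```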

We will focus our attention on the natural semidefinite relaxation:

$\displaystyle \max\ \mathrm{Tr}\left( CX \right) \quad \text{ s.t. } X_{ii}=1, \ X\succeq 0. \ \ \ \ \ (2)$

Note that this optimization problem is a Semidefinite Program (SDP) and is thus solvable in polynomial time to arbitrary precision.

The purpose of the paper is to understand for which instances the solution to the convex relaxation corresponds to the solution of the original problem (or, equivalently, when the solution is of rank ${1}$); when this happens we say the SDP is tight.

Motivated by the angular synchronization problem, we consider a random model for ${C}$ with a planted solution ${z\in\mathbb{C}^n}$ satisfying ${\left|z_i\right|^2=1}$. For simplicity, we model ${C}$ as

$\displaystyle C = zz^\ast + \sigma W,$

where ${W}$ is a Hermitian matrix whose entries above the diagonal are i.i.d. standard Gaussians.

The first natural question would be the exact recovery one: “For which values of ${\sigma}$ do we expect the solution of (2) to be ${X=zz^\ast}$?” Unfortunately, it is not hard to see that, as long as ${\sigma>0}$, the ${\ell_2}$ nature of the problem together with the continuity of the feasible set renders exact recovery impossible, even for (1), and consequently also for (2).
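To make this concrete, here is a quick numerical sketch (not from the paper): under the model above, for ${\sigma>0}$ the planted ${z}$ almost surely fails the first-order optimality conditions of (1), so a tiny rotation of its phases already gives a strictly better feasible point.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 80, 0.5

# The planted model C = z z^* + sigma * W with Hermitian Gaussian noise.
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (G + G.conj().T) / 2
C = np.outer(z, z.conj()) + sigma * W

def f(x):                                # the objective of (1)
    return np.real(np.vdot(x, C @ x))

# Gradient of f in the phase angles theta_i (writing x_i = e^{i theta_i});
# it is nonzero at x = z once sigma > 0, so z is not even a local maximizer.
g = 2 * np.imag(z.conj() * (C @ z))
x_better = z * np.exp(1j * 1e-4 * g)     # a tiny gradient-ascent step in phase

print(f(x_better) > f(z))                # True: a feasible point beating z
```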

On the other hand, we are still interested in solving (1), as it corresponds to the maximum likelihood estimator of the unknown ${z}$. This motivates the question of when the SDP is tight (even though it does not achieve exact recovery); this is an instance of the rank recovery problem (see here).

The main result of our paper shows that, as long as

$\displaystyle \sigma = \mathbf{O}\left( \frac{n^{0.1}}{\log n} \right),$

then the SDP is tight with high probability.

Let me say a few words about the proof (which I quite like!):

The most common proof technique used for exact recovery is to build a dual certificate that establishes the KKT conditions at the optimal solution (see this post for a construction for the ${\mathbb{Z}_2}$ version of this very problem, where exact recovery is indeed observed). Given a “guessed” solution ${x}$, it is fairly straightforward to build a potential dual certificate for that solution; let us call it ${S^{(x)}}$. By the construction of ${S^{(x)}}$, the only property that needs to be established (to guarantee that it is indeed a dual certificate) is that it is a positive semidefinite matrix (I refer to the paper for the details). In a way, the purpose is to find an ${x}$ such that ${S^{(x)}}$ is PSD.
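For concreteness, here is a sketch of how such a certificate arises (this is standard SDP duality; the normalization in the paper may differ slightly). The dual of (2) minimizes ${\mathrm{Tr}(\Lambda)}$ over real diagonal matrices ${\Lambda}$ satisfying ${\Lambda - C \succeq 0}$, and complementary slackness with the guess ${X = xx^\ast}$ forces ${(\Lambda - C)x = 0}$. Using ${|x_i|^2 = 1}$, this pins down ${\Lambda_{ii} = \bar{x}_i (Cx)_i}$ and suggests the candidate

$\displaystyle S^{(x)} = \mathrm{ddiag}\left( \Re\left( \bar{x} \circ Cx \right) \right) - C,$

where ${\circ}$ is the entrywise product and ${\mathrm{ddiag}}$ places a vector on the diagonal. The real part is taken because ${\Lambda}$ must have real diagonal entries, and ${S^{(x)}x = 0}$ holds exactly when ${\bar{x}_i (Cx)_i}$ is real for every ${i}$, which is the first-order condition for (1).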

The natural attempt would be to consider ${S^{(z)}}$. This indeed works when ${\sigma = 0}$: in that case ${S^{(z)}}$ is a matrix with one null eigenvalue and all other eigenvalues equal to ${n}$. However, for ${\sigma>0}$ we know that ${zz^\ast}$ is not the solution, and indeed ${S^{(z)}}$ is no longer PSD (in a way, what happens is that the null eigenvalue becomes negative). Since ${S^{(x)}}$ depends on ${x}$ in a relatively smooth way, as long as ${x}$ is close to ${z}$ we expect all of the eigenvalues that were ${n}$ before to remain fairly large (by concentration). The issue is then to control the null eigenvalue, and that is where the nice argument comes in.
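This behavior is easy to observe numerically. The sketch below assumes the candidate certificate ${S^{(x)} = \mathrm{ddiag}(\Re(\bar{x}\circ Cx)) - C}$ obtained from complementary slackness (the paper's normalization may differ) and computes the spectrum of ${S^{(z)}}$ for ${\sigma = 0}$ and ${\sigma > 0}$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 80
z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (G + G.conj().T) / 2                   # Hermitian Gaussian noise

def cert_spectrum(sigma):
    """Eigenvalues of the candidate certificate S^(z) under C = zz^* + sigma*W."""
    C = np.outer(z, z.conj()) + sigma * W
    lam = np.real(z.conj() * (C @ z))      # candidate dual diagonal at x = z
    return np.linalg.eigvalsh(np.diag(lam) - C)

e_clean = cert_spectrum(0.0)
e_noisy = cert_spectrum(0.5)
print(e_clean[0], e_clean[1])   # ~0 and ~n: one null eigenvalue, the rest at n
print(e_noisy[0])               # negative: S^(z) is no longer PSD
```

Note that ${z^\ast S^{(z)} z = 0}$ always holds by construction, so as soon as ${S^{(z)}z \neq 0}$ (which happens once noise is added) the smallest eigenvalue must be strictly negative.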

The original problem (1) has an optimal solution, let us call it ${\hat{x}}$. Because this problem is posed over a nice manifold, ${\hat{x}}$ has to satisfy certain local optimality conditions. It turns out that these optimality conditions (think null gradient) correspond exactly to setting one of the eigenvalues of ${S^{(\hat{x})}}$ to zero! The only other thing that needs to be shown is that ${\hat{x}}$ is close to ${z}$, which can be done by concentration arguments. This can then be used to lower bound the other ${n-1}$ eigenvalues of ${S^{(\hat{x})}}$. This shows that ${\hat{x}\hat{x}^\ast}$ is indeed an optimal solution of (2) (and it can be shown to be the unique one), establishing that the SDP is tight.
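The whole argument can be mimicked numerically. The sketch below carries the same hedges as before: it uses the candidate certificate ${S^{(x)} = \mathrm{ddiag}(\Re(\bar{x}\circ Cx)) - C}$, and a generalized power method as a stand-in for solving (1) (a standard heuristic, not the paper's approach). The computed stationary point ${\hat{x}}$ yields a certificate with one near-null eigenvalue and all others bounded well away from zero, i.e., ${S^{(\hat{x})} \succeq 0}$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 80, 0.3

z = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
W = (G + G.conj().T) / 2
C = np.outer(z, z.conj()) + sigma * W

# Generalized power method for (1): repeatedly apply C and project back onto
# the constraint set.  The spectral shift keeps the iteration matrix PSD.
M = C + np.linalg.norm(C, 2) * np.eye(n)
x = z                                    # warm start at the planted signal
for _ in range(500):
    y = M @ x
    x = y / np.abs(y)

# At a stationary point of (1), conj(x_i) * (Cx)_i is real, so S^(x) x = 0:
lam = np.real(x.conj() * (C @ x))
S = np.diag(lam) - C
eigs = np.linalg.eigvalsh(S)
print(eigs[0], eigs[1])   # smallest ~ 0, second smallest of order n
```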