Approximation Algorithm for Max-Cut (SDP relaxation)

I plan to write a few posts about approximation algorithms. I decided to start with my favorite one, and the first non-trivial one I read about: the SDP relaxation for Max-Cut. This post will be about this particular approximation algorithm and the proof of its guarantee.

The objective of approximation algorithms is to efficiently compute approximate solutions to problems for which finding an exact solution is believed to be computationally hard. To illustrate this we will look into the Max-Cut problem: given a graph {G=(V,E)} with non-negative weights {w_{ij}} on the edges, find a subset {S\subset V} for which {cut(S,S^c)}, the total weight of the edges with exactly one endpoint in {S}, is maximal.

The Max-Cut problem is known to be NP-hard (if the widely believed {P\neq NP} conjecture is true, this means that the problem cannot be solved in polynomial time). In what follows we will show a randomized algorithm of Goemans and Williamson that is able to, in polynomial time, obtain a cut whose expected value is guaranteed to be no smaller than a particular constant times the optimum cut.

Let {V=\{1,...,n\}}. Encoding membership in the cut by {y_i=1} if {i\in S} and {y_i=-1} otherwise, one can restate the problem as

\displaystyle \begin{array}{l} \max \frac12 \sum_{i<j}w_{ij}(1-y_iy_j) \\ s.t. |y_i|=1. \end{array} \ \ \ \ \ (1)
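As a quick sanity check of this reformulation, here is a small sketch (the toy graph and weights are made up for illustration) confirming that the {\pm1} objective in (1) agrees with the cut value:

```python
# Toy weighted graph: w[(i, j)] with i < j gives the weight of edge {i, j}.
# Graph chosen only for illustration.
w = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (0, 3): 3.0}

def cut_value(S):
    """Total weight of edges with exactly one endpoint in S."""
    return sum(wij for (i, j), wij in w.items() if (i in S) != (j in S))

def objective(y):
    """(1/2) * sum_{i<j} w_ij * (1 - y_i y_j) for y_i in {-1, +1}."""
    return 0.5 * sum(wij * (1 - y[i] * y[j]) for (i, j), wij in w.items())

S = {0, 2}
y = [+1 if i in S else -1 for i in range(4)]
assert cut_value(S) == objective(y)
```

Each edge across the cut contributes {w_{ij}} to both sides (since {1-y_iy_j=2} exactly when the endpoints disagree), while edges inside a part contribute {0}.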

We consider the following relaxation of this problem:

\displaystyle \begin{array}{l} \max \frac12 \sum_{i<j}w_{ij}(1-v_i^Tv_j) \\ s.t. v_i\in \mathbb{R}^n, |v_i|=1. \end{array} \ \ \ \ \ (2)

Note that this is in fact a relaxation: if one restricts each {v_i} to be a multiple of {e_1}, the first element of the canonical basis (i.e. {v_i=\pm e_1}), the problem becomes equivalent to the original one.

From the solution {v} of the relaxed problem one can produce a cut {S'} by picking a vector {r\in\mathbb{R}^n} uniformly at random from the unit sphere and letting {S'=\{i\,|\,r^Tv_i\geq0\}}; in other words, the vectors are partitioned by the random hyperplane with normal {r}. For this algorithm to be of interest two things must happen: the relaxed problem needs to be significantly easier to solve than the original, and the cut given by the set {S'} needs to be comparable to the optimal one. We will start by showing the latter.
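The rounding step is simple enough to sketch in a few lines. The following is a minimal pure-Python version (function names are my own, and in practice one would use numpy); it samples a uniformly random direction by normalizing a Gaussian vector:

```python
import random

def random_unit_vector(n, rng):
    # Normalizing a standard Gaussian vector gives a uniformly
    # random direction on the unit sphere in R^n.
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = sum(x * x for x in g) ** 0.5
    return [x / norm for x in g]

def round_to_cut(vectors, rng):
    """Hyperplane rounding: given unit vectors v_1,...,v_n (one per
    vertex), pick a random r and return S' = {i : r^T v_i >= 0}."""
    n = len(vectors[0])
    r = random_unit_vector(n, rng)
    return {i for i, v in enumerate(vectors)
            if sum(rc * vc for rc, vc in zip(r, v)) >= 0}
```

For example, two antipodal vectors {e_1} and {-e_1} are (almost surely) always separated, since {r^Te_1} and {-r^Te_1} have opposite signs.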

Let {W} be the value of the cut produced by the rounding procedure described above. Clearly {W} is a random variable; its expectation is

\displaystyle \begin{array}{rcl} \mathbb{E}[W] & = & \sum_{i<j}w_{ij}Prob\left\{sign(r^Tv_i)\neq sign(r^Tv_j)\right\} \\ & = & \sum_{i<j}w_{ij}\frac1\pi \arccos(v_i^Tv_j). \\ \end{array}
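The key identity here is that the probability that a random hyperplane separates two unit vectors at angle {\theta} is exactly {\theta/\pi}. A quick Monte Carlo check (toy vectors at a fixed angle, chosen for illustration) makes this concrete:

```python
import math
import random

rng = random.Random(1)

def random_unit_vector(n):
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(x * x for x in g) ** 0.5
    return [x / s for x in g]

# Two unit vectors in R^3 at angle theta = 2.0 radians.
v1 = [1.0, 0.0, 0.0]
v2 = [math.cos(2.0), math.sin(2.0), 0.0]

trials = 100_000
separated = 0
for _ in range(trials):
    r = random_unit_vector(3)
    s1 = sum(a * b for a, b in zip(r, v1)) >= 0
    s2 = sum(a * b for a, b in zip(r, v2)) >= 0
    if s1 != s2:
        separated += 1

estimate = separated / trials
exact = math.acos(sum(a * b for a, b in zip(v1, v2))) / math.pi  # 2.0 / pi
assert abs(estimate - exact) < 0.015
```

Summing this probability, weighted by {w_{ij}}, over all edges gives the expression for {\mathbb{E}[W]} above.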

It is not hard to show that

\displaystyle \sum_{i<j}w_{ij}\frac1\pi \arccos(v_i^Tv_j) \geq \alpha \frac12 \sum_{i<j}w_{ij}(1-v_i^Tv_j), \ \ \ \ \ (3)

where {\displaystyle{\alpha=\min_{0\leq\theta\leq\pi}\frac2\pi \frac{\theta}{1-\cos\theta}>0.87}}.
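The constant {\alpha} can be located numerically. A fine grid scan over {(0,\pi]} is enough for a sketch (this is not a rigorous minimization, but it matches the known value {\alpha\approx0.8786}):

```python
import math

def ratio(theta):
    # The function being minimized: (2/pi) * theta / (1 - cos(theta)).
    return (2.0 / math.pi) * theta / (1.0 - math.cos(theta))

# Scan theta over a fine grid in (0, pi]; the ratio blows up near 0
# and equals 1 at theta = pi, so the minimum is attained in between.
alpha = min(ratio(k * math.pi / 100_000) for k in range(1, 100_001))
assert 0.878 < alpha < 0.879
```

The minimizer sits around {\theta\approx2.33}, and the minimum value is strictly larger than {0.87}, which is all the performance guarantee needs.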

Let {\rho_G} be the value of the maximum cut of {G}, meaning the maximum of the original problem. Since (2) is a relaxation of (1), its optimal value is greater than or equal to {\rho_G}. Hence,

\displaystyle \mathbb{E}[W] \geq \alpha \frac12 \sum_{i<j}w_{ij}(1-v_i^Tv_j) \geq \alpha \rho_G, \ \ \ \ \ (4)

showing that the algorithm based on the relaxation has a performance guarantee (in expectation) of {\alpha}.

What makes all this interesting is the fact that one can solve the relaxed problem in a tractable way. Set {X=V^TV}, where {V} is the matrix having the vectors {v_i} as its columns ({X} is called the Gram matrix of {\{v_1,...,v_n\}}); then the objective function of the relaxed problem can be rewritten as {\frac12\sum_{i<j}w_{ij}(1-X_{ij})}. One can exploit the fact that {X} admits a decomposition of the form {X=V^TV} if and only if it is positive semidefinite {(X\succeq 0)}, which is a convex constraint. Also, the constraint {\|v_i\|=1} can be rewritten as {X_{ii}=1}. This implies that the relaxed problem is equivalent to the following semidefinite program:

\displaystyle \begin{array}{l} \max \frac12 \sum_{i<j}w_{ij}(1-X_{ij}) \\ s.t. X\succeq 0 \text{ and } X_{ii}=1\,\forall_i. \end{array} \ \ \ \ \ (5)
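The two facts driving this reformulation are easy to verify directly on a toy instance: the Gram matrix of unit vectors has {X_{ii}=1}, and {z^TXz=\|Vz\|^2\geq0} for every {z}, so {X\succeq0}. A small sketch (random toy vectors, pure Python for self-containment):

```python
import random

rng = random.Random(2)

def unit_vector(n):
    g = [rng.gauss(0.0, 1.0) for _ in range(n)]
    s = sum(x * x for x in g) ** 0.5
    return [x / s for x in g]

n = 5
vs = [unit_vector(n) for _ in range(n)]  # the columns of V

# Gram matrix: X_ij = v_i^T v_j.
X = [[sum(a * b for a, b in zip(vi, vj)) for vj in vs] for vi in vs]

# X_ii = v_i^T v_i = 1 because each v_i is a unit vector.
assert all(abs(X[i][i] - 1.0) < 1e-9 for i in range(n))

# z^T X z = || sum_i z_i v_i ||^2 >= 0 for any z, i.e. X is PSD.
for _ in range(100):
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    quad = sum(z[i] * X[i][j] * z[j] for i in range(n) for j in range(n))
    assert quad >= -1e-9
```

In practice one hands (5) to an off-the-shelf SDP solver and recovers the vectors {v_i} from {X} via a Cholesky or eigenvalue decomposition.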

Since a semidefinite program can be solved (up to {\epsilon} accuracy) in time growing only polynomially in the input size and {\log(\epsilon^{-1})}, this describes a randomized polynomial-time algorithm for Max-Cut with a {0.87} performance guarantee (in expectation).


4 thoughts on “Approximation Algorithm for Max-Cut (SDP relaxation)”

  1. Sometimes randomized algorithms blow my mind, and this is a perfect example. Thanks for the post. FYI – I think the inequality is going the wrong way in (3), and your v’s turned into y’s in the discussion before (5).

    1. Hi Dustin,

The algorithm is pretty impressive, yes! Even more impressive is the fact that, if you believe the Unique Games Conjecture and {P\neq NP}, then this algorithm is tight: there is no polynomial-time algorithm that achieves a better approximation ratio.

      Thanks for catching the typos!

      Afonso
