It has been a while since I posted here (not counting the announcement I made a few days ago). I have decided to revive the blog from the ashes and start posting in compliance with the Boris law of Blogging (the time elapsed between any two consecutive posts should never be shorter than 1 week nor longer than 2 weeks).
Now for the math: this is based on part of a talk I gave last Tuesday at the PACM Graduate Student Seminar.
Suppose we are given a data set we want to understand (I’ll be more precise later). Let’s say we have a notion of affinity between two data points. One popular way of describing this type of data set is through a weighted graph $G = (V, E)$: each vertex $i \in V$ represents a data point and, for each edge $(i, j) \in E$, the weight $w_{ij}$ represents the affinity between node $i$ and node $j$. The matrix $W$ is often called the weighted adjacency matrix of the graph. We make the reasonable assumption of symmetry in the weights, $w_{ij} = w_{ji}$.
We further assume that $\sum_j w_{ij} = 1$ for every vertex $i$ (every vertex has degree 1) and that $w_{ii} \geq \frac{1}{2}$ (every vertex keeps at least half of its weight on a self-loop). These assumptions are not needed but they clean up the math below.
As you might have already guessed by now, the reason why I want the degree of each vertex to be 1 is that I want to define a random walk on the graph. Indeed, a random walk is a very natural process to define on a graph, and its properties usually give meaningful understanding of the structure of the graph itself; this post is one such example. A natural random walk to define on our graph $G$ is simply one that, given the vertex $i$ as its position, walks to vertex $j$ with probability $w_{ij}$. This means that
$$\mathrm{Prob}\left\{ X(t+1) = j \mid X(t) = i \right\} = w_{ij}.$$
The matrix $W$ gives us an easy way of computing the probability distribution after $t$ steps of the walk:
$$\mathrm{Prob}\left\{ X(t) = j \mid X(0) = i \right\} = \left( W^t \right)_{ij}.$$
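To make the setup concrete, here is a minimal NumPy sketch (just an illustration; the cycle-graph construction and the helper below are hypothetical, one convenient way to satisfy the assumptions above). It builds a symmetric weight matrix in which every vertex has degree 1 and keeps half of its weight on a self-loop, and reads off the distribution after $t$ steps of the walk as a row of $W^t$.

```python
import numpy as np

def lazy_cycle_walk(n):
    """Symmetric weight matrix of an n-cycle with unit degrees and
    self-loop weight 1/2 at every vertex (so the walk is lazy)."""
    W = 0.5 * np.eye(n)                      # w_ii = 1/2
    for i in range(n):
        W[i, (i + 1) % n] += 0.25            # a quarter of the mass to each
        W[i, (i - 1) % n] += 0.25            # of the two cycle neighbours
    return W

n, t, start = 12, 5, 0
W = lazy_cycle_walk(n)
assert np.allclose(W, W.T)                   # symmetry of the weights
assert np.allclose(W.sum(axis=1), 1.0)       # every vertex has degree 1

# Prob{X(t) = j | X(0) = start} is the start-th row of W^t.
p_t = np.linalg.matrix_power(W, t)[start]
print(np.round(p_t, 3))
```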
We are now interested in defining a meaningful distance between nodes. One could use the inverse of the affinity, or something of that sort, but it might fail in two ways: it wouldn’t be robust to errors in the measurement of the affinity, or to the lack of such measurements (very often we only have a notion of affinity between nearby data points). Another issue is that this distance might not be meaningful: if you think of a graph composed of two star graphs sharing all their leaf nodes, then the two centers cannot be very distant (although there is no edge between them, which would assign weight zero and hence infinite distance). Another option is what is called the Diffusion Distance: given a time $t$ we look at the probability cloud of a random walk starting at $i$ and one starting at $j$, and we see how different the two probability clouds are (the assumption that $w_{ii} \geq \frac{1}{2}$ makes the random walk “lazy” enough so that parity and things like it don’t play so much of a role and the random walk is mostly governed by the geometry of the graph; think of what would happen in a bipartite graph if $w_{ii} = 0$).
We define, given $t$, the Diffusion Distance between $i$ and $j$ as
$$D_t(i,j) = \left( \sum_{k} \left[ \left( W^t \right)_{ik} - \left( W^t \right)_{jk} \right]^2 \right)^{1/2} = \left\| W^t (e_i - e_j) \right\|_2.$$
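Continuing the toy sketch (again, only an illustration), the Diffusion Distance can be computed directly from the rows of $W^t$:

```python
import numpy as np

def diffusion_distance(W, i, j, t):
    """D_t(i, j): l2 distance between rows i and j of W^t, i.e. between
    the probability clouds of t-step walks started at i and at j."""
    Wt = np.linalg.matrix_power(W, t)
    return np.linalg.norm(Wt[i] - Wt[j])

# Using the lazy cycle walk from the previous snippet:
n = 12
W = 0.5 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] += 0.25
    W[i, (i - 1) % n] += 0.25

print(diffusion_distance(W, 0, 1, t=5))      # neighbours on the cycle: small
print(diffusion_distance(W, 0, 6, t=5))      # antipodal vertices: larger
```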
In order to compute this we are going to define the Laplacian of $G$, which is given by $L = I - W$ (if we were not assuming that the graph is regular with degree 1, the definition would be $L = D - W$, where $D$ is the degree matrix).
$L$ is a positive semidefinite matrix with eigenvalues $0 = \lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_n$ (the eigenvalue $0$ corresponds to the all-ones eigenvector, since every vertex has degree 1); moreover, due to our assumption of laziness of the random walk, $\lambda_n \leq 1$. The fact that this matrix is called the Laplacian is not a coincidence, but I won’t discuss that here; very briefly, under certain conditions this matrix approximates the Laplacian we all know and love (or, more generally, the Laplace–Beltrami operator on manifolds). With respect to these eigenvalues the spectral decomposition of $W$ is given by
$$W = V \Lambda V^T,$$
where the columns of $V$ are the (orthonormal) eigenvectors $v_1, \ldots, v_n$ of $L$ and $\Lambda$ is a diagonal matrix with $1 - \lambda_l$ on its $l$’th diagonal entry.
We thus have:
$$D_t(i,j)^2 = \left\| W^t (e_i - e_j) \right\|^2 = \left\| V \Lambda^t V^T (e_i - e_j) \right\|^2 = \sum_{l=1}^{n} (1-\lambda_l)^{2t} \left( v_l(i) - v_l(j) \right)^2 = \sum_{l=2}^{n} (1-\lambda_l)^{2t} \left( v_l(i) - v_l(j) \right)^2,$$
where the next to last equality comes from the orthogonality of the eigenvectors and the last one comes from the fact that $v_1$ is (a multiple of) the all-ones vector, so $v_1(i) - v_1(j) = 0$.
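This identity is easy to sanity-check numerically: the spectral formula computed from the eigenpairs of $L = I - W$ should agree with the direct computation from $W^t$. A quick check on the toy cycle graph from the earlier sketch:

```python
import numpy as np

n, t, i, j = 12, 5, 0, 3
W = 0.5 * np.eye(n)
for k in range(n):
    W[k, (k + 1) % n] += 0.25
    W[k, (k - 1) % n] += 0.25

# Direct computation from the definition of the Diffusion Distance.
Wt = np.linalg.matrix_power(W, t)
direct = np.linalg.norm(Wt[i] - Wt[j])

# Spectral formula: sum over l >= 2 of (1 - lam_l)^{2t} (v_l(i) - v_l(j))^2.
lam, V = np.linalg.eigh(np.eye(n) - W)       # 0 = lam[0] <= ... <= lam[n-1] <= 1
spectral = np.sqrt(np.sum((1 - lam[1:]) ** (2 * t) * (V[i, 1:] - V[j, 1:]) ** 2))

print(direct, spectral)                      # agree up to rounding error
```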
Note that, if we consider the embedding given by
$$\Phi_t(i) = \left( (1-\lambda_2)^t v_2(i), \, (1-\lambda_3)^t v_3(i), \, \ldots, \, (1-\lambda_n)^t v_n(i) \right),$$
we have that the Diffusion Distance between node $i$ and node $j$ is simply the Euclidean ($\ell_2$) distance in this embedding. This is a strong argument towards the meaningfulness of this representation of the graph. The caveat is that one needs $n-1$ dimensions to represent the graph.
However, since $(1-\lambda_l)^t$ is decreasing from $l = 2$ to $l = n$, when $t$ is sufficiently large the dimensions of the embedding corresponding to the larger eigenvalues become negligible. This suggests that truncating the map is a good way to obtain an embedding in a lower dimensional space. Given $d$, the following embedding is known as the truncated Diffusion Map and was introduced and analyzed in Coifman and Lafon (2006):
$$\Phi_t^d(i) = \left( (1-\lambda_2)^t v_2(i), \, \ldots, \, (1-\lambda_{d+1})^t v_{d+1}(i) \right).$$
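As a minimal sketch (again hypothetical code, with $d$ and $t$ as free parameters): take the eigendecomposition of $L = I - W$, drop the constant eigenvector, and keep the next $d$ coordinates scaled by $(1-\lambda_l)^t$.

```python
import numpy as np

def diffusion_map(W, d, t):
    """Truncated diffusion map: row i is
    ((1 - lam_2)^t v_2(i), ..., (1 - lam_{d+1})^t v_{d+1}(i))."""
    n = W.shape[0]
    lam, V = np.linalg.eigh(np.eye(n) - W)   # eigenvalues of L in ascending order
    scales = (1.0 - lam[1:d + 1]) ** t       # skip the constant eigenvector v_1
    return V[:, 1:d + 1] * scales            # scale each column, one row per node

# e.g. emb = diffusion_map(W, d=2, t=5) for any W satisfying the assumptions above.
```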
Since the dependency on $t$ is a bit inconvenient, and it only corresponds to stretching some of the dimensions, we consider the following embedding, known as Laplacian Eigenmaps (see Belkin and Niyogi, 2003), given by
$$\varphi^d(i) = \left( v_2(i), \, \ldots, \, v_{d+1}(i) \right).$$
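In code, this is the same sketch with the $(1-\lambda_l)^t$ stretching removed:

```python
import numpy as np

def laplacian_eigenmap(W, d):
    """Row i is the embedding (v_2(i), ..., v_{d+1}(i))."""
    n = W.shape[0]
    _, V = np.linalg.eigh(np.eye(n) - W)     # eigenvectors of L = I - W
    return V[:, 1:d + 1]                     # drop the constant eigenvector
```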
Let’s say we are interested in clustering the graph. More specifically, we want to split the graph into two clusters in such a way that each cluster has roughly the same volume and there are as few edges as possible across the clusters. If the embedding is (sufficiently) faithful to the graph structure, it would make sense to embed the graph in $\mathbb{R}$ (take $d = 1$) and then cluster in $\mathbb{R}$, since that is significantly easier. This motivates an algorithm for clustering:
Algorithm for Clustering:
Do the embedding $\varphi^1(i) = v_2(i)$, “choose” a threshold $\tau$, and then output the two clusters
$$S = \{ i : v_2(i) \leq \tau \} \quad \text{and} \quad S^c = \{ i : v_2(i) > \tau \}.$$
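As a toy end-to-end illustration of this procedure (the graph with two planted clusters, the helper, and the score used to pick $\tau$ below are all hypothetical choices, not part of the analysis): embed via $v_2$ and sweep over all thresholds that give different clusters, as discussed just below.

```python
import numpy as np

def cycle_transition(m):
    """Symmetric doubly stochastic transition matrix of an m-cycle
    (half of the mass to each neighbour)."""
    C = np.zeros((m, m))
    for i in range(m):
        C[i, (i + 1) % m] += 0.5
        C[i, (i - 1) % m] += 0.5
    return C

# Toy graph: two m-node cycles weakly coupled by a perfect matching,
# made lazy by a self-loop of weight 1/2 at every vertex.
m, eps = 10, 0.05
Z = np.zeros((m, m))
B = np.block([[cycle_transition(m), Z], [Z, cycle_transition(m)]])
M = np.block([[Z, np.eye(m)], [np.eye(m), Z]])           # matching i <-> i + m
W = 0.5 * np.eye(2 * m) + 0.5 * ((1 - eps) * B + eps * M)

# Embed in R via the second eigenvector of L = I - W.
_, V = np.linalg.eigh(np.eye(2 * m) - W)
v2 = V[:, 1]

# Sweep every threshold that gives a different split and keep the one with
# the smallest weight of edges across, relative to the smaller side.
order = np.argsort(v2)
best_score, best_S = np.inf, None
for k in range(1, 2 * m):
    S, Sc = order[:k], order[k:]
    cut = W[np.ix_(S, Sc)].sum()
    score = cut / min(len(S), len(Sc))
    if score < best_score:
        best_score, best_S = score, sorted(S.tolist())

print(best_S)   # recovers one of the two cycles: {0,...,9} or {10,...,19}
```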
In fact, one can show that there exists at least one $\tau$ that gives a “good” clustering (note that it is computationally tractable to test all values of $\tau$ that give different clusters, since there are at most $n-1$ of them). This non-trivial fact is a consequence of Cheeger’s inequality, which was first shown by Noga Alon in 1986. My next post will be devoted to this inequality and to Spectral Clustering through another viewpoint.