Let $G$ be a graph with vertex set $\{1,\dots,n\}$. The degree of vertex $i$ is denoted $d_i$. Let $L$ be the Laplacian matrix of $G$, so that $L_{ii}=d_i$, $L_{ij}$ is $-1$ when the vertices $i$ and $j$ are adjacent, and is $0$ otherwise. The eigenvalues of $L$ are written as $\lambda_1\le \lambda_2\le \cdots\le \lambda_n$.
The graph is regular if all vertices have the same degree: $d_1=d_2=\cdots=d_n$. How can this property be seen from its Laplacian eigenvalues $\lambda_i$?
Since the sum of the eigenvalues is equal to the trace, we have $\sum_i \lambda_i = \sum_i d_i$. Moreover, $\sum_i \lambda_i^2$ is the trace of $L^2$, which is equal to the sum of the squares of all entries of $L$. This sum is $\sum_i d_i^2 + \sum_i d_i$ because the $i$th row of $L$ contains one entry equal to $d_i$ and $d_i$ entries equal to $-1$. In conclusion, $\sum_i \lambda_i^2 = \sum_i d_i^2 + \sum_i \lambda_i$.
The Cauchy-Schwarz inequality says that $\sum_i d_i^2 \ge \frac{1}{n}\left(\sum_i d_i\right)^2$, with equality if and only if all the numbers $d_i$ are equal, i.e., the graph is regular. In terms of eigenvalues, this means that the difference
$$\sum_i \lambda_i^2 - \sum_i \lambda_i - \frac{1}{n}\left(\sum_i \lambda_i\right)^2$$
is always nonnegative, and is equal to zero precisely when the graph is regular. This is how one can see the regularity of a graph from its Laplacian spectrum.
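This criterion is easy to check numerically. Below is a minimal sketch in Python (the helper names and the NumPy setup are mine, not part of the original discussion):

```python
import numpy as np

def laplacian(adj):
    """Laplacian matrix L = D - A from a 0/1 adjacency matrix."""
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

def regularity_gap(adj):
    """sum(l_i^2) - sum(l_i) - (1/n)(sum l_i)^2, computed from the spectrum.
    Nonnegative; zero exactly when the graph is regular."""
    lam = np.linalg.eigvalsh(laplacian(adj))
    n = len(lam)
    return lam @ lam - lam.sum() - lam.sum() ** 2 / n

# 4-cycle (regular) versus the path on 4 vertices (irregular)
c4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
p4 = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
```

For the cycle the gap vanishes; for the path it equals $\sum_i d_i^2 - \frac1n(\sum_i d_i)^2 = 10 - 9 = 1$.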
As an aside, $\sum_i d_i^2$ (and hence $\sum_i \lambda_i^2 - \sum_i \lambda_i$) is an even integer. Indeed, the sum $\sum_i d_i$ is even because it double-counts the edges. Hence the number of vertices of odd degree is even, which implies that $\sum_i d_i^k$ is even for every positive integer $k$.
Up to a constant factor, $\sum_i \lambda_i^2 - \sum_i \lambda_i - \frac1n\left(\sum_i \lambda_i\right)^2$ is simply the degree variance: the variance of the sequence $d_1,\dots,d_n$. What graph maximizes it for a given $n$? We want to have some very large degrees and some very small ones.
Let $G$ be the union of the complete graph $K_m$ on $m$ vertices and $n-m$ isolated vertices. The sum of the degrees is $m(m-1)$ and the sum of the squares of the degrees is $m(m-1)^2$. Hence, the degree variance is
$$\frac{m(m-1)^2}{n} - \left(\frac{m(m-1)}{n}\right)^2 = \frac{m(m-1)^2(n-m)}{n^2}.$$
For $n=5$ the maximum is attained by $m=4$ (one isolated vertex), and the maximum value is $36/25$. In general the maximum is attained by the integer $m$ maximizing $m(m-1)^2(n-m)$, which is roughly $3n/4$.
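A brute-force check of the best clique size in this family (a Python sketch; the helper name is mine):

```python
def clique_plus_isolated_variance(n, m):
    """Degree variance of the union of K_m and n - m isolated vertices."""
    degrees = [m - 1] * m + [0] * (n - m)
    mean = sum(degrees) / n
    return sum((d - mean) ** 2 for d in degrees) / n

# Best clique size for n = 5 and n = 6:
best5 = max(range(1, 6), key=lambda m: clique_plus_isolated_variance(5, m))
best6 = max(range(1, 7), key=lambda m: clique_plus_isolated_variance(6, m))
```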
The graph $G$ is disconnected. But any graph has the same degree variance as its complement. And the complement $\overline{G}$ is always connected: it consists of a “center”, a complete graph on $n-m$ vertices, and a “periphery”, a set of $m$ vertices that are connected to each central vertex. Put another way, $\overline{G}$ is obtained from the complete bipartite graph $K_{m,\,n-m}$ by connecting all vertices of the group of size $n-m$ to each other.
Tom A. B. Snijders (1981) proved that $G$ and $\overline{G}$ are the only graphs maximizing the degree variance; in particular, $\overline{G}$ is the unique maximizer among the connected graphs. It is pictured below.
This is a collection of entirely unoriginal remarks about the Laplacian spectrum of graphs. For an accessible overview of the subject I recommend the M.S. thesis The Laplacian Spectrum of Graphs by Michael William Newman. It also includes a large table of graphs with their spectra. Here I will avoid introducing matrices and enumerating vertices.
Let $V$ be the vertex set of a graph. Write $u\sim v$ if $u$ and $v$ are adjacent vertices. Given a function $f:V\to\mathbb{R}$, define $Lf(v)=\sum_{u\sim v}(f(v)-f(u))$.
This is a linear operator (the graph Laplacian) on the Euclidean space $\ell^2(V)$ of all functions $f:V\to\mathbb{R}$ with the norm $\|f\|^2=\sum_{v\in V}f(v)^2$. It is symmetric, $\langle Lf,g\rangle = \sum_{u\sim v}(f(u)-f(v))(g(u)-g(v)) = \langle f,Lg\rangle$ (the sum is over edges), and positive semidefinite, $\langle Lf,f\rangle = \sum_{u\sim v}(f(u)-f(v))^2 \ge 0$. Since equality is attained for constant $f$, 0 is always an eigenvalue of $L$.
This is the standard setup, but I prefer to change things a little and replace $\ell^2(V)$ by the smaller space $\ell^2_0(V)$ of functions with zero mean: $\sum_{v\in V}f(v)=0$. Indeed, $L$ maps $\ell^2(V)$ to $\ell^2_0(V)$ anyway, and since it kills the constants, it makes sense to focus on $\ell^2_0(V)$. It is a vector space of dimension $n-1$, where $n=|V|$.
One advantage is that the smallest eigenvalue of $L$ on $\ell^2_0(V)$ is 0 if and only if the graph is disconnected: indeed, $Lf=0$ is equivalent to $f$ being constant on each connected component. We also gain better symmetry between $L$ and the Laplacian of the graph complement, denoted $\overline{L}$. Indeed, since $(L+\overline{L})f(v) = \sum_{u\ne v}(f(v)-f(u)) = nf(v) - \sum_u f(u)$, it follows that $(L+\overline{L})f = nf$ for every $f\in\ell^2_0(V)$. So, the identity $L+\overline{L}=nI$ holds on $\ell^2_0(V)$ (it does not hold on $\ell^2(V)$). Hence the eigenvalues of $\overline{L}$ are obtained by subtracting the eigenvalues of $L$ from $n$. As a corollary, the largest eigenvalue of $L$ is at most $n$, with equality if and only if the graph complement is disconnected. More precisely, the multiplicity of the eigenvalue $n$ is one less than the number of connected components of the graph complement.
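The complementation identity is easy to test numerically. Here is a hedged Python sketch (function names mine) using the path on four vertices, which happens to be self-complementary:

```python
import numpy as np

def reduced_spectrum(adj):
    """Laplacian eigenvalues on the zero-mean subspace: drop one zero eigenvalue."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1:]

p4 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
comp = 1 - p4 - np.eye(4, dtype=p4.dtype)   # adjacency matrix of the complement
lam = reduced_spectrum(p4)
mu = reduced_spectrum(comp)
# read backwards, mu should equal n - lam with n = 4
```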
Let $d$ denote the diameter of the graph. Then the number of distinct Laplacian eigenvalues is at least $d$. Indeed, let $u, v$ be two vertices at distance $d$ from each other. Define $f_0(u)=1$ and $f_0=0$ elsewhere. Also let $f_k = L^k f_0$ for $k=1,2,3,\dots$. Note that $f_k\in\ell^2_0(V)$ for all $k\ge 1$. One can prove by induction that $f_k(w)=0$ when the distance from $w$ to $u$ is greater than $k$, and $(-1)^k f_k(w)>0$ when the distance from $w$ to $u$ is equal to $k$. In particular, $f_k(v)=0$ when $k<d$ and $f_d(v)\ne 0$. This shows that $f_d$ is not a linear combination of $f_1,\dots,f_{d-1}$. Since $f_k = L^{k-1}f_1$, it follows that $L^{d-1}f_1$ is not a linear combination of $f_1, Lf_1, \dots, L^{d-2}f_1$. Hence the minimal polynomial of $L$ restricted to $\ell^2_0(V)$ has degree at least $d$, which implies the claim.
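The bound is sharp for paths, which can be verified numerically (a sketch; the helper names are mine):

```python
import numpy as np

def distinct_count(adj, tol=1e-8):
    """Number of distinct Laplacian eigenvalues on the zero-mean subspace."""
    adj = np.asarray(adj, dtype=float)
    L = np.diag(adj.sum(axis=1)) - adj
    lam = np.sort(np.linalg.eigvalsh(L))[1:]
    return 1 + int(np.sum(np.diff(lam) > tol))

def path(n):
    """Adjacency matrix of the path on n vertices (diameter n - 1)."""
    a = np.zeros((n, n))
    for i in range(n - 1):
        a[i, i + 1] = a[i + 1, i] = 1
    return a

# The path on n vertices attains the bound: exactly n - 1 distinct eigenvalues.
```

The complete graph (diameter 1) sits at the other extreme, with a single distinct eigenvalue.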
Let’s consider a few examples of connected graphs.
There are two connected graphs: the 3-path (D=2) and the 3-cycle (D=1). In both cases we get D distinct eigenvalues. The spectra are [1, 3] and [3, 3], respectively.
One graph of diameter 3, the path. Its spectrum is $[2-\sqrt{2},\ 2,\ 2+\sqrt{2}]$.
One graph of diameter 1, the complete graph. Its spectrum is [4, 4, 4]. This pattern continues for other complete graphs: since the complement is the empty graph ($n$ components), all eigenvalues are equal to $n$.
Four graphs of diameter 2, which are shown below, with each caption being the spectrum.
The graph [1, 3, 4] has more distinct eigenvalues than its diameter.
The graph [2, 2, 4] is regular (all vertices have the same degree).
The smallest eigenvalue of the graphs [1, 1, 4] and [2, 2, 4] is a multiple one, due to these graphs having a large group of automorphisms (here rotations): applying such an automorphism to an eigenfunction for the smallest eigenvalue yields another eigenfunction.
[1, 3, 4] and [2, 4, 4] also have automorphisms, but their automorphisms preserve the eigenfunction for the lowest eigenvalue, up to a constant factor.
One graph of diameter 4, the path. Its spectrum is related to the golden ratio $\varphi = (1+\sqrt5)/2$: it consists of $2-\varphi,\ 3-\varphi,\ 1+\varphi,\ 2+\varphi$.
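This can be confirmed numerically; the sketch below (Python, scaffolding mine) compares the spectrum of the 5-vertex path with the four numbers $2-\varphi$, $3-\varphi$, $1+\varphi$, $2+\varphi$:

```python
import numpy as np

phi = (1 + 5 ** 0.5) / 2  # golden ratio

adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1           # path on 5 vertices
L = np.diag(adj.sum(axis=1)) - adj
spectrum = np.sort(np.linalg.eigvalsh(L))[1:]   # drop the trivial 0
expected = np.array([2 - phi, 3 - phi, 1 + phi, 2 + phi])
```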
One graph of diameter 1, the complete one: [5, 5, 5, 5]
Five graphs of diameter 3. All have connected complement, with the highest eigenvalue strictly between 4 and 5. None are regular. Each has 4 distinct eigenvalues.
14 graphs of diameter 2. Some of these are noted below.
Two have connected complement, so their eigenvalues are less than 5 (spectra listed below):
1.382, 1.382, 3.618, 3.618
1.382, 2.382, 3.618, 4.618
One has both integers and non-integers in its spectrum; it is the smallest such graph.
Two have eigenvalues of multiplicity 3, indicating a high degree of symmetry (spectra listed below):
1, 1, 1, 5
3, 5, 5, 5
Two have all eigenvalues integer and distinct:
1, 2, 4, 5
2, 3, 4, 5
The 5-cycle and the complete graph are the only regular graphs on 5 vertices.
This is where we first encounter isospectral graphs: the Laplacian spectrum cannot tell them apart.
Both of these have the same spectrum, but they are obviously non-isomorphic (consider the vertex degrees):
Both of these also have a common spectrum and are non-isomorphic.
Indeed, the second pair is obtained from the first by taking graph complement.
Also notable are regular graphs on 6 vertices, all of which have integer spectrum.
Here [3, 3, 3, 3, 6] (complete bipartite) and [2, 3, 3, 5, 5] (prism) are both regular of degree 3, but the spectrum allows us to tell them apart.
The prism is the smallest regular graph for which the first eigenvalue is a simple one. It has plenty of automorphisms, but the relevant eigenfunction (1 on one face of the prism, -1 on the other face) is compatible with all of them.
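Both the spectrum of the prism and this eigenfunction are easy to verify directly (a NumPy sketch; the vertex labeling is my own):

```python
import numpy as np

# Triangular prism: faces {0,1,2} and {3,4,5}, vertex i joined to i + 3
edges = [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (0,3), (1,4), (2,5)]
adj = np.zeros((6, 6))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1
L = np.diag(adj.sum(axis=1)) - adj
spectrum = np.sort(np.linalg.eigvalsh(L))[1:]   # should be [2, 3, 3, 5, 5]
f = np.array([1, 1, 1, -1, -1, -1])             # +1 on one face, -1 on the other
```

Each vertex has two neighbors on its own face and one on the opposite face, so $Lf = 3f - (f + f - f) = 2f$.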
There are four regular graphs on 7 vertices. Two of them are by now familiar: 7-cycle and complete graph. Here are the other two, both regular of degree 4 but with different spectra.
There are lots of isospectral pairs of graphs on 7 vertices, so I will list only the isospectral triples, of which there are five.
Spectrum 0.676596, 2, 3, 3.642074, 5, 5.681331:
Spectrum 0.726927, 2, 3.140435, 4, 4, 6.132637:
Spectrum 0.867363, 3, 3, 3.859565, 5, 6.273073:
Spectrum 1.318669, 2, 3.357926, 4, 5, 6.323404:
All of the triples mentioned so far have connected complement: for example, taking the complement of the triple with the spectrum [0.676596, 2, 3, 3.642074, 5, 5.681331] turns it into the triple with the spectrum [1.318669, 2, 3.357926, 4, 5, 6.323404].
Last but not least, an isospectral triple with an integer spectrum: 3, 4, 4, 6, 6, 7. This one has no counterpart since the complement of each of these graphs is disconnected.
Regular graphs, excluding the cycle (spectrum 0.585786, 0.585786, 2, 2, 3.414214, 3.414214, 4) and the complete one.
The Wikipedia article on nodes offers this one-dimensional illustration: a node is an interior point at which a standing wave does not move.
(At the endpoints the wave is forced to stay put, so I would not count them as nodes despite being marked on the plot.)
A standing wave in one dimension is described by the equation $u(x,t) = \cos(\omega t)\sin(\omega x)$, where $\omega$ is its (angular) frequency. The function $u$ solves the wave equation $u_{tt}=u_{xx}$: the wave vibrates without moving, hence the name. In mathematics, the profiles $\sin(\omega x)$ are the (Dirichlet) eigenfunctions of the Laplacian $-d^2/dx^2$, with eigenvalue $\omega^2$.
Subject to the boundary conditions $u(0,t)=u(\pi,t)=0$ (fixed ends), all standing waves on the interval $[0,\pi]$ are of the form $u=\cos(nt)\sin(nx)$ for $n=1,2,3,\dots$ Their eigenvalues $n^2$ are exactly the perfect squares, and the nodes $x=k\pi/n$, $0<k<n$, are equally spaced on the interval.
Things get more interesting in two dimensions. For simplicity consider the square $(0,\pi)^2$. Eigenfunctions with zero value on the boundary are of the form $\sin(mx)\sin(ny)$ for positive integers $m, n$; the corresponding eigenvalue is $m^2+n^2$. The set of eigenvalues has richer structure: it consists of the integers that can be expressed as a sum of two positive squares: 2, 5, 8, 10, 13, 17,…
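The list of eigenvalues can be generated directly (a small Python sketch; the function name is mine):

```python
def square_dirichlet_eigenvalues(limit):
    """Sorted distinct values of m^2 + n^2 <= limit over positive integers m, n."""
    values = set()
    m = 1
    while m * m + 1 <= limit:
        n = 1
        while m * m + n * n <= limit:
            values.add(m * m + n * n)
            n += 1
        m += 1
    return sorted(values)
```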
The zero sets of eigenfunctions in two dimensions are called nodal lines. At first glance it may appear that we have nothing interesting: the zero set of $\sin(mx)\sin(ny)$ is a union of $n-1$ equally spaced horizontal lines and $m-1$ equally spaced vertical lines:
But there is much more, because a sum of two eigenfunctions with the same eigenvalue is also an eigenfunction. To begin with, we can form linear combinations of $\sin(mx)\sin(ny)$ and $\sin(nx)\sin(my)$. Here are two examples from Partial Differential Equations by Walter Strauss:
For one such combination, the square is divided by nodal lines into 12 nodal domains:
After slight perturbation there is a single nodal line dividing the square into two regions of intricate geometry:
And then there are numbers that can be written as a sum of two positive squares in two different ways. The smallest is $50 = 1^2+7^2 = 5^2+5^2$, with eigenfunctions such as $\sin(x)\sin(7y)+\sin(5x)\sin(5y)$.
This is too good not to replicate: the eigenfunctions naturally extend to the whole plane as doubly periodic functions with anti-period $\pi$.
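That 50 is indeed the smallest eigenvalue with two essentially different representations can be confirmed by a short search (a Python sketch; names are mine):

```python
from collections import Counter

def representations(limit):
    """Count representations k = m^2 + n^2 with 0 < m <= n."""
    counts = Counter()
    m = 1
    while 2 * m * m <= limit:
        n = m
        while m * m + n * n <= limit:
            counts[m * m + n * n] += 1
            n += 1
        m += 1
    return counts

counts = representations(200)
smallest_double = min(k for k, c in counts.items() if c >= 2)
```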
The dreaded calculus torture device that works for exactly two integrals, $\int e^x\sin x\,dx$ and $\int e^x\cos x\,dx$.
Actually, no. A version of it (with one integration by parts) works for $\int\sec^3 x\,dx$:
$$\int\sec^3 x\,dx = \sec x\tan x - \int \sec x\tan^2 x\,dx = \sec x\tan x + \int\sec x\,dx - \int\sec^3 x\,dx,$$
hence (assuming we already know $\int\sec x\,dx$)
$$\int\sec^3 x\,dx = \frac12\left(\sec x\tan x + \ln|\sec x+\tan x|\right) + C.$$
Yes, this is more of a calculus joke. A more serious example comes from Fourier series.
The functions $\sin(nx)$, $n=1,2,3,\dots$, are orthogonal on $[0,\pi]$, in the sense that
$$\int_0^\pi \sin(nx)\sin(mx)\,dx = 0, \qquad m\ne n.$$
This is usually proved using a trigonometric identity that converts the product into a sum. But double integration by parts gives a nicer proof, because no obscure identities are needed. No boundary terms will appear, because the sines vanish at both endpoints:
$$\int_0^\pi \sin(nx)\sin(mx)\,dx = \frac{n}{m}\int_0^\pi \cos(nx)\cos(mx)\,dx = \frac{n^2}{m^2}\int_0^\pi \sin(nx)\sin(mx)\,dx.$$
All integrals here must vanish because $n^2/m^2 \ne 1$. As a bonus, we get the orthogonality of cosines, $\int_0^\pi \cos(nx)\cos(mx)\,dx = 0$ for $m\ne n$, with no additional effort.
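The vanishing can also be sanity-checked numerically (a Python sketch with a crude midpoint rule; everything here is my own scaffolding):

```python
import math

def integral(f, a=0.0, b=math.pi, steps=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

# sin(2x) and sin(3x) are orthogonal on [0, pi], and the cosine
# orthogonality comes along for free
sines = integral(lambda x: math.sin(2 * x) * math.sin(3 * x))
cosines = integral(lambda x: math.cos(2 * x) * math.cos(3 * x))
```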
The double integration by parts is also a more conceptual proof, because it gets to the heart of the matter: eigenvectors of a symmetric matrix (operator) that correspond to different eigenvalues are orthogonal. The trigonometric form is incidental; the eigenfunction property is essential. Let’s try this one more time, for the mixed boundary value problem $f''=\lambda f$, $f(a)=0$, $f'(b)=0$. Suppose that $f$ and $g$ satisfy the boundary conditions, $f''=\lambda f$, and $g''=\mu g$ with $\lambda\ne\mu$. Since $fg'$ and $f'g$ vanish at both endpoints, we can pass the primes easily:
$$\lambda\int_a^b fg = \int_a^b f''g = -\int_a^b f'g' = \int_a^b fg'' = \mu\int_a^b fg,$$
hence $\int_a^b fg = 0$.
The signature of a Hermitian matrix $A$ can be defined either as the pair (number of positive eigenvalues, number of negative eigenvalues), or simply as the difference
$$\sigma(A) = \#\{\text{positive eigenvalues}\} - \#\{\text{negative eigenvalues}\}.$$
The difference hides some information when the matrix is degenerate, but this will not be of concern here. For a nondegenerate matrix of size $n\times n$, the signature $\sigma(A)$ is between $-n$ and $n$ and has the same parity as $n$.
Given any square matrix $A$ with real entries and a complex number $\omega$, we can form the matrix $B = (1-\omega)A + (1-\overline{\omega})A^T$, which is Hermitian. Then $\sigma(\omega) = \sigma(B)$ is an integer-valued function of $\omega$. Restricting attention to the unit circle $|\omega|=1$, we obtain a piecewise constant function with jumps at the points where $B$ is degenerate.
When $A$ is a Seifert matrix of a knot, the function $\sigma(\omega)$ is the Tristram-Levine signature of the knot. To each knot there are infinitely many Seifert surfaces $F$, and to each Seifert surface there are infinitely many Seifert matrices $A$, depending on how we choose the generators of $H_1(F)$. Yet, $\sigma(\omega)$ depends on the knot alone.
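Before the plots, the same computation can be sketched in Python (my own scaffolding, separate from the Scilab code at the end), using a standard Seifert matrix of the trefoil:

```python
import numpy as np

def tristram_levine(A, theta, tol=1e-9):
    """Signature of (1 - w) A + (1 - conj(w)) A^T at w = exp(i*theta)."""
    w = np.exp(1j * theta)
    B = (1 - w) * A + (1 - np.conj(w)) * A.T   # Hermitian for real A
    eig = np.linalg.eigvalsh(B)
    return int(np.sum(eig > tol) - np.sum(eig < -tol))

V = np.array([[-1.0, 1.0],
              [0.0, -1.0]])   # a Seifert matrix of the trefoil

# sigma jumps from 0 to -2 as w crosses exp(i*pi/3), a root of the
# Alexander polynomial t^2 - t + 1
```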
Below I plot $\sigma(\omega)$ for a few knots, using Scilab for the computation of the signature; the Seifert matrices are taken from published knot tables.
Technical point: since Scilab enumerates colors using positive integers, the colors below correspond to the number of positive eigenvalues rather than the signature. Namely, black for 0, blue (1), green (2), and cyan (3).
As always, first comes the trefoil:
One of its Seifert matrices is
$$\begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix},$$
and the Tristram-Levine signature is
Next, the knot
with Seifert matrix
and the signature
And finally, the knot :
with Seifert matrix
and the signature
I experimented with a few more, trying to create more colors. However, guessing the complexity of the signature by looking at the Seifert matrix is one of many skills I do not have. So I conclude with the simple code used to plot the signatures.
// A is a real Seifert matrix; plot sigma(omega) on the unit circle
n = size(A, 1);
Npoints = 200;
r = 2*%pi/Npoints;                  // radius of the dot drawn at each sample
arcs = zeros(6, Npoints);
colors = zeros(1, Npoints);
for m = 1:Npoints
    x = cos(2*%pi*(m-1/2)/Npoints);
    y = sin(2*%pi*(m-1/2)/Npoints);
    omega = complex(x, y);
    B = (1-omega)*A + (1-conj(omega))*A';   // Hermitian matrix
    signature = sum(sign(spec(B)));         // #positive - #negative eigenvalues
    colors(m) = (signature+n)/2 + 1;        // color index = #positive + 1
    arcs(:,m) = [x-r, y+r, 2*r, 2*r, 0, 360*64]';  // disk of radius r at (x, y)
end
xfarcs(arcs, colors);               // filled disks, colored by the signature