## Measuring the regularity of a graph by its Laplacian eigenvalues

Let ${G}$ be a graph with vertices ${1, 2, \dots, n}$. The degree of vertex ${i}$ is denoted ${d_i}$. Let ${L}$ be the Laplacian matrix of ${G}$, so that ${L_{ii}=d_i}$, ${L_{ij}}$ is ${-1}$ when the vertices ${i, j}$ are adjacent, and is ${0}$ otherwise. The eigenvalues of ${L}$ are written as ${\lambda_1\le \dots \le \lambda_n}$.

The graph is regular if all vertices have the same degree: ${d_1=\cdots = d_n}$. How can this property be seen from its Laplacian eigenvalues ${\lambda_1, \dots, \lambda_n}$?

Since the sum of eigenvalues is equal to the trace, we have ${\sum \lambda_i = \sum d_i}$. Moreover, ${\sum \lambda_i^2}$ is the trace of ${L^2}$, which is equal to the sum of the squares of all entries of ${L}$. This sum is ${\sum d_i^2 + \sum d_i}$ because the ${i}$th row of ${L}$ contains one entry equal to ${d_i}$ and ${d_i}$ entries equal to ${-1}$. In conclusion, ${\sum d_i^2 = \sum \lambda_i^2 - \sum\lambda_i}$.

The Cauchy-Schwarz inequality says that ${n\sum d_i^2 \ge \left(\sum d_i \right)^2}$ with equality if and only if all numbers ${d_i}$ are equal, i.e., the graph is regular. In terms of eigenvalues, this means that the difference
${\displaystyle D =n\sum d_i^2 - \left(\sum d_i \right)^2 = n\sum (\lambda_i^2 - \lambda_i) - \left( \sum\lambda_i \right)^2 }$
is always nonnegative, and is equal to zero precisely when the graph is regular. This is how one can see the regularity of a graph from its Laplacian spectrum.
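This is easy to check numerically. Here is a quick NumPy sketch (the helper names are my own): it computes ${D}$ both from the degree sequence and from the Laplacian spectrum, for one regular and one non-regular graph on 4 vertices.

```python
import numpy as np

def laplacian(adj):
    """Laplacian L = diag(degrees) - adjacency."""
    return np.diag(adj.sum(axis=1)) - adj

def D_from_degrees(adj):
    """D = n*sum(d_i^2) - (sum d_i)^2, straight from the degrees."""
    d = adj.sum(axis=1)
    return adj.shape[0] * np.sum(d**2) - np.sum(d)**2

def D_from_spectrum(adj):
    """The same quantity computed from the Laplacian eigenvalues."""
    lam = np.linalg.eigvalsh(laplacian(adj))
    return adj.shape[0] * np.sum(lam**2 - lam) - np.sum(lam)**2

cycle4 = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # 4-cycle, regular
path4  = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])  # 4-path, not regular

print(D_from_degrees(cycle4), int(round(D_from_spectrum(cycle4))))  # 0 0
print(D_from_degrees(path4), int(round(D_from_spectrum(path4))))    # 4 4
```

As expected, ${D}$ vanishes for the regular graph and is positive for the path.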

As an aside, ${D }$ is an even integer. Indeed, the sum ${\sum d_i}$ is even because it counts every edge twice. Hence the number of vertices of odd degree is even; since ${\sum d_i^k }$ has the same parity as the number of odd-degree vertices, it is even for every positive integer ${k }$, and therefore so is ${D}$.
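Both claims (nonnegativity and evenness of ${D}$) are easy to sanity-check on random graphs; the construction of the random adjacency matrix below is mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def D_of_random_graph(n):
    """D = n*sum(d_i^2) - (sum d_i)^2 for a random graph on n vertices."""
    upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)  # random edges above the diagonal
    adj = upper + upper.T                                # symmetric 0/1 adjacency matrix
    d = adj.sum(axis=1)
    return n * np.sum(d**2) - np.sum(d)**2

for n in range(2, 12):
    D = D_of_random_graph(n)
    assert D >= 0 and D % 2 == 0   # nonnegative and even, as claimed
```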

Up to a constant factor, ${D}$ is simply the degree variance: the variance of the sequence ${d_1, \dots, d_n}$. What graph maximizes it for a given ${n}$? We want to have some very large degrees and some very small ones.

Let ${G_{m, n}}$ be the union of the complete graph ${K_m}$ on ${m}$ vertices and ${(n-m)}$ isolated vertices. The sum of degrees is ${m(m-1)}$ and the sum of squares of degrees is ${m(m-1)^2}$. Hence,

${D = nm(m-1)^2 - (m(m-1))^2 = m(m-1)^2(n-m)}$

For ${n=3, 4, 5, 6}$ the maximum is attained by ${m=n-1}$, that is, with one isolated vertex. For ${n=7, 8, 9, 10}$ it is attained by ${m=n-2}$. In general the maximizer is ${m^*=\lfloor (3n+2)/4 \rfloor}$.
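The claimed maximizer is easy to confirm by brute force for small ${n}$ (plain Python; the function name is mine):

```python
def D_value(m, n):
    """D for the graph G_{m,n}: K_m together with n-m isolated vertices."""
    return m * (m - 1)**2 * (n - m)

for n in range(3, 25):
    m_star = (3 * n + 2) // 4                       # claimed maximizer
    best = max(D_value(m, n) for m in range(1, n + 1))
    assert D_value(m_star, n) == best

print((3 * 7 + 2) // 4, D_value(5, 7))  # 5 160
```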

The graph ${G_{m, n}}$ is disconnected. But any graph has the same degree variance as its complement. And the complement ${G^c(m, n)}$ is always connected: it consists of a “center”, a complete graph on ${n-m}$ vertices, and “periphery”, a set of ${m}$ vertices that are connected to each central vertex. Put another way, ${G^c(m, n)}$ is obtained from the complete bipartite graph ${K_{m, n-m}}$ by connecting all vertices of the ${n-m}$ group together.

Tom A. B. Snijders (1981) proved that ${G(m^*, n)}$ and ${G^c(m^*, n)}$ are the only graphs maximizing the degree variance; in particular, ${G^c(m^*, n)}$ is the unique maximizer among the connected graphs. It is pictured below for ${n=4, \dots, 9}$.

## Laplacian spectrum of small graphs

This is a collection of entirely unoriginal remarks about the Laplacian spectra of graphs. For an accessible overview of the subject I recommend the M.S. thesis The Laplacian Spectrum of Graphs by Michael William Newman, which also includes a large table of graphs with their spectra. Here I will avoid introducing matrices and enumerating vertices.

Let ${V}$ be the vertex set of a graph. Write ${u\sim v}$ if ${u, v}$ are adjacent vertices. Given a function ${f\colon V\to \mathbb R}$, define ${L f(v) = \sum_{u\colon u\sim v}(f(v)-f(u))}$.
This is a linear operator (the graph Laplacian) on the Euclidean space ${\ell^2(V)}$ of all functions ${f\colon V\to \mathbb R}$ with the norm ${\|f\|^2 = \sum_{v\in V} f(v)^2}$. It is symmetric: ${\langle L f, g\rangle = \langle f, L g\rangle }$ and positive semidefinite: ${\langle L f, f\rangle = \frac12 \sum_{u\sim v}(f(u)-f(v))^2\ge 0}$. Since equality is attained for constant ${f}$, 0 is always an eigenvalue of ${L}$.

This is the standard setup, but I prefer to change things a little and replace ${\ell^2(V)}$ by the smaller space ${\ell^2_0(V)}$ of functions with zero mean: ${\sum_{v\in V}f(v)=0}$. Indeed, ${L}$ maps ${\ell^2(V)}$ to ${\ell^2_0(V)}$ anyway, and since it kills the constants, it makes sense to focus on ${\ell^2_0(V)}$. It is a vector space of dimension ${n-1}$ where ${n=|V|}$.

One advantage is that the smallest eigenvalue is 0 if and only if the graph is disconnected: indeed, ${\langle L f, f\rangle=0}$ is equivalent to ${f}$ being constant on each connected component. We also gain better symmetry between ${L}$ and the Laplacian of the graph complement, denoted ${L'}$. Indeed, since ${L' f(v) = \sum_{u\colon u\not \sim v}(f(v)-f(u))}$, it follows that ${(L+L')f(v) = \sum_{u\colon u\ne v} (f(v)-f(u)) = n f(v)}$ for every ${f\in \ell^2_0(V)}$. So, the identity ${L+L' = nI}$ holds on ${\ell^2_0(V)}$ (it does not hold on ${\ell^2(V)}$). Hence the eigenvalues of ${L'}$ are obtained by subtracting the eigenvalues of ${L}$ from ${n}$. As a corollary, the largest eigenvalue of ${L}$ is at most ${n}$, with equality if and only if the graph complement is disconnected. More precisely, the multiplicity of eigenvalue ${n}$ is one less than the number of connected components of the graph complement.
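Both the identity ${L+L'=nI}$ on ${\ell^2_0(V)}$ and the resulting spectral relation can be verified numerically; here is a NumPy sketch on a random graph (the construction is mine):

```python
import numpy as np

n = 6
rng = np.random.default_rng(1)
upper = np.triu(rng.integers(0, 2, size=(n, n)), 1)
A  = upper + upper.T                       # a random graph on n vertices
Ac = 1 - A - np.eye(n, dtype=int)          # adjacency matrix of the complement
L  = np.diag(A.sum(axis=1)) - A
Lc = np.diag(Ac.sum(axis=1)) - Ac

f = rng.standard_normal(n)
f -= f.mean()                              # a zero-mean function on the vertices
assert np.allclose((L + Lc) @ f, n * f)    # L + L' = nI holds on zero-mean functions

# consequently the spectra away from 0 are related by lambda -> n - lambda
lam, lamc = np.linalg.eigvalsh(L), np.linalg.eigvalsh(Lc)
assert np.allclose(np.sort(np.append(n - lam[1:], 0)), lamc)
```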

Let ${D}$ denote the diameter of the graph. Then the number of distinct Laplacian eigenvalues is at least ${D}$. Indeed, let ${u, v}$ be two vertices at distance ${D}$ from each other. Define ${f_0(u) = 1}$ and ${f_0=0}$ elsewhere. Also let ${f_k=L^k f_0}$ for ${k=1, 2, \dots}$. Note that ${f_k\in \ell_0^2(V)}$ for all ${k\ge 1}$. One can prove by induction that ${f_k(w)=0}$ when the distance from ${w}$ to ${u}$ is greater than ${k}$, and ${(-1)^k f_k(w) > 0}$ when the distance from ${w}$ to ${u}$ is equal to ${k}$. In particular, ${f_k(v) = 0}$ when ${k<D}$, and ${f_D(v)\ne 0}$. This shows that ${f_D}$ is not a linear combination of ${f_1, \dots, f_{D-1}}$. Since ${f_k = L^{k-1}f_1}$, it follows that ${L^{D-1}}$ is not a linear combination of ${L^0, L^1, \dots, L^{D-2}}$. Hence the minimal polynomial of ${L}$ has degree at least ${D}$, which implies the claim.
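For the path graph the bound is sharp: the path on ${n}$ vertices has diameter ${n-1}$ and exactly ${n-1}$ distinct eigenvalues on ${\ell^2_0(V)}$. A NumPy check (helper name mine):

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of the path graph on n vertices."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return np.diag(A.sum(axis=1)) - A

for n in range(3, 9):
    lam = np.linalg.eigvalsh(path_laplacian(n))[1:]   # drop the zero eigenvalue
    assert len(set(np.round(lam, 8))) == n - 1        # diameter = n - 1 distinct values
```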

Let’s consider a few examples of connected graphs.

## 3 vertices

There are two connected graphs: the 3-path (D=2) and the 3-cycle (D=1). In both cases we get D distinct eigenvalues. The spectra are [1, 3] and [3, 3], respectively.

## 4 vertices

• One graph of diameter 3, the path. Its spectrum is ${[2-\sqrt{2}, 2, 2+\sqrt{2}]}$.
• One graph of diameter 1, the complete graph. Its spectrum is ${[4, 4, 4]}$. This pattern continues for other complete graphs: since the complement is the empty graph (${n}$ components), all ${n-1}$ eigenvalues are equal to ${n}$.
• Four graphs of diameter 2, which are shown below, with each caption being the spectrum.

Remarks:

• The graph [1, 3, 4] has more distinct eigenvalues than its diameter.
• The graph [2, 2, 4] is regular (all vertices have the same degree).
• The smallest eigenvalue of the graphs [1, 1, 4] and [2, 2, 4] is a multiple one, due to the graphs having a large group of automorphisms (here rotations): applying such an automorphism to an eigenfunction for the smallest eigenvalue yields another eigenfunction.
• [1, 3, 4] and [2, 4, 4] also have automorphisms, but their automorphisms preserve the eigenfunction for the lowest eigenvalue, up to a constant factor.

## 5 vertices

• One graph of diameter 4, the path. Its spectrum is related to the golden ratio: it consists of ${(3\pm \sqrt{5})/2, (5\pm \sqrt{5})/2}$.
• One graph of diameter 1, the complete one: [5, 5, 5, 5]
• Five graphs of diameter 3. All have connected complement, with the highest eigenvalue strictly between 4 and 5. None are regular. Each has 4 distinct eigenvalues.
• 14 graphs of diameter 2. Some of these are noted below.

Two have connected complement, so their eigenvalues are less than 5 (spectrum shown on hover):

One has both integers and non-integers in its spectrum, the smallest such graph. Its eigenvalues are ${3\pm \sqrt{2}, 3, 5}$.

Two have eigenvalues of multiplicity 3, indicating a high degree of symmetry (spectrum shown on hover).

Two have all eigenvalues integer and distinct:

The 5-cycle and the complete graph are the only regular graphs on 5 vertices.

## 6 vertices

This is where we first encounter isospectral graphs: the Laplacian spectrum cannot tell them apart.

Both of these have spectrum ${3\pm \sqrt{5}, 2, 3, 3}$ but they are obviously non-isomorphic (consider the vertex degrees):

Both of these have spectrum ${3\pm \sqrt{5}, 3, 3, 4}$ and are non-isomorphic.

Indeed, the second pair is obtained from the first by taking graph complement.

Also notable are regular graphs on 6 vertices, all of which have integer spectrum.

Here [3, 3, 3, 3, 6] (complete bipartite) and [2, 3, 3, 5, 5] (prism) are both regular of degree 3, but the spectrum allows us to tell them apart.
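Both spectra are easy to reproduce with NumPy (the adjacency matrices below are written out by hand):

```python
import numpy as np

def spectrum(adj):
    """Sorted nonzero Laplacian eigenvalues, rounded to 6 places."""
    L = np.diag(adj.sum(axis=1)) - adj
    return [float(round(x, 6)) for x in np.sort(np.linalg.eigvalsh(L))[1:]]

# complete bipartite K_{3,3}: parts {0,1,2} and {3,4,5}
K33 = np.zeros((6, 6), dtype=int)
K33[:3, 3:] = 1
K33[3:, :3] = 1

# prism: triangles {0,1,2} and {3,4,5} joined by a perfect matching
prism = np.zeros((6, 6), dtype=int)
for u, v in [(0,1), (1,2), (0,2), (3,4), (4,5), (3,5), (0,3), (1,4), (2,5)]:
    prism[u, v] = prism[v, u] = 1

print(spectrum(K33))    # [3.0, 3.0, 3.0, 3.0, 6.0]
print(spectrum(prism))  # [2.0, 3.0, 3.0, 5.0, 5.0]
```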

The prism is the smallest regular graph for which the first eigenvalue is a simple one. It has plenty of automorphisms, but the relevant eigenfunction (1 on one face of the prism, -1 on the other face) is compatible with all of them.

## 7 vertices

There are four regular graphs on 7 vertices. Two of them are by now familiar: 7-cycle and complete graph. Here are the other two, both regular of degree 4 but with different spectra.

There are lots of isospectral pairs of graphs on 7 vertices, so I will list only the isospectral triples, of which there are five.

Spectrum 0.676596, 2, 3, 3.642074, 5, 5.681331:

Spectrum 0.726927, 2, 3.140435, 4, 4, 6.132637:

Spectrum 0.867363, 3, 3, 3.859565, 5, 6.273073:

Spectrum 1.318669, 2, 3.357926, 4, 5, 6.323404:

All of the triples mentioned so far have connected complement: for example, taking the complement of the triple with the spectrum [0.676596, 2, 3, 3.642074, 5, 5.681331] turns it into the triple with the spectrum [1.318669, 2, 3.357926, 4, 5, 6.323404].

Last but not least, an isospectral triple with an integer spectrum: 3, 4, 4, 6, 6, 7. This one has no complementary counterpart, since the complement of each of these graphs is disconnected.

## 8 vertices

Here are the regular graphs on 8 vertices, excluding the cycle (spectrum [0.585786, 0.585786, 2, 2, 3.414214, 3.414214, 4]) and the complete graph.
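The cycle spectrum quoted above matches the closed form ${2-2\cos(2\pi k/n)}$ for ${k=1,\dots,n-1}$; a quick NumPy check:

```python
import numpy as np

n = 8
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1    # the cycle C_8
L = np.diag(A.sum(axis=1)) - A
lam = np.sort(np.linalg.eigvalsh(L))[1:]         # drop the zero eigenvalue

expected = np.sort([2 - 2*np.cos(2*np.pi*k/n) for k in range(1, n)])
assert np.allclose(lam, expected)
print([float(round(x, 6)) for x in lam])
# [0.585786, 0.585786, 2.0, 2.0, 3.414214, 3.414214, 4.0]
```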

#### Degree 6 regular

Credits: the list of graphs is by Brendan McKay, and the computations used the NetworkX library, specifically the methods read_graph6 (to read the files provided by Prof. McKay), laplacian_spectrum, diameter, degree, and draw.

## Nodal lines

The Wikipedia article on nodes offers this one-dimensional illustration: a node is an interior point at which a standing wave does not move.

(At the endpoints the wave is forced to stay put, so I would not count them as nodes despite being marked on the plot.)

A standing wave in one dimension is described by the equation ${f''+\omega^2 f=0}$, where ${\omega}$ is its (angular) frequency. The function ${u(x,t) = f(x)\cos \omega t}$ solves the wave equation ${u_{tt}=u_{xx}}$: the wave vibrates without moving, hence the name. In mathematics, these are the (Dirichlet) eigenfunctions of the Laplacian.

Subject to boundary conditions ${f(0)=0 = f(\pi)}$ (fixed ends), all standing waves on the interval ${(0,\pi)}$ are of the form ${\sin nx}$ for ${n=1,2,3,\dots}$. Their eigenvalues are exactly the perfect squares, and the nodes are equally spaced on the interval.

Things get more interesting in two dimensions. For simplicity consider the square ${Q=(0,\pi)\times (0,\pi)}$. Eigenfunctions with zero value on the boundary are of the form ${f(x,y) = \sin mx \sin ny}$ for positive integers ${m,n}$. The set of eigenvalues has richer structure: it consists of the integers that can be expressed as the sum of two positive squares: 2, 5, 8, 10, 13, 17,…
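The beginning of that list is quick to generate (plain Python):

```python
# integers expressible as m^2 + n^2 with positive integers m, n
limit = 20
sums = sorted({m*m + n*n
               for m in range(1, limit)
               for n in range(1, limit)
               if m*m + n*n <= limit})
print(sums)  # [2, 5, 8, 10, 13, 17, 18, 20]
```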

The zero sets of eigenfunctions in two dimensions are called nodal lines. At first glance it may appear that there is nothing interesting here: the zero set of ${\sin mx \sin ny}$ is a union of ${n-1}$ equally spaced horizontal lines and ${m-1}$ equally spaced vertical lines:

But there is much more, because a sum of two eigenfunctions with the same eigenvalue is also an eigenfunction. To begin with, we can form linear combinations of ${\sin mx \sin ny}$ and ${\sin nx \sin my}$. Here are two examples from Partial Differential Equations by Walter Strauss:

When ${f(x,y) = \sin 12x \sin y+\sin x \sin 12y }$, the square is divided by nodal lines into 12 nodal domains:

After slight perturbation ${f(x,y) = \sin 12x \sin y+0.9\sin x \sin 12y }$ there is a single nodal line dividing the square into two regions of intricate geometry:

And then there are numbers that can be written as sums of squares in two different ways. The smallest is ${50=1^2+7^2 = 5^2+5^2}$, with eigenfunctions such as

$\displaystyle f(x,y) = \sin x\sin 7y +2\sin 5x \sin 5y+\sin 7x\sin y$

pictured below.
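One can confirm numerically that this combination is still an eigenfunction with eigenvalue 50, for instance by comparing a finite-difference Laplacian with ${-50f}$ at a sample point. A sketch (the sample point is arbitrary; the tolerance reflects the ${O(h^2)}$ discretization error):

```python
import numpy as np

def f(x, y):
    return np.sin(x)*np.sin(7*y) + 2*np.sin(5*x)*np.sin(5*y) + np.sin(7*x)*np.sin(y)

h = 1e-3
x, y = 0.7, 1.1   # arbitrary point inside the square (0, pi) x (0, pi)
# five-point finite-difference approximation of the Laplacian of f
lap = (f(x+h, y) + f(x-h, y) + f(x, y+h) + f(x, y-h) - 4*f(x, y)) / h**2
assert abs(lap + 50 * f(x, y)) < 1e-2   # Laplacian(f) = -50 f
```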

This is too good not to replicate: the eigenfunctions naturally extend as doubly periodic functions with anti-period ${\pi}$.

## Integrate by parts twice and solve for the integral

The dreaded calculus torture device that works for exactly two integrals, ${\int e^{ax}\sin bx\,dx}$ and ${\int e^{ax}\cos bx\,dx}$.

Actually, no. A version of it (with one integration by parts) works for ${\int x^n\,dx}$:

$\displaystyle \int x^n\,dx = x^n x - \int x\, d(x^n) = x^{n+1} - n \int x^n\,dx$

hence (assuming ${n\ne -1}$)

$\displaystyle \int x^n\,dx = \frac{x^{n+1}}{n+1} +C$

Yes, this is more of a calculus joke. A more serious example comes from Fourier series.

The functions ${\sin nx}$, ${n=1,2,\dots}$, are orthogonal on ${[0,\pi]}$, in the sense that

$\displaystyle \int_0^\pi \sin nx \sin mx \,dx =0 , \quad m\ne n$

This is usually proved using a trigonometric identity that converts the product to a sum. But the double integration by parts gives a nicer proof, because no obscure identities are needed. No boundary terms will appear because the sines vanish at both endpoints:

$\displaystyle \int_0^\pi \sin nx \sin mx \,dx = \frac{n}{m} \int_0^\pi \cos nx \cos mx \,dx = \frac{n^2}{m^2} \int_0^\pi \sin nx \sin mx \,dx$

All integrals here must vanish because ${n^2/m^2\ne 1}$. As a bonus, we get the orthogonality of cosines, ${\int_0^\pi \cos nx \cos mx \,dx=0}$, with no additional effort.
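Here is a numerical confirmation of both orthogonality relations, using a midpoint-rule quadrature in NumPy (the discretization is mine):

```python
import numpy as np

N = 100000
x = (np.arange(N) + 0.5) * np.pi / N          # midpoint rule on [0, pi]
dx = np.pi / N

def inner(f, g):
    """Approximate inner product: integral of f*g over [0, pi]."""
    return np.sum(f(x) * g(x)) * dx

for n in range(1, 5):
    for m in range(1, 5):
        s = inner(lambda t: np.sin(n * t), lambda t: np.sin(m * t))
        c = inner(lambda t: np.cos(n * t), lambda t: np.cos(m * t))
        if n != m:
            assert abs(s) < 1e-6 and abs(c) < 1e-6                    # orthogonal
        else:
            assert abs(s - np.pi/2) < 1e-6 and abs(c - np.pi/2) < 1e-6
```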

The double integration by parts is also a more conceptual proof, because it gets to the heart of the matter: eigenvectors of a symmetric matrix (operator) that correspond to different eigenvalues are orthogonal. The trigonometric form is incidental, the eigenfunction property is essential. Let’s try this one more time, for the mixed boundary value problem ${f(a)=0}$, ${f'(b)=0}$. Suppose that ${f}$ and ${g}$ satisfy the boundary conditions, ${f''=\lambda f}$, and ${g''=\mu g}$. Since ${fg'}$ and ${f'g}$ vanish at both endpoints, we can pass the primes easily:

$\displaystyle \int_a^b fg= \frac{1}{\mu}\int_a^b fg'' = -\frac{1}{\mu}\int_a^b f'g' = \frac{1}{\mu}\int_a^b f''g = \frac{\lambda}{\mu} \int_a^b fg$

If ${\lambda\ne \mu}$, all integrals must vanish.

## Tristram-Levine signatures with Scilab

The signature of a Hermitian matrix ${A}$ can be defined either as the pair (number of positive eigenvalues, number of negative eigenvalues), or simply as the difference

$\displaystyle s(A)=\#\{\lambda_i: \lambda_i>0\}-\#\{\lambda_i: \lambda_i<0\}$

The function ${s(A)}$ hides some information when the matrix is degenerate, but this will not be of concern here. For a nondegenerate matrix of size ${n}$, the number ${s(A)}$ is between ${-n}$ and ${n}$ and has the same parity as ${n}$.

Given any square matrix ${V}$ with real entries and a complex number ${\omega}$, we can form ${V_\omega = (1-\omega)V+(1-\bar\omega)V^T}$, which is a Hermitian matrix. Then ${s(\omega):=s(V_\omega)}$ is an integer-valued function of ${\omega}$. Restricting attention to the unit circle ${|\omega|=1}$, we obtain a piecewise constant function with jumps at the points where ${V_\omega}$ is degenerate.

When ${V}$ is a Seifert matrix of a knot, the function ${s(\omega)}$ is the Tristram-Levine signature of the knot. To each knot ${K}$ there are infinitely many Seifert surfaces ${F}$, and to each Seifert surface there are infinitely many Seifert matrices ${V}$, depending on how we choose the generators of ${H_1(F)}$. Yet, ${s(\omega)}$ depends on ${K}$ alone.
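While the plots below use Scilab, the computation itself is easy to reproduce in Python (the function below is mine; NumPy's eigvalsh handles the Hermitian matrix ${V_\omega}$). As a sanity check, for the trefoil's Seifert matrix given below, the value at ${\omega=-1}$ is the classical knot signature ${-2}$:

```python
import numpy as np

def tristram_levine(V, omega):
    """s(omega): signature of (1-omega)V + (1-conj(omega))V^T, for |omega| = 1."""
    B = (1 - omega) * V + (1 - np.conj(omega)) * V.T
    lam = np.linalg.eigvalsh(B)                 # B is Hermitian, eigenvalues are real
    return int(np.sum(lam > 1e-9)) - int(np.sum(lam < -1e-9))

V = np.array([[-1, 0], [-1, -1]])               # a Seifert matrix of the trefoil
print(tristram_levine(V, -1))                   # -2
print(tristram_levine(V, np.exp(0.25j * np.pi)))  # 0
```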

Below I plot ${s(\omega)}$ for a few knots using Scilab for computation of signature and the knot data from

J. C. Cha and C. Livingston, KnotInfo: Table of Knot Invariants

Technical point: since Scilab enumerates colors using positive integers, the colors below correspond to the number of positive eigenvalues rather than the signature. Namely, black for 0, blue (1), green (2), and cyan (3).

As always, first comes the trefoil:

One of its Seifert matrices is

$\displaystyle \begin{pmatrix} -1 & 0 \\ -1 & -1 \end{pmatrix}$

and the Tristram-Levine signature is

Next, the knot ${8_5}$

with Seifert matrix

$\displaystyle \begin{pmatrix} -1& 0& 0& -1& -1& -1\\ 0& 1& 0& 0& 0& 0\\ -1& 0& -1& -1& -1& -1\\ 0& -1& 0& -1& -1& -1\\ 0& -1& 0& 0& -1& 0\\ 0& -1& 0& 0& -1& -1 \end{pmatrix}$

and the signature

And finally, the knot ${8_{19}}$:

with Seifert matrix

$\displaystyle \begin{pmatrix} -1& 0& 0& 0& 0& 0\\ -1& -1& 0& 0& 0& 0\\ -1& -1& -1& -1& 0& -1\\ -1& -1& 0& -1& 0& 0\\ 0& 0& -1& -1& -1& -1\\ -1& -1& 0& -1& 0& -1\end{pmatrix}$

and the signature

I experimented with a few more, trying to create more colors. However, guessing the complexity of the signature by looking at the Seifert matrix is one of many skills I do not have. So I conclude with the simple code used to plot the signatures.

```scilab
function sig(A)
    clf();
    [n, n] = size(A);
    Npoints = 200;
    r = 2*%pi/Npoints;                    // radius of the dot marking each sample
    arcs = zeros(6, Npoints);
    colors = zeros(Npoints);
    for m = 1:Npoints
        // sample omega on the unit circle (offset by 1/2 to avoid omega = 1)
        x = cos(2*%pi*(m-1/2)/Npoints);
        y = sin(2*%pi*(m-1/2)/Npoints);
        omega = complex(x, y);
        B = (1-omega)*A + (1-conj(omega))*A';
        signature = sum(sign(spec(B)));
        colors(m) = (signature+n)/2 + 1;  // color index = number of positive eigenvalues + 1
        arcs(:, m) = [x-r, y-r, 2*r, 2*r, 0, 360*64]';
    end
    xfarcs(arcs, colors');
    replot([-1, -1, 1, 1]);
    a = get("current_axes");
    a.isoview = "on";                     // equal scale on both axes
endfunction
```