Nodal lines

The Wikipedia article on nodes offers this one-dimensional illustration: a node is an interior point at which a standing wave does not move.

Standing wave and its nodes

(At the endpoints the wave is forced to stay put, so I would not count them as nodes despite being marked on the plot.)

A standing wave in one dimension is described by the equation {f''+\omega^2 f=0}, where {\omega} is its (angular) frequency. The function {u(x,t) = f(x)\cos \omega t} solves the wave equation {u_{tt}=u_{xx}}: the wave vibrates without moving, hence the name. In mathematics, these are the (Dirichlet) eigenfunctions of the Laplacian.

Subject to the boundary conditions {f(0)=0 = f(\pi)} (fixed ends), all standing waves on the interval {(0,\pi)} are of the form {\sin nx} for {n=1,2,3,\dots}. Their eigenvalues are exactly the perfect squares, and their nodes are equally spaced on the interval.
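As a quick check (a Python sketch, my illustration rather than part of the argument), one can verify numerically that {\sin nx} satisfies {f''+n^2 f=0} and vanishes at the equally spaced points {k\pi/n}:

```python
import numpy as np

n = 5
x = np.linspace(0.1, np.pi - 0.1, 1000)
h = 1e-4

f = np.sin(n * x)
# central second difference approximates f''
f2 = (np.sin(n * (x + h)) - 2 * f + np.sin(n * (x - h))) / h**2

# f'' + n^2 f = 0, up to discretization error
residual = np.max(np.abs(f2 + n**2 * f))
print(residual)  # tiny

# the n - 1 interior nodes sit at k*pi/n
nodes = np.array([k * np.pi / n for k in range(1, n)])
print(np.max(np.abs(np.sin(n * nodes))))  # ~ 0
```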

Things get more interesting in two dimensions. For simplicity, consider the square {Q=(0,\pi)\times (0,\pi)}. The eigenfunctions with zero boundary values are {f(x,y) = \sin mx \sin ny} for positive integers {m,n}. The set of eigenvalues has richer structure: it consists of the integers that can be expressed as a sum of two positive squares: 2, 5, 8, 10, 13, 17,…
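These eigenvalues are easy to list by brute force (a quick Python sketch, my illustration):

```python
# integers up to 50 that are sums of two positive squares
sums = sorted({a * a + b * b
               for a in range(1, 8)
               for b in range(1, 8)
               if a * a + b * b <= 50})
print(sums[:6])    # [2, 5, 8, 10, 13, 17]
print(50 in sums)  # True: 50 = 1 + 49 = 25 + 25
```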

The zero sets of eigenfunctions in two dimensions are called nodal lines. At first glance it may appear that there is nothing interesting here: the zero set of {\sin mx \sin ny} is the union of {n-1} equally spaced horizontal lines and {m-1} equally spaced vertical lines:

Boring nodal lines (this is a square, not a tall rectangle)

But there is much more, because a sum of two eigenfunctions with the same eigenvalue is also an eigenfunction. To begin with, we can form linear combinations of {\sin mx \sin ny} and {\sin nx \sin my}. Here are two examples from Partial Differential Equations by Walter Strauss:

When {f(x,y) = \sin 12x \sin y+\sin x \sin 12y }, the square is divided by nodal lines into 12 nodal domains:

Eigenvalue 145, twelve nodal domains

After a slight perturbation, {f(x,y) = \sin 12x \sin y+0.9\sin x \sin 12y}, there is a single nodal line dividing the square into two regions of intricate geometry:

Also eigenvalue 145, but two nodal domains
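Counts like these can be checked numerically. Here is a sketch (my own Python, not from Strauss's book): sample the eigenfunction on a fine grid and flood-fill regions of constant sign with 4-connectivity. At this resolution it reports two domains for the perturbed eigenfunction; for the unperturbed one the count is less reliable, since domains that touch only at a nodal crossing can merge through a single pixel.

```python
import numpy as np
from collections import deque

def count_nodal_domains(f, N=500):
    """Count components of {f > 0} and {f < 0} on (0, pi)^2,
    sampling f at N x N cell centers and flood-filling with 4-connectivity."""
    t = (np.arange(N) + 0.5) * np.pi / N
    X, Y = np.meshgrid(t, t)
    S = np.sign(f(X, Y)).astype(int)
    seen = np.zeros((N, N), dtype=bool)
    domains = 0
    for i in range(N):
        for j in range(N):
            if seen[i, j] or S[i, j] == 0:
                continue
            domains += 1
            s = S[i, j]
            seen[i, j] = True
            q = deque([(i, j)])
            while q:
                a, b = q.popleft()
                for u, v in ((a+1, b), (a-1, b), (a, b+1), (a, b-1)):
                    if 0 <= u < N and 0 <= v < N and not seen[u, v] and S[u, v] == s:
                        seen[u, v] = True
                        q.append((u, v))
    return domains

f_pert = lambda x, y: np.sin(12*x) * np.sin(y) + 0.9 * np.sin(x) * np.sin(12*y)
domains_pert = count_nodal_domains(f_pert)
print(domains_pert)  # 2
```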

And then there are numbers that can be written as sums of squares in two different ways. The smallest is {50=1^2+7^2 = 5^2+5^2}, with eigenfunctions such as

\displaystyle    f(x,y) = \sin x\sin 7y +2\sin 5x \sin 5y+\sin 7x\sin y

pictured below.

Eigenvalue 50

This is too good not to replicate: the eigenfunctions naturally extend to the whole plane as doubly periodic functions, with anti-period {\pi} in each variable.

Periodic extension
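The anti-periodicity is easy to verify: every term of the eigenfunction above has both frequencies odd, and {\sin m(x+\pi)=-\sin mx} for odd {m}. A quick numerical confirmation (Python sketch, my illustration):

```python
import numpy as np

def f(x, y):
    return (np.sin(x) * np.sin(7*y) + 2 * np.sin(5*x) * np.sin(5*y)
            + np.sin(7*x) * np.sin(y))

rng = np.random.default_rng(0)
x = rng.uniform(0, np.pi, 100)
y = rng.uniform(0, np.pi, 100)

# anti-period pi in each variable: shifting either coordinate flips the sign
err_x = np.max(np.abs(f(x + np.pi, y) + f(x, y)))
err_y = np.max(np.abs(f(x, y + np.pi) + f(x, y)))
print(err_x, err_y)  # both ~ 1e-15
```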

Integrate by parts twice and solve for the integral

The dreaded calculus torture device that works for exactly two integrals, {\int e^{ax}\sin bx\,dx} and {\int e^{ax}\cos bx\,dx}.

Actually, no. A version of it (with one integration by parts) works for {\int x^n\,dx}:

\displaystyle    \int x^n\,dx = x^n x - \int x\, d(x^n) = x^{n+1} - n \int x^n\,dx

hence (assuming {n\ne -1})

\displaystyle  \int x^n\,dx = \frac{x^{n+1}}{n+1} +C
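Both antiderivatives are easy to sanity-check by differentiating the closed forms numerically (a Python sketch; the {e^{ax}\sin bx} antiderivative quoted in the code is the standard one that the double integration by parts produces):

```python
import math

def num_deriv(F, x, h=1e-5):
    # symmetric difference quotient
    return (F(x + h) - F(x - h)) / (2 * h)

# d/dx [x^(n+1)/(n+1)] = x^n
n = 4
F = lambda x: x**(n + 1) / (n + 1)
err1 = abs(num_deriv(F, 0.7) - 0.7**n)

# standard closed form: d/dx [e^(ax)(a sin bx - b cos bx)/(a^2+b^2)] = e^(ax) sin bx
a, b = 2.0, 3.0
G = lambda x: math.exp(a * x) * (a * math.sin(b * x) - b * math.cos(b * x)) / (a**2 + b**2)
err2 = abs(num_deriv(G, 0.7) - math.exp(a * 0.7) * math.sin(b * 0.7))

print(err1, err2)  # both tiny
```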

Yes, this is more of a calculus joke. A more serious example comes from Fourier series.

The functions {\sin nx}, {n=1,2,\dots}, are orthogonal on {[0,\pi]}, in the sense that

\displaystyle    \int_0^\pi \sin nx \sin mx \,dx =0 , \quad m\ne n

This is usually proved using a trigonometric identity that converts the product into a sum. But double integration by parts gives a nicer proof, because no obscure identities are needed. No boundary terms appear, because the sines vanish at both endpoints:

\displaystyle    \int_0^\pi \sin nx \sin mx \,dx = \frac{n}{m} \int_0^\pi \cos nx \cos mx \,dx = \frac{n^2}{m^2} \int_0^\pi \sin nx \sin mx \,dx

All integrals here must vanish because {n^2/m^2\ne 1}. As a bonus, we get the orthogonality of cosines, {\int_0^\pi \cos nx \cos mx \,dx=0}, with no additional effort.
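Here is a quick numerical confirmation of both orthogonality relations (a Python sketch using the midpoint rule, my illustration):

```python
import numpy as np

N = 20000
h = np.pi / N
x = (np.arange(N) + 0.5) * h  # midpoint rule on [0, pi]
m, n = 3, 5

sin_prod = np.sum(np.sin(n * x) * np.sin(m * x)) * h
cos_prod = np.sum(np.cos(n * x) * np.cos(m * x)) * h
sin_norm = np.sum(np.sin(n * x) ** 2) * h

print(sin_prod, cos_prod)  # both ~ 0 for m != n
print(sin_norm)            # ~ pi/2 when m = n
```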

The double integration by parts is also a more conceptual proof, because it gets to the heart of the matter: eigenvectors of a symmetric matrix (operator) that correspond to different eigenvalues are orthogonal. The trigonometric form is incidental; the eigenfunction property is essential. Let’s try this one more time, for the mixed boundary value problem {f(a)=0}, {f'(b)=0}. Suppose that {f} and {g} satisfy the boundary conditions, {f''=\lambda f}, and {g''=\mu g}. Since {fg'} and {f'g} vanish at both endpoints, we can pass the primes easily:

\displaystyle    \int_a^b fg= \frac{1}{\mu}\int_a^b fg'' = -\frac{1}{\mu}\int_a^b f'g' = \frac{1}{\mu}\int_a^b f''g = \frac{\lambda}{\mu} \int_a^b fg

If {\lambda\ne \mu}, all integrals must vanish.
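For concreteness, take {a=0} and {b=\pi}: then {f_k(x)=\sin((k+\tfrac12)x)} satisfies both boundary conditions, since {\cos((k+\tfrac12)\pi)=0}. (These half-integer frequencies are my illustration, not stated above.) A quick orthogonality check:

```python
import numpy as np

N = 20000
h = np.pi / N
x = (np.arange(N) + 0.5) * h  # midpoint rule on [0, pi]

def f(k, x):
    # satisfies f(0) = 0 and f'(pi) = 0, with eigenvalue -(k + 1/2)^2
    return np.sin((k + 0.5) * x)

# distinct eigenvalues give orthogonal eigenfunctions
inner = np.sum(f(1, x) * f(4, x)) * h
print(inner)  # ~ 0
```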

Tristram-Levine signatures with Scilab

The signature of a Hermitian matrix {A} can be defined either as the pair (number of positive eigenvalues, number of negative eigenvalues), or simply as the difference

\displaystyle s(A)=\#\{\lambda_i: \lambda_i>0\}-\#\{\lambda_i: \lambda_i<0\}

The function {s(A)} hides some information when the matrix is degenerate, but this will not be of concern here. For a nondegenerate matrix of size {n}, the number {s(A)} lies between {-n} and {n} and has the same parity as {n}.

Given any square matrix {V} with real entries and a complex number {\omega}, we can form {V_\omega = (1-\omega)V+(1-\bar\omega)V^T}, which is a Hermitian matrix. Then {s(\omega):=s(V_\omega)} is an integer-valued function of {\omega}. Restricting attention to the unit circle {|\omega|=1}, we obtain a piecewise constant function with jumps at the points where {V_\omega} is degenerate.
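The construction translates directly into code. Here is a Python/NumPy sketch (the post's own Scilab version appears at the end); the test values use the trefoil's Seifert matrix, which also appears below, and which I checked by hand: {s=0} near {\omega=1} and {s=-2} at {\omega=i}.

```python
import numpy as np

def tl_signature(V, omega):
    """Signature of (1 - omega) V + (1 - conj(omega)) V^T for |omega| = 1."""
    B = (1 - omega) * V + (1 - np.conj(omega)) * V.T
    eigs = np.linalg.eigvalsh(B)  # B is Hermitian, so eigenvalues are real
    tol = 1e-9 * max(1.0, np.max(np.abs(eigs)))
    return int(np.sum(eigs > tol) - np.sum(eigs < -tol))

# Seifert matrix of the trefoil
V = np.array([[-1.0, 0.0], [-1.0, -1.0]])
print(tl_signature(V, np.exp(0.1j)))  # 0, near omega = 1
print(tl_signature(V, 1j))            # -2, at omega = i
```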

When {V} is a Seifert matrix of a knot, the function {s(\omega)} is the Tristram-Levine signature of the knot. Each knot {K} has infinitely many Seifert surfaces {F}, and each Seifert surface has infinitely many Seifert matrices {V}, depending on how we choose the generators of {H_1(F)}. Yet {s(\omega)} depends on {K} alone.

Below I plot {s(\omega)} for a few knots, using Scilab to compute the signatures and taking the knot data from

J. C. Cha and C. Livingston, KnotInfo: Table of Knot Invariants

Technical point: since Scilab enumerates colors using positive integers, the colors below correspond to the number of positive eigenvalues rather than the signature. Namely, black for 0, blue (1), green (2), and cyan (3).

As always, first comes the trefoil:

Trefoil 3_1

One of its Seifert matrices is

\displaystyle \begin{pmatrix} -1 & 0 \\ -1 & -1 \end{pmatrix}

and the Tristram-Levine signature is

trefoil

Next, the knot {8_5}

8_5

with Seifert matrix

\displaystyle  \begin{pmatrix} -1& 0& 0& -1& -1& -1\\ 0& 1& 0& 0& 0& 0\\ -1& 0& -1& -1& -1& -1\\ 0& -1& 0& -1& -1& -1\\ 0& -1& 0& 0& -1& 0\\ 0& -1& 0& 0& -1& -1 \end{pmatrix}

and the signature

TL signature for 8_5
TL signature for 8_5

And finally, the knot {8_{19}}:

8_19

with Seifert matrix

\displaystyle  \begin{pmatrix} -1& 0& 0& 0& 0& 0\\ -1& -1& 0& 0& 0& 0\\ -1& -1& -1& -1& 0& -1\\ -1& -1& 0& -1& 0& 0\\ 0& 0& -1& -1& -1& -1\\ -1& -1& 0& -1& 0& -1\end{pmatrix}

and the signature

TL signature for 8_19
TL signature for 8_19

I experimented with a few more, trying to create more colors. However, guessing the complexity of the signature by looking at the Seifert matrix is one of many skills I do not have. So I conclude with the simple code used to plot the signatures.

function sig(A)
    // Plot the Tristram-Levine signature function on the unit circle.
    // Each sample point omega is drawn as a small disc whose color
    // encodes the number of positive eigenvalues of the Hermitian
    // matrix (1-omega)*A + (1-conj(omega))*A'.
    clf();
    n = size(A, 1);
    Npoints = 200;
    r = 2*%pi/Npoints;           // radius of the plotted discs
    arcs = zeros(6, Npoints);    // one 6-entry column per disc, as xfarcs expects
    colors = zeros(1, Npoints);
    for m = 1:Npoints
        // sample omega = exp(i*theta) at midpoints, so omega = 1 is avoided
        x = cos(2*%pi*(m-1/2)/Npoints);
        y = sin(2*%pi*(m-1/2)/Npoints);
        omega = complex(x, y);
        B = (1-omega)*A + (1-conj(omega))*A';
        // eigenvalues of a Hermitian matrix are real
        signature = sum(sign(real(spec(B))));
        // color index = (number of positive eigenvalues) + 1
        colors(m) = (signature+n)/2 + 1;
        // bounding box, width, height, and angular range of the disc
        arcs(:,m) = [x-r, y-r, 2*r, 2*r, 0, 360*64]';
    end
    xfarcs(arcs, colors);
    replot([-1, -1, 1, 1]);
    a = get("current_axes");
    a.isoview = "on";            // equal scale on both axes
endfunction