Another orthonormal basis: Hermite functions

This is an orthonormal basis for L^2(\mathbb R). Since the measure of \mathbb R is infinite, functions will have to decay at infinity in order to be in L^2. The Hermite functions are
\displaystyle \Phi_n(x)=(2^n n! \sqrt{\pi})^{-1/2} H_n(x)e^{-x^2/2}
where H_n is the nth Hermite polynomial, defined by
\displaystyle H_n(x)=(-1)^n e^{x^2} \left(\frac{d}{dx}\right)^n e^{-x^2}.
The goal is to prove that the functions \Phi_n can be obtained from x^n e^{-x^2/2} via the Gram-Schmidt process. (They actually form a basis, but I won’t prove that.)

One can observe that the term e^{-x^2/2} would be unnecessary if we considered the weighted space L^2(\mathbb R, w) with weight w(x)=e^{-x^2} and the inner product \langle f,g\rangle=\int_{\mathbb R} fg\,w\,dx. In this language, we orthogonalize the sequence of monomials \lbrace x^n\rbrace\subset L^2(\mathbb R, w) and get the ON basis of polynomials \{c_n H_n\} with c_n = (2^n n! \sqrt{\pi})^{-1/2} being a normalizing constant. But since weighted spaces were never introduced in class, I’ll proceed with the original formulation. First, an unnecessary graph of \Phi_0,\dots,\Phi_4; the order is red, green, yellow, blue, magenta.

Hermite Functions

Claim 1. H_n is a polynomial of degree n with the leading term 2^n x^n. Proof by induction, starting with H_0=1. Observe that

\displaystyle H_{n+1}=- e^{x^2} \frac{d}{dx}\left(e^{-x^2} H_n\right) =2x H_n - H_n'

where the first term has degree n+1 and the second n-1. So, their sum has degree exactly n+1, and the leading coefficient is 2^{n+1}. Claim 1 is proved.
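As an aside (not in the original post), Claim 1 is easy to check by machine: the recurrence above translates directly into a few lines of Python operating on coefficient lists.

```python
def hermite_coeffs(n):
    # H_{n+1} = 2x H_n - H_n'; coefficient lists are in increasing degree,
    # so H[k] is the coefficient of x^k
    H = [1]  # H_0 = 1
    for _ in range(n):
        twoxH = [0] + [2 * c for c in H]           # 2x H_n
        dH = [k * H[k] for k in range(1, len(H))]  # H_n'
        H = [a - (dH[i] if i < len(dH) else 0) for i, a in enumerate(twoxH)]
    return H

# Claim 1: degree n, leading coefficient 2^n
for n in range(10):
    c = hermite_coeffs(n)
    assert len(c) == n + 1 and c[-1] == 2 ** n
```

For instance, `hermite_coeffs(3)` returns the coefficients of H_3 = 8x^3 - 12x.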

In particular, Claim 1 tells us that the span of the \Phi_0,\dots,\Phi_n is the same as the span of \lbrace x^k e^{-x^2/2}\colon 0\le k\le n\rbrace.

Claim 2. \Phi_m\perp \Phi_n for m\ne n. We may assume m<n. Must show \int_{\mathbb R} H_m(x) H_n(x) e^{-x^2}\,dx=0. Since H_m is a polynomial of degree m<n, it suffices to prove

(*) \displaystyle \int_{\mathbb R} x^k H_n(x) e^{-x^2}\,dx=0 for integers 0\le k<n.

Rewrite (*) as \int_{\mathbb R} x^k \left(\frac{d}{dx}\right)^n e^{-x^2} \,dx=0 and integrate by parts repeatedly, throwing the derivatives onto x^k until the poor guy can't handle it anymore and dies. No boundary terms appear because e^{-x^2} decays superexponentially at infinity, easily beating any polynomial factors. Claim 2 is proved.

Combining Claim 1 and Claim 2, we see that \Phi_n belongs to the (n+1)-dimensional space \mathrm{span}\,\lbrace x^k e^{-x^2/2}\colon 0\le k\le n\rbrace, and is orthogonal to the n-dimensional subspace \mathrm{span}\,\lbrace x^k e^{-x^2/2}\colon 0\le k\le n-1\rbrace. Since the “Gram-Schmidtization” of x^n e^{-x^2/2} has the same properties, we conclude that \Phi_n agrees with this “Gram-Schmidtization” up to a scalar factor.

It remains to prove that the scalar factor is unimodular (\pm 1 since we are over reals).

Claim 3. \langle \Phi_n, \Phi_n\rangle=1 for all n. To this end we must show \int_{\mathbb R} H_n(x)H_n(x)e^{-x^2}\,dx =2^n n! \sqrt{\pi}. Expand the first factor H_n into monomials, use (*) to kill the degrees less than n, and recall Claim 1 to obtain
\int_{\mathbb R} H_n(x)H_n(x)e^{-x^2}\,dx = 2^n \int_{\mathbb R} x^n H_n(x)e^{-x^2}\,dx = (-1)^n 2^n\int_{\mathbb R} x^n \left(\frac{d}{dx}\right)^n e^{-x^2} \,dx.
As in the proof of Claim 2, we integrate by parts, throwing the derivatives onto x^n; each of the n integrations contributes a factor of -1, which cancels the (-1)^n in front. Since \left(\frac{d}{dx}\right)^n x^n = n!, the result is
2^n \int_{\mathbb R} n! e^{-x^2} \,dx = 2^n n! \sqrt{\pi}, as claimed.
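For the skeptical reader, here is a small numerical sanity check (my addition, not from the post; the helper names are made up): a trapezoidal sum approximates \int H_n^2 e^{-x^2}\,dx and matches 2^n n!\sqrt{\pi}.

```python
import math

def hermite_coeffs(n):
    # H_{n+1} = 2x H_n - H_n', coefficients in increasing degree
    H = [1]
    for _ in range(n):
        twoxH = [0] + [2 * c for c in H]
        dH = [k * H[k] for k in range(1, len(H))]
        H = [a - (dH[i] if i < len(dH) else 0) for i, a in enumerate(twoxH)]
    return H

def weighted_norm_sq(n, a=8.0, steps=20_000):
    # trapezoidal approximation of \int H_n(x)^2 e^{-x^2} dx over [-a, a];
    # the tails beyond |x| = 8 are negligible thanks to e^{-x^2}
    c = hermite_coeffs(n)
    h = 2 * a / steps
    total = 0.0
    for i in range(steps + 1):
        x = -a + i * h
        Hx = sum(ck * x ** k for k, ck in enumerate(c))
        w = 0.5 if i in (0, steps) else 1.0
        total += w * Hx * Hx * math.exp(-x * x)
    return total * h

for n in range(5):
    exact = 2 ** n * math.factorial(n) * math.sqrt(math.pi)
    assert abs(weighted_norm_sq(n) - exact) / exact < 1e-7
```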

P.S. According to Wikipedia, these are the “physicists’ Hermite polynomials”. The “probabilists’ Hermite polynomials” are normalized to have the leading coefficient 1.

Three knot invariants

This has been a long Monday, toward the end of which I confused knot width with its bridge number. This post is meant to remind me which is which. Here is a trefoil knot:

Trefoil knot

The crossing number is the smallest number of self-intersections that the knot can have when thrown on a plane. For the trefoil cr=3.

Here is the same knot on which points of locally maximal height are marked with red, and points of locally minimal height are in blue. There are two extrema of each kind.

Bridge number

The bridge number of a knot is the smallest number of maxima in any of its diagrams. The trefoil has b=2. A knot is in the bridge position if its diagram achieves b, and in addition all the maxima are above all the minima. This allows one to neatly cut the knot by its bridge sphere (in orange), leaving b untangled arcs hanging from either side.

The third invariant also has to do with the extrema of the height function. Now we pick a regular value in each interval between consecutive critical values, and count the multiplicity of that value (the number of points of the knot at that height). The sum of all these multiplicities is the width of the knot. For the trefoil w=8.

Width

The knot is in a thin position when it attains w, as on the diagram above. Despite their apparent similarity, two different positions of the knot may be required to attain b and w. Also, the invariants behave in different ways under connected sums. Here is a connected sum of two knots, with trefoil on the left:

Sum of knots from http://en.wikipedia.org/wiki/File:Sum_of_knots3.svg

It’s obvious that the crossing number is subadditive: \mathrm{cr}\,(K\# K')\le \mathrm{cr}\,(K)+\mathrm{cr}\,(K'). It remains an open problem whether it’s in fact additive, i.e., whether the equality always holds.

The bridge number is the best behaved of the three: it’s known that b(K\# K')=b(K)+b(K')-1 (one-sided inequality is easy: connected sum can kill the absolute maximum on the lower-placed knot). Horst Schubert proved this equality in 1954. Much more recently (in 2004) Jennifer Schultens used modern machinery to give a 6-page proof of Schubert’s theorem.

The width turns out to misbehave. By putting one of the summands way below the other, one sees at once that w(K\# K')\le w(K)+w(K')-2. In many examples this is in fact an equality. However, very recently (2011) Blair and Tomova proved that strict inequality may hold; moreover, the Scharlemann-Schultens lower bound w(K\# K')\ge \min(w(K),w(K')) is attained for infinitely many pairs of knots.

The Walsh basis

If you ask a random passerby to give you an orthonormal basis for L^2[0,1], they will probably respond with e_n(t)=\exp(2\pi i nt), n\in \mathbb Z. There is a lot to like about this exponential basis: most importantly, it diagonalizes the \frac{d}{dt} operator: \frac{d}{dt}e_n=2\pi i n e_n. This property makes the exponential basis indispensable in the study of differential equations. However, I prefer to describe the Walsh basis, which has several advantages:

  • the basis functions take just two values \pm 1, which simplifies the computation of coefficients
  • the proof of the basis property is easier than for the exponential basis
  • there is a strong connection to probability: the Walsh expansion can be seen as conditional expectation, and the partial sums form a Doob martingale
  • partial sums over the first 2^k terms converge a.e. for any L^1 function, which is not the case for the exponential basis.

First, introduce the Rademacher functions r_n=\mathrm{sign}\, \sin (2^{n+1} \pi t), n=0,1,2,\dots (The enumeration is slightly different from what I used in class.) These are r_0,r_1,r_2,r_3:

Rademacher functions

Alternatively, one can define r_n as the function which takes the values +1,-1 alternately on the dyadic intervals \displaystyle \bigg[\frac{j}{2^{n+1}},\frac{j+1}{2^{n+1}}\bigg).

To define the nth Walsh function W_n, express the index n as a sum of distinct powers of 2, i.e., n=2^{p_1}+2^{p_2}+\dots and let W_n=r_{p_1}r_{p_2}\dots . For example, W_{13}=r_3r_2r_0 because 13=2^3+2^2+2^0. Since the binary representation is unique, the definition makes sense. We also have W_0=1 because the product of an empty set of numbers is 1.

In class I checked that the set \lbrace W_n\colon n=0,1,2,\dots\rbrace is orthonormal. Also, for any integer k\ge 0 the linear span of \lbrace W_n\colon 0\le n< 2^k \rbrace is the space V_k of all functions that are constant on the dyadic intervals of length 2^{-k}. This follows by observing that \lbrace W_n\colon 0\le n< 2^k \rbrace\subset V_k and that the dimension of V_k is 2^k.
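Since each W_n with n<2^k is constant on dyadic intervals of length 2^{-k}, the orthonormality check is a finite computation. Here is a sketch in Python (my addition; the function names are mine):

```python
def rademacher(p, t):
    # r_p = +1, -1 alternately on dyadic intervals of length 2^{-(p+1)}
    return 1 if int(t * 2 ** (p + 1)) % 2 == 0 else -1

def walsh(n, t):
    # product of r_p over the binary digits p of n; the empty product is W_0 = 1
    w, p = 1, 0
    while n:
        if n & 1:
            w *= rademacher(p, t)
        n >>= 1
        p += 1
    return w

# W_0, ..., W_15 are constant on dyadic intervals of length 2^-4, so their
# inner products are exactly computed as sums over the interval midpoints
K = 16
mid = [(j + 0.5) / K for j in range(K)]
for m in range(K):
    for n in range(K):
        ip = sum(walsh(m, t) * walsh(n, t) for t in mid) / K
        assert ip == (1.0 if m == n else 0.0)
```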

To prove that the Walsh basis is indeed a basis, suppose that h\in L^2[0,1] is orthogonal to all W_n. Since h\perp V_k for all k, the integral of h over any dyadic interval is zero (note that the characteristic function of any dyadic interval belongs to some V_k). But any subinterval I\subset [0,1] can be written as a disjoint countable union of dyadic intervals: just take all dyadic intervals that are contained in I. (You don't necessarily get the right type of endpoints, but as long as we work with integrals, the difference between open and closed intervals is immaterial.) Thus, the integral of h over any subinterval of [0,1] is zero. By the Lebesgue differentiation theorem, for a.e. t we have \displaystyle h(t)=\lim_{\delta\to 0}\frac{1}{2\delta}\int_{t-\delta}^{t+\delta} h =0. Thus h=0 as required.

The proof is even simpler if we use the non-centered form of the Lebesgue differentiation theorem: for a.e. t the average \frac{1}{b-a}\int_a^b h approaches h(t) as a,b\to t in such a way that a\le t\le b. Armed with this theorem, we can consider the sequence of dyadic intervals containing t, and immediately obtain h(t)=0 a.e.

Having proved that \lbrace W_n\rbrace is a basis, let’s expand something in it. For example, this moderately ugly function f:

Ugly function

I used Maple to compute the coefficients c_n=\langle f, W_n\rangle and plotted the partial sums \sum_{n=0}^N c_n W_n for N=1,3,7,15:

Partial sums

Such partial sums (those that use 2^k basis functions) are particularly nice: they are obtained simply by averaging f over each dyadic interval of length 2^{-k}. In probability theory this is known as conditional expectation. The conditional expectation is a contraction in any L^p space, including L^1 which gives so much trouble to the exponential basis. The highly oscillatory parts of f are killed by the dyadic averaging; in contrast, when integrated against the exponentials, they may cleverly accumulate and destroy the convergence of partial sums.
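The averaging claim is easy to test (again my addition, not from the post): for a function sampled on a dyadic grid, the partial sum with 2^k Walsh terms agrees exactly with averaging over dyadic intervals of length 2^{-k}.

```python
def rademacher(p, t):
    return 1 if int(t * 2 ** (p + 1)) % 2 == 0 else -1

def walsh(n, t):
    w, p = 1, 0
    while n:
        if n & 1:
            w *= rademacher(p, t)
        n >>= 1
        p += 1
    return w

# sample a function at the midpoints of dyadic intervals of length 2^-6
M = 64
mid = [(j + 0.5) / M for j in range(M)]
f = [t ** 2 - 0.3 * t for t in mid]

k = 3  # partial sum with 2^k = 8 Walsh terms
coeffs = [sum(f[j] * walsh(n, mid[j]) for j in range(M)) / M
          for n in range(2 ** k)]
partial = [sum(c * walsh(n, t) for n, c in enumerate(coeffs)) for t in mid]

# averages of f over the dyadic intervals of length 2^-k
block = M // 2 ** k
averages = [sum(f[j - j % block : j - j % block + block]) / block
            for j in range(M)]
assert max(abs(p - a) for p, a in zip(partial, averages)) < 1e-12
```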

The magic of cyclic sieving and q-analogs

In my previous post the 14 possible triangulations of the hexagon were arranged by Whitehead moves. But if asked, most people would probably group them in this way:

Triangulations of hexagon

Well, some may decide to put the S- and Z-shapes together since they are mirror images of each other. But I won’t. It’s easy to rotate a piece of paper with a triangulation on it; it’s not nearly as easy to fetch a mirror and study triangulations through it. So let it be recorded that the cyclic group \mathbb Z_6 acts on triangulations by rotating the hexagon. The orbits have lengths 6,3,3,2.

In the previous post I also mentioned that the number of triangulations of an (n+2)-gon is the Catalan number C_n=\frac{1}{n+1}\binom{2n}{n}. With n=4 we get

\displaystyle C_4=\frac{1}{5}\,\frac{8\cdot 7\cdot 6\cdot 5}{4\cdot 3\cdot 2}=\frac{8\cdot 7\cdot 6}{4\cdot 3\cdot 2} \qquad (*)

Amazingly, the formula (*) does not just count triangulations: it also knows how they behave under rotations of the hexagon. But it will only tell us if we don’t rush to evaluate it. Instead, replace each number on the right of (*) by its q-analog

\displaystyle [n]_q=\frac{1-q^n}{1-q}  =1+q+q^2+\dots+q^{n-1}

The result is

\displaystyle C_4(q) = \frac{(1-q^8)(1-q^7)(1-q^6)}{(1-q^4)(1-q^3)(1-q^2)} =1+q^2+q^3+2q^4+q^5+2q^6+q^7+2q^8+q^9+q^{10}+q^{12}\qquad (**)

The fact that we get a polynomial can be explained: since the Catalan number is an integer, every prime divisor of the denominator in (*) also appears in the numerator (to equal or higher power), and this can be translated into the order of vanishing of polynomials in (**) at the roots of unity. It’s not as easy to explain why the coefficients end up nonnegative.

Now, both the cyclic group \mathbb Z_6 and the roots of unity were mentioned. Let’s bring them together by introducing \zeta=\exp(2\pi i/6) which generates \mathbb Z_6 by multiplication. The cyclic sieving phenomenon (in this special case) is the following: C_4(\zeta^d) is the number of triangulations fixed by the action of \zeta^d. Indeed, evaluating C_4(q) at q=\zeta,\zeta^2,\dots,\zeta^6 we get 0,2,6,2,0,14. So, no triangulations are invariant under rotation by \zeta, two are invariant under rotation by \zeta^2, etc. It’s easy to read the entire orbit structure from this:
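One can let a computer do the sieving. Here is a quick check I added (not in the original post): evaluate the coefficient list of (**) at the powers of \zeta.

```python
import cmath

# coefficients of C_4(q) from (**), listed by increasing degree
c4 = [1, 0, 1, 1, 2, 1, 2, 1, 2, 1, 1, 0, 1]
assert sum(c4) == 14  # C_4(1) is the plain Catalan number

zeta = cmath.exp(2j * cmath.pi / 6)
# number of triangulations fixed by rotation through d sixths of a full turn
fixed = [round(sum(c * zeta ** (k * d) for k, c in enumerate(c4)).real)
         for d in range(1, 7)]
assert fixed == [0, 2, 6, 2, 0, 14]
```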

  • there are no fixed points
  • 2 elements of order 2 form a single orbit of size 2
  • 6 elements of order 3 form two orbits of size 3
  • there are no elements of order 4, since their number is C_4(\zeta^4)-C_4(\zeta^2)=2-2=0
  • there are C_4(\zeta^6)-C_4(\zeta^2)-C_4(\zeta^3)=6 elements of order 6, which form a single orbit of size 6

And it’s not just triangulations. Consider noncrossing matchings of 2n points — these can be observed experimentally by sitting 2n people at a round table and asking them all to shake hands with someone with no arms crossing.

Handshake patterns, 8 people

The number of such matchings is also the Catalan number C_n (hence 14 matchings for 8 people). But the orbit structure is completely different! How can the same polynomial C_4(q) work here? It can because we now act by \mathbb Z_8 and therefore take \zeta=\exp(2\pi i/8). Evaluation of C_4(q) at the powers of \zeta yields 0,2,0,6,0,2,0,14. As above, we see the orbit structure in these numbers:

  • 2 elements of order 2 form a single orbit
  • 4 elements of order 4 form a single orbit
  • 8 elements of order 8 form a single orbit

And it’s not just the Catalan numbers. Suppose that among those 8 guests at the round table are 2 men and 6 women. The number of seating arrangements is \displaystyle \binom{8}{2}=\frac{8\cdot 7}{2}. Don’t evaluate to 28 just yet. Replace each integer with its q-analog and you’ll get the polynomial \displaystyle 1+q+2q^2+2q^3+3q^4+3q^5+4q^6+3q^7+3q^8+2q^9+2q^{10}+q^{11}+q^{12}, which will tell you that under rotation, 4 arrangements form an orbit of size 4, while the other 24 form 3 orbits of size 8.
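The same check works here (my addition again): evaluating the polynomial above at the powers of \zeta=\exp(2\pi i/8) recovers the orbit structure.

```python
import cmath

# coefficients of the q-analog of binom(8,2), listed by increasing degree
qbin = [1, 1, 2, 2, 3, 3, 4, 3, 3, 2, 2, 1, 1]
assert sum(qbin) == 28  # the plain binomial coefficient

zeta = cmath.exp(2j * cmath.pi / 8)
# number of seating arrangements fixed by rotation through d eighths of a turn
fixed = [round(sum(c * zeta ** (k * d) for k, c in enumerate(qbin)).real)
         for d in range(1, 9)]
# 4 arrangements are fixed by the half-turn (one orbit of size 4);
# the remaining 24 fall into 3 orbits of size 8, fixed only by the identity
assert fixed == [0, 0, 0, 4, 0, 0, 0, 28]
```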

And there is much more. The cyclic sieving phenomenon was established recently (in 2004) by three mathematicians from Minnesota (Reiner, Stanton, and White), and there are still paths waiting to be explored.

Triangulation of polygons, Whitehead moves, and associahedron

To triangulate a polygon means to cut it into triangles by diagonals. Let us assume that the polygon is convex, which means it may as well be the regular n-gon. There is nothing to do if n=3 (it’s already a triangle), so we begin with n=4. There are two ways to cut a square with a diagonal… well that wasn’t much to say.

Next, n=5. There are five ways to triangulate a pentagon: just choose a vertex and draw two diagonals starting from it. Actually, the number of triangulations of any convex n-gon is the kth Catalan number C_k=\frac{1}{k+1}\binom{2k}{k} with k=n-2. With k=5-2=3 we get 5 as expected.

But we should do something beyond counting. A basic operation that one can perform on a triangulation is to remove one diagonal and replace it with another one (the choice of replacement is unique). This is called a Whitehead move. We can draw a graph in which vertices represent triangulations and edges correspond to Whitehead moves. Doing this for a pentagon gives… a pentagon. Funny.

Note that any triangulation of an n-gon involves exactly k-1=n-3 diagonals. Hence, there are k-1 possible Whitehead moves for any triangulation. This means that each vertex in our graph will have k-1 neighbors; in mathematical terms the degree of each vertex is k-1. As a consequence, the graph has (k-1) C_k /2 edges.
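Both counts can be verified by brute force. The following Python sketch (mine, not from the post; the helper names are made up) generates all triangulations of the hexagon and builds the flip graph:

```python
from itertools import combinations

def triangulations(n):
    # all triangulations of the convex n-gon with vertices 0..n-1,
    # each encoded as a frozenset of its diagonals
    def tri(i, j):
        # triangulations of the sub-polygon with vertices i, i+1, ..., j
        if j - i < 2:
            return [frozenset()]
        result = []
        for k in range(i + 1, j):  # apex of the triangle on the edge (i, j)
            new = frozenset(d for d in ((i, k), (k, j)) if d[1] - d[0] > 1)
            for left in tri(i, k):
                for right in tri(k, j):
                    result.append(left | right | new)
        return result
    return tri(0, n - 1)

verts = triangulations(6)
assert len(verts) == 14  # the Catalan number C_4

# a Whitehead move changes exactly one diagonal, so the flip graph has an
# edge between triangulations whose symmetric difference has two elements
edges = [(a, b) for a, b in combinations(verts, 2) if len(a ^ b) == 2]
assert len(edges) == 3 * 14 // 2            # (k-1) C_k / 2 = 21 edges
assert all(sum(v in e for e in edges) == 3 for v in verts)  # degree k-1 = 3
```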

The case n=6 yields this pretty graph.

Triangulations of hexagon and Whitehead moves

Note that there are no triangles in this graph. There can’t be any, because two subsequent Whitehead moves result in removal of two diagonals, and this can’t be undone in one move. So, the shortest cycle has length 4. Looking closely at the faces (some with 4 vertices, some with 5) you notice that all their triangulations have one diagonal in common. The faces can actually be represented by polygons, making the graph into a polytope known as an associahedron:

Associahedron from http://en.wikipedia.org/wiki/File:Polytope_K3.svg

Is there a natural way to color the vertices, for any n? What is the chromatic number anyway? But it’s getting late.

The Curve Complex

This note is filed under “things I know nothing about”. If you want to learn from someone who knows something, I recommend “Notes on the complex of curves” by Saul Schleimer.

Genus 2 closed surface (double torus)
Double torus from http://en.wikipedia.org/wiki/File:Double_torus_illustration.png

The curve complex is an infinite simplicial complex associated to a (topological) surface. It is already interesting enough to look at the 1-skeleton of this complex, which is simply an infinite graph. I will try to say something about this graph using as an example the closed surface of genus 2, known as a double torus. (Some call it a pretzel, but most pretzels I’ve seen have genus 3.)

Imagine a simple closed curve \gamma on this surface. If \gamma can be continuously shrunk to a point, it is of no interest to us, because it does not see the topology of the surface. Let us consider only essential curves: those that cannot be shrunk to a point. We are also not interested in the exact position of the curve, and so consider two curves \gamma and \gamma' equivalent (isotopic) if it’s possible to slide \gamma into \gamma' along the surface. For example, A and B are equivalent, but B and C are not, and neither are C and D.
Four closed curves
These equivalence classes will be the vertices of our graph. There are infinitely many vertices, because one can construct complicated curves by, say, spiraling about one handle a bunch of times, then switching to the other, and then coming back. To define a graph we must know when to draw an edge between two vertices. The answer is: when they can be represented by curves that do not intersect. So, the vertices corresponding to A, C, and D would all be connected to one another. Let us denote this graph by C^1(S), the 1-skeleton of the curve complex C(S).

The first non-obvious fact about C^1(S) is that it is connected. Think about what this means: no matter how crazily intertwined two curves \gamma and \gamma' are, one can find a finite sequence \gamma_0=\gamma, \gamma_1,\dots, \gamma_n=\gamma' such that \gamma_k and \gamma_{k+1} can slide out of each other’s way.

The second non-obvious fact is that C^1(S) is an unbounded metric space. The metric is defined as on any connected graph: the distance between two vertices v,w is the minimal number of edges one must travel to get from v to w. So, neighboring vertices are at distance 1. If the distance between two curves \gamma_1 and \gamma_2 is 3 or more, then any other curve must intersect either \gamma_1 or \gamma_2. This is expressed by saying that \gamma_1 and \gamma_2 fill the surface; there is no room for anyone else. Therefore, if we cut the surface along \gamma_1\cup\gamma_2, it falls apart into simply connected pieces, i.e., disks. Note that if the curves are smooth, so are the disks, because we do not create corners in this process.

To exhibit a pair of curves at distance 4 (and to prove that the example works) is a serious undertaking. So it should not come as a surprise that the large-scale geometry of the curve complex is a rich and challenging subject of study. It is a Gromov hyperbolic space, so there is a well-defined boundary at infinity which can be equipped with one of the (mutually equivalent) visual metrics. The topology of this boundary remains to be understood.

One last remark. Some vertices in the complex correspond to curves that bound a disk inside of the surface. Let D be the set of such vertices. This set is not invariant under homeomorphisms of the surface; it depends on how the surface sits inside \mathbb R^3; or, in other words, on how the surface is “filled in” to become a 3-dimensional handlebody B. This makes it possible to quantify the complexity of a homeomorphism f\colon \partial B\to \partial B: if the distance between D and f(D) is large, then the homeomorphism is rather complicated.

Playing with iterated function systems

After Colin Carroll posted several fractal experiments with Matlab, I decided to do something of the sort. One difference is that I use Scilab, an open-source alternative to Matlab.

The first experiment: drawing the Sierpinski carpet using the Chaos Game. Namely, given a finite family of strict contractions f_1,\dots,f_r\colon \mathbb R^2\to \mathbb R^2 and an initial point p_0, plot the sequence p_{n+1}=f_{j_n}(p_n), where j_n \in \{1,\dots,r\} is chosen randomly at each step. To simplify matters, let f_j be the similarity transformation with scaling factor s\in (0,1) and the fixed point v_j.

A canonical example is: v_1,v_2,v_3 are the vertices of an equilateral triangle, s=1/2. This produces the fractal known as the Sierpinski gasket. For a different example, set s=1/3 and let v_1,\dots,v_8 be the vertices of a square together with the midpoints of its sides. The resulting fractal is known as the Sierpinski carpet.

Sierpinski Carpet

This image was obtained by calling the Scilab function given below as Scarpet(1/3, 100000). The function is essentially a translation of Colin’s code to Scilab. Caution: if you copy and paste this code, watch out for line breaks and encoding of quote marks.

function Scarpet(scale,steps)
    // Chaos game: each step applies the contraction with factor `scale`
    // fixed at a randomly chosen point (x(k), y(k)).
    b=1-scale;
    x = [1,0,-1,-1,-1,0,1,1];   // vertices and edge midpoints of the square
    y = [1,1,1,0,-1,-1,-1,0];
    sides=length(x);
    point = zeros(steps,2);
    vert = grand(1,steps,'uin',1,sides);  // random indices, uniform on 1..8
    for j = 2:steps
        point(j,:) = scale*point(j-1,:) + b*[x(vert(j)),y(vert(j))];
    end
    plot(point(:,1),point(:,2),'linestyle','none','markstyle','.','marksize',1);
endfunction

Regardless of the choice of initial point p_0, the set of cluster points of the sequence (p_n) is exactly the invariant set K, namely the unique nonempty compact set such that K=\bigcup_{j=1}^r f_j(K). This is proved, for example, in the book Integral, Probability, and Fractal Measures by Gerald Edgar.

The scaling factor s=1/3 for the carpet is chosen so that the images of the original square under the eight similarities touch, but do not overlap. With a smaller factor the fractal looks like dust (a totally disconnected set), while with s\ge 1/2 it becomes a solid square. The intermediate range 1/3<s<1/2 is tricky: I think that K has measure zero, but can’t even prove that it’s nowhere dense.

It’s also possible to draw K in the opposite way, by removing points rather than adding them. To this end, let P be the convex hull of the set \{v_1,\dots,v_r\}; that is, a solid convex polygon. It’s not hard to see that K\subset P. Therefore, \bigcup_{j=1}^r f_j(K)\subset \bigcup_{j=1}^r f_j(P), but since the set on the left is K itself, we get K\subset \bigcup_{j=1}^r f_j(P). By induction, K=\bigcap_{n=1}^{\infty} P_n where P_0=P and P_{n+1}=\bigcup_{j=1}^r f_j(P_n).

Fat Sierpinski gasket: s=3/5 instead of 1/2

The above example is ifs(3,3/5,11), calling the Scilab code below.

function ifs(sides,scale,steps)
    // Deterministic IFS: start with a polygon P and repeatedly replace it
    // by the union of its images under the contractions f_1,...,f_sides.
    b=1-scale; t=2*%pi*(1:sides)/sides; x=cos(t); y=sin(t);  // fixed points v_j
    xpols=x'; ypols=y';   // each column holds the vertices of one polygon
    for j=2:steps
        xpols=scale*xpols; ypols=scale*ypols;
        xpolsnew=[]; ypolsnew=[];
        for k=1:sides
            // shift the shrunk polygons toward each fixed point v_k
            xpolsnew=[xpolsnew xpols+b*x(k)*ones(xpols)];
            ypolsnew=[ypolsnew ypols+b*y(k)*ones(ypols)];
        end
        xpols=xpolsnew; ypols=ypolsnew;
    end
    a=gca(); a.data_bounds=[-1,-1;1,1];   // fix the plot range
    [m,n]=size(xpols);
    xfpolys(xpols,ypols,ones(n,1))        // fill all the polygons
endfunction

The final example is an “upper bound” for the fat pentagonal fractal that Colin created with the Chaos Game: the points v_1,\dots,v_5 are the vertices of a regular pentagon, and s=1/2. The function was called as ifs(5,1/2,8). Again, I think that the invariant set has measure zero, but can’t even prove that the interior is empty. (Or find a reference where this is already done.)

Pentagonal set with s=1/2