## Triangulation of polygons, Whitehead moves, and associahedron

To triangulate a polygon means to cut it into triangles by diagonals. Let us assume that the polygon is convex, which means it may as well be the regular n-gon. There is nothing to do if n=3 (it’s already a triangle), so we begin with n=4. There are two ways to cut a square with a diagonal… well that wasn’t much to say.

Next, n=5. There are five ways to triangulate a pentagon: just choose a vertex and draw two diagonals starting from it. Actually, the number of triangulations of any convex n-gon is the kth Catalan number $C_k=\frac{1}{k+1}\binom{2k}{k}$ with $k=n-2$. With $k=5-2=3$ we get 5 as expected.

But we should do something beyond counting. A basic operation that one can perform on a triangulation is to remove one diagonal and replace it with another one (the choice of replacement is unique). This is called a Whitehead move. We can draw a graph in which vertices represent triangulations and edges correspond to Whitehead moves. Doing this for a pentagon gives… a pentagon. Funny.

Note that any triangulation of an n-gon involves exactly $k-1=n-3$ diagonals. Hence, there are $k-1$ possible Whitehead moves for any triangulation. This means that each vertex in our graph will have $k-1$ neighbors; in mathematical terms the degree of each vertex is $k-1$. As a consequence, the graph has $(k-1) C_k /2$ edges.
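These counts are easy to check numerically. Here is a small Python sketch (the function names are mine) computing the Catalan numbers and the resulting vertex and edge counts:

```python
from math import comb

def catalan(k):
    # C_k = binom(2k, k) / (k + 1): counts triangulations of a convex (k+2)-gon
    return comb(2 * k, k) // (k + 1)

def flip_graph_size(n):
    # vertices: triangulations of the n-gon; degree: one Whitehead move
    # per diagonal; edge count by the handshake lemma
    k = n - 2
    vertices, degree = catalan(k), k - 1
    return vertices, degree * vertices // 2

# flip_graph_size(5) -> (5, 5): the pentagon's graph is itself a pentagon
```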

The case $n=6$ yields this pretty graph.

Note that there are no triangles in this graph. There can’t be any, because two successive Whitehead moves remove two distinct diagonals, and this cannot be undone in a single move. So, the shortest cycle has length 4. Looking closely at the faces (some with 4 vertices, some with 5), you notice that all triangulations within a face have one diagonal in common. The faces can actually be represented by polygons, making the graph into a polytope known as an associahedron:
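These structural claims can be verified by brute force for small $n$. In the Python sketch below (my own encoding, not from the original post), a triangulation is a set of pairwise noncrossing diagonals, and two triangulations are adjacent exactly when they share all but one diagonal:

```python
from itertools import combinations

def triangulations(n):
    # a triangulation of a convex n-gon (vertices 0..n-1) is a set of n-3
    # pairwise noncrossing diagonals; boundary edges are excluded
    diags = [(a, b) for a, b in combinations(range(n), 2) if 2 <= b - a <= n - 2]
    def cross(d1, d2):
        (a, b), (c, d) = d1, d2
        return a < c < b < d or c < a < d < b
    return [frozenset(T) for T in combinations(diags, n - 3)
            if not any(cross(d1, d2) for d1, d2 in combinations(T, 2))]

def flip_graph(n):
    # connect two triangulations iff they share all but one diagonal,
    # i.e. one is obtained from the other by a single Whitehead move
    tris = triangulations(n)
    edges = [(i, j) for i, j in combinations(range(len(tris)), 2)
             if len(tris[i] & tris[j]) == n - 4]
    return tris, edges

tris6, edges6 = flip_graph(6)
```

For $n=6$ this reproduces 14 triangulations and 21 moves, with every vertex of degree 3 and no 3-cycles.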

Is there a natural way to color the vertices, for any $n$? What is the chromatic number anyway? But it’s getting late.

## The Curve Complex

This note is filed under “things I know nothing about”. If you want to learn from someone who knows something, I recommend “Notes on the complex of curves” by Saul Schleimer.

The curve complex is an infinite simplicial complex associated to a (topological) surface. It is already interesting enough to look at the 1-skeleton of this complex, which is simply an infinite graph. I will try to say something about this graph using as an example the closed surface of genus 2, known as a double torus. (Some call it a pretzel, but most pretzels I’ve seen have genus 3.)

Imagine a simple closed curve $\gamma$ on this surface. If $\gamma$ can be continuously shrunk to a point, it is of no interest to us, because it does not see the topology of the surface. Let us consider only essential curves: those that cannot be shrunk to a point. We are also not interested in the exact position of the curve, and so consider two curves $\gamma$ and $\gamma'$ equivalent (isotopic) if it’s possible to slide $\gamma$ into $\gamma'$ along the surface. For example, A and B are equivalent, but B and C are not, and neither are C and D.

These equivalence classes will be the vertices of our graph. There are infinitely many vertices, because one can construct complicated curves by, say, spiraling about one handle a bunch of times, then switching to the other, and then coming back. To define a graph we must know when to draw an edge between two vertices. The answer is: when they can be represented by curves that do not intersect. So, the vertices corresponding to A, C, and D would all be connected to one another. Let us denote this graph by $C^1(S)$, the 1-skeleton of the curve complex $C(S)$.

The first non-obvious fact about $C^1(S)$ is that it is connected. Think about what this means: no matter how crazily intertwined two curves $\gamma$ and $\gamma'$ are, one can find a finite sequence $\gamma_0=\gamma, \gamma_1,\dots, \gamma_n=\gamma'$ such that $\gamma_k$ and $\gamma_{k+1}$ can slide out of each other’s way.

The second non-obvious fact is that $C^1(S)$ is an unbounded metric space. The metric is defined as on any connected graph: the distance between two vertices $v,w$ is the minimal number of edges one must travel to get from $v$ to $w$. So, neighboring vertices are at distance 1. If the distance between two curves $\gamma_1$ and $\gamma_2$ is 3 or more, then any other essential curve must intersect either $\gamma_1$ or $\gamma_2$. This is expressed by saying that $\gamma_1$ and $\gamma_2$ fill the surface; there is no room for anyone else. Therefore, if we cut the surface along $\gamma_1\cup\gamma_2$, it falls apart into simply connected pieces, i.e., disks. Note that if the curves are smooth, so are the disks, because we do not create corners in this process.
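The graph metric itself is nothing exotic: on any finite piece of a graph it is computed by breadth-first search. A toy Python illustration (the 6-cycle here is just a stand-in, not an actual piece of $C^1(S)$):

```python
from collections import deque

def graph_distance(adj, v, w):
    # breadth-first search: the minimal number of edges from v to w
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if u == w:
            return dist[u]
        for x in adj[u]:
            if x not in dist:
                dist[x] = dist[u] + 1
                queue.append(x)
    return float('inf')  # v and w lie in different components

# a 6-cycle as a toy graph; each vertex is adjacent to its two neighbors
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```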

To exhibit a pair of curves at distance 4 (and to prove that the example works) is a serious undertaking. So it should not come as a surprise that the large-scale geometry of the curve complex is a rich and challenging subject of study. It is a Gromov hyperbolic space, so there is a well-defined boundary at infinity, which can be equipped with one of the (mutually equivalent) visual metrics. The topology of this boundary remains to be understood.

One last remark. Some vertices in the complex correspond to curves that bound a disk inside of the surface. Let $D$ be the union of such vertices. This set is not invariant under homeomorphisms of the surface; it depends on how it sits inside of $\mathbb R^3$; or, in other words, on how the surface is “filled in” to become a 3-dimensional handlebody $B$. This makes it possible to quantify the complexity of a homeomorphism $f\colon \partial B\to \partial B$: if the distance between $D$ and $f(D)$ is large, then the homeomorphism is rather complicated.

## Playing with iterated function systems

After Colin Carroll posted several fractal experiments with Matlab, I decided to do something of the sort. One difference is that I use Scilab, an open-source alternative to Matlab.

The first experiment: drawing the Sierpinski carpet using the Chaos Game. Namely, given a finite family of strict contractions $f_1,\dots,f_r\colon \mathbb R^2\to \mathbb R^2$ and an initial point $p_0$, plot the sequence $p_{n+1}=f_{j_n}(p_n)$, where $j_n \in \{1,\dots,r\}$ is chosen randomly at each step. To simplify matters, let $f_j$ be the similarity transformation with scaling factor $s\in (0,1)$ and the fixed point $v_j$.

A canonical example: $v_1,v_2,v_3$ are the vertices of an equilateral triangle and $s=1/2$. This produces the fractal known as the Sierpinski gasket. For a different example, set $s=1/3$ and let $v_1,\dots,v_8$ be the vertices of a square together with the midpoints of its sides. The resulting fractal is known as the Sierpinski carpet.

This image was obtained by calling the Scilab function given below as Scarpet(1/3, 100000). The function is essentially a translation of Colin’s code to Scilab. Caution: if you copy and paste this code, watch out for line breaks and the encoding of quote marks.

```
function Scarpet(scale, steps)
    // chaos game: 8 attracting vertices (corners of a square plus side midpoints)
    b = 1 - scale;
    x = [1, 0, -1, -1, -1, 0, 1, 1];
    y = [1, 1, 1, 0, -1, -1, -1, 0];
    sides = length(x);
    point = zeros(steps, 2);
    // choose a random vertex index for every step in advance
    vert = grand(1, steps, 'uin', 1, sides);
    for j = 2:steps
        // contract the previous point toward the chosen vertex
        point(j,:) = scale*point(j-1,:) + b*[x(vert(j)), y(vert(j))];
    end
    plot(point(:,1), point(:,2), 'linestyle', 'none', 'markstyle', '.', 'marksize', 1);
endfunction
```

Regardless of the choice of initial point $p_0$, the set of cluster points of the sequence $(p_n)$ is exactly the invariant set $K$, namely the unique nonempty compact set such that $K=\bigcup_{j=1}^r f_j(K)$. This is proved, for example, in the book Integral, Probability, and Fractal Measures by Gerald Edgar.

The scaling factor $s=1/3$ for the carpet is chosen so that the images of the original square under the eight similarities touch, but do not overlap. With a smaller factor the fractal looks like dust (a totally disconnected set), while with $s\ge 1/2$ it becomes a solid square. The intermediate range $1/3<s<1/2$ is tricky: I think that $K$ has measure zero, but can’t even prove that it’s nowhere dense.

It’s also possible to draw $K$ in the opposite way, by removing points rather than adding them. To this end, let $P$ be the convex hull of the set $\{v_1,\dots,v_r\}$; that is, a solid convex polygon. It’s not hard to see that $K\subset P$. Therefore, $\bigcup_{j=1}^r f_j(K)\subset \bigcup_{j=1}^r f_j(P)$, but since the set on the left is $K$ itself, we get $K\subset \bigcup_{j=1}^r f_j(P)$. By induction, $K=\bigcap_{n=1}^{\infty} P_n$ where $P_0=P$ and $P_{n+1}=\bigcup_{j=1}^r f_j(P_n)$.

The above example is ifs(3,3/5,11), calling the Scilab code below.

```
function ifs(sides, scale, steps)
    // subdivision scheme: replace the polygon by scaled copies at each vertex
    b = 1 - scale;
    t = 2*%pi*(1:sides)/sides;
    x = cos(t); y = sin(t);    // vertices of the regular polygon
    xpols = x'; ypols = y';
    for j = 2:steps
        xpols = scale*xpols; ypols = scale*ypols;
        xpolsnew = []; ypolsnew = [];
        for k = 1:sides
            // translate each scaled copy toward the k-th vertex
            xpolsnew = [xpolsnew xpols + b*x(k)*ones(xpols)];
            ypolsnew = [ypolsnew ypols + b*y(k)*ones(ypols)];
        end
        xpols = xpolsnew; ypols = ypolsnew;
    end
    a = gca(); a.data_bounds = [-1, -1; 1, 1];
    [m, n] = size(xpols);
    xfpolys(xpols, ypols, ones(n, 1))
endfunction
```

The final example is an “upper bound” for the fat pentagonal fractal that Colin created with the Chaos Game: the points $v_1,\dots,v_5$ are the vertices of a regular pentagon, and $s=1/2$. The function was called as ifs(5,1/2,8). Again, I think that the invariant set has measure zero, but can’t even prove that its interior is empty. (Or find a reference where this is already done.)

## For the sake of completeness

Let’s prove the completeness of $\ell^p$. The argument consists of two steps.

Claim 1. Suppose $X$ is a normed space in which every absolutely convergent series converges; that is, $\sum_{n=1}^{\infty} x_n$ converges whenever $x_n\in X$ are such that $\sum_{n=1}^{\infty} \|x_n\|$ converges. Then the space is complete.

Proof. Take a Cauchy sequence $\{y_n\}\subset X$. For $j=1,2,\dots$ find an integer $n_j$ such that $\|y_n-y_m\|<2^{-j}$ as long as $n,m\ge n_j$. (This is possible because the sequence is Cauchy; we may also arrange $n_1<n_2<\cdots$.) Also let $n_0=1$ and consider the series $\sum_{j=1}^{\infty} (y_{n_{j}}-y_{n_{j-1}})$. It converges absolutely, since $\|y_{n_j}-y_{n_{j-1}}\|<2^{-(j-1)}$ for $j\ge 2$; hence, by the hypothesis, it converges. Its partial sums simplify (telescope) to $y_{n_j}-y_1$. Hence the subsequence $\{y_{n_j}\}$ has a limit. It remains to apply a general theorem about metric spaces: if a Cauchy sequence has a convergent subsequence, then the entire sequence converges. This proves Claim 1.

Claim 2. Every absolutely convergent series in $\ell^p$ converges.

Proof. The elements of $\ell^p$ are functions from $\mathbb N$ to $\mathbb C$, so let’s write them as such: $f_j\colon \mathbb N\to \mathbb C$. (This avoids confusion of indices.) Suppose the series $\sum_{j=1}^{\infty} \|f_j\|$ converges. Then for any $n$ the series $\sum_{j=1}^{\infty} |f_j(n)|$ also converges, by the Comparison Test. Hence $\sum_{j=1}^{\infty} f_j(n)$ converges (absolute convergence implies convergence for series of complex numbers). Let $f(n) = \sum_{j=1}^{\infty} f_j(n)$. So far the convergence is only pointwise, so we are not done. We still have to show that the series converges in $\ell^p$, that is, its tails have small $\ell^p$ norm: $\sum_{n=1}^\infty |\sum_{j=k}^{\infty} f_j(n)|^p \to 0$ as $k\to\infty$.

What we need now is a dominating function, so that we can apply the Dominated Convergence Theorem. Namely, we need a function $g\colon \mathbb N\to [0,\infty)$ such that
(1) $\sum_{n=1}^{\infty} g(n)<\infty$, and
(2) $|\sum_{j=k}^{\infty} f_j(n)|^p \le g(n)$ for all $k,n$.

Set $g=(\sum_{j=1}^{\infty} |f_j|)^p$. Then (2) follows from the triangle inequality. Also, $g$ is the increasing limit of functions $g_k =(\sum_{j=1}^k |f_j|)^p$, for which we have
$\sum_n g_k(n) \le (\sum_{j=1}^k \|f_j\|)^p \le (\sum_{j=1}^{\infty} \|f_j\|)^p<\infty$
using the triangle inequality in $\ell^p$. Therefore, $\sum_n g(n)<\infty$ by the Monotone Convergence Theorem.
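To see the estimates in action, here is a numerical sanity check (not part of the proof) with the hypothetical choice $f_j = 2^{-j}e_j$ in $\ell^2$, where $e_j$ is the $j$th standard basis vector: the dominating function has finite sum and the tails of the series shrink in norm.

```python
# sanity check: p = 2 and f_j = 2^{-j} e_j, i.e. f_j(n) = 2^{-j} if n == j, else 0
p = 2
N = 40  # truncation level for the numerical check

norms = [2.0 ** -j for j in range(1, N + 1)]   # ||f_j||_p = 2^{-j}
sum_norms = sum(norms)                          # close to 1

# dominating function g(n) = (sum_j |f_j(n)|)^p, here simply (2^{-n})^p
g = [(2.0 ** -n) ** p for n in range(1, N + 1)]
sum_g = sum(g)                                  # close to 1/3, and <= (sum ||f_j||)^p

def tail_norm(k):
    # l^p norm of the tail sum_{j >= k} f_j, which should tend to 0
    return sum((2.0 ** -n) ** p for n in range(k, N + 1)) ** (1 / p)
```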

## Almost norming functionals, Part 2

Let $E$ be a real Banach space with the dual $E^*$. Fix $\delta\in (0,1)$ and call a linear functional $e^*\in E^*$ almost norming for $e$ if $\|e\|=\|e^*\|=1$ and $e^*(e)\ge \delta$. In Part 1 I showed that in any Banach space there exists a continuous selection of almost norming functionals. Here I will prove that there is no uniformly continuous selection in $\ell_1$.

Claim. Let $S$ be the unit sphere in $\ell_1^n$, the $n$-dimensional $\ell_1$-space. Suppose that $f\colon S\to \ell_{\infty}^n$ is a map such that $f(e)$ is almost norming for $e$ in the above sense. Then the modulus of continuity $\omega_f$ satisfies $\omega_f(2/n)\ge 2\delta$.

(If a uniformly continuous selection were available in $\ell_1$, it would yield selections in $\ell_1^n$ with a modulus of continuity independent of $n$.)

Proof. Write $f=(f_1,\dots,f_n)$. For any $\epsilon\in \{-1,1\}^n$ we have $n^{-1}\epsilon \in S$, hence

$\sum\limits_{i=1}^n \epsilon_i f_i(n^{-1}\epsilon)\ge n\delta$.

Sum over all $\epsilon$ and change the order of summation:

$\sum\limits_{i=1}^n \sum\limits_{\epsilon}\epsilon_i f_i(n^{-1}\epsilon)\ge n2^n\delta$

Hence there exists $i\in\{1,2,\dots,n\}$ such that

$\sum\limits_{\epsilon}\epsilon_i f_i(n^{-1}\epsilon) \ge 2^n \delta$

Fix this $i$ from now on. Define $\tilde \epsilon$ to be the same $\pm 1$ vector as $\epsilon$, but with the $i$th component flipped. Since $\epsilon \mapsto \tilde\epsilon$ is a bijection of $\{-1,1\}^n$ and $\tilde\epsilon_i = -\epsilon_i$, the previous sum can be rewritten as

$\sum\limits_{\epsilon} -\epsilon_i f_i(n^{-1}\tilde \epsilon)\ge 2^n\delta$

Adding this to the inequality above it, we get

$\sum\limits_{\epsilon}\epsilon_i [f_i(n^{-1}\epsilon)-f_i(n^{-1}\tilde \epsilon)]\ge 2^{n+1}\delta.$

Since $\|n^{-1}\epsilon-n^{-1}\tilde \epsilon\|_1=2/n$, each of the $2^n$ summands is at most $\omega_f(2/n)$; hence $2^n \omega_f(2/n) \ge 2^{n+1}\delta$, as claimed.
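The argument can be sanity-checked numerically with the concrete selection $f(e)=\mathrm{sign}(e)$ (my choice for illustration; it norms $e$ exactly on the sphere of $\ell_1^n$, so $\delta=1$). Every inequality above can then be verified directly, and the modulus bound is attained:

```python
from itertools import product

# numerical check of the averaging argument for the coordinatewise-sign
# selection (an illustration of mine); here delta = 1
n, delta = 4, 1.0

def f(e):
    return [1.0 if t > 0 else -1.0 for t in e]

cube = list(product([-1.0, 1.0], repeat=n))

def pairing(eps):
    # sum_i eps_i f_i(eps/n), which the proof bounds below by n*delta
    e = [t / n for t in eps]
    return sum(ei * fi for ei, fi in zip(eps, f(e)))

total = sum(pairing(eps) for eps in cube)          # >= n * 2^n * delta

# column sums: for each i, sum over eps of eps_i f_i(eps/n)
cols = [sum(eps[i] * f([t / n for t in eps])[i] for eps in cube)
        for i in range(n)]

def flip(eps, i):
    # flip the i-th sign: the two points eps/n are 2/n apart in l_1
    return tuple(-t if j == i else t for j, t in enumerate(eps))

# for this selection f_i jumps by exactly 2 = 2*delta, so the bound is sharp
jump = max(abs(f([t / n for t in eps])[i] - f([t / n for t in flip(eps, i)])[i])
           for eps in cube for i in range(n))
```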

## A relation between polynomials

This is a brief foray into algebra from a 2006 REU project at Texas A&M.

Given two polynomials $P,Q \in \mathbb C[z_1,\dots,z_n]$, we write $Q\preccurlyeq P$ if there is a differential operator $T\in \mathbb C[\frac{\partial}{\partial z_1},\dots, \frac{\partial}{\partial z_n}]$ such that $Q=T P$.

The relation $\preccurlyeq$ is reflexive and transitive, but it is not antisymmetric. If both $Q\preccurlyeq P$ and $P\preccurlyeq Q$ hold, we say that $P$ and $Q$ are $\partial$-equivalent, denoted $P\thicksim Q$.

A polynomial is $\partial$-homogeneous if it is $\partial$-equivalent to a homogeneous polynomial. Obviously, any polynomial in one variable has this property. Polynomials in more than one variable usually do not have it.

The interesting thing about $\partial$-homogeneous polynomials is that they are refinable, meaning that one has a nontrivial identity of the form $P(z)=\sum_{j\in\mathbb Z^n} c_{j} P(\lambda z-j)$ where $c_{j}\in \mathbb C$, $j\in \mathbb Z^n$, and only finitely many of the coefficients $c_j$ are nonzero. The value of $\lambda$ does not matter as long as $|\lambda|\ne 0,1$. Conversely, every refinable polynomial is $\partial$-homogeneous.
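Here is a concrete instance (a worked example of my own, not from the REU report): $P(z)=z^2+z$ is $\partial$-homogeneous, as is any one-variable polynomial, and with $\lambda=2$ and shifts $j\in\{0,1,2\}$ the refinement coefficients can be found by matching the coefficients of $1$, $z$, $z^2$:

```python
import numpy as np

# example: P(z) = z^2 + z, lambda = 2, shifts j in {0, 1, 2};
# solve P(z) = sum_j c_j P(2z - j) by matching coefficients of 1, z, z^2
def shifted_coeffs(lam, j):
    # coefficients [const, z, z^2] of P(lam*z - j) with P(w) = w^2 + w
    return np.array([j * j - j, -2.0 * lam * j + lam, lam * lam])

A = np.column_stack([shifted_coeffs(2.0, j) for j in (0, 1, 2)])
b = np.array([0.0, 1.0, 1.0])   # P(z) = 0 + 1*z + 1*z^2
c = np.linalg.solve(A, b)       # refinement coefficients c_0, c_1, c_2

# double-check the identity pointwise
P = lambda w: w * w + w
z = np.linspace(-2.0, 2.0, 9)
lhs = P(z)
rhs = sum(c[j] * P(2.0 * z - j) for j in (0, 1, 2))
```

Solving gives $c_0=3/8$, $c_1=-1/8$, $c_2=0$, i.e. $z^2+z=\frac38 P(2z)-\frac18 P(2z-1)$.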

## Rigidity of functions

Suppose that $u\colon \mathbb R^2\to\mathbb R$ is a continuously differentiable function such that at every point of the plane at least one of the partial derivatives $u_x, u_y$ is zero.

Prove that $u$ depends on just one variable (either $x$ or $y$). In other words, either $u_x\equiv 0$ or $u_y\equiv 0$.