## More fractals: Laakso spaces and slit carpets

In the early 2000s Tomi Laakso published two papers demonstrating that metric spaces can behave in very Euclidean ways without looking Euclidean at all. One of his examples became known as the Laakso graph space, since it is the limit of a sequence of graphs. In fact, the best-known version of the construction was introduced by Lang and Plaut, who modified Laakso’s original example to make it more symmetric. The building block is this graph with 6 edges:

Each edge is assigned length 1/4, so that the distance between the leftmost and rightmost points is 1. Next, replace each edge with a copy of the building block. This increases the number of edges by a factor of 6; their length goes down by a factor of 4 (so that the leftmost-rightmost distance is still 1). Repeat.

The resulting metric space is doubling: every ball can be covered by a fixed number (namely, 6) of balls of half the size. This is typical behavior for subsets of Euclidean spaces. Yet the Laakso space does not admit a bi-Lipschitz embedding into any Euclidean space (in fact, not even into any uniformly convex Banach space). It remains the simplest known example of a doubling space without such an embedding. Looking back at the building block, one recognizes the cycle $C_4$ as the source of non-embeddability (a single cycle forces a certain amount of distortion; adding cycles within cycles ad infinitum forces infinite distortion). The extra edges on the left and right are necessary for the doubling condition.

In some sense, the Laakso space is the antipode of the Cantor set: instead of deleting the middle part repeatedly, we duplicate it. But it’s also possible to construct it with a ‘removal’ process very similar to Cantor’s. Begin with a square and slit it horizontally through the center; let the length of the slit be 1/3 of the side length. Then repeat as with the Cantor set, except in addition to cutting left and right of the previous cut, we also do it up and down. Like this:

Our metric space is the square minus the slits, equipped with the path metric: the distance between two points is the infimum of the lengths of curves connecting them within the space. Thus, the slits seriously affect the metric. This is how the set will look after a few more iterations:

I called this a slit pseudo-carpet because it has nonempty interior, unlike true Sierpinski carpets. To better see the similarity with the Laakso space, multiply the vertical coordinate by $\epsilon$ and let $\epsilon\to 0$ (equivalently, redefine the length of curves as $\int |x'|$ instead of $\int \sqrt{|x'|^2+|y'|^2}$). This collapses all vertical segments remaining in the set, leaving us with a version of the Laakso graph space.

Finally, some Scilab code. The Laakso graph was plotted using the chaos game, calling the function below as `laakso(0.7, 50000)`.

```scilab
function laakso(angle, steps)
    s = 1/(2 + 2*cos(angle));
    xoffset  = [0, 1-s, s, s, 1/2, 1/2];
    yoffset  = [0, 0, 0, 0, s*sin(angle), -s*sin(angle)];
    rotation = [0, 0, angle, -angle, -angle, angle];
    sc = s*cos(rotation);
    ss = s*sin(rotation);
    point = zeros(steps, 2);
    vert = grand(1, steps, 'uin', 1, 6);   // random edge choice at each step
    for j = 2:steps
        point(j,:) = point(j-1,:)*[sc(vert(j)), ss(vert(j)); -ss(vert(j)), sc(vert(j))] + [xoffset(vert(j)), yoffset(vert(j))];
    end
    plot(point(:,1), point(:,2), 'linestyle', 'none', 'markstyle', '.', 'marksize', 1);
endfunction
```

## Apropos of et al

I have 111 MathSciNet reviews posted, and there are three more articles on my desk that I should be reviewing instead of blogging. Even though I sometimes think of canceling my AMS membership, I don’t mind helping the society pay its bills (MathSciNet brings in about 37% of the AMS revenue, according to their 2010-11 report).

Sure, reviews need to be edited, especially when written by non-native English speakers like myself. Still, I’m unhappy with the edited version of my recent review:

> This was the approach taken in the foundational paper by J. Heinonen et al. [J. Anal. Math. 85 (2001), 87-139]

The paper was written by J. Heinonen, P. Koskela, N. Shanmugalingam, and J. T. Tyson. Yes, it’s four names. Yes, the 14-letter name is not easy to pronounce without practice. But does the saving of 45 bytes justify omitting the names of people who spent many months, if not years, working on the paper? Absolutely not. The tradition of using “et al” for papers with more than 3 authors belongs to the age of typewriters.

P.S. I don’t think MathSciNet editors read my blog, so I emailed them.

P.P.S. The names are now restored. In the future I’ll be sure to add in “comments to the editor” that names should not be replaced by et al.

## This ain’t like dusting crops, boy

The hyperspace is a set of sets equipped with a metric, or at least with a topology. Given a metric space $X$, let $\mathcal{H}(X)$ be the set of all nonempty closed subsets of $X$ with the Hausdorff metric: $d(A,B)<r$ if, no matter where you are in one set, you can jump into the other by traveling less than $r$. So, the distance between the letters S and U is the length of the longer green arrow.

The requirement of closedness ensures $d(A,B)>0$ for $A\ne B$. If $X$ is unbounded, then $d(A,B)$ will be infinite for some pairs of sets, which is natural: the hyperspace contains infinitely many parallel universes which do not interact, being at infinite distance from one another.
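For finite point sets the Hausdorff distance can be computed straight from the definition. A minimal Python sketch (the function name and the two sample sets are my own, for illustration):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in the plane."""
    # sup over A of the distance to B, and the same with roles swapped
    sup_A = max(min(math.dist(a, b) for b in B) for a in A)
    sup_B = max(min(math.dist(a, b) for a in A) for b in B)
    return max(sup_A, sup_B)

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (4.0, 0.0)]
print(hausdorff(A, B))  # 3.0: the point (4,0) is 3 away from A
```

The two suprema correspond to the two directions of the “jump” in the definition; keeping only one of them would give an asymmetric quantity, not a metric.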

Every continuous surjection $f\colon X\to Y$ has an inverse $f^{-1}\colon Y\to \mathcal{H}(X)$ defined in the obvious way: $f^{-1}(y)=f^{-1}(y)$. Yay ambiguous notation! The subset of $\mathcal{H}(X)$ that consists of the singletons is naturally identified with $X$, so for bijective maps we recover the usual inverse.

Exercise: what conditions on $f$ guarantee that $f^{-1}$ is (a) continuous; (b) Lipschitz? After the previous post it should not be surprising that

- Even if $f$ is open and continuous, $f^{-1}$ may be discontinuous.
- If $f$ is a Lipschitz quotient, then $f^{-1}$ is Lipschitz.

Proofs are not like dusting crops—they are easier.

## Continuous:Lipschitz :: Open:?

A map is continuous if the preimage of every open set is open. If the topology is defined by a metric, we can reformulate this as: the inverse image of an open ball $B_R(f(x))$ contains an open ball $B_r(x)$. Like this:

But bringing these radii $R$ and $r$ into the picture will not serve any purpose unless we use them to quantify continuity. For example, if we insist that $r\ge cR$ for a fixed constant $c>0$, we arrive at the definition of a Lipschitz map.

But why do we look at the inverse image; what happens if we take the direct image instead? Then we get the definition of an open map: the image of every open set is open. Recasting this in metric terms: the image of an open ball $B_R(x)$ contains an open ball $B_r(f(x))$. Like this:

If we quantify openness by requiring $r\ge cR$ for a fixed $c>0$, we arrive at the definition of a co-Lipschitz map. [Caution: some people use “co-Lipschitz” to mean $|f(a)-f(b)|\ge c|a-b|$, which is a different condition. They coincide if $f$ is bijective.]

I don’t know if openness without continuity is good for anything other than torturing students with exercises such as: “Construct an open discontinuous map from $\mathbb R$ to $\mathbb R$.” We probably want both. At first one can hope that open continuous maps will have reasonable fibers $f^{-1}(x)$: something $(m-n)$-dimensional when going from $m$ dimensions to $n$, with $m\ge n$. The hope is futile: an open continuous map $f\colon \mathbb R^2\to\mathbb R^2$ can squeeze a line segment to a point (construction left as an exercise).

A map that is both Lipschitz and co-Lipschitz is called a Lipschitz quotient; this is a quantitative analog of “open continuous”. It turns out that for any Lipschitz quotient $f\colon \mathbb R^2\to\mathbb R^2$ the preimage of every point is a finite set. Moreover, $f$ factors as $f=g\circ h$ where $g$ is a complex polynomial and $h$ is a homeomorphism.

This is encouraging… but going even one dimension higher, it remains unknown whether a Lipschitz quotient $f\colon \mathbb R^3\to\mathbb R^3$ must have discrete fibers. For an overview of the subject, see Bill Johnson’s slides.

## Sweetened and flavored dessert made from gelatinous or starchy ingredients and milk

Takagi (高木) curves are fractals that are somewhat less well known than Cantor sets and Sierpinski carpets, yet they can also be useful as (counter-)examples. The general 高木 curve is the graph $y=f(x)$ of a function $f$ that is built from triangular waves. The $n$th generation wave has equation $y=2^{-n} \lbrace 2^n x \rbrace$ where $\lbrace\cdot\rbrace$ means the distance to the nearest integer. Six of these waves are pictured below.

Summation over $n$ creates the standard 高木 curve $T$, also known as the blancmange curve:

$\displaystyle y=\sum_{n=0}^{\infty} 2^{-n} \lbrace 2^n x\rbrace$

Note the prominent cusps at dyadic rationals: more on this later.

General 高木 curves are obtained by attaching coefficients $c_n$ to the terms of the above series. The simplest of these, and the one of most interest to me, is the alternating 高木 curve $T_{alt}$:

$\displaystyle y=\sum_{n=0}^{\infty} (-2)^{-n} \lbrace 2^n x\rbrace$

The alternation of signs destroys the cusps that are so prominent in $T$. Quantitatively speaking, the diameter of any subarc of $T_{alt}$ is bounded by the distance between its endpoints times a fixed constant. The curves with this property are called quasiarcs, and they are precisely the quasiconformal images of line segments.
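Both curves are easy to evaluate numerically from partial sums of their series. A Python sketch (the function name and the number of terms are my choices; at $x=1/3$ every term has $\lbrace 2^n x\rbrace = 1/3$, so both series become geometric):

```python
def takagi(x, terms=30, alternating=False):
    """Partial sum of sum_n c_n * 2^{-n} * {2^n x}, where {t} is the
    distance from t to the nearest integer and c_n = (-1)^n for the
    alternating curve, c_n = 1 for the standard (blancmange) curve."""
    total = 0.0
    for n in range(terms):
        c = (-1) ** n if alternating else 1
        t = 2 ** n * x
        total += c * 2 ** (-n) * abs(t - round(t))  # {t} = dist to nearest integer
    return total

print(takagi(1/3))                    # ≈ 2/3: geometric series (1/3) * sum 2^{-n}
print(takagi(1/3, alternating=True))  # ≈ 2/9: geometric series (1/3) * sum (-1/2)^n
```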

Both $T$ and $T_{alt}$ have infinite length. More precisely, the length of the $n$th generation of either curve is between $\sqrt{(n+1)/2}$ and $\sqrt{n+1}+1$. Indeed, the derivative of $x\mapsto 2^{-k}\lbrace 2^k x\rbrace$ is just the Rademacher function $r_k$. Therefore, the total variation of the sum $\sum_{k=0}^n c_k 2^{-k}\lbrace 2^k x\rbrace$ is the $L^1$ norm of $\sum_{k=0}^n c_k r_k$. With $c_k=\pm 1$ the sharp form of the Хинчин inequality from the previous post yields

$\displaystyle 2^{-1/2}\sqrt{n+1} \le \left\|\sum_{k=0}^n c_k r_k\right\|_{L^1} \le \sqrt{n+1}$

For the upper bound I added 1 to account for the horizontal direction. Of course, the bound of real interest is the lower one, which proves unrectifiability. So far, a construction involving these curves has shed a tiny bit of light on the following question:

> Which sets $K\subset \mathbb R^n$ have the property that any quasiconformal image of $K$ contains a rectifiable curve?

I won’t go (yet) into the reasons why this question arose. Any set with nonempty interior has the above property, since quasiconformal maps are homeomorphisms. A countable union of lines in the plane does not; this is what 高木 curves helped to show. The wide gap between these results remains to be filled.

## The Khintchine inequality

Today’s technology should make it possible to use the native transcription of names like Хинчин without inventing numerous ugly transliterations. The inequality is extremely useful in both analysis and probability: it says that the $L^p$ norm of a linear combination of Rademacher functions (see my post on the Walsh basis) can be computed from its coefficients, up to a multiplicative error that depends only on $p$. Best of all, this works even for the troublesome $p=1$; in fact for all $0<p<\infty$. Formally stated, the inequality is

$\displaystyle A_p\sqrt{\sum c_n^2} \le \left\|\sum c_n r_n\right\|_{L^p} \le B_p\sqrt{\sum c_n^2}$

where the constants $A_p,B_p$ depend only on $p$. The orthogonality of Rademacher functions tells us that $A_2=B_2=1$, but what are the other constants? They were not found until almost 60 years after the inequality was proved. The precise values, established by Haagerup in 1982, behave in a somewhat unexpected way. Actually, only $A_p$ does. The upper bound is reasonably simple:

$\displaystyle B_p=\begin{cases} 1, & 0<p\le 2 \\ 2^{1/2}\left(\Gamma\left(\tfrac{p+1}{2}\right)\big/\sqrt{\pi}\right)^{1/p}, & 2\le p<\infty \end{cases}$

The lower bound takes an unexpected turn:

$\displaystyle A_p=\begin{cases} 2^{\frac{1}{2}-\frac{1}{p}}, & 0<p\le p_0 \\ 2^{1/2}\left(\Gamma\left(\tfrac{p+1}{2}\right)\big/\sqrt{\pi}\right)^{1/p}, & p_0\le p\le 2 \\ 1, & 2\le p<\infty \end{cases}$

The value of $p_0$ is determined by the continuity of $A_p$, and is not far from $2$: precisely, $p_0\approx 1.84742$. Looks like a bug in the design of the Universe.
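Haagerup’s characterization of $p_0$ is the equation $\Gamma((p_0+1)/2)=\sqrt{\pi}/2$, taken with the root below $2$ (note that $p=2$ also solves it, since $\Gamma(3/2)=\sqrt{\pi}/2$). Assuming that, a quick bisection in Python reproduces the value:

```python
import math

def g(p):
    # zero of this function characterizes p_0: Gamma((p+1)/2) = sqrt(pi)/2
    return math.gamma((p + 1) / 2) - math.sqrt(math.pi) / 2

lo, hi = 1.0, 1.9   # g(1) > 0 and g(1.9) < 0, so the root below 2 is bracketed
for _ in range(60):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
print((lo + hi) / 2)  # ≈ 1.84742
```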

For a concrete example, I took random coefficients $c_0,\dots,c_4$ and formed the linear combination shown above. Then I computed its $L^p$ norm and the bounds in the Khintchine inequality. The norm is in red, the lower bound is green, the upper bound is yellow.

It’s a tight squeeze near $p=2$.
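For $p=1$ this kind of check can be done exactly: the Rademacher functions $r_0,\dots,r_{n}$ realize all $2^{n+1}$ sign patterns on dyadic intervals of equal length, so the $L^1$ norm is a finite average over sign choices. A Python sketch with the sharp constants at $p=1$, namely $A_1=2^{-1/2}$ and $B_1=1$ (the random coefficients here are stand-ins, not the ones used for the plot):

```python
import itertools, math, random

def khintchine_L1(c):
    """L^1 norm of sum c_n r_n: the average of |sum c_n e_n|
    over all sign patterns e_n = +-1."""
    total = 0.0
    for signs in itertools.product((1, -1), repeat=len(c)):
        total += abs(sum(cn * s for cn, s in zip(c, signs)))
    return total / 2 ** len(c)

random.seed(1)
c = [random.uniform(-1, 1) for _ in range(5)]
norm = khintchine_L1(c)
l2 = math.sqrt(sum(cn ** 2 for cn in c))
# Khintchine at p = 1 with sharp constants: l2/sqrt(2) <= norm <= l2
print(l2 / math.sqrt(2) <= norm <= l2)  # True
```

The upper bound is just Cauchy-Schwarz ($\mathbb E|X|\le (\mathbb E X^2)^{1/2}$); the lower bound with $2^{-1/2}$ is the sharp constant mentioned above.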

## Linear independence, matroids, and fun with Fano

Take a finite set of vectors $v_1,\dots,v_n$ in some vector space. Call a set $A\subset \lbrace 1,2,\dots,n\rbrace$ independent if the corresponding vectors are linearly independent. The collection of all independent sets is denoted $\mathcal{I}$; what properties must it have? Well, it must be a matroid:

A matroid with ground set $E$ is a nonempty collection of subsets $\mathcal{I}\subseteq 2^E$ which is both hereditary and augmentable.

1. Hereditary: if $A\in \mathcal{I}$, then any subset of $A$ is in $\mathcal{I}$
2. Augmentable: if $A,B\in \mathcal{I}$ and $A$ has fewer elements than $B$, then $A$ can take an element from $B$ and still remain in $\mathcal{I}$.

The augmentation property brings to mind wealth redistribution (at least if you grew up in the USSR). Anyway, it is fairly obvious that any collection $\mathcal{I}$ that arises from vectors is a matroid. Is the converse true? Not obvious at all.
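The two axioms are easy to test by brute force on small examples. A Python sketch (the checker and the example are mine, for illustration):

```python
from itertools import combinations

def is_matroid(indep):
    """Check the hereditary and augmentation axioms for a nonempty
    collection of independent sets, given as an iterable of iterables."""
    indep = set(map(frozenset, indep))
    if not indep:
        return False
    # hereditary: every subset of an independent set is independent
    for A in indep:
        for k in range(len(A)):
            if any(frozenset(S) not in indep for S in combinations(A, k)):
                return False
    # augmentable: a smaller independent set can take an element from a bigger one
    for A in indep:
        for B in indep:
            if len(A) < len(B) and not any(A | {x} in indep for x in B - A):
                return False
    return True

# the uniform matroid U_{2,3}: all subsets of {1,2,3} with at most 2 elements
U23 = [S for k in range(3) for S in combinations({1, 2, 3}, k)]
print(is_matroid(U23))  # True
```

Dropping a set breaks the axioms: for instance, `is_matroid([(1,), (1, 2)])` is `False`, since the empty set is missing.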

Behold the Fano plane, the smallest projective plane in existence: 7 points, 7 lines, any two lines meet at one point, through any two points there is a unique line.

The Fano matroid has these 7 points as the ground set. A set of points is independent if it has at most three points and is not a line. The hereditary property is clear. The augmentation isn’t hard either: if $A$ has fewer than 2 points, any element of $B\setminus A$ works, since sets of at most 2 points are never lines; given a 2-point set $A$, let $L$ be the line through it; since $B$ has 3 points and is not a line, it must contain a point outside of $L$, and this is the point we add to $A$.

Can the Fano matroid be realized by 7 vectors in a Euclidean space? Guess what… no.

Suppose we do have such a set of 7 vectors. Then they span a 3-dimensional space, since there is no linearly independent subset with 4 vectors. The three vertices of the triangle are independent, and we may assume they are realized by the standard basis vectors $v_1,v_2,v_3$. Enumerate the other points/vectors $v_4,\dots,v_7$ so that $v_{j+3}$ is the midpoint opposite $v_j$ for $j=1,2,3$. Observe that $v_4$ is a linear combination of $v_2,v_3$, etc. Thus, $v_1,\dots, v_7$ are the columns of the matrix

$\displaystyle M= \begin{pmatrix} 1&0&0&0&*&*&* \\ 0&1&0&*&0&*&* \\ 0&0&1&*&*&0&* \end{pmatrix}$

But wait, there’s more! Denote $v_7=(x,y,z)^T$ and observe that the linear dependence of $v_1,v_4,v_7$ forces $v_4$ to be a scalar multiple of $(0,y,z)^T$. Similarly for $v_5,v_6$. So the matrix $M$ must be of the form

$\displaystyle M= \begin{pmatrix} 1&0&0&0&bx&cx&x \\ 0&1&0&ay&0&cy&y \\ 0&0&1&az&bz&0&z \end{pmatrix}$

But wait, there’s even more! The vectors $v_4,v_5,v_6$ are also dependent (that inscribed circle is actually a line). The determinant of the 4,5,6 columns is $2abcxyz$. Can any of $x,y,z$ be zero? No, because that would make $v_7$ a linear combination of two of $v_1,v_2,v_3$, which is not supposed to happen. Can any of $a,b,c$ be zero? No, because that would create a zero vector, and all of the 1- and 2-subsets are independent. So, the Fano matroid cannot be realized by vectors…

unless 2=0. Over $\mathbb F_2$ there is no problem:

$\displaystyle M= \begin{pmatrix} 1&0&0&0&1&1&1 \\ 0&1&0&1&0&1&1 \\ 0&0&1&1&1&0&1 \end{pmatrix}$
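One can verify by brute force that these columns realize the Fano matroid over $\mathbb F_2$. Distinct nonzero vectors in $\mathbb F_2^3$ are automatically pairwise independent, so a triple of them is dependent exactly when it XORs to zero. A Python sketch (the bit encoding of the columns and the list of the 7 lines are read off the matrix and the picture):

```python
from itertools import combinations

# columns v1..v7 of M over F_2, encoded as 3-bit integers (top row = leading bit)
v = [0b100, 0b010, 0b001, 0b011, 0b101, 0b110, 0b111]
# the 7 lines of the Fano plane, in the labeling used above
lines = [{2, 3, 4}, {1, 3, 5}, {1, 2, 6}, {1, 4, 7}, {2, 5, 7}, {3, 6, 7}, {4, 5, 6}]

# a triple of distinct nonzero vectors in F_2^3 is dependent iff its XOR is zero
for T in combinations(range(1, 8), 3):
    dependent = v[T[0] - 1] ^ v[T[1] - 1] ^ v[T[2] - 1] == 0
    assert dependent == (set(T) in lines)
print("the dependent triples are exactly the 7 lines")
```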

(P.S. All of this stuff is in Whitney’s 1935 paper that introduced matroids.)