An inequality is a statement of the form $a \le b$ or $a < b$. (What's up with vertical alignment of formulas in WP?) An inequation is $a \ne b$. I can’t think of any adequate Russian word for “inequation”, but that’s beside the point. The literal analog “неравенство” is already used for “inequality”.

Suppose we want to prove that a map $f\colon \mathbb{C}\to\mathbb{C}$ is Lipschitz, that is, such that $|f(a)-f(b)|\le L\,|a-b|$ for all $a,b$. All we know is that the map $z\mapsto f(z)-\lambda z$ is injective for all $\lambda$ with a large modulus, i.e., for $|\lambda|>L$. In the following we consider only such values of $\lambda$.

Fix distinct $a, b$ and record the inequation: $f(a)-\lambda a \ne f(b)-\lambda b$. Rearrange as $f(a)-f(b) \ne \lambda(a-b)$. Note that we can multiply $\lambda$ by any unimodular complex number, since the inequation holds whenever $|\lambda|>L$. Thus, we have a stronger inequation: $|f(a)-f(b)| \ne |\lambda|\,|a-b|$. Yes, putting absolute values into an inequation makes it stronger, the opposite of what happens with equations.

Still keeping $a$ and $b$ fixed, increase the modulus of $\lambda$ until the inequality $|f(a)-f(b)| < |\lambda|\,|a-b|$ holds. Continuity with respect to $\lambda$ tells us that it was $<$ all the way. So we bring $|\lambda|$ back down to $L$, and happily record the inequality $|f(a)-f(b)| \le L\,|a-b|$, the desired Lipschitz continuity.

Let’s say that a set $S$ contains an $n\times n$ singular matrix if there exists an $n\times n$ matrix with determinant zero whose entries are distinct elements of $S$. For example, the set of prime numbers does not contain any singular $2\times 2$ matrices: $ad-bc=0$ would mean $ad=bc$, which is impossible for four distinct primes. However, for every $n\ge 3$ it contains infinitely many singular $n\times n$ matrices.

I don’t know of an elementary proof of the latter fact. By a 1939 theorem of van der Corput, the set of primes contains infinitely many arithmetic progressions of length 3. (Much more recently, Green and Tao replaced 3 with an arbitrary $k$.) If every row of a matrix begins with a 3-term arithmetic progression, the matrix is singular: its first three columns satisfy $c_1 - 2c_2 + c_3 = 0$.
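To make the column relation concrete, here is a small check of my own (not part of van der Corput's argument): the rows (3, 5, 7), (11, 17, 23), (29, 41, 53) are 3-term arithmetic progressions consisting of nine distinct primes, so the resulting matrix is singular.

```python
# A 3x3 matrix of nine distinct primes; each row is a 3-term
# arithmetic progression, so col1 - 2*col2 + col3 = 0.
M = [[3, 5, 7],
     [11, 17, 23],
     [29, 41, 53]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(det3(M))  # 0
```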

The decimal expansion of $1/998001$ goes through all three-digit integers 000 through 999, skipping only 998. Maple confirms this:
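(The Maple session isn't reproduced here; an equivalent check of my own, in Python, uses integer arithmetic to get the exact digits.)

```python
# First 3000 decimal digits of 1/998001, computed exactly.
digits = str(10**3000 // 998001).zfill(3000)
# One full period is 2997 digits: 999 three-digit groups.
groups = [digits[i:i+3] for i in range(0, 2997, 3)]
missing = {f"{n:03d}" for n in range(1000)} - set(groups)
print(missing)  # {'998'}
```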

This fraction made the rounds on the internet a while ago.

I begin with two general claims:

Integer numbers are more complicated than polynomials

Decimal fractions are more complicated than power series

To illustrate the first one: multiplying 748 by 367 takes more effort than multiplying the corresponding polynomials,

$(7x^2+4x+8)(3x^2+6x+7) = 21x^4+54x^3+97x^2+76x+56$.

The reason is that there’s no way for the product of $8$ and $7$ to “roll over” into the $x^1$ group. It’s going to stay in the $x^0$ group. Mathematically speaking, polynomials form a graded ring and integers don’t. At the same time, one can recover the integers from the polynomials by setting $x=10$ in $(7x^2+4x+8)(3x^2+6x+7)$. The result is $748\cdot 367 = 274516$.
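A small script of my own makes the contrast explicit: polynomial multiplication is just a convolution of coefficient lists, with no carrying, and substituting $x=10$ afterwards recovers the integer product.

```python
def polymul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b   # no carrying: coefficients just accumulate
    return r

p = [8, 4, 7]   # 7x^2 + 4x + 8, i.e. 748 with x = 10
q = [7, 6, 3]   # 3x^2 + 6x + 7, i.e. 367
r = polymul(p, q)
print(r)                                         # [56, 76, 97, 54, 21]
print(sum(c * 10**k for k, c in enumerate(r)))   # 274516 = 748 * 367
```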

Moving on to the second claim, consider the power series

(*) $\dfrac{1}{1-q} = 1 + q + q^2 + q^3 + \cdots$

I prefer to use $q$ instead of $x$ here, because we often want to replace $q$ with an expression in $x$. For example, setting $q=2x$ gives us

$\dfrac{1}{1-2x} = 1 + 2x + 4x^2 + 8x^3 + \cdots,$

which is nothing surprising. But if we now set $x=1/1000$ (and, for neatness, divide both sides by 1000), the result is

$\dfrac{1}{998} = 0.001002004008016032064128256513\ldots$

which looks like a magical fraction producing powers of 2. But in reality, setting $x=1/1000$ did nothing but mess things up. Now we have a complicated decimal number, in which the powers of 2 break down starting with “513” because of the extra digit rolled over from 1024. In contrast, the neat power series keeps generating powers of 2 forever.

By the way, $\dfrac{1}{1-2q}$ is the generating function for the numbers 1, 2, 4, 8, 16, …, i.e., the powers of 2.
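The breakdown at 513 is easy to watch by computing exact digits of 1/998 (a quick Python check of my own):

```python
# First 33 decimal digits of 1/998: powers of 2 in 3-digit blocks,
# intact until 512 picks up the carry from 1024 and becomes 513.
digits = str(10**33 // 998).zfill(33)
blocks = [digits[i:i+3] for i in range(0, 33, 3)]
print(blocks)
# ['001', '002', '004', '008', '016', '032', '064', '128', '256', '513', '026']
```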

So, if you want to cook up a ‘magical’ fraction, all you need to do is find the generating function for the numbers you want, and set the variable to be some negative power of 10. E.g., the choice $q=1/1000$ avoids digits rolling over until the desired numbers reach 1000. But we could take a smaller power of 10 and get many more numbers at the cost of a more complicated fraction.

For example, how would one come up with 1/998001? We need a generating function for 1,2,3,4,…, that is, we need a formula for the power series $q + 2q^2 + 3q^3 + \cdots$. No big deal: just take the derivative of (*):

$\dfrac{1}{(1-q)^2} = 1 + 2q + 3q^2 + \cdots$

and multiply both sides by $q$ to restore the exponent:

(**) $\dfrac{q}{(1-q)^2} = q + 2q^2 + 3q^3 + \cdots$

Now set $q=1/1000$, which is easiest if you expand the denominator as $\left(\frac{999}{1000}\right)^2$ and multiply both the numerator and denominator by $10^6$. The result is

$\dfrac{1000}{998001} = 0.001002003004005\ldots$

Dropping 1000 in the numerator is a matter of taste (cf. xkcd 163).

Let’s cook up something else. For example, again take the derivative in (**) and multiply by $q$:

$\dfrac{q(1+q)}{(1-q)^3} = q + 4q^2 + 9q^3 + 16q^4 + \cdots$

Now set $q=1/1000$ to get

$\dfrac{1001000}{997002999} = 0.001004009016025036049\ldots$

Admittedly, this fraction is less likely to propagate around the web than 1/998001.
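Still, it checks out; here is an exact-digit computation of my own: the three-digit blocks are the perfect squares, at least until the squares grow long enough to overlap.

```python
# First 30 decimal digits of 1001000/997002999: squares in 3-digit blocks.
digits = str(1001000 * 10**30 // 997002999).zfill(30)
blocks = [digits[i:i+3] for i in range(0, 30, 3)]
print(blocks)
# ['001', '004', '009', '016', '025', '036', '049', '064', '081', '100']
```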

For the last example, take the Fibonacci numbers 1,1,2,3,5,8,13,… The recurrence relation $F_{n+1} = F_n + F_{n-1}$ can be used to find the generating function, $\dfrac{q}{1-q-q^2} = q + q^2 + 2q^3 + 3q^4 + 5q^5 + \cdots$. Setting $q=1/1000$ yields

$\dfrac{1000}{998999} = 0.001001002003005008013021034055089144233377610\ldots$
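As a sanity check (mine, not from the original post): the first fifteen three-digit blocks of this fraction are Fibonacci numbers; the pattern first degrades at 987, which receives a carry from 1597.

```python
# First 45 decimal digits of 1000/998999: Fibonacci numbers in 3-digit blocks.
digits = str(1000 * 10**45 // 998999).zfill(45)
blocks = [digits[i:i+3] for i in range(0, 45, 3)]
print(blocks)
# ['001', '001', '002', '003', '005', '008', '013', '021', '034',
#  '055', '089', '144', '233', '377', '610']
```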

This is a continuation of the series on submetries. First, an example: the distance function to a closed convex set $K\subset\mathbb{R}^n$ is a submetry from $\mathbb{R}^n$ onto $[0,\infty)$. Note that the best regularity we can expect from such a distance function is $C^{1,1}$, the Lipschitzness of the first derivatives. The second derivative does not exist at the points where level curves make the transition from straight to circular.

Now back to an attempt to introduce duality between immetries (=isometric embeddings) and submetries. Recall that the Lipschitz dual $X^\#$ of a pointed metric space $(X, x_0)$ is the set of all Lipschitz functions $g\colon X\to\mathbb{R}$ such that $g(x_0)=0$. Given the Lipschitz norm, $X^\#$ becomes a Banach space. Any closed ball $\bar B(a,r)\subset X$ can be written as
(*) $\bar B(a,r) = \{x\in X : |g(x)-g(a)|\le r \text{ for all } g\in X^\# \text{ with } \|g\|\le 1\}$.
Indeed, $\subseteq$ is obvious, and the reverse inclusion follows by considering $g = d(\cdot, a) - d(x_0, a)$.

It’s natural to use (*) to relate immetries to submetries, because the former are defined by $d(f(a),f(b))=d(a,b)$ and the latter by $f(\bar B(x,r)) = \bar B(f(x),r)$.

In the previous post I considered four implications:

(1) If $f$ is an immetry, then $f^\#$ is a submetry

(2) If $f$ is a submetry, then $f^\#$ is an immetry

(3) If $f^\#$ is an immetry, then $f$ is a submetry

(4) If $f^\#$ is a submetry, then $f$ is an immetry

I proved (1) already. Implication (2) is also easy to prove: if $f$ is a submetry, then $\|f^\# g\| = \|g\|$ for every $g$, because pairs of points that (almost) realize $\|g\|$ can be lifted through $f$.

Here is a proof of (4). If $f^\#$ is a submetry, then $f^\#(\bar B(g,r)) = \bar B(f^\# g, r)$; in particular, every $h\in X^\#$ with $\|h\|\le 1$ is of the form $g\circ f$ with $\|g\|\le 1$. We can use this in (*) to obtain, for any $a\in X$ and $r\ge 0$,
$f^{-1}(\bar B(f(a),r)) \subseteq \{x\in X : |h(x)-h(a)|\le r \text{ for all } h\in X^\# \text{ with } \|h\|\le 1\}$.
The latter set is nothing but $\bar B(a,r)$, which means $f$ is an immetry.

I already noted that (3) fails for general metric spaces, the inclusion $\mathbb{Q}\subset\mathbb{R}$ being a counterexample. However, this counterexample is not impressive, because $\mathbb{Q}$ is dense in $\mathbb{R}$. One could still hope that (3) holds for proper metric spaces. By definition, a metric space is proper if every closed ball is compact. (Or, equivalently, every bounded sequence has a convergent subsequence.) In metric geometry, properness is a very common assumption which excludes incomplete spaces and infinite-dimensional spaces. Its relevance to submetries can be seen from their definition $f(\bar B(x,r)) = \bar B(f(x),r)$. It requires the image of a closed ball to be closed, which is not very likely to happen unless the ball is compact. (No such issues arise on the immetry side, because $f(\bar B(x,r))$ is always closed within $f(X)$.)

However, it turns out (3) is false even for compact spaces. The counterexample was already present on this blog:

Indeed, the Lipschitz norm of a function defined on an interval is equal to the essential supremum of the absolute value of its derivative. Composition with the metric quotient map given above preserves this quantity. Hence, the induced map $f^\#$ is an immetry.

Continuation of an earlier post that considered submetries, namely the maps $f\colon X\to Y$ between metric spaces such that $f(\bar B(x,r)) = \bar B(f(x),r)$ for all $x\in X$ and all $r\ge 0$. (Here and in what follows, $\bar B(x,r)$ is the closed ball of radius $r$ centered at $x$.) The dual notion is an immetry: a map $f\colon X\to Y$ such that $f(\bar B(x,r)) = \bar B(f(x),r)\cap f(X)$ for all $x\in X$ and all $r\ge 0$. Immetries can be characterized by the condition $d(f(a),f(b)) = d(a,b)$, which means they are nothing but isometric embeddings. I just made up the word to introduce symmetry between submetry and immetry. And then I tried to look it up.

In what sense are these dual? Recall that reversal of arrows in a “categorical” definition of an immetry did not produce a submetry. Let’s try another approach. Let our metric spaces be pointed: they all contain a distinguished point $x_0$, which is fixed by all maps under consideration. The Lipschitz dual $X^\#$ of a metric space $X$ consists of all Lipschitz maps $g\colon X\to\mathbb{R}$. This is naturally a vector space. Moreover, it is a Banach space with the norm $\|g\| = \sup_{a\ne b} |g(a)-g(b)|/d(a,b)$ if we identify the functions that differ by a constant. A Lipschitz map $f\colon X\to Y$ induces a bounded linear operator $f^\#\colon Y^\#\to X^\#$ by $f^\# g = g\circ f$. If $f$ is 1-Lipschitz, then so is $f^\#$ (i.e., its operator norm is at most 1).

It would be nice to have the following:

$f$ is an immetry iff $f^\#$ is a submetry

$f$ is a submetry iff $f^\#$ is an immetry

but that’s too much to hope for. For example, the inclusion of $\mathbb{Q}$ into $\mathbb{R}$ is not a submetry, but it induces what is essentially the identity map $\mathbb{R}^\#\to\mathbb{Q}^\#$, since every Lipschitz function on $\mathbb{Q}$ extends uniquely to $\mathbb{R}$. Let’s go through these one by one.

Suppose $f\colon X\to Y$ is an immetry. Then we think of $X$ as a subset of $Y$, and $f^\#$ is simply the restriction operator $Y^\#\to X^\#$. To prove that it’s a submetry, we should verify the 2-point lifting property: given any $u, v\in X^\#$ and any $\tilde u\in Y^\#$ that extends $u$, we must find $\tilde v\in Y^\#$ that extends $v$ and satisfies $\|\tilde u - \tilde v\| = \|u - v\|$. This is easy: extend $v-u$ in a norm-preserving way (by McShane-Whitney) and add $\tilde u$.

I also wrote down an (easy) proof that $f$ being a submetry implies that $f^\#$ is an immetry, but WP ate it. Specifically, having pressed “Save Draft”, I was asked to re-login (the cookie had expired). Having done so, I was presented with a 10-minute-old draft.

We already know that $f^\#$ being an immetry does not imply that $f$ is a submetry. Whether $f^\#$ being a submetry implies that $f$ is an immetry is left as an exercise for the reader.

A slightly modified version of the riddle discussed in detail by Colin Carroll. I recap the key parts of his description below. Rule 4 was added by me in order to eliminate any probabilistic issues from the problem.

You are the captain of a pirate ship with a crew of N people ordered by rank. Your crew just managed to plunder 100 Pieces of Eight. Now you are to propose a division of the 100 PoE, and the crew will vote on the division. The captain doesn’t vote except to break a tie. If the proposal fails, the captain walks the plank, and the first mate becomes captain, the third in command becomes first mate, and so on. Each pirate votes according to the following ordered priorities:

1. They do not want to die.

2. They want to maximize their own profit.

3. They like to kill people, including crewmates.

4. They prefer to be on good terms with the higher-ranked crewmates.

The question is: what division do you, as the captain, suggest?
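Rather than spoil the riddle, here is a backward-induction sketch one could use to check an answer. It encodes my reading of the rules: the captain breaks ties, a crew member facing certain death votes yes for free, and by rule 3 a crew member whose profit would merely match the alternative votes to kill, so bribes must be strictly larger (rule 4 never gets to act in this model, since rule 3 already decides ties). The names (`outcome`, the `DEATH` sentinel) are mine.

```python
DEATH = -1  # sentinel payoff: worse than any number of coins

def outcome(k, coins=100):
    """Final payoff of each of k pirates, listed by current rank
    (index 0 = captain); DEATH marks a pirate who walks the plank."""
    if k == 1:
        return [coins]
    future = outcome(k - 1, coins)      # the world after a failed vote
    crew = k - 1
    # Crew member at rank i drops to rank i-1 if the captain dies.
    fallback = [future[i - 1] for i in range(1, k)]
    needed = (crew + 1) // 2            # pass if yes >= no (captain breaks ties)

    # Price of a 'yes' vote: free if the voter would die anyway,
    # otherwise one coin more than the fallback payoff (rule 3).
    def price(i):
        return 0 if fallback[i] == DEATH else fallback[i] + 1

    order = sorted(range(crew), key=price)
    cost = sum(price(i) for i in order[:needed])
    if cost > coins:
        return [DEATH] + future         # no affordable proposal: captain dies
    shares = [0] * crew
    for i in order[:needed]:
        shares[i] = price(i)
    return [coins - cost] + shares

print(outcome(5))  # [97, 0, 1, 2, 0]
```

Under these assumptions a lone captain with one crew member is doomed, which is exactly why the higher-ranked crew become so cheap to buy in larger crews.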

In the early 2000s, Tomi Laakso published two papers which demonstrated that metric spaces could behave in very Euclidean ways without looking Euclidean at all. One of his examples became known as the Laakso graph space, since it is the limit of a sequence of graphs. In fact, the best-known version of the construction was introduced by Lang and Plaut, who modified Laakso’s original example to make it more symmetric. The building block is this graph with 6 edges:

Each edge is assigned length 1/4, so that the distance between the leftmost and rightmost points is 1. Next, replace each edge with a copy of the building block. This increases the number of edges by a factor of 6; their length goes down by a factor of 4 (so that the leftmost-rightmost distance is still 1). Repeat.

The resulting metric space is doubling: every ball can be covered by a fixed number (namely, 6) of balls of half the size. This is the typical behavior of subsets of Euclidean spaces. Yet, the Laakso space does not admit a bi-Lipschitz embedding into any Euclidean space (in fact, not even into any uniformly convex Banach space). It remains the simplest known example of a doubling space without such an embedding. Looking back at the building block, one recognizes the cycle as the source of non-embeddability (a single cycle forces a certain amount of distortion; adding cycles within cycles ad infinitum forces infinite distortion). The extra edges on the left and right are necessary for the doubling condition.

In some sense, the Laakso space is the antipode of the Cantor set: instead of deleting the middle part repeatedly, we duplicate it. But it’s also possible to construct it with a ‘removal’ process very similar to Cantor’s. Begin with the square and slit it horizontally in the center; let the length of the slit be 1/3 of the side length. Then repeat as with the Cantor set, except that in addition to cutting left and right of the previous cut, we also do it up and down. Like this:

Our metric space is the square minus the slits, equipped with the path metric: the distance between two points is the infimum of the lengths of curves connecting them within the space. Thus, the slits seriously affect the metric. This is how the set will look after a few more iterations:

I called this a slit pseudo-carpet because it has nonempty interior, unlike true Sierpinski carpets. To better see the similarity with the Laakso space, multiply the vertical coordinate by $\epsilon$ and let $\epsilon\to 0$ (equivalently, redefine the length of a curve $(x(t),y(t))$ as $\int |x'(t)|\,dt$ instead of $\int \sqrt{x'(t)^2+y'(t)^2}\,dt$). This collapses all vertical segments remaining in the set, leaving us with a version of the Laakso graph space.

Finally, some Scilab code. The Laakso graph was plotted using the Chaos game, calling the function below with parameters laakso(0.7,50000).

function laakso(angle, steps)
    // Scale factor: the six edges span unit width, since 2s + 2s*cos(angle) = 1
    s = 1/(2+2*cos(angle));
    // The six affine contractions, one per edge of the building block:
    // translation offsets and rotation angles
    xoffset = [0, 1-s, s, s, 1/2, 1/2];
    yoffset = [0, 0, 0, 0, s*sin(angle), -s*sin(angle)];
    rotation = [0, 0, angle, -angle, -angle, angle];
    sc = s*cos(rotation);
    ss = s*sin(rotation);
    point = zeros(steps, 2);
    // Chaos game: at each step apply a contraction chosen uniformly at random
    vert = grand(1, steps, 'uin', 1, 6);
    for j = 2:steps
        point(j,:) = point(j-1,:)*[sc(vert(j)), ss(vert(j)); -ss(vert(j)), sc(vert(j))] + [xoffset(vert(j)), yoffset(vert(j))];
    end
    plot(point(:,1), point(:,2), 'linestyle', 'none', 'markstyle', '.', 'marksize', 1);
endfunction

I have 111 MathSciNet reviews posted, and there are three more articles on my desk that I should be reviewing instead of blogging. Even though I sometimes think of canceling my AMS membership, I don’t mind helping the society pay its bills (MathSciNet brings in about 37% of the AMS revenue, according to their 2010-11 report).

Sure, reviews need to be edited, especially when written by non-native English speakers like myself. Still, I’m unhappy with the edited version of my recent review:

This was the approach taken in the foundational paper by J. Heinonen et al. [J. Anal. Math. 85 (2001), 87–139].

The paper was written by J. Heinonen, P. Koskela, N. Shanmugalingam, and J. T. Tyson. Yes, it’s four names. Yes, the 14-letter name is not easy to pronounce without practice. But does the saving of 45 bytes justify omitting the names of people who spent many months, if not years, working on the paper? Absolutely not. The tradition of using “et al.” for papers with more than three authors belongs to the age of typewriters.

P.S. I don’t think MathSciNet editors read my blog, so I emailed them.

P.P.S. The names are now restored. In the future I’ll be sure to add in “comments to the editor” that names should not be replaced by et al.

The hyperspace is a set of sets equipped with a metric, or at least with a topology. Given a metric space $X$, let $\mathcal{H}(X)$ be the set of all nonempty closed subsets of $X$ with the Hausdorff metric: $d_H(A,B)<r$ if, no matter where you are in one set, you can jump into the other by traveling less than $r$. So, the distance between the letters S and U is the length of the longer green arrow.

The requirement of closedness ensures that $d_H(A,B)>0$ for $A\ne B$. If $X$ is unbounded, then $d_H(A,B)$ will be infinite for some pairs of sets, which is natural: the hyperspace contains infinitely many parallel universes which do not interact, being at infinite distance from one another.
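For finite sets, the definition translates directly into code (a sketch of mine; the names `hausdorff` and `dist` are made up for illustration):

```python
def hausdorff(A, B, dist):
    """Hausdorff distance between two finite nonempty sets: the farthest
    you may have to travel to jump from one set into the other."""
    d_AB = max(min(dist(a, b) for b in B) for a in A)  # worst jump from A to B
    d_BA = max(min(dist(a, b) for a in A) for b in B)  # worst jump from B to A
    return max(d_AB, d_BA)

d = lambda x, y: abs(x - y)
print(hausdorff({0, 1}, {0, 5}, d))  # 4: from 5, the nearest point of {0,1} is 1
```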

Every continuous surjection $f\colon X\to Y$ has an inverse $f^{-1}\colon \mathcal{H}(Y)\to \mathcal{H}(X)$ defined in the obvious way: $f^{-1}(A) = f^{-1}(A)$. Yay ambiguous notation! The subset of $\mathcal{H}(X)$ that consists of the singletons is naturally identified with $X$, so for bijective maps we recover the usual inverse.

Exercise: what conditions on $f$ guarantee that $f^{-1}$ is (a) continuous; (b) Lipschitz? After the previous post it should not be surprising that

Even if $f$ is open and continuous, $f^{-1}$ may be discontinuous.

If $f$ is a Lipschitz quotient, then $f^{-1}$ is Lipschitz.

Proofs are not like dusting crops—they are easier.