Re: “Democracy on the high seas” by Colin Carroll

A slightly modified version of the riddle discussed in detail by Colin Carroll. I recap the key parts of his description below. Rule 4 was added by me in order to eliminate any probabilistic issues from the problem.

You are the captain of a pirate ship with a crew of N people ordered by rank. Your crew just managed to plunder 100 Pieces of Eight. Now you are to propose a division of the 100 PoE, and the crew will vote on the division. The captain doesn’t vote except to break a tie. If the proposal fails, the captain walks the plank, and the first mate becomes captain, the third in command becomes first mate, and so on. Each pirate votes according to the following ordered priorities:

  1. They do not want to die.
  2. They want to maximize their own profit.
  3. They like to kill people, including crewmates.
  4. They prefer to be on good terms with the higher ranked crewmates.

The question is: what division do you, as the captain, suggest?

The exits are North, South and East, but are blocked by an infinite number of mutants

I used the word mutation in the previous post because of the (implicit) connection to quiver mutation. Quiver mutation is easy to define: take an oriented graph (quiver) in which multiple edges between two vertices are allowed, but they must have consistent orientation (i.e., no 2-edge oriented cycles are allowed). Mutation at a vertex v is done in three steps (a matrix form of the rule is sketched after the list):

  1. v is removed and each oriented path of length two through v is contracted into an edge. That is, the stopover at v is eliminated.
  2. Step 1 may create some 2-edge oriented cycles, which must be deleted. That is, we cancel the pairs of arrows going in opposite directions.
  3. The replacement vertex v’ is inserted, connected to the rest of the graph in the same way that v was, but with opposite orientation. In practice, one simply reuses v for this purpose.
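For experimentation, here is a minimal Scilab sketch (my addition, not from the original post) of the same rule in the standard exchange-matrix form, where B(i,j) records the number of arrows from i to j minus the number from j to i; the function name mutate is my own choice.

// Sketch: quiver mutation at vertex k, exchange-matrix form.
// B(i,j) = (number of arrows i -> j) - (number of arrows j -> i).
function Bp = mutate(B, k)
    n = size(B, 1);
    Bp = B;
    for i = 1:n
        for j = 1:n
            if i == k | j == k then
                Bp(i,j) = -B(i,j);    // step 3: reverse every arrow at k
            else
                // steps 1 and 2: contract paths through k, then cancel 2-cycles
                Bp(i,j) = B(i,j) + (abs(B(i,k))*B(k,j) + B(i,k)*abs(B(k,j)))/2;
            end
        end
    end
endfunction

For instance, the oriented 3-cycle with multiplicities x,y,z can be encoded as B=[0,x,-z;-x,0,y;z,-y,0], and mutate(B,1) reproduces the three-step recipe above; applying it repeatedly is a quick way to check the invariance of the Markov constant mentioned in the next paragraph.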

Some quivers have a finite mutation-equivalence class; others an infinite one. Perhaps the simplest nontrivial case is the oriented 3-cycle with edges of multiplicities x,y,z. The finiteness of its mutation class has to do with the Markov constant C(x,y,z)=x^2+y^2+z^2-xyz (not 3xyz this time), which is invariant under mutation. This is investigated in the paper “Cluster-Cyclic Quivers with Three Vertices and the Markov Equation” by Beineke, Brüstle and Hille. The appendix by Kerner relates the Markov constant to Hochschild cohomology, which I take as a cue to finish this post.

So I’ll leave you to play with the mutation applet linked in the embedded tweet below.

“Let me say a few words about the solutions in positive integers of the equation

x^2+y^2+z^2=3xyz.

This equation is symmetric with respect to unknown terms x, y, z; therefore, knowing one of its solutions

x=\alpha, \quad y=\beta, \quad z=\gamma,

it is easy to find the following five:

x=\alpha, \quad y=\gamma, \quad z=\beta;
x=\beta, \quad y=\gamma, \quad z=\alpha;
x=\beta, \quad y=\alpha, \quad z=\gamma;
x=\gamma, \quad y=\alpha, \quad z=\beta;
x=\gamma, \quad y=\beta, \quad z=\alpha.

Although these six solutions may be different, we will consider them as one, denoted by

x,y,z=\alpha,\beta,\gamma.”


The above is a quote from A. A. Markov’s paper “Sur les formes quadratiques binaires indéfinies” (part 2). Back in 1880 people were patient enough to write out all permutations of three symbols… If we fix x=\alpha and y=\beta, the equation for z becomes

z^2-3\alpha \beta z +(\alpha^2+\beta^2)=0

which admits a second integer root \gamma'=3\alpha\beta-\gamma. We can also write \gamma\gamma'=\alpha^2+\beta^2, which does not immediately tell us that \gamma' is an integer, but does tell us that \gamma' is positive. For example, from the obvious solution (1,1,1) we get (1,1,2). Now it would not do us any good to mutate in the third variable again, for it would bring us back to (1,1,1). But we can mutate in the second variable, replacing it with 3\cdot 1\cdot 2-1=5. Having understood so well the symmetry of the equation, we write this new solution as (1,2,5), in increasing order. Now we can mutate either 1 or 2, and so it goes…
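To see the tree grow, here is a rough Scilab sketch (my addition; the function name markov_triples and the breadth-first bookkeeping are my own choices) that generates solutions by mutating one coordinate at a time, exactly as above.

// Sketch: the first n Markov triples, found by repeated mutation
// t(k) -> 3*(product of the other two) - t(k), starting from (1,1,1).
function triples = markov_triples(n)
    triples = [1 1 1];
    i = 1;
    while size(triples, 1) < n
        t = triples(i, :);
        for k = 1:3
            m = t;
            m(k) = 3*prod(t)/t(k) - t(k);    // mutate the k-th coordinate
            m = gsort(m, 'g', 'i');           // store in increasing order
            if ~or(triples(:,1)==m(1) & triples(:,2)==m(2) & triples(:,3)==m(3)) then
                triples = [triples; m];       // a solution we have not seen yet
            end
        end
        i = i + 1;
    end
endfunction

Calling markov_triples(10), say, grows the bottom part of the tree pictured below; the exact order in which the triples appear depends on the traversal.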

Markov number tree, from http://en.wikipedia.org/wiki/File:MarkoffNumberTree.png

All triples, except for the two at the beginning, consist of distinct numbers (thus, they do generate six distinct solutions if order matters). The tree contains all solutions of the Markov equation. The Wikipedia article also points out the occurrence of Fibonacci numbers along the top branch, as well as a curious identity discovered by Don Zagier: let f(t)=\cosh^{-1}(3t/2); then (for triples written in increasing order)

\displaystyle x^2+y^2+z^2=3xyz+\frac{4}{9} \quad \Leftrightarrow \quad f(x)+f(y)=f(z)

Looks like a fun exercise in simplifying inverse hyperbolic functions.
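Here is one possible route (a sketch of my own, not from the post): substitute x=\frac{2}{3}\cosh X, y=\frac{2}{3}\cosh Y, z=\frac{2}{3}\cosh Z, so that X=f(x) and so on. Then

\displaystyle x^2+y^2+z^2-3xyz-\frac{4}{9} = \frac{4}{9}\left(\cosh^2 X+\cosh^2 Y+\cosh^2 Z-2\cosh X\cosh Y\cosh Z-1\right)

and the right-hand side, viewed as a quadratic in \cosh Z, has roots \cosh(X+Y) and \cosh(X-Y). With the triple written in increasing order, the relevant root is Z=X+Y, which is exactly f(x)+f(y)=f(z).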

Also, it’s still unknown whether two distinct Markov triples can have the same maximum \max(\alpha,\beta,\gamma). Looks like a fun problem for amateur number theorists.

To wrap this up, I will describe how Markov (the one of Markov chains fame, not his identically-named son of 4-manifold undecidability fame) came across the equation. Let Q(m,n)=am^2+2bmn+cn^2 be a quadratic form with real coefficients a,b,c normalized by b^2-ac=1. What is the best upper bound on

\displaystyle \min_{(m,n)\in \mathbb{Z}^2\setminus \lbrace(0,0)\rbrace} |Q(m,n)|?

In 1873 Korkin and Zolotarev published a paper showing that the best bound is 2/\sqrt{5}, attained by the form \displaystyle Q_1=\frac{2}{\sqrt{5}}(m^2-mn-n^2). Looks like the case is closed. But what if Q is not Q_1 (precisely, not equivalent to Q_1 under the action of SL(2,\mathbb Z))? Then the best bound improves to 1/\sqrt{2}, attained by \displaystyle Q_2=\frac{1}{\sqrt{2}}(m^2-2mn-n^2) (this is also due to KZ). Well, what if Q is not equivalent to either Q_1 or Q_2? Then the bound improves to \sqrt{\frac{100}{221}} (found by Markov), and we could go on…

But rather than continue in this fashion, Markov looked for the threshold \mu at which the number of inequivalent forms with minimum \ge \mu becomes infinite. And he found it: \mu =2/3 (for comparison, \sqrt{\frac{100}{221}}=0.67267\dots). Specifically, for every \epsilon>0 there are only finitely many inequivalent forms with minimum above 2/3+\epsilon, but there are infinitely many forms with minimum exactly 2/3, such as \frac{2}{3}(m^2-\sqrt{5}mn-n^2). It was the iterative process of getting more and more of these forms that led Markov to the Diophantine equation x^2+y^2+z^2=3xyz.
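To make the connection with the triples explicit (a classical fact that I am adding here, not spelled out in the post): if m is the largest member of a Markov triple, the associated form has minimum

\displaystyle \frac{2m}{\sqrt{9m^2-4}},

which gives 2/\sqrt{5}, 1/\sqrt{2} and \sqrt{100/221} for m=1,2,5, and decreases to 2/3 as m\to\infty.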

The number 2/3 and its square also appear in Zagier’s identity with f(t)=\cosh^{-1}(3t/2)… But enough numerology for today.

More fractals: Laakso spaces and slit carpets

In the early 2000s Tomi Laakso published two papers which demonstrated that metric spaces can behave in very Euclidean ways without looking Euclidean at all. One of his examples became known as the Laakso graph space, since it is the limit of a sequence of graphs. In fact, the best-known version of the construction was introduced by Lang and Plaut, who modified Laakso’s original example to make it more symmetric. The building block is this graph with 6 edges:

Building block

Each edge is assigned length 1/4, so that the distance between the leftmost and rightmost points is 1. Next, replace each edge with a copy of the building block. This increases the number of edges by a factor of 6; their length goes down by a factor of 4 (so that the leftmost-rightmost distance is still 1). Repeat.

Laakso graph (click to magnify)

The resulting metric space is doubling: every ball can be covered by a fixed number (namely, 6) of balls of half the size. This is the typical behavior of subsets of Euclidean spaces. Yet, the Laakso space does not admit a bi-Lipschitz embedding into any Euclidean space (in fact, not even into any uniformly convex Banach space). It remains the simplest known example of a doubling space without such an embedding. Looking back at the building block, one recognizes the cycle C_4 as the source of non-embeddability (a single cycle forces a certain amount of distortion; adding cycles within cycles ad infinitum forces infinite distortion). The extra edges on the left and right are necessary for the doubling condition.

In some sense, the Laakso space is the antipode of the Cantor set: instead of deleting the middle part repeatedly, we duplicate it. But it’s also possible to construct it with a ‘removal’ process very similar to Cantor’s. Begin with the square and slit it horizontally in the center; let the length of the slit be 1/3 of the sidelength. Then repeat as with the Cantor set, except in addition to cutting left and right of the previous cut, we also do it up and down. Like this:

Slit pseudo-carpet: beginning

Our metric space is the square minus the slits, equipped with the path metric: the distance between two points is the infimum of lengths of curves connecting them within the space. Thus, the slits seriously affect the metric. This is how the set will look after a few more iterations:

Slit pseudo-carpet (click to magnify)

I called this a slit pseudo-carpet because it has nonempty interior, unlike true Sierpinski carpets. To better see the similarity with the Laakso space, multiply the vertical coordinate by \epsilon and let \epsilon\to 0 (equivalently, redefine the length of curves as \int |x'| instead of \int \sqrt{|x'|^2+|y'|^2}). This collapses all vertical segments remaining in the set, leaving us with a version of the Laakso graph space.

Finally, some Scilab code. The Laakso graph was plotted using the Chaos game, calling the function below with parameters laakso(0.7,50000).

function laakso(angle,steps)
    // Chaos game for the Laakso graph: the building block corresponds to six
    // contractive similarities (one per edge), applied in random order.
    s = 1/(2+2*cos(angle));                     // contraction ratio of each map
    xoffset = [0,1-s,s,s,1/2,1/2];              // translations of the six maps
    yoffset = [0,0,0,0,s*sin(angle),-s*sin(angle)];
    rotation = [0,0,angle,-angle,-angle,angle]; // rotation of each map
    sc = s*cos(rotation);
    ss = s*sin(rotation);
    point = zeros(steps,2);
    vert = grand(1,steps,'uin',1,6);            // random edge index at each step
    for j = 2:steps
        // apply the randomly chosen similarity to the previous point
        point(j,:) = point(j-1,:)*[sc(vert(j)),ss(vert(j));-ss(vert(j)),sc(vert(j))] + [xoffset(vert(j)), yoffset(vert(j))];
    end
    plot(point(:,1),point(:,2),'linestyle','none','markstyle','.','marksize',1);
endfunction

Apropos of et al

I have 111 MathSciNet reviews posted, and there are three more articles on my desk that I should be reviewing instead of blogging. Even though I sometimes think of canceling my AMS membership, I don’t mind helping the society pay its bills (MathSciNet brings in about 37% of the AMS revenue, according to its 2010-11 report).

Sure, reviews need to be edited, especially when written by non-native English speakers like myself. Still, I’m unhappy with the edited version of my recent review:

This was the approach taken in the foundational paper by J. Heinonen et al. [J. Anal. Math. 85 (2001), 87-139]

The paper was written by J. Heinonen, P. Koskela, N. Shanmugalingam, and J. T. Tyson. Yes, it’s four names. Yes, the 14-letter name is not easy to pronounce without practice. But does the saving of 45 bytes justify omitting the names of people who spent many months, if not years, working on the paper? Absolutely not. The tradition of using “et al” for papers with more than 3 authors belongs to the age of typewriters.

P.S. I don’t think MathSciNet editors read my blog, so I emailed them.

P.P.S. The names are now restored. In the future I’ll be sure to add in “comments to the editor” that names should not be replaced by et al.

This ain’t like dusting crops, boy

A hyperspace is a set of sets, equipped with a metric or at least with a topology. Given a metric space X, let \mathcal{H}(X) be the set of all nonempty closed subsets of X with the Hausdorff metric: d(A,B)<r if, no matter where you are in one of the sets, you can jump into the other by traveling less than r. So, the distance between the letters S and U is the length of the longer green arrow.

The requirement of closedness ensures d(A,B)>0 for A\ne B. If X is unbounded, then d(A,B) will be infinite for some pairs of sets, which is natural: the hyperspace contains infinitely many parallel universes which do not interact, being at infinite distance from one another.
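For finite point sets the two “green arrows” are easy to compute. Here is a small Scilab sketch (my addition; the function name hausdorff is made up) for subsets of the plane given as rows of coordinates.

// Sketch: Hausdorff distance between finite planar sets A (nA x 2) and B (nB x 2).
function d = hausdorff(A, B)
    nA = size(A, 1); nB = size(B, 1);
    D = zeros(nA, nB);
    for i = 1:nA
        for j = 1:nB
            D(i, j) = norm(A(i, :) - B(j, :));   // pairwise distances
        end
    end
    // longest jump needed to get from A into B, and from B into A; take the larger
    d = max(max(min(D, 'c')), max(min(D, 'r')));
endfunction

For example, hausdorff([0 0], [3 4; 0 1]) returns 5: the point (0,0) is within distance 1 of the second set, but (3,4) is at distance 5 from the first.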

Imagine that

Every continuous surjection f\colon X\to Y has an inverse f^{-1}\colon Y\to \mathcal{H}(X) defined in the obvious way: f^{-1}(y)=f^{-1}(y). Yay ambiguous notation! The subset of \mathcal{H}(X) that consists of the singletons is naturally identified with X, so for bijective maps we recover the usual inverse.

Exercise: what conditions on f guarantee that f^{-1} is (a) continuous; (b) Lipschitz? After the previous post it should not be surprising that

  • Even if f is open and continuous, f^{-1} may be discontinuous.
  • If f is a Lipschitz quotient, then f^{-1} is Lipschitz.

Proofs are not like dusting crops—they are easier.

Continuous:Lipschitz :: Open:?

A map is continuous if the preimage of every open set is open. If the topology is defined by a metric, we can reformulate this as: the inverse image of an open ball B_R(f(x)) contains an open ball B_r(x). Like this:

Continuous map

But bringing these radii R and r into the picture will not serve any purpose unless we use them to quantify continuity. For example, if we insist that r\ge cR for a fixed constant c>0, we arrive at the definition of a Lipschitz map.
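Spelled out (a quick check of my own): the requirement that one can always take r\ge cR says that f(B_{cR}(x))\subset B_R(f(x)) for all x and R, which amounts to

\displaystyle |f(a)-f(b)|\le \frac{1}{c}|a-b| \quad \text{for all } a,b,

i.e., f is Lipschitz with constant 1/c.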

But why do we look at the inverse image; what happens if we take the direct image instead? Then we get the definition of an open map: the image of every open set is open. Recasting this in metric terms: the image of an open ball B_R(x) contains an open ball B_r(f(x)). Like this:

Open map

If we quantify openness by requiring r\ge cR for a fixed c>0, we arrive at the definition of a co-Lipschitz map. [Caution: some people use “co-Lipschitz” to mean |f(a)-f(b)|\ge c|a-b|, which is a different condition. They coincide if f is bijective.]
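In symbols (again my paraphrase), the co-Lipschitz condition with r\ge cR reads

\displaystyle f(B_R(x))\supset B_{cR}(f(x)) \quad \text{for all } x \text{ and } R,

so the larger c is, the more the map spreads balls out.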

I don’t know if openness without continuity is good for anything other than torturing students with exercises such as: “Construct an open discontinuous map from \mathbb R to \mathbb R.” We probably want both. At first one can hope that open continuous maps will have reasonable fibers f^{-1}(x): something (m-n)-dimensional when going from m dimensions to n, with m\ge n. The hope is futile: an open continuous map f\colon \mathbb R^2\to\mathbb R^2 can squeeze a line segment to a point (construction left as an exercise).

A map that is both Lipschitz and co-Lipschitz is called a Lipschitz quotient; this is a quantitative analog of “open continuous”. It turns out that for any Lipschitz quotient f\colon \mathbb R^2\to\mathbb R^2 the preimage of every point is a finite set. Moreover, f factors as f=g\circ h where g is a complex polynomial and h is a homeomorphism.
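A concrete example (mine, not from the post): in polar coordinates, the angle-doubling map

\displaystyle f(re^{i\theta})=re^{2i\theta}

is a Lipschitz quotient of the plane; its fiber over the origin is a single point, every other fiber consists of two points, and it factors as the complex polynomial z\mapsto z^2 composed with the homeomorphism re^{i\theta}\mapsto \sqrt{r}\,e^{i\theta}.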

This is encouraging… but going even one dimension higher, it remains unknown whether a Lipschitz quotient f\colon \mathbb R^3\to\mathbb R^3 must have discrete fibers. For an overview of the subject, see Bill Johnson’s slides.