The exits are North South and East but are blocked by an infinite number of mutants

I used the word mutation in the previous post because of the (implicit) connection to quiver mutation. Quiver mutation is easy to define: take an oriented graph (quiver) in which multiple edges are allowed, but must have consistent orientation (i.e., no 2-edge oriented cycles are allowed). Mutation at a vertex v is done in three steps:

  1. v is removed and each oriented path of length two through v is contracted into an edge. That is, the stopover at v is eliminated.
  2. Step 1 may create some 2-edge oriented cycles, which must be deleted. That is, we cancel the pairs of arrows going in opposite directions.
  3. The replacement vertex v’ is inserted, connected to the rest of the graph in the same way that v was, but with opposite orientation. In practice, one simply reuses v for this purpose.

Some quivers have a finite set of mutation-equivalent ones; others an infinite one. Perhaps the simplest nontrivial case is the oriented 3-cycle with edges of multiplicities x,y,z. The finiteness of its equivalence class has to do with the Markov constant C(x,y,z)=x^2+y^2+z^2-xyz (not 3xyz this time), which is invariant under mutation. This is investigated in the paper “Cluster-Cyclic Quivers with Three Vertices and the Markov Equation” by Beineke, Brüstle and Hille. The appendix by Kerner relates the Markov constant to Hochschild cohomology, which I take as a clue for me to finish this post.
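The invariance is easy to check by machine. Here is a minimal Python sketch; the rule z ↦ xy − z follows from the three steps above (the xy contracted paths cancel against the z arrows going the other way), assuming the cluster-cyclic case where the quiver stays an oriented cycle:

```python
def markov_constant(x, y, z):
    """C(x, y, z) = x^2 + y^2 + z^2 - xyz, invariant under mutation."""
    return x*x + y*y + z*z - x*y*z

def mutate_cycle(x, y, z):
    """Mutate the oriented 3-cycle with edge multiplicities (x, y, z) at the
    stopover vertex of the xy paths: step 1 creates xy new arrows, step 2
    cancels them against the z arrows going the other way, leaving xy - z."""
    return x, y, x*y - z

# C is preserved along any sequence of mutations
triple = (3, 3, 3)  # the "Markov quiver", C = 0
for _ in range(5):
    triple = mutate_cycle(*triple)
    triple = triple[1:] + triple[:1]  # relabel to mutate at another vertex next
    assert markov_constant(*triple) == 0
```

The multiplicities grow quickly (3, 3, 3 → 3, 3, 6 → 3, 6, 15 → …), which already hints that this quiver has infinitely many mutation equivalents.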

So I’ll leave you to play with the mutation applet linked in the embedded tweet below.

“Let me say a few words about the solutions in positive integers of the equation

x^2+y^2+z^2=3xyz.
This equation is symmetric with respect to unknown terms x, y, z; therefore, knowing one of its solutions

x=\alpha, \quad y=\beta, \quad z=\gamma,

it is easy to find the following five:

x=\alpha, \quad y=\gamma, \quad z=\beta;
x=\beta, \quad y=\gamma, \quad z=\alpha;
x=\beta, \quad y=\alpha, \quad z=\gamma;
x=\gamma, \quad y=\alpha, \quad z=\beta;
x=\gamma, \quad y=\beta, \quad z=\alpha.

Although these six solutions may be different, we will consider them as one, denoted by

(\alpha, \beta, \gamma).”
The above is a quote from A. A. Markov’s paper “Sur les formes quadratiques binaires indéfinies” (part 2). Back in 1880, people were patient enough to write out all permutations of three symbols… If we fix x=\alpha and y=\beta, the equation for z becomes

z^2-3\alpha \beta z +(\alpha^2+\beta^2)=0

which admits the second integer root \gamma'=3\alpha\beta-\gamma. We can also write \gamma\gamma'=\alpha^2+\beta^2, which does not immediately tell us that \gamma' is an integer, but it does tell us that \gamma' is positive. For example, from the obvious solution (1,1,1) we get (1,1,2). Now it would not do us any good to mutate in the third variable again, for that would bring us back to (1,1,1). But we can mutate in the second variable, replacing it with 3\cdot 1\cdot 2-1=5. Having understood so well the symmetry of the equation, we write this new solution as (1,2,5), in increasing order. Now we can mutate either 1 or 2, and so it goes…
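This mutation process is easy to run by machine. A short Python sketch that grows the tree of solutions from (1,1,1), keeping each triple sorted so the six permutations count as one:

```python
def neighbors(triple):
    """Mutate a Markov triple in each variable: replace t by
    3 * (product of the other two) - t, then re-sort."""
    x, y, z = triple
    return [tuple(sorted((3*y*z - x, y, z))),
            tuple(sorted((x, 3*x*z - y, z))),
            tuple(sorted((x, y, 3*x*y - z)))]

def markov_triples(bound):
    """All Markov triples with max entry <= bound, reachable from (1, 1, 1)."""
    seen, stack = {(1, 1, 1)}, [(1, 1, 1)]
    while stack:
        for t in neighbors(stack.pop()):
            if t not in seen and max(t) <= bound:
                seen.add(t)
                stack.append(t)
    return sorted(seen)

# every triple found this way solves the Markov equation
for x, y, z in markov_triples(1000):
    assert x*x + y*y + z*z == 3*x*y*z
```

With bound 1000 this produces (1,1,1), (1,1,2), (1,2,5), (1,5,13), (2,5,29), and so on down the tree.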

Markov number tree

All triples, except for (1,1,1) and (1,1,2) at the beginning, consist of distinct numbers (thus, they do generate six distinct solutions each if order matters). The tree contains all solutions of the Markov equation. The Wikipedia article also points out the occurrence of Fibonacci numbers along the top branch, as well as a curious identity discovered by Don Zagier: let f(t)=\cosh^{-1}(3t/2); then (for triples written in increasing order)

\displaystyle x^2+y^2+z^2=3xyz+\frac{4}{9} \quad \Leftrightarrow \quad f(x)+f(y)=f(z)

Looks like a fun problem on simplification of inverse hyperbolic functions.
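Before simplifying anything, one can at least test the identity numerically: given x and y (both at least 2/3, so that f is defined), solve f(z)=f(x)+f(y) for z and check the quadratic relation. A quick sketch:

```python
from math import acosh, cosh

def f(t):
    """Zagier's function f(t) = arccosh(3t/2), defined for t >= 2/3."""
    return acosh(3*t/2)

# pick x, y >= 2/3, define z by f(z) = f(x) + f(y), and verify that
# x^2 + y^2 + z^2 - 3xyz = 4/9 up to rounding error
for x, y in [(1.0, 1.0), (1.5, 2.0), (0.7, 3.3)]:
    z = (2/3) * cosh(f(x) + f(y))
    assert abs(x*x + y*y + z*z - 3*x*y*z - 4/9) < 1e-9
```

The check passes for any admissible x, y, which is consistent with the stated equivalence.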

Also, it’s still unknown whether two distinct Markov triples can have the same maximum \max(\alpha,\beta,\gamma). Looks like a fun problem for amateur number theorists.

To wrap this up, I will describe how Markov (the one of Markov chains fame, not his identically-named son of 4-manifold undecidability fame) came across the equation. Let Q(m,n)=am^2+2bmn+cn^2 be a quadratic form with real coefficients a,b,c normalized by b^2-ac=1. What is the best upper bound on

\displaystyle \min_{(m,n)\in \mathbb{Z}^2\setminus \lbrace(0,0)\rbrace} |Q(m,n)|?

In 1873 Korkin and Zolotarev published a paper showing that the best bound is 2/\sqrt{5}, attained by the form \displaystyle Q_1=\frac{2}{\sqrt{5}}(m^2-mn-n^2). Looks like the case is closed. But what if Q is not Q_1 (precisely, not equivalent to Q_1 under the action of SL(2,\mathbb Z))? Then the best bound improves to 1/\sqrt{2}, attained by \displaystyle Q_2=\frac{1}{\sqrt{2}}(m^2-2mn-n^2) (this is also due to KZ). Well, what if Q is not equivalent to either Q_1 or Q_2? Then the bound improves to \sqrt{\frac{100}{221}} (found by Markov), and we could go on…
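These minima are easy to probe by brute force over a box of lattice points (a sketch, not a proof — it checks only finitely many pairs, but for these forms the integer part m^2-mn-n^2 is a nonzero integer whenever (m,n) is nonzero, so the minimum is attained at (1,0)):

```python
from math import sqrt

def lattice_min(Q, N=50):
    """min |Q(m, n)| over nonzero integer pairs with |m|, |n| <= N."""
    return min(abs(Q(m, n))
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

def Q1(m, n):
    return (2/sqrt(5)) * (m*m - m*n - n*n)

def Q2(m, n):
    return (1/sqrt(2)) * (m*m - 2*m*n - n*n)

assert abs(lattice_min(Q1) - 2/sqrt(5)) < 1e-12   # about 0.8944
assert abs(lattice_min(Q2) - 1/sqrt(2)) < 1e-12   # about 0.7071
```

The ratio m/n would have to be the golden ratio (resp. 1+\sqrt{2}) for Q1 (resp. Q2) to vanish, which is impossible for integers; that is why the minimum cannot drop below the value at (1,0).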

But rather than continue in this fashion, Markov looked for the threshold \mu at which the number of inequivalent forms with minimum \ge \mu becomes infinite. And he found it: \mu =2/3 (for comparison, \sqrt{\frac{100}{221}}=0.67267\dots). Specifically, there are only finitely many forms with minimum above 2/3+\epsilon, for every \epsilon>0. But there are infinitely many forms with minimum exactly 2/3, such as \frac{2}{3}(m^2-\sqrt{5}mn-n^2). It was the iterative process of producing more and more of these forms that led Markov to the Diophantine equation x^2+y^2+z^2=3xyz.

The number 2/3 and its square also appear in Zagier’s identity with f(t)=\cosh^{-1}(3t/2)… But enough numerology for today.

Multiplication by consensus, and some arrows

Let’s admit it: it’s hard to keep track of signs when multiplying numbers. Being lazy people, mathematicians seek ways to avoid this chore. One popular way is to work in the enchanted world of \mathbb Z_2, where -1=1. I’ll describe another way, which is to redefine multiplication by letting the factors reach a consensus on what the sign of their product should be.

If both a and b are positive, let their product be positive. And if they are both negative, the product should also be negative. Finally, if the factors can’t agree on which sign they like, they compromise at 0.

In a formula, this operation can be written as a\odot b=\frac{a|b|+|a|b}{2}, but who wants to see that kind of formula? Just try using it to check that the operation is associative (which it is).

But I hear someone complaining that \odot is just an arbitrary operation that does not make any sense. So I’ll reformulate it. Represent real numbers by ordered pairs (a_+,a_-)\in [0,\infty)\times [0,\infty), for example 5 becomes (5,0) and -6 becomes (0,6). Define multiplication component-wise. Better now? You don’t have to keep track of minus signs because there aren’t any.

This \odot comes in handy when multiplying the adjacency matrices of quivers. The Wikipedia article on Quiver illustrates the concept with this picture:


But in mathematics, a quiver is a directed graph such as this one:

Quiver = directed graph

Recall that the adjacency matrix A of a graph on vertices \lbrace 1,2,\dots,n\rbrace has A_{ij}=1 if there is an edge between i and j, and A_{ij}=0 otherwise. For a directed graph we modify this definition by letting A_{ij}=1 if the arrow goes from i to j, and A_{ij}=-1 if it goes in the opposite direction. So, for the quiver shown above we get

A=\displaystyle \begin{pmatrix} 0 & 1 & -1 & 1 \\   -1 & 0 & -1 & 1 \\ 1 & 1 & 0 & -1 \\ -1 & -1 & 1 & 0 \end{pmatrix}

For an undirected graph the square A^2 counts the number of ways to get from i to j in exactly 2 steps (and one can replace 2 by n). To make this work for the directed graph, we represent numbers as pairs 1=(1,0) and -1=(0,1) and carry on multiplying and adding:

A\odot A=\displaystyle \begin{pmatrix} (0,0) & (0,0) & (1,0) & (1,1) \\   (0,0) & (0,0) & (1,1) & (0,1) \\ (0,1) & (1,1) & (0,0) & (2,0) \\ (1,1) & (1,0) & (0,2) & (0,0) \end{pmatrix}

For instance, there are two ways to get from 3 to 4 in two steps, but none in the opposite direction. This works for any power, and also for multigraphs (with more than one edge between the same vertices). Logically, this is the same as separating the adjacency matrix into its positive and negative parts, and multiplying them separately.
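Concretely, writing A = P − N with P and N the entrywise positive and negative parts, the pair product A\odot A is just the pair of ordinary products (P\cdot P, N\cdot N). A Python sketch, using the matrix above:

```python
def pos_neg(A):
    """Split A = P - N into entrywise positive and negative parts."""
    P = [[max(a, 0) for a in row] for row in A]
    N = [[max(-a, 0) for a in row] for row in A]
    return P, N

def matmul(X, Y):
    """Ordinary matrix product of square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[ 0,  1, -1,  1],
     [-1,  0, -1,  1],
     [ 1,  1,  0, -1],
     [-1, -1,  1,  0]]

P, N = pos_neg(A)
forward, backward = matmul(P, P), matmul(N, N)
# entry (3, 4): two directed 2-step paths from 3 to 4, none back
assert forward[2][3] == 2 and backward[2][3] == 0
```

The pair (forward, backward) reproduces the matrix A\odot A entry by entry.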


The last example is matrix mutation from the theory of cluster algebras. Given a (usually, integer) m\times n matrix A and a positive integer k\le \min(m,n), we can mutate A in the direction k by doing the following:

  1. Dump nuclear waste on the kth row and kth column
  2. To each non-radioactive element a_{ij} add a_{ik}\odot a_{kj}, that is, the \odot product of the radioactive elements to which a_{ij} is exposed.
  3. Clean up by flipping the signs of all radioactive elements.

The properties of \odot should make it clear that each mutation is an involution: mutating for the second time in the same direction recovers the original matrix. However, applying mutations in different directions, one can obtain a large, or even infinite, class of mutation-equivalent matrices.
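The three steps translate directly into Python (a sketch, with \odot as defined earlier; the involution property is checked at the end):

```python
def consensus(a, b):
    """a ⊙ b from earlier: the sign survives only when the factors agree."""
    return (a*abs(b) + abs(a)*b) // 2  # entries are integers, so // is exact

def mutate(A, k):
    """Mutate the matrix A in direction k (here a 0-based row/column index)."""
    m, n = len(A), len(A[0])
    B = [row[:] for row in A]
    for i in range(m):
        for j in range(n):
            if i == k or j == k:
                B[i][j] = -A[i][j]  # step 3: flip signs of radioactive entries
            else:
                # step 2: expose a_ij to the radioactive row and column
                B[i][j] = A[i][j] + consensus(A[i][k], A[k][j])
    return B

A = [[ 0,  2, -1],
     [-2,  0,  3],
     [ 1, -3,  0]]
for k in range(3):
    assert mutate(mutate(A, k), k) == A   # each mutation is an involution
```

The involution works because the radioactive entries change sign, so the \odot correction added by the second mutation exactly cancels the first.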

Triangulation of polygons, Whitehead moves, and associahedron

To triangulate a polygon means to cut it into triangles by diagonals. Let us assume that the polygon is convex, which means it may as well be the regular n-gon. There is nothing to do if n=3 (it’s already a triangle), so we begin with n=4. There are two ways to cut a square with a diagonal… well, that wasn’t much to say.

Next, n=5. There are five ways to triangulate a pentagon: just choose a vertex and draw two diagonals starting from it. Actually, the number of triangulations of any convex n-gon is the kth Catalan number C_k=\frac{1}{k+1}\binom{2k}{k} with k=n-2. With k=5-2=3 we get 5 as expected.

But we should do something beyond counting. A basic operation that one can perform on a triangulation is to remove one diagonal and replace it with another one (the choice of replacement is unique). This is called a Whitehead move. We can draw a graph in which vertices represent triangulations and edges correspond to Whitehead moves. Doing this for a pentagon gives… a pentagon. Funny.

Note that any triangulation of an n-gon involves exactly k-1=n-3 diagonals. Hence, there are k-1 possible Whitehead moves for any triangulation. This means that each vertex in our graph will have k-1 neighbors; in mathematical terms the degree of each vertex is k-1. As a consequence, the graph has (k-1) C_k /2 edges.
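All of these counts can be verified by brute force. A Python sketch that enumerates triangulations of a convex n-gon as sets of diagonals and connects two of them when they differ by one Whitehead move:

```python
from itertools import combinations

def triangulations(vs):
    """Triangulations of the convex polygon on vertices vs (a tuple of labels
    in cyclic order), returned as frozensets of diagonals (i, j)."""
    if len(vs) <= 3:
        return [frozenset()]
    a, b = vs[0], vs[-1]
    result = []
    for i in range(1, len(vs) - 1):
        c = vs[i]  # apex of the triangle resting on the edge (a, b)
        extra = {(a, c)} if i > 1 else set()
        if i < len(vs) - 2:
            extra.add((c, b))
        for left in triangulations(vs[:i + 1]):
            for right in triangulations(vs[i:]):
                result.append(frozenset(left | right | extra))
    return result

n = 6
tris = triangulations(tuple(range(n)))
assert len(tris) == 14                      # the Catalan number C_4
assert all(len(t) == n - 3 for t in tris)   # n - 3 diagonals each
flips = [(s, t) for s, t in combinations(tris, 2)
         if len(s ^ t) == 2]                # differ in exactly one diagonal
assert len(flips) == (n - 3) * 14 // 2      # (k-1) C_k / 2 = 21 edges
```

Since every triangulation has exactly n-3 diagonals, the flip condition "symmetric difference of size 2" is precisely one diagonal removed and one inserted.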

The case n=6 yields this pretty graph.

Triangulations of hexagon and Whitehead moves

Note that there are no triangles in this graph. There can’t be any, because two subsequent Whitehead moves result in removal of two diagonals, and this can’t be undone in one move. So, the shortest cycle has length 4. Looking closely at the faces (some with 4 vertices, some with 5) you notice that all their triangulations have one diagonal in common. The faces can actually be represented by polygons, making the graph into a polytope known as an associahedron:

Associahedron

Is there a natural way to color the vertices, for any n? What is the chromatic number anyway? But it’s getting late.