The least distorted curves and surfaces

Every subset {A\subset \mathbb R^n} inherits the metric from {\mathbb R^n}, namely {d(a,b)=|a-b|}. But we can also consider the intrinsic metric on {A}, defined as follows: {\rho_A(a,b)} is the infimum of the lengths of curves that connect {a} to {b} within {A}. Let’s assume there is always such a curve of finite length, and therefore {\rho_A} is always finite. All the properties of a metric hold, and we also have {|a-b|\le \rho_A(a,b)} for all {a,b\in A}.

If {A} happens to be convex, then {\rho_A(a,b)=|a-b|} because any two points are joined by a line segment. There are also some nonconvex sets for which {\rho_A} coincides with the Euclidean distance: for example, the punctured plane {\mathbb R^2\setminus \{(0,0)\}}. Although we can’t always get from {a} to {b} in a straight line, the required detour can be as short as we wish.

On the other hand, for the set {A=\{(x,y)\in \mathbb R^2 : y\le |x|\}} the intrinsic distance is sometimes strictly greater than the Euclidean distance.

Nontrivial distortion (the equation in the figure should read y = |x|, without the square)

For example, the shortest curve from {(-1,1)} to {(1,1)} has length {2\sqrt{2}}, while the Euclidean distance is {2}. This is the worst ratio for pairs of points in this set, although proving this claim would be a bit tedious. Following Gromov (Metric structures on Riemannian and non-Riemannian spaces), define the distortion of {A} as the supremum of the ratios {\rho_A(a,b)/|a-b|} over all pairs of distinct points {a,b\in A}. (Another term in use for this concept: optimal constant of quasiconvexity.) So, the distortion of the set {\{(x,y) : y\le |x|\}} is {\sqrt{2}}.

Gromov observed (along with posing the Knot Distortion Problem) that every simple closed curve in a Euclidean space (of any dimension) has distortion at least {\pi/2}. That is, the least distorted closed curve is the circle, for which the half-length/diameter ratio is exactly {\pi/2}.

Distortion of a closed curve

Here is the proof. Parametrize the curve by arclength: {\gamma\colon [0,L]\rightarrow \mathbb R^n}. For {0\le t\le L/2} define {\Gamma(t)=\gamma(t)-\gamma(t+L/2)} and let {r=\min_t|\Gamma(t)|}. The curve {\Gamma} connects the two antipodal points {\Gamma(0)} and {\Gamma(L/2)=-\Gamma(0)}, each of magnitude at least {r}, and stays outside of the open ball of radius {r} centered at the origin. Therefore, its length is at least {\pi r}: projecting {\Gamma} onto the closed ball of radius {r} does not increase length (projection onto a convex set is 1-Lipschitz), and the projected curve joins two antipodal points of the sphere of radius {r}, so its length is at least that of a half great circle, {\pi r}. On the other hand, {\Gamma} is a 2-Lipschitz map, so its length is at most {2(L/2)=L}; hence {\pi r\le L}, that is, {r\le L/\pi}. Take any {t} that realizes the minimum of {|\Gamma|}. The points {a=\gamma(t)} and {b=\gamma(t+L/2)} satisfy {|a-b|=r\le L/\pi} and {\rho_A(a,b)=L/2}, since both arcs of the curve between {a} and {b} have length {L/2}. Thus the distortion is at least {(L/2)/(L/\pi)=\pi/2}. Done.
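
As a quick numeric sanity check (my own sketch, not part of the original argument), one can approximate the distortion of a closed curve from a polygonal sample; for a finely sampled circle the result is, as expected, very close to {\pi/2}. Here it is in Python:

import numpy as np

def distortion_of_closed_polyline(points):
    # Discrete stand-in for the distortion: the intrinsic distance between two
    # samples is measured along the polygon (the shorter of the two arcs).
    n = len(points)
    edges = np.linalg.norm(np.roll(points, -1, axis=0) - points, axis=1)
    total = edges.sum()
    cum = np.concatenate(([0.0], np.cumsum(edges)))[:-1]   # arclength of each vertex
    worst = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            arc = cum[j] - cum[i]
            intrinsic = min(arc, total - arc)
            worst = max(worst, intrinsic / np.linalg.norm(points[i] - points[j]))
    return worst

t = np.linspace(0, 2 * np.pi, 500, endpoint=False)
circle = np.column_stack((np.cos(t), np.sin(t)))
print(distortion_of_closed_polyline(circle), np.pi / 2)   # both are about 1.5708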

Follow-up question: what are the least distorted closed surfaces (say, in {\mathbb R^3})? It’s natural to expect that a sphere, with distortion {\pi/2}, is the least distorted. But this is false. An exercise from Gromov’s book (which I won’t spoil): Find a closed convex surface in {\mathbb R^3} with distortion less than { \pi/2}. (Here, “convex” means the surface bounds a convex solid.)

Higher order reflections

Mathematical reflections, not those supposedly practiced in metaphilosophy.

Given a function {f} defined for {x\ge 0}, we have two basic ways to reflect it about {x=0}: even reflection {f(-x)=f(x)} and odd reflection {f(-x)=-f(x)}. Here is the even reflection of the exponential function {e^x}:

Even reflection

The extended function is not differentiable at {0}. The odd reflection, pictured below, is not even continuous at {0}. But to be fair, it has the same slope to the left and to the right of {0}, unlike the even reflection.

Odd reflection

Can we reflect a function preserving both continuity and differentiability? Yes, this is what higher-order reflections are for. They define {f(-x)} not just in terms of {f(x)} but also involve values at other points, like {f(x/2)}. Here is one such smart reflection:

\displaystyle    f(-x) = 4f(x/2)-3f(x)  \qquad\qquad\qquad (1)

Differentiable reflection

Indeed, letting {x\rightarrow 0^+}, we observe continuity: both sides converge to {f(0)}. Taking derivatives of both sides, we get

\displaystyle  -f'(-x) = 2f'(x/2) - 3f'(x)

where the limits of both sides as {x\rightarrow 0^+} again agree: they are {-f'(0)}.

A systematic way to obtain such reflection formulas is to consider what they do to monomials: {1}, {x}, {x^2}, etc. A formula that reproduces the monomials up to degree {d} will preserve the derivatives up to order {d}. For example, plugging {f(x)=1} or {f(x)=x} into (1) we get a valid identity. With {f(x)=x^2} the equality breaks down: {x^2} on the left, {-2x^2} on the right. As a result, the curvature of the graph shown above is discontinuous: at {x=0} it changes the sign without passing through {0}.

To fix this, we’ll need to use a third point, for example {x/4}. It’s better not to use points like {2x}, because when the original domain of {f} is a bounded interval {[0,b]}, we probably want the reflection to be defined on all of {[-b,b]}.

So we look for coefficients {A,B,C} such that {f(-x)=Af(x/4)+Bf(x/2)+Cf(x)} holds as an identity for {f(x)=1,x,x^2}. The linear system {A+B+C=1}, {A/4+B/2+C=-1}, {A/16+B/4+C=1} has the solution {A=16}, {B=-20}, {C=5}. This is our reflection formula, then:

\displaystyle  f(-x) = 16f(x/4)-20f(x/2)+5f(x)  \qquad\qquad\qquad (2)
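
The little linear system can also be solved mechanically; here is a minimal numpy sketch (my own, using the nodes 1/4, 1/2, 1 from above) that recovers the coefficients of (2):

import numpy as np

# The reflection f(-x) = A f(x/4) + B f(x/2) + C f(x) should reproduce the
# monomials 1, x, x^2; plugging in x^k gives sum(coefficient * node^k) = (-1)^k.
nodes = np.array([1/4, 1/2, 1.0])
M = np.vstack([nodes**k for k in range(3)])    # rows correspond to k = 0, 1, 2
rhs = np.array([(-1.0)**k for k in range(3)])
print(np.linalg.solve(M, rhs))                 # [16. -20. 5.], as in (2)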

And this is the result of reflecting {\exp(x)} according to (2):

Twice differentiable reflection

Now the curvature of the graph is continuous. One could go on, but since the human eye is not sensitive to discontinuities of the third derivative, I’ll stop here.


In case you don’t believe the last paragraph, here is the reflection with three continuous derivatives, given by

\displaystyle  f(-x) = \frac{640}{7} f(x/8) - 144f(x/4)+60f(x/2)-\frac{45}{7}f(x)

and below it, the extension given by (2). For these plots I used Desmos because plots in Maple (at least in my version) have pretty bad aliasing.

Three continuous derivatives
Two continuous derivatives

Also, cubic splines have only two continuous derivatives, and they connect the dots naturally.

Walking dogs and comparing sticks

Then he dropped two in at once, and leant over the bridge to see which of them would come out first; and one of them did; but as they were both the same size, he didn’t know if it was the one which he wanted to win, or the other one. – A. A. Milne

It’s useful to have a way of measuring how different two sticks (or fir cones) are in size, shape, and position in a river. Yes, we have the Hausdorff distance {d_H} between sets, but it does not take into account the orientation of the sticks. And it performs poorly when the sticks are broken: the Hausdorff distance between these blue and red curves does not capture the disparity of their shapes:

Small Hausdorff distance, totally different curves

Indeed, {d_H} is relatively small here, because from any point of the red curve one can easily jump to some point of the blue curve, and the other way around. However, this kind of measurement completely ignores the fact that curves are meant to be traveled along in a continuous, monotone way.

There is a concept of distance that is better suited for comparing curves: the Fréchet distance {d_F}. Wikipedia gives this (folklore) description:

Imagine a dog walking along one curve and the dog’s owner walking along the other curve, connected by a leash. Both walk continuously along their respective curve from the prescribed start point to the prescribed end point of the curve. Both may vary their speed, and even stop, at arbitrary positions and for arbitrarily long. However, neither can backtrack. The Fréchet distance between the two curves is the length of the shortest leash that is sufficient for traversing both curves in this manner.


To get started, let’s compute this distance for two oriented line segments {AB} and {CD}. The length of the leash must be at least {|AC|} in order to begin the walk, and at least {|BD|} to finish. So,

\displaystyle    d_F(AB,CD) \ge \max(|AC|, |BD|)

In fact, equality holds here. In order to bound {d_F} from above, we just need one parametrization of the segments. Take the parametrization proportional to length:

\displaystyle    P=(1-t)A+tB,\quad Q=(1-t)C+tD

Then {|PQ|^2} is a quadratic polynomial in {t}. Without doing any computations, we can say that the coefficient of {t^2} is nonnegative, because {|PQ|^2} cannot be negative for any {t\in\mathbb R}. Hence, this polynomial is a convex function of {t}, which implies that its maximum on the interval {[0,1]} is attained at an endpoint. And the endpoints we have already considered. (By the way, this proof works in every CAT(0) metric space.)
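
As a numerical aside (my own sketch, not from the post), the discrete Fréchet distance of finely sampled curves can be computed by the standard dynamic-programming recursion; for two segments it reproduces {\max(|AC|,|BD|)}:

import numpy as np
from functools import lru_cache

def discrete_frechet(P, Q):
    # Discrete Fréchet distance between sampled curves P and Q (arrays of points).
    P, Q = np.asarray(P, float), np.asarray(Q, float)

    @lru_cache(maxsize=None)
    def c(i, j):
        d = np.linalg.norm(P[i] - Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# Two oriented segments AB and CD, sampled proportionally to length:
A, B, C, D = np.array([0., 0]), np.array([4., 0]), np.array([1., 3]), np.array([3., 2])
t = np.linspace(0, 1, 100)[:, None]
AB, CD = (1 - t) * A + t * B, (1 - t) * C + t * D
print(discrete_frechet(AB, CD))                            # about 3.1623
print(max(np.linalg.norm(A - C), np.linalg.norm(B - D)))   # sqrt(10), the same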


In general, the Fréchet distance is not realized by constant-speed parametrization. Consider these two curves, each with a long detour:

Symmetry breaking

It would be impractical for the dog and the owner to go on the detour at the same time. One should go first while the other waits for his/her/its turn. In particular, we see symmetry breaking here: even for two perfectly symmetric curves, the Fréchet-optimal parametrizations would not be symmetric to each other.


It is not obvious from the definition of {d_F} whether it is a metric; as usual, it’s the triangle inequality that is suspect. However, {d_F} indeed satisfies the triangle inequality. To prove this, we should probably formalize the definition of {d_F}. Given two continuous maps {f,g} from {[0,1]} into {\mathbb R^2} (or any metric space), define

\displaystyle    d_F(f,g) = \inf_{\phi,\psi}\max_{[0,1]} |f\circ \phi-g\circ \psi|

where {\phi} and {\psi} range over all nondecreasing functions from {[0,1]} onto itself. Actually, we can require {\phi} and {\psi} to be strictly increasing (it only takes a small perturbation), which in dog/owner terms means they are not allowed to stop, but can mosey along as slowly as they want. Then we don’t need both {\phi} and {\psi}, since

\displaystyle    \max_{[0,1]} |f\circ \phi-g\circ \psi| = \max_{[0,1]} |f -g\circ (\psi\circ \phi^{-1})|

So, given {f,g,h} we can pick {\phi} such that {\max_{[0,1]} |f-g\circ \phi|} is within {\epsilon} of {d_F(f,g)}; then pick {\psi} such that {\max_{[0,1]} | g\circ \phi - h\circ \psi | } is within {\epsilon} of {d_F(g,h)}. Then

\displaystyle    d_F(f,h)\le \max_{[0,1]} |f- g\circ \phi|+\max_{[0,1]} |g\circ \phi - h\circ \psi|   \le d_F(f,g)+d_F(g,h)+2\epsilon

and the triangle inequality follows.

Continuity of circumcenter and circumradius

For a bounded set {A} on the plane (or in any Euclidean space) one can define the circumcenter {c(A)} and circumradius {r(A)} as follows: {r(A)} is the smallest radius of a closed disk containing {A}, and {c(A)} is the center of such a disk. (Other terms in use: Chebyshev center and Chebyshev radius.)

Smallest disk covering the points

The fact that {c(A)} is well-defined may not be obvious: what if there are multiple disks of radius {r(A)} that contain {A}? To investigate, introduce the farthest distance function {f_A(x) = \sup_{a\in A} |x-a|}. By definition, {c(A)} is where {f_A} attains its minimum. The function {f_A} is convex, being the supremum of a family of convex functions. However, that does not guarantee the uniqueness of its minimum. We have two issues here:

  • {x\mapsto |x-a|} is not strictly convex
  • the supremum of an infinite family of strictly convex functions can fail to be strictly convex (like {\sup_{1<p<2} x^p = x} on the interval {[0,1]}).

The first issue is resolved by squaring {f_A}. Indeed, {f_A^2} attains its minimum at the same place where {f_A} does, and {f_A(x)^2 = \sup_{a\in A} |x-a|^2} where each term {|x-a|^2} is strictly convex.

Also, we don’t want to lose strict convexity when taking the supremum over {a\in A}. For this purpose, we must replace strict convexity by something more robust. The appropriate substitute is strong convexity: a function {f} is strongly convex if there is {\lambda>0} such that {f(x)-\lambda |x|^2 } is convex. Let’s say that {f} is {\lambda}-convex in this case.

Since {|x-a|^2-|x|^2 = -2\langle x,a\rangle + |a|^2} is a convex (in fact, affine) function of {x}, we see that {|x-a|^2} is {1}-convex. This property passes to the supremum: subtracting {|x|^2} from the supremum is the same as subtracting it from each term. Strong convexity implies strict convexity and with it, the uniqueness of the minimum point. So {c(A)}, the point where {f_A^2} attains its minimum, is uniquely defined. (Finding it in practice may be difficult. The spherical version of this problem is considered in Covering points with caps.)
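
For a finite set of points, one can find {c(A)} and {r(A)} numerically by minimizing the farthest-distance function directly. Here is a rough Python sketch (my own illustration, not an efficient algorithm; the point set and tolerances are made up for the example):

import numpy as np
from scipy.optimize import minimize

def chebyshev_center(points):
    # Minimize f_A(x) = max_a |x - a| with a derivative-free method.
    points = np.asarray(points, float)
    fA = lambda x: np.max(np.linalg.norm(points - x, axis=1))
    res = minimize(fA, points.mean(axis=0), method='Nelder-Mead',
                   options={'xatol': 1e-9, 'fatol': 1e-9})
    return res.x, res.fun          # the center c(A) and the radius r(A)

c, r = chebyshev_center([(0, 0), (2, 0), (0, 2), (1, 1)])
print(c, r)   # approximately (1, 1) and sqrt(2) = 1.4142 for this set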


Having established uniqueness, it is natural to ask about stability, or more precisely, the continuity of {c(A)} and {r(A)} with respect to {A}. Introduce the Hausdorff distance {d_{\mathcal H}} on the set of bounded subsets. By definition, {d_{\mathcal H}(A,B)\le \delta} if {A} is contained in the {\delta}-neighborhood of {B}, and {B} is contained in the {\delta}-neighborhood of {A}. It is easy to see that {r(B)\le r(A) + d_{\mathcal H}(A,B)}, and therefore

\displaystyle  |r(A)-r(B)|\le d_{\mathcal H}(A,B)

In words, the circumradius is a {1}-Lipschitz function of the set.

What about the circumcenter? If the set {A} is shifted by {d} units in some direction, the circumcenter moves by the same amount. So it may appear that it should also be a {1}-Lipschitz function of {A}. But this is false.

Observe (or recall from middle-school geometry) that the circumcenter of a right triangle is the midpoint of its hypotenuse:

Circumcenter of a right triangle

Consider two right triangles:

  • Vertices {(-1,0), (1,0), (1,\epsilon)}. The right angle is at {(1,0)}, and the circumcenter is the midpoint of the opposite side (the hypotenuse): {(0,\epsilon/2)}.
  • Vertices {(-1,0), (1,0), (\sqrt{1-\epsilon^2},\epsilon)}. The right angle is at {(\sqrt{1-\epsilon^2},\epsilon)}, and the circumcenter is at {(0,0)}.

The Hausdorff distance between these two triangles is merely {1-\sqrt{1-\epsilon^2} < \epsilon^2}, yet the distance between their circumcenters is {\epsilon/2}. So, Lipschitz continuity fails, and the most we can hope for is Hölder continuity with exponent {1/2}.

And indeed, the circumcenter is locally {1/2}-Hölder continuous. To prove this, suppose {|c(A)-c(B)|\ge \epsilon}. The {1}-convexity of {f_A^2} implies that

\displaystyle  f_A(c(B))^2 \ge f_A(c(A))^2+|c(A)-c(B)|^2 \ge r(A)^2 + \epsilon^2

On the other hand, since {|f_A-f_B|\le d_{\mathcal H}(A,B)} everywhere,

\displaystyle  f_A(c(B))\le f_B(c(B)) + d_{\mathcal H}(A,B) = r(B) + d_{\mathcal H}(A,B) \le r(A) + 2 d_{\mathcal H}(A,B)

Putting things together,

\displaystyle  d_{\mathcal H}(A,B) \ge \frac12 (\sqrt{r(A)^2 + \epsilon^2} - r(A))  = \frac{\epsilon^2}{2( \sqrt{r(A)^2 + \epsilon^2} + r(A) )}

Thus, as long as {r(A)} remains bounded above, we have an inequality of the form {d_{\mathcal H}(A,B) \ge c\, \epsilon^2}, which is exactly {1/2}-Hölder continuity.


Remark. The proof uses no information about {\mathbb R^n} other than the {1}-convexity of the squared distance function. As such, it applies to every CAT(0) space.

Infinite beatitude of non-existence: a journey into Nothingland

In the novella Flatland by Edwin A. Abbott, the Sphere leads the Square “downward to the lowest depth of existence, even to the realm of Pointland, the Abyss of No dimensions”:

I caught these words, “Infinite beatitude of existence! It is; and there is nothing else beside It.” […] “It fills all Space,” continued the little soliloquizing Creature, “and what It fills, It is. What It thinks, that It utters; and what It utters, that It hears; and It itself is Thinker, Utterer, Hearer, Thought, Word, Audition; it is the One, and yet the All in All. Ah, the happiness, ah, the happiness of Being!”

Indeed, Pointland (a one-point space) is zero-dimensional by every concept of dimension that I know of. Yet there is something smaller: Nothingland — empty space, {\varnothing} — whose non-existent inhabitants must be perpetually enjoying the happiness of Non-Being.

What is the dimension of Nothingland?

In topology, the empty set has dimension {-1}. This fits the inductive definition of topological dimension, which is the smallest number {d} such that the space can be minced by removing a subset of dimension {\le d-1}. (Let’s say a space has been minced if what’s left has no connected subsets other than points.)

Thus, a nonempty finite (or countable) set has dimension {0}: it’s minced already, so we remove nothing, a set of dimension {-1}. A line or a curve is one-dimensional: it can be minced by removing a zero-dimensional subset, like the rational numbers.

A curve minced by removing a zero-dimensional set

The Flatland itself can be minced by removing a one-dimensional subset (e.g., circles with rational radius and rational coordinates of the center), so it is two-dimensional. And so on.

Flatland minced by removing a one-dimensional subset

The convention {\mathrm{dim}\,\varnothing = -1}, helpful in the definition, gets in the way later. For example, the topological dimension is subadditive under products: {\mathrm{dim}\,(A\times B)\le \mathrm{dim}\,A + \mathrm{dim}\,B} … unless both {A} and {B} are empty, because then {-1\le -2} is false. So the case {A=B=\varnothing} must be excluded from the product theorem. We would not have to do this if {\mathrm{dim}\,\varnothing} were defined to be {-\infty}.

Next, consider the Hausdorff dimension. Its definition is not inductive, but one has to introduce other concepts first. First, define the {d}-dimensional premeasure on scale {\delta>0}:

\displaystyle    \mathcal H^d_\delta (X) = \inf \sum_j (\mathrm{diam}\,{U_j})^{d}

where the infimum is taken over all covers of {X} by nonempty subsets {U_j} with {\mathrm{diam}\,{U_j}\le \delta}. Requiring {U_j} to be nonempty avoids the need to define the diameter of Nothingland, which would be another story. The empty space can be covered by the empty family of nonempty subsets. The sum of an empty set of numbers is {0}, and so {\mathcal H^d_\delta (\varnothing) = 0}.

Then we define the {d}-dimensional Hausdorff measure:

\displaystyle    \mathcal H^d (X) = \lim_{\delta\rightarrow0} \mathcal H^d_\delta (X)

and finally,

\displaystyle    \mathrm{dim}_H (X) = \inf \{ d \colon \mathcal H^d (X)=0\}

If in this last infimum we require {d>0}, the result is {\mathrm{dim}_H (\varnothing) =0}. But why make this restriction? The {d}-dimensional premeasures and measures make sense for all real {d}. It’s just that for nonempty {X}, we are raising some small (or even zero) numbers to a negative power, getting something large as a result. Consequently, every nonempty space has {\mathcal H^d = \infty} for all {d < 0}.

But {\mathcal H^d_\delta (\varnothing) = 0}, because the sum of an empty collection of numbers is zero. Hence {\mathcal H^d (\varnothing) = 0} for all real {d}, and this leads to {\mathrm{dim}_H\,\varnothing = -\infty}.

To have {\mathrm{dim}_H\,\varnothing = -\infty} is also convenient because the Hausdorff dimension is superadditive under products: {\mathrm{dim}_H\,(A\times B)\ge \mathrm{dim}_H\,A + \mathrm{dim}_H\,B}. This inequality was proved for general metric spaces as recently as 1995, by John Howroyd. If we don’t have {\mathrm{dim}_H\,\varnothing = -\infty}, then both factors {A} and {B} must be assumed nonempty.

So… should Nothingland have topological dimension {-1} and Hausdorff dimension {-\infty}? But that would violate the inequality {\mathrm{dim} (X)\le \mathrm{dim}_H (X)} which holds for every other separable metric space. In fact, for such spaces the topological dimension is simply the infimum of the Hausdorff dimension over all metrics compatible with the topology.

I am inclined to let the dimension of Nothingland be {-\infty} for every concept of dimension.

Words that contain UIO, and best-fitting lines

In Calculus I we spend a fair amount of time talking about how nicely the tangent line fits a smooth curve.

Tangent line to exponential function at x=0

But truth be told, it fits only near the point of tangency. How can we find the best approximating line for a function {f} on a given interval?

Best linear fit to the exponential function on [0,1]

A natural measure of quality of approximation is the maximum deviation of the curve from the line, {E(f;\alpha,\beta) = \max_{[a, b]} |f(x) - \alpha x-\beta|} where {\alpha,\beta} are the coefficients in the line equation, to be determined. We need {\alpha,\beta} that minimize {E(f;\alpha,\beta)}.

The Chebyshev equioscillation theorem is quite useful here. For one thing, its name contains the letter combination uio, which Scrabble players may appreciate. (Can you think of other words with this combination?) Also, its statement does not involve concepts outside of Calculus I. Specialized to the case of linear fit, it says that {\alpha,\beta} are optimal if and only if there exist three numbers { x_1<x_2<x_3} in {[a, b]} such that the deviations {\delta_i = f(x_i) - \alpha x_i-\beta}

  • are equal to {E(f;\alpha,\beta)} in absolute value: {|\delta_i| = E(f;\alpha,\beta)} for {i=1,2,3}
  • have alternating signs: {\delta_1 = -\delta_2 = \delta_3}

Let’s consider what this means. First, {f'(x_i) =\alpha} unless {x_i} is an endpoint of {[a,b]}. Since {x_2} cannot be an endpoint, we have {f'(x_2)=\alpha}.

Furthermore, {f(x) - \alpha x } takes the same value at {x_1} and {x_3}. This gives an equation for {x_2}

\displaystyle    f(x_1)-f'(x_2)x_1 = f(x_3)-f'(x_2) x_3    \qquad \qquad (1)

which can be rewritten in the form resembling the Mean Value Theorem:

\displaystyle    f'(x_2) = \frac{f(x_1)-f(x_3)}{x_1-x_3}   \qquad \qquad (2)

If {f'} is strictly monotone, there can be only one {x_i} with {f'(x_i)=\alpha}. Hence {x_1=a} and {x_3=b} in this case, and we find {x_2} by solving (2). This gives {\alpha = f'(x_2)}, and then {\beta} is not hard to find.

Here is how I did this in Sage:

var('x a b')
f = sin(x)  # or another function
df = f.diff(x)
a = 0       # left endpoint (for example)
b = pi/2    # right endpoint (for example)

That was the setup. Now the actual computation:

var('x1 x2 x3')
x1 = a
x3 = b
x2 = find_root(f(x=x1)-df(x=x2)*x1 == f(x=x3)-df(x=x2)*x3, a, b)
alpha = df(x=x2)
beta = 1/2*(f(x=x1)-alpha*x1 + f(x=x2)-alpha*x2)
show(plot(f,a,b)+plot(alpha*x+beta,a,b,color='red'))
Fitting a line to the sine curve

However, the algorithm fails to properly fit a line to the sine function on {[0,3\pi/2]}:

Not the best fit, obviously

The problem is, {f'(x)=\cos x} is no longer monotone, making it possible for two of {x_i} to be interior points. Recalling the identities for cosine, we see that these points must be symmetric about {x=\pi}. One of {x_i} must still be an endpoint, so either {x_1=a} (and {x_3=2\pi-x_2}) or {x_3=b} (and {x_1=2\pi-x_2}). The first option works:

Best fit on [0,3pi/2]

This same line is also the best fit on the full period {[0,2\pi]}. It passes through {(\pi,0)} and has the slope of {-0.2172336...} which is not a number I can recognize.

On the interval {[0,4\pi]}, all three of the above approaches fail:

Wrong!
Also wrong!
What were you thinking?

Luckily we don’t need a computer in this case. Whenever {|f|} has at least three points of maximum with alternating signs of {f}, the Chebyshev equioscillation theorem implies that the best linear fit is the zero function.

Unreasonable effectiveness of the left endpoint rule

The left endpoint rule and its twin, the right endpoint rule, are the ugly ducklings of integration methods. The left endpoint rule is just the average of the values of the integrand over the left endpoints of equal subintervals, multiplied by the length of the interval:

\displaystyle \int_a^b f(x)\,dx \approx \frac{b-a}{n} \sum_{i=0}^{n-1} f\Big(a+i\,\frac{b-a}{n}\Big)

Here is its graphical representation with {n=10} on the interval {[-1,1]}: the sample points are marked with vertical lines, the length of each line representing the weight given to that point. Every point has the same weight, actually.

Left endpoint rule

Primitive, ineffective, with error {O(1/n)} for {n} points used.

Simpson’s rule is more sophisticated, with error {O(1/n^4)}. It uses weights of three sizes:

Simpson’s rule

Gaussian quadrature uses specially designed (and difficult to compute) sample points and weights: more points toward the edges, larger weights in the middle.

Gaussian quadrature

Let’s compare these quadrature rules on the integral {\int_{-1}^1 e^x \cos \pi x\,dx}, using {10} points as above. Here is the function:

Nice smooth function
  • The exact value of the integral is {\dfrac{e^{-1}-e}{1+\pi^2}}, about -0.216.
  • Simpson’s rule gets within 0.0007 of the exact value. Well done!
  • Gaussian quadrature gets within 0.000000000000003 of the exact value. Amazing!
  • And the lame left endpoint rule outputs… a positive number, getting even the sign wrong! This is ridiculous. The error is more than 0.22.

Let’s try another integral: {\int_{-1}^1 \sqrt{4+\cos \pi x +\sin \pi x} \,dx}, again using {10} points. The function looks like this:

Another nice function

The integral can be evaluated exactly… sort of. In terms of elliptic integrals. And preferably not by hand:

Maple output, “simplified”
  • Simpson’s rule is within 0.00001 of the exact value, even better than the first time.
  • Gaussian quadrature is within 0.00000003, not as spectacular as in the first example.
  • And the stupid left endpoint rule is … accurate within 0.00000000000000005. What?
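
Here is a quick Python sketch of this comparison (my own; the reference values come from scipy's quad rather than from the closed forms, and Simpson's rule below uses 10 subintervals, i.e. 11 points, so the error figures may differ slightly from those quoted above):

import numpy as np
from scipy.integrate import quad

def left_endpoint(f, a, b, n):
    # average of f over the n left endpoints, times the interval length
    x = a + (b - a) * np.arange(n) / n
    return (b - a) * np.mean(f(x))

def simpson(f, a, b, n):
    # composite Simpson's rule on n subintervals (n even)
    x, h = np.linspace(a, b, n + 1), (b - a) / n
    w = np.ones(n + 1); w[1:-1:2] = 4; w[2:-1:2] = 2
    return h / 3 * np.dot(w, f(x))

def gauss(f, a, b, n):
    # n-point Gauss-Legendre quadrature
    t, w = np.polynomial.legendre.leggauss(n)
    return (b - a) / 2 * np.dot(w, f((b - a) / 2 * t + (a + b) / 2))

f1 = lambda x: np.exp(x) * np.cos(np.pi * x)
f2 = lambda x: np.sqrt(4 + np.cos(np.pi * x) + np.sin(np.pi * x))

for f in (f1, f2):
    exact = quad(f, -1, 1)[0]
    print([abs(rule(f, -1, 1, 10) - exact)
           for rule in (left_endpoint, simpson, gauss)])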

The integral of a smooth periodic function over its period amounts to integration over a circle. When translated to the circle, the elaborate placement of Gaussian sample points is… simply illogical. There is no reason to bunch them up at any particular point: there is nothing special about (-1,0) or any other point of the circle.

Gaussian nodes on a circle

The only natural approach here is the simplest one: equally spaced points, equal weights. Left endpoint rule uses it and wins.

Equally spaced points

The Nelder-Mead minimization algorithm

It is easy to find the minimum of {f(x,y) = x^2+16y^2} if you are human. For a computer this takes more work:

Search for the minimum of x^2+16y^2

The animation shows a simplified form of the Nelder-Mead algorithm: a simplex-based minimization algorithm that does not use any derivatives of {f}. Such algorithms are easy to come up with for functions of one variable, e.g., the bisection method. But how to minimize a function of two variables?

A natural way to look for a minimum is to slide along the graph in the direction opposite to {\nabla f}; this is the method of steepest descent. But for computational purposes we need a discrete process, not a continuous one. Instead of thinking of a point sliding down, think of a small tetrahedron tumbling down the graph of {f}: a discrete process of flips and flops. Each flip replaces the triangle of contact by an adjacent triangle sharing a side with it; the triangle is flipped in the direction away from the highest vertex.

This is already a reasonable minimization algorithm: begin with a triangle {T}; find the values of {f} at the vertices of {T}; reflect the triangle away from the highest value; if the reflected point {R} has a smaller value, move there; otherwise stop.

But there’s a problem: the size of triangle never changes in this process. If {T} is large, we won’t know where the minimum is even if {T} eventually covers it. If {T} is small, it will be moving in tiny steps.

Perhaps, instead of stopping when reflection does not work anymore, we should reduce the size of {T}. It is natural to contract it toward the “best” vertex (the one with the smallest value of {f}), replacing two other vertices with the midpoints of corresponding sides. Then repeat. The stopping condition can be the values of {f} at all vertices becoming very close to one another.

This looks clever, but the results are unspectacular. The algorithm is prone to converge to a non-stationary point where just by an accident the triangle attains a nearly horizontal position. The problem is that the triangle, while changing its size, does not change its shape to fit the geometry of the graph of {f}.

The Nelder-Mead algorithm adapts the shape of the triangle by including the possibility of stretching while flipping. Thus, the triangle can grow smaller and larger, moving faster when the path is clear, or becoming very thin to fit into a narrow passage. Here is a simplified description:

  • Begin with some triangle {T}.
  • Evaluate the function {f} at each vertex. Call the vertices {W,G,B} where {W} is the worst one (the largest value of {f}) and {B} is the best.
  • Reflect {W} about the midpoint of the good side {GB}. Let {R} be the reflected point.
  • If {f(R)<f(B)}, then we consider moving even further in the same direction: extend the line {WR} beyond {R} by half the length of {WR}, obtaining the extended point {E}. Choose between {R} and {E} based on where {f} is smaller, and make the chosen point a new vertex of our triangle, replacing {W}.
  • Else, do not reflect and instead shrink the triangle toward {B}.
  • Repeat, stopping when we either exceed the number of iterations or all values of {f} at the vertices of triangle become nearly equal.

(The full version of the Nelder-Mead algorithm also includes the comparison of {R} with {G}, and also involves trying a point inside the triangle.)

Rosenbrock’s function

This is Rosenbrock’s function {f(x,y)=100(x^2-y)^2 + (x-1)^2}, one of the standard torture tests for minimization algorithms. Its graph has a narrow valley along the parabola {y=x^2}. At the bottom of the valley, the incline toward the minimum {(1,1)} is relatively small compared to the steep walls surrounding the valley. The steepest descent trajectory quickly reaches the valley but dramatically slows down there, moving in tiny zig-zagging steps.

The algorithm described above gets within {0.001} of the minimum in 65 steps.
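
For comparison (not part of the post), SciPy's ready-made Nelder-Mead implementation can be pointed at the same function; started from one vertex of the initial triangle used in the code below, it also homes in on {(1,1)}:

from scipy.optimize import minimize

def f(p):                          # Rosenbrock's function from the post
    x, y = p
    return 100*(x**2 - y)**2 + (x - 1)**2

result = minimize(f, x0=[0.0, -1.5], method='Nelder-Mead',
                  options={'xatol': 1e-8, 'fatol': 1e-8, 'maxiter': 1000})
print(result.x, result.nit)        # should land near (1, 1)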

Minimizing Rosenbrock’s function

In conclusion, here is the Scilab code implementing this algorithm.

deff('z=f(x,y)', 'z=100*(x.^2-y).^2+(x-1).^2')   // Rosenbrock's function (see above)
x = -0.4:0.1:1.6; y = -2:0.1:1.4          // viewing window  
[X,Y] = meshgrid(x,y); contour(x,y,f(X,Y)',30)  // contour plot
plot([1],[1],'r+')                             // minimum point
tol = 10^(-6)
n = 0
T = [0, -1.5 ; 1.4, -1.5; 1.5, 0.5]        //  initial triangle
for i=1:3 
    values(i) = f(T(i,1), T(i,2))
end
while (%T) 
    xpoly(T(:,1),T(:,2),'lines',1)         // draw the triangle  
    [values, index] = gsort(values)          // sort the values 
    T = T(index,:)       
    if values(1)-values(3) < tol            // close enough?
        mfprintf(6, "Minimum at (%.3f, %.3f)",T(3,1),T(3,2))
        break 
    end
    R = T(2,:) + T(3,:) - T(1,:)             // reflected  
    fR = f(R(1),R(2))
    if fR < values(3)                         
        E = 1.5*T(2,:) + 1.5*T(3,:) - 2*T(1,:)  // extended  
        fE = f(E(1),E(2))
        if fE < fR 
            T(1,:)=E; values(1)=fE     // pick extended 
        else
            T(1,:)=R; values(1)=fR     // pick reflected 
        end
    else 
        for i=1:2
            T(i,:) = (T(i,:)+T(3,:))/2      // shrink
            values(i) = f(T(i,1), T(i,2))      
        end
    end
    n = n+1
    if n >= 200
        disp('Failed to converge'); break    //  too bad 
    end
end

Hertzsprung–Russell diagram of Stack Exchange sites

The Hertzsprung–Russell diagram is a scatter plot of stars by temperature and luminosity.

Hertzsprung–Russell diagram*

Traditionally, it is shown with temperature axis pointing from right to left, which I don’t really like.

The Stack Exchange family of sites is not (yet) as numerous as the stars in the Universe; there are only 120 or so of them. But they can also be organized on a two-parameter log-log scatterplot. The two parameters are: the total number of questions (an intrinsic characteristic, like surface temperature) and the number of daily visits (luminosity, in Internet terms).

The linear scale chart was not going to work, due to the supergiant size of Stack Overflow:

Linear scale

Even on the log-log scale Stack Overflow is an outlier, but within reason:

log-log scale

The colors follow Stack Exchange classification: Technology, Science, Culture & Recreation, Life & Arts, Business, Professional. The largest of science sites, and the second largest overall, is Mathematics, although it trails several non-science sites in luminosity.

Annotated version of the above diagram:

Annotated diagram

Both Mathematics and MathOverflow have low traffic compared to their size: perhaps the Internet audience is just not that into math? On the other hand, the young Mathematics Educators site is way up in the traffic category.

Ironically, Astronomy is a low-luminosity site, trailing in traffic the much smaller Earth Science. Other annotated science sites are well established by now: Physics, Statistics, and (slow-moving) Theoretical Computer Science.

The only two non-technology sites that beat Mathematics in traffic are Gaming and English. Gaming isn’t really an exception here, since all large “technology” sites revolve around computers.

Three of the sites have extremely low traffic currently: Beer, Italian, and Expatriates. If this does not improve, they may be shut down… or perhaps merged into one: Beer for Italian Expatriates.


(*) “Hertzsprung-Russel StarData” by ESO, http://www.eso.org/public/images/. Licensed under CC BY 3.0 via Wikimedia Commons.