Parsing calculus with regular expressions

As a service* to math students everywhere (especially those taking calculus), I started Mathematics.SE Index. The plan is to have a thematic catalog of common exercises in college-level mathematics, each linked to a solution posted on Math.SE.

As of now, the site has reasonably complete sections on Limits and Series, with a rudimentary section on binomial sums. All lists are automatically generated. Initial filtering was done with a Data Explorer SQL query, using tags and keywords in the question body. The query also took into account the view count (i.e., how often the problem is searched for) and the existence of upvoted answers.

The results of the query were processed with a Google Sheets script: a bunch of regular expressions extracted LaTeX markup matching a desired pattern, checked its integrity [not of the academic kind], and transformed it into WordPress-compatible markup.
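
To give the flavor of that step, here is a minimal sketch in Python. Everything in it is illustrative: the pattern, the integrity check, and the function name are my stand-ins, not the actual script.

import re

# a hypothetical extraction step: pull \lim_{...} formulas out of a question body
LIMIT_PATTERN = re.compile(r'\$\$?\s*\\lim_\{[^$]+\}[^$]+\$\$?')

def extract_limits(body):
    formulas = LIMIT_PATTERN.findall(body)
    # integrity check: balanced braces, so a truncated formula is not indexed
    return [f for f in formulas if f.count('{') == f.count('}')]

print(extract_limits(r"Evaluate $\lim_{x\to 0}\frac{\sin x}{x}$."))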

Plans for the near future: integrals (especially improper), basic proofs by induction, {\epsilon-\delta}, maybe some group theory and differential equations… depends on how easy it is to teach these topics to regular expressions.

(*) Or disservice, I wouldn’t know.

Squarish polynomials

For some reason I wanted to construct polynomials approximating this piecewise constant function {f}:

How to approximate this with polynomials? So square

Of course the approximation cannot be uniform, since the function is not continuous. But it can be achieved in the sense of convergence of graphs in the Hausdorff metric: their limit should be the “graph” shown above, with the vertical line included. In concrete terms, this means for every {\epsilon>0} there is {N} such that for {n\ge N} the polynomial {p_n} satisfies

\displaystyle  |p_n-f|\le \epsilon\quad \text{ on }\ [0,2]\setminus [1-\epsilon,1+\epsilon]

and also

\displaystyle  -\epsilon\le  p_n  \le 1+\epsilon\quad \text{ on }\ [1-\epsilon,1+\epsilon]

How to get such {p_n} explicitly? I started with the functions {f_m(x) = \exp(-x^m)} when {m} is large. The idea is that as {m\rightarrow\infty}, the limit of {\exp(-x^m)} is what is wanted: {1} when {x<1}, {0} when {x>1}. Also, for each {m} there is a Taylor polynomial {T_{m,n}} that approximates {f_m} uniformly on {[0,2]}. Since the Taylor series is alternating, it is not hard to find a suitable {n} (the short script after the list reproduces these degrees). Let’s shoot for {\epsilon=0.01} in the Taylor remainder and see where this leads:

  • Degree {7} polynomial for {\exp(-x)}
  • Degree {26} polynomial for {\exp(-x^2)}
  • Degree {69} polynomial for {\exp(-x^3)}
  • Degree {180} polynomial for {\exp(-x^4)}
  • Degree {440} polynomial for {\exp(-x^5)}
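
These degrees take only a few lines to reproduce: since the series is alternating, the remainder on {[0,2]} is bounded by the first omitted term at {x=2}. A sketch in Python (the exact bound is my reading of the argument):

from math import factorial

# smallest n with (2^m)^(n+1)/(n+1)! < 0.01; T_{m,n} has degree m*n in x
for m in range(1, 6):
    n = 0
    while (2**m)**(n + 1) / factorial(n + 1) >= 0.01:
        n += 1
    print(m, m * n)   # prints the degrees 7, 26, 69, 180, 440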

The results are unimpressive, though:

Taylor polynomials of exp(-x^m) are not so square

To get within {0.01} of the desired square-ness, we need {\exp(-0.99^m)\ge 0.99} (the condition {\exp(-1.01^m)\le 0.01} is less demanding). This means {m\ge 458}. Then, to have the Taylor remainder bounded by {0.01} at {x=2}, we need {2^{458n}/n! < 0.01}. Instead of messing with Stirling’s formula, just observe that {2^{458n}/n!} does not even begin to decrease until {n} exceeds {2^{458}}, which is more than {10^{137}}. That’s a … high degree polynomial. I would not try to ask a computer algebra system to plot it.

Bernstein polynomials turn out to work better. On the interval {[0,2]} they are given by

\displaystyle    p_n(x) = 2^{-n} \sum_{k=0}^n f(2k/n) \binom{n}{k} x^k (2-x)^{n-k}

To avoid dealing with {f(1)}, it is better to use odd degrees: when {n} is odd, none of the sample points {2k/n} equals {1}. For comparison, I used the same or slightly smaller degrees as above: {7, 25, 69, 179, 439}.
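
For the record, here is a minimal sketch (plain Python, not the code behind the pictures) of evaluating such a polynomial:

from math import comb

def bernstein_square(n):
    # Bernstein polynomial on [0,2] for f = 1 on [0,1), f = 0 on (1,2]
    coeffs = [1.0 if 2*k/n < 1 else 0.0 for k in range(n + 1)]
    def p(x):
        return sum(c * comb(n, k) * x**k * (2 - x)**(n - k)
                   for k, c in enumerate(coeffs)) / 2**n
    return p

p = bernstein_square(25)
print(p(0.5), p(1.5))   # close to 1 and close to 0, respectively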

Squarish Bernstein polynomials

Looks good. But I don’t know of a way to estimate the degree of the Bernstein polynomial required to obtain Hausdorff distance less than a given {\epsilon} (say, {0.01}) from the square function.

Winding map and local injectivity

The winding map {W} is a humble example that is conjectured to be extremal in a long-standing open problem. Its planar version is defined in polar coordinates {(r,\theta)} by

\displaystyle    (r,\theta) \mapsto (r,2\theta)

All this map does is stretch every circle around the origin by a factor of two — tangentially, without changing its radius. As a result, the circle winds around itself twice. The map is not injective in any neighborhood of the origin {r=0}.

2D winding

The 3D version of the winding map has the same formula, but in cylindrical coordinates. It winds the space around the {z}-axis, like this:

3D winding

In the tangential direction the space is stretched by a factor of {2}; the radial coordinate is unchanged. More precisely: the singular values of the derivative matrix {DW} (which exists everywhere except when {r=0}) are {2,1,1}. Hence, the Jacobian determinant {\det DW} is {2}, which makes sense since the map covers the space by itself, twice.

In general, when the singular values of the matrix {A} are {\sigma_1\ge \dots \ge\sigma_n}, the ratio {\sigma_n^{-n} |\det A|} is called the inner distortion of {A}. The word “inner” refers to the fact that {\sigma_n} is the radius of the ball inscribed in the image of the unit ball under {A}; so the inner distortion compares the volume of the image of the unit ball to the {n}th power of this inner radius.

For a map, like {W} above, the inner distortion is the (essential) supremum of the inner distortion of its derivative matrices over its domain. So, the inner distortion of {W} is {2}, in every dimension. Another example: the linear map {(x,y)\mapsto (3x,-2y)} has inner distortion {3/2}.
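
Here is a quick numerical check of these values (a sketch; the sample point {(0.3,0.7)} is arbitrary): compute the derivative of the planar winding map by finite differences and form {|\det A|/\sigma_n^n}.

import numpy as np

def W(p):
    # planar winding map in Cartesian coordinates
    x, y = p
    r, th = np.hypot(x, y), np.arctan2(y, x)
    return np.array([r * np.cos(2*th), r * np.sin(2*th)])

def jacobian(f, p, h=1e-6):
    # derivative matrix by central differences
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (f(p + e) - f(p - e)) / (2 * h)
    return J

def inner_distortion(A):
    sigma = np.linalg.svd(A, compute_uv=False)
    return abs(np.linalg.det(A)) / sigma[-1] ** len(A)

print(inner_distortion(jacobian(W, np.array([0.3, 0.7]))))    # ~2 (winding map)
print(inner_distortion(np.array([[3.0, 0.0], [0.0, -2.0]])))  # 1.5 for (x,y) -> (3x,-2y)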

It is known that there is a constant {K>1} such that if the inner distortion of a map {F} is less than {K}, the map is locally injective: every point has a neighborhood in which {F} is injective. This was proved by Martio, Rickman, and Väisälä in 1971. They conjectured that {K=2} is optimal: that is, the winding map has the least inner distortion among all maps that are not locally injective.

But at present, there is still no explicit nontrivial lower estimate for {K}; for example, we don’t know whether inner distortion less than {1.001} implies local injectivity.

Using a paraboloid to cover points with a disk

Find the equation of the tangent line to the parabola {y=x^2}… borrring calculus drill.

Okay. Draw two tangent lines to the parabola, then. Where do they intersect?

Two tangent lines

If the points of tangency are at {x=a} and {x=b}, then the tangent lines are
{y=2a(x-a)+a^2} and {y=2b(x-b)+b^2}. Equate and solve:

\displaystyle    2a(x-a)+a^2 = 2b(x-b)+b^2 \implies x = \frac{a+b}{2}

Neat! The {x}-coordinate of the intersection point is midway between {a} and {b}.

What does the {y}-coordinate of the intersection tell us? It simplifies to

\displaystyle    2a(b-a)/2+a^2 = ab

the geometric meaning of which is not immediately clear. But maybe we should look at the vertical distance from the intersection to the parabola itself. That would be

\displaystyle    x^2 - y = \left(\frac{a+b}{2}\right)^2 -ab = \left(\frac{a-b}{2}\right)^2

This is the square of the distance from the midpoint to {a} and {b}. In other words, the squared radius of the smallest “disk” covering the set {\{a,b\}}.
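
The computation is easy to verify symbolically; a sketch using SymPy (assuming it is available):

import sympy as sp

a, b, x = sp.symbols('a b x')
# intersection of the two tangent lines
sol = sp.solve(sp.Eq(2*a*(x - a) + a**2, 2*b*(x - b) + b**2), x)
print(sol)   # [a/2 + b/2], valid for a != b
# y-coordinate of the intersection
print(sp.expand((2*a*(x - a) + a**2).subs(x, (a + b)/2)))   # a*b
# vertical distance from the intersection to the parabola
print(sp.factor(((a + b)/2)**2 - a*b))   # (a - b)**2/4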


The same happens in higher dimensions, where the parabola is replaced with the paraboloid {z=|\mathbf x|^2}, {\mathbf x = (x_1,\dots,x_n)}.

Paraboloid

Indeed, the tangent planes at {\mathbf a} and {\mathbf b} are
{z=2\mathbf a\cdot (\mathbf x-\mathbf a)+|\mathbf a|^2} and {z=2\mathbf b\cdot (\mathbf x-\mathbf b)+|\mathbf b|^2}. Equate and solve:

\displaystyle    2(\mathbf a-\mathbf b)\cdot \mathbf x = |\mathbf a|^2-|\mathbf b|^2 \implies \left(\mathbf x-\frac{\mathbf a+\mathbf b}{2}\right)\cdot (\mathbf a-\mathbf b) =0

So, {\mathbf x} lies on the plane equidistant from {\mathbf a} and {\mathbf b}. And, as above, at the midpoint {\mathbf x=(\mathbf a+\mathbf b)/2},

\displaystyle    |\mathbf x|^2 -z = \left|\frac{\mathbf a-\mathbf b}{2}\right|^2

is the square of the radius of the smallest ball covering both {\mathbf a} and {\mathbf b}.


The above observations are useful for finding the smallest disk (or ball) covering given points. For simplicity, I stick to two dimensions: covering points on a plane with the smallest disk possible. The algorithm is:

  1. Given points {(x_i,y_i)}, {i=1,\dots,n}, write down the equations of the tangent planes to the paraboloid {z=x^2+y^2}. These are {z=2(x_i x+y_i y)-(x_i^2+y_i^2)}.
  2. Find the point {(x,y,z)} that minimizes the vertical distance to the paraboloid, that is {x^2+y^2-z}, and lies (non-strictly) below all of these tangent planes.
  3. The {x,y} coordinates of this point are the center of the smallest disk covering the points (known as the Chebyshev center of the set). Also, {\sqrt{x^2+y^2-z}} is the radius of this disk, known as the Chebyshev radius.

The advantage conferred by the paraboloid model is that at step 2 we are minimizing a quadratic function subject to linear constraints. Implementation in Sage:

# points to be covered by the smallest disk
points = [[1,3], [1.5,2], [3,2], [2,-1], [-1,0.5], [-1,1]]
# each constraint is nonnegative exactly when (x[0],x[1],x[2]) lies below
# the tangent plane z = 2*(p[0]*x + p[1]*y) - (p[0]^2 + p[1]^2)
constraints = [lambda x, p=q: 2*x[0]*p[0]+2*x[1]*p[1]-p[0]^2-p[1]^2-x[2] for q in points]
# vertical distance from (x[0],x[1],x[2]) to the paraboloid z = x^2 + y^2
target = lambda x: x[0]^2+x[1]^2-x[2]
m = minimize_constrained(target, constraints, [0,0,0])
# Chebyshev center (m[0],m[1]) and radius sqrt(m[0]^2+m[1]^2-m[2])
circle((m[0],m[1]), sqrt(m[0]^2+m[1]^2-m[2]), color='red') + point(points)

Smallest disk covering the points

Credit: this post is an expanded version of a comment by David Speyer on last year’s post Covering points with caps, where I considered the same problem on a sphere.

The least distorted curves and surfaces

Every subset {A\subset \mathbb R^n} inherits the metric from {\mathbb R^n}, namely {d(a,b)=|a-b|}. But we can also consider the intrinsic metric on {A}, defined as follows: {\rho_A(a,b)} is the infimum of the lengths of curves that connect {a} to {b} within {A}. Let’s assume there is always such a curve of finite length, and therefore {\rho_A} is always finite. All the properties of a metric hold, and we also have {|a-b|\le \rho_A(a,b)} for all {a,b\in A}.

If {A} happens to be convex, then {\rho_A(a,b)=|a-b|} because any two points are joined by a line segment. There are also some nonconvex sets for which {\rho_A} coincides with the Euclidean distance: for example, the punctured plane {\mathbb R^2\setminus \{(0,0)\}}. Although we can’t always get from {a} to {b} in a straight line, the required detour can be as short as we wish.

On the other hand, for the set {A=\{(x,y)\in \mathbb R^2 : y\le |x|\}} the intrinsic distance is sometimes strictly greater than Euclidean distance.

Nontrivial distortion (oops, the equation in the picture was supposed to be y=|x|, without the square)

For example, the shortest curve from {(-1,1)} to {(1,1)} has length {2\sqrt{2}}, while the Euclidean distance is {2}. This is the worst ratio for pairs of points in this set, although proving this claim would be a bit tedious. Following Gromov (Metric structures on Riemannian and non-Riemannian spaces), define the distortion of {A} as the supremum of the ratios {\rho_A(a,b)/|a-b|} over all pairs of distinct points {a,b\in A}. (Another term in use for this concept: optimal constant of quasiconvexity.) So, the distortion of the set {\{(x,y) : y\le |x|\}} is {\sqrt{2}}.

Gromov observed (along with posing the Knot Distortion Problem) that every simple closed curve in a Euclidean space (of any dimension) has distortion at least {\pi/2}. That is, the least distorted closed curve is the circle, for which the half-length/diameter ratio is exactly {\pi/2}.

Distortion of a closed curve

Here is the proof. Parametrize the curve by arclength: {\gamma\colon [0,L]\rightarrow \mathbb R^n}. For {0\le t\le L/2} define {\Gamma(t)=\gamma(t)-\gamma(t+L/2)} and let {r=\min_t|\Gamma(t)|}. The curve {\Gamma} connects two antipodal points of magnitude at least {r} and stays outside of the open ball of radius {r} centered at the origin. The nearest-point projection onto that ball (a convex set, so the projection does not increase length) sends {\Gamma} to a curve on the sphere of radius {r} joining two antipodal points; therefore, the length of {\Gamma} is at least {\pi r}. On the other hand, {\Gamma} is a 2-Lipschitz map, so its length is at most {2(L/2)=L}. Thus {\pi r\le L}, that is, {r\le L/\pi}. Take any {t} that realizes the minimum of {|\Gamma|}. The points {a=\gamma(t)} and {b=\gamma(t+L/2)} satisfy {|a-b|\le L/\pi} and {\rho_A(a,b)=L/2}, so the distortion is at least {(L/2)/(L/\pi)=\pi/2}. Done.
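
As a sanity check, the distortion of a circle can be approximated numerically; a sketch (the sample size {1000} is arbitrary):

import numpy as np

# sample the unit circle; intrinsic distance = shorter arc, ambient = chord
n = 1000
t = np.linspace(0, 2*np.pi, n, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
best = 0.0
for i in range(n):
    chord = np.linalg.norm(pts - pts[i], axis=1)
    arc = np.abs(t - t[i])
    arc = np.minimum(arc, 2*np.pi - arc)
    ratio = arc[chord > 0] / chord[chord > 0]
    best = max(best, ratio.max())
print(best, np.pi/2)   # the supremum of arc/chord is pi/2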

Follow-up question: what are the least distorted closed surfaces (say, in {\mathbb R^3})? It’s natural to expect that a sphere, with distortion {\pi/2}, is the least distorted. But this is false. An exercise from Gromov’s book (which I won’t spoil): Find a closed convex surface in {\mathbb R^3} with distortion less than { \pi/2}. (Here, “convex” means the surface bounds a convex solid.)

Higher order reflections

Mathematical reflections, not those supposedly practiced in metaphilosophy.

Given a function {f} defined for {x\ge 0}, we have two basic ways to reflect it about {x=0}: even reflection {f(-x)=f(x)} and odd reflection {f(-x)=-f(x)}. Here is the even reflection of the exponential function {e^x}:

Even reflection

The extended function is not differentiable at {0}. The odd reflection, pictured below, is not even continuous at {0}. But to be fair, it has the same slope to the left and to the right of {0}, unlike the even reflection.

Odd reflection

Can we reflect a function preserving both continuity and differentiability? Yes, this is what higher-order reflections are for. They define {f(-x)} not just in terms of {f(x)} but also involve values at other points, like {f(x/2)}. Here is one such smart reflection:

\displaystyle    f(-x) = 4f(x/2)-3f(x)  \qquad\qquad\qquad (1)

Differentiable reflection

Indeed, letting {x\rightarrow 0^+}, we observe continuity: both sides converge to {f(0)}. Taking derivatives of both sides, we get

\displaystyle  -f'(-x) = 2f'(x/2) - 3f'(x)

where the limits of both sides as {x\rightarrow 0^+} again agree: they are {-f'(0)}.

A systematic way to obtain such reflection formulas is to consider what they do to monomials: {1}, {x}, {x^2}, etc. A formula that reproduces the monomials up to degree {d} will preserve the derivatives up to order {d}. For example, plugging {f(x)=1} or {f(x)=x} into (1) we get a valid identity. With {f(x)=x^2} the equality breaks down: {x^2} on the left, {-2x^2} on the right. As a result, the curvature of the graph shown above is discontinuous: at {x=0} it changes sign without passing through {0}.

To fix this, we’ll need to use a third point, for example {x/4}. It’s better not to use points like {2x}, because when the original domain of {f} is a bounded interval {[0,b]}, we probably want the reflection to be defined on all of {[-b,b]}.

So we look for coefficients {A,B,C} such that {f(-x)=Af(x/4)+Bf(x/2)+Cf(x)} holds as identity for {f(x)=1,x,x^2}. The linear system {A+B+C=1}, {A/4+B/2+C=-1}, {A/16+B/4+C=1} has the solution {A=16}, {B=-20}, {C=5}. This is our reflection formula, then:

\displaystyle  f(-x) = 16f(x/4)-20f(x/2)+5f(x)  \qquad\qquad\qquad (2)

And this is the result of reflecting {\exp(x)} according to (2):

Twice differentiable reflection

Now the curvature of the graph is continuous. One could go on, but since the human eye is not sensitive to discontinuities of the third derivative, I’ll stop here.


In case you don’t believe the last paragraph, here is the reflection with three continuous derivatives, given by

\displaystyle  f(-x) = \frac{640}{7} f(x/8) - 144f(x/4)+60f(x/2)-\frac{45}{7}f(x)

and below it, the extension given by (2). For these plots I used Desmos because plots in Maple (at least in my version) have pretty bad aliasing.
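
Both sets of coefficients come from the same kind of linear system, which is easy to solve for any order; a sketch (the function name is mine):

import numpy as np

def reflection_coefficients(d):
    # coefficients A_i in f(-x) = sum_i A_i f(s_i x), s_i = 2^{-(d-i)},
    # chosen so the formula is exact on the monomials 1, x, ..., x^d
    s = 2.0 ** np.arange(-d, 1)          # scales 1/2^d, ..., 1/2, 1
    M = np.vander(s, increasing=True).T  # M[k, i] = s_i^k
    b = (-1.0) ** np.arange(d + 1)       # x^k should go to (-x)^k
    return np.linalg.solve(M, b)

print(reflection_coefficients(2))   # [ 16. -20.   5.], formula (2)
print(reflection_coefficients(3))   # [640/7, -144, 60, -45/7], as above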

Three continuous derivatives
Two continuous derivatives

Also, cubic splines have only two continuous derivatives and they connect the dots naturally.

Walking dogs and comparing sticks

Then he dropped two in at once, and leant over the bridge to see which of them would come out first; and one of them did; but as they were both the same size, he didn’t know if it was the one which he wanted to win, or the other one. – A. A. Milne

It’s useful to have a way of measuring how different two sticks (or fir cones) are in size, shape, and position in a river. Yes, we have the Hausdorff distance {d_H} between sets, but it does not take into account the orientation of sticks. And it performs poorly when the sticks are broken: the Hausdorff distance between these blue and red curves does not capture the disparity of their shapes:

Small Hausdorff distance, totally different curves

Indeed, {d_H} is relatively small here, because from any point of the red curve one can easily jump to some point of the blue curve, and the other way around. However, this kind of measurement completely ignores the fact that curves are meant to be traveled along in a continuous, monotone way.

There is a concept of distance that is better suited for comparing curves: the Fréchet distance {d_F}. Wikipedia gives this (folklore) description:

Imagine a dog walking along one curve and the dog’s owner walking along the other curve, connected by a leash. Both walk continuously along their respective curve from the prescribed start point to the prescribed end point of the curve. Both may vary their speed, and even stop, at arbitrary positions and for arbitrarily long. However, neither can backtrack. The Fréchet distance between the two curves is the length of the shortest leash that is sufficient for traversing both curves in this manner.


To get started, let’s compute this distance for two oriented line segments {AB} and {CD}. The length of the leash must be at least {|AC|} in order to begin the walk, and at least {|BD|} to finish. So,

\displaystyle    d_F(AB,CD) \ge \max(|AC|, |BD|)

In fact, equality holds here. In order to bound {d_F} from above, we just need one parametrization of the segments. Take the parametrization proportional to length:

\displaystyle    P=(1-t)A+tB,\quad Q=(1-t)C+tD

Then {|PQ|^2} is a quadratic polynomial in {t}. Without doing any computations, we can say the coefficient of {t^2} is nonnegative, because {|PQ|^2} cannot be negative for any {t\in\mathbb R}. Hence, this polynomial is a convex function of {t}, which implies that its maximum on the interval {[0,1]} is attained at an endpoint. And the endpoints we already considered. (By the way, this proof works in every CAT(0) metric space.)
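
For polygonal curves there is a standard discrete analogue of {d_F}, the coupling distance of Eiter and Mannila, computable by dynamic programming; a sketch:

import numpy as np

def discrete_frechet(P, Q):
    # discrete Frechet distance between polylines given by vertex lists
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    m, n = len(P), len(Q)
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    C = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                best = 0.0
            elif i == 0:
                best = C[0, j-1]
            elif j == 0:
                best = C[i-1, 0]
            else:
                best = min(C[i-1, j], C[i-1, j-1], C[i, j-1])
            C[i, j] = max(best, D[i, j])
    return C[-1, -1]

# two segments AB and CD: the answer is max(|AC|, |BD|), as derived above
print(discrete_frechet([[0, 0], [1, 0]], [[0, 1], [1, 2]]))   # 2.0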


In general, the Fréchet distance is not realized by constant-speed parametrization. Consider these two curves, each with a long detour:

Symmetry breaking

It would be impractical for the dog and the owner to go on the detour at the same time. One should go first while the other waits for his/her/its turn. In particular, we see symmetry breaking here: even for two perfectly symmetric curves, the Fréchet-optimal parametrizations would not be symmetric to each other.


It is not obvious from the definition of {d_F} whether it is a metric; as usual, it’s the triangle inequality that is suspect. However, {d_F} indeed satisfies the triangle inequality. To prove this, we should probably formalize the definition of {d_F}. Given two continuous maps {f,g} from {[0,1]} into {\mathbb R^2} (or any metric space), define

\displaystyle    d_F(f,g) = \inf_{\phi,\psi}\max_{[0,1]} |f\circ \phi-g\circ \psi|

where {\phi} and {\psi} range over all nondecreasing functions from {[0,1]} onto itself. Actually, we can require {\phi} and {\psi} to be strictly increasing (it only takes a small perturbation), which in dog/owner terms means they are not allowed to stop, but can mosey along as slowly as they want. Then we don’t need both {\phi} and {\psi}, since

\displaystyle    \max_{[0,1]} |f\circ \phi-g\circ \psi| = \max_{[0,1]} |f -g\circ (\psi\circ \phi^{-1})|

So, given {f,g,h} we can pick {\phi} such that {\max_{[0,1]} |f-g\circ \phi|} is within {\epsilon} of {d_F(f,g)}; then pick {\psi} such that {\max_{[0,1]} | g\circ \phi - h\circ \psi | } is within {\epsilon} of {d_F(g,h)}. Then

\displaystyle    d_F(f,h)\le \max_{[0,1]} |f- g\circ \phi|+\max_{[0,1]} |g\circ \phi - h\circ \psi|   \le d_F(f,g)+d_F(g,h)+2\epsilon

and the triangle inequality follows.

Continuity of circumcenter and circumradius

For a bounded set {A} on the plane (or in any Euclidean space) one can define the circumcenter {c(A)} and circumradius {r(A)} as follows: {r(A)} is the smallest radius of a closed disk containing {A}, and {c(A)} is the center of such a disk. (Other terms in use: Chebyshev center and Chebyshev radius.)

Smallest disk covering the points

The fact that {c(A)} is well-defined may not be obvious: what if there are multiple disks of radius {r(A)} that contain {A}? To investigate, introduce the farthest distance function {f_A(x) = \sup_{a\in A} |x-a|}. By definition, {c(A)} is where {f_A} attains its minimum. The function {f_A} is convex, being the supremum of a family of convex functions. However, that does not guarantee the uniqueness of its minimum. We have two issues here:

  • {x\mapsto |x-a|} is not strictly convex
  • the supremum of an infinite family of strictly convex functions can fail to be strictly convex (like {\sup_{1<p<2} x^p = x} on the interval {[0,1]}).

The first issue is resolved by squaring {f_A}. Indeed, {f_A^2} attains its minimum at the same place where {f_A} does, and {f_A(x)^2 = \sup_{a\in A} |x-a|^2} where each term {|x-a|^2} is strictly convex.

Also, we don’t want to lose strict convexity when taking the supremum over {a\in A}. For this purpose, we must replace strict convexity with something more robust. The appropriate substitute is strong convexity: a function {f} is strongly convex if there is {\lambda>0} such that {f(x)-\lambda |x|^2} is convex. Let’s say that {f} is {\lambda}-convex in this case.

Since {|x-a|^2-|x|^2 = -2\langle x,a\rangle + |a|^2} is a convex (in fact affine) function of {x}, we see that {|x-a|^2} is {1}-convex. This property passes to the supremum: subtracting {|x|^2} from the supremum is the same as subtracting it from each term. Strong convexity implies strict convexity and with it, the uniqueness of the minimum point. So, {c(A)}, the minimum point of {f_A^2}, is uniquely defined. (Finding it in practice may be difficult; the spherical version of this problem is considered in Covering points with caps.)


Having established uniqueness, it is natural to ask about stability, or more precisely, the continuity of {c(A)} and {r(A)} with respect to {A}. Introduce the Hausdorff distance {d_{\mathcal H}} on the set of bounded subsets. By definition, {d_{\mathcal H}(A,B)\le \delta} if {A} is contained in the {\delta}-neighborhood of {B}, and {B} is contained in the {\delta}-neighborhood of {A}. It is easy to see that {r(B)\le r(A) + d_{\mathcal H}(A,B)}, and therefore

\displaystyle  |r(A)-r(B)|\le d_{\mathcal H}(A,B)

In words, the circumradius is a {1}-Lipschitz function of the set.

What about the circumcenter? If the set {A} is shifted by {d} units in some direction, the circumcenter moves by the same amount. So it may appear that it should also be a {1}-Lipschitz function of {A}. But this is false.

Observe (or recall from middle-school geometry) that the circumcenter of a right triangle is the midpoint of its hypotenuse:

Circumcenter of a right triangle

Consider two right triangles:

  • Vertices {(-1,0), (1,0), (1,\epsilon)}. The right angle is at {(1,0)}, and the circumcenter is the midpoint of the opposite side (the hypotenuse): {(0,\epsilon/2)}.
  • Vertices {(-1,0), (1,0), (\sqrt{1-\epsilon^2},\epsilon)}. The right angle is at
    {(\sqrt{1-\epsilon^2},\epsilon)}, and the circumcenter is at {(0,0)}.

The Hausdorff distance between these two triangles is merely {1-\sqrt{1-\epsilon^2} < \epsilon^2}, yet the distance between their circumcenters is {\epsilon/2}. So, Lipschitz continuity fails, and the most we can hope for is Hölder continuity with exponent {1/2}.
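
A quick numerical check of this example, with a concrete value {\epsilon=0.1} (a sketch; it compares the vertex sets of the triangles):

import numpy as np

eps = 0.1
s = np.sqrt(1 - eps**2)
A = np.array([[-1, 0], [1, 0], [1, eps]])   # circumcenter (0, eps/2)
B = np.array([[-1, 0], [1, 0], [s, eps]])   # circumcenter (0, 0)

def hausdorff(X, Y):
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

print(hausdorff(A, B))   # 1 - sqrt(1 - eps^2), about eps^2/2
print(eps / 2)           # distance between the circumcenters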

And indeed, the circumcenter is locally {1/2}-Hölder continuous. To prove this, suppose {|c(A)-c(B)|\ge \epsilon}. The {1}-convexity of {f_A^2} implies that

\displaystyle  f_A(c(B))^2 \ge f_A(c(A))^2+|c(A)-c(B)|^2 \ge r(A)^2 + \epsilon^2

On the other hand, since {|f_A-f_B|\le d_{\mathcal H}(A,B)} everywhere,

\displaystyle  f_A(c(B))\le f_B(c(B)) + d_{\mathcal H}(A,B) = r(B) + d_{\mathcal H}(A,B) \le r(A) + 2 d_{\mathcal H}(A,B)

Putting things together,

\displaystyle  d_{\mathcal H}(A,B) \ge \frac12 (\sqrt{r(A)^2 + \epsilon^2} - r(A))  = \frac{\epsilon^2}{2( \sqrt{r(A)^2 + \epsilon^2} + r(A) )}

Thus, as long as {r(A)} remains bounded above, we have an inequality of the form {d_{\mathcal H}(A,B) \ge c\, \epsilon^2}, which is exactly {1/2}-Hölder continuity.


Remark. The proof uses no information about {\mathbb R^n} other than the {1}-convexity of the squared distance function. As such, it applies to every CAT(0) space.

Infinite beatitude of non-existence: a journey into Nothingland

In the novella Flatland by Edwin A. Abbott, the Sphere leads the Square “downward to the lowest depth of existence, even to the realm of Pointland, the Abyss of No dimensions”:

I caught these words, “Infinite beatitude of existence! It is; and there is nothing else beside It.” [...] “It fills all Space,” continued the little soliloquizing Creature, “and what It fills, It is. What It thinks, that It utters; and what It utters, that It hears; and It itself is Thinker, Utterer, Hearer, Thought, Word, Audition; it is the One, and yet the All in All. Ah, the happiness, ah, the happiness of Being!”

Indeed, Pointland (a one-point space) is zero-dimensional by every concept of dimension that I know of. Yet there is something smaller: Nothingland — empty space, {\varnothing} — whose non-existent inhabitants must be perpetually enjoying the happiness of Non-Being.

What is the dimension of Nothingland?

In topology, the empty set has dimension {-1}. This fits the inductive definition of topological dimension, which is the smallest number {d} such that the space can be minced by removing a subset of dimension {\le d-1}. (Let’s say a space has been minced if what’s left has no connected subsets other than points.)

Thus, a nonempty finite (or countable) set has dimension {0}: it’s minced already, so we remove nothing, a set of dimension {-1}. A line or a curve is one-dimensional: they can be minced by removing a zero-dimensional subset, like the rational numbers.

A curve minced by removing a zero-dimensional set

Flatland itself can be minced by removing a one-dimensional subset (e.g., circles with rational radii and rational coordinates of the center), so it is two-dimensional. And so on.

Flatland minced by removing a one-dimensional subset

The convention {\mathrm{dim}\,\varnothing = -1}, helpful in the definition, gets in the way later. For example, the topological dimension is subadditive under products: {\mathrm{dim}\,(A\times B)\le \mathrm{dim}\,A + \mathrm{dim}\,B} … unless both {A} and {B} are empty, because then {-1\le -2} is false. So the case {A=B=\varnothing} must be excluded from the product theorem. We would not have to do this if {\mathrm{dim}\,\varnothing} were defined to be {-\infty}.

Next, consider the Hausdorff dimension. Its definition is not inductive, but it requires a couple of preliminary concepts. First, define the {d}-dimensional premeasure at scale {\delta>0}:

\displaystyle    \mathcal H^d_\delta (X) = \inf \sum_j (\mathrm{diam}\,{U_j})^{d}

where the infimum is taken over all covers of {X} by nonempty subsets {U_j} with {\mathrm{diam}\,{U_j}\le \delta}. Requiring {U_j} to be nonempty avoids the need to define the diameter of Nothingland, which would be another story. The empty space can be covered by an empty family of nonempty subsets, and the sum of an empty set of numbers is {0}; hence {\mathcal H^d_\delta (\varnothing) = 0}.

Then we define the {d}-dimensional Hausdorff measure:

\displaystyle    \mathcal H^d (X) = \lim_{\delta\rightarrow0} \mathcal H^d_\delta (X)

and finally,

\displaystyle    \mathrm{dim}_H (X) = \inf \{ d \colon \mathcal H^d (X)=0\}

If in this last infimum we require {d>0}, the result is {\mathrm{dim}_H (\varnothing) =0}. But why make this restriction? The {d}-dimensional premeasures and measures make sense for all real {d}. It’s just that for nonempty {X}, we are raising some small (or even zero) numbers to a negative power, getting something large as a result. Consequently, every nonempty space has {\mathcal H^d = \infty} for all {d < 0}.

But {\mathcal H^d_\delta (\varnothing) = 0}, because the sum of an empty collection of numbers is zero. Hence, {\mathcal H^d (\varnothing) = 0} for all real {d}, and this leads to {\mathrm{dim}_H\,\varnothing = -\infty}.

To have {\mathrm{dim}_H\,\varnothing = -\infty} is also convenient because the Hausdorff dimension is superadditive under products: {\mathrm{dim}_H\,(A\times B)\ge \mathrm{dim}_H\,A + \mathrm{dim}_H\,B}. This inequality was proved for general metric spaces as recently as 1995, by John Howroyd. If we don’t have {\mathrm{dim}_H\,\varnothing = -\infty}, then both factors {A} and {B} must be assumed nonempty.

So… should Nothingland have topological dimension {-1} and Hausdorff dimension {-\infty}? But that would violate the inequality {\mathrm{dim} (X)\le \mathrm{dim}_H (X)} which holds for every other separable metric space. In fact, for such spaces the topological dimension is simply the infimum of the Hausdorff dimension over all metrics compatible with the topology.

I am inclined to let the dimension of Nothingland be {-\infty} for every concept of dimension.