Nodal lines

The Wikipedia article on nodes offers this 1D illustration: a node is an interior point at which a standing wave does not move.

Standing wave and its nodes

(At the endpoints the wave is forced to stay put, so I would not count them as nodes despite being marked on the plot.)

A standing wave in one dimension is described by the equation {f''+\omega^2 f=0}, where {\omega} is its (angular) frequency. The function {u(x,t) = f(x)\cos \omega t} solves the wave equation {u_{tt}=u_{xx}}: the wave vibrates without moving, hence the name. In mathematics, these are the (Dirichlet) eigenfunctions of the Laplacian.

Subject to boundary conditions {f(0)=0 = f(\pi)} (fixed ends), all standing waves on the interval {(0,\pi)} are of the form {\sin nx} for {n=1,2,3,\dots}. Their eigenvalues are exactly the perfect squares, and the nodes are equally spaced on the interval.

Things get more interesting in two dimensions. For simplicity consider the square {Q=(0,\pi)\times (0,\pi)}. Eigenfunctions with zero value on the boundary are of the form {f(x,y) = \sin mx \sin ny} for positive integers {m,n}. The set of eigenvalues has a richer structure: it consists of the integers that can be expressed as a sum of two positive squares: 2, 5, 8, 10, 13, 17,…

The zero sets of eigenfunctions in two dimensions are called nodal lines. At first glance it may appear that there is nothing interesting: the zero set of {\sin mx \sin ny} is the union of {n-1} equally spaced horizontal lines and {m-1} equally spaced vertical lines:

Boring nodal lines (the plot region is a square, not a tall rectangle)

But there is much more, because a sum of two eigenfunctions with the same eigenvalue is also an eigenfunction. To begin with, we can form linear combinations of {\sin mx \sin ny} and {\sin nx \sin my}. Here are two examples from Partial Differential Equations by Walter Strauss:

When {f(x,y) = \sin 12x \sin y+\sin x \sin 12y }, the square is divided by nodal lines into 12 nodal domains:

Eigenvalue 145, twelve nodal domains

After a slight perturbation {f(x,y) = \sin 12x \sin y+0.9\sin x \sin 12y } there is a single nodal line dividing the square into two regions of intricate geometry:

Also eigenvalue 145, but two nodal domains

And then there are numbers that can be written as sums of squares in two different ways. The smallest is {50=1^2+7^2 = 5^2+5^2}, with eigenfunctions such as

\displaystyle    f(x,y) = \sin x\sin 7y +2\sin 5x \sin 5y+\sin 7x\sin y

pictured below.

Eigenvalue 50

This is too good not to replicate: the eigenfunctions naturally extend to the whole plane as doubly periodic functions with anti-period {\pi} (that is, {f(x+\pi,y)=-f(x,y)=f(x,y+\pi)}).

Periodic extension

Binary intersection property, and not fixing what isn’t broken

A metric space has the binary intersection property if every collection of closed balls has nonempty intersection unless there is a trivial obstruction: the distance between centers of two balls exceeds the sum of their radii. In other words, for every family of points {x_\alpha\in X} and numbers {r_\alpha>0} such that {d(x_\alpha,x_\beta)\le r_\alpha+r_\beta} for all {\alpha,\beta} there exists {x\in X} such that {d(x_\alpha,x)\le r_\alpha} for all {\alpha}.

For example, {\mathbb R} has this property: {x=\inf_\alpha (x_\alpha+r_\alpha)} works, because {x_\beta-r_\beta\le x_\alpha+r_\alpha} for all {\alpha,\beta}. But {\mathbb R^2} does not:

Failure of the binary intersection property

The space of bounded sequences {\ell^\infty} has the binary intersection property, and so does the space {B[0,1]} of all bounded functions {f:[0,1]\rightarrow\mathbb R} with the supremum norm. Indeed, the construction for {\mathbb R} generalizes: given a family of bounded functions {f_\alpha} and numbers {r_\alpha>0} as in the definition, let {f(x)=\inf_\alpha (f_\alpha(x)+r_\alpha)}.


The better known space of continuous functions {C[0,1]} has the finite version of the binary intersection property, because for a finite family the construction {\inf_\alpha (f_\alpha(x)+r_\alpha)} produces a continuous function. However, the property fails without finiteness, as the following example shows.

Example. Let {f_n\in C[0,1]} be the function such that {f_n(x)=-1} for {x\le \frac12-\frac1n}, {f_n(x)=1} for {x\ge \frac12+\frac1n}, and {f_n} is linear in between.

Since {\|f_n-f_m\| \le 1} for all {n,m}, we can choose {r_n=1/2} for all {n}. But if a function {f} is such that {\|f-f_n\|\le \frac12} for all {n}, then {f(x) \le -\frac12} for {x<\frac12} and {f(x) \ge \frac12} for {x>\frac12}. There is no continuous function that does that.

More precisely, for every {f\in C[0,1]} we have {\liminf_{n\to\infty} \|f-f_n\|\ge 1 } because {f(x)\approx f(1/2)} in a small neighborhood of {1/2}, while {f_n} changes from {-1} to {1} in the same neighborhood when {n} is large.
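
A small numerical illustration of this example, and of the inf-construction from {B[0,1]} applied to it; a sketch, assuming numpy.

import numpy as np

x = np.linspace(0, 1, 10001)
# f_n = -1 left of 1/2 - 1/n, +1 right of 1/2 + 1/n, linear in between
fs = [np.clip(n*(x - 0.5), -1, 1) for n in range(2, 200)]

# pairwise sup-distances do not exceed 1, so r_n = 1/2 is admissible
print(max(np.max(np.abs(f - g)) for f in fs[:30] for g in fs[:30]))

# the inf-construction produces a "center" in B[0,1]...
center = np.min(np.stack(fs), axis=0) + 0.5    # inf_n (f_n + 1/2)
# ...but it is discontinuous: about -1/2 left of 1/2, at least 1/2 right of it
print(center[x <= 0.49].max(), center[x >= 0.51].min())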


Given a discontinuous function, one can approximate it with a continuous function in some way: typically, using a mollifier. But such approximations tend to change the function even if it was continuous to begin with. Let’s try not to fix what isn’t broken: look for a retraction of {B[0,1]} onto {C[0,1]}, that is, a map {\Phi:B[0,1]\rightarrow C[0,1]} such that {\Phi(f)=f} for all {f\in C[0,1]}.

The failure of the binary intersection property, as demonstrated by the sequence {(f_n)} above, implies that {\Phi} cannot be a contraction. Indeed, let {f(x)= \frac12 \,\mathrm{sign}\,(x-1/2)}. This is a discontinuous function such that {\|f-f_n\|\le 1/2} for all {n}. Since {\liminf_{n\to\infty} \|\Phi(f)-f_n\|\ge 1}, it follows that {\Phi} cannot be {L}-Lipschitz with a constant {L<2}.


It is known that there is a retraction from {B[0,1]} onto {C[0,1]} with the Lipschitz constant at most {20}: see Geometric Nonlinear Functional Analysis by Benyamini and Lindenstrauss. The gap between {2} and {20} appears to remain at present; at least I don’t know the smallest Lipschitz constant required to retract bounded functions onto continuous ones.

Graphical embedding

This post continues the theme of operating with functions using their graphs. Given an integrable function {f} on the interval {[0,1]}, consider the region {R_f} bounded by the graph {y=f(x)}, the axis {y=0}, and the vertical lines {x=0}, {x=1}.

Total area under and over the graph is the L1 norm

The area of {R_f} is exactly {\int_0^1 |f(x)|\,dx}, the {L^1} norm of {f}. On the other hand, the area of a set is the integral of its characteristic function,

\displaystyle    \chi_f(x,y) = \begin{cases}1, \quad (x,y)\in R_f, \\ 0,\quad (x,y)\notin R_f \end{cases}

So, the correspondence {f\mapsto \chi_f } is a map from the space of integrable functions on {[0,1]}, denoted {L^1([0,1])}, to the space of integrable functions on the plane, denoted {L^1(\mathbb R^2)}. The above shows that this correspondence is norm-preserving. It also preserves the metric, because integration of {|\chi_f-\chi_g|} gives the area of the symmetric difference {R_f\triangle R_g}, which in turn is equal to {\int_0^1 |f-g| }. In symbols:

\displaystyle    \|\chi_f-\chi_g\|_{L^1} = \int |\chi_f-\chi_g| = \int |f-g| = \|f-g\|_{L^1}

Distance between two functions in terms of their graphs

The map {f\mapsto \chi_f} is nonlinear: for example {2f} is not mapped to {2 \chi_f} (the function that is equal to 2 on the same region) but rather to a function that is equal to 1 on a larger region.

So far, this nonlinear embedding did not really offer anything new: from one {L^1} space we got into another. It is more interesting (and more difficult) to embed things into a Hilbert space such as {L^2(\mathbb R^2)}. But for functions that take only the values {0,1,-1}, such as {\chi_f-\chi_g}, the {L^2} norm is exactly the square root of the {L^1} norm. Therefore,

\displaystyle    \|\chi_f-\chi_g\|_{L^2} = \sqrt{\int |\chi_f-\chi_g|^2} =    \sqrt{\int |\chi_f-\chi_g|} = \sqrt{\|f-g\|_{L^1}}

In other words, raising the {L^1} metric to power {1/2} creates a metric space that is isometric to a subset of a Hilbert space. The exponent {1/2} is sharp: there is no such embedding for the metric {d(f,g)=\|f-g\|_{L^1}^{\alpha} } with {\alpha>1/2}. The reason is that {L^1}, having the Manhattan metric, contains geodesic squares: 4-cycles where the distances between adjacent vertices are 1 and the diagonal distances are equal to 2. Having such long diagonals is inconsistent with the parallelogram law in Hilbert spaces. Taking the square root reduces the diagonals to {\sqrt{2}}, which is the length they would have in a Hilbert space.
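
Both displayed identities are easy to sanity-check on a grid; a minimal sketch, assuming numpy (the window {[-2,2]} for {y} and the test functions are arbitrary choices).

import numpy as np

x = np.linspace(0, 1, 1001)
y = np.linspace(-2, 2, 2001)
X, Y = np.meshgrid(x, y)
dA = (x[1] - x[0]) * (y[1] - y[0])

def chi(F):
    # indicator of the region between the graph y = F(x) and the axis y = 0
    return (((0 < Y) & (Y < F)) | ((F < Y) & (Y < 0))).astype(float)

f, g = np.sin(2*np.pi*X), 0.5 - X
l1 = np.abs(f - g)[0].mean()                     # L1 distance between f and g
diff = np.abs(chi(f) - chi(g))
print(diff.sum()*dA, l1)                         # approximately equal
print(np.sqrt((diff**2).sum()*dA), np.sqrt(l1))  # approximately equal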

This embedding, and much more, can be found in the ICM 2010 talk by Assaf Naor.

Graphical convergence

The space of continuous functions (say, on {[0,1]}) is usually given the uniform metric: {d_u(f,g) = \sup_{x}|f(x)-g(x)|}. In other words, this is the smallest number {\rho} such that from every point of the graph of one function we can jump to the graph of the other function by moving a distance {\le \rho} in the vertical direction.

Uniform metric is based on vertical distances

Now that I put it this way, why don’t we drop “in vertical direction”? It’ll still be a metric, namely the Hausdorff metric between the graphs of {f} and {g}. It’s natural to call it the graphical metric, denoted {d_g}; from the definition it’s clear that {d_g\le d_u}.

Graphical metric: Hausdorff distance

Some interesting things happen when the space of continuous functions is equipped with {d_g}. For one thing, it’s no longer a complete space: the sequence {f_n(x)=x^n} is Cauchy in {d_g} but has no limit.

Sequence of x^n
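
The two metrics can be compared on sampled graphs; here is a sketch, assuming scipy (sampling limits the accuracy, so this approximates {d_g} rather than computes it).

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def graph(f, n=4000):
    x = np.linspace(0, 1, n)
    return np.column_stack([x, f(x)])

def d_graphical(f, g):
    # Hausdorff distance between the sampled graphs
    A, B = graph(f), graph(g)
    return max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0])

# x^n vs x^(2n): the uniform distance stays near 1/4, the graphical one shrinks
for n in [5, 10, 20, 40, 80]:
    f = lambda x, n=n: x**n
    g = lambda x, n=n: x**(2*n)
    d_u = np.max(np.abs(graph(f)[:, 1] - graph(g)[:, 1]))
    print(n, round(d_u, 4), round(d_graphical(f, g), 4))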

On the other hand, the bounded subsets of {(C[0,1],d_g) } are totally bounded. Indeed, given {M>0} and {\epsilon>0} we can cover the rectangle {[0,1]\times [-M,M]} with a rectangular mesh of diameter at most {\epsilon}. For each function with {\sup|f|\le M}, consider the set of rectangles that its graph visits. There are finitely many possibilities for the sets of visited rectangles. And two functions that share the same set of visited rectangles are at graphical distance at most {\epsilon} from each other.

Boxy approximation to the graph

Thus, the completion of {C[0,1]} in the graphical metric should be a nice space: bounded closed subsets will be compact in it. What is this completion, concretely?

Here is a partial answer: if {(f_n)} is a graphically Cauchy sequence, its limit is the compact set {\{(x,y): g(x)\le y\le h(x)\}} where

\displaystyle    g(x) = \inf_{x_n\rightarrow x} \liminf f_n(x_n)

(the infimum taken over all sequences converging to {x}), and

\displaystyle    h(x) = \sup_{x_n\rightarrow x} \limsup f_n(x_n)

It’s not hard to see that {g} is lower semicontinuous and {h} is upper semicontinuous (which makes the set {\{(x,y): g(x)\le y\le h(x)\}} closed). Of course, {g\le h}. It seems that the set of such pairs {(g,h)} indeed describes the graphical completion of continuous functions.

For example, the limit of {f_n(x)=x^n} is described by the pair {g(x)\equiv 0}, {h(x)=\chi_{\{1\}}}. Geometrically, it’s a broken line with one horizontal and one vertical segment.

For another example, the limit of {f_n(x)=\sin^2 nx} is described by the pair {g(x)\equiv 0}, {h(x)\equiv 1}. Geometrically, it’s a square.

Stripey sines

Parsing calculus with regular expressions

As a service* to math students everywhere (especially those taking calculus), I started Mathematics.SE Index. The plan is to have a thematic catalog of common exercises in college-level mathematics, each linked to a solution posted on Math.SE.

As of now, the site has reasonably complete sections on Limits and Series, with a rudimentary section on binomial sums. All lists are automatically generated. Initial filtering was done with a Data Explorer SQL query, using tags and keywords in the question body. The query also took into account the view count (i.e., how often the problem is searched for) and the existence of upvoted answers.

The results of the query were processed with a Google Sheets script: a bunch of regular expressions extracted LaTeX markup with a desired pattern, checked its integrity [not of the academic kind], and transformed it into WordPress-compatible markup.
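
For flavor, here is a made-up miniature of the kind of extraction involved (a hypothetical pattern, not the actual script):

import re

# grab $...$ chunks that start with \lim from a question body
body = r"Evaluate $\lim_{x\to 0}\frac{\sin x}{x}$ without L'Hopital."
print(re.findall(r"\$(\\lim[^$]*)\$", body))
# ['\\lim_{x\\to 0}\\frac{\\sin x}{x}']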

Plans for the near future: integrals (especially improper), basic proofs by induction, {\epsilon-\delta}, maybe some group theory and differential equations… depends on how easy it is to teach these topics to regular expressions.

(*) Or disservice, I wouldn’t know.

Squarish polynomials

For some reason I wanted to construct polynomials approximating this piecewise constant function {f}:

How to approximate this with polynomials?

Of course, the approximation cannot be uniform, since the function is not continuous. But it can be achieved in the sense of convergence of graphs in the Hausdorff metric: their limit should be the “graph” shown above, with the vertical line included. In concrete terms, this means that for every {\epsilon>0} there is {N} such that for {n\ge N} the polynomial {p_n} satisfies

\displaystyle  |p_n-f|\le \epsilon\quad \text{ on }\ [0,2]\setminus [1-\epsilon,1+\epsilon]

and also

\displaystyle  -\epsilon\le  p_n  \le 1+\epsilon\quad \text{ on }\ [1-\epsilon,1+\epsilon]

How to get such {p_n} explicitly? I started with the functions {f_m(x) = \exp(-x^m)} for large {m}. The idea is that as {m\rightarrow\infty}, the limit of {\exp(-x^m)} is what is wanted: {1} when {x<1}, {0} when {x>1}. Also, for each {m} there is a Taylor polynomial {T_{m,n}} that approximates {f_m} uniformly on {[0,2]}. Since the Taylor series is alternating, it is not hard to find a suitable {n} (a short computation reproducing the degrees appears after the list). Let’s shoot for {\epsilon=0.01} in the Taylor remainder and see where this leads:

  • Degree {7} polynomial for {\exp(-x)}
  • Degree {26} polynomial for {\exp(-x^2)}
  • Degree {69} polynomial for {\exp(-x^3)}
  • Degree {180} polynomial for {\exp(-x^4)}
  • Degree {440} polynomial for {\exp(-x^5)}
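
These degrees can be reproduced from the alternating series estimate, since the remainder is bounded by the first omitted term; a minimal sketch:

from math import factorial

def degree_needed(m, eps=0.01):
    # smallest degree whose first omitted term (2^m)^(n+1)/(n+1)! is below eps
    n = 0
    while 2**(m*(n + 1)) / factorial(n + 1) >= eps:
        n += 1
    return m*n   # degree as a polynomial in x

print([degree_needed(m) for m in range(1, 6)])   # [7, 26, 69, 180, 440]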

The results are unimpressive, though:

Taylor polynomials of exp(-x^m) are not so square

To get within {0.01} of the desired square-ness, we need {\exp(-0.99^m)>0.99} (the other condition, {\exp(-1.01^m)<0.01}, is less demanding). This means {m\ge 458}. Then, to have the Taylor remainder bounded by {0.01} at {x=2}, we need {2^{458n}/n! < 0.01}. Instead of messing with Stirling’s formula, just observe that {2^{458n}/n!} does not even begin to decrease until {n} exceeds {2^{458}}, which is more than {10^{137}}. That’s a … high degree polynomial. I would not try to ask a computer algebra system to plot it.

Bernstein polynomials turn out to work better. On the interval {[0,2]} they are given by

\displaystyle    p_n(x) = 2^{-n} \sum_{k=0}^n f(2k/n) \binom{n}{k} x^k (2-x)^{n-k}

To avoid dealing with {f(1)}, it is better to use odd degrees. For comparison, I used the same or smaller degrees as above: {7, 25, 69, 179, 439}.

Squarish Bernstein polynomials

Looks good. But I don’t know of a way to estimate the degree of the Bernstein polynomial required to obtain Hausdorff distance less than a given {\epsilon} (say, {0.01}) from the square function.
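
For completeness, a sketch of how such plots can be produced, assuming numpy and matplotlib (direct evaluation of the displayed formula; all terms are nonnegative, so there is no cancellation even at degree 439):

import numpy as np
import matplotlib.pyplot as plt
from math import comb

def bernstein(f, n, x):
    # the displayed formula on [0, 2]
    total = np.zeros_like(x)
    for k in range(n + 1):
        total += f(2*k/n) * comb(n, k) * x**k * (2 - x)**(n - k)
    return 2.0**(-n) * total

f = lambda t: 1.0 if t < 1 else 0.0   # odd n keeps the nodes 2k/n away from 1
x = np.linspace(0, 2, 1000)
for n in [7, 25, 69, 179, 439]:
    plt.plot(x, bernstein(f, n, x), label=f'n = {n}')
plt.legend(); plt.show()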

Winding map and local injectivity

The winding map {W} is a humble example that is conjectured to be extremal in a long-standing open problem. Its planar version is defined in polar coordinates {(r,\theta)} by

\displaystyle    (r,\theta) \mapsto (r,2\theta)

All this map does is stretch every circle around the origin by a factor of two — tangentially, without changing its radius. As a result, the circle winds around itself twice. The map is not injective in any neighborhood of the origin {r=0}.

2D winding

The 3D version of the winding map has the same formula, but in cylindrical coordinates. It winds the space around the {z}-axis, like this:

3D winding

In the tangential direction the space is stretched by a factor of {2}; the radial coordinate is unchanged. More precisely: the singular values of the derivative matrix {DW} (which exists everywhere except when {r=0}) are {2,1,1}. Hence, the Jacobian determinant {\det DW} is {2}, which makes sense since the map covers the space by itself, twice.
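
The singular values are easy to confirm numerically: write {W} in Cartesian coordinates, differentiate by finite differences, and take the SVD. A minimal sketch, assuming numpy (the base point is arbitrary, just off the {z}-axis):

import numpy as np

def W(p):
    # (r, theta, z) -> (r, 2 theta, z) in Cartesian coordinates:
    # r cos(2t) = (x^2 - y^2)/r, r sin(2t) = 2xy/r
    x, y, z = p
    r = np.hypot(x, y)
    return np.array([(x*x - y*y)/r, 2*x*y/r, z])

def jacobian(F, p, h=1e-6):
    # forward-difference approximation of the derivative matrix
    J = np.empty((3, 3))
    f0 = F(p)
    for i in range(3):
        q = np.array(p, dtype=float)
        q[i] += h
        J[:, i] = (F(q) - f0)/h
    return J

J = jacobian(W, np.array([0.3, -1.2, 0.7]))
print(np.linalg.svd(J, compute_uv=False))   # approximately [2, 1, 1]
print(np.linalg.det(J))                     # approximately 2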

In general, when the singular values of the matrix {A} are {\sigma_1\ge \dots \ge\sigma_n}, the ratio {\sigma_n^{-n} |\det A|} is called the inner distortion of {A}. The word “inner” refers to the fact that {\sigma_n} is the radius of the ball inscribed in the image of the unit ball under {A}; so, the inner distortion compares the volume of the image of the unit ball to the volume of this inscribed ball.

For a map, like {W} above, the inner distortion is the (essential) supremum of the inner distortion of its derivative matrices over its domain. So, the inner distortion of {W} is {2}, in every dimension. Another example: the linear map {(x,y)\mapsto (3x,-2y)} has inner distortion {3/2}.

It is known that there is a constant {K>1} such that if the inner distortion of a map {F} is less than {K}, the map is locally injective: every point has a neighborhood in which {F} is injective. This was proved by Martio, Rickman, and Väisälä in 1971. They conjectured that {K=2} is optimal: that is, the winding map has the least inner distortion among all maps that are not locally injective.

But at present, there is still no explicit nontrivial lower estimate for {K}: for example, we don’t know whether inner distortion less than {1.001} implies local injectivity.

Using a paraboloid to cover points with a disk

Find the equation of the tangent line to the parabola {y=x^2}… borrring calculus drill.

Okay. Draw two tangent lines to the parabola, then. Where do they intersect?

Two tangent lines

If the points of tangency are at {x=a} and {x=b}, then the tangent lines are {y=2a(x-a)+a^2} and {y=2b(x-b)+b^2}. Equate and solve:

\displaystyle    2a(x-a)+a^2 = 2b(x-b)+b^2 \implies x = \frac{a+b}{2}

Neat! The {x}-coordinate of the intersection point is midway between {a} and {b}.

What does the {y}-coordinate of the intersection tell us? It simplifies to

\displaystyle    2a(b-a)/2+a^2 = ab

the geometric meaning of which is not immediately clear. But maybe we should look at the vertical distance from the intersection point to the parabola itself. That would be

\displaystyle    x^2 - y = \left(\frac{a+b}{2}\right)^2 -ab = \left(\frac{a-b}{2}\right)^2

This is the square of the distance from the midpoint to {a} and {b}. In other words, the squared radius of the smallest “disk” covering the set {\{a,b\}}.
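
Both computations are one-liners for a computer algebra system; a minimal sketch, assuming sympy:

from sympy import symbols, solve, factor

a, b, x = symbols('a b x')
tangent_a = 2*a*(x - a) + a**2   # tangent line at (a, a^2)
tangent_b = 2*b*(x - b) + b**2
xi = solve(tangent_a - tangent_b, x)[0]
print(xi)                                     # (a + b)/2
print(factor(tangent_a.subs(x, xi)))          # a*b
print(factor(xi**2 - tangent_a.subs(x, xi)))  # (a - b)**2/4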


The same happens in higher dimensions, where the parabola is replaced with the paraboloid {z=|\mathbf x|^2}, {\mathbf x = (x_1,\dots, x_n)}.

Paraboloid

Indeed, the tangent planes at {\mathbf a} and {\mathbf b} are
{z=2\mathbf a\cdot (\mathbf x-\mathbf a)+|\mathbf a|^2} and {z=2\mathbf b\cdot (\mathbf x-\mathbf b)+|\mathbf b|^2}. Equate and solve:

\displaystyle    2(\mathbf a-\mathbf b)\cdot \mathbf x = |\mathbf a|^2-|\mathbf b|^2 \implies \left(\mathbf x-\frac{\mathbf a+\mathbf b}{2}\right)\cdot (\mathbf a-\mathbf b) =0

So, {\mathbf x} lies on the plane equidistant from {\mathbf a} and {\mathbf b}. And, as above, at the midpoint {\mathbf x = (\mathbf a+\mathbf b)/2},

\displaystyle    |\mathbf x|^2 -z = \left|\frac{\mathbf a-\mathbf b}{2}\right|^2

is the square of the radius of the smallest ball covering both {\mathbf a} and {\mathbf b}.


The above observations are useful for finding the smallest disk (or ball) covering given points. For simplicity, I stick to two dimensions: covering points on a plane with the smallest disk possible. The algorithm is:

  1. Given points {(x_i,y_i)}, {i=1,\dots,n}, write down the equations of tangent planes to paraboloid {z=x^2+y^2}. These are {z=2(x_i x+y_i y)-(x_i^2+y_i^2)}.
  2. Find the point {(x,y,z)} that minimizes the vertical distance to the paraboloid, that is {x^2+y^2-z}, and lies (non-strictly) below all of these tangent planes.
  3. The {x,y} coordinates of this point give the center of the smallest disk covering the points (known as the Chebyshev center of the set). Also, {\sqrt{x^2+y^2-z}} is the radius of this disk, known as the Chebyshev radius.

The advantage conferred by the paraboloid model is that at step 2 we are minimizing a quadratic function subject to linear constraints. Implementation in Sage:

points = [[1,3], [1.5,2], [3,2], [2,-1], [-1,0.5], [-1,1]]
# one linear constraint per point: the tangent plane at (x_i, y_i), satisfied
# (>= 0) exactly when x lies below it; p=q binds each point to its lambda
constraints = [lambda x, p=q: 2*x[0]*p[0]+2*x[1]*p[1]-p[0]^2-p[1]^2-x[2] for q in points]
# vertical distance from (x[0], x[1], x[2]) to the paraboloid z = x^2 + y^2
target = lambda x: x[0]^2+x[1]^2-x[2]
m = minimize_constrained(target, constraints, [0,0,0])
# Chebyshev center (m[0], m[1]) and Chebyshev radius sqrt(x^2+y^2-z)
circle((m[0],m[1]), sqrt(m[0]^2+m[1]^2-m[2]), color='red') + point(points)

Smallest disk covering the points

Credit: this post is an expanded version of a comment by David Speyer on last year’s post Covering points with caps, where I considered the same problem on a sphere.

The least distorted curves and surfaces

Every subset {A\subset \mathbb R^n} inherits the metric from {\mathbb R^n}, namely {d(a,b)=|a-b|}. But we can also consider the intrinsic metric on {A}, defined as follows: {\rho_A(a,b)} is the infimum of the lengths of curves that connect {a} to {b} within {A}. Let’s assume there is always such a curve of finite length, and therefore {\rho_A} is always finite. All the properties of a metric hold, and we also have {|a-b|\le \rho_A(a,b)} for all {a,b\in A}.

If {A} happens to be convex, then {\rho_A(a,b)=|a-b|} because any two points are joined by a line segment. There are also some nonconvex sets for which {\rho_A} coincides with the Euclidean distance: for example, the punctured plane {\mathbb R^2\setminus \{(0,0)\}}. Although we can’t always get from {a} to {b} in a straight line, the required detour can be as short as we wish.

On the other hand, for the set {A=\{(x,y)\in \mathbb R^2 : y\le |x|\}} the intrinsic distance is sometimes strictly greater than Euclidean distance.

Nontrivial distortion

For example, the shortest curve in {A} from {(-1,1)} to {(1,1)} has length {2\sqrt{2}}, while the Euclidean distance is {2}. This is the worst ratio for pairs of points in this set, although proving this claim would be a bit tedious. Following Gromov (Metric Structures for Riemannian and Non-Riemannian Spaces), define the distortion of {A} as the supremum of the ratios {\rho_A(a,b)/|a-b|} over all pairs of distinct points {a,b\in A}. (Another term in use for this concept: optimal constant of quasiconvexity.) So, the distortion of the set {\{(x,y) : y\le |x|\}} is {\sqrt{2}}.

Gromov observed (along with posing the Knot Distortion Problem) that every simple closed curve in a Euclidean space (of any dimension) has distortion at least {\pi/2}. That is, the least distorted closed curve is the circle, for which the half-length/diameter ratio is exactly {\pi/2}.

Distortion of a closed curve

Here is the proof. Parametrize the curve by arclength: {\gamma\colon [0,L]\rightarrow \mathbb R^n}. For {0\le t\le L/2} define {\Gamma(t)=\gamma(t )-\gamma(t+L/2) } and let {r=\min_t|\Gamma(t)|}. The curve {\Gamma} connects two antipodal points ({\Gamma(L/2)=-\Gamma(0)}) of magnitude at least {r}, and stays outside of the open ball of radius {r} centered at the origin. Therefore, its length is at least {\pi r} (projection onto a convex subset does not increase the length). On the other hand, {\Gamma} is a 2-Lipschitz map, which implies {\pi r\le 2(L/2)}. Thus, {r\le L/\pi}. Take any {t} that realizes the minimum of {|\Gamma|}. The points {a=\gamma(t)} and {b=\gamma(t+L/2)} satisfy {|a-b|\le L/\pi} and {\rho_A(a,b)=L/2}, so the distortion is at least {(L/2)/(L/\pi)=\pi/2}. Done.
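
For a concrete closed curve, the distortion can be estimated from a dense sample by comparing arclength distance (the shorter way around) with Euclidean distance. A sketch, assuming numpy; for the circle it returns approximately {\pi/2}:

import numpy as np

def distortion(points):
    # points: (n, 2) array tracing the closed curve in order
    seg = np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # arclength positions
    L, n = s[-1], len(points)
    best = 1.0
    for i in range(n):
        d_intr = np.abs(s[:n] - s[i])
        d_intr = np.minimum(d_intr, L - d_intr)   # go around the shorter way
        d_eucl = np.linalg.norm(points - points[i], axis=1)
        mask = d_eucl > 1e-12
        best = max(best, np.max(d_intr[mask]/d_eucl[mask]))
    return best

t = np.linspace(0, 2*np.pi, 2000, endpoint=False)
print(distortion(np.column_stack([np.cos(t), np.sin(t)])))   # about 1.5708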

Follow-up question: what are the least distorted closed surfaces (say, in {\mathbb R^3})? It’s natural to expect that a sphere, with distortion {\pi/2}, is the least distorted. But this is false. An exercise from Gromov’s book (which I won’t spoil): Find a closed convex surface in {\mathbb R^3} with distortion less than { \pi/2}. (Here, “convex” means the surface bounds a convex solid.)

Higher order reflections

Mathematical reflections, not those supposedly practiced in metaphilosophy.

Given a function {f} defined for {x\ge 0}, we have two basic ways to reflect it about {x=0}: even reflection {f(-x)=f(x)} and odd reflection {f(-x)=-f(x)}. Here is the even reflection of the exponential function {e^x}:

Even reflection

The extended function is not differentiable at {0}. The odd reflection, pictured below, is not even continuous at {0}. But to be fair, it has the same slope to the left and to the right of {0}, unlike the even reflection.

Odd reflection

Can we reflect a function preserving both continuity and differentiability? Yes, this is what higher-order reflections are for. They define {f(-x)} not just in terms of {f(x)} but also involve values at other points, like {f(x/2)}. Here is one such smart reflection:

\displaystyle    f(-x) = 4f(x/2)-3f(x)  \qquad\qquad\qquad (1)

Differentiable reflection

Indeed, letting {x\rightarrow 0^+}, we observe continuity: both sides converge to {f(0)}. Taking derivatives of both sides, we get

\displaystyle  -f'(-x) = 2f'(x/2) - 3f'(x)

where the limits of both sides as {x\rightarrow 0^+} again agree: they are {-f'(0)}.

A systematic way to obtain such reflection formulas is to consider what they do to monomials: {1}, {x}, {x^2}, etc. A formula that reproduces the monomials up to degree {d} will preserve the derivatives up to order {d}. For example, plugging {f(x)=1} or {f(x)=x} into (1) we get a valid identity. With {f(x)=x^2} the equality breaks down: {x^2} on the left, {-2x^2} on the right. As a result, the curvature of the graph shown above is discontinuous: at {x=0} it changes sign without passing through {0}.

To fix this, we’ll need to use a third point, for example {x/4}. It’s better not to use points like {2x}, because when the original domain of {f} is a bounded interval {[0,b]}, we probably want the reflection to be defined on all of {[-b,b]}.

So we look for coefficients {A,B,C} such that {f(-x)=Af(x/4)+Bf(x/2)+Cf(x)} holds as identity for {f(x)=1,x,x^2}. The linear system {A+B+C=1}, {A/4+B/2+C=-1}, {A/16+B/4+C=1} has the solution {A=16}, {B=-20}, {C=5}. This is our reflection formula, then:

\displaystyle  f(-x) = 16f(x/4)-20f(x/2)+5f(x)  \qquad\qquad\qquad (2)
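
For any order, the coefficients solve a Vandermonde system in the nodes {2^{-j}}; a sketch that reproduces them, assuming sympy:

from sympy import Matrix, Rational

def reflection_coeffs(d):
    # solve sum_j c_j (2^-j)^k = (-1)^k, k = 0, ..., d, for the
    # coefficients c_j of f(x/2^j) in the reflection formula
    A = Matrix(d + 1, d + 1, lambda k, j: Rational(1, 2**j)**k)
    b = Matrix([(-1)**k for k in range(d + 1)])
    return list(A.solve(b))

print(reflection_coeffs(2))   # [5, -20, 16]: formula (2)
print(reflection_coeffs(3))   # [-45/7, 60, -144, 640/7]: used further below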

And this is the result of reflecting {\exp(x)} according to (2):

Twice differentiable reflection

Now the curvature of the graph is continuous. One could go on, but since the human eye is not sensitive to discontinuities of the third derivative, I’ll stop here.


In case you don’t believe the last paragraph, here is the reflection with three continuous derivatives, given by

\displaystyle  f(-x) = \frac{640}{7} f(x/8) - 144f(x/4)+60f(x/2)-\frac{45}{7}f(x)

and below it, the extension given by (2). For these plots I used Desmos because plots in Maple (at least in my version) have pretty bad aliasing.

Three continuous derivatives
Two continuous derivatives

Also, cubic splines have only two continuous derivatives, and they connect the dots naturally.