Consider this linear differential equation: with boundary conditions and . Nothing looks particularly scary here. Just one nonconstant coefficient, and it’s a simple one. Entering this problem into Wolfram Alpha produces the following explicit solution:

I am not sure how anyone could use this formula for any purpose.

Let us see what simple linear algebra can do here. The differential equation can be discretized by placing, for example, equally spaced interior grid points on the interval: , . The yet-unknown values of at these points are denoted . Standard finite-difference formulas provide approximate values of and :

where is the step size, in our case. Stick all this into the equation: we get 4 linear equations, one for each interior point. Namely, at it is

(notice how the condition is used above), at it is

and so on. Clean up this system and put it in matrix form:

This isn’t too hard to solve even with pencil and paper. The solution is

It can be visualized by plotting 4 points :
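The hand computation scales to finer grids with a few lines of NumPy. Below is a sketch of the same finite-difference method for a generic problem y″ + p(x)y′ + q(x)y = r(x) with Dirichlet boundary conditions; the coefficients p, q, r and the sample equation at the bottom are placeholders of my choosing, not the equation from this post.

```python
import numpy as np

def solve_bvp_fd(p, q, r, a, b, ya, yb, m=4):
    """Finite-difference solution of y'' + p(x) y' + q(x) y = r(x),
    y(a) = ya, y(b) = yb, using m equally spaced interior points."""
    h = (b - a) / (m + 1)
    x = a + h * np.arange(1, m + 1)          # interior grid points
    # centered differences: y'' ~ (y[i-1] - 2 y[i] + y[i+1]) / h^2,
    #                       y'  ~ (y[i+1] - y[i-1]) / (2 h)
    lower = 1 / h**2 - p(x) / (2 * h)        # coefficient of y[i-1]
    diag = -2 / h**2 + q(x)                  # coefficient of y[i]
    upper = 1 / h**2 + p(x) / (2 * h)        # coefficient of y[i+1]
    A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
    rhs = r(x).astype(float)
    rhs[0] -= lower[0] * ya                  # fold boundary values into the RHS
    rhs[-1] -= upper[-1] * yb
    return x, np.linalg.solve(A, rhs)

# placeholder example: y'' + x y' = 1 on [0, 1] with y(0) = y(1) = 0
x, y = solve_bvp_fd(lambda x: x, lambda x: 0 * x, lambda x: 1 + 0 * x,
                    0.0, 1.0, 0.0, 0.0)
print(x, y)
```

With m=4 this reproduces a 4-by-4 system of the kind solved by hand above; increasing m refines the picture.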

Not particularly impressive, is it? And why are all these negative y-values in a problem with boundary condition ? They do not really look like they want to approach at the left end of the interval. But let us go ahead and plot them together with the boundary conditions, using linear interpolation in between:

Or better, use cubic spline interpolation, which only adds another step of linear algebra (see Connecting dots naturally) to our computations.

This begins to look believable. For comparison, I used a heavier tool: the BVP solver from SciPy. Its output is the red curve below.

Those four points we got from a 4-by-4 system, solvable by hand, pretty much tell the whole story. At any rate, they tell a better story than the explicit solution does.

Graphics made with: SciPy and Matplotlib using Google Colab.

A natural way to measure the nonlinearity of a function , where is an interval, is the quantity which expresses the deviation of from a line, divided by the size of interval . This quantity was considered in Measuring nonlinearity and reducing it.

Let us write where the supremum is taken over all intervals in the domain of definition of . What functions have finite ? Every Lipschitz function does, as was noted previously: . But the converse is not true: for example, is finite for the non-Lipschitz function , where .

The function looks nice, but is clearly unbounded. What makes finite? Note the scale-invariant feature of NL: for any the scaled function satisfies , and more precisely . On the other hand, our function has a curious scaling property where the linear term does not affect NL at all. This means that it suffices to bound for intervals of unit length. The plot of shows that not much deviation from the secant line happens on such intervals, so I will not bother with estimates.
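As a rough numeric companion (assuming the elided definition of NL(f, I) is the maximal deviation of f from its secant line on I, divided by the length of I):

```python
import numpy as np

def nl(f, a, b, samples=1000):
    """Deviation of f from its secant line on [a, b], divided by b - a
    (assumed form of the quantity NL(f, [a, b]) from the text)."""
    x = np.linspace(a, b, samples)
    secant = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
    return np.max(np.abs(f(x) - secant)) / (b - a)

# the non-Lipschitz example x log x (extended by 0 at 0): its slope blows up
# near 0, yet NL over unit-length intervals stays bounded
f = lambda x: x * np.log(x)
print(nl(f, 1e-6, 1 + 1e-6), nl(f, 1, 2), nl(f, 10, 11))
```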

The class of functions with is precisely the Zygmund class defined by the property with independent of . Indeed, since the second-order difference is unchanged by adding an affine function to , we can replace by with suitable and use the triangle inequality to obtain

where . Conversely, suppose that . Given an interval , subtract an affine function from to ensure . We may assume attains its maximum on at a point . Applying the definition of with and , we get , hence . This shows . The upshot is that is equivalent to the Zygmund seminorm of (i.e., the smallest possible M in the definition of ).

A function in may be nowhere differentiable: it is not difficult to construct so that is bounded between two positive constants. The situation is different for the small Zygmund class whose definition requires that as . A function is differentiable at any point of local extremum, since the condition forces its graph to be tangent to the horizontal line through the point of extremum. Given any two points we can subtract the secant line from and thus create a point of local extremum between and . It follows that is differentiable on a dense set of points.

The definitions of and apply equally well to complex-valued functions, or vector-valued functions. But there is a notable difference in the differentiability properties: a complex-valued function of class may be nowhere differentiable [Ullrich, 1993]. Put another way, two real-valued functions in need not have a common point of differentiability. This sort of thing does not often happen in analysis, where the existence of points of “good” behavior is usually based on the prevalence of such points in some sense, and therefore a finite collection of functions is expected to have common points of good behavior.

The key lemma in Ullrich’s paper provides a real-valued VMO function that has infinite limit at every point of a given set of measure zero. Although this is a result of real analysis, the proof is complex-analytic in nature and involves a conformal mapping. It would be interesting to see a “real” proof of this lemma. Since the antiderivative of a VMO function belongs to , the lemma yields a function that is not differentiable at any point of . Consider the lacunary series . One theorem of Zygmund shows that when , while another shows that is almost nowhere differentiable when . It remains to apply the lemma to get a function that is not differentiable at any point where is differentiable.

We know that every bounded sequence of real numbers has a convergent subsequence. For a sequence of functions, say , the notion of boundedness can be stated as: there exists a constant such that for every and for all . Such a sequence is called uniformly bounded.

Once we fix some point , the boundedness assumption provides a subsequence which converges at that point. But since different points may require different subsequences, it is not obvious whether we can pick a subsequence which converges for all . (Such a subsequence is called pointwise convergent.)

It is easy to give an example of a uniformly bounded sequence with no uniformly convergent subsequence (uniform convergence requires , which is stronger than for every ). Indeed, does the job. This sequence is uniformly bounded (by ) and converges pointwise to a discontinuous function such that and elsewhere. Any subsequence has the same pointwise limit and since is not continuous, the convergence cannot be uniform.
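The elided example is presumably something like f_n(x) = xⁿ on [0, 1], which has exactly the described behavior; here is a quick numeric look (my guess, not necessarily the formula from the post):

```python
import numpy as np

# f_n(x) = x^n on [0, 1]: uniformly bounded by 1, pointwise limit 0 on [0, 1)
# and 1 at x = 1, so no subsequence converges uniformly
x = np.linspace(0, 1, 11)
for n in (1, 5, 25, 125):
    print(n, np.round(x ** n, 3))
```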

But what would be an example of a uniformly bounded sequence of continuous functions with no pointwise convergent subsequence? In Principles of Mathematical Analysis Rudin gives as such an example but then uses the Lebesgue Dominated Convergence Theorem to prove the non-existence of pointwise convergent subsequences. I do not want to use the DCT.

The simplest example I could think of is based on the letter-folding function defined by

or by a magic one-line formula if you prefer: .

Let be the sequence of the iterates of , that is and . This means , , , and so on.

By construction, the sequence is uniformly bounded (). It is somewhat similar to the example in that we have increasingly rapid oscillations. But the proof that has no pointwise convergent subsequence is elementary. It is based on the following observations.

(A) Suppose that is a sequence such that for each . Then the number satisfies if and if .

The proof of (A) amounts to summing a geometric series. Incidentally, observe that are the digits of in base .

(B) For as above we have . In other words, shifts the ternary digits of to the left. As a consequence, shifts them to the left by places.

(C) Given any subsequence , let if for some , and otherwise. By part (B), which means the first ternary digit of is . By construction, this digit is when is even, and when is odd. By part (A) we have when is even, and when is odd. Thus, does not converge. This completes the proof.

Remarks

The set of all points of the form considered in (A), (B), (C), i.e., those with all ternary digits or , is precisely the standard Cantor set .

The function magnifies each half of by the factor of , and maps it onto .

The important part of the formula for is that when . The rest of it could be some other continuous extension to .

A similar example could be based on the tent map , whose graph is shown below.

However, in the case of the tent map it is more convenient to use the sequence of even iterates: , , and so on.

Indeed, since when , one can simply replace base with base in all of the above computations and arrive at the same conclusion, except the orbit of will now be jumping between the intervals and .

The tent map is conjugate to the logistic map , shown below. This conjugacy is where .

The conjugacy shows that the th iterate is . Therefore, the sequence of the even iterates of has no pointwise convergent subsequence. This provides an example with smooth functions, even polynomials.
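The conjugacy between the tent and logistic maps is easy to check numerically. The sketch below assumes the standard formulas T(x) = 1 − |2x − 1|, L(x) = 4x(1 − x), and h(x) = sin²(πx/2) for the elided ones:

```python
import numpy as np

# standard tent map, logistic map, and the conjugacy h satisfying h o T = L o h
T = lambda x: 1 - np.abs(2 * x - 1)
L = lambda x: 4 * x * (1 - x)
h = lambda x: np.sin(np.pi * x / 2) ** 2

x = np.linspace(0, 1, 1001)
print(np.max(np.abs(h(T(x)) - L(h(x)))))   # numerically zero

# hence the iterates correspond: L^n o h = h o T^n
y = x.copy(); z = h(x)
for _ in range(4):
    y = T(y); z = L(z)
print(np.max(np.abs(h(y) - z)))
```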

One could use the sequence of all iterates (not just the even ones) in the constructions based on and , it just takes a bit of extra work that I prefer to avoid.

This is a brief return to the topic of Irrational Sunflowers. The sunflower associated with a real number is the set of points with polar coordinates and , . A sunflower reduces to equally spaced rays if and only if is a rational number written in lowest terms as .

Here is the sunflower of of size .

Seven rays emanate from the center because , then they become spirals, and spirals rearrange themselves into 113 rays because . Counting these rays is boring, so here is a way to do this automatically with Python (+NumPy as np):

import numpy as np

a = np.pi
n = 5000
# fractional parts (angles measured in turns) for the points indexed n to 2n-1
x = np.mod(a*np.arange(n, 2*n), 1)
# count the relatively large gaps between the sorted angle values
np.sum(np.diff(np.sort(x)) > 1/n)

This code computes the polar angles of sunflower points indexed , sorts them and counts the relatively large gaps between the sorted values. These correspond to the gaps between sunflower rays, except that one of the gaps gets lost when the circle is cut and straightened onto the interval . So the program output (112) means there are 113 rays.

Here is the same sunflower with the points alternately colored red and blue.

The colors blur into purple when the rational approximation pattern is strong. But they are clearly seen in the transitional period from the 22/7 approximation to 355/113.

How many points would we need to see the next rational approximation after 355/113?

What will that approximation be? Yes, 22/7 and 355/113 are among the convergents of the continued fraction of . But so is 333/106, which I do not see in the sunflower. Are some convergents better than others?

Finally, the code I used to plot sunflowers.

import numpy as np
import matplotlib.pyplot as plt
a = np.pi
k = np.arange(10000)
r = np.sqrt(k)           # radii
t = a*2*np.pi*k          # polar angles
plt.gca().set_aspect('equal')
plt.plot(r*np.cos(t), r*np.sin(t), '.')
plt.show()

Calculus books tend to introduce transcendental functions (trigonometric, exponential, logarithm) early. Analysis textbooks such as Principles of Mathematical Analysis by Rudin tend to introduce them later, because of how long it takes to develop enough of the theory of power series.

The Riemann-Lebesgue lemma involves either trigonometric or exponential functions. But the following version works with the “late transcendentals” approach.

Transcendental-free Riemann-Lebesgue Lemma

TFRLL. Suppose that and are continuously differentiable functions, and is bounded. Then as .

The familiar form of the lemma is recovered by letting or .

Proof. By the chain rule, is the derivative of . Integrate by parts:

By assumption, there exists a constant such that everywhere. Hence , , and . By the triangle inequality,

completing the proof.

As a non-standard example, TFRLL applies to, say, for which . The conclusion is that , that is, which seems somewhat interesting. When , the factor of can be removed by applying the result to , leading to .
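A numeric illustration of the familiar form (taking g = sin, with f(x) = eˣ as an arbitrary smooth test function of my choosing): the integrals decay like 1/n, matching the bound in the proof.

```python
import numpy as np
from scipy.integrate import quad

# familiar Riemann-Lebesgue: the integral of f(x) sin(n x) over [0, 1]
# is O(1/n) for continuously differentiable f
f = lambda x: np.exp(x)
for n in (10, 100, 1000):
    # quad's weight='sin' handles the oscillatory factor sin(wvar * x)
    val, _ = quad(f, 0, 1, weight='sin', wvar=n)
    print(n, val, n * val)   # n * val stays bounded
```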

What if we tried less smoothness?

Later in Rudin’s book we encounter the Weierstrass theorem: every continuous function on is a uniform limit of polynomials. Normally, this would be used to make the Riemann-Lebesgue lemma work for any continuous function . But the general form given above, with an unspecified , presents a difficulty.

Indeed, suppose is continuous on . Given , choose a polynomial such that on . Since has continuous derivative, it follows that . It remains to show that is close to . By the triangle inequality,

which is bounded by … um. Unlike for and , we do not have a uniform bound for or for its integral. Indeed, with the integrals grow linearly with . And this behavior would be even worse with , for example.

At present I do not see a way to prove TFRLL for continuous , let alone for integrable . But I do not have a counterexample either.

There are many ways to approximate a given continuous function (I will consider the interval for convenience.) For example, one can use piecewise linear interpolation through the points , where . The resulting piecewise linear function has some nice properties: for example, it is increasing if is increasing. But it is not smooth.

A convenient way to represent piecewise linear interpolation is the sum where the functions are the triangles shown below: .

The functions form a partition of unity, meaning that and all are nonnegative. This property leads to the estimate

The latter sum is small because when is close to , the first factor is small by virtue of continuity, while the second factor is bounded by . When is far from , the second factor is zero, so the first one is irrelevant. The upshot is that is uniformly small.

But if we want a smooth approximation , we need a smooth partition of unity . Not just any collection of smooth nonnegative functions that add up to will do, however. One desirable property is preserving monotonicity: if is increasing, then should be increasing, just as it is for piecewise linear interpolation. What does this condition require of our partition of unity?

An increasing function can be expressed as a limit of sums of the form where and is the Iverson bracket: 1 if true, 0 if false. By linearity, it suffices to have increasing for the case . In this case is simply for some , . So we want all to be increasing functions. Which is the case for the triangular partition of unity, when each looks like this:

One smooth choice is Bernstein basis polynomials: . These are nonnegative on , and the binomial formula shows . Are the sums increasing with ? Let’s find out. By the product rule,

In the second sum the term with vanishes, and the terms with can be rewritten as , which is , which is . After the index shift this becomes identical to the terms of the first sum and cancels them out (except for the first one). Thus,

To summarize: the Bernstein polynomials are monotone whenever is. On the other hand, the proof that uniformly is somewhat complicated by the fact that the polynomial basis functions are not localized the way that the triangle basis functions are: the factors do not vanish when is far from . I refer to Wikipedia for a proof of convergence (which, by the way, is quite slow).
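Here is the Bernstein construction in NumPy, with a corner function of my own as the test case; it confirms both the monotonicity just proved and the slow uniform convergence.

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Bernstein polynomial B_n f of a function f on [0, 1]."""
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    basis = coeffs * x[:, None] ** k * (1 - x[:, None]) ** (n - k)
    return basis @ f(k / n)

x = np.linspace(0, 1, 201)
f = lambda t: np.minimum(1.0, 2 * t)       # increasing, with a corner
y = bernstein(f, 8, x)
print(np.all(np.diff(y) >= -1e-12))        # monotone, as proved above
print(np.max(np.abs(y - f(x))))            # uniform error (converges slowly)
```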

Is there some middle ground between non-smooth triangles and non-localized polynomials? Yes, of course: piecewise polynomials, splines. More specifically, B-splines, which can be defined as follows: B-splines of degree are the triangle basis functions shown above; a B-spline of degree is the moving average of a B-spline of degree with a window of length . The moving average of can be written as . We get a partition of unity because the sum of moving averages is the moving average of a sum, and averaging a constant function does not change it.

The splines of even degrees are awkward to work with… they are obtained from the triangles by taking those moving averages an odd number of times, which makes their knots fall at the midpoints of the uniform grid instead of the grid points themselves. But I will use anyway, because this degree is enough for -smooth approximation.

Recall that a triangular basis function has slope and is supported on an interval where . Accordingly, its moving average will be supported on . Since , the second derivative is when , is when , and is again when . This is enough to figure out the formula for :

These look like:

Nice! But wait a moment, the sum near the endpoints is not constant: it is less than 1 because we miss the contributions of splines to the left and right of the interval. To correct for this boundary effect, replace with and with , using “ghost” elements of the basis that lie outside of the actual grid. Now the quadratic B-spline basis is correct:

Does this partition of unity preserve monotonicity? Yes, it does: which is nonnegative because the sum is an increasing piecewise linear function, as noted previously. The same logic works for B-splines of higher degree.
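As a sanity check on the construction, here is the standard explicit piecewise formula for the uniform quadratic B-spline (which should match the elided formula above), together with a numeric verification that its shifts, ghosts included, sum to 1:

```python
import numpy as np

def b2(t):
    """Quadratic B-spline on the uniform integer grid, supported on [0, 3]."""
    t = np.asarray(t, dtype=float)
    return np.where((0 <= t) & (t < 1), t**2 / 2,
           np.where((1 <= t) & (t < 2), (-2 * t**2 + 6 * t - 3) / 2,
           np.where((2 <= t) & (t <= 3), (3 - t)**2 / 2, 0.0)))

x = np.linspace(0, 5, 501)
# covering [0, 5] requires two "ghost" splines hanging off each end
total = sum(b2(x - k) for k in range(-2, 5))
print(np.max(np.abs(total - 1)))   # partition of unity: numerically zero
```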

In conclusion, here is a quadratic B-spline approximation (orange) to a tricky increasing function (blue).

One may wonder why the orange curve deviates from the line at the end. Did we miss some boundary effect there? Yes, in a way… the spline actually approximates the continuous extension of our original function by constant values on the left and right. Imagine the blue graph continuing to the right as a horizontal line: this creates a corner at and the spline is smoothing that corner. To avoid this effect, one may want to extend in a better way and then work with the extended function, not folding the ghosts into .

But even so, the B-spline achieves a better approximation than the Bernstein polynomial with the same number of basis functions (eight):

The reason is the non-local nature of the polynomial basis , which was noted above. Bernstein polynomials do match the function perfectly at the endpoints, but this is small consolation.

Consider the space of all bounded continuous functions , with the uniform norm . Let be its subset that consists of all periodic continuous functions: recall that is periodic if there exists such that for all .

The set is not closed in the topology of . Indeed, let be the distance from to the nearest integer. The function is periodic with . Therefore, each sum of the form is periodic with . Hence the sum of the infinite series is a uniform limit of periodic functions. Yet, is not periodic, because and for (for every there exists such that is not an integer).

The above example (which was suggested to me by Yantao Wu) is somewhat similar to the Takagi function, which differs from it by the minus sign in the exponent: . Of course, the Takagi function is periodic with period .

Do we really need an infinite series to get such an example? In other words, does the set contain an elementary function?

A natural candidate is the sum of trigonometric waves with incommensurable periods (that is, the ratio of periods must be irrational). For example, consider the function whose graph is shown below.

Since and for all , the function is not periodic. Its graph looks vaguely similar to the graph of . Is a uniform limit of periodic functions?
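Assuming the elided function is the standard example f(x) = cos x + cos(√2 x): numerically, its supremum over distant intervals creeps up toward 2 = f(0), yet never reaches it, in line with non-periodicity.

```python
import numpy as np

# assumed example: sum of two waves with incommensurable periods
f = lambda x: np.cos(x) + np.cos(np.sqrt(2) * x)

gaps = []
for T in (100, 1000, 10000):
    x = np.arange(1.0, T, 0.01)
    gaps.append(2 - f(x).max())   # how far the max over [1, T] falls short of 2
    print(T, gaps[-1])
# the shortfall shrinks with T but stays strictly positive
```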

Suppose is a -periodic function such that . Then , hence for all , hence . By the definition of this implies and for all . The following lemma shows a contradiction between these properties.

Lemma. If a real number satisfies for all , then is an integer multiple of .

Proof. Suppose is not an integer multiple of . We can assume without loss of generality, because can be replaced by to get it in the interval and then by to get it in . Since , we have . Let be the smallest positive integer such that . The minimality of implies , hence . But then , a contradiction.

The constant in the lemma is best possible, since for all .

Returning to the paragraph before the lemma, choose so that . The lemma says that both and must be integer multiples of , which is impossible since they are incommensurable. This contradiction shows that for any periodic function , hence is not a uniform limit of periodic functions.

The above result can be stated as . I guess is actually . It cannot be greater than since for all . (Update: Yantao pointed out that the density of irrational rotations implies the distance is indeed equal to 1.)

Note: the set is a proper subset of the set of (Bohr / Bochner / uniform) almost periodic functions (as Yemon Choi pointed out in a comment). The latter is a linear space while is not. I was confused by the sentence “Bohr defined the uniformly almost-periodic functions as the closure of the trigonometric polynomials with respect to the uniform norm” on Wikipedia. To me, a trigonometric polynomial is a periodic function of a particular kind. What Bohr called Exponentialpolynom is a finite sum of the form where can be any real numbers. To summarize: the set considered above is the closure of while the set of almost periodic functions is the closed linear span of . The function is an example of the latter, not of the former.

In Calculus I students are taught how to find the points at which the graph of a function has zero curvature (that is, the points of inflection). The points of maximal curvature are usually not discussed. This post attempts to rectify this omission.

The (signed) curvature of is

We want to maximize the absolute value of , whether the function is positive or negative there (so both maxima and minima of can be of interest). The critical points of are the zeros of

So we are led to consider a polynomial in the first three derivatives of , namely .

Begin with some simple examples:

has so the curvature of a parabola is maximal at its vertex. No surprise there.

has , indicating two symmetric points of maximal curvature, , pretty close to the point of inflection.

has . This has three real roots, but actually minimizes curvature (it vanishes there).

More generally, with positive integer yields indicating two points which tend to as grows.
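The critical-point polynomial is easy to reproduce with SymPy. From κ = f″/(1 + f′²)^{3/2}, the numerator of κ′ works out to f‴(1 + f′²) − 3f′f″² (my reconstruction of the elided expression); the examples above check out:

```python
import sympy as sp

x = sp.symbols('x')

def curvature_critical(f):
    """Zeros of the numerator of the derivative of the signed curvature
    k = f'' / (1 + f'^2)^(3/2); the numerator is f'''(1 + f'^2) - 3 f' f''^2."""
    f1, f2, f3 = (sp.diff(f, x, n) for n in (1, 2, 3))
    return sp.solve(sp.expand(f3 * (1 + f1**2) - 3 * f1 * f2**2), x)

print(curvature_critical(x**2))   # [0]: the vertex of the parabola
print(curvature_critical(x**3))   # includes the real pair near +/- 0.386
```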

The graph of a polynomial of degree can have at most points of zero curvature, because the second derivative vanishes at those. How many points of maximal curvature can it have? The degree of expression above is but it is not obvious whether all of its roots can be real and distinct, and also be the maxima of (unlike for ). For we do get point of maximal curvature. But for , there can be at most such points, not . Edwards and Gordon (Extreme curvature of polynomials, 2004) conjectured that the graph of a polynomial of degree has at most points of maximal curvature. This remains open despite several partial results: see the recent paper Extreme curvature of polynomials and level sets (2017).

A few more elementary functions:

has , so the curvature is maximal at . Did I expect the maximum of curvature to occur for a negative ? Not really.

has . The first factor is irrelevant: the points of maximum curvature of a sine wave are at its extrema, as one would guess.

has which is zero at… ahem. The expression factors as

Writing we can get a cubic equation in , but it is not a nice one. Or we could do some trigonometry and reduce to the equation . Either way, a numerical solution is called for: (and for other periods).

This is a meta-post which collects links to other posts on this blog with (sometimes implicit) questions that were left unanswered there. This does not necessarily mean that nobody has an answer, just that I did not have one when writing the post. The collection is in reverse chronological order.

Multiple kinds of branching here. First, the motorsport content has been moved to formula7.blog. Two blogs? Well, it became clear that my Stack Exchange activity, already on hiatus since 2018, is not going to resume (context: January 14, January 15, January 17). But typing words in boxes is still a hobby of mine.

There may be yet more branching in the knowledge market space, with Codidact and TopAnswers attempting to rise from the ashes of Stack Exchange. (I do not expect either project to have much success.)

Also, examples of branching in complex analysis are often limited to the situations where any two branches differ either by an additive constant like or by a multiplicative constant like . But different branches can even have different branch sets. Consider the dilogarithm, which has a very nice power series in the unit disk:

The series even converges on the unit circle , providing a continuous extension there. But this circle is also the boundary of the disk of convergence, so some singularity has to appear. And it does, at . Going around this singularity and coming back to the unit disk, we suddenly see a function with a branch point at , where there was no branching previously.
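The elided power series is presumably Li₂(z) = Σ_{k≥1} zᵏ/k²; a quick check that the partial sums converge even at the boundary point z = 1, where the value is π²/6:

```python
import numpy as np

def li2(z, terms=100000):
    """Partial sum of the dilogarithm series: sum of z^k / k^2 for k >= 1."""
    k = np.arange(1, terms + 1)
    return np.sum(z ** k / k ** 2)

print(li2(1.0))                          # close to pi^2 / 6
print(abs(li2(1.0) - np.pi ** 2 / 6))    # tail error ~ 1/terms
```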

What gives? Consider the derivative:

As long as the principal branch of logarithm is considered, there is no singularity at since cancels the denominator. But once we move around , the logarithm acquires a multiple of , and so gets an additional term , and integrating that results in logarithmic branching at .

Of course, this does not even begin the story of the dilogarithm, so I refer to Zagier’s expanded survey which has a few branch points itself.

Thus the dilogarithm is one of the simplest non-elementary functions one can imagine. It is also one of the strangest. It occurs not quite often enough, and in not quite an important enough way, to be included in the Valhalla of the great transcendental functions—the gamma function, Bessel and Legendre functions, hypergeometric series, or Riemann’s zeta function. And yet it occurs too often, and in far too varied contexts, to be dismissed as a mere curiosity. First defined by Euler, it has been studied by some of the great mathematicians of the past—Abel, Lobachevsky, Kummer, and Ramanujan, to name just a few—and there is a whole book devoted to it. Almost all of its appearances in mathematics, and almost all the formulas relating to it, have something of the fantastical in them, as if this function alone among all others possessed a sense of humor.