Completely monotone imitation of 1/x

I wanted an example of a function {f} that behaves mostly like {1/x} (the product {xf(x)} is bounded between two positive constants), but such that {xf(x)} does not have a limit as {x\rightarrow 0}.

The first thing that comes to mind is {(2+\sin(1/x))/x}, but this function does not look very much like {1/x}.

sin(1/x) makes it too wiggly

Then I tried {f(x)=(2+\sin\log x)/x}, recalling an example from Linear Approximation and Differentiability. It worked well:

I can’t believe it’s not a hyperbola!

In fact, it worked much better than I expected. Not only is {f'} of constant sign, but so are {f''} and {f'''}. Indeed,

\displaystyle    f'(x) = \frac{\cos \log x - \sin \log x - 2}{x^2}

is always negative,

\displaystyle    f''(x) = \frac{4 -3\cos \log x + \sin \log x}{x^3}

is always positive,

\displaystyle    f'''(x) = \frac{10\cos \log x -12}{x^4}

is always negative. The sign becomes less obvious with the fourth derivative,

\displaystyle    f^{(4)}(x) = \frac{48-40\cos\log x - 10\sin \log x}{x^5}

because the triangle inequality isn’t conclusive now. But the amplitude of {A\cos t+B\sin t} is {\sqrt{A^2+B^2}}, and {\sqrt{40^2+10^2}<48}.
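
This can be double-checked symbolically. Here is a minimal Python sketch (using SymPy; not part of the original post) that recomputes the first four derivatives and the amplitude estimate:

import sympy as sp

x = sp.symbols('x', positive=True)
f = (2 + sp.sin(sp.log(x))) / x

# x^(n+1) * f^(n)(x) for n = 1..4; should agree with the numerators above, up to rearrangement
for n in range(1, 5):
    print(n, sp.simplify(sp.diff(f, x, n) * x**(n + 1)))

# amplitude check for the fourth derivative: sqrt(40^2 + 10^2) < 48
print(sp.sqrt(40**2 + 10**2) < 48)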

So, it seems that {f} is completely monotone, meaning that {(-1)^n f^{(n)}(x)\ge 0} for all {x>0} and for all {n=0,1,2,\dots}. But we already saw that this sign pattern can break after many steps. So let’s check carefully.

Direct calculation yields the neat identity

\displaystyle    \left(\frac{1+a\cos \log x+b\sin\log x}{x^n}\right)' = -n\,\frac{1+(a-b/n)\cos\log x+(b+a/n) \sin\log x}{x^{n+1}}

With its help, the process of differentiating the function {f(x) = (1+a\cos \log x+b\sin\log x)/x} can be encoded as follows: {a_1=a}, {b_1=b}, then {a_{n+1}=a_n-b_n/n} and {b_{n+1} = b_n+a_n/n}. The presence of {1/n} is disconcerting because the harmonic series diverges. But orthogonality helps: the added vector {(-b_n/n, a_n/n)} is orthogonal to {(a_n,b_n)}.

The above example, rewritten as {f(x)=(1+\frac12\sin\log x)/x}, corresponds to starting with {(a,b) = (0,1/2)}. I calculated and plotted {10000} iterations: the points {(a_n,b_n)} are joined by a piecewise linear curve.

Harmonic spiral

The total length of this curve is infinite, since the harmonic series diverges. The question is, does it stay within the unit disk? Let’s find out. By the above recursion,

\displaystyle    a_{n+1}^2 + b_{n+1}^2 = \left(1+\frac{1}{n^2}\right) (a_n^2+b_n^2)

Hence, the squared magnitude of {(a_n,b_n)} will always be less than

\displaystyle    \frac14 \prod_{n=1}^\infty \left(1+\frac{1}{n^2}\right)

with {1/4} being {a^2+b^2}. The infinite product evaluates to {\frac{\sinh \pi}{\pi}\approx 3.7} (explained here), and thus the polygonal spiral stays within the disk of radius {\frac12 \sqrt{\frac{\sinh \pi}{\pi}}\approx 0.96}. In conclusion,

\displaystyle    (-1)^{n} \left(\frac{1+(1/2)\sin\log x}{x}\right)^{(n)} = n!\,\frac{1+a_{n+1}\cos\log x+b_{n+1} \sin\log x}{x^{n+1}}

where the trigonometric function {a_{n+1}\cos\log x+b_{n+1} \sin\log x} has amplitude strictly less than {1}. Since the expression on the right is positive, {f} is completely monotone.
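
As a numerical sanity check of the product and the resulting radius, here is a small Python sketch (standard library only; not part of the original post):

import math

# partial product of (1 + 1/n^2); it converges to sinh(pi)/pi
prod = 1.0
for n in range(1, 10**6):
    prod *= 1 + 1.0 / n**2
print(prod, math.sinh(math.pi) / math.pi)              # both about 3.676

# radius bound for the spiral, starting from a^2 + b^2 = 1/4
print(0.5 * math.sqrt(math.sinh(math.pi) / math.pi))   # about 0.958 < 1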

The plot was generated in Sage using the code below.

# iterate a_{n+1} = a_n - b_n/n, b_{n+1} = b_n + a_n/n starting from (a,b) = (0, 1/2)
a,b,c,d = var('a b c d')
a = 0
b = 1/2
l = [(a,b)]
for k in range(1,10000):
    c = a-b/k
    d = b+a/k
    l.append((c,d))
    a = c
    b = d
# join the points (a_n, b_n) by a polygonal line
show(line(l),aspect_ratio=1)

2014 syllabus

This is a sample of what you could have learned by taking Calculus VII in 2014. One post from each month.

Alternating lacunary series and 1-1+1-1+1-1+…

The series {1-1+1-1+1-1+\cdots} diverges. A common way to make it convergent is to replace each {1} with a power of {x}; the new series will converge when {|x|<1} and maybe its sum will have a limit as {x\rightarrow 1}. And indeed,

\displaystyle    x-x^2+x^3-x^4+x^5-x^6+\cdots = \frac{x}{1+x}

which tends to {1/2} as {x} approaches {1} from the left.

x/(1+x) has limit 1/2

Things get more interesting if instead of consecutive integers as exponents, we use consecutive powers of {2}:

\displaystyle    f(x) = x-x^2+x^4-x^8+x^{16} -x^{32} +\cdots

On most of the interval {(0,1)} it behaves just like the previous one:

Lacunary series on (0,1)

But there appears to be a little blip near {1}. Let’s zoom in:

Lacunary series near 1

And zoom in more:

… and very near 1

Still there.

This function was considered by Hardy in his 1907 paper Theorems concerning infinite series. On pages 92–93 he shows that it “oscillates between finite limits of indetermination for {x=1}”. There is also a footnote: “The simple proof given above was shown to me by Mr. J. H. Maclagan-Wedderburn. I had originally obtained the result by means of a contour integral.”

Okay, but what are these “finite limits of indetermination”? The alternating series estimate shows {0<f(x)<1} for {x\in (0,1)}, but the above plots suggest that {f} oscillates between much tighter bounds. Let’s call them {A= \liminf_{x\rightarrow 1-} f(x)} and {B=\limsup_{x\rightarrow 1-} f(x)}.

Since {f(x)+f(x^2)\equiv x}, it follows that {f(x)+f(x^2)\rightarrow 1} as {x\rightarrow 1^-}. Hence, {B = \limsup_{x\rightarrow 1-}(1-f(x^2)) = 1-A}. In other words, {A} and {B} are symmetric about {1/2}. But what are they?

I don’t have an answer, but here is a simple estimate. Let {g(x)=x-x^2} and observe that

\displaystyle    f(x) = g(x) + g(x^4)+g(x^{16}) + g(x^{64})+\dots \qquad \qquad (1)

The function {g} is not hard to understand: its graph is a parabola.

Yes, a parabola

Since {g} is positive on {(0,1)}, any of the terms in the sum (1) gives a lower bound for {f}. Each individual term is useless for this purpose, since it vanishes at {1}. But we can pick {n} in {g(x^{4^n})} depending on {x}.

Let {x_0\in(0,1)} be the unique solution of the equation {x_0^4=1-x_0}. It could be written down explicitly, but this is not a pleasant experience; numerically {x_0\approx 0.7245}. For every {x>x_0} there is an integer {n\ge 1} such that {x^{4^n}\in [1-x_0,x_0]}, namely the smallest integer such that {x^{4^n} \le x_0}. Hence,

\displaystyle f(x)> g(x^{4^n})\ge \min_{[1-x_0,x_0]}g = x_0-x_0^2>0.1996 \qquad \qquad (2)

which gives a nontrivial lower bound {A>0.1996} and symmetrically {B<0.8004}. Frustratingly, this falls just short of neat {1/5} and {4/5}.

One can do better than (2) by using more terms of the series (1). For example, study the polynomial {g(t)+g(t^4)} and find a suitable interval {[t_0^4,t_0]} on which its minimum is large (such an interval will no longer be symmetric). Or use {3,4,5...} consecutive terms of the series… which quickly gets boring. This approach gives arbitrarily close approximations to {A} and {B}, but does not tell us what these values really are.
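
To see the tight oscillation numerically, one can evaluate a truncated series very close to {1}. Here is a rough Python/NumPy sketch (mine, not from the original post); the observed range is far narrower than the proven bounds {(0.1996, 0.8004)}:

import numpy as np

def f(x, terms=60):
    # partial sum of x - x^2 + x^4 - x^8 + ... (exponents are powers of 2)
    return sum((-1)**k * x**(2.0**k) for k in range(terms))

# sample points crowding toward 1 and look at the range of f there
x = 1 - np.logspace(-12, -2, 200001)
y = f(x)
print(y.min(), y.max())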

Tossing a continuous coin

To generate a random number {X} uniformly distributed on the interval {[0,1]}, one can keep tossing a fair coin, record the outcomes as an infinite sequence {(d_k)} of 0s and 1s, and let {X=\sum_{k=1}^\infty 2^{-k} d_k}. Here is a histogram of {10^6} samples from the uniform distribution… nothing to see here, except maybe an incidental interference pattern.

Sampling the uniform distribution
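
For reference, here is one way to produce such a histogram, sketched in Python with NumPy and Matplotlib rather than the Scilab used later in this post (assuming 30 tosses per sample, which is plenty at this resolution):

import numpy as np
import matplotlib.pyplot as plt

# 10^6 samples of X = sum 2^{-k} d_k with fair coin tosses d_k
rng = np.random.default_rng()
x = np.zeros(10**6)
for k in range(1, 31):
    x += rng.integers(0, 2, size=10**6) / 2**k
plt.hist(x, bins=200)
plt.show()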

Let’s note that {X=\frac12 (d_1+X_1)} where {X_1=\sum_{k=1}^\infty 2^{-k} d_{k+1}} has the same distribution as {X} itself, and {d_1} is independent of {X_1}. This has an implication for the (constant) probability density function of {X}:

\displaystyle    p(x) = p(2x) + p(2x-1)

because {2 p(2x)} is the p.d.f. of {\frac12 X_1} and {2p(2x-1)} is the p.d.f. of {\frac12(1+X_1)}. Simply put, {p} is equal to the convolution of the rescaled function {2p(2x)} with the discrete measure {\frac12(\delta_0+\delta_{1/2})}.


Let’s iterate the above construction by letting each {d_k} be uniformly distributed on {[0,1]} instead of being constrained to the endpoints. This is like tossing a “continuous fair coin”. Here is a histogram of {10^6} samples of {X=\sum_{k=1}^\infty 2^{-k} d_k}; predictably, with more averaging the numbers gravitate toward the middle.

Sampling the Fabius distribution

This is not a normal distribution; the top is too flat. The plot was made with this Scilab code, putting n samples into b buckets:

// approximate X = sum 2^(-k) d_k with 10 terms, each d_k uniform on [0,1]
n = 1e6
b = 200
z = zeros(1,n)
for i = 1:10
    z = z + rand(1,n)/2^i
end
c = histplot(b,z)   // histogram with b buckets

If this plot is too jagged, look at the cumulative distribution function instead:

Fabius function

It took just one more line of code: plot(linspace(0,1,b),cumsum(c)/sum(c))

Compare the two plots: the c.d.f. looks very similar to the left half of the p.d.f. It turns out, they are identical up to scaling.


Let’s see what is going on here. As before, {X=\frac12 (d_1+X_1)} where {X_1=\sum_{k=1}^\infty 2^{-k} d_{k+1}} has the same distribution as {X} itself, and the summands {d_1,X_1} are independent. But now that {d_1} is uniform, the implication for the p.d.f of {X} is different:

\displaystyle    p(x) = \int_0^{1} 2p(2x-t)\,dt

This is a direct relation between {p} and its antiderivative. Incidentally, it shows that {p} is infinitely differentiable, because the right hand side always has one more derivative than the left hand side.


To state the self-similarity property of {X} in the cleanest way possible, one introduces the cumulative distribution function {F} (the Fabius function) and extends it beyond {[0,1]} by alternating even and odd reflections across the right endpoint. The resulting function satisfies the delay-differential equation {F\,'(x)=2F(2x)}: the derivative is a rescaled copy of the function itself.
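
The equation can be sanity-checked against the empirical c.d.f. Below is a rough Monte Carlo sketch in Python (mine, not from the original post); as written it only tests {x\in[0,1/2]}, where {2x} stays in {[0,1]} and the reflected extension is not needed.

import numpy as np

# empirical c.d.f. of X = sum 2^{-k} d_k with d_k uniform on [0,1]
rng = np.random.default_rng(0)
samples = sum(rng.random(10**6) / 2**k for k in range(1, 31))

def F(t):
    return np.mean(samples <= t)

# compare a crude numerical derivative F'(x) with 2 F(2x)
x, h = 0.3, 1e-2
print((F(x + h) - F(x - h)) / (2 * h), 2 * F(2 * x))   # approximately equal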

Since {F} vanishes at the even integers, it follows that at every dyadic rational, all but finitely many derivatives of {F} are zero. The Taylor expansion at such points is a polynomial, while {F} itself is not. Thus, {F} is nowhere analytic despite being everywhere {C^\infty}.

This was, in fact, the motivation for J. Fabius to introduce this construction in his 1966 paper Probabilistic Example of a Nowhere Analytic {C^\infty}-Function.

Linear approximation and differentiability

If a function {f\colon \mathbb R\rightarrow \mathbb R} is differentiable at {a\in \mathbb R}, then it admits a good linear approximation at small scales. Precisely: for every {\epsilon>0} there is {\delta>0} and a linear function {\ell(x)} such that {|f(x)-\ell(x)|<\epsilon \,\delta} for all {|x-a|<\delta}. Having {\delta} multiplied by {\epsilon} means that the deviation from linearity is small compared to the (already small) scale {\delta} on which the function is considered.

For example, this is a linear approximation to {f(x)=e^x} near {0} at scale {\delta=0.1}.

Linear approximation to exponential function

As is done on this graph, we can always take {\ell} to be the secant line to the graph of {f} based on the endpoints of the interval of consideration. This is because if {L} is another line for which {|f(x)-L(x)|<\epsilon \,\delta} holds, then {|\ell-L|\le \epsilon \,\delta} at the endpoints, and therefore on all of the interval (the function {x\mapsto |\ell(x)-L(x)|} is convex). Hence {|f-\ell|\le |f-L|+|L-\ell|<2\epsilon\,\delta} on the interval, which is the same property up to a factor of {2}.


Here is a non-differentiable function that obviously fails the linear approximation property at {0}.

Self-similar graph

(By the way, this post is mostly about me trying out SageMathCloud.) A nice thing about {f(x)=x\sin \log |x|} is self-similarity: {f(rx)=rf(x)} with the similarity factor {r=e^{2\pi}}. This implies that no matter how far we zoom in on the graph at {x=0}, the graph will not get any closer to linear.

I like {x\sin \log |x|} more than its famous, but not self-similar, cousin {x\sin(1/x)}, pictured below.

Standard example from intro to real analysis

Interestingly, linear approximation property does not imply differentiability. The function {f(x)=x\sin \sqrt{-\log|x|}} has this property at {0}, but it lacks derivative there since {f(x)/x} does not have a limit as {x\rightarrow 0}. Here is how it looks.

Now with the square root!

Let’s look at the scale {\delta=0.1}

scale 0.1

and compare to the scale {\delta=0.001}

scale 0.001

Well, that was disappointing. Let’s use math instead. Fix {\epsilon>0} and consider the function {\phi(\delta)=\sqrt{-\log \delta}-\sqrt{-\log (\epsilon \delta)}}. Rewriting it as

\displaystyle    \frac{\log \epsilon}{\sqrt{-\log \delta}+\sqrt{-\log (\epsilon \delta)}}

shows {\phi(\delta)\rightarrow 0} as {\delta\rightarrow 0}. Choose {\delta} so that {|\phi(\delta)|<\epsilon} and define {\ell(x)=x\sin\sqrt{-\log \delta}}. Then for {\epsilon \,\delta\le |x|< \delta} we have {|f(x)-\ell(x)|\le \epsilon |x|<\epsilon\,\delta}, and for {|x|<\epsilon \delta} the trivial bound {|f(x)-\ell(x)|\le |f(x)|+|\ell(x)|} suffices.

Thus, {f} can be well approximated by linear functions near {0}; it’s just that the linear function has to depend on the scale on which the approximation is made: its slope {\sin\sqrt{-\log \delta}} does not have a limit as {\delta\to0}.

The linear approximation property does not become apparent until extremely small scales. Here is {\delta = 10^{-30}}.

Scale 1e-30
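
A crude numerical version of the same check, in Python with NumPy (my sketch, not the code behind the plots): measure {\sup|f(x)-\ell(x)|/\delta} over {|x|<\delta} for the scale-dependent line {\ell(x)=x\sin\sqrt{-\log\delta}}. The ratio tends to {0}, but very slowly.

import numpy as np

def f(x):
    return x * np.sin(np.sqrt(-np.log(np.abs(x))))

for delta in [1e-2, 1e-10, 1e-30, 1e-100]:
    slope = np.sin(np.sqrt(-np.log(delta)))
    x = np.linspace(-delta, delta, 200001)
    x = x[x != 0]                                   # avoid log(0)
    print(delta, np.max(np.abs(f(x) - slope * x)) / delta)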

Nodal lines

The Wikipedia article on nodes offers this 1D illustration: a node is an interior point at which a standing wave does not move.

Standing wave and its nodes

(At the endpoints the wave is forced to stay put, so I would not count them as nodes despite being marked on the plot.)

A standing wave in one dimension is described by the equation {f''+\omega^2 f=0}, where {\omega} is its (angular) frequency. The function {u(x,t) = f(x)\cos \omega t} solves the wave equation {u_{tt}=u_{xx}}: the wave vibrates without moving, hence the name. In mathematics, these are the (Dirichlet) eigenfunctions of the Laplacian.

Subject to boundary conditions {f(0)=0 = f(\pi)} (fixed ends), all standing waves on the interval {(0,\pi)} are of the form {\sin nx} for {n=1,2,3,\dots}. Their eigenvalues are exactly the perfect squares, and the nodes are equally spaced on the interval.

Things get more interesting in two dimensions. For simplicity consider the square {Q=(0,\pi)\times (0,\pi)}. Eigenfunctions with zero value on the boundary are of the form {f(x,y) = \sin mx \sin ny} for positive integers {m,n}. The set of eigenvalues has a richer structure: it consists of the integers that can be expressed as the sum of two positive squares: 2, 5, 8, 10, 13, 17,…
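
A quick way to list the first few such eigenvalues (a throwaway Python snippet, not from the original post):

# integers of the form m^2 + n^2 with m, n >= 1
eigs = sorted({m*m + n*n for m in range(1, 20) for n in range(1, 20)})
print(eigs[:10])   # [2, 5, 8, 10, 13, 17, 18, 20, 25, 26]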

The zero sets of eigenfunctions in two dimensions are called nodal lines. At first glance it may appear that we have nothing interesting: the zero set of {\sin mx \sin ny} is a union of {n-1} equally spaced horizontal lines and {m-1} equally spaced vertical lines:

Boring nodal lines (this is a square, not a tall rectangle)

But there is much more, because a sum of two eigenfunctions with the same eigenvalue is also an eigenfunction. To begin with, we can form linear combinations of {\sin mx \sin ny} and {\sin nx \sin my}. Here are two examples from Partial Differential Equations by Walter Strauss:

When {f(x,y) = \sin 12x \sin y+\sin x \sin 12y }, the square is divided by nodal lines into 12 nodal domains:

Eigenvalue 145, twelve nodal domains

After slight perturbation {f(x,y) = \sin 12x \sin y+0.9\sin x \sin 12y } there is a single nodal line dividing the square into two regions of intricate geometry:

Also eigenvalue 145, but two nodal domains

And then there are numbers that can be written as sums of squares in two different ways. The smallest is {50=1^2+7^2 = 5^2+5^2}, with eigenfunctions such as

\displaystyle    f(x,y) = \sin x\sin 7y +2\sin 5x \sin 5y+\sin 7x\sin y

pictured below.

Frequency 50
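
A picture like this is easy to reproduce; here is one possible sketch in Python with NumPy and Matplotlib (the original figures were not necessarily made this way):

import numpy as np
import matplotlib.pyplot as plt

# nodal lines = zero level set of the eigenvalue-50 eigenfunction on (0, pi)^2
x = np.linspace(0, np.pi, 500)
X, Y = np.meshgrid(x, x)
F = np.sin(X)*np.sin(7*Y) + 2*np.sin(5*X)*np.sin(5*Y) + np.sin(7*X)*np.sin(Y)
plt.contour(X, Y, F, levels=[0], colors='k')
plt.gca().set_aspect('equal')
plt.show()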

This is too good not to replicate: the eigenfunctions naturally extend as doubly periodic functions with anti-period {\pi}.

Periodic extension

Binary intersection property, and not fixing what isn’t broken

A metric space has the binary intersection property if every collection of closed balls has nonempty intersection unless there is a trivial obstruction: the distance between centers of two balls exceeds the sum of their radii. In other words, for every family of points {x_\alpha\in X} and numbers {r_\alpha>0} such that {d(x_\alpha,x_\beta)\le r_\alpha+r_\beta} for all {\alpha,\beta} there exists {x\in X} such that {d(x_\alpha,x)\le r_\alpha} for all {\alpha}.

For example, {\mathbb R} has this property: {x=\inf_\alpha (x_\alpha+r_\alpha)} works. But {\mathbb R^2} does not:

Failure of the binary intersection property

The space of bounded sequences {\ell^\infty} has the binary intersection property, and so does the space {B[0,1]} of all bounded functions {f:[0,1]\rightarrow\mathbb R} with the supremum norm. Indeed, the construction for {\mathbb R} generalizes: given a family of bounded functions {f_\alpha} and numbers {r_\alpha>0} as in the definition, let {f(x)=\inf_\alpha (f_\alpha(x)+r_\alpha)}.


The better known space of continuous functions {C[0,1]} has the finite version of binary intersection property, because for a finite family, the construction {\inf_\alpha (f_\alpha(x)+r_\alpha)} produces a continuous function. However, the property fails without finiteness, as the following example shows.

Example. Let {f_n\in C[0,1]} be a function such that {f_n(x)=-1} for {x\le \frac12-\frac1n}, {f_n(x)=1} for {x\ge \frac12+\frac1n}, and {f_n} is linear in between.

Since {\|f_n-f_m\| \le 1} for all {n,m}, we can choose {r_n=1/2} for all {n}. But if a function {f} is such that {\|f-f_n\|\le \frac12} for all {n}, then {f(x) \le -\frac12} for {x<\frac12} and {f(x) \ge \frac12} for {x>\frac12}. There is no continuous function that does that.

More precisely, for every {f\in C[0,1]} we have {\liminf_{n\to\infty} \|f-f_n\|\ge 1 } because {f(x)\approx f(1/2)} in a small neighborhood of {1/2}, while {f_n} changes from {-1} to {1} in the same neighborhood when {n} is large.


Given a discontinuous function, one can approximate it with a continuous function in some way: typically, using a mollifier. But such approximations tend to change the function even if it was continuous to begin with. Let’s try to not fix what isn’t broken: look for a retraction of {B[0,1]} onto {C[0,1]}, that is a map {\Phi:B[0,1]\rightarrow C[0,1]} such that {\Phi(f)=f} for all {f\in C[0,1]}.

The failure of binary intersection property, as demonstrated by the sequence {(f_n)} above, implies that {\Phi} cannot be a contraction. Indeed, let {f(x)= \frac12 \,\mathrm{sign}\,(x-1/2)}. This is a discontinuous function such that {\|f-f_n\|\le 1/2} for all {n}. Since {\liminf_{n\to\infty} \|\Phi(f)-f_n\|\ge 1}, it follows that {\Phi} cannot be {L}-Lipschitz with a constant {L<2}.


It is known that there is a retraction from {B[0,1]} onto {C[0,1]} with Lipschitz constant at most {20}: see Geometric Nonlinear Functional Analysis by Benyamini and Lindenstrauss. The gap between {2} and {20} appears to remain open; at least I don’t know the smallest Lipschitz constant required to retract bounded functions onto continuous ones.

Graphical embedding

This post continues the theme of operating with functions using their graphs. Given an integrable function {f} on the interval {[0,1]}, consider the region {R_f} bounded by the graph {y=f(x)}, the axis {y=0}, and the vertical lines {x=0}, {x=1}.

Total area under and over the graph is the L1 norm

The area of {R_f} is exactly {\int_0^1 |f(x)|\,dx}, the {L^1} norm of {f}. On the other hand, the area of a set is the integral of its characteristic function,

\displaystyle    \chi_f(x,y) = \begin{cases}1, \quad (x,y)\in R_f, \\ 0,\quad (x,y)\notin R_f \end{cases}

So, the correspondence {f\mapsto \chi_f } is a map from the space of integrable functions on {[0,1]}, denoted {L^1([0,1])}, to the space of integrable functions on the plane, denoted {L^1(\mathbb R^2)}. The above shows that this correspondence is norm-preserving. It also preserves the metric, because integration of {|\chi_f-\chi_g|} gives the area of the symmetric difference {R_f\triangle R_g}, which in turn is equal to {\int_0^1 |f-g| }. In symbols:

\displaystyle    \|\chi_f-\chi_g\|_{L^1} = \int |\chi_f-\chi_g| = \int |f-g| = \|f-g\|_{L^1}

Distance between two functions in terms of their graphs
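
The identity is easy to test numerically on a grid. A small Python/NumPy sketch (mine; the particular {f} and {g} below are arbitrary examples):

import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n
y = np.linspace(-2, 2, 4 * n)          # vertical grid containing both graphs
f = np.sin(2 * np.pi * x)
g = x - 0.5

def chi(h):
    # indicator of the region between the graph y = h(x) and the axis y = 0
    Y, H = y[None, :], h[:, None]
    return ((Y >= 0) & (Y <= H)) | ((Y <= 0) & (Y >= H))

print(np.mean(np.abs(f - g)))           # L^1 distance between f and g
print(np.mean(chi(f) != chi(g)) * 4.0)  # area of the symmetric difference; nearly the same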

The map {f\mapsto \chi_f} is nonlinear: for example {2f} is not mapped to {2 \chi_f} (the function that is equal to 2 on the same region) but rather to a function that is equal to 1 on a larger region.

So far, this nonlinear embedding did not really offer anything new: from one {L^1} space we got into another. It is more interesting (and more difficult) to embed things into a Hilbert space such as {L^2(\mathbb R^2)}. But for the functions that take only the values {0,1,-1}, the {L^2} norm is exactly the square root of the {L^1} norm. Therefore,

\displaystyle    \|\chi_f-\chi_g\|_{L^2} = \sqrt{\int |\chi_f-\chi_g|^2} =    \sqrt{\int |\chi_f-\chi_g|} = \sqrt{\|f-g\|_{L^1}}

In other words, raising the {L^1} metric to power {1/2} creates a metric space that is isometric to a subset of a Hilbert space. The exponent {1/2} is sharp: there is no such embedding for the metric {d(f,g)=\|f-g\|_{L^1}^{\alpha} } with {\alpha>1/2}. The reason is that {L^1}, having the Manhattan metric, contains geodesic squares: 4-cycles where the distances between adjacent vertices are 1 and the diagonal distances are equal to 2. Having such long diagonals is inconsistent with the parallelogram law in Hilbert spaces. Taking the square root reduces the diagonals to {\sqrt{2}}, which is the length they would have in a Hilbert space.

This embedding, and much more, can be found in the ICM 2010 talk by Assaf Naor.

Graphical convergence

The space of continuous functions (say, on {[0,1]}) is usually given the uniform metric: {d_u(f,g) = \sup_{x}|f(x)-g(x)|}. In other words, this is the smallest number {\rho} such that from every point of the graph of one function we can jump to the graph of another function by moving at distance {\le \rho} in vertical direction.

Uniform metric is based on vertical distances

Now that I put it this way, why don’t we drop “in vertical direction”? It’ll still be a metric, namely the Hausdorff metric between the graphs of {f} and {g}. It’s natural to call it the graphical metric, denoted {d_g}; from the definition it’s clear that {d_g\le d_u}.

Graphical metric: Hausdorff distance

Some interesting things happen when the space of continuous functions is equipped with {d_g}. For one thing, it’s no longer a complete space: the sequence {f_n(x)=x^n} is Cauchy in {d_g} but has no limit.

Sequence of x^n
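
Here is a rough way to compute the graphical (Hausdorff) distance on a grid, in Python (assuming SciPy; my sketch, not from the original post). It shows, for instance, that {x^{10}} and {x^{40}} are much closer in {d_g} than in {d_u}:

import numpy as np
from scipy.spatial.distance import cdist

def d_graph(f, g, n=2000):
    # Hausdorff distance between the graphs, sampled at n points
    x = np.linspace(0, 1, n)
    A = np.column_stack([x, f(x)])
    B = np.column_stack([x, g(x)])
    D = cdist(A, B)
    return max(D.min(axis=0).max(), D.min(axis=1).max())

x = np.linspace(0, 1, 2000)
print(d_graph(lambda t: t**10, lambda t: t**40))   # graphical distance: small
print(np.max(np.abs(x**10 - x**40)))               # uniform distance: about 0.47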

On the other hand, the bounded subsets of {(C[0,1],d_g) } are totally bounded. Indeed, given {M>0} and {\epsilon>0} we can cover the rectangle {[0,1]\times [-M,M]} with a rectangular mesh of diameter at most {\epsilon}. For each function with {\sup|f|\le M}, consider the set of rectangles that its graph visits. There are finitely many possibilities for the sets of visited rectangles. And two functions that share the same set of visited rectangles are at graphical distance at most {\epsilon} from each other.

Boxy approximation to the graph

Thus, the completion of {C[0,1]} in the graphical metric should be a nice space: bounded closed subsets will be compact in it. What is this completion, concretely?

Here is a partial answer: if {(f_n)} is a graphically Cauchy sequence, its limit is the compact set {\{(x,y): g(x)\le y\le h(x)\}} where

\displaystyle    g(x) = \inf_{x_n\rightarrow x} \liminf f_n(x_n)

(the infimum taken over all sequences converging to {x}), and

\displaystyle    h(x) = \sup_{x_n\rightarrow x} \limsup f_n(x_n)

It’s not hard to see that {g} is lower semicontinuous and {h} is upper semicontinuous. Of course, {g\le h}. It seems that the set of such pairs {(g,h)} indeed describes the graphical completion of continuous functions.

For example, the limit of {f_n(x)=x^n} is described by the pair {g(x)\equiv 0}, {h(x)=\chi_{\{1\}}}. Geometrically, it’s a broken line with horizontal and vertical segments.

For another example, the limit of {f_n(x)=\sin^2 nx} is described by the pair {g(x)\equiv 0}, {h(x)\equiv 1}. Geometrically, it’s a square.

Stripey sines

Parsing calculus with regular expressions

As a service* to math students everywhere (especially those taking calculus), I started Mathematics.SE Index. The plan is to have a thematic catalog of common exercises in college-level mathematics, each linked to a solution posted on Math.SE.

As of now, the site has reasonably complete sections on Limits and Series, with a rudimentary section on binomial sums. All lists are automatically generated. Initial filtering was done with Data Explorer SQL query, using tags and keywords in question body. The query also took into account the view count (i.e., how often the problem is searched for), and the existence of upvoted answers.

The results of the query were processed with a Google Sheets script: a bunch of regular expressions extracted LaTeX markup with a desired pattern, checked its integrity [not of the academic kind], and transformed it into WordPress-compatible markup.
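
For the flavor of it, here is a hypothetical miniature of such a pattern in Python (the actual Google Sheets script is not reproduced here):

import re

# pick out \lim_{...} expressions from a LaTeX question body (illustration only)
text = r"Evaluate $\lim_{x\to 0} \frac{\sin x}{x}$ and $\lim_{n\to\infty} (1+1/n)^n$."
pattern = re.compile(r"\\lim_\{[^{}]*\}[^$]*")
for m in pattern.finditer(text):
    print(m.group(0))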

Plans for the near future: integrals (especially improper), basic proofs by induction, {\epsilon-\delta}, maybe some group theory and differential equations… depends on how easy it is to teach these topics to regular expressions.

(*) Or disservice, I wouldn’t know.