## Fundamental limits as differences of reciprocals

The three fundamental limits (as they were taught to me) are

${\displaystyle \lim_{x\to 0} \frac{e^x-1}{x} = \lim_{x\to 0} \frac{\log(1+x)}{x} = \lim_{x\to 0} \frac{\sin x}{x} = 1}$

(exponential, logarithmic, trigonometric). Usually they come with pictures showing that the graph of the numerator in each limit, say ${y=f(x)}$, is indeed close to the line ${y=x}$.

But there are so many different degrees of “close”: for example, ${0.0103\approx 0.01}$ but the reciprocals of these numbers differ by almost ${3}$. As a stress-test of the approximation ${f(x)\approx x}$, let us consider the behavior of ${\displaystyle \frac{1}{f(x)} - \frac{1}{x}}$ for each of the fundamental limits.
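As a quick numerical sanity check, here is a small Python sketch (standard `math` module only) evaluating the three differences of reciprocals at a small ${x}$; as the following sections show, they settle near ${-1/2}$, ${1/2}$ and ${0}$ respectively.

```python
import math

def recip_diff(f, x):
    # 1/f(x) - 1/x, the "stress test" of the approximation f(x) ~ x
    return 1.0 / f(x) - 1.0 / x

x = 1e-4
exp_diff = recip_diff(lambda t: math.exp(t) - 1.0, x)  # near -1/2
log_diff = recip_diff(lambda t: math.log(1.0 + t), x)  # near +1/2
sin_diff = recip_diff(math.sin, x)                     # near 0
print(exp_diff, log_diff, sin_diff)
```

So the three "equally good" approximations behave quite differently under this stress test.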

## Exponential limit

Expressed as a difference: ${\displaystyle \frac{1}{e^x - 1} - \frac{1}{x}}$

Always negative, which is obvious once we recall that the graph of ${e^x-1}$ lies above its tangent line. The graph has central symmetry about ${(0, -1/2)}$ which is not as obvious but is an easy algebra exercise. Since the asymptote on the right is ${y=0}$, the other asymptote is ${y=-1}$. This looks a lot like the logistic function ${1/(1+\exp(-x))}$ … but it is not, because the logistic function approaches its asymptotes exponentially fast, while our function does so at the rate ${1/x}$.
For comparison, here is the logistic curve ${\displaystyle y = \frac{1}{1+\exp(-x/3)} - 1}$ (in green) scaled to match the behavior at ${0}$.
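The central symmetry amounts to the identity ${d(x)+d(-x)=-1}$ for ${d(x) = \frac{1}{e^x-1}-\frac{1}{x}}$ (the algebra exercise mentioned above), which is easy to confirm numerically. A Python sketch:

```python
import math

def d(x):
    return 1.0 / (math.exp(x) - 1.0) - 1.0 / x

# central symmetry about (0, -1/2) means d(x) + d(-x) = -1 identically
for x in [0.1, 1.0, 5.0, 20.0]:
    assert abs(d(x) + d(-x) + 1.0) < 1e-9

# the two horizontal asymptotes: d -> 0 on the right, d -> -1 on the left,
# approached only at the slow rate 1/x
assert abs(d(50.0)) < 0.03 and abs(d(-50.0) + 1.0) < 0.03
```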

This could potentially be useful if one needs a logistic-type function that behaves like a rational function at infinity. Simply using a rational function for this purpose would not do: a rational function cannot have two distinct horizontal asymptotes.

## Logarithmic limit

${\displaystyle \frac{1}{\log(x+1)} - \frac{1}{x}}$

This one is always positive, and has a vertical asymptote at ${x=-1}$ in addition to the horizontal asymptote at ${0}$. At a glance it may look like a shifted/scaled copy of the hyperbola ${y=1/x}$. Indeed, ${y = (1+x/3)/(2+x)}$ is a decent approximation to it, shown in green below.
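How decent is the approximation? Here is a quick Python comparison at a few arbitrary points (the agreement is exact in value and first derivative at ${0}$; away from ${0}$ it is merely good):

```python
import math

def diff(x):
    # the logarithmic difference of reciprocals
    return 1.0 / math.log(1.0 + x) - 1.0 / x

def approx(x):
    # the rational approximation from the text
    return (1.0 + x / 3.0) / (2.0 + x)

# compare at a few arbitrary points of (-1, infinity)
for x in [-0.5, 0.5, 1.0, 3.0]:
    print(x, diff(x), approx(x), abs(diff(x) - approx(x)))
```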

## Trigonometric limit

${\displaystyle \frac{1}{\sin x} - \frac{1}{x}}$

Unlike in the previous cases, the difference of reciprocals vanishes at ${0}$, because the approximation ${\sin x \approx x}$ is of higher order: the error term is cubic rather than quadratic. The graph looks like the tangent function but it cannot be exactly that, since the nearest vertical asymptotes are ${x=\pm \pi}$ rather than ${x=\pm \pi/2}$. (Not to mention other reasons such as non-periodicity.) A stretched and rescaled tangent, namely ${\displaystyle \frac{1}{3}\tan \frac{x}{2}}$, sort of fits:

## Bonus limit

${\tan x \approx x}$, with the reciprocal difference being ${\displaystyle \frac{1}{\tan x} - \frac{1}{x}}$.

The limit ${\displaystyle \lim_{x\to 0} \frac{\tan x}{x} = 1}$ adds nothing new compared to the previous one, since ${\cos 0=1}$. But the difference of reciprocals is another story. For one thing, its principal error term is twice as large as for the sine limit: ${-x/3}$ versus ${x/6}$. Accordingly, the graph looks more like ${\displaystyle -\frac{2}{3}\tan \frac{x}{2}}$.

## Taylor series

All of the above differences have derivatives of all orders at the origin, which is not easy to prove with the standard calculus tools. Complex analysis makes the situation clear: the reciprocal of a holomorphic function with a zero at ${z=0}$ can be expanded into a Laurent series, and subtracting ${1/z}$ eliminates the principal part of that series, leaving a convergent power series, i.e., another holomorphic function. Let us take a look at these series:

${\displaystyle \frac{1}{e^x - 1} - \frac{1}{x} = - \frac{1}{2} + \frac{x}{12} - \frac{x^{3}}{720} + \frac{x^{5}}{30240} - \frac{x^{7}}{1209600} + \frac{x^{9}}{47900160} -\cdots }$

Nice, the coefficients have alternating signs and all of them have numerator ${1}$… oh no, the next term is ${\displaystyle - \frac{691 x^{11}}{1307674368000}}$. The signs do continue to alternate, though. Apart from the constant term, only odd powers of ${x}$ appear, in accordance with the central symmetry noted above. The coefficient of ${x^{n-1}}$ in this series is ${B_n/n!}$ where ${B_n}$ is the ${n}$th Bernoulli number. These are the “modern” Bernoulli numbers, with ${B_1 = -1/2}$ rather than ${1/2}$.
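The Bernoulli claim can be checked with exact arithmetic. A Python sketch, computing ${B_n}$ from the standard recurrence ${B_m = -\frac{1}{m+1}\sum_{k=0}^{m-1}\binom{m+1}{k}B_k}$:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    # B_0..B_n in the modern convention (B_1 = -1/2), via the
    # recurrence B_m = -1/(m+1) * sum_{k<m} C(m+1, k) B_k
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        B[m] = -Fraction(1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m))
    return B

B = bernoulli(12)
assert B[1] == Fraction(-1, 2)
# coefficient of x^{n-1} in 1/(e^x - 1) - 1/x equals B_n / n!
assert B[2] / factorial(2) == Fraction(1, 12)
assert B[4] / factorial(4) == Fraction(-1, 720)
assert B[12] / factorial(12) == Fraction(-691, 1307674368000)
```

In particular the ugly ${691}$ term above comes straight from ${B_{12} = -691/2730}$.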

${\displaystyle \frac{1}{\log(x+1)} - \frac{1}{x} = \frac{1}{2} - \frac{x}{12} + \frac{x^{2}}{24} - \frac{19 x^{3}}{720} + \frac{3 x^{4}}{160} - \frac{863 x^{5}}{60480} + \cdots }$

Also alternating but not omitting even powers, and not decaying nearly as fast as the coefficients of the previous series. (Of course: this function has a singularity at ${-1}$ while the nearest singularities of the exponential thing are at ${\pm 2\pi i}$.)  These are Gregory coefficients, which according to Wikipedia “often appear in works of modern authors who do not recognize them”. I would not recognize them either without OEIS.

${\displaystyle \frac{1}{\sin x} - \frac{1}{x} = \frac{x}{6} + \frac{7 x^{3}}{360} + \frac{31 x^{5}}{15120} + \frac{127 x^{7}}{604800} + \frac{73 x^{9}}{3421440} + \cdots }$

These are all positive for some reason.

${\displaystyle \frac{1}{\tan x} - \frac{1}{x} = - \frac{x}{3} - \frac{x^{3}}{45} - \frac{2 x^{5}}{945} - \frac{x^{7}}{4725} - \frac{2 x^{9}}{93555} - \cdots }$

These are all negative for some reason.
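Both trigonometric series are easy to check numerically against their partial sums; here is a Python sketch (the omitted tails begin with the ${x^9}$ terms, so at ${x=0.3}$ the agreement should be well below ${10^{-8}}$):

```python
import math

x = 0.3
csc_diff = 1.0 / math.sin(x) - 1.0 / x
csc_partial = x/6 + 7*x**3/360 + 31*x**5/15120 + 127*x**7/604800
cot_diff = 1.0 / math.tan(x) - 1.0 / x
cot_partial = -x/3 - x**3/45 - 2*x**5/945 - x**7/4725

assert abs(csc_diff - csc_partial) < 1e-8
assert abs(cot_diff - cot_partial) < 1e-8
```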

## Iterating the logistic map: limsup of nonperiodic orbits

Last time we found that when a sequence with ${x_1\in (0,1)}$ and ${x_{n+1} = 4x_n(1-x_n)}$ does not become periodic, its upper limit ${\limsup x_n}$ must be at least ${\approx 0.925}$. This time we’ll see that ${\limsup x_n}$ can be as low as ${(2+\sqrt{3})/4\approx 0.933}$ and determine for which ${x_1}$ it is equal to 1.

The quadratic polynomial ${f(x)=4x(1-x)}$ maps the interval ${[0,1]}$ onto itself. Since the linear function ${g(x) = 1-2x}$ maps ${[0,1]}$ onto ${[-1,1]}$, it follows that the composition ${h=g\circ f\circ g^{-1}}$ maps ${[-1,1]}$ onto ${[-1,1]}$. This composition is easy to compute: ${h(x) = 2x^2-1 }$.
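The conjugation is easy to sanity-check in code: ${h = g\circ f\circ g^{-1}}$ is equivalent to ${g\circ f = h\circ g}$, which should hold identically on ${[0,1]}$. A Python sketch:

```python
def f(x):
    # logistic map on [0, 1]
    return 4.0 * x * (1.0 - x)

def g(x):
    # linear conjugacy from [0, 1] onto [-1, 1] (orientation-reversing)
    return 1.0 - 2.0 * x

def h(x):
    # the conjugated map 2x^2 - 1 on [-1, 1]
    return 2.0 * x * x - 1.0

# h = g o f o g^{-1} is equivalent to g o f = h o g
for k in range(11):
    x = k / 10.0
    assert abs(g(f(x)) - h(g(x))) < 1e-12
```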

We want to know whether the iteration of ${f}$, starting from ${x_1}$, produces numbers arbitrarily close to ${1}$. Since ${f\circ f \circ \cdots \circ f = g^{-1}\circ h \circ h \circ \cdots \circ h\circ g}$ the goal is equivalent to finding whether the iteration of ${h}$, starting from ${g(x_1)}$, produces numbers arbitrarily close to ${g(1) = -1}$. To shorten formulas, let’s write ${h_n}$ for the ${n}$th iterate of ${h}$, for example, ${h_3 = h\circ h\circ h}$.

So far we traded one quadratic polynomial ${f}$ for another, ${h}$. But ${h}$ satisfies a nice identity: ${h(\cos t)=2\cos^2 t-1 = \cos(2t)}$, hence ${h_n(\cos t) = \cos (2^n t)}$ for all ${n\in\mathbb N}$. It’s convenient to introduce ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$, so that ${ h_n(g(x_1)) = h_n(\cos \pi \alpha ) = \cos(2^n \pi \alpha) }$.

The problem becomes to determine whether the numbers ${2^n \pi \alpha}$ come arbitrarily close to ${\pi}$, modulo an integer multiple of ${2\pi}$; equivalently, whether ${2^n\alpha}$ comes arbitrarily close to ${1}$ modulo ${2}$. Since ${2^n\alpha \bmod 2 = 2\,(2^{n-1}\alpha \bmod 1)}$ and ${n}$ ranges over all of ${\mathbb N}$, this rephrases as: does the fractional part of ${2^n \alpha}$ come arbitrarily close to ${1/2}$?

A number that is close to ${1/2}$ has the binary expansion beginning either with ${0.01111111\dots}$ or with ${0.10000000\dots}$. Since the binary expansion of ${2^n\alpha}$ is just the binary expansion of ${\alpha}$ shifted ${n}$ digits to the left, the property ${\limsup x_n=1}$ is equivalent to the following: for every ${k\in\mathbb N}$ the binary expansion of ${\alpha}$ has infinitely many groups of the form “1 followed by k zeros” or “0 followed by k ones”.

A periodic expansion cannot have the above property; thus, ${\alpha}$ must be irrational. The property described above can then be simplified to “irrational and has arbitrarily long runs of the same digit”, since a long run of ${0}$s will be preceded by a ${1}$, and vice versa.

For example, combining the pairs 01 and 10 in some non-periodic way, we get an irrational number ${\alpha}$ such that the fractional part of ${2^n\alpha}$ does not get any closer to ${1/2}$ than ${0.01\overline{10}_2 = 5/12}$ or ${0.10\overline{01}_2 = 7/12}$. Hence, ${\cos (2^n \pi \alpha) \ge -\sqrt{3}/2}$, which leads to the upper bound ${x_n\le (2+\sqrt{3})/4\approx 0.933}$ for the sequence with the starting value ${x_1=(1-\cos\pi\alpha)/2}$.
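Here is a numerical sanity check of this construction, as a Python sketch. The blocks 01 and 10 are ordered by the Thue–Morse sequence (my choice; any non-periodic ordering works), and every binary shift of the resulting digit string indeed stays at distance at least ${1/12}$ from ${1/2}$:

```python
def thue_morse(k):
    # parity of the number of ones in the binary expansion of k
    return bin(k).count("1") % 2

# binary digits of alpha: the k-th pair is "01" or "10",
# chosen non-periodically via the Thue-Morse sequence
digits = []
for k in range(200):
    digits.extend([0, 1] if thue_morse(k) == 0 else [1, 0])

def frac_shift(n, precision=60):
    # fractional part of 2^n * alpha, read off the digit string
    return sum(digits[n + i] * 2.0 ** -(i + 1) for i in range(precision))

# every shift stays at distance >= 1/12 from 1/2 (up to truncation error)
for n in range(300):
    assert abs(frac_shift(n) - 0.5) >= 1.0 / 12.0 - 1e-9
```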

Let us summarize the above observations about ${\limsup x_n}$.

Theorem: ${\limsup x_n=1}$ if and only if (A) the number ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$ is irrational, and (B) the binary expansion of ${\alpha}$ has arbitrarily long runs of the same digit.

Intuitively, one expects that a number that satisfies (A) will also satisfy (B) unless it was constructed specifically to fail (B). But to verify that (B) holds for a given number is not an easy task.

As a bonus, let’s prove that for every rational number ${y\in (-1,1)}$, except 0, 1/2 and -1/2, the number ${\alpha = \frac{1}{\pi}\cos^{-1}y}$ is irrational. This will imply, in particular, that ${x_1=1/3}$ yields a non-periodic sequence. The proof follows a post by Robert Israel and requires a lemma (which could be replaced with an appeal to Chebyshev polynomials, but the lemma keeps things self-contained).

Lemma. For every ${n\in \mathbb N}$ there exists a monic polynomial ${P_n}$ with integer coefficients such that ${P_n(2 \cos t) = 2\cos nt }$ for all ${t}$.

Proof. Induction, the base cases ${n=1}$ and ${n=2}$ being ${P_1(x)=x}$ and ${P_2(x)=x^2-2}$. Assuming the result for integers up to ${n}$, we have ${2 \cos (n+1)t = e^{i(n+1)t} + e^{-i(n+1)t} }$ ${ = (e^{int} + e^{-int})(e^{it} + e^{-it}) - (e^{i(n-1)t} + e^{-i(n-1)t}) }$ ${ = P_n(2 \cos t) (2\cos t) - P_{n-1}(2\cos t) }$
which is a monic polynomial of ${2\cos t}$. ${\Box}$
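The recurrence hidden in the proof, ${P_{n+1}(x) = x P_n(x) - P_{n-1}(x)}$ seeded by ${P_0(x)=2}$ and ${P_1(x)=x}$, makes the lemma easy to check numerically. A Python sketch:

```python
import math

def P(n):
    # coefficient lists (lowest degree first) for the polynomials with
    # P_n(2 cos t) = 2 cos(nt); recurrence P_{n+1} = x*P_n - P_{n-1}
    p_prev = [2]     # P_0(x) = 2
    p_curr = [0, 1]  # P_1(x) = x
    if n == 0:
        return p_prev
    for _ in range(n - 1):
        shifted = [0] + p_curr  # multiply P_n by x
        padded = p_prev + [0] * (len(shifted) - len(p_prev))
        p_prev, p_curr = p_curr, [a - b for a, b in zip(shifted, padded)]
    return p_curr

def eval_poly(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

for n in range(1, 8):
    for t in [0.1, 0.7, 2.0]:
        assert abs(eval_poly(P(n), 2.0 * math.cos(t)) - 2.0 * math.cos(n * t)) < 1e-9
```

For instance `P(3)` comes out as ${x^3-3x}$, matching the triple-angle formula.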

Suppose that there exists ${n}$ such that ${n\alpha \in\mathbb Z}$. Then ${2\cos(\pi n\alpha)=\pm 2}$. By the lemma, this implies ${P_n(2\cos(\pi \alpha)) =\pm 2}$, that is ${P_n(2y)=\pm 2}$. Since ${2y}$ is a rational root of the monic integer polynomial ${P_n(x)\mp 2}$, the Rational Root Theorem implies that ${2y}$ is an integer. But ${2y\in(-2,2)}$, so ${2y\in\{-1,0,1\}}$, that is ${y\in\{0,\pm 1/2\}}$, contrary to our assumption. Hence ${n\alpha\notin\mathbb Z}$ for every ${n}$, and ${\alpha}$ is irrational. ${\Box}$

## A limsup exercise: iterating the logistic map

Define the sequence ${\{x_n\}}$ as follows: ${x_1=1/3}$ and ${x_{n+1} = 4x_n(1-x_n)}$ for ${n=1,2,\dots}$. What can we say about its behavior as ${n\rightarrow\infty}$?

The logistic map ${f(x)=4x(1-x)}$ leaves the interval [0,1] invariant (as a set), so ${0\le x_n\le 1}$ for all ${n}$. There are two fixed points: 0 and 3/4.

Can ${x_n}$ ever be 0? If ${n}$ is the first index this happens, then ${x_{n-1}}$ must be ${1}$. Working backwards, we find ${x_{n-2}=1/2}$, and ${x_{n-3}\in \{1/2 \pm \sqrt{2}/4\}}$. But this is impossible since all elements of the sequence are rational. Similarly, if ${n}$ is the first index when ${x_n = 3/4}$, then ${x_{n-1}=1/4}$ and ${x_{n-2}\in \{1/2\pm \sqrt{3}/4\}}$, a contradiction again. Thus, the sequence never stabilizes.
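The rationality argument can be checked with exact arithmetic. In fact, iterating ${x_1=1/3}$ with exact fractions, the reduced denominator is always a power of ${3}$ (a small bonus observation, not needed for the argument above), so irrational values like ${1/2+\sqrt{2}/4}$ are clearly out of reach. A Python sketch:

```python
from fractions import Fraction

x = Fraction(1, 3)
for _ in range(12):
    x = 4 * x * (1 - x)
    assert x != 0 and x != Fraction(3, 4)  # never hits a fixed point
    # in lowest terms the denominator is a pure power of 3
    d = x.denominator
    while d % 3 == 0:
        d //= 3
    assert d == 1
print(x.denominator == 3 ** (2 ** 12))  # True: the power doubles each step
```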

If ${x_n}$ had a limit, it would have to be one of the two fixed points. But both are repelling: ${f'(x) = 4 - 8x}$, so ${|f'(0)|=4>1 }$ and ${|f'(3/4)| = 2 > 1}$. This means that a small nonzero distance to a fixed point will increase under iteration. The only way to converge to a repelling fixed point is to hit it directly, but we already know this does not happen. So the sequence ${\{x_n\}}$ does not converge.

But we can still consider its upper and lower limits. Let’s try to estimate ${S = \limsup x_n}$ from below. Since ${f(x)\ge x}$ for ${x\in [0,3/4]}$, the sequence ${\{x_n\}}$ increases as long as ${x_n\le 3/4}$. Since we know it doesn’t have a limit, it must eventually break this pattern, and therefore exceed 3/4. Thus, ${S\ge 3/4}$.

This can be improved. The second iterate ${f_2(x)=f(f(x))}$ satisfies ${f_2(x)\ge x}$ for ${x}$ between ${3/4}$ and ${a = (5+\sqrt{5})/8 \approx 0.9}$. So, once ${x_n>3/4}$ (which, by above, happens infinitely often), the subsequence ${x_n, x_{n+2}, x_{n+4},\dots}$ increases until it reaches ${a}$. Hence ${S\ge a}$.

The bound ${\limsup x_n\ge a}$ is best possible if the only information about ${x_1}$ is that the sequence ${x_n}$ does not converge. Indeed, ${a}$ is a periodic point of ${f}$, with the corresponding iteration sequence ${\{(5+ (-1)^n\sqrt{5})/8\}}$.

Further improvement is possible if we recall that our sequence is rational and hence cannot hit ${a}$ exactly. Doubling the number of iterations (so that the iterate still fixes ${a}$, but now has positive derivative there) brings us to the fourth iterate ${f_4}$. Then ${f_4(x)\ge x}$ for ${a\le x\le b}$, where ${b }$ is the next root of ${f_4(x)-x}$ after ${a}$, approximately ${0.925}$. Hence ${S\ge b}$.
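The root ${b}$ is easy to locate numerically, e.g. by bisection on ${f_4(x)-x}$; a Python sketch (the bracketing interval ${[0.92, 0.93]}$ is chosen by inspection of the sign of ${f_4(x)-x}$):

```python
def f(x):
    return 4.0 * x * (1.0 - x)

def f4(x):
    # fourth iterate of the logistic map
    for _ in range(4):
        x = f(x)
    return x

# f4(x) - x is positive on (a, b) and negative just past b,
# so bisection on [0.92, 0.93] converges to b
lo, hi = 0.92, 0.93
for _ in range(60):
    mid = (lo + hi) / 2.0
    if f4(mid) - mid > 0.0:
        lo = mid
    else:
        hi = mid
b = (lo + hi) / 2.0
print(b)  # approximately 0.925
```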

This is a colorful illustration of the estimation process (made with Sage): we are covering the line ${y=x}$ with the iterates of ${f}$, so that each subsequent one rises above the line the moment the previous one falls. This improves the lower bound on ${S}$ from 0.75 to 0.9 to 0.92.

Although this process can be continued, the gains diminish so rapidly that it seems unlikely one can get to 1 in this way. In fact, one cannot because we are not using any properties of ${x_1}$ other than “the sequence ${x_n}$ is not periodic.” And it’s not true that ${\limsup x_n = 1}$ for every non-periodic orbit of ${f}$. Let’s return to this later.

## Alternating lacunary series and 1-1+1-1+1-1+…

The series ${1-1+1-1+1-1+\cdots}$ diverges. A common way to make it convergent is to replace each ${1}$ with a power of ${x}$; the new series will converge when ${|x|<1}$ and maybe its sum will have a limit as ${x\rightarrow 1}$. And indeed,

$\displaystyle x-x^2+x^3-x^4+x^5-x^6+\cdots = \frac{x}{1+x}$

which tends to ${1/2}$ as ${x}$ approaches ${1}$ from the left.

Things get more interesting if instead of consecutive integers as exponents, we use consecutive powers of ${2}$:

$\displaystyle f(x) = x-x^2+x^4-x^8+x^{16} -x^{32} +\cdots$

On most of the interval ${(0,1)}$ it behaves just like the previous one:

But there appears to be a little blip near ${1}$. Let’s zoom in:

And zoom in more:

Still there.

This function was considered by Hardy in his 1907 paper Theorems concerning infinite series. On pages 92–93 he shows that it “oscillates between finite limits of indetermination for ${x=1}$”. There is also a footnote: “The simple proof given above was shown to me by Mr. J. H. Maclagan-Wedderburn. I had originally obtained the result by means of a contour integral.”

Okay, but what are these “finite limits of indetermination”? The alternating series estimate shows ${0 < f(x) < x}$ for ${x\in (0,1)}$, but the above plots suggest that ${f}$ oscillates between much tighter bounds. Let’s call them ${A= \liminf_{x\rightarrow 1-} f(x)}$ and ${B=\limsup_{x\rightarrow 1-} f(x)}$.

Since ${f(x)+f(x^2)\equiv x}$ (the series telescopes), it follows that ${f(x)+f(x^2)\rightarrow 1}$ as ${x\rightarrow 1^-}$. Hence, ${B = \limsup_{x\rightarrow 1-}(1-f(x^2)) = 1-A}$. In other words, ${A}$ and ${B}$ are symmetric about ${1/2}$. But what are they?
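The telescoping identity is easy to verify on truncated sums; a Python sketch (60 terms is far more than double precision can distinguish near ${1}$):

```python
def f(x, terms=60):
    # partial sum of x - x^2 + x^4 - x^8 + ... (exponents are powers of 2)
    return sum((-1) ** k * x ** (2 ** k) for k in range(terms))

# f(x) + f(x^2) telescopes to x, up to one term beyond the truncation
for x in [0.9, 0.99, 0.999]:
    assert abs(f(x) + f(x * x) - x) < 1e-10
```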

I don’t have an answer, but here is a simple estimate. Let ${g(x)=x-x^2}$ and observe that

$\displaystyle f(x) = g(x) + g(x^4)+g(x^{16}) + g(x^{64})+\dots \qquad \qquad (1)$

The function ${g}$ is not hard to understand: its graph is a parabola.

Since ${g}$ is positive on ${(0,1)}$, each term of the sum (1) gives a lower bound for ${f}$. A fixed term is useless for this purpose near ${1}$, since it vanishes there. But we can pick ${n}$ in ${g(x^{4^n})}$ depending on ${x}$.

Let ${x_0\in(0,1)}$ be the unique solution of the equation ${x_0^4=1-x_0}$. It could be written down explicitly, but this is not a pleasant experience; numerically ${x_0\approx 0.7245}$. For every ${x>x_0}$ there is an integer ${n\ge 1}$ such that ${x^{4^n}\in [1-x_0,x_0]}$, namely the smallest integer such that ${x^{4^n} \le x_0}$. Hence,

$\displaystyle f(x)> g(x^{4^n})\ge \min_{[1-x_0,\,x_0]}g = x_0-x_0^2>0.1996 \qquad \qquad (2)$

which gives a nontrivial lower bound ${A>0.1996}$ and symmetrically ${B<0.8004}$. Frustratingly, this falls just short of the neat values ${1/5}$ and ${4/5}$.
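For the record, here is a Python sketch computing ${x_0}$ by bisection (the function ${x^4+x-1}$ is increasing, so the sign test is reliable) together with the resulting bound:

```python
def g(t):
    return t - t * t

# bisection for x0, the root of x^4 = 1 - x in (0, 1)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if mid ** 4 < 1.0 - mid:
        lo = mid
    else:
        hi = mid
x0 = (lo + hi) / 2.0
bound = g(x0)  # the lower bound x0 - x0^2 from (2)
print(x0, bound)  # approximately 0.7245 and 0.1996
```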

One can do better than (2) by using more terms of the series (1). For example, study the polynomial ${g(t)+g(t^4)}$ and find a suitable interval ${[t_0^4,t_0]}$ on which its minimum is large (such an interval will no longer be symmetric). Or use ${3,4,5...}$ consecutive terms of the series… which quickly gets boring. This approach gives arbitrarily close approximations to ${A}$ and ${B}$, but does not tell us what these values really are.