## Folded letters and uniformly bounded function sequences

We know that every bounded sequence of real numbers has a convergent subsequence. For a sequence of functions, say ${f_n\colon [0, 1]\to \mathbb R}$, the notion of boundedness can be stated as: there exists a constant ${M}$ such that ${|f_n(x)|\le M}$ for every ${n}$ and for all ${x\in [0, 1]}$. Such a sequence is called uniformly bounded.

Once we fix some point ${x\in [0, 1]}$, the boundedness assumption provides a subsequence ${\{f_{n_k}\}}$ which converges at that point. But since different points may require different subsequences, it is not obvious whether we can pick a subsequence ${\{f_{n_k}\}}$ which converges for all ${x\in [0, 1]}$. (Such a subsequence is called pointwise convergent.)

It is easy to give an example of a uniformly bounded sequence with no uniformly convergent subsequence (uniform convergence ${f_n\to f}$ requires ${\sup |f_n-f|\to 0}$, which is stronger than ${f_n(x) \to f(x)}$ for every ${x}$). Indeed, ${f_n(x) = x^n}$ does the job. This sequence is uniformly bounded (by ${M=1}$) and converges pointwise to a discontinuous function ${g}$ such that ${g(1)=1 }$ and ${g(x)=0}$ elsewhere. Any subsequence ${f_{n_k}}$ has the same pointwise limit ${g}$ and since ${g}$ is not continuous, the convergence cannot be uniform.
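This is easy to check numerically. A quick Python sketch (the grid and the name `g` are just for this demo): the sup-distance from ${f_n}$ to ${g}$ stays near 1 for every ${n}$, while the value at any fixed ${x<1}$ dies off.

```python
# f_n(x) = x^n is uniformly bounded and converges pointwise to g,
# yet sup|f_n - g| stays near 1, so the convergence is not uniform.
def g(x):
    return 1.0 if x == 1 else 0.0

grid = [i / 10000 for i in range(10001)]
for n in [1, 5, 25, 125]:
    sup = max(abs(x**n - g(x)) for x in grid)
    assert sup > 0.9          # the sup does not tend to 0
assert 0.5**125 < 1e-30       # but at each fixed x < 1 the values vanish
```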

But what would be an example of a uniformly bounded sequence of continuous functions with no pointwise convergent subsequence? In *Principles of Mathematical Analysis*, Rudin gives ${f_n(x) = \sin nx}$ as such an example, but then uses the Lebesgue Dominated Convergence Theorem to prove the non-existence of pointwise convergent subsequences. I do not want to use the DCT.

The simplest example I could think of is based on the letter-folding function ${F\colon [0, 1]\to [0, 1]}$ defined by

${\displaystyle F(x) = \begin{cases} 3x, & x\in [0, 1/3] \\ 2 - 3x, & x\in [1/3, 2/3] \\ 3x - 2, & x \in [2/3, 1] \end{cases} }$

or by a magic one-line formula if you prefer: ${F(x) = 3x - 1 - |3x-1| + |3x - 2|}$.
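If you distrust magic one-liners, the two definitions are easy to compare by machine. A quick Python check (function names are mine):

```python
# The piecewise definition of the letter-folding map F...
def F_piecewise(x):
    if x <= 1/3:
        return 3*x
    elif x <= 2/3:
        return 2 - 3*x
    else:
        return 3*x - 2

# ...and the one-line formula.
def F_formula(x):
    return 3*x - 1 - abs(3*x - 1) + abs(3*x - 2)

# They agree on a grid of points in [0, 1].
assert all(abs(F_piecewise(i/1000) - F_formula(i/1000)) < 1e-12
           for i in range(1001))
```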

Let ${\{f_n\}}$ be the sequence of the iterates of ${F}$, that is, ${f_0(x) = x}$ and ${f_{n+1}(x) = F(f_n(x))}$. This means ${f_1 = F}$, ${f_2 = F\circ F}$, ${f_3 = F\circ F \circ F}$, and so on.

By construction, the sequence ${\{f_n\}}$ is uniformly bounded (${M=1}$). It is somewhat similar to the example ${\{\sin nx\}}$ in that we have increasingly rapid oscillations. But the proof that ${\{f_n\}}$ has no pointwise convergent subsequence is elementary. It is based on the following observations.

(A) Suppose that ${\{a_j\}}$ is a sequence such that ${a_j\in \{0, 2\}}$ for each ${j \in \mathbb N}$. Then the number ${\displaystyle x = \sum_{j=1}^\infty \frac{a_j}{3^j}}$ satisfies ${x\in [0, 1/3]}$ if ${a_1=0}$ and ${x\in [2/3, 1]}$ if ${a_1 = 2}$.

The proof of (A) amounts to summing a geometric series. Incidentally, observe that ${a_1, a_2, \dots}$ are the digits of ${x}$ in base ${3}$.

(B) For ${x}$ as above we have ${\displaystyle F(x) = \sum_{j=1}^\infty \frac{a_{j+1}}{3^j}}$. In other words, ${F}$ shifts the ternary digits of ${x}$ to the left. As a consequence, ${f_n}$ shifts them to the left by ${n}$ places.
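The digit-shifting property (B) can be tested exactly with rational arithmetic. A small Python sketch (finite ternary expansions only, which suffices for the check):

```python
from fractions import Fraction

def F(x):
    # exact letter-folding map, evaluated on Fractions
    if x <= Fraction(1, 3):
        return 3*x
    elif x <= Fraction(2, 3):
        return 2 - 3*x
    else:
        return 3*x - 2

# A point of the Cantor set type: ternary digits from {0, 2}
# (a finite expansion, chosen arbitrarily for this demo).
digits = [2, 0, 2, 2, 0, 0, 2, 2, 2, 0, 2]
x = sum(Fraction(d, 3**(j+1)) for j, d in enumerate(digits))

# Applying F should shift the digits one place to the left.
shifted = sum(Fraction(d, 3**j) for j, d in enumerate(digits[1:], start=1))
assert F(x) == shifted
```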

(C) Given any subsequence ${\{f_{n_k}\}}$, let ${a_j = 2}$ if ${j = n_{2k} + 1}$ for some ${k}$, and ${a_j=0}$ otherwise. By part (B), ${\displaystyle f_{n_k}(x) = \sum_{j=1}^\infty \frac{a_{j + n_k}}{3^j} }$ which means the first ternary digit of ${f_{n_k}(x)}$ is ${a_{n_k + 1}}$. By construction, this digit is ${2}$ when ${k}$ is even, and ${0}$ when ${k}$ is odd. By part (A) we have ${f_{n_k}(x) \ge 2/3}$ when ${k}$ is even, and ${f_{n_k}(x) \le 1/3}$ when ${k}$ is odd. Thus, ${\{f_{n_k}(x)\}}$ does not converge. This completes the proof.

## Remarks

The set of all points ${x}$ of the form considered in (A), (B), (C), i.e., those with all ternary digits ${0}$ or ${2}$, is precisely the standard Cantor set ${C}$.

The function ${F}$ magnifies each half of ${C}$ by a factor of ${3}$ and maps it onto ${C}$.

The important part of the formula for ${F}$ is that ${F(x) = 3x\bmod 1}$ when ${x\in C}$. The rest of it could be some other continuous extension to ${[0, 1]}$.

A similar example could be based on the tent map ${T(x) = 1 - |2x-1|}$, whose graph is shown below.

However, in the case of the tent map it is more convenient to use the sequence of even iterates: ${T\circ T}$, ${T\circ T\circ T\circ T}$, and so on.

Indeed, since ${T(T(x)) = 4x \bmod 1}$ when ${x\in [0, 1/4]\cup [1/2, 3/4]}$, one can simply replace base ${3}$ with base ${4}$ in all of the above computations and arrive at the same conclusion, except the orbit of ${x}$ will now be jumping between the intervals ${[0, 1/4]}$ and ${[1/2, 3/4]}$.

The tent map is conjugate to the logistic map ${L(x) = 4x(1-x)}$, shown below. This conjugacy is ${L\circ \varphi = \varphi\circ T}$ where ${\varphi(x) = (1 - \cos \pi x)/2}$.

The conjugacy shows that the ${n}$th iterate ${L^{n}}$ is ${\varphi\circ T^n \circ \varphi^{-1}}$. Therefore, the sequence of the even iterates of ${L}$ has no pointwise convergent subsequence. This provides an example with smooth functions, even polynomials.
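The conjugacy is easy to confirm numerically; here is a short Python check of ${L\circ \varphi = \varphi\circ T}$ on a grid:

```python
import math

def T(x):   # tent map
    return 1 - abs(2*x - 1)

def L(x):   # logistic map
    return 4*x*(1 - x)

def phi(x): # conjugating map
    return (1 - math.cos(math.pi * x)) / 2

# Verify L(phi(x)) == phi(T(x)) at many points of [0, 1].
assert all(abs(L(phi(i/500)) - phi(T(i/500))) < 1e-12
           for i in range(501))
```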

One could use the sequence of all iterates (not just the even ones) in the constructions based on ${T}$ and ${L}$; it just takes a bit of extra work that I prefer to avoid.

## Iterating the logistic map: limsup of nonperiodic orbits

Last time we found that when a sequence with ${x_1\in (0,1)}$ and ${x_{n+1} = 4x_n(1-x_n)}$ does not become periodic, its upper limit ${\limsup x_n}$ must be at least ${\approx 0.925}$. This time we’ll see that ${\limsup x_n}$ can be as low as ${(2+\sqrt{3})/4\approx 0.933}$ and determine for which ${x_1}$ it is equal to 1.

The quadratic polynomial ${f(x)=4x(1-x)}$ maps the interval ${[0,1]}$ onto itself. Since the linear function ${g(x) = 1-2x}$ maps ${[0,1]}$ onto ${[-1,1]}$, it follows that the composition ${h=g\circ f\circ g^{-1}}$ maps ${[-1,1]}$ onto ${[-1,1]}$. This composition is easy to compute: ${h(x) = 2x^2-1 }$.

We want to know whether the iteration of ${f}$, starting from ${x_1}$, produces numbers arbitrarily close to ${1}$. Since ${f\circ f \circ \cdots \circ f = g^{-1}\circ h \circ h \circ \cdots \circ h\circ g}$ the goal is equivalent to finding whether the iteration of ${h}$, starting from ${g(x_1)}$, produces numbers arbitrarily close to ${g(1) = -1}$. To shorten formulas, let’s write ${h_n}$ for the ${n}$th iterate of ${h}$, for example, ${h_3 = h\circ h\circ h}$.

So far we traded one quadratic polynomial ${f}$ for another, ${h}$. But ${h}$ satisfies a nice identity: ${h(\cos t)=2\cos^2 t-1 = \cos(2t)}$, hence ${h_n(\cos t) = \cos (2^n t)}$ for all ${n\in\mathbb N}$. It’s convenient to introduce ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$, so that ${ h_n(g(x_1)) = h_n(\cos 2\pi \alpha ) = \cos(2^n\cdot 2\pi \alpha) }$.
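The angle-doubling identity ${h_n(\cos t) = \cos(2^n t)}$ can be sanity-checked in a few lines of Python (for small ${n}$ only, before floating-point error is amplified by the chaotic dynamics):

```python
import math

def h(x):
    return 2*x*x - 1

# Iterating h on cos(t) doubles the angle each time: h_n(cos t) = cos(2^n t).
t = 0.7
x = math.cos(t)
for n in range(1, 11):
    x = h(x)
    assert abs(x - math.cos(2**n * t)) < 1e-9
```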

The problem becomes to determine whether the numbers ${2^n\cdot 2\pi \alpha}$ come arbitrarily close to ${\pi}$, modulo an integer multiple of ${2\pi}$. Dividing by ${2\pi}$ rephrases this as: does the fractional part of ${2^n \alpha}$ come arbitrarily close to ${1/2}$?

A number that is close to ${1/2}$ has the binary expansion beginning either with ${0.01111111\dots}$ or with ${0.10000000\dots}$. Since the binary expansion of ${2^n\alpha}$ is just the binary expansion of ${\alpha}$ shifted ${n}$ digits to the left, the property ${\limsup x_n=1}$ is equivalent to the following: for every ${k\in\mathbb N}$ the binary expansion of ${\alpha}$ has infinitely many groups of the form “1 followed by k zeros” or “0 followed by k ones”.

A periodic expansion cannot have the above property; thus, ${\alpha}$ must be irrational. The property described above can then be simplified to “irrational and has arbitrarily long runs of the same digit”, since a long run of ${0}$s must be preceded by a ${1}$, and vice versa.

For example, combining the pairs 01 and 10 in some non-periodic way, we get an irrational number ${\alpha}$ such that the fractional part of ${2^n\alpha}$ does not get any closer to 1/2 than ${0.01\overline{10}_2 = 5/12}$ or ${0.10\overline{01}_2 = 7/12}$. Hence, ${\cos 2^n 2\pi \alpha \ge -\sqrt{3}/2}$, which leads to the upper bound ${x_n\le (2+\sqrt{3})/4\approx 0.933}$ for the sequence with the starting value ${x_1=(1-\cos\pi\alpha)/2}$.
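Here is a Python sketch of this construction. The particular non-periodic way of combining the blocks (block ${k}$ is “10” exactly when ${k}$ is a perfect square) is an arbitrary choice made for this demo; any non-periodic rule would do.

```python
import math
from fractions import Fraction

# Binary digits of alpha: concatenate the blocks "01" and "10" in a
# non-periodic way (block k is "10" iff k is a perfect square).
digits = "".join("10" if math.isqrt(k)**2 == k else "01"
                 for k in range(1, 300))

def tail_value(s):
    return int(s, 2) / 2**len(s)   # value of 0.s in binary

# The fractional part of 2^n * alpha begins with digits[n:].  Its distance
# from 1/2 should never drop below 1/12 = |5/12 - 1/2| (up to truncation).
min_dist = min(abs(tail_value(digits[n:n+50]) - 0.5)
               for n in range(len(digits) - 50))
assert min_dist > 1/12 - 1e-9
```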

Let us summarize the above observations about ${\limsup x_n}$.

Theorem: ${\limsup x_n=1}$ if and only if (A) the number ${\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}$ is irrational, and (B) the binary expansion of ${\alpha}$ has arbitrarily long runs of the same digit.

Intuitively, one expects that a number that satisfies (A) will also satisfy (B) unless it was constructed specifically to fail (B). But to verify that (B) holds for a given number is not an easy task.

As a bonus, let’s prove that for every rational number ${y\in (-1,1)}$, except 0, 1/2 and -1/2, the number ${\alpha = \frac{1}{\pi}\cos^{-1}y}$ is irrational. This will imply, in particular, that ${x_1=1/3}$ yields a non-periodic sequence. The proof follows a post by Robert Israel and requires a lemma (which could be replaced with an appeal to Chebyshev polynomials, but the lemma keeps things self-contained).

Lemma. For every ${n\in \mathbb N}$ there exists a monic polynomial ${P_n}$ with integer coefficients such that ${P_n(2 \cos t) = 2\cos nt }$ for all ${t}$.

Proof. Induction, the base case ${n=1}$ being ${P_1(x)=x}$. Assuming the result for integers ${\le n}$, we have ${2 \cos (n+1)t = e^{i(n+1)t} + e^{-i(n+1)t} }$ ${ = (e^{int} + e^{-int})(e^{it} + e^{-it}) - (e^{i(n-1)t} + e^{-i(n-1)t}) }$ ${ = P_n(2 \cos t) (2\cos t) - P_{n-1}(2\cos t) }$
which is a monic polynomial of ${2\cos t}$. ${\Box}$
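The recurrence hiding in the proof, ${P_{n+1}(x) = x P_n(x) - P_{n-1}(x)}$ with ${P_1(x) = x}$ (and ${P_0(x) = 2}$, since ${2\cos 0 = 2}$), is easy to test numerically:

```python
import math

def P(n, x):
    # Evaluate P_n(x) via the recurrence P_{n+1} = x*P_n - P_{n-1},
    # starting from P_0(x) = 2 and P_1(x) = x.
    p_prev, p = 2, x
    for _ in range(n - 1):
        p_prev, p = p, x*p - p_prev
    return p

# Check P_n(2 cos t) == 2 cos(nt) for several n.
t = 0.9
for n in range(1, 12):
    assert abs(P(n, 2*math.cos(t)) - 2*math.cos(n*t)) < 1e-9
```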

Suppose that there exists ${n}$ such that ${n\alpha \in\mathbb Z}$. Then ${2\cos(\pi n\alpha)=\pm 2}$. By the lemma, this implies ${P_n(2\cos(\pi \alpha)) =\pm 2}$, that is ${P_n(2y)=\pm 2}$. Since ${2y}$ is a rational root of the monic polynomial ${P_n \mp 2}$ with integer coefficients, the Rational Root Theorem implies that ${2y}$ is an integer. But ${|2y| < 2}$, and ${2y \in \{-1, 0, 1\}}$ is excluded by the assumption ${y\notin \{0, \pm 1/2\}}$; this is a contradiction. ${\Box}$

## A limsup exercise: iterating the logistic map

Define the sequence ${\{x_n\}}$ as follows: ${x_1=1/3}$ and ${x_{n+1} = 4x_n(1-x_n)}$ for ${n=1,2,\dots}$. What can we say about its behavior as ${n\rightarrow\infty}$?

The logistic map ${f(x)=4x(1-x)}$ leaves the interval [0,1] invariant (as a set), so ${0\le x_n\le 1}$ for all ${n}$. There are two fixed points: 0 and 3/4.

Can ${x_n}$ ever be 0? If ${n}$ is the first index at which this happens, then ${x_{n-1}}$ must be ${1}$. Working backwards, we find ${x_{n-2}=1/2}$, and ${x_{n-3}\in \{1/2 \pm \sqrt{2}/4\}}$. But this is impossible since all elements of the sequence are rational. Similarly, if ${n}$ is the first index at which ${x_n = 3/4}$, then ${x_{n-1}=1/4}$ and ${x_{n-2}\in \{1/2\pm \sqrt{3}/4\}}$, a contradiction again. Thus, the sequence never stabilizes.
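Exact rational arithmetic makes this easy to watch in Python (only a dozen steps, since the denominators grow roughly like ${3^{2^n}}$):

```python
from fractions import Fraction

# Iterate the logistic map exactly over the rationals, starting from 1/3.
x = Fraction(1, 3)
orbit = [x]
for _ in range(12):
    x = 4*x*(1 - x)
    orbit.append(x)

# The orbit stays in [0,1] and never hits the fixed points 0 and 3/4.
assert all(0 <= v <= 1 for v in orbit)
assert all(v not in (Fraction(0), Fraction(3, 4)) for v in orbit)
```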

If ${x_n}$ had a limit, it would have to be one of the two fixed points. But both are repelling: ${f'(x) = 4 - 8x}$, so ${|f'(0)|=4>1 }$ and ${|f'(3/4)| = 2 > 1}$. This means that a small nonzero distance to a fixed point will increase under iteration. The only way to converge to a repelling fixed point is to hit it directly, but we already know this does not happen. So the sequence ${\{x_n\}}$ does not converge.

But we can still consider its upper and lower limits. Let’s try to estimate ${S = \limsup x_n}$ from below. Since ${f(x)\ge x}$ for ${x\in [0,3/4]}$, the sequence ${\{x_n\}}$ increases as long as ${x_n\le 3/4}$. Since we know it doesn’t have a limit, it must eventually break this pattern, and therefore exceed 3/4. Thus, ${S\ge 3/4}$.

This can be improved. The second iterate ${f_2(x)=f(f(x))}$ satisfies ${f_2(x)\ge x}$ for ${x}$ between ${3/4}$ and ${a = (5+\sqrt{5})/8 \approx 0.9}$. So, once ${x_n>3/4}$ (which, by the above, happens infinitely often), the subsequence ${x_n, x_{n+2}, x_{n+4},\dots}$ increases until it reaches ${a}$. Hence ${S\ge a}$.

The bound ${\limsup x_n\ge a}$ is best possible if the only information about ${x_1}$ is that the sequence ${x_n}$ does not converge. Indeed, ${a}$ is a periodic point of ${f}$, with the corresponding iteration sequence ${\{(5+(-1)^n\sqrt{5})/8\}}$.

Further improvement is possible if we recall that our sequence is rational and hence cannot hit ${a}$ exactly. By doubling the number of iterations (so that the iterate still fixes ${a}$ but has positive derivative there) we arrive at the fourth iterate ${f_4}$. Then ${f_4(x)\ge x}$ for ${a\le x\le b}$, where ${b}$ is the next root of ${f_4(x)-x}$ after ${a}$, approximately ${0.925}$. Hence ${S\ge b}$.
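The thresholds ${a}$ and ${b}$ can be located numerically with bisection; a Python sketch (the helper names are mine):

```python
import math

def f(x):
    return 4*x*(1 - x)

def iterate(x, n):
    for _ in range(n):
        x = f(x)
    return x

def bisect(g, lo, hi, iters=80):
    # simple bisection for a sign change of g on [lo, hi]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# a: the root of f_2(x) - x just above 3/4; it equals (5 + sqrt 5)/8.
a = bisect(lambda x: iterate(x, 2) - x, 0.8, 0.95)
assert abs(a - (5 + math.sqrt(5))/8) < 1e-9

# b: the next root of f_4(x) - x after a, roughly 0.925.
b = bisect(lambda x: iterate(x, 4) - x, a + 1e-6, 0.95)
assert 0.92 < b < 0.93
```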

This is a colorful illustration of the estimation process (made with Sage): we are covering the line ${y=x}$ with the iterates of ${f}$, so that each subsequent one rises above the line the moment the previous one falls. This improves the lower bound on ${S}$ from 0.75 to 0.9 to 0.92.

Although this process can be continued, the gains diminish so rapidly that it seems unlikely one can get to 1 in this way. In fact, one cannot because we are not using any properties of ${x_1}$ other than “the sequence ${x_n}$ is not periodic.” And it’s not true that ${\limsup x_n = 1}$ for every non-periodic orbit of ${f}$. Let’s return to this later.

## From boring to puzzling in 30 iterative steps

The function ${f(x)=2\cos x }$ may be nice and important as part of the trigonometric basis, but there is nothing exciting in its graph:

Let’s look at its iterations ${f^n=f\circ f\circ \dots \circ f}$ where ${n }$ is the number of iterations, not an exponent. Here is the graph of ${f^{14}}$:

A rectangular pattern is already visible above; further iterations only make it stronger. For example, ${f^{30} }$:

It may be impossible to see on the graph, but the rectangles are slightly apart from one another (though of course they are connected by the graph of a continuous function). This is easier to see on the histogram of the values ${f^{n}(0) }$ for ${n=0,\dots, 10000 }$, which contains two small gaps in addition to a large one:

What goes on here? The range of ${f}$ on ${[-2,2]}$, as well as the range of any of its iterates, is of course connected: it is the closed interval ${[f^{2}(0),f(0)] = [2 \cos 2, 2]}$. But the second iterate ${f^2=f\circ f}$ also has two invariant subintervals, marked here by horizontal lines:

Namely, they are ${I_1=[f^{2}(0), f^{4}(0)]}$ and ${I_2=[f^{3}(0),2]}$. It is easy to see that ${f(I_1)=I_2}$ and ${f(I_2)=I_1}$. The gap between ${I_1}$ and ${I_2}$ contains the repelling fixed point of ${f}$, approximately ${x=1.03}$. Every orbit except for the fixed point itself is pushed away from this point and is eventually trapped in the cycle between ${I_1}$ and ${I_2}$.
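Both mapping claims are easy to check by sampling, with the interval endpoints computed as iterates of ${0}$. A Python sketch:

```python
import math

def f(x):
    return 2*math.cos(x)

def it(x, n):
    for _ in range(n):
        x = f(x)
    return x

# Endpoints of the invariant intervals from the text.
I1 = (it(0, 2), it(0, 4))   # [f^2(0), f^4(0)]
I2 = (it(0, 3), 2.0)        # [f^3(0), 2]

# f maps I1 into I2 and I2 into I1 (checked on sample points).
for i in range(101):
    x1 = I1[0] + (I1[1] - I1[0]) * i / 100
    x2 = I2[0] + (I2[1] - I2[0]) * i / 100
    assert I2[0] - 1e-9 <= f(x1) <= I2[1] + 1e-9
    assert I1[0] - 1e-9 <= f(x2) <= I1[1] + 1e-9
```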

But there is more. A closer look at the fourth iterate reveals smaller invariant subintervals of ${f^4}$. Here is what it does on ${I_2}$:

Here the gap contains a repelling fixed point of ${f^2}$, approximately ${1.8}$. The invariant subintervals of ${I_2}$ are ${I_{21}=[f^{3}(0), f^{7}(0)]}$ and ${I_{22}=[f^9(0), 2]}$. Also, ${I_1}$ contains invariant subintervals ${I_{11}=[f^{2}(0), f^{6}(0)]}$ and ${I_{12}=[f^8(0), f^4(0)]}$. These are the projections of the rectangles in the graph of ${f^{30}}$ onto the vertical axis.

No more such splitting occurs. The histogram of the values of the iterates of ${f}$ is indeed supported on four disjoint intervals. Can one get a Cantor-type set in this way, starting from some boring function?

## Fun with TI-83: billions upon billions of cosines

Okay, maybe not billions. But by taking cosines repeatedly, one can find the solution of the equation ${\cos x = x}$ with high precision in under a minute.

Step 1: Enter any number, for example 0, and press Enter.
Step 2: Enter cos(Ans) and press Enter

Step 3: Keep pushing Enter. (Unfortunately, press-and-hold-to-repeat does not work on TI-83). This will repeatedly execute the command cos(Ans).

After a few iterations, the numbers begin to settle down:

and eventually stabilize at 0.7390851332

Explanation: the graph of cosine meets the line ${y = x}$ at exactly one point; this is the unique fixed point of the function ${f(x)=\cos x}$.

Since the derivative ${f'(x)=-\sin x}$ at the fixed point is less than 1 in absolute value, the fixed point is attracting.
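The whole calculator experiment fits in a few lines of Python:

```python
import math

# Repeatedly pressing cos(Ans) is just fixed-point iteration of cosine.
x = 0.0
for _ in range(100):
    x = math.cos(x)

# The iteration settles at the unique solution of cos(x) = x.
assert abs(x - 0.7390851332) < 1e-9
assert abs(math.cos(x) - x) < 1e-12
```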

Now try the same with the equation ${10 \cos x =x}$.

This time, the numbers flat out refuse to converge:

Explanation: the graph of ${f(x)=10\cos x}$ meets the line ${y = x}$ at seven points; thus, this function has seven fixed points.

And it so happens that ${|f'(x)|>1}$ at each of those fixed points. This makes them repelling. The sequence has nowhere to converge, because every candidate for the limit pushes it away. All that’s left to it is to jump chaotically around the interval ${[-10,10]}$. Here are the first 1024 terms, plotted with OpenOffice:

Clearly, the distribution of the sequence is not uniform. I divided the interval ${[-10,10]}$ into subintervals of length ${0.05}$ and counted the number of terms falling into each.
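The experiment is easy to reproduce in Python. The final assertion uses a crude threshold, chosen for this demo, to certify that the distribution is far from uniform:

```python
import math

# Iterate f(x) = 10 cos x from 0 and tally the values into bins of
# length 0.05, mimicking the histogram described above.
x = 0.0
values = []
for _ in range(10000):
    x = 10 * math.cos(x)
    values.append(x)

bins = [0] * 400                        # covers [-10, 10) in steps of 0.05
for v in values:
    bins[min(int((v + 10) / 0.05), 399)] += 1

# No convergence: the orbit keeps visiting points far apart.
assert max(values) - min(values) > 10
# Non-uniformity: some bin holds well over the uniform share of 25 terms.
assert max(bins) > 50
```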

What is going on here? Stay tuned.