The Kolakoski-Cantor set

A 0-1 sequence can be interpreted as a point in the interval [0,1]. But this makes the long-term behavior of the sequence practically invisible due to the limited resolution of our screens (and eyes). To make it visible, we can also plot the points obtained by shifting the binary sequence to the left (the Bernoulli shift, which also goes by many other names). The resulting orbit is often dense in the interval, which doesn’t really help us visualize any patterns. But sometimes we get an interesting complex structure.

[Figure: The Kolakoski-Cantor set, KC]

The vertical axis here is the time parameter, the number of dyadic shifts. The 0-1 sequence being visualized is the Kolakoski sequence in its binary form, with 0 and 1 instead of 1 and 2. By definition, the n-th run of equal digits in this sequence has length {x_n+1}, where {x_n} is the n-th digit of the sequence itself. In particular, 000 and 111 never occur, which contributes to the blank spots near 0 and 1.

Although the sequence is not periodic, the set is quite stable in time; it does not make a visible difference whether one plots the first 10,000 shifts, or 10,000,000. The apparent symmetry about 1/2 is related to the open problem of whether the Kolakoski sequence is mirror invariant, meaning that together with any finite word (such as 0010) it also contains its complement (that would be 1101).

There are infinitely many forbidden words apart from 000 and 111 (and the words containing those). For example, 01010 cannot occur because it has 3 consecutive runs of length 1, which implies having 000 elsewhere in the sequence. For the same reason, 001100 is forbidden. This goes on forever: 00100100 is forbidden because it implies having 10101, etc.
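These exclusions are easy to test against an initial segment of the sequence; a quick check, using the array a generated by the script at the end of this post:

s = ''.join(map(str, a))   # a is the binary Kolakoski sequence from the script below
print([w in s for w in ['000', '111', '01010', '001100', '00100100']])   # [False]*5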

The number of distinct words of length n in the Kolakoski sequence is bounded by a power of n (see F. M. Dekking, What is the long range order in the Kolakoski sequence?). Hence, the set pictured above is covered by {O(n^p)} intervals of length {2^{-n}}, which implies it (and even its closure) is zero-dimensional in any fractal sense (has Minkowski dimension 0).

The set KC apparently does not have any isolated points; this is related to another open problem, that of recurrence (whether every word that appears in the sequence must appear infinitely many times). Assuming this is so, the closure of the orbit is a totally disconnected compact set without isolated points, i.e., a Cantor set. It is not self-similar (not surprising, given it’s zero-dimensional), but its relation to the Bernoulli shift implies a structure resembling self-similarity:

[Figure: KC is covered by two copies scaled by 1/2]

Applying the transformations {x\mapsto x/2} and {x\mapsto (1+x)/2} yields two disjoint smaller copies that cover the original set, but with some spare parts left. The leftover bits exist because not every word in the sequence can be preceded by both 0 and 1.

[Figure: KC covered by two copies scaled by 2]

Applying the transformations {x\mapsto 2x} and {x\mapsto 2x-1} yields two larger copies that cover the original set. There are no extra parts within the interval [0,1] but there is an overlap between the two copies.

The number {c = \inf KC\approx 0.146778684766479} appears several times in the structure of the set: for instance, the central gap is {((1-c)/2, (1+c)/2)}, the second-largest gap on the left has the left endpoint {(1-c)/4}, etc. The Inverse Symbolic Calculator has not found anything about this number. Its binary expansion begins with 0.001 001 011 001 001 101 001 001 101 100… which one can recognize as the smallest binary number that can be written without doing anything three times in a row. (Can’t have 000; also can’t have 001 three times in a row; and 001 010 is not allowed because it contains 01010, three runs of length 1. Hence, the number begins with 001 001 011.) This number is obviously irrational, but other than that…

In conclusion, here is the Python code used to plot KC.

import numpy as np
import matplotlib.pyplot as plt

n = 1000000
a = np.zeros(n, dtype=int)   # binary Kolakoski sequence: the k-th run has length a[k] + 1
j = 0                        # index of the digit that prescribes the current run
same = False                 # True if the current run needs one more copy of the digit
for i in range(1, n):
    if same:
        a[i] = a[i-1]        # second digit of a run of length 2
        same = False
    else:
        a[i] = 1 - a[i-1]    # start a new run
        j += 1
        same = bool(a[j])    # a[j] = 1 means the new run has length 2
v = np.array([1/2**k for k in range(60, 0, -1)])
b = np.convolve(a, v, mode='valid')   # b[i] = 0.a[i]a[i+1]... in binary: the i-th shift
plt.plot(b, np.arange(np.size(b)), '.', ms=2)
plt.show()
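As a quick check, the smallest of the plotted values approximates the constant c discussed above (assuming, as the picture suggests, that the first million shifts come close enough to the infimum):

print(b.min())   # approximately 0.1467..., the beginning of c = inf KC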

Pisot constant beyond 0.843

In a 1946 paper Charles Pisot proved a theorem involving a curious constant {\gamma_0= 0.843\dots}. It can be defined as follows:

{\gamma_0= \sup\{r \colon \exists } monic polynomial {p} such that {|p(e^z)| \le 1} whenever {|z|\le r \}}

Equivalently, {\gamma_0} is determined by the requirement that the set {\{e^z\colon |z|\le \gamma_0\}} have logarithmic capacity 1; this won’t be used here. The theorem is stated below, although this post is really about the constant.

Theorem: If an entire function takes integer values at nonnegative integers and is {O(e^{\gamma |z|})} for some {\gamma < \gamma_0}, then it is a finite linear combination of terms of the form {z^n \alpha^z}, where each {\alpha } is an algebraic integer.

The value of {\gamma_0} is best possible; thus, in some sense Pisot’s theorem completed a line of investigation that began with a 1915 theorem by Pólya which had {\log 2} in place of {\gamma_0}, and where the conclusion was that {f} is a polynomial. (Informally speaking, Pólya proved that {2^z} is the “smallest” entire function that is integer-valued on nonnegative integers.)

Although the constant {\gamma_0} was mentioned in later literature, no further digits of it have been stated anywhere, as far as I know. So, let it be known that the decimal expansion of {\gamma_0} begins with 0.84383.


A lower bound on {\gamma_0} can be obtained by constructing a monic polynomial that is bounded by 1 on the set {E(r) = \{e^z \colon |z|\le r \}}. Here is E(0.843):

[Figure: the region E(0.843)]

It looks pretty round, except for that flat part on the left. In fact, E(0.82) is covered by a disk of unit radius centered at 1.3, which means that the choice {p(z) = z-1.3} shows {\gamma_0 > 0.82}.

[Figure: p(z) = z - 1.3 gives lower bound 0.82]
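This covering is easy to double-check numerically; a minimal sketch, sampling the boundary circle |z| = 0.82:

import numpy as np

z = 0.82 * np.exp(1j * np.linspace(0, 2*np.pi, 10000))
print(np.abs(np.exp(z) - 1.3).max())   # about 0.998, just below 1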

How to get an upper bound on {\gamma_0}? Turns out, it suffices to exhibit a monic polynomial {q} that has all zeros in {E(r)} and satisfies {|q|>1} on the boundary of {E(r)}. The existence of such {q} shows {\gamma_0 < r}. Indeed, suppose that {p} is monic and {|p|\le 1} on {E(r)}. Consider the function {\displaystyle u(z) = \frac{\log|p(z)|}{\deg p} - \frac{\log|q(z)|}{\deg q}}. By construction {u<0} on the boundary of {E(r)}. Also, {u} is subharmonic in its complement, including {\infty}, where the singularities of both logarithms cancel out, leaving {u(\infty)=0}. This contradicts the maximum principle for subharmonic functions, according to which {u(\infty)} cannot exceed the maximum of {u} on the boundary.

The choice of {q(z) = z-1.42} works for {r=0.89}.

[Figure: q(z) = z - 1.42 gives upper bound 0.89]
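This certificate admits the same kind of sanity check: the zero 1.42 = exp(0.3507...) lies in E(0.89), and |q| stays above 1 on the boundary. A sketch:

import numpy as np

z = 0.89 * np.exp(1j * np.linspace(0, 2*np.pi, 10000))
print(np.log(1.42) < 0.89)              # True: the zero 1.42 is in E(0.89)
print(np.abs(np.exp(z) - 1.42).min())   # about 1.009, above 1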

So we have {\gamma_0} boxed between 0.82 and 0.89; how to get more precise bounds? I don’t know how Pisot achieved the precision of 0.843… it’s possible that he strategically picked some linear and quadratic factors, raised them to variable integer powers and optimized the latter. Today it is too tempting to throw some optimization routine on the problem and let it run for a while.

But what to optimize? The straightforward approach is to minimize the maximum of {|p(e^z)|} on the circle {|z|=r}, approximated by sampling the function at a sufficiently fine uniform grid {\{z_k\}} and picking the maximal value. This works… unspectacularly. One problem is that the objective function is non-differentiable. Another is that taking maximum throws out a lot of information: we are not using the values at other sample points to better direct the search. After running optimization for days, trying different optimization methods, tolerance options, degrees of the polynomial, and starting values, I was not happy with the results…

Turns out, the optimization is much more effective if one minimizes the variance of the set {\{|p(\exp(z_k))|^2\}}. Now we are minimizing a polynomial function of the values {p(\exp(z_k))}, which pushes these values toward having the same absolute value, the behavior that we want the polynomial to have. It took from seconds to minutes to produce the polynomials shown below, using the BFGS method as implemented in SciPy.

As the arguments of the objective function I took the real and imaginary parts of the zeros of the polynomial. The symmetry about the real axis was enforced automatically: the polynomial was the product of quadratic terms {(z-x_k-iy_k)(z-x_k+iy_k)}. This eliminated the potentially useful option of having real zeros of odd order, but I did not feel like special-casing those.

Three digits

[Figure: Degree 8, lower bound 0.843]

Real part: 0.916, 1.186, 1.54, 1.783
Imaginary part: 0.399, 0.572, 0.502, 0.199

Here and below, only the zeros with positive imaginary part are listed (in the left-to-right order), the others being their conjugates.

[Figure: Degree 10, upper bound 0.844]

Real part: 0.878, 1.0673, 1.3626, 1.6514, 1.8277
Imaginary part: 0.3661, 0.5602, 0.6005, 0.4584, 0.171

Four digits

[Figure: Degree 14, lower bound 0.8438]

Real part: 0.8398, 0.9358, 1.1231, 1.357, 1.5899, 1.776, 1.8788
Imaginary part: 0.3135, 0.4999, 0.6163, 0.637, 0.553, 0.3751, 0.1326

[Figure: Degree 14, upper bound 0.8439]

Real part: 0.8397, 0.9358, 1.1231, 1.3571, 1.5901, 1.7762, 1.879
Imaginary part: 0.3136, 0.5, 0.6164, 0.6372, 0.5531, 0.3751, 0.1326

No, I didn’t post the same picture twice. The polynomials are just that similar. But as the list of zeros shows, there are tiny differences…

Five digits

[Figure: Degree 20, lower bound 0.84383]

Real part: 0.81527, 0.8553, 0.96028, 1.1082, 1.28274, 1.46689, 1.63723, 1.76302, 1.82066, 1.86273
Imaginary part: 0.2686, 0.42952, 0.556, 0.63835, 0.66857, 0.63906, 0.54572, 0.39701, 0.23637, 0.08842

[Figure: Degree 20, upper bound 0.84384]

Real part: 0.81798, 0.85803, 0.95788, 1.09239, 1.25897, 1.44255, 1.61962, 1.76883, 1.86547, 1.89069
Imaginary part: 0.26631, 0.4234, 0.54324, 0.62676, 0.66903, 0.65366, 0.57719, 0.44358, 0.26486, 0.07896

Again, nearly the same polynomial works for upper and lower bounds. The fact that the absolute value of each of these polynomials is below 1 (for lower bounds) or greater than 1 (for upper bounds) can be ascertained by sampling them and using an upper estimate on the derivative; there is enough margin to trust computations with double precision.

Finally, the Python script I used. The function “obj” is getting minimized, while the function “values” returns the actual quantities of interest: the minimum and maximum of {|p|} over the sample points. The degree of the polynomial is 2n, and the radius under consideration is r. The sample points are collected in the array s. To begin with, the roots are chosen randomly. After each minimization run (inevitably ending in a local minimum, of which there are myriads), the new starting point is obtained by randomly perturbing the local minimum found. (The perturbation is smaller if the minimization was particularly successful.)

import numpy as np
from scipy.optimize import minimize

def obj(r):
    # r holds the real parts of the zeros followed by the imaginary parts
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc)**2, axis=0)    # |p|^2 at each sample point
    return np.var(p)

def values(r):
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc), axis=0)       # |p| at each sample point
    return [np.min(p), np.max(p)]

r = 0.84384    # the radius under consideration
n = 10         # half the degree of the polynomial
record = 2
s = np.exp(r * np.exp(1j*np.arange(0, np.pi, 0.01)))   # samples of e^z on |z| = r (upper half, by symmetry)
xr = np.random.uniform(0.8, 1.8, size=(n,))
xi = np.random.uniform(0, 0.7, size=(n,))
x0 = np.concatenate((xr, xi))

while True:
    res = minimize(obj, x0, method='BFGS')
    if res['fun'] < record:
        record = res['fun']
        print(repr(res['x']))
        print(values(res['x']))
        x0 = res['x'] + np.random.uniform(-0.001, 0.001, size=x0.shape)  # small perturbation after a success
    else:
        x0 = res['x'] + np.random.uniform(-0.05, 0.05, size=x0.shape)    # larger perturbation otherwise


Multipliers preserving series convergence

The Comparison Test shows that if {\sum a_n} is an absolutely convergent series, and {\{b_n\}} is a bounded sequence, then {\sum a_nb_n} converges absolutely. Indeed, {|a_nb_n|\le M|a_n|} where {M} is such that {|b_n|\le M} for all {n}.

With a bit more effort one can prove that this property of preserving absolute convergence is equivalent to being a bounded sequence. Indeed, if {\{b_n\}} is unbounded, then for every {k} there is {n_k} such that {|b_{n_k}|\ge 2^k}. We can ensure {n_k > n_{k-1}} since there are infinitely many candidates for {n_k}. Define {a_n=2^{-k}} if {n = n_k} for some {k}, and {a_n=0} otherwise. Then {\sum a_n} converges but {\sum a_nb_n} diverges because its terms do not approach zero.


What if we drop “absolutely”? Let’s say that a sequence {\{b_n\}} preserves convergence of series if for every convergent series {\sum a_n}, the series {\sum a_n b_n} also converges. Being bounded doesn’t imply this property: for example, {b_n=(-1)^n} does not preserve convergence of the series {\sum (-1)^n/n}.

Theorem. A sequence {\{b_n\}} preserves convergence of series if and only if it has bounded variation, meaning {\sum |b_n-b_{n+1}| } converges.

For brevity, let’s say that {\{b_n\}} is BV. Every bounded monotone sequence is BV because the sum {\sum |b_n-b_{n+1}| } telescopes. On the other hand, {(-1)^n} is not BV, and neither is {(-1)^n/n}. But {(-1)^n/n^p} is for {p>1}.
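These examples are easy to probe numerically; here is a quick sketch comparing the partial sums of {\sum |b_n-b_{n+1}|} for {(-1)^n/n}, which grow like {2\log N}, and for {(-1)^n/n^2}, which converge to {2\zeta(2)-1\approx 2.29}:

import numpy as np

for N in (10**3, 10**4, 10**5, 10**6):
    n = np.arange(1, N + 1, dtype=float)
    sign = np.where(n % 2 == 0, 1.0, -1.0)    # (-1)^n
    print(N, np.abs(np.diff(sign / n)).sum(), np.abs(np.diff(sign / n**2)).sum())

The first column of sums keeps growing, while the second one stabilizes. The following lemma describes the structure of BV sequences.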

Lemma 1. A sequence {\{b_n\}} is BV if and only if there are two increasing bounded sequences {\{c_n\}} and {\{d_n\}} such that {b_n=c_n-d_n} for all {n}.

Proof. If such {c_n,d_n} exist, then by the triangle inequality \displaystyle \sum_{n=1}^N |b_n-b_{n+1}| \le \sum_{n=1}^N (|c_n-c_{n+1}| + |d_n-d_{n+1}|) = \sum_{n=1}^N (c_{n+1}-c_n) + \sum_{n=1}^N (d_{n+1}-d_n) and the latter sums telescope to {c_{N+1}-c_1 + d_{N+1}-d_1}, which has a limit as {N\rightarrow\infty} since bounded monotone sequences converge. Hence the partial sums of {\sum |b_n-b_{n+1}|} are bounded, and {\{b_n\}} is BV.

Conversely, suppose {\{b_n\}} is BV. Let {c_n = \sum_{k=1}^{n-1}|b_k-b_{k+1}|}, understanding that {c_1=0}. By construction, the sequence {\{c_n\}} is increasing and bounded. Also let {d_n=c_n-b_n}; as a difference of bounded sequences, this is bounded too. Finally,

\displaystyle d_{n+1}-d_n = c_{n+1} -c_n + b_n - b_{n+1} = |b_n-b_{n+1}|+ b_n - b_{n+1} \ge 0

which shows that {\{d_n\}} is increasing.

To construct a suitable example where {\sum a_nb_n} diverges, we need another lemma.

Lemma 2. If a series of nonnegative terms {\sum A_n} diverges, then there is a sequence {c_n\rightarrow 0} such that the series {\sum c_n A_n} still diverges.

Proof. Let {s_n = A_1+\dots+A_n} (partial sums); then {A_n=s_n-s_{n-1}}. The sequence {\sqrt{s_n}} tends to infinity, but slower than {s_n} itself. Let {c_n=1/(\sqrt{s_n}+\sqrt{s_{n-1}})}, so that {c_nA_n = \sqrt{s_n}-\sqrt{s_{n-1}}}, and we are done: the partial sums of {\sum c_nA_n} telescope to {\sqrt{s_n}}.
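For instance, with {A_n=1/n} the lemma’s construction gives {c_n = 1/(\sqrt{s_n}+\sqrt{s_{n-1}})\rightarrow 0}, while the partial sums of {\sum c_nA_n} equal {\sqrt{s_N}\rightarrow\infty}. A quick numerical sketch of this:

import numpy as np

N = 10**6
A = 1.0 / np.arange(1, N + 1)                 # A_n = 1/n
s = np.cumsum(A)                              # partial sums s_n
c = 1.0 / (np.sqrt(s) + np.sqrt(np.concatenate(([0.0], s[:-1]))))
print(c[-1], (c * A).sum(), np.sqrt(s[-1]))   # c_N is small; the last two numbers agree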


Proof of the theorem, Sufficiency part. Suppose {\{b_n\}} is BV. Using Lemma 1, write {b_n=c_n-d_n}. Since {a_nb_n = a_nc_n - a_n d_n}, it suffices to prove that {\sum a_nc_n} and {\sum a_nd_n} converge. Consider the first one; the proof for the other is the same. Let {L=\lim c_n} and write {a_nc_n = La_n - a_n(L-c_n)}. Here {\sum La_n} converges as a constant multiple of {\sum a_n}. Also, {\sum a_n(L-c_n)} converges by the Dirichlet test: the partial sums of {\sum a_n} are bounded, and {L-c_n} decreases to zero.

Proof of the theorem, Necessity part. Suppose {\{b_n\}} is not BV. The goal is to find a convergent series {\sum a_n} such that {\sum a_nb_n} diverges. If {\{b_n\}} is not bounded, then we can proceed as in the case of absolute convergence, considered above. So let’s assume {\{b_n\}} is bounded.

Since {\sum_{n=1}^\infty |b_n-b_{n+1}|} diverges, by Lemma 2 there exists {\{c_n\} } such that {c_n\rightarrow 0} and {\sum_{n=1}^\infty c_n|b_n-b_{n+1}|} diverges. Let {d_n} be such that {d_n(b_n-b_{n+1}) = c_n|b_n-b_{n+1}|}; that is, {d_n} differs from {c_n} only by sign. In particular, {d_n\rightarrow 0}. Summation by parts yields

\displaystyle { \sum_{n=1}^N d_n(b_n-b_{n+1}) = \sum_{n=2}^N (d_{n}-d_{n-1})b_n + d_1b_1-d_Nb_{N+1} }

As {N\rightarrow\infty}, the left-hand side does not have a limit since {\sum d_n(b_n-b_{n+1})} diverges. On the other hand, {d_1b_1-d_Nb_{N+1}\rightarrow d_1b_1} since {d_N\rightarrow 0} while {b_{N+1}} stays bounded. Therefore, {\lim_{N\rightarrow\infty} \sum_{n=2}^N (d_{n}-d_{n-1})b_n} does not exist.

Let {a_n= d_n-d_{n-1}}. The series {\sum a_n} converges (by telescoping, since {\lim_{n\rightarrow\infty} d_n} exists) but {\sum a_nb_n} diverges, as shown above.


In terms of functional analysis, the preservation of absolute convergence is essentially the statement that {(\ell_1)^* = \ell_\infty}. Notably, the {\ell_\infty} norm of {\{b_n\}}, i.e., {\sup |b_n|}, is the best constant {M} in the inequality {\sum |a_nb_n| \le M \sum |a_n|}.

I don’t have a similar quantitative statement for the case of convergence. The BV space has a natural norm too, {\sum |b_n-b_{n-1}|} (interpreting {b_0} as {0}), but it’s not obvious how to relate this norm to the values of the sums {\sum a_n} and {\sum a_nb_n}.

Compact sets in Banach spaces

In a Euclidean space, a set is compact if and only if it is closed and bounded. This fails in all infinite-dimensional Banach spaces (and in particular in Hilbert spaces) where the closed unit ball is not compact. However, one still has a simple description of compact sets:

A subset of a Banach space is compact if and only if it is closed, bounded, and flat.

By definition, a set is flat if for every positive number r it is contained in the r-neighborhood of some finite-dimensional linear subspace.

Notes:

  • The r-neighborhood of a set consists of all points whose distance to the set is less than r.
  • In a finite-dimensional space, every subset is trivially flat. An infinite-dimensional example: the Hilbert cube {\{x\in\ell^2 \colon |x_n|\le 2^{-n}\}} is flat, since it lies within distance {2^{-k}} of the span of the first {k} coordinate vectors.

Necessity: Suppose K is a compact set. Every compact set is closed and bounded, this is true in all metric spaces. Given a positive number r, let F be a finite set such that K is contained in the r-neighborhood of F; the existence of such F follows by covering K with r-neighborhoods of points and choosing a finite subcover. Then the linear subspace spanned by F is finite-dimensional and demonstrates that K is flat.

Sufficiency: to prove K is compact, we must show it’s complete and totally bounded. Completeness follows from being a closed subset of a complete space, so the issue is total boundedness. Given r > 0, let M be a finite-dimensional subspace such that K is contained in the (r/2)-neighborhood of M. For each point of K, pick a point of M at distance less than r/2 from it. Let E be the set of all such points in M. Since K is bounded, so is E. Being a bounded subset of a finite-dimensional linear space, E is totally bounded. Thus, there exists a finite set F such that E is contained in the (r/2)-neighborhood of F. Consequently, K is contained in the r-neighborhood of F, which shows its total boundedness.

It’s worth noting that the equivalence of compactness with “flatness” (existence of finite-dimensional approximations) breaks down for linear operators in Banach spaces. While in a Hilbert space an operator is compact if and only if it is the norm-limit of finite-rank operators, some Banach spaces admit compact operators without a finite-rank approximation; that is, they lack the Approximation Property.


Laguerre polynomials under 1

Laguerre polynomials have many neat definitions; I am partial to {\displaystyle L_n(x) = \left(\frac{d}{dx} - 1\right)^n  \frac{x^n}{n!}} because it’s so easy to remember:

  1. Begin with {x^n}
  2. Repeat “subtract the derivative” {n} times
  3. Normalize so the constant term is 1.

For example, for n=3 this process goes as {x^3} to {x^3-3x^2} to {x^3 -6x^2 + 6x} to {x^3-9x^2+18x -6}, which normalizes to {-\frac16x^3+\frac{3}{2}x^2 -3x +1}. This would make a mean exercise on differentiating polynomials: every mistake snowballs throughout the computation.
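SymPy can take the snowballing risk out of this exercise. A small sketch verifying the n=3 computation against the built-in laguerre function (also used later in this post):

from sympy import Symbol, diff, expand, laguerre

x = Symbol('x')
n = 3
p = x**n                          # step 1: begin with x^n
for _ in range(n):
    p = p - diff(p, x)            # step 2: subtract the derivative, n times
p = p / p.subs(x, 0)              # step 3: normalize so the constant term is 1
print(expand(p - laguerre(n, x))) # 0, so the result is exactly L_3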

What would happen if we added the derivative instead? Nothing really new: this change is equivalent to reversing the direction of the x-axis, so we’d end up with {L_n(-x)}. Incidentally, this shows that the polynomial {L_n(-x)} has positive coefficients, which means the behavior of {L_n} for negative {x} is boring: the values go up as {x} becomes more negative. Laguerre polynomials are all about the interval {[0,\infty)} on which they are orthogonal with respect to the weight {\exp(-x)} and therefore change sign often.

[Figure: 20 Laguerre polynomials]

But when I look at the plot shown above, it’s not the zeros that draw my attention (perhaps because the x-axis is not shown) but the horizontal line {y=1}, the zero-degree polynomial. The coefficients of {L_n} have alternating signs; in particular, {L_n(0)=1} and {L_n'(0)=-n}. So, nonconstant Laguerre polynomials start off with the value of 1 and immediately dive below it. All except the linear one, {L_1(x)=1-x}, eventually recover and reach 1 again (or so it seems; I don’t have a proof).

[Figure: 30 Laguerre polynomials]

The yellow curve that is the first to cross the blue line is the 5th degree Laguerre polynomial. Let’s see if any of the other polynomials rises above 1 sooner…

[Figure: 100 Laguerre polynomials]

Still, nobody beats {L_5} (and the second place is held by {L_4}). By the way, the visible expansion of oscillations is approximately exponential; multiplying the polynomials by {\exp(-x/2)} turns the envelopes into horizontal lines:

[Figure: 30 Laguerre polynomials times exp(-x/2)]

Back to the crossing of the y=1 line. The quantity to study is the smallest positive root of {L_n - 1}, denoted {r(n)} from now on. (It is the second smallest root overall; as discussed above, this polynomial has a root at x=0 and no negative roots.) For n=2,3,4,5,6, the value of {r(n)} is {4, 3, 6-2\sqrt{3}, (15-\sqrt{105})/2, 6} which evaluates to 4, 3, 2.536…, 2.377…, and 6 respectively. I got these with Python / SymPy:

from sympy import *
x = Symbol('x')
[Poly(laguerre(n, x) - 1).all_roots()[1] for n in range(2, 7)]

For higher degrees we need numerics. SymPy can still help (applying .evalf() to the roots), but the process gets slow. Switching to NumPy’s roots method speeds things up, but when it indicated that {r(88)} and a few others are in double digits, I became suspicious… a closer check showed this was a numerical artifact.

Conjecture: {r(5) \le r(n) \le r(6)} for all {n}. Moreover, {3 < r(n) < 6} when {n \ge 7}.

Here is a closer look at the exceptional polynomials of degrees 3, 4, 5 and 6, with 1 subtracted from each:

[Figure: L_n - 1 for degrees n = 3, 4, 5, 6]

The first local maximum of {L_n} shifts down and to the left as the degree n increases. The degree n=5 is the last for which {L_n} exceeds 1 on the first attempt, so it becomes the quickest to do so. On the other hand, n=6 fails on its first attempt to clear the bar, and its second attempt is later than for any subsequent Laguerre polynomial; so it sets the record for maximal {r(n)}.

Evaluating high-degree Laguerre polynomials is a numerical challenge: adding large terms of alternating signs can reduce accuracy dramatically. Here is a plot of the degree 98 polynomial (minus 1): where is its first positive root?

[Figure: L(98, x) - 1]

Fortunately, SymPy can evaluate Laguerre polynomials at rational points using exact arithmetic, since the coefficients are rational. For example, when it evaluates the expression laguerre(98, 5) > 1 to True, that’s a (computer-assisted) proof that {r(98) < 5}, which one could in principle "confirm" by computing the same rational value of {L_{98}(5)} by hand (of course, in this situation a human is far less trustworthy than a computer). Evaluation at the 13 rational points 3, 3.25, 3.5, … , 5.75, 6 is enough to certify that {r(n) < 6} for {n} up to 200 (with the aforementioned exception of {r(6) = 6}).
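A sketch of this certification, with the points and the range as described above:

from sympy import Rational, laguerre

points = [Rational(12 + k, 4) for k in range(13)]    # 3, 3.25, ..., 6
for n in range(7, 201):
    # exact rational arithmetic: L_n exceeds 1 at some point of (0, 6]
    assert any(laguerre(n, q) > 1 for q in points), n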

The lower bounds call for Sturm’s theorem which is more computationally expensive than sampling at a few rational points. SymPy offers a root-counting routine based on this theorem (it counts the roots within the given closed interval):

from sympy import Symbol, count_roots, laguerre

x = Symbol('x')
for n in range(2, 101):
    num_roots = count_roots((laguerre(n, x) - 1)/x, 0, 3)
    print('{} roots for n = {}'.format(num_roots, n))

Division by x eliminates the root at 0, so we are left with the root count on (0,3] — which is 1 for n=3,4 and 2 for n=5. The count is zero for all other degrees up to 100, confirming that {r(n) > 3} for {n \ge 6}.

So, the conjecture looks solid. I don’t have a clue to its proof (nor do I know if it’s something known). The only upper bound on {L_n} that I know is Szegő’s {|L_n(x)|\le \exp(x/2)} for {x\ge 0}, which is not helping here.

Complex Cantor sets

Every real number in the interval [0,1] can be written in binary as {\sum_{k=1}^\infty c_k(1/2)^k} where each coefficient {c_k} is either 0 or 1. Another way to put this: the set of all possible sums {\sum_{k=1}^\infty c_kb^k} for b = 1/2 is a line segment.

[Figure: b = 1/2, a line segment]

What is this set for other values of “base” b, then? Let’s stick to |b| < 1 for now, so that the series converges. Nothing interesting happens for real b between 1/2 and 1; the segment grows longer, to length b/(1-b). When b is between 0 and 1/2, we get Cantor sets, with the classical middle-third set being the case b = 1/3.

[Figure: b = 1/3, a Cantor set]

There is no need to consider negative b, because of a symmetry between b and -b. Indeed, up to scaling and translation, the coefficients can be taken from {-1, 1} instead of {0, 1}. Then changing the sign of b amounts to flipping the sign of every odd-numbered coefficient, which does not change the set of possible sums.

Let’s look at purely imaginary b, then. Here is b = 0.6i

[Figure: b = 0.6i]

Why so rectangular? The real part is the sum of {c_kb^k} over even k, and the imaginary part is the sum over odd k. Each of these yields a Cantor-type set as long as {|b|^2 < 1/2}. Since the odd- and even-numbered coefficients are independent of each other, we get the product of two Cantor sets. Which changes into a rectangle when {|b| \ge \sqrt{1/2}}:

[Figure: b = i*sqrt(1/2), a solid rectangle]

(I didn’t think a full-size picture of a solid rectangle was necessary here.)
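The product structure is easy to verify on partial sums: for purely imaginary b, every combination of a real part and an imaginary part occurs. A sketch with b = 0.6i and 10 terms:

import itertools

b = 0.6j
n = 10
sums = [sum(c * b**k for k, c in enumerate(cs, start=1))
        for cs in itertools.product([0, 1], repeat=n)]
re = {round(z.real, 12) for z in sums}
im = {round(z.imag, 12) for z in sums}
print(len(sums) == len(re) * len(im))   # True: 1024 = 32 * 32, a full product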

This is already interesting: the phase transition from dust to solid (connected, and even with interior) happens at different values in the real and imaginary directions: 1/2 versus {\sqrt{1/2}}. What will happen for other complex values? Using complex conjugation and the symmetry between b and -b, we reduce the problem to the quarter-disk in the first quadrant. Which still leaves room for a lot of things to happen…

[Figure: b = 0.6 + 0.3i]
[Figure: b = 0.7 + 0.2i]
[Figure: b = 0.4 + 0.3i]
[Figure: b = 0.2 + 0.7i]

It’s clear that for |b| < 1/2 we get a totally disconnected set: it is covered by 2 copies of itself scaled by the factor of |b|, so its Hausdorff dimension is at most log 2 / log(1/|b|), which is less than 1 when |b| < 1/2. Also, the argument of b is responsible for rotation of the scaled copies, and it looks like rotation favors disconnectivity… but then again, the pieces may link up again after being scaled and rotated a few times, so the story is not a simple one.

The set of bases b for which the complex Cantor set is connected is a Mandelbrot-like set introduced by Barnsley and Harrington in 1985. It has the symmetries of a rectangle, and features a prominent hole centered at 0 (discussed above). But it actually has infinitely many holes, with “exotic” holes being tiny islands of disconnectedness, surrounded by connected sets. This was proved in 2014 by Calegari, Koch, and Walker, so I refer to Danny Calegari’s post for an explanation and more pictures (much better looking than mine).

Besides “disconnected to connected”, there is another phase transition: empty interior to nonempty interior. Hare and Sidorov proved that the complex Cantor set has nonempty interior when  {|b| > 2^{-1/4}}; their path to the proof involved a MathOverflow question The Minkowski sum of two curves which is of its own interest.

The pictures were made with a straightforward Python script, using expansions of length 20:

import matplotlib.pyplot as plt
import numpy as np
import itertools

n = 20
b = 0.6 + 0.3j
c = np.array(list(itertools.product([0, 1], repeat=n)))  # all 2^20 coefficient strings
w = np.array([b**k for k in range(n)]).reshape(1, -1)    # powers 1, b, ..., b^19
z = np.sum(c*w, axis=1)                                  # all partial sums
plt.plot(np.real(z), np.imag(z), '.', ms=4)
plt.axis('equal')
plt.show()

Since we are looking at partial sums anyway, it’s not necessary to limit ourselves to |b| being less than 1. Replacing b by 1/b only scales the picture, so the place to look for new kinds of pictures is the unit circle. Let’s try a 7th root of unity:

[Figure: b = exp(pi i / 7)]

The set above looks sparse because many points overlap. Let’s change b to something non-algebraic:

[Figure: b = exp(i)]

What’s with the cusps along the perimeter?

Iterating the logistic map: limsup of nonperiodic orbits

Last time we found that when a sequence with {x_1\in (0,1)} and {x_{n+1} = 4x_n(1-x_n)} does not become periodic, its upper limit {\limsup x_n} must be at least {\approx 0.925}. This time we’ll see that {\limsup x_n} can be as low as {(2+\sqrt{3})/4\approx 0.933} and determine for which {x_1} it is equal to 1.

The quadratic polynomial {f(x)=4x(1-x)} maps the interval {[0,1]} onto itself. Since the linear function {g(x) = 1-2x} maps {[0,1]} onto {[-1,1]}, it follows that the composition {h=g\circ f\circ g^{-1}} maps {[-1,1]} onto {[-1,1]}. This composition is easy to compute: {f(g^{-1}(x)) = f((1-x)/2) = 1-x^2}, hence {h(x) = g(1-x^2) = 2x^2-1}.

We want to know whether the iteration of {f}, starting from {x_1}, produces numbers arbitrarily close to {1}. Since {f\circ f \circ \cdots \circ f = g^{-1}\circ h \circ h \circ \cdots \circ h\circ g} the goal is equivalent to finding whether the iteration of {h}, starting from {g(x_1)}, produces numbers arbitrarily close to {g(1) = -1}. To shorten formulas, let’s write {h_n} for the {n}th iterate of {h}, for example, {h_3 = h\circ h\circ h}.

So far we traded one quadratic polynomial {f} for another, {h}. But {h} satisfies a nice identity: {h(\cos t)=2\cos^2 t-1 = \cos(2t)}, hence {h_n(\cos t) = \cos (2^n t)} for all {n\in\mathbb N}. It’s convenient to introduce {\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)}, so that { h_n(g(x_1)) = h_n(\cos \pi \alpha ) = \cos(2^n \pi \alpha) }.

The problem becomes to determine whether the numbers {2^n \pi \alpha} come arbitrarily close to {\pi}, modulo an integer multiple of {2\pi}. Dividing by {2\pi} rephrases this as: does the fractional part of {2^{n-1} \alpha} (equivalently, of {2^n \alpha}) come arbitrarily close to {1/2}?

A number that is close to {1/2} has the binary expansion beginning either with {0.01111111\dots} or with {0.10000000\dots}. Since the binary expansion of {2^n\alpha} is just the binary expansion of {\alpha} shifted {n} digits to the left, the property {\limsup x_n=1} is equivalent to the following: for every {k\in\mathbb N} the binary expansion of {\alpha} has infinitely many groups of the form “1 followed by k zeros” or “0 followed by k ones”.

A periodic expansion cannot have the above property; thus, {\alpha} must be irrational. The property described above can then be simplified to “irrational and has arbitrarily long runs of the same digit”, since a long run of {0}s will be preceded by a {1}, and vice versa.

For example, combining the pairs 01 and 10 in some non-periodic way, we get an irrational number {\alpha} such that the fractional part of {2^n\alpha} does not get any closer to 1/2 than {0.01\overline{10}_2 = 5/12} or {0.10\overline{01}_2 = 7/12}. Hence, {\cos(2^n \pi \alpha) = \cos(2\pi\cdot 2^{n-1}\alpha) \ge -\sqrt{3}/2}, which leads to the upper bound {x_n\le (2+\sqrt{3})/4\approx 0.933} for the sequence with the starting value {x_1=(1-\cos\pi\alpha)/2}.
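One can watch this bound in action by reading the fractional parts of {2^n\alpha} directly off the digit string and evaluating the orbit in closed form. A sketch, using the Thue-Morse sequence as one arbitrary non-periodic way to choose the blocks:

import numpy as np

tm = [bin(k).count('1') % 2 for k in range(300)]     # Thue-Morse bits: non-periodic
digits = ''.join('01' if t else '10' for t in tm)    # binary digits of alpha
frac = np.array([int(digits[k:k+60], 2) / 2**60 for k in range(500)])
x = (1 - np.cos(2 * np.pi * frac)) / 2               # the orbit x_2, x_3, ... in closed form
print(x.max(), (2 + np.sqrt(3)) / 4)                 # the orbit stays below (2+sqrt(3))/4

alpha = int(digits[:60], 2) / 2**60
y = (1 - np.cos(np.pi * alpha)) / 2                  # x_1
for k in range(5):
    y = 4 * y * (1 - y)
    print(y, x[k])                                   # direct iteration agrees with the closed form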

Let us summarize the above observations about {\limsup x_n}.

Theorem: {\limsup x_n=1} if and only if (A) the number {\alpha = \frac{1}{\pi}\cos^{-1}(1-2x_1)} is irrational, and (B) the binary expansion of {\alpha} has arbitrarily long runs of the same digit.

Intuitively, one expects that a number that satisfies (A) will also satisfy (B) unless it was constructed specifically to fail (B). But to verify that (B) holds for a given number is not an easy task.

As a bonus, let’s prove that for every rational number {y\in (-1,1)}, except 0, 1/2 and -1/2, the number {\alpha = \frac{1}{\pi}\cos^{-1}y} is irrational. This will imply, in particular, that {x_1=1/3} yields a non-periodic sequence. The proof follows a post by Robert Israel and requires a lemma (which could be replaced with an appeal to Chebyshev polynomials, but the lemma keeps things self-contained).

Lemma. For every {n\in \mathbb N} there exists a monic polynomial {P_n} with integer coefficients such that {P_n(2 \cos t) = 2\cos nt } for all {t}.

Proof. Induction, the base cases {n=1} and {n=2} being {P_1(x)=x} and {P_2(x)=x^2-2}. Assuming the result for integers up to {n}, we have {2 \cos (n+1)t = e^{i(n+1)t} + e^{-i(n+1)t} } {  = (e^{int} + e^{-int})(e^{it} + e^{-it}) - (e^{i(n-1)t} + e^{-i(n-1)t}) } { = P_n(2 \cos t) (2\cos t) - P_{n-1}(2\cos t) }
which is a monic polynomial of {2\cos t}. {\Box}

Suppose that {\alpha} is rational. Then there exists {n} such that {n\alpha \in\mathbb Z}, and therefore {2\cos(\pi n\alpha)=\pm 2}. By the lemma, this implies {P_n(2\cos(\pi \alpha)) =\pm 2}, that is, {P_n(2y)=\pm 2}. Since {2y} is a rational root of the monic polynomial {P_n \mp 2} with integer coefficients, the Rational Root Theorem implies that {2y} is an integer. But {|2y| < 2}, so {2y \in \{-1, 0, 1\}} and {y \in \{0, \pm 1/2\}}, contradicting the assumption. {\Box}