Recursive randomness of reals: summing a random decreasing sequence

In a comment to Recursive randomness of integers Rahul pointed out a continuous version of the same problem: pick {x_0} uniformly in {[0,1]}, then {x_1} uniformly in {[0,x_0]}, then {x_2} uniformly in {[0, x_1]}, etc. What is the distribution of the sum {S=\sum_{n=0}^\infty x_n}?

The continuous version turns out to be easier to analyse. To begin with, it’s equivalent to picking uniformly distributed, independent {y_n\in [0,1]} and letting {x_n = y_0y_1\cdots y_n}. Then the sum is

{\displaystyle y_0+y_0y_1 + y_0y_1y_2 + \cdots }

which can be written as

{\displaystyle y_0(1+y_1(1+y_2(1+\cdots )))}

So, {S} is a stationary point of the random process {X\mapsto (X+1)U} where {U} is uniformly distributed in {[0,1]} and independent of {X}. Simply put, {S} and {(S+1)U} have the same distribution. This yields the value of {E[S]} in a much simpler way than in the previous post:

{E[S]=E[(S+1)U] = (E[S] + 1) E[U] = (E[S] + 1)/2}

hence {E[S]=1}.

We also get an equation for the cumulative distribution function {C(t) = P[S\le t]}. Indeed,

{\displaystyle P[S\le t] = P[(S+1)U \le t] = P[S \le t/U-1]}

The latter probability is {\int_0^1 P[S\le t/u-1]\,du = \int_0^1 C(t/u-1)\,du}. Conclusion: {C(t) = \int_0^1 C(t/u-1)\,du}. Differentiate to get an equation for the probability density function {p(t)}, namely {p(t) = \int_0^1 p(t/u-1)\,du/u}. It’s convenient to change the variable of integration to {v = t/u-1}, which leads to

{\displaystyle p(t) = \int_{t-1}^\infty p(v)\,\frac{dv}{v+1}}

Another differentiation turns the integral equation into a delay differential equation,

{\displaystyle p'(t) = - \frac{p(t-1)}{t}}

Looks pretty simple, doesn’t it? Since the density is zero for negative arguments, it is constant on {[0,1]}. This constant, which I’ll denote {\gamma}, is {\int_{0}^\infty p(v)\,\frac{dv}{v+1}}, or simply {E[1/(S+1)]}. I couldn’t get an analytic formula for {\gamma}. My attempt was {E[1/(S+1)] = \sum_{n=0}^\infty (-1)^n M_n} where {M_n=E[S^n]} are the moments of {S}. The moments can be computed recursively using {E[S^n] = E[(S+1)^n]E[U^n]}, which yields

{\displaystyle M_n=\frac{1}{n} \sum_{k=0}^{n-1} \binom{n}{k}M_k}

The first few moments, starting with {M_0}, are 1, 1, 3/2, 17/6, 19/3, 81/5, 8351/180, … Unfortunately, the series {\sum_{n=0}^\infty (-1)^n M_n} diverges, so this approach seems doomed. Numerically, {\gamma \approx 0.5614}, which is not far from the Euler-Mascheroni constant ({\approx 0.5772}); hence the choice of notation.
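
These moments are easy to reproduce exactly (a small Python sketch; computing the first six is just an illustrative cutoff):

from fractions import Fraction
from math import comb

# M_n = E[S^n] via the recursion M_n = (1/n) * sum_{k<n} C(n,k) M_k
M = [Fraction(1)]
for n in range(1, 7):
    M.append(sum(comb(n, k) * M[k] for k in range(n)) / n)
print([str(m) for m in M])   # ['1', '1', '3/2', '17/6', '19/3', '81/5', '8351/180']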

On the interval (1,2) we have {p'(t) = -\gamma/t}, hence

{p(t) = \gamma(1-\log t)} for {1 \le t \le 2}.

The DDE gets harder to integrate after that… on the interval {[2,3]} the solution already involves the dilogarithm (Spence’s function):

{\displaystyle p(t) = \gamma(1+\pi^2/12 - \log t + \log(t-1)\log t + \mathrm{Spence}\,(t))}

following SciPy’s convention for Spence. This is as far as I went… but here is an experimental confirmation of the formulas obtained so far.

Histogram of 10 million samples and the pdf (red) plotted on [0,3]

To generate samples from the distribution of S, I begin with a bunch of zeros and repeat “add 1, then multiply by a uniform random number from [0,1]” many times. That’s it.

import numpy as np
import matplotlib.pyplot as plt

trials = 10000000
terms = 10000      # far more terms than double precision can distinguish, but harmless
x = np.zeros(shape=(trials,))
for _ in range(terms):
    # one step of the recursion S -> (S+1)*U with U uniform on [0,1]
    np.multiply(x+1, np.random.uniform(size=(trials,)), out=x)
_ = plt.hist(x, bins=1000, density=True)   # 'normed' is gone from current matplotlib
plt.show()

I still want to know the exact value of {\gamma}… after all, it’s also the probability that the sum of our random decreasing sequence is less than 1.

Update

The constant I called “{\gamma}” is in fact {\exp(-\gamma)} where {\gamma} is indeed Euler’s constant… This is what I learned from the Inverse Symbolic Calculator after solving the DDE (with initial value 1) numerically, and calculating the integral of the solution. From there, it did not take long to find that the solution of the DDE with initial value 1 is the Dickman function {\rho}, whose integral over {[0,\infty)} is {e^{\gamma}}; so the density of {S} is {e^{-\gamma}\rho(t)}.

Oh well. At least I practiced solving delay differential equations in Python. There is no built-in method in SciPy for that, and although there are some modules for DDE out there, I decided to roll my own. The logic is straightforward: solve the ODE on an interval of length 1, then build an interpolating spline out of the numeric solution and use it as the delayed term in the right-hand side on the next interval; repeat. I used Romberg’s method for integrating the solution; the integration is done separately on each interval [k, k+1] because of the lack of smoothness at the integers.

import numpy as np
from scipy.integrate import odeint, romb
from scipy.interpolate import interp1d

numpoints = 2**12 + 1          # 2**k + 1 samples per interval, as Romberg integration requires
solution = [lambda x: 1]       # the solution is identically 1 on [0, 1]
integrals = [1]                # and its integral over [0, 1] is 1
for k in range(1, 15):
    # solve the ODE on [k, k+1]; the spline built on the previous interval supplies
    # the delayed term p(x-1), and clip keeps its argument inside the spline's domain
    y0 = solution[k-1](k)
    t = np.linspace(k, k+1, numpoints)
    rhs = lambda y, x: -solution[k-1](np.clip(x-1, k-1, k))/x
    y = odeint(rhs, y0, t, atol=1e-15, rtol=1e-13).squeeze()
    solution.append(interp1d(t, y, kind='cubic', assume_sorted=True))
    integrals.append(romb(y, dx=1/(numpoints-1)))
total_integral = sum(integrals)
print("{:.15f}".format(1/total_integral))   # numerically equal to exp(-gamma)

As a byproduct, the program found the probabilities of the random sum being in each integer interval:

  • 56.15% in [0,1]
  • 34.46% in [1,2]
  • 8.19% in [2,3]
  • 1.1% in [3,4]
  • 0.1% in [4,5]
  • less than 0.01% chance of being greater than 5

Recursive randomness of integers

Entering a string such as “random number 0 to 7” into Google search brings up a neat random number generator. For now, it supports only uniform probability distributions over integers. That’s still enough to play a little game.

Pick a positive number, such as 7. Then pick a number at random between 0 and 7 (integers, with equal probability); for example, 5. Then pick a number between 0 and 5, perhaps 2… repeat indefinitely. When we reach 0, the game becomes really boring, so that is a good place to stop. Ignoring the initial non-random number, we got a random non-increasing sequence such as 5, 2, 1, 1, 0. The sum of this one is 9… how are these sums distributed?

Let’s call the initial number A and the sum S. The simplest case is A=1, when S is the number of returns to 1 until the process hits 0. Since each return to 1 has probability 1/2, we get the following geometric distribution


Sum   Probability
0     1/2
1     1/4
2     1/8
3     1/16
k     {1/2^{k+1}}

When starting with A=2, things are already more complicated: for one thing, the probability mass function is no longer decreasing, with P[S=2] being greater than P[S=1]. The histogram shows the counts obtained after 2,000,000 trials with A=2.
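
The post shows only the resulting picture; here is one way such a simulation could be run (a numpy sketch — the function name random_sums and the specific parameters are my own choices):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()

def random_sums(A, trials):
    # play the game in parallel for all trials: repeatedly pick uniformly
    # from {0, ..., current value} and accumulate the picks
    sums = np.zeros(trials, dtype=np.int64)
    current = np.full(trials, A, dtype=np.int64)
    while current.max() > 0:
        current = rng.integers(0, current + 1)   # the upper bound is exclusive
        sums += current
    return sums

s = random_sums(A=2, trials=2000000)
plt.hist(s, bins=np.arange(s.max() + 2) - 0.5, rwidth=0.8)
plt.show()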


The probability mass function is still not too hard to compute: let’s say b is the number of times the process arrives at 2, then the sum is 2b + the result with A=1. So we end up convolving two geometric distributions, one of which is supported on even integers: hence the bias toward even sums.

Sum   Probability
0     1/3
1     1/6
2     7/36
3     7/72
k     {((4/3)^{\lfloor k/2\rfloor+1}-1)/2^k}

For large k, the ratio P[S=k+2]/P[S=k] is asymptotic to (4/3)/4 = 1/3, which means that the tail of the distribution is approximately geometric with ratio {1/\sqrt{3}}.
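
The closed form is easy to double-check against the exact distribution computed by direct recursion on the first pick (a Python sketch with exact fractions; the function name pmf is mine):

from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def pmf(a, s):
    # P[sum = s] for the game started at a: condition on the first pick x,
    # which is uniform on {0, ..., a}; x = 0 ends the game immediately
    return (Fraction(int(s == 0)) + sum(pmf(x, s - x) for x in range(1, min(a, s) + 1))) / (a + 1)

closed_form = lambda k: (Fraction(4, 3)**(k//2 + 1) - 1) / 2**k
print(all(pmf(2, k) == closed_form(k) for k in range(30)))   # prints True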

I did not feel like computing the exact distribution for larger A, so I resorted to simulations. Here is A=10 (ignore the little bump at the end, an artifact of truncation):

There are three distinct features: P[S=0] is much higher than the rest; the distribution is flat (with a diminishing bias toward even values) until about S=A, and after that it looks geometric. Let’s see what we can say for a general starting value A.

Perhaps surprisingly, the expected value E[S] is exactly A. To see this, consider that we are dealing with a Markov chain with states 0,1,…,A. The transition probabilities from n to any number 0,…,n are 1/(n+1). Ignoring the terminal state 0, which does not contribute to the sum, we get the following kind of transition matrix (the case A=4 shown):

{\displaystyle M = \begin{pmatrix}1/2 & 0 & 0 & 0 \\ 1/3 & 1/3 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 0 \\ 1/5 & 1/5 & 1/5 & 1/5\end{pmatrix} }

The initial state is a vector such as {v = (0,0,0,1)}. So {vM^j} is the state after j steps. The expected value contributed by the j-th step is {vM^jw} where {w = (1,2,3,4)^T} is the weight vector. So, the expected value of the sum is

{\displaystyle \sum_{j=1}^\infty vM^jw = v\left(\sum_{j=1}^\infty M^j\right)w = vM(I-M)^{-1}w}

It turns out that the matrix {M(I-M)^{-1}} has a simple form, strongly resembling M itself.

{\displaystyle M(I-M)^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1/2 & 0 & 0 \\ 1 & 1/2 & 1/3 & 0 \\ 1 & 1/2 & 1/3 & 1/4 \end{pmatrix} }

Left multiplication by v extracts the bottom row of this matrix, and we are left with a dot product of the form {(1,1/2,1/3,1/4)\cdot (1,2,3,4) = 1 + 1 + 1 + 1 = 4 }. Neat.
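
This is easy to check numerically (a numpy sketch, with A = 4 as in the matrix above):

import numpy as np

A = 4
# transition matrix restricted to the states 1..A; the absorbing state 0 is dropped
M = np.array([[1/(i + 1) if j <= i else 0 for j in range(1, A + 1)] for i in range(1, A + 1)])
v = np.zeros(A); v[-1] = 1                  # initial state: start at A
w = np.arange(1, A + 1)                     # weights 1, 2, ..., A
print(v @ M @ np.linalg.inv(np.eye(A) - M) @ w)   # prints 4.0 up to rounding, i.e. E[S] = A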

What else can we say? The median is less than A, which is no surprise given the long tail on the right. Also, P[S=0] = 1/(A+1) since the only way to have zero sum is to hit 0 at once. A more interesting question is: what is the limit of the distribution of T = S/A as A tends to infinity? Here is the histogram of 2,000,000 trials with A=50.


It looks like the distribution of T tends to a limit, which has constant density until 1 (so, until A before rescaling) and decays exponentially after that. Writing the supposed probability density function as {f(t) = c} for {0\le t\le 1}, {f(t) = c\exp(k(1-t))} for {t > 1}, and using the fact that both the total mass and the expected value of T are 1, we arrive at {c = 2-\sqrt{2} \approx 0.586} and {k=\sqrt{2}}. This is a pretty good approximation in some aspects: the median of this distribution is {1/(2c)}, suggesting that the median of S is around {A/(4-2\sqrt{2})}, which is in reasonable agreement with experiment. But the histogram for A=1000 still has a significant deviation from the exponential curve, indicating that the supposedly geometric part of T isn’t really geometric:

The distribution of S with A=1000 versus the constant-geometric approximation
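
Incidentally, the values of c and k above can be recovered symbolically from the two constraints (a sympy sketch):

import sympy as sp

c, k, t = sp.symbols('c k t', positive=True)
# f(t) = c on [0,1] and c*exp(k*(1-t)) for t > 1; require total mass 1 and mean 1
mass = c + sp.integrate(c*sp.exp(k*(1 - t)), (t, 1, sp.oo))
mean = sp.integrate(c*t, (t, 0, 1)) + sp.integrate(c*t*sp.exp(k*(1 - t)), (t, 1, sp.oo))
print(sp.solve([sp.Eq(mass, 1), sp.Eq(mean, 1)], [c, k]))
# expect c = 2 - sqrt(2) and k = sqrt(2)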

One can express S as a sum of several independent geometric random variables, but the number of summands grows quadratically in A, and I didn’t get any useful asymptotics from this. What is the true limiting distribution of S/A, if it’s not the red curve above?

Maximum of three numbers: it’s harder than it sounds

This simple identity holds for any two real numbers {x,y}:

\displaystyle   \max(|x|,|y|) = \frac12\,(|x+y|+|x-y|)   \ \ \ \ \  \ \ \ \ \ (1)

Indeed, if {|x|} realizes the maximum, then {x+y} and {x-y} both have the same sign as {x}. After opening the absolute values, the {y} terms cancel out.

So, (1) represents {\max(|x|,|y|)}, also known as the {\ell_\infty} norm, as the sum of absolute values of linear functions. Let’s try the same with {\max(|x|,|y|,|z|)}. Since the right hand side of (1) is just the average of {|\pm x \pm y|} over all possible choices of {\pm } signs, the natural thing to do is to average {|\pm x \pm y \pm z|} over all eight choices. The sign in front of {x} can be taken to be {+}, which simplifies the average to

\displaystyle    \frac14\,(|x+y+z|+|x+y-z|+|x-y+z|+|x-y-z|)    \ \ \ \ \  \ \ \ \ \ (2)

Does this formula give {\max(|x|,|y|,|z|)}? Instead of trying random numbers, let’s just plot the unit ball for the norm given by (2). If the identity works, it will be a cube. I used Maple:

with(plots): f:=(abs(x+y+z)+abs(x+y-z)+abs(x-y-z)+abs(x-y+z))/4:
implicitplot3d(f=1,x=-1.1..1.1,y=-1.1..1.1,z=-1.1..1.1,grid=[25,25,25]);
My precious!

Close, but no cube. Luckily, this is my favorite Archimedean solid, the cuboctahedron.
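
A quick numeric comparison shows where (2) agrees with the max and where it doesn’t (a small Python sketch):

def f(x, y, z):
    # the average (2), with the sign in front of x fixed to +
    return (abs(x+y+z) + abs(x+y-z) + abs(x-y+z) + abs(x-y-z)) / 4

for p in [(1, 0, 0), (1, 1, 0), (1, 1, 1)]:
    print(p, f(*p), max(abs(c) for c in p))
# (1, 0, 0) 1.0 1   -- agreement on the coordinate axes
# (1, 1, 0) 1.0 1   -- and at the vertices of the cuboctahedron
# (1, 1, 1) 1.5 1   -- but (2) overshoots along the main diagonal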

Although all terms of (2) look exactly the same, the resulting shape has both triangular and square faces. Where does the difference of shapes come from?

More importantly, is (2) really the best we can do? Is there some other sum of moduli of linear functions that will produce {\max(|x|,|y|,|z|)}?

— No.

Even if negative coefficients are allowed?

— Even then. (But you can come arbitrarily close.)

What if we allow integrals with respect to an arbitrary (signed) measure, as in

\displaystyle    \iiint |\alpha x+\beta y+\gamma z|\,d \mu(\alpha, \beta, \gamma)    \ \ \ \ \  \ \ \ \ \ (3)

— Still no. But if {\mu} is allowed to be a distribution of higher order (an object more singular than a measure), then a representation exists (W. Weil, 1976). Yes, one needs the theory of distributions to write the maximum of three numbers as a combination of linear functions.

I’ll only prove that there is no identity of the form

\displaystyle  \max(|x|,|y|,|z|) = \sum_{i=1}^N |\alpha_i x+\beta_i y+ \gamma_i z|

Indeed, such an identity amounts to having an isometric embedding {T\colon \ell_\infty^3 \rightarrow \ell_1^N}. The adjoint operator {T^* \colon \ell_\infty^N \rightarrow \ell_1^3} is a submetry, meaning that it maps the unit ball of {\ell_\infty^N } onto the unit ball of {\ell_1^3}. The unit ball of {\ell_\infty^N } is just a cube; all of its faces are centrally symmetric, and this symmetry is preserved by linear maps (every face of the image is the image of some face of the cube). But the unit ball of {\ell_1^3} is an octahedron, which has triangular faces. A contradiction. {\ \Box}

An aside: what if instead of averaging {|\pm x \pm y|} over all {\pm } choices (i.e., unimodular real coefficients) we take the average over all unimodular complex coefficients? This amounts to {\|(x,y)\| = \frac{1}{2\pi} \int_0^{2\pi} |x+e^{it}y|\,dt}. I expected something nice from this norm, but

Complex averaging in real space

it’s a strange shape whose equation involves the complete elliptic integral of the second kind. Yuck.
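
Here is a sketch that draws this unit ball numerically (Python; a Riemann sum over 360 angles stands in for the integral):

import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-1.6, 1.6, 321)
X, Y = np.meshgrid(xs, xs)
N = np.zeros_like(X)
angles = np.linspace(0, 2*np.pi, 360, endpoint=False)
for t in angles:
    N += np.abs(X + np.exp(1j*t)*Y)   # |x + e^{it} y|
N /= len(angles)                      # average over the unit circle
plt.contour(X, Y, N, levels=[1])      # boundary of the unit ball of this norm
plt.gca().set_aspect('equal')
plt.show()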

Random queen

This is a sequel to Random knight… don’t expect more than you usually get from sequels. I do not plan to do other chess pieces, although “random pawn” is an intriguing idea.

The queen begins its random walk at d1. After two moves, it can get anywhere on the board.

2 queen moves

It is not surprising that the most likely outcome is the return to the initial state. It is less obvious why the second most likely position is d3.

After 3 moves, the most likely positions are near the center of the board.

3 queen moves

After 4 moves, a concentric-square pattern emerges.

4 queen moves

After 10 moves, the distribution is visually identical to the steady-state distribution.

10 queen moves; ≈ steady state

As was noted in the Random knight post, the steady state distribution is the normalized number of moves from a given position. Hence, all probabilities are rational. For the knight they happen to be unit fractions: 1/168, 1/112, 1/84, 1/56, 1/42. The queen probabilities are uglier: 3/208, 23/1456, 25/1456, 27/1456. Random queen is distributed more uniformly than random knight: it spends between 1.44% and 1.85% of its time in any given square. For the knight, the range is from 0.59% to 2.38%.

But the champion in uniform coverage of the chessboard is the rook: when moving randomly, it spends 1/64 of time on every square.
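
These figures are easy to verify by counting the moves available from each square of an empty board (a Python sketch; the helper mobility is mine):

from fractions import Fraction

def mobility(directions, sliding):
    # number of legal moves from each square for a piece that either takes
    # a single step in each direction or slides until the edge of the board
    counts = {}
    for x in range(8):
        for y in range(8):
            n = 0
            for dx, dy in directions:
                px, py = x + dx, y + dy
                while 0 <= px < 8 and 0 <= py < 8:
                    n += 1
                    if not sliding:
                        break
                    px, py = px + dx, py + dy
            counts[(x, y)] = n
    return counts

knight = mobility([(1,2),(2,1),(-1,2),(2,-1),(1,-2),(-2,1),(-1,-2),(-2,-1)], sliding=False)
queen  = mobility([(1,0),(0,1),(-1,0),(0,-1),(1,1),(1,-1),(-1,1),(-1,-1)], sliding=True)
rook   = mobility([(1,0),(0,1),(-1,0),(0,-1)], sliding=True)

for name, counts in [("knight", knight), ("queen", queen), ("rook", rook)]:
    total = sum(counts.values())                     # total number of moves on the board
    probs = {Fraction(n, total) for n in counts.values()}
    print(name, total, min(probs), max(probs))
# knight 336 1/168 1/42
# queen 1456 3/208 27/1456
# rook 896 1/64 1/64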

Random knight

A knight walks randomly on the standard chessboard. What is the proportion of time that it will spend at e4?

The answer (1/42) is not hard to get, but I’ll take the computational approach first (testing Scilab 5.5.0 beta 1, by the way). Begin by placing the knight at b1 and letting it make five random moves. This is the distribution of its position after five moves:

1 knight 5 moves

Unsurprisingly, half of the board is black: the knight changes the color of its square with every move, so after five moves it cannot be on a square of the same color it started from. The other half is all reachable — you can even get to h8 in five moves (want to find all possible ways to do this?).

After ten moves, the colors become more uniform.

1 knight 10 moves

After 200 moves (or any other large even number), the distribution changes little. But you may notice that it is centrally symmetric, while the previous one isn’t quite.

1 knight 200 moves

Let’s repeat the process beginning with two knights at b1 and g1. After five moves of each:

2 knights 5 moves

After ten moves:

2 knights 10 moves

After a large number of moves (even or odd, it does not matter), the variety of colors is greatly reduced:

steady state distribution

Indeed, this is the distribution which also describes the proportion of time that the knight (wherever it started) will spend at a given square Q.

Precisely, the proportion of time spent at Q is P(Q)=N(Q)/336 where N(Q) is the number of legal moves from Q. For the sixteen central squares P(Q) = 8/336 = 1/42, while for the corners we get 2/336 = 1/168.

Here is a quick argument to support the above. Let Q1 and Q2 be two squares such that the move from Q1 to Q2 is legal. The proportion of time that the knight makes this move is P(Q1)/N(Q1). Similarly, the time proportion of the Q2-Q1 move is P(Q2)/N(Q2). Since the process is symmetric in time (we have a reversible Markov chain), P(Q1)/N(Q1)=P(Q2)/N(Q2). In other words, the ratio P(Q)/N(Q) is the same for any two squares joined by a legal move; since the knight’s move graph on the board is connected, it follows that P(Q)/N(Q) is the same for all squares. Finding the coefficient of proportionality is a matter of counting, since the sum of P(Q) over all Q is 1.

The Scilab code I wrote for this post is largely not knight-specific. The function update receives the initial distribution of a chess piece and the set of its legal moves. It computes the distribution after one random move.

function newState=update(state,moves)
    // state: probability distribution over the board; moves: list of move offsets
    [maxK,n]=size(moves)
    [n,n]=size(state)       // n is the board size (8)
    newState=zeros(state)
    for i=1:n 
        for j=1:n
            // spread the mass at (i,j) equally over all legal destinations
            neighborCount=0
            contribution=zeros(state)
            for k=1:maxK
                pos=[i,j]+moves(k,:)
                if (min(pos)>=1)&(max(pos)<=n) then
                   neighborCount=neighborCount+1
                   contribution(pos(1),pos(2))=state(i,j)
                end
            end
            newState=newState+contribution/neighborCount
        end
    end
endfunction

The set of knight moves is specified in the function knight, which calls update iteratively.

function state=knight(state,iter)
    moves=[2 1;1 2;-2 1;1 -2;2 -1;-1 2;-2 -1;-1 -2]
    for i=1:iter
        state=update(state,moves)
    end
endfunction

For example, this is how I plotted the distribution after 10 random moves starting at b1:

initial = zeros(8,8)
initial(8,2)=1
state = knight(initial,10)
isoview(0,0,9,9)
Matplot(state*500)
f = gcf()
f.color_map = hotcolormap(32)

NB: getting the correct color representation of matrices from Matplot requires an up-to-date version of Scilab; in version 5.4.0 Matplot smooths colors, producing intriguing but less informative images.

stars?
knight sky?

Fun with TI-83: billions upon billions of cosines

Okay, maybe not billions. But by taking cosines repeatedly, one can find the solution of the equation {\cos x = x} with high precision in under a minute.

Step 1: Enter any number, for example 0, and press Enter.
Step 2: Enter cos(Ans) and press Enter.

Step 2

Step 3: Keep pressing Enter. (Unfortunately, press-and-hold-to-repeat does not work on the TI-83.) This will repeatedly execute the command cos(Ans).

Step 3

After a few iterations, the numbers begin to settle down:

Convergence

and eventually stabilize at 0.7390851332.

Limit

Explanation: the graph of cosine meets the line {y = x} at exactly one point, which is the unique fixed point of the function {f(x)=\cos x}.

Fixed point of f(x)=cos x

Since the derivative {f'(x)=-\sin x} at the fixed point is less than 1 in absolute value, the fixed point is attracting.
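
The same experiment is a three-line loop in Python (shown here only as a sanity check of the calculator result):

from math import cos

x = 0.0
for _ in range(100):
    x = cos(x)     # repeatedly apply cos, like pressing Enter on the calculator
print(x)           # about 0.7390851332, the attracting fixed point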

Now try the same with the equation {10 \cos x =x}.

Beginning

This time, the numbers flat out refuse to converge:

Chaos!

Explanation: the graph of {f(x)=10\cos x} meets the line {y = x} at seven points; thus, this function has seven fixed points.

Seven fixed points

And it so happens that {|f'(x)|>1} at each of those fixed points. This makes them repelling. The sequence has nowhere to converge, because every candidate for the limit pushes it away. All it can do is jump chaotically around the interval {[-10,10]}. Here are the first 1024 terms, plotted with OpenOffice:

There is a pattern in this chaos…

Clearly, the distribution of the sequence is not uniform. I divided the interval {[-10,10]} into subintervals of length {0.05} and counted the number of terms falling into each.
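
Here is a Python sketch of the same tally (the starting value 1.0 and the 100000 iterations are my own choices):

import numpy as np
import matplotlib.pyplot as plt

x, orbit = 1.0, []
for _ in range(100000):
    x = 10*np.cos(x)
    orbit.append(x)
# count the terms falling into each subinterval of length 0.05
plt.hist(orbit, bins=np.arange(-10, 10.05, 0.05))
plt.show()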

Distribution of terms

What is going on here? Stay tuned.