Recursive randomness of integers

Entering a string such as “random number 0 to 7” into Google search brings up a neat random number generator. For now, it supports only uniform probability distributions over integers. That’s still enough to play a little game.

Pick a positive integer, such as 7. Then pick a number at random between 0 and 7 (integers, with equal probability); for example, 5. Then pick a number between 0 and 5, perhaps 2… and repeat indefinitely. Once we reach 0, the game becomes really boring, so that is a good place to stop. Ignoring the initial non-random number, we get a random non-increasing sequence such as 5, 2, 1, 1, 0. The sum of this one is 9… how are these sums distributed?
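
For those who want to play along, here is a minimal Scilab sketch of one round of the game (my code, not from the original post; the name playGame is made up, and grand is Scilab's random number generator):

function S=playGame(A)
    // keep picking a uniform random integer between 0 and the
    // current value until 0 comes up; return the sum of the picks
    S=0
    n=A
    while n>0
        n=grand(1,1,"uin",0,n)   // uniform on the integers 0..n
        S=S+n
    end
endfunction

For example, playGame(7) could return 9, matching the sample run 5, 2, 1, 1, 0 above.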

Let’s call the initial number A and the sum S. The simplest case is A=1, when S is the number of returns to 1 before the process hits 0. Since each return to 1 has probability 1/2, we get the following geometric distribution:

Histogram of 2,000,000 trials with A=1

Sum	Probability
0	1/2
1	1/4
2	1/8
3	1/16
k	{1/2^{k+1}}
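
Indeed, the only way to get S=k is to pick 1 exactly k times and then pick 0, which has probability {(1/2)^k \cdot (1/2) = 1/2^{k+1}}.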

When starting with A=2, things are already more complicated: for one thing, the probability mass function is no longer decreasing, with P[S=2] being greater than P[S=1]. The histogram shows the counts obtained after 2,000,000 trials with A=2.

Histogram of 2,000,000 trials with A=2

The probability mass function is still not too hard to compute: let’s say b is the number of returns to 2; then the sum is 2b plus an independent copy of the A=1 sum. So we end up convolving two geometric distributions, one of which is supported on even integers: hence the bias toward even sums.
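
Spelling out the convolution (my derivation, consistent with the table below): the number of returns to 2 equals b with probability {(2/3)(1/3)^b}, and one can check that the remaining contribution is distributed exactly as the A=1 sum, so

{\displaystyle P[S=k] = \sum_{b=0}^{\lfloor k/2\rfloor} \frac{2}{3}\cdot\frac{1}{3^b}\cdot\frac{1}{2^{k-2b+1}} = \frac{1}{3\cdot 2^k}\sum_{b=0}^{\lfloor k/2\rfloor}\left(\frac{4}{3}\right)^b = \frac{(4/3)^{\lfloor k/2\rfloor+1}-1}{2^k}}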

Sum	Probability
0	1/3
1	1/6
2	7/36
3	7/72
k	{((4/3)^{\lfloor k/2\rfloor+1}-1)/2^k}

For large k, the ratio P[S=k+2]/P[S=k] tends to (4/3)/4 = 1/3, which means that the tail of the distribution is approximately geometric with ratio {1/\sqrt{3}}.
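
Indeed, the formula in the table gives

{\displaystyle \frac{P[S=k+2]}{P[S=k]} = \frac{1}{4}\cdot\frac{(4/3)^{\lfloor k/2\rfloor+2}-1}{(4/3)^{\lfloor k/2\rfloor+1}-1} \to \frac{1}{4}\cdot\frac{4}{3} = \frac{1}{3}}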

I did not feel like computing the exact distribution for larger A, so I resorted to simulations. Here is A=10 (ignore the little bump at the end; it is an artifact of truncation):

Histogram of 2,000,000 trials with A=10
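
The simulations can be reproduced with a sketch along these lines (again my code, building on the hypothetical playGame above; lumping everything above maxS into the last bin is my guess at what causes the truncation bump):

function counts=simulate(A,trials,maxS)
    // tally the sums of many independent games;
    // sums larger than maxS all land in the last bin
    counts=zeros(1,maxS+1)
    for t=1:trials
        S=min(playGame(A),maxS)
        counts(S+1)=counts(S+1)+1
    end
endfunction

For example, bar(0:60, simulate(10,2000000,60)) should reproduce the histogram above, bump included, though two million games in a plain loop may take a while.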

There are three distinct features: P[S=0] is much higher than the rest; the distribution is flat (with a diminishing bias toward even sums) until about S=A; and after that it looks geometric. Let’s see what we can say for a general starting value A.

Perhaps surprisingly, the expected value E[S] is exactly A. To see this, consider that we are dealing with a Markov chain with states 0,1,…,A. The transition probabilities from n to any number 0,…,n are 1/(n+1). Ignoring the terminal state 0, which does not contribute to the sum, we get the following kind of transition matrix (the case A=4 shown):

{\displaystyle M = \begin{pmatrix}1/2 & 0 & 0 & 0 \\ 1/3 & 1/3 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 0 \\ 1/5 & 1/5 & 1/5 & 1/5\end{pmatrix} }

The initial state is a vector such as {v = (0,0,0,1)}. So {vM^j} is the state after j steps. The expected value contributed by the j-th step is {vM^jw} where {w = (1,2,3,4)^T} is the weight vector. So, the expected value of the sum is

{\displaystyle \sum_{j=1}^\infty vM^jw = v\left(\sum_{j=1}^\infty M^j\right)w = vM(I-M)^{-1}w}

It turns out that the matrix {M(I-M)^{-1}} has a simple form, strongly resembling M itself.

{\displaystyle M(I-M)^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1/2 & 0 & 0 \\ 1 & 1/2 & 1/3 & 0 \\ 1 & 1/2 & 1/3 & 1/4 \end{pmatrix} }

Left multiplication by v extracts the bottom row of this matrix, and we are left with a dot product of the form {(1,1/2,1/3,1/4)\cdot (1,2,3,4) = 1 + 1 + 1 + 1 = 4 }. Neat.
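
This is easy to check numerically. Here is a small sketch (my code, not from the post; the name expectedSum is made up) that builds M for a given A and evaluates {vM(I-M)^{-1}w}:

function E=expectedSum(A)
    // transition matrix on the states 1..A (the terminal state 0 is dropped)
    M=zeros(A,A)
    for n=1:A
        M(n,1:n)=1/(n+1)   // from state n, each of 1,...,n has probability 1/(n+1)
    end
    v=zeros(1,A); v(A)=1   // initial state: start at A
    w=(1:A)'               // weight vector (1,2,...,A)^T
    E=v*M*inv(eye(A,A)-M)*w
endfunction

By the argument above, expectedSum(A) should return A, up to rounding error.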

What else can we say? The median is less than A, which is no surprise given the long tail on the right. Also, P[S=0] = 1/(A+1) since the only way to have zero sum is to hit 0 at once. A more interesting question is: what is the limit of the distribution of T = S/A as A tends to infinity? Here is the histogram of 2,000,000 trials with A=50.

Histogram of 2,000,000 trials with A=50

It looks like the distribution of T tends to a limit, with constant density until 1 (so, until A before rescaling) and exponential decay after that. Write the conjectured probability density function as {f(t) = c} for {0\le t\le 1} and {f(t) = c\exp(k(1-t))} for {t > 1}. Total probability 1 gives {c(1+1/k)=1}, and the fact that the expected value of T is 1 gives {c(1/2+1/k+1/k^2)=1}; solving these, {k=\sqrt{2}} and {c = 2-\sqrt{2} \approx 0.586}. This is a pretty good approximation in some respects: the median of this density is {1/(2c)}, suggesting that the median of S is around {A/(4-2\sqrt{2})}, which is in reasonable agreement with experiment. But the histogram for A=1000 still shows a significant deviation from the exponential curve, indicating that the supposedly geometric part of the distribution of S isn’t really geometric:

The distribution of S with A=1000 versus the constant-geometric approximation

One can express S as a sum of several independent geometric random variables, but the number of summands grows quadratically in A, and I didn’t get any useful asymptotics from this. What is the true limiting distribution of S/A, if it’s not the red curve above?

Random queen

This is a sequel to Random knight… don’t expect more than you usually get from sequels. I do not plan to do other chess pieces, although “random pawn” is an intriguing idea.

The queen begins its random walk at d1. After two moves, it can be anywhere on the board.

2 queen moves

It is not surprising that the most likely outcome is a return to the initial square. It is less obvious why the second most likely position is d3.

After 3 moves, the most likely positions are near the center of the board.

3 queen moves

After 4 moves, a concentric-square pattern emerges.

4 queen moves

After 10 moves, the distribution is visually identical to the steady-state distribution.

10 queen moves; ≈ steady state

As was noted in the Random knight post, the steady state distribution is the normalized number of moves from a given position. Hence, all probabilities are rational. For the knight they happen to be unit fractions: 1/168, 1/112, 1/84, 1/56, 1/42. The queen probabilities are uglier: 3/208, 23/1456, 25/1456, 27/1456. Random queen is distributed more uniformly than random knight: it spends between 1.44% and 1.85% of its time in any given square. For the knight, the range is from 0.59% to 2.38%.
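
Since the steady state is just mobility normalized by the total move count, the queen distribution can be computed directly. A short sketch (my code, not from the post; queenSteadyState is a made-up name):

function p=queenSteadyState()
    // N(i,j) = number of queen moves from square (i,j) on an empty board:
    // 14 rook moves, plus bishop moves limited by the distance to each edge
    N=zeros(8,8)
    for i=1:8
        for j=1:8
            N(i,j)=14+min(8-i,8-j)+min(8-i,j-1)+min(i-1,8-j)+min(i-1,j-1)
        end
    end
    p=N/sum(N)   // sum(N) evaluates to 1456, hence the denominators above
endfunction

The corner value 21/1456 = 3/208 ≈ 1.44% and the central value 27/1456 ≈ 1.85% give the range quoted above.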

But the champion in uniform coverage of the chessboard is the rook: every square allows exactly 14 rook moves, so a randomly moving rook spends 1/64 of its time on each square.

Random knight

A knight walks randomly on the standard chessboard. What is the proportion of time that it will spend at e4?

The answer (1/42) is not hard to get, but I’ll take the computational approach first (testing Scilab 5.5.0 beta 1, by the way). Begin by placing the knight at b1 and letting it make five random moves. This is the distribution of its position after five moves:

1 knight 5 moves

Unsurprisingly, half of the board is black: after an odd number of moves, the knight is on a square of the opposite color from b1. The other half isn’t uniformly colored; you can even get to h8 in five moves (want to find all possible ways to do this?).

After ten moves, the colors become more uniform.

1 knight 10 moves

After 200 moves (or any large even number of moves), the distribution has changed little. But you may notice that it is centrally symmetric, while the previous one isn’t quite.

1 knight 200 moves

Let’s repeat the process beginning with two knights at b1 and g1. After five moves of each:

2 knights 5 moves

After ten moves:

2 knights 10 moves

After a large number of moves (it does not matter whether it is even or odd), the variety of colors is greatly reduced:

steady state distribution

Indeed, this is the distribution which also describes the proportion of time that the knight (wherever it started) will spend at a given square Q.

Precisely, the proportion of time spent at Q is P(Q)=N(Q)/336 where N(Q) is the number of legal moves from Q. For the sixteen central squares P(Q) = 8/336 = 1/42, while for the corners we get 2/336 = 1/168.

Here is a quick argument to support the above. Let Q1 and Q2 be two squares such that the move from Q1 to Q2 is legal. The proportion of time that the knight makes this move is P(Q1)/N(Q1). Similarly, the time proportion of the Q2-Q1 move is P(Q2)/N(Q2). Since the process is symmetric in time (we have a reversible Markov chain), P(Q1)/N(Q1)=P(Q2)/N(Q2). In other words, the ratio P(Q)/N(Q) is the same for any two squares that are joined by a legal move; since every square can be reached from every other, it follows that P(Q)/N(Q) is the same for all squares. Finding the coefficient of proportionality is a matter of counting, since the sum of P(Q) over all Q is 1.
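
The counting is easy to script. A sketch (mine, not the post’s code; knightSteadyState is a made-up name) that tabulates N(Q) and normalizes:

function p=knightSteadyState()
    moves=[2 1;1 2;-2 1;1 -2;2 -1;-1 2;-2 -1;-1 -2]
    N=zeros(8,8)
    for i=1:8
        for j=1:8
            for k=1:8
                pos=[i,j]+moves(k,:)
                if (min(pos)>=1)&(max(pos)<=8) then
                    N(i,j)=N(i,j)+1   // count the legal moves from (i,j)
                end
            end
        end
    end
    p=N/sum(N)   // sum(N) evaluates to 336
endfunction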

The Scilab code I wrote for this post is largely not knight-specific. The function update receives the initial distribution of a chess piece and the set of its legal moves. It computes the distribution after one random move.

function newState=update(state,moves)
    // state: n-by-n matrix holding the current position distribution
    // moves: one move offset per row, e.g. [2 1] for a knight
    maxK=size(moves,1)
    n=size(state,1)
    newState=zeros(state)
    for i=1:n
        for j=1:n
            neighborCount=0
            contribution=zeros(state)
            for k=1:maxK
                pos=[i,j]+moves(k,:)
                if (min(pos)>=1)&(max(pos)<=n) then   // move stays on the board
                   neighborCount=neighborCount+1
                   contribution(pos(1),pos(2))=state(i,j)
                end
            end
            // the mass at (i,j) is divided equally among the legal moves
            newState=newState+contribution/neighborCount
        end
    end
endfunction

This set of moves is given in the function knight, which calls update iteratively.

function state=knight(state,iter)
    // the eight knight move offsets
    moves=[2 1;1 2;-2 1;1 -2;2 -1;-1 2;-2 -1;-1 -2]
    for i=1:iter
        state=update(state,moves)
    end
endfunction
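
Together with the knightSteadyState sketch from above, this gives a numerical test of the steady state: the mobility distribution should be a fixed point of update.

p = knightSteadyState()
max(abs(knight(p,1)-p))   // should be zero up to rounding error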

For example, this is how I plotted the distribution after 10 random moves starting at b1:

initial = zeros(8,8)
initial(8,2) = 1               // the knight starts at b1
state = knight(initial,10)     // distribution after 10 random moves
isoview(0,0,9,9)
Matplot(state*500)             // draw the matrix as colored cells
f = gcf()
f.color_map = hotcolormap(32)

NB: getting the correct color representation of matrices from Matplot requires an up-to-date version of Scilab; in version 5.4.0, Matplot smooths colors, producing intriguing but less informative images.

stars? knight sky?