An exercise in walking randomly

Suppose I randomly and blindly walk back and forth along this hallway, which may or may not happen in reality. The safest place to start is in the middle, next to the department’s main office. How long will it take before I bump into a wall at one of the ends?

[Photo: Carnegie Building]

Formally, let X_1,X_2,\dots be independent random variables which take values \pm 1 with equal probability 1/2. These are the steps of the symmetric random walk. Starting from S_0=0, after n steps I end up at the point S_n=X_1+\dots+X_n. The sequence (S_n) is a martingale, being the sum of independent random variables with zero mean. Less obviously, for every number \theta\in\mathbb R the sequence Z_n=\exp(\theta S_n-n\log\cosh\theta) is also a martingale, even though the increments d_n=Z_n-Z_{n-1} are not independent of each other. The reason for \log\cosh\theta is that E(e^{\theta X_n})=\cosh\theta. It is convenient to write s=\log\cosh\theta in what follows. We check the martingale property of Z_n=e^{\theta S_n-ns} by conditioning on a value S_{n-1}=\alpha and computing E(Z_n\mid S_{n-1}=\alpha)=e^{\theta\alpha-ns}\,E(e^{\theta X_n})=e^{\theta\alpha-(n-1)s}=Z_{n-1}.
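This one-step identity can be verified numerically; here is a minimal Python sketch, where the particular values of \theta, \alpha and n are arbitrary test choices:

```python
import math

# One-step martingale check for Z_n = exp(theta*S_n - n*s), s = log(cosh(theta)):
# given S_{n-1} = alpha, the next position is alpha+1 or alpha-1 with probability 1/2.
theta, alpha, n = 0.7, 3, 10   # arbitrary test values
s = math.log(math.cosh(theta))
lhs = 0.5 * (math.exp(theta * (alpha + 1) - n * s)
             + math.exp(theta * (alpha - 1) - n * s))
rhs = math.exp(theta * alpha - (n - 1) * s)
print(abs(lhs - rhs) < 1e-12)  # True
```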

If our hallway is the interval [-N,N], we should investigate the stopping time T=\min\{n\ge 1\colon |S_n|\ge N\}. The optional stopping theorem applies to Z_T because |Z_n|\le e^{|\theta| N} whenever n\le T. Thus, E(Z_T)=E(Z_0)=1. In terms of S_n this means E e^{\theta S_T-sT}=1.

Now, S_T is either N or -N; by symmetry both values occur with probability 1/2, and the conditional distribution of T is the same in both cases. Therefore, 1=E e^{\theta S_T-sT}=\frac{1}{2}(e^{N\theta}+e^{-N\theta}) E e^{-sT}. We end up with the Laplace transform of T,

\displaystyle E e^{-sT}=\frac{1}{\cosh N\theta} = \frac{1}{\cosh N\, \mathrm{arccosh}\, e^{s}}, \qquad s\ge 0
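The Laplace transform formula is easy to spot-check by Monte Carlo; this is a rough Python sketch in which the hallway half-width N=3, the value s=0.1, the seed and the trial count are all my choices:

```python
import math
import random

random.seed(1)
N, s = 3, 0.1   # small hallway half-width and an arbitrary s > 0

def exit_time(N):
    """Simulate a symmetric walk from 0 until it reaches -N or N."""
    pos, t = 0, 0
    while abs(pos) < N:
        pos += random.choice((-1, 1))
        t += 1
    return t

trials = 100_000
est = sum(math.exp(-s * exit_time(N)) for _ in range(trials)) / trials
exact = 1 / math.cosh(N * math.acosh(math.exp(s)))
print(round(est, 3), round(exact, 3))  # the two should agree to ~2 decimal places
```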

(I would not want to write \cosh^{-1} in such a formula.) Since N is an integer and e^{s}\ge 1, a magical identity applies:

\displaystyle  \cosh N \,\mathrm{arccosh}\,x =  \mathcal T_N(x)\qquad x\ge 1,

\mathcal T_N being the Nth Чебышёв polynomial of the first kind. (Compare to \displaystyle  \cos N \,\mathrm{arccos}\,x =  \mathcal T_N(x) which holds for |x|\le 1.) Thus,

\displaystyle E e^{-sT}=\frac{1}{\mathcal T_N (e^{s})}, \qquad s\ge 0
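The magical identity \cosh N\,\mathrm{arccosh}\,x=\mathcal T_N(x) itself can be spot-checked numerically; in this Python sketch, evaluating \mathcal T_N by the three-term recurrence and the choice of sample points x\ge 1 are my own:

```python
import math

def T(N, x):
    """Evaluate the Chebyshev polynomial T_N(x) via T_{k+1} = 2x*T_k - T_{k-1}."""
    a, b = 1.0, x   # T_0 and T_1
    for _ in range(N):
        a, b = b, 2 * x * b - a
    return a

N = 10
ok = all(abs(math.cosh(N * math.acosh(x)) - T(N, x)) <= 1e-9 * max(1.0, T(N, x))
         for x in (1.0, 1.5, 2.0, 5.0))
print(ok)  # True
```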

One more change of notation: let e^{-s}=y, so that E e^{-sT}=E(y^T)=\sum_{n=1}^{\infty} P(T=n)\,y^n. The formula

\displaystyle \sum_{n=1}^{\infty} P(T=n)\,y^n=\frac{1}{\mathcal T_N (1/y)}

gives us the probabilities P(T=n) for each n, assuming we can expand the reciprocal of the Чебышёв polynomial into a power series. Let’s check two simple cases:

  • if N=1, then \mathcal T_1(x)=x and \frac{1}{\mathcal T_1 (1/y)}=y. Hence T\equiv 1 which is correct: the walk immediately hits one of the walls \pm 1.
  • if N=2, then \mathcal T_2(x)=2x^2-1, hence \displaystyle \frac{1}{\mathcal T_2 (1/y)}=\frac{y^2}{2}\frac{1}{1-y^2/2}=\sum_{k=1}^\infty \frac{1}{2^k} y^{2k}. This means T can never be odd (which makes sense, since it must always have the parity of N), and the probability of hitting a wall in exactly 2k steps is 1/2^k. A moment’s reflection confirms that the answer is right.
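The N=2 case is also easy to confirm by brute-force simulation; a Python sketch, with seed and trial count chosen arbitrarily:

```python
import random
from collections import Counter

random.seed(2)
counts, trials = Counter(), 100_000
for _ in range(trials):
    pos, t = 0, 0
    while abs(pos) < 2:               # hallway [-2, 2]
        pos += random.choice((-1, 1))
        t += 1
    counts[t] += 1
for k in (1, 2, 3):
    print(2 * k, counts[2 * k] / trials)   # ~0.5, ~0.25, ~0.125
```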

For a serious example, with N=10, I used Maple:

n := 100:  # number of term pairs to compute
ser := series(1/ChebyshevT(10, 1/x), x = 0, 2*n+1):
for i from 1 to 2*n do a[i] := coeff(ser, x^i) end do:
listplot([seq(a[i], i = 1 .. 2*n)], color = blue, thickness = 2);

By default listplot connects the dots, and since every other term is zero, the plot has a trippy pattern.

[Plot: exit time distribution for N=10]

Obviously, it’s impossible to hit a wall in fewer than N steps. The formula confirms this: writing 1/\mathcal T_N (1/y) as a rational function with numerator y^N, we see that no terms y^n with n<N can appear in the expansion.
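For readers without Maple, the expansion can be reproduced in exact rational arithmetic with standard-library Python (the helper names and the choice of 200 terms are mine): rewrite 1/\mathcal T_N(1/y) as y^N/Q(y), where Q(y)=y^N\,\mathcal T_N(1/y) is a genuine polynomial, and expand by power-series long division.

```python
from fractions import Fraction

def cheb_coeffs(N):
    """Coefficients of T_N(x), constant term first, via T_{k+1} = 2x*T_k - T_{k-1}."""
    a, b = [1], [0, 1]   # T_0 and T_1
    if N == 0:
        return a
    for _ in range(N - 1):
        nxt = [0] + [2 * c for c in b]   # 2x * T_k
        for i, c in enumerate(a):        # minus T_{k-1}
            nxt[i] -= c
        a, b = b, nxt
    return b

def exit_probs(N, terms):
    """First `terms` coefficients of 1/T_N(1/y) = y^N / Q(y), Q(y) = y^N * T_N(1/y)."""
    c = cheb_coeffs(N)
    q = [Fraction(c[N - k]) for k in range(N + 1)]   # Q(y) = sum_k q[k] y^k
    p = [Fraction(0)] * terms
    for n in range(terms):
        acc = Fraction(1) if n == N else Fraction(0)   # numerator y^N
        acc -= sum(q[k] * p[n - k] for k in range(1, min(N, n) + 1))
        p[n] = acc / q[0]
    return p   # p[n] = P(T = n)

probs = exit_probs(10, 201)
print(probs[10])                          # 1/512: only 2 of the 2^10 paths exit in 10 steps
print(any(probs[n] for n in range(10)))   # False: no exit before N steps
```

The partial sums of the coefficients approach 1 slowly, reflecting the long geometric tail of T when N=10.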

(This post is an exercise which I should have done long ago but never did. Thanks are due to Ted Cox for a walk-through.)

2 thoughts on “An exercise in walking randomly”

  1. I took a course on Stochastic Calculus with J. Michael Steele at UPenn. (I remember you said you were gonna teach stat in fall 2010, but never learnt it before)
    First of all, S_n^2-n is a martingale:
    E(S_{n+1}^2-(n+1)\mid S_n)=\frac12 ((S_n+1)^2-(n+1))+\frac12 ((S_n-1)^2-(n+1))=S_n^2-n
    From Doob's optional stopping theorem,
    E(T)=E(S_T^2)=\frac12 N^2+\frac12 N^2=N^2
    I like your series term matching method for solving for P(T=M). I came up with a combinatorial approach that I want to discuss with you.
    Let (\frac{M+N}{2},\frac{M-N}{2}) denote the event that for a random walk without stopping rules (hallway is infinitely long), there are \frac{M+N}{2} right steps and \frac{M-N}{2} left steps during the first M steps.
    P(T=M, S_T=N) P((\frac{M+N}{2},\frac{M-N}{2})| T=M, S_T=N)=P(T=M, S_T=N| (\frac{M+N}{2},\frac{M-N}{2}) ) P( (\frac{M+N}{2},\frac{M-N}{2}) )
    P(T=M, S_T=N)  = P(T=M, S_T=N| (\frac{M+N}{2},\frac{M-N}{2}) )  C_M^{\frac{M+N}{2}} /2^M
    where C_X^Y is the number of ways to choose Y items from X items.
    P(T=M, S_T=N | (\frac{M+N}{2},\frac{M-N}{2}) )=[C_M^{(M+N)/2}-N(\text{hit }N\text{ before }T)-N(\text{hit }-N\text{ before }T)]/C_M^{(M+N)/2}
    By the principle of reflection, N(\text{hit }-N\text{ before }T)=N(S_T= -3N)=C_M^{(M+3N)/2} for M\ge 3N. Otherwise it is 0.
    N(\text{hit }N\text{ before }T) can be decomposed as N(S_{T-1}=N+1,\text{ hit }N\text{ before }T)+N(S_{T-1}=N-1,\text{ hit }N\text{ before }T)=C_M^{(M+3N)/2}/2+N(S_{T-1}=N+1)=C_M^{(M+3N)/2}/2+C_{M-1}^{(M+N)/2}
    Finally P(T=M)=2 P(T=M, S_T=N)
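The identity E(T)=N^2 derived in this comment is easy to confirm by simulation; a Python sketch, where N=5, the seed and the trial count are arbitrary choices:

```python
import random

random.seed(3)
N, trials = 5, 40_000
total = 0
for _ in range(trials):
    pos, t = 0, 0
    while abs(pos) < N:               # walk until hitting -N or N
        pos += random.choice((-1, 1))
        t += 1
    total += t
mean = total / trials
print(mean)   # close to N^2 = 25
```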
