This is a follow-up to An exercise in walking randomly, but can be read independently.
Suppose I randomly and blindly walk back and forth along this hallway, which may or may not happen in reality. The safest place to start is in the middle, next to the department’s main office. How long will it take before I bump into a wall at one of the ends?
(Visitors are advised to not walk blindly near the ongoing construction.)
Formally, this is a random walk: I start at the midpoint of the interval $[-N, N]$ and move to the left or right with probability $1/2$. The walk ends when I reach either $-N$ or $N$. What is the probability that the walk will last exactly $n$ steps?
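Here is a minimal Python sketch of the experiment (the function and its names are mine, not part of the original discussion): simulate the blind walk from the middle until it hits a wall $N$ steps away on either side.

```python
import random

def exit_time(N):
    """Simulate a blind walk from the middle; return the step count at
    which it first reaches a wall N steps away on either side."""
    pos, t = 0, 0
    while abs(pos) < N:
        pos += random.choice((-1, 1))
        t += 1
    return t

# Empirical sample of exit times for one example hallway half-length.
samples = [exit_time(10) for _ in range(10_000)]
print(sum(samples) / len(samples))  # average exit time
```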
The answer (found in the earlier post) is given by the coefficient of $x^n$ in the power series for $1/T_N(1/x)$, where $T_N$ is the Chebyshev polynomial of degree $N$. The plot below shows the exit time distribution, with an extraneous but nice pattern.

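To check this numerically, one can expand the generating function with sympy; this sketch (mine, with $N = 4$ as an arbitrary example) prints $P(T = n)$ for small $n$.

```python
import sympy as sp

x = sp.symbols('x')
N = 4
# E(x^T) = 1/T_N(1/x), a rational function of x after clearing denominators.
gf = sp.cancel(1 / sp.chebyshevt(N, 1 / x))
series = sp.series(gf, x, 0, 21).removeO()
for n in range(N, 21):
    print(n, series.coeff(x, n))  # P(T = n); zero whenever n - N is odd
```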
In a comment, Lianxin He pointed out an alternative approach via the reflection principle: it gives the answer as a sum of binomial coefficients, which can be truncated to yield a simple and effective approximation. I describe his approach below.
To begin with, assume that $n - N$ is even; otherwise the probability is zero. Out of $2^n$ possible walks, $\binom{n}{(n+N)/2}$ make $N$ more steps to the right than to the left. Dividing this binomial coefficient by $2^n$ and multiplying by two (to account for both exits), we conclude that

$$p(n) = 2^{1-n}\binom{n}{\frac{n+N}{2}}$$

is the probability of reaching an end of the hallway at time $n$. The graph of $p(n)$ is in red below:

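In code this first estimate is a one-liner; the sketch below (names mine) is handy for plotting against the simulation above.

```python
from math import comb

def p_first(n, N):
    """First estimate: 2^(1-n) * C(n, (n+N)/2), zero for wrong parity."""
    if n < N or (n - N) % 2 != 0:
        return 0.0
    return 2.0 ** (1 - n) * comb(n, (n + N) // 2)
```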
This is not good at all. We neglected to exclude the walks that hit a wall before time $n$. How many of them are there? Let's consider only the walks that end at $N$ at time $n$. Formally, $S_n = N$, where $S_t$ denotes the position at time $t$.
1. The walks that hit $-N$ before time $n$ can be reflected about $-N$ the first time they get there. The reflection puts them in bijection with the walks such that $S_n = -3N$. There are $\binom{n}{(n+3N)/2}$ of those.
2. If a walk hits $N$ before time $n$, we flip a fair coin deciding whether to reflect it about $N$ from its first visit onward. As a result of our intervention, such walks will have $S_{n-1} = N+1$ or $S_{n-1} = N-1$ with equal probability. But on the other hand, our intervention did not change the distribution of walks. Hence, the number of walks hitting $N$ before time $n$ is twice the number of walks with $S_{n-1} = N+1$. Namely, $2\binom{n-1}{(n+N)/2}$. (Both counts are verified by brute force in the sketch after this list.)
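Both counts can be confirmed by enumerating all $2^n$ walks; a small sketch (mine), assuming $n - N$ is even:

```python
from itertools import product
from math import comb

def check_counts(n, N):
    """Among walks with S_n = N, count those hitting -N (resp. N) before
    time n, and compare with the two reflection counts above."""
    hit_minus = hit_plus = 0
    for steps in product((-1, 1), repeat=n):
        path = []
        pos = 0
        for s in steps:
            pos += s
            path.append(pos)
        if path[-1] != N:
            continue
        if any(p == -N for p in path[:-1]):
            hit_minus += 1
        if any(p == N for p in path[:-1]):
            hit_plus += 1
    assert hit_minus == comb(n, (n + 3 * N) // 2)
    assert hit_plus == 2 * comb(n - 1, (n + N) // 2)

check_counts(8, 2)   # both assertions pass
check_counts(11, 3)  # and with n, N both odd
```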
Excluding both types 1) and 2) counted above, we arrive at the new probability estimate

$$p(n) = 2^{1-n}\left[\binom{n}{\frac{n+N}{2}} - \binom{n}{\frac{n+3N}{2}} - 2\binom{n-1}{\frac{n+N}{2}}\right].$$

Let's see how it works…

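In code, this corrected estimate reads (a sketch; again the name is mine):

```python
from math import comb

def p_second(n, N):
    """Estimate with corrections 1) and 2) subtracted."""
    if n < N or (n - N) % 2 != 0:
        return 0.0
    j = (n + N) // 2
    # math.comb(n, k) returns 0 when k > n, so out-of-range terms vanish.
    count = comb(n, j) - comb(n, (n + 3 * N) // 2) - 2 * comb(n - 1, j)
    return 2.0 ** (1 - n) * count
```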
Looks very good at the beginning, but for large $n$ we are seriously undercounting. Indeed, the events 1) and 2) described above are not mutually exclusive. For example, a walk could hit $N$, then $-N$, and then arrive to $N$ again at time $n$. At present we are excluding such a walk twice. Let's fix this by adding them back.
3. The walks that hit $N$ and then $-N$ before time $n$ can be reflected twice: about $N$ at the first visit, and then about $3N$ (the image of the other wall). This puts them in bijection with the walks such that $S_n = 5N$. There are $\binom{n}{(n+5N)/2}$ of those.
4. If a walk hits $-N$ and then $N$ before time $n$, we reflect it at $-N$ and then flip a coin when the reflected walk reaches $-3N$. Following the reasoning in 2) above, the count is $2\binom{n-1}{(n+3N)/2}$.
Our new estimate, then, is

$$p(n) = 2^{1-n}\left[\binom{n}{\frac{n+N}{2}} - \binom{n}{\frac{n+3N}{2}} - 2\binom{n-1}{\frac{n+N}{2}} + \binom{n}{\frac{n+5N}{2}} + 2\binom{n-1}{\frac{n+3N}{2}}\right].$$

Here it is.

Looks good! But of course now we overcount the walks that hit the walls three times before time $n$. Fortunately, the pattern is now clear: the next approximation will be

$$p(n) = 2^{1-n}\left[\binom{n}{\frac{n+N}{2}} - 2\binom{n-1}{\frac{n+N}{2}} - \binom{n}{\frac{n+3N}{2}} + 2\binom{n-1}{\frac{n+3N}{2}} + \binom{n}{\frac{n+5N}{2}} - 2\binom{n-1}{\frac{n+5N}{2}} - \binom{n}{\frac{n+7N}{2}}\right].$$

You already can’t tell this approximation from the exact distribution. To summarize (pun intended), the exact probability of exiting at time $n$ is

$$P(T = n) = 2^{1-n}\sum_{k=0}^{\infty}(-1)^k\left[\binom{n}{\frac{n+(2k+1)N}{2}} - 2\binom{n-1}{\frac{n+(2k+1)N}{2}}\right],$$

where the series is actually finite: all terms with $(2k+1)N > n$ vanish. Since $\binom{n}{j} - 2\binom{n-1}{j} = \frac{2j-n}{n}\binom{n}{j}$, it can also be written more compactly as

$$P(T = n) = \frac{N}{n}\,2^{1-n}\sum_{k=0}^{\infty}(-1)^k(2k+1)\binom{n}{\frac{n+(2k+1)N}{2}}.$$
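The finite sum is straightforward to code; in this sketch (function name mine), truncating the loop at a fixed number of terms reproduces the successive approximations above.

```python
from math import comb

def p_exact(n, N):
    """Exact probability that the walk exits at time n (finite sum)."""
    if n < N or (n - N) % 2 != 0:
        return 0.0
    total, k = 0, 0
    while (2 * k + 1) * N <= n:  # later terms vanish anyway
        j = (n + (2 * k + 1) * N) // 2
        # math.comb(n, j) is zero when j > n, matching the convention above
        total += (-1) ** k * (comb(n, j) - 2 * comb(n - 1, j))
        k += 1
    return 2.0 ** (1 - n) * total

# For N = 2 the distribution is known in closed form: P(T = 2j) = 2^(-j).
assert all(abs(p_exact(2 * j, 2) - 2.0 ** (-j)) < 1e-12 for j in range(1, 20))
```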
Thanks for this post! It explains everything very clearly.

I just have a follow-up thought on your power series approach. You calculated the Laplace transform

$$E(e^{tT}) = \frac{1}{\cosh(N \,\mathrm{arccosh}\, e^{-t})}$$

for $T$, so denote $p(n) = P(T = n)$ as the probability density function; how hard is it to perform the reverse transformation on Chebyshev polynomials to get $p(n)$ back?

Also, from the moment generating function for $T$ we get the $i$th moment equal to $\frac{d^i E(e^{tT})}{dt^i}(0)$. For the first moment (mean),

$$\frac{d E(e^{tT})}{d t}(0) = \frac{-\sinh(N \,\mathrm{arccosh}\, e^{0})\cdot N \cdot \frac{1}{\sinh(\mathrm{arccosh}\, e^{0})}\cdot(-e^{0})}{\left[\cosh(N \,\mathrm{arccosh}\, e^{-0})\right]^{2}} = N^2.$$

The additional $N$ comes from l'Hôpital's rule in evaluating $\sinh(N \,\mathrm{arccosh}\, e^{-t})/\sinh(\mathrm{arccosh}\, e^{-t})$ as $t \to 0$. The result of mean $E(T) = N^2$ conforms with our martingale argument.

sorry i added additional \'s in front of latex in the above paragraph..
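The mean computation can be double-checked symbolically (a sketch of mine, not part of the comment): substituting $e^{-t} = \cosh s$ turns the moment generating function into $1/\cosh(Ns)$ with $t = -\log\cosh s$, and the limit below is exactly the l'Hôpital step.

```python
import sympy as sp

s, N = sp.symbols('s N', positive=True)
# E(T) = lim_{t -> 0} (E(e^{tT}) - 1)/t, with e^{-t} = cosh(s), t = -log(cosh(s))
mean = sp.limit((1 / sp.cosh(N * s) - 1) / (-sp.log(sp.cosh(s))), s, 0)
print(mean)  # N**2, agreeing with the martingale argument
```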