Wild power pie

Many people are aware of {\pi} being a number between 3 and 4, and some also know that {e} is between 2 and 3. Although the difference {\pi-e} is less than 1/2, it’s enough to place the two constants in separate buckets on the number line, separated by an integer.

When dealing with powers of {e}, using {e>2} is frequently wasteful, so it helps to know that {e^2>7}. Similarly, {\pi^2<10} is way more precise than {\pi<4}. To summarize: {e^2} is between 7 and 8, while {\pi^2} is between 9 and 10.

Do any two powers of {\pi} and {e} have the same integer part? That is, does the equation {\lfloor \pi^n \rfloor = \lfloor e^m \rfloor} have a solution in positive integers {m,n}?

Probably not. Chances are that the only pairs {(m,n)} for which {|\pi^n - e^m|<10} are those with {m,n\in \{1,2\}}, with the smallest difference attained when {m=n=1}.

Indeed, having {|\pi^n - e^m|<1} implies that {|n\log \pi - m|<e^{-m}}, or put differently, {\left|\log \pi - \dfrac{m}{n}\right| < \dfrac{1}{n \,\pi^n}}. This would be an extraordinary rational approximation… for example, with {n=100} it would mean that {\log \pi = 1.14\ldots} with the following {50} digits all being {0}. This isn’t happening.

Looking at the continued fraction expansion of {\log \pi} shows denominators of modest size {[1; 6, 1, 10, 24, \dots]}, indicating the absence of extraordinarily good rational approximations. Of course, we can still use the convergents to get good approximations, {\left|\log \pi - \dfrac{m}{n}\right| < \dfrac{1}{n^2}}, which leads to {\pi^n\approx e^m} with small relative error. For example, dropping {24} and subsequent terms yields the convergent {87/76}, and one can check that {\pi^{76} = 6.0728\ldots \cdot 10^{37}} while {e^{87} = 6.0760\ldots\cdot 10^{37}}.
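These claims are easy to check with the standard library alone (a sketch; the continued fraction is computed naively with floats, which is fine for the first few partial quotients):

```python
from math import log, pi, e

# First partial quotients of log(pi) = 1.1447...
x, terms = log(pi), []
for _ in range(4):
    a = int(x)
    terms.append(a)
    x = 1 / (x - a)
# terms is now [1, 6, 1, 10]

# Build the convergent p/q from the partial quotients
p0, q0, p1, q1 = 1, 0, terms[0], 1
for a in terms[1:]:
    p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0

# The last convergent is 87/76, and pi^76 is within about 0.05% of e^87
print(p1, q1, pi**76, e**87)
```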

Trying a few not-too-obscure constants with the help of the mpmath library, the best coincidence of integer parts that I found is the following: the 13th power of the golden ratio {\varphi = (\sqrt{5}+1)/2} and the 34th power of Apéry’s constant {\zeta(3) = 1^{-3}+2^{-3}+3^{-3}+4^{-3}+\dots} both have integer part 521.
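This near-coincidence can be verified even without mpmath, using ordinary floats (a sketch; here Apéry’s constant is approximated by a partial sum whose tail, below {1/(2\cdot 10^{10})}, is far too small to move the integer part):

```python
from math import floor, sqrt

phi = (sqrt(5) + 1) / 2
# Partial sum of 1/k^3; the tail beyond k = 10^5 is under 1/(2*10^10)
zeta3 = sum(1 / k**3 for k in range(1, 100001))

print(floor(phi**13), floor(zeta3**34))   # both print 521
```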

Irrational sunflowers

A neat way to visualize a real number {\alpha} is to make a sunflower out of it. This is an arrangement of points with polar angles {2 \pi \alpha k} and polar radii {\sqrt{k}} (so that the concentric disks around the origin get the number of points proportional to their area). The prototypical sunflower has {\alpha=(\sqrt{5}+1)/2}, the golden ratio. This is about the most uniform arrangement of points within a disk that one can get.

Golden ratio sunflower

But nothing stops us from using other numbers. The square root of 5 is not nearly as uniform, forming distinct spirals.

Square root of 5

The number {e} begins with spirals, but quickly turns into something more uniform.

Euler's sunflower

The number {\pi} has stronger spirals: seven of them, due to the approximation {\pi\approx 22/7}.

pi sunflower

Of course, if {\pi} were actually {22/7}, the arrangement would have rays instead of spirals:

Rational sunflower: 22/7

What if we used more points? The previous pictures have 500 points; here is {\pi} with {3000}. The new pattern has 113 rays: {\pi\approx 355/113}.

pi with 3000 points
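The ray counts 7 and 113 are exactly the denominators of the continued fraction convergents of {\pi}, which a few lines of standard-library Python can recover (a sketch):

```python
from math import pi

# Partial quotients of pi: [3; 7, 15, 1, 292, ...]
x, quotients = pi, []
for _ in range(5):
    a = int(x)
    quotients.append(a)
    x = 1 / (x - a)

# Convergents: 3/1, 22/7, 333/106, 355/113, ...
convergents, (p0, q0, p1, q1) = [(3, 1)], (1, 0, quotients[0], 1)
for a in quotients[1:]:
    p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
    convergents.append((p1, q1))

print(convergents[:4])   # [(3, 1), (22, 7), (333, 106), (355, 113)]
```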

Apéry’s constant, after beginning with five spirals, refuses to form rays or spirals again even with 3000 points.

Apéry's constant, 3000 points

The images were made with Scilab as follows, with an offset by 1/2 in the polar radius to prevent the center from sticking out too much.

n = 500
alpha = (sqrt(5)+1)/2
r = sqrt([1:n]-1/2)  
theta = 2*%pi*alpha*[1:n]
plot(r.*cos(theta), r.*sin(theta), '*');
set(gca(), "isoview", "on")

Improving the Wallis product

The Wallis product for {\pi}, as seen on Wikipedia, is

{\displaystyle 2\prod_{k=1}^\infty \frac{4k^2}{4k^2-1}  = \pi \qquad \qquad (1)}

Historical significance of this formula notwithstanding, one has to admit that this is not a good way to approximate {\pi}. For example, the product up to {k=10} is

{\displaystyle 2\,\frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8 \cdot 10 \cdot 10\cdot 12\cdot 12\cdot 14\cdot 14\cdot 16\cdot 16\cdot 18 \cdot 18\cdot 20\cdot 20}{1\cdot 3\cdot 3\cdot 5 \cdot 5\cdot 7\cdot 7\cdot 9\cdot 9\cdot 11\cdot 11\cdot 13\cdot 13\cdot 15\cdot 15\cdot 17\cdot 17\cdot 19\cdot 19\cdot 21} =\frac{137438953472}{44801898141} }

And all we get for this effort is the lousy approximation {\pi\approx \mathbf{3.0677}}.

But it turns out that (1) can be dramatically improved with a little tweak. First, let us rewrite partial products in (1) in terms of double factorials. This can be done in two ways: either

{\displaystyle 2\prod_{k=1}^n \frac{4k^2}{4k^2-1}  =  (4n+2) \left(\frac{(2n)!!}{(2n+1)!!}\right)^2  \qquad \qquad (2)}

or

{\displaystyle 2\prod_{k=1}^n \frac{4k^2}{4k^2-1}  =  \frac{2}{2n+1} \left(\frac{(2n)!!}{(2n-1)!!}\right)^2  \qquad \qquad (3)}

Seeing how badly (2) underestimates {\pi}, it is natural to bump it up: replace {4n+2} with {4n+3}:

{\displaystyle \pi \approx b_n= (4n+3) \left(\frac{(2n)!!}{(2n+1)!!}\right)^2  \qquad \qquad (4)}

Now with {n=10} we get {\mathbf{3.1407}} instead of {\mathbf{3.0677}}. The error is down by two orders of magnitude, and all we had to do was to replace the factor of {4n+2=42} with {4n+3=43}. In particular, the size of numerator and denominator hardly changed:

{\displaystyle b_{10}=43\, \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6\cdot 8\cdot 8 \cdot 10 \cdot 10\cdot 12\cdot 12\cdot 14\cdot 14\cdot 16\cdot 16\cdot 18 \cdot 18\cdot 20\cdot 20}{3\cdot 3\cdot 5 \cdot 5\cdot 7\cdot 7\cdot 9\cdot 9\cdot 11\cdot 11\cdot 13\cdot 13\cdot 15\cdot 15\cdot 17\cdot 17\cdot 19\cdot 19\cdot 21\cdot 21} }
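Both numbers are easy to verify exactly with Python’s fractions module (a sketch; {b_n} is obtained from the partial product by swapping the factor {4n+2} for {4n+3}, as in (2) and (4)):

```python
from fractions import Fraction

n = 10
wallis = Fraction(2)
for k in range(1, n + 1):
    wallis *= Fraction(4 * k * k, 4 * k * k - 1)

# (2) says the partial product equals (4n+2)*((2n)!!/(2n+1)!!)^2,
# so b_n is the same product with 4n+2 replaced by 4n+3:
b = wallis * Fraction(4 * n + 3, 4 * n + 2)

print(float(wallis), float(b))   # 3.0677... versus 3.1407...
```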

Approximation (4) differs from (2) by the additional term {\left(\frac{(2n)!!}{(2n+1)!!}\right)^2}, which decreases to zero. Therefore, it is not obvious whether the sequence {b_n} is increasing. To prove that it is, observe that the ratio {b_{n+1}/b_n} is

{\displaystyle  \frac{4n+7}{4n+3}\left(\frac{2n+2}{2n+3}\right)^2}

which is greater than 1 because

{\displaystyle  (4n+7)(2n+2)^2 - (4n+3)(2n+3)^2 = 1 >0 }

Sweet cancellation here. Incidentally, it shows that if we used {4n+3+\epsilon} instead of {4n+3}, the sequence would overshoot {\pi} and no longer be increasing.

The formula (3) can be similarly improved. The fraction {2/(2n+1)} is secretly {4/(4n+2)}, which should be replaced with {4/(4n+1)}. The resulting approximation for {\pi}

{\displaystyle c_n =  \frac{4}{4n+1} \left(\frac{(2n)!!}{(2n-1)!!}\right)^2  \qquad \qquad (5)}

is about as good as {b_n}, but it approaches {\pi} from above. For example, {c_{10}\approx \mathbf{3.1425}}.

The proof that {c_n} is decreasing is familiar: the ratio {c_{n+1}/c_n} is

{\displaystyle  \frac{4n+1}{4n+5}\left(\frac{2n+2}{2n+1}\right)^2}

which is less than 1 because

{\displaystyle  (4n+1)(2n+2)^2 - (4n+5)(2n+1)^2 = -1 <0 }

Sweet cancellation once again.
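Both cancellations can be double-checked by brute force (or, of course, by expanding the cubics by hand):

```python
# Verify the two identities behind the monotonicity proofs
# for the first few hundred values of n.
for n in range(1, 300):
    assert (4*n + 7) * (2*n + 2)**2 - (4*n + 3) * (2*n + 3)**2 == 1
    assert (4*n + 1) * (2*n + 2)**2 - (4*n + 5) * (2*n + 1)**2 == -1
```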

Thus, {b_n<\pi<c_n} for all {n}. The midpoint of this containing interval provides an even better approximation: for example, {(b_{10}+c_{10})/2 \approx \mathbf{3.1416}}. The plot below displays the quality of approximation as logarithm of the absolute error:

Logarithm of approximation error
  • yellow dots show the error of Wallis partial products (2)-(3)
  • blue is the error of {b_n}
  • red is for {c_n}
  • black is for {(b_n+c_n)/2}

And all we had to do was to replace {4n+2} with {4n+3} or {4n+1} in the right places.
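For instance, at {n=10} the two tweaked formulas and their midpoint give (a standard-library sketch, with the double factorials built by a simple loop):

```python
from math import pi

def approximations(n):
    """Return (b_n, c_n) from formulas (4) and (5)."""
    even, odd = 1.0, 1.0                 # (2n)!! and (2n+1)!!
    for k in range(1, n + 1):
        even *= 2 * k
        odd *= 2 * k + 1
    b = (4 * n + 3) * (even / odd) ** 2                   # (4): from below
    c = 4 / (4 * n + 1) * (even * (2*n + 1) / odd) ** 2   # (5): from above
    return b, c

b, c = approximations(10)
print(b, c, (b + c) / 2)   # ~3.1407, ~3.1425, ~3.1416
```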

When the digits of pi go to 11

There is an upward trend in the digits of {\pi}. I just found it using Maple.

X := [0, 1, 2, 3, 4, 5, 6, 7, 8]:
Y := [3, 1, 4, 1, 5, 9, 2, 6, 5]:
LinearFit([1, n], X, Y, n);

2.20000000000000+.450000000000000*n

Here the digits are enumerated beginning with the {0}th, which is {3}. The regression line {y = 2.2 + 0.45n} predicts that the {20}th digit of {\pi} is approximately {11}.

It goes to 11
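The same fit is easy to reproduce without Maple; plain least squares over the nine digits recovers the line {y = 2.2 + 0.45n} (a sketch):

```python
X = range(9)
Y = [3, 1, 4, 1, 5, 9, 2, 6, 5]

xbar, ybar = sum(X) / 9, sum(Y) / 9
slope = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) \
        / sum((x - xbar) ** 2 for x in X)
intercept = ybar - slope * xbar

print(intercept, slope)          # ~2.2 and ~0.45
print(intercept + slope * 20)    # the "20th digit": ~11.2
```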

But maybe my data set is too small. Let’s throw in one more digit; that ought to be enough. The next digit turns out to be {3}, and this hurts my trend. The new regression line {y=2.67+0.27n} has a smaller slope, and it crosses the old one at {n\approx 2.7}.

Next digit, not as good

But we all know that {3} can be easily changed to {8}. The old “professor, you totaled the scores on my exam incorrectly” trick. Finding a moment when none of the {\pi}-obsessed people are looking, I change the decimal expansion of {\pi} to {3.141592658\dots}. The new trend looks even better than the old: the regression line became steeper, and it crosses the old one at the point {n\approx 2.7}.

Much better!

What, {2.7} again? Is this a coincidence? I try changing the {9}th digit to other numbers, and plot the resulting regression lines.

What is going on?

All intersect at the same spot. The hidden magic of {\pi} is uncovered.
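One can check numerically that this is no accident. Treating the 9th digit {y} as a parameter, the fitted value at a point {x} depends on {y} through the weight {1/10 + (9-\bar x)(x-\bar x)/S_{xx}}, which vanishes exactly at {x = 4.5 - 82.5/45 = 8/3 \approx 2.7} (my computation, not from the post). So every such regression line passes through the same point with abscissa {8/3}; a sketch verifying this:

```python
def fit(Y):
    """Least-squares line through (i, Y[i]); returns (intercept, slope)."""
    n = len(Y)
    X = range(n)
    xbar, ybar = sum(X) / n, sum(Y) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y)) \
            / sum((x - xbar) ** 2 for x in X)
    return ybar - slope * xbar, slope

digits = [3, 1, 4, 1, 5, 9, 2, 6, 5]
values = []
for y9 in range(10):                  # try every possible 9th digit
    a, b = fit(digits + [y9])
    values.append(a + b * 8 / 3)      # evaluate each line at n = 8/3

print(max(values) - min(values))      # zero, up to rounding error
```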

(Thanks to Vincent Fatica for the idea of this post.)

Galperin’s billiard method of computing pi

For lack of original pi-related ideas, I describe a billiard experiment invented and popularized by Gregory Galperin (Григорий Гальперин). I read about it in a short note he published in 2001 in the Russian journal “Mathematical Education” (“Математическое Просвещение”). Although the table of contents is in Russian, it’s not hard to find the article with “pi” in the title. It’s clear from the article that the observation precedes its publication by some years; the author gave talks on this subject for a while before committing it to print.

Place two billiard balls (red and blue) on the half-line [0,\infty): the red ball is at rest, the blue ball is to the right of it and is moving to the left. At x=0 there is an elastic wall.

1-dimensional billiard table

Assume perfectly elastic collisions: the kinetic energy is preserved. How many collisions happen depends on the masses of the balls, or rather their ratio. Suppose they have equal masses (ratio = 1). Then we’ll see the following:

  1. Blue hits Red and stops; Red begins to move to the left
  2. Red hits the wall and bounces back
  3. Red hits Blue and stops; Blue begins to move to the right

That’s all, 3 collisions in total. Here is a Scilab-generated illustration: I placed red at x=1 and blue at x=1.5, with initial velocity -1. The time axis is vertical.

Mass ratio 1 leads to 3 collisions

Number 3 is also the first digit of \pi. Coincidence? No.

Consider the blue/red mass ratio of 100. Now we really need Scilab. Count the collisions as you scroll down:

Mass ratio 100 leads to 31 collisions

Yes, that’s exactly 31 collisions. The mass ratio of 10000 would yield 314 collisions, and so on.

How does this work? The trick is to consider the two-dimensional configuration space in which the axes are scaled in proportion to the square roots of the masses. A point on the configuration plane describes the positions of both balls at once, but it also behaves just like a billiard ball itself, bouncing off the lines (red=0) and (red=blue). The scaling by the square roots of the masses is needed to make the (red=blue) bounce work correctly: the angle of incidence is equal to the angle of reflection. (See Update 2 below.)

Configuration space

The opening angle is \alpha=\tan^{-1}\sqrt{M_r/M_b}. Now we can straighten the billiard trajectory by repeated reflection: this will help us count the number of collisions.

Straightening a billiard trajectory by reflection

It remains to count how many angles of size \alpha=\tan^{-1}\sqrt{M_r/M_b} fit inside the angle \pi. By taking \sqrt{M_r/M_b}=10^{-n}, we should get 10^n \pi collisions, rounded to an integer one way or the other. It seems that the rounding is always down, that is, we get exactly the first n+1 digits of \pi. I once tried to prove that this always happens, but unsuccessfully. The error of the approximation \tan^{-1} 10^{-n}\approx 10^{-n} needs to be compared to the fractional part of 10^n \pi, which looks like tricky business.
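One convenient closed form consistent with this counting (my phrasing, not the article’s): the straightened trajectory crosses {\lceil \pi/\alpha \rceil - 1} reflected lines, which reproduces 3, 31, 314 for mass ratios 1, 100, 10000:

```python
from math import atan, ceil, pi, sqrt

def collisions(mass_ratio):
    """Count of wedge angles alpha fitting in pi, minus the last partial one."""
    alpha = atan(1 / sqrt(mass_ratio))   # alpha = arctan(sqrt(Mr/Mb))
    return ceil(pi / alpha) - 1

print([collisions(100**k) for k in range(3)])   # [3, 31, 314]
```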

Here’s the Scilab source. The plots in this post were obtained with galperin(1) and galperin(100).

function galperin(m)
    // m = blue/red mass ratio; time step h, fixed number of steps
    h=0.001; steps=4000;  x1=zeros(steps); x2=zeros(steps);
    // blue ball at x=1.5 moving left; red ball at rest at x=1
    x1(1)=1.5;  v1=-1;  x2(1)=1;  v2=0; collision=0; j=1;
    while j<steps,
        select collision
        case 0 then   // free motion until a collision is detected
            newx1=x1(j)+h*v1; newx2=x2(j)+h*v2;
            if (newx2<0) then collision=1;         // red reaches the wall
            elseif (newx1<newx2) then collision=2; // blue reaches red
            else
                x1(j+1)=newx1; x2(j+1)=newx2; j=j+1;
            end
        case 1 then   // wall bounce: red reverses direction
            v2=-v2; x1(j+1)=x1(j); x2(j+1)=x2(j); collision=0; j=j+1;
        case 2 then   // elastic ball-ball collision with mass ratio m
            newv1=(v1*(m-1)+2*v2)/(m+1);  newv2=(v2*(1-m)+2*m*v1)/(m+1);
            v1=newv1; v2=newv2; x1(j+1)=x1(j); x2(j+1)=x2(j);
            collision=0; j=j+1;
        end
    end
    t=linspace(0, h*steps, steps); plot(t, x1); plot(t, x2, 'red');
endfunction
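As a cross-check, the same experiment can be run without any time step at all. This event-driven Python version (my sketch, not the author’s code) jumps straight from one collision to the next and counts them exactly:

```python
from math import inf

def count_collisions(m):
    """Blue ball of mass m at x=1.5 moving left at speed 1, red ball of
    mass 1 at rest at x=1, elastic wall at x=0. Returns the number of
    collisions, computed event by event."""
    x1, v1 = 1.5, -1.0      # blue
    x2, v2 = 1.0, 0.0       # red
    count = 0
    while True:
        # Time to the next ball-ball and ball-wall collisions
        t_ball = (x1 - x2) / (v2 - v1) if v2 > v1 else inf
        t_wall = -x2 / v2 if v2 < 0 else inf
        t = min(t_ball, t_wall)
        if t == inf:
            return count        # no more collisions are possible
        x1 += t * v1
        x2 += t * v2
        if t_ball <= t_wall:    # elastic ball-ball collision
            v1, v2 = ((m - 1)*v1 + 2*v2) / (m + 1), \
                     (2*m*v1 + (1 - m)*v2) / (m + 1)
        else:                   # red ball bounces off the wall
            v2 = -v2
        count += 1

print([count_collisions(m) for m in (1, 100, 10000)])   # [3, 31, 314]
```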

Update: this post was referred to from Math StackExchange, with a request for a more intuitive explanation of how the scaling by \sqrt{M} works. Let V_c be the velocity vector of the point in the configuration space. Thanks to the scaling, we have the following:

  • The squared magnitude of V_c is proportional to the total kinetic energy of the system.
  • The projection of V_c onto the collision line x_r=x_b is equal to the total momentum of the system, up to a constant factor.

When the balls collide with each other, both the energy and the momentum are preserved. It follows that the new vector V_c will have the same magnitude and the same projection onto the collision line. By trigonometry, this implies the equality of the incident and reflected angles.
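A numerical sanity check of this argument (my sketch): apply the elastic-collision formulas to random masses and velocities and confirm that both conserved quantities, and hence the reflection law in the scaled coordinates, hold every time:

```python
import random

random.seed(0)
for _ in range(1000):
    m1, m2 = random.uniform(0.1, 10), random.uniform(0.1, 10)
    v1, v2 = random.uniform(-5, 5), random.uniform(-5, 5)
    # One-dimensional elastic collision between masses m1 and m2
    w1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    w2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    # Scaled configuration-space velocity is (sqrt(m1) v1, sqrt(m2) v2);
    # its squared magnitude is twice the kinetic energy:
    assert abs((m1*v1**2 + m2*v2**2) - (m1*w1**2 + m2*w2**2)) < 1e-9
    # Its projection onto the collision line (direction (sqrt(m1), sqrt(m2)))
    # is the total momentum m1 v1 + m2 v2, up to a constant factor:
    assert abs((m1*v1 + m2*v2) - (m1*w1 + m2*w2)) < 1e-9
```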