This is a brief return to the topic of Irrational Sunflowers. The sunflower associated with a real number α is the set of points with polar coordinates r = √k and θ = 2παk, k = 1, …, n. A sunflower reduces to q equally spaced rays if and only if α is a rational number written in lowest terms as p/q.

Here is the sunflower of π of size n = 10000.

Seven rays emanate from the center because π ≈ 22/7; then they become spirals, and the spirals rearrange themselves into 113 rays because π ≈ 355/113. Counting these rays by eye is boring, so here is a way to do it automatically with Python (NumPy imported as np):

import numpy as np

a = np.pi
n = 5000
# fractional parts of a*k for k = n, ..., 2n-1: polar angles as fractions of a turn
x = np.mod(a*np.arange(n, 2*n), 1)
# count the relatively large gaps between consecutive sorted values
print(np.sum(np.diff(np.sort(x)) > 1/n))   # prints 112, so 113 rays

This code computes the polar angles of the sunflower points indexed k = n, …, 2n−1 (as fractions of a full turn), sorts them, and counts the relatively large gaps between the sorted values. These gaps correspond to the gaps between sunflower rays, except that one of the gaps gets lost when the circle is cut and straightened onto the interval [0, 1). So the program output (112) means there are 113 rays.

Here is the same sunflower with the points alternately colored red and blue.

The colors blur into purple when the rational approximation pattern is strong. But they are clearly seen in the transition from the 22/7 approximation to 355/113.

How many points would we need to see the next rational approximation after 355/113?

What will that approximation be? Yes, 22/7 and 355/113 are among the convergents of the continued fraction of π. But so is 333/106, which I do not see in the sunflower. Are some convergents better than others?
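One way to investigate is to list the convergents directly. Here is a quick sketch in plain Python (the `convergents` helper is mine, not from the original post):

```python
import math
from fractions import Fraction

def convergents(x, count):
    """First `count` continued-fraction convergents of x."""
    result = []
    p0, q0, p1, q1 = 0, 1, 1, 0          # seeds for the convergent recurrence
    for _ in range(count):
        a = int(x)                        # next partial quotient
        p0, q0, p1, q1 = p1, q1, a*p1 + p0, a*q1 + q0
        result.append(Fraction(p1, q1))
        x = 1 / (x - a)
    return result

print(convergents(math.pi, 5))
# [Fraction(3, 1), Fraction(22, 7), Fraction(333, 106), Fraction(355, 113), Fraction(103993, 33102)]
```

A convergent p/q is unusually good when the next partial quotient is large: 355/113 is followed by the huge quotient 292, while 333/106 is followed by 1, which is why 333/106 never gets a chance to dominate the picture.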

Finally, here is the code I used to plot the sunflowers.

import numpy as np
import matplotlib.pyplot as plt
a = np.pi
k = np.arange(10000)
r = np.sqrt(k)
t = a*2*np.pi*k
plt.axes().set_aspect('equal')
plt.plot(r*np.cos(t), r*np.sin(t), '.')
plt.show()

Many people are aware of π being a number between 3 and 4, and some also know that e is between 2 and 3. Although the difference π − e ≈ 0.423 is less than 1/2, it’s enough to place the two constants in separate buckets on the number line, separated by an integer.

When dealing with powers of e, using e > 2 is frequently wasteful, so it helps to know that e² > 7. Similarly, π² > 9 is way more precise than π > 3. To summarize: e² is between 7 and 8, while π² is between 9 and 10.

Do any two powers of π and e have the same integer part? That is, does the equation ⌊π^n⌋ = ⌊e^m⌋ have a solution in positive integers m, n?

Probably not. Chances are that the only pair for which |π^n − e^m| < 1 is m = n = 1, the smallest difference being attained by π − e ≈ 0.423.

Indeed, having ⌊π^n⌋ = ⌊e^m⌋ implies that |π^n − e^m| < 1, or, after dividing by e^m and taking logarithms, |n ln π − m| < e^(−m), approximately. This would be an extraordinary rational approximation of ln π by m/n… for example, with m = 87 and n = 76 it would mean that ln π agrees with 87/76 to nearly 40 decimal digits, with the leading digits of the error all being 0. This isn’t happening.

Looking at the continued fraction expansion of ln π shows partial quotients of modest size, indicating the lack of extraordinarily nice rational approximations. Of course, one can still use its convergents m/n to get good approximations ln π ≈ m/n, which lead to π^n ≈ e^m with small relative error. For example, truncating the continued fraction [1; 6, 1, 10, …] after the quotient 10 yields the convergent 87/76, and one can check that π^76 ≈ 6.073·10^37 while e^87 ≈ 6.076·10^37.
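The computation can be double-checked with a few lines of Python (floats only, so only the first few partial quotients are trustworthy):

```python
import math

# partial quotients of the continued fraction of ln(pi)
y = math.log(math.pi)
quotients = []
for _ in range(4):
    a = int(y)
    quotients.append(a)
    y = 1 / (y - a)
print(quotients)                    # [1, 6, 1, 10] -- all of modest size

# the convergent [1; 6, 1, 10] = 87/76 suggests pi**76 is close to e**87
print(math.pi**76)                  # about 6.073e37
print(math.e**87)                   # about 6.076e37
print(math.pi**76 / math.e**87)     # close to 1, but not extraordinarily so
```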

Trying a few not-too-obscure constants with the help of the mpmath library, the best coincidence of integer parts that I found is the following: the 13th power of the golden ratio and the 34th power of Apéry’s constant both have integer part 521.
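This coincidence is easy to verify even without mpmath; plain floats suffice here, with the value of ζ(3) hard-coded:

```python
import math

phi = (1 + math.sqrt(5)) / 2        # golden ratio, 1.618...
zeta3 = 1.2020569031595943          # Apery's constant zeta(3), hard-coded

print(math.floor(phi**13))          # 521
print(math.floor(zeta3**34))        # 521
```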

A neat way to visualize a real number α is to make a sunflower out of it. This is an arrangement of points with polar angles 2παk and polar radii √k, k = 1, 2, …, n (so that the concentric disks around the origin get the number of points proportional to their area). The prototypical sunflower has α = (√5+1)/2, the golden ratio. This is about the most uniform arrangement of points within a disk that one can get.

But nothing stops us from using other numbers. The square root of 5 is not nearly as uniform, forming distinct spirals.

The number e begins with spirals, but quickly turns into something more uniform.

The number π has stronger spirals: seven of them, due to the approximation π ≈ 22/7.

Of course, if α were actually 22/7, the arrangement would have seven rays instead of spirals:

What if we used more points? The previous pictures have 500 points; here is π with n = 3000. The new pattern has 113 rays: π ≈ 355/113.

Apéry’s constant, after beginning with five spirals, refuses to form rays or spirals again even with 3000 points.

The images were made with Scilab as follows, with an offset of 1/2 in the polar radius to prevent the center from sticking out too much.

n = 500
alpha = (sqrt(5)+1)/2
r = sqrt([1:n]-1/2)
theta = 2*%pi*alpha*[1:n]
plot(r.*cos(theta), r.*sin(theta), '*');
set(gca(), "isoview", "on")

Historical significance of the Wallis product (1) notwithstanding, one has to admit that this is not a good way to approximate π. For example, the partial product with n = 10 is about 1.5339, noticeably short of the target π/2 ≈ 1.5708.

And all we get for this effort is the lousy approximation π ≈ 3.0677.

But it turns out that (1) can be dramatically improved with a little tweak. First, let us rewrite the partial products in (1) in terms of double factorials. This can be done in two ways: either

π ≈ 2(2n+1) · ((2n)!!/(2n+1)!!)²    (2)

or

π ≈ (2/(2n)) · ((2n)!!/(2n−1)!!)²    (3)

Seeing how badly (2) underestimates π, it is natural to bump it up: replace the factor 2n+1 with 2n+3/2:

π ≈ 2(2n+3/2) · ((2n)!!/(2n+1)!!)²    (4)

Now with n = 10 we get 3.1407 instead of 3.0677. The error is down by two orders of magnitude, and all we had to do was to replace the factor of 2n+1 with 2n+3/2. In particular, the size of numerator and denominator hardly changed: written with integer entries, the factor 2(2n+1) = 4n+2 merely became 2(2n+3/2) = 4n+3.

Approximation (4) differs from (2) by the additional term ((2n)!!/(2n+1)!!)², which decreases to zero. Therefore, it is not obvious whether the sequence (4) is increasing. To prove that it is, observe that the ratio of its consecutive terms (step n to n+1) is

((2n+2)/(2n+3))² · (4n+7)/(4n+3)

which is greater than 1 because

(2n+2)²(4n+7) = 16n³ + 60n² + 72n + 28 > 16n³ + 60n² + 72n + 27 = (2n+3)²(4n+3)

Sweet cancellation here. Incidentally, it shows that if we used 2n+2 instead of 2n+3/2, the sequence would overshoot π and no longer be increasing.

The formula (3) can be similarly improved. The factor 2n in the fraction 2/(2n) plays the same role that 2n+1 played in (2), and it should likewise be replaced with 2n+1/2. The resulting approximation for π,

π ≈ 2/(2n+1/2) · ((2n)!!/(2n−1)!!)²    (5)

is about as good as (4), but it approaches π from above. For example, its value at n = 10 is about 3.1425.

The proof that the sequence (5) is decreasing is familiar: the ratio of its consecutive terms is

((2n+2)/(2n+1))² · (4n+1)/(4n+5)

which is less than 1 because

(2n+2)²(4n+1) = 16n³ + 36n² + 24n + 4 < 16n³ + 36n² + 24n + 5 = (2n+1)²(4n+5)

Sweet cancellation once again.

Thus, (4) < π < (5) for all n. The midpoint of this containing interval provides an even better approximation: for example, with n = 10 it gives 3.14164, compared with π = 3.14159…. The plot below displays the quality of approximation as the logarithm of the absolute error:

yellow dots show the error of the Wallis partial products (2)-(3)

blue is the error of the improved approximation (4)

red is for the approximation (5)

black is for the midpoint of (4) and (5)

And all we had to do was to replace 2n+1 with 2n+3/2, or 2n with 2n+1/2, in the right places.

There is an upward trend in the digits of π. I just found it using Maple.

X := [0, 1, 2, 3, 4, 5, 6, 7, 8]:
Y := [3, 1, 4, 1, 5, 9, 2, 6, 5]:
LinearFit([1, n], X, Y, n);
2.20000000000000+.450000000000000*n

Here the digits are enumerated beginning with the 0th, which is 3. The regression line predicts that the nth digit of π is approximately 2.2 + 0.45n.

But maybe my data set is too small. Let’s throw in one more digit; that ought to be enough. The next digit turns out to be 3, and this hurts my trend. The new regression line has a smaller slope, and it crosses the old one at n = 8/3.

But we all know that 3 can be easily changed to 8. The old “professor, you totaled the scores on my exam incorrectly” trick. Finding a moment when none of the π-obsessed people are looking, I change the decimal expansion of π to 3.141592658…. The new trend looks even better than the old: the regression line became steeper, and it crosses the old one at the point (8/3, 3.4).

What, again? Is this a coincidence? I try changing the 9th digit to other numbers, and plot the resulting regression lines.

All intersect at the same spot. The hidden magic of π is uncovered.
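The common point can be found numerically; here is a sketch with NumPy in place of Maple (the magic point turns out to be (8/3, 3.4)):

```python
import numpy as np

digits = [3, 1, 4, 1, 5, 9, 2, 6, 5]         # digits of pi, 0th through 8th
slope, intercept = np.polyfit(np.arange(9), digits, 1)
print(slope, intercept)                       # 0.45 and 2.2, matching the Maple output

# vary the 9th digit and evaluate every regression line at n = 8/3
for d in range(10):
    s, i = np.polyfit(np.arange(10), digits + [d], 1)
    print(d, i + s * 8/3)                     # always 3.4
```

The reason: the fitted line depends linearly on the varied digit, so all ten lines pass through the one point where the coefficient of that digit vanishes.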

For the lack of original pi-related ideas, I describe a billiard experiment invented and popularized by Gregory Galperin (Григорий Гальперин). I read about it in a short note he published in 2001 in the Russian journal “Mathematical Education” (“Математическое Просвещение”). Although the table of contents is in Russian, it’s not hard to find the article with “pi” in the title. It’s clear from the article that the observation precedes its publication by some years; the author gave talks on this subject for a while before committing it to print.

Place two billiard balls (red and blue) on the positive half-line x > 0: the red ball is at rest, the blue ball is to the right of it and is moving to the left. At x = 0 there is an elastic wall.

Assume perfectly elastic collisions: the kinetic energy is preserved. How many collisions will happen depends on the masses of the balls, or rather on their ratio. Suppose they have equal masses (ratio = 1). Then we’ll see the following:

Blue hits Red and stops; Red begins to move to the left

Red hits the wall and bounces back

Red hits Blue and stops; Blue begins to move to the right

That’s all, 3 collisions in total. Here is a Scilab-generated illustration: I placed red at x = 1 and blue at x = 1.5, with initial velocity −1 for the blue ball. The time axis is vertical.

Number 3 is also the first digit of . Coincidence? No.

Consider the blue/red mass ratio of 100. Now we really need Scilab. Count the collisions as you scroll down:

Yes, that’s exactly 31 collisions. The mass ratio of 10000 would yield 314 collisions, and so on.

How does this work? The trick is to consider the two-dimensional configuration space in which the axes are scaled proportionally to the square roots of the masses. A point of the configuration plane describes the positions of both balls at once, but it also behaves just like a billiard ball itself, bouncing off the lines (red=0) and (red=blue). The scaling by the square roots of the masses is needed to make the (red=blue) bounce work correctly: the angle of incidence is equal to the angle of reflection. (See Update 2 below.)

The opening angle of this wedge is arctan √(m/M), where M and m are the masses of the blue and red balls. Now we can straighten the billiard trajectory by repeated reflection: this will help us count the number of collisions.

It remains to count how many angles of size arctan √(m/M) fit inside the straight angle π. By taking M/m = 100^k, we should get π/arctan(10^(−k)) ≈ π·10^k collisions, rounded to an integer one way or the other. It seems that the rounding is always down, that is, we get exactly the first k+1 digits of π. I once tried to prove that this always happens, but unsuccessfully. The error of the approximation arctan(10^(−k)) ≈ 10^(−k) needs to be compared to the fractional part of π·10^k, which looks like tricky business.
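The count can also be verified without any geometry by simulating velocities alone: the collisions strictly alternate between ball-ball impacts and wall bounces, so positions are never needed. A Python sketch using the same elastic-collision formulas as the Scilab code below:

```python
def collisions(m):
    """Collision count for blue/red mass ratio m (red has mass 1)."""
    v1, v2 = -1.0, 0.0               # blue and red velocities
    count = 0
    while True:
        if v1 < v2:                  # blue is catching up with red: elastic collision
            v1, v2 = ((m - 1)*v1 + 2*v2) / (m + 1), (2*m*v1 + (1 - m)*v2) / (m + 1)
        elif v2 < 0:                 # red is heading for the wall: bounce
            v2 = -v2
        else:                        # both drift right, blue at least as fast: done
            break
        count += 1
    return count

print([collisions(100**k) for k in range(4)])   # [3, 31, 314, 3141]
```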

Here’s the Scilab source. The plots in this post were obtained with galperin(1) and galperin(100).

function galperin(m)
h=0.001; steps=4000; x1=zeros(steps); x2=zeros(steps);
x1(1)=1.5; v1=-1; x2(1)=1; v2=0; collision=0; j=1;
while j<steps,
select collision
case 0 then
newx1=x1(j)+h*v1; newx2=x2(j)+h*v2;
if (newx2<0) then collision=1;
elseif (newx1<newx2) then collision=2;
else
x1(j+1)=newx1; x2(j+1)=newx2; j=j+1;
end
case 1 then
v2=-v2; x1(j+1)=x1(j); x2(j+1)=x2(j); collision=0; j=j+1;
case 2 then
newv1=(v1*(m-1)+2*v2)/(m+1); newv2=(v2*(1-m)+2*m*v1)/(m+1);
v1=newv1; v2=newv2; x1(j+1)=x1(j); x2(j+1)=x2(j);
collision=0; j=j+1;
end
end
t=linspace(0, h*steps, steps); plot(t, x1); plot(t, x2, 'red');
endfunction

Update: this post was referred to from Math StackExchange, with a request for a more intuitive explanation of why the scaling by the square roots of the masses works. Let v be the velocity vector of the moving point in the configuration space. Thanks to the scaling, we have the following:

The squared magnitude of v is equal to the total kinetic energy of the system, up to a constant factor.

The projection of v onto the collision line (red=blue) is equal to the total momentum of the system, up to a constant factor.

When the balls collide with each other, both the energy and the momentum are preserved. It follows that the new velocity vector will have the same magnitude and the same projection onto the collision line. By trigonometry, this implies the equality of the angles of incidence and reflection.
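These two bullet points can be checked numerically; here is a sketch with a hypothetical mass ratio of 100 (red mass 1, blue mass m, the collision formulas as in the Scilab code):

```python
import math

m = 100.0                                # blue/red mass ratio; red has mass 1
v1, v2 = -1.0, 0.0                       # velocities before a ball-ball collision
w1 = ((m - 1)*v1 + 2*v2) / (m + 1)       # velocities after the elastic collision
w2 = (2*m*v1 + (1 - m)*v2) / (m + 1)

before = (math.sqrt(m)*v1, v2)           # configuration-space velocity vectors,
after = (math.sqrt(m)*w1, w2)            # axes scaled by sqrt(mass)

def energy(u):                           # squared magnitude: twice the kinetic energy
    return u[0]**2 + u[1]**2

def momentum(u):                         # projection onto the line (red=blue), whose
    return (math.sqrt(m)*u[0] + u[1]) / math.sqrt(m + 1)   # direction is (sqrt(m), 1)

print(energy(before), energy(after))     # equal: energy is conserved
print(momentum(before), momentum(after)) # equal: momentum is conserved
```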