Gregory-Newton series

[Interpolation] rarely appears in calculus books today, and then only as a numerical method. Yet three of the most important founders of calculus, Newton, Gregory, and Leibniz, began their work with interpolation… Of course, interpolation is a numerical method in practice, when one uses only a few terms of the Gregory-Newton series, but the full series is exact and hence of much greater interest.

— John Stillwell, Mathematics and its History.

For a function f\colon\mathbb R\to\mathbb R define \Delta^0 f=f, and then inductively \Delta^{n+1}f(a)=\Delta^n f(a+h)-\Delta^n f(a), where h is a fixed step size. Also, let \displaystyle \binom{\alpha}{n} = \frac{\alpha(\alpha-1)\dots (\alpha-n+1)}{n!} where \alpha\in \mathbb R is not necessarily an integer. The Gregory-Newton series for f (centered at zero) is

\displaystyle f(x) \sim \sum_{n=0}^\infty \Delta^n f(0) \binom{x/h}{n}

The idea is that the Nth partial sum is the (unique) polynomial of degree at most N that agrees with f at the points 0,h,2h,\dots,Nh, and we hope that the limit N\to\infty recovers f exactly. But does it really? After all, the formula uses only the values of f at integer multiples of h, and there are infinitely many functions (including analytic ones) that take the same values as f at these points. For example, with f(x)=\sin x and h=\pi we get \Delta^n f(0)=0 for all n, hence the sum is identically zero.
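Here is a minimal Python sketch of these definitions (the function names are my own, chosen for this post): it builds the forward differences directly from the definition and checks that the Nth partial sum really does interpolate f at the nodes.

```python
from math import sin

def forward_differences(f, h, N):
    """Return [Delta^0 f(0), Delta^1 f(0), ..., Delta^N f(0)] for step size h."""
    values = [f(k * h) for k in range(N + 1)]
    diffs = [values[0]]
    for _ in range(N):
        values = [values[k + 1] - values[k] for k in range(len(values) - 1)]
        diffs.append(values[0])
    return diffs

def binomial(alpha, n):
    """Generalized binomial coefficient: alpha(alpha-1)...(alpha-n+1)/n!."""
    result = 1.0
    for k in range(n):
        result *= (alpha - k) / (k + 1)
    return result

def gregory_newton(f, h, N, x):
    """Nth partial sum of the Gregory-Newton series for f, centered at 0."""
    diffs = forward_differences(f, h, N)
    return sum(d * binomial(x / h, n) for n, d in enumerate(diffs))

# The Nth partial sum agrees with f at the nodes 0, h, 2h, ..., Nh:
h, N = 1.0, 10
for k in range(N + 1):
    assert abs(gregory_newton(sin, h, N, k * h) - sin(k * h)) < 1e-9
```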

Convergence is also an issue. Since \binom{x/h}{n}^{1/n}\to 1 for most values of x, we need \Delta^n f(0) to decay exponentially for the series to be of much use. This is not the case when f(x)=\cos x and h=\pi: it is easy to see that \Delta^n f(0) = (-2)^n, and therefore the series diverges, even though the function is analytic. I used an OpenOffice spreadsheet to carry out the computations for f(x)=\cos x with various step sizes.

With h=\pi/2 we still have exponential divergence:

[Chart: h = \pi/2]

But at h=\pi/3 the pattern becomes periodic:

[Chart: h = \pi/3]

With h=1 the trend is not immediately clear:

[Chart: h = 1]

But for slightly smaller values, such as 0.9, the exponential decay becomes evident:

[Chart: h = 0.9]
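For reproducibility, the same difference sequences can be printed in a few lines of Python, reusing the forward_differences helper from the sketch above:

```python
from math import cos, pi

# forward_differences is the helper defined in the first sketch
for h in (pi / 2, pi / 3, 1.0, 0.9):
    diffs = forward_differences(cos, h, 12)
    print("h = %.4f:" % h, ", ".join("%.3g" % d for d in diffs))
```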

What is happening here? As usual, it helps to switch from trigonometric to exponential form and consider f(x)=\cos x + i\sin x=e^{ix}. Another thing that helps is the identity \displaystyle \Delta^n f(0) = \sum_{k=0}^n (-1)^{n-k}\binom{n}{k} f(kh), which is a straightforward induction exercise. Putting the two together, we find \displaystyle \Delta^n f(0) = (e^{ih}-1)^n. We could now take the real part to return to the cosine, but it is already clear that \Delta^n f(0) decays exponentially if and only if |e^{ih}-1|<1. Since |e^{ih}-1|=2|\sin(h/2)|, the latter holds exactly when |h|<\pi/3; in particular the step size h=1 should work. Here is a Maple plot comparing the degree 10 partial sum (in red) with the cosine itself (in green), using h=1. I expanded the plot range beyond [0,10]; otherwise the graphs would be indistinguishable.

[Plot: degree 10 interpolation vs cosine, h = 1]
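A quick numerical check makes the threshold concrete: |e^{ih}-1| is greater than 1 for h=\pi/2, exactly 1 for h=\pi/3 (hence the periodic pattern), and less than 1 for h=1 and h=0.9, matching the charts above. The same snippet verifies the identity \Delta^n \cos(0) = \mathrm{Re}\,(e^{ih}-1)^n:

```python
import cmath
from math import cos, pi

# |e^{ih} - 1| is the growth rate of Delta^n cos(0):
for h in (pi / 2, pi / 3, 1.0, 0.9):
    print("h = %.4f: |e^{ih} - 1| = %.4f" % (h, abs(cmath.exp(1j * h) - 1)))

# Numerical check of Delta^n cos(0) = Re (e^{ih} - 1)^n,
# with forward_differences from the first sketch:
h, n = 1.0, 8
lhs = forward_differences(cos, h, n)[-1]
rhs = ((cmath.exp(1j * h) - 1) ** n).real
assert abs(lhs - rhs) < 1e-9
```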

In contrast, the Taylor polynomial of degree 10 visibly departs from the function starting around x=\pm 4. The interval in which it represents the cosine accurately is shorter than for the interpolating polynomial.

[Plot: Taylor polynomial of degree 10 vs cosine]

With the real exponential function the situation is somewhat different. If f(x)=e^x then \displaystyle \Delta^n f(0) = (e^{h}-1)^n, and convergence requires e^h-1<1, that is, h<\ln 2\approx 0.69, which is worse than for the trigonometric functions. On the other hand, f(x)=e^{-x} has \displaystyle \Delta^n f(0) = (e^{-h}-1)^n, which decays exponentially no matter how large h>0 is, since |e^{-h}-1|<1 for all h>0.
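In fact, for f(x)=e^{-x} the series converges to the correct function: with z=e^{-h}-1 it is precisely the binomial series \sum_n \binom{x/h}{n} z^n = (1+z)^{x/h} = e^{-x}, valid because |z|<1. A numerical check, reusing gregory_newton from the first sketch (with h=2 and 60 terms the agreement is to several decimal places):

```python
from math import exp

f = lambda x: exp(-x)    # so Delta^n f(0) = (e^{-h} - 1)^n
h, N = 2.0, 60           # step size well above ln 2
for x in (0.5, 3.0, 7.5):
    approx = gregory_newton(f, h, N, x)    # helper from the first sketch
    print("x = %.1f: partial sum = %.5f, exp(-x) = %.5f" % (x, approx, exp(-x)))
```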

Of course, the convergence of a series does not mean that its sum equals f. We already saw what happens with f(x)=\sin x and h=\pi: the Gregory-Newton series degenerates to a constant (zero), which of course does not represent the function; the same occurs with h=2\pi for either cosine or sine.

On the other hand, divergent series are not necessarily useless. Here is the 10th partial sum of the series for cosine with h=\pi/2. The series diverges everywhere except at the interpolation nodes 0,h,2h,\dots, where it terminates, but the partial sum still does a decent job. The key is to use the right number of terms: not too few, but not too many either.

[Plot: degree 10 partial sum vs cosine, step size \pi/2]
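To see the “right number of terms” effect numerically, one can tabulate the maximal error of the Nth partial sum on a fixed interval, say [0,5]; a sketch, reusing the helpers from the first code block (the exact numbers depend on the interval chosen):

```python
from math import cos, pi

# Maximal error of the Nth partial sum on [0, 5], step size pi/2;
# gregory_newton is the helper from the first sketch.
h = pi / 2
xs = [0.05 * k for k in range(101)]
for N in range(2, 21, 2):
    err = max(abs(gregory_newton(cos, h, N, x) - cos(x)) for x in xs)
    print("N = %2d: max error on [0,5] = %.3g" % (N, err))
```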

I find the step size h=\pi particularly interesting, because it addresses a quasi-philosophical question: is the trigonometric wave the “canonical” function alternating between 1 and -1 at equal intervals? Although the Gregory-Newton series rapidly diverges, its Nth partial sum resembles the cosine near the midpoint of the interval [0,N\pi], that is, near N\pi/2. However, the quality of approximation is poor. I used N=20 below, but even with N=50 the sum visibly deviates from the cosine in every period.

[Plot: partial sum of degree 20, step size \pi]
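A rough numerical look at the midpoint behavior, again reusing the helpers from the first sketch:

```python
from math import cos, pi

# Compare the degree-20 partial sum (h = pi) with the cosine
# near the midpoint N*pi/2.
h, N = pi, 20
mid = N * pi / 2
for x in (mid - 1.0, mid, mid + 1.0):
    print("x = %7.3f: partial sum = %9.3f, cos x = %6.3f"
          % (x, gregory_newton(cos, h, N, x), cos(x)))
```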

2 thoughts on “Gregory-Newton series”

  1. I like the simple Newton's interpolation formula with test points x_i:
    P_n(x) = c_0 + \sum_{k=1}^n c_k \prod_{i=0}^{k-1} (x-x_i)
    This polynomial can be computed with a neat dynamic programming routine (a sketch of the divided-difference table appears below), and under suitable conditions it converges to the original function as n tends to infinity.
    One thing to note is that taking uniform steps is not optimal. The Chebyshev points, generated by a cosine function, are pseudo-random and can be used as test points for edge-case functions.

    1. Sorry, I was confused. The essential thing is to have a nonlinear mapping that takes a sequence of uniformly spaced numbers to another sequence that is dense toward the endpoints, like you said.
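Here is a sketch of the divided-difference computation mentioned in the first comment, with Chebyshev points as the test points (the helper names are illustrative):

```python
from math import cos, pi

def newton_coefficients(xs, ys):
    """Divided-difference table, computed in place: returns c_0, ..., c_n."""
    coeffs = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate c_0 + sum_k c_k prod_{i<k} (x - x_i) in nested (Horner) form."""
    result = coeffs[-1]
    for c, xi in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (x - xi) + c
    return result

# Chebyshev points on [a, b]: images of equally spaced angles under cosine,
# clustered toward the endpoints.
a, b, n = -5.0, 5.0, 20
xs = [(a + b) / 2 + (b - a) / 2 * cos((2 * k + 1) * pi / (2 * n + 2))
      for k in range(n + 1)]
coeffs = newton_coefficients(xs, [cos(x) for x in xs])
print(newton_eval(coeffs, xs, 1.0), cos(1.0))   # should nearly agree
```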
