[Interpolation] rarely appears in calculus books today, and then only as numerical method. Yet three of the most important founders of calculus, Newton, Gregory, and Leibniz, began their work with interpolation… Of course, interpolation is a numerical method in practice, when one uses only a few terms of the Gregory-Newton series, but the full series is exact and hence of much greater interest.
— John Stillwell, Mathematics and its History.
For a function $f$ define $\Delta f(x) = f(x+h) - f(x)$, and then inductively $\Delta^{n+1} f = \Delta(\Delta^n f)$, where $h$ is a fixed step size. Also, let
$$\binom{s}{n} = \frac{s(s-1)\cdots(s-n+1)}{n!},$$
where $s$ is not necessarily an integer. The Gregory-Newton series for $f$ (centered at zero) is
$$\sum_{n=0}^{\infty} \binom{x/h}{n}\,\Delta^n f(0).$$
The idea is that the $n$th partial sum is the (unique) $n$th degree polynomial that agrees with $f$ at the points $0, h, 2h, \dots, nh$, and we hope that the limit as $n\to\infty$ recovers $f$ exactly.

But does it really? After all, the formula uses only the values of $f$ at integer multiples of $h$, and there are infinitely many functions (including analytic ones) that have the same values as $f$ at these points. For example, with $f(x)=\sin x$ and $h=\pi$ we get $f(nh)=0$ for all $n$, hence the sum is zero.
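Here is a minimal Python sketch of the definitions above (the helper names are my own); with $f=\sin$ and $h=\pi$ every sampled value vanishes, so all the differences, and hence all the partial sums, are (numerically) zero:

```python
import math

def forward_differences(f, h, n):
    """Return [f(0), Δf(0), Δ²f(0), ..., Δⁿf(0)] for step size h."""
    values = [f(k * h) for k in range(n + 1)]
    diffs = [values[0]]
    for _ in range(n):
        values = [b - a for a, b in zip(values, values[1:])]
        diffs.append(values[0])
    return diffs

def gregory_newton_partial_sum(f, h, n, x):
    """Evaluate the nth partial sum of the Gregory-Newton series for f at x."""
    diffs = forward_differences(f, h, n)
    s = x / h
    total, binom = 0.0, 1.0           # binom holds C(s, k), starting from C(s, 0) = 1
    for k, d in enumerate(diffs):
        total += binom * d
        binom *= (s - k) / (k + 1)    # C(s, k+1) = C(s, k) * (s - k) / (k + 1)
    return total

# With f = sin and h = pi, all sampled values vanish, so the series collapses to zero.
print(forward_differences(math.sin, math.pi, 5))               # ~[0, 0, 0, 0, 0, 0]
print(gregory_newton_partial_sum(math.sin, math.pi, 10, 1.2))  # ~0, although sin(1.2) ≈ 0.93
```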
Convergence is also an issue. Since $\binom{x/h}{n}$ does not tend to zero exponentially for most values of $x$, we need $\Delta^n f(0)$ to decay exponentially for the series to be of much use. This is not the case when $f(x)=\cos x$ and $h=\pi$: it is easy to see that $\Delta^n f(0)=(-2)^n$, and therefore the series diverges. Even though the function is analytic… I used an OpenOffice spreadsheet to carry out the computations of $\Delta^n f(0)$ for the cosine with various step sizes.
With a smaller step size we still have exponential divergence:

But at $h=\pi/3$ the pattern becomes periodic:

With $h=1$ the trend is not immediately clear:

But for slightly smaller values of $h$ the exponential decay becomes evident:
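For those who prefer code to a spreadsheet, a rough Python version of the same experiment follows; it reuses forward_differences from the sketch above, and the particular step sizes ($\pi$, $2$, $\pi/3$, $1$, $0.9$) are only illustrative choices:

```python
import math

# Reuses forward_differences from the sketch above.
for h in (math.pi, 2.0, math.pi / 3, 1.0, 0.9):
    diffs = forward_differences(math.cos, h, 12)
    print(f"h = {h:.4f}: " + "  ".join(f"{d:+.3f}" for d in diffs))

# Observed behaviour: rapid growth at h = pi and h = 2, a repeating pattern
# 1, -1/2, -1/2 at h = pi/3, barely visible decay at h = 1,
# and clear decay at h = 0.9.
```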

What is happening here? As usual, it helps to switch from trigonometric to exponential form and consider $f(x)=e^{ix}$. Another thing that helps is the identity
$$\Delta^n f(0) = \sum_{k=0}^{n} (-1)^{n-k}\binom{n}{k} f(kh),$$
which is a straightforward induction exercise. Putting the two together, we find $\Delta^n f(0) = (e^{ih}-1)^n$. We could now take the real part to return to the cosine, but it is already clear that $\Delta^n f(0)$ decays exponentially if and only if $|e^{ih}-1|<1$. Since $|e^{ih}-1| = 2|\sin(h/2)|$, the latter holds exactly when $|h|<\pi/3$; in particular the step size $h=1$ should work. Here is a Maple plot comparing the degree 10 partial sum (in red) and the cosine itself (in green), using $h=1$. I expanded the plot range beyond the interpolation interval $[0,10]$, otherwise the graphs would be indistinguishable.

In contrast, the Taylor polynomial of 10th degree visibly diverges from the function much sooner: the interval in which it represents the cosine accurately is shorter than for the interpolating polynomial.
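For anyone without Maple, here is a rough matplotlib sketch of the same comparison (reusing gregory_newton_partial_sum from the first snippet), with the degree 10 Taylor polynomial overlaid for contrast:

```python
import math
import numpy as np
import matplotlib.pyplot as plt

h, n = 1.0, 10
x = np.linspace(-2.0, 13.0, 600)    # a bit beyond the interpolation interval [0, 10]

# Degree-10 Gregory-Newton partial sum (reused helper) and degree-10 Taylor polynomial of cos.
interp = [gregory_newton_partial_sum(math.cos, h, n, t) for t in x]
taylor = [sum((-1) ** k * t ** (2 * k) / math.factorial(2 * k) for k in range(6)) for t in x]

plt.plot(x, np.cos(x), "g", label="cos x")
plt.plot(x, interp, "r", label="Gregory-Newton partial sum, degree 10, h = 1")
plt.plot(x, taylor, "b--", label="Taylor polynomial, degree 10")
plt.ylim(-2, 2)
plt.legend()
plt.show()
```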

With the real exponential function the situation is somewhat different. If $f(x)=e^{x}$ then $\Delta^n f(0)=(e^{h}-1)^n$, and convergence holds when $h<\ln 2$, which is worse than for trigonometric functions. On the other hand, $f(x)=e^{-x}$ has $\Delta^n f(0)=(e^{-h}-1)^n$, which decays exponentially no matter how large $h$ is.
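Both formulas are easy to check numerically, again reusing forward_differences from the first sketch; the step sizes below are arbitrary:

```python
import math

# Reusing forward_differences from the first sketch to check both formulas.
for h in (0.5, 1.0, 3.0):                     # arbitrary trial step sizes
    d8_up = forward_differences(math.exp, h, 8)[-1]
    d8_down = forward_differences(lambda t: math.exp(-t), h, 8)[-1]
    print(f"h = {h}: Δ^8 of e^x  at 0: {d8_up:12.6g}  vs (e^h - 1)^8  = {(math.exp(h) - 1) ** 8:12.6g}")
    print(f"        Δ^8 of e^-x at 0: {d8_down:12.6g}  vs (e^-h - 1)^8 = {(math.exp(-h) - 1) ** 8:12.6g}")
```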
Of course, the convergence of a series does not mean that its sum is equal to $f$. We already saw what happens with $h=\pi$ for the sine, and the same happens with $h=2\pi$ for either cosine or sine: the Gregory-Newton series degenerates to a constant, which of course does not represent the function.
On the other hand, divergent series are not necessarily useless. Here is the 10th partial sum of the series for cosine with a step size above $\pi/3$. The series diverges everywhere except at the interpolation points $x=0,h,2h,\dots$, but the partial sum still does a decent job. The key is to use the right number of terms: not too few, but not too many either.
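The effect of the number of terms is easy to see numerically. The sketch below (again reusing gregory_newton_partial_sum) tracks the error of the $n$th partial sum at a single point; the step size $h=2$ and the point $x=3$ are my own choices, picked only so that the step exceeds $\pi/3$:

```python
import math

# Reuses gregory_newton_partial_sum from the first sketch.
h, x = 2.0, 3.0        # h > pi/3, so the series diverges at x = 3
for n in range(2, 21, 2):
    err = abs(gregory_newton_partial_sum(math.cos, h, n, x) - math.cos(x))
    print(f"n = {n:2d}: |error at x = 3| = {err:.4f}")
# For this choice the error is smallest for moderate n and eventually grows without bound.
```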

I find the step size $h=\pi$ particularly interesting, because it addresses a quasi-philosophical question: is the trigonometric wave the “canonical” function alternating between $1$ and $-1$ at equal intervals? Although the Gregory-Newton series rapidly diverges, its $n$th partial sum resembles the cosine near the midpoint of the interval, $x=n\pi/2$. However, the quality of approximation is poor. I used $n=10$ below, but even with larger $n$ the sum visibly diverges from the cosine in every period.
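The same routine can be used to probe the $h=\pi$ case; in the sketch below, $n=10$ and the offsets around the midpoint are again just illustrative choices:

```python
import math

# Again reusing gregory_newton_partial_sum from the first sketch.
n, h = 10, math.pi
mid = n * h / 2        # midpoint of the interpolation interval [0, n*pi]
for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    x = mid + t
    print(f"x = {x:6.3f}: partial sum = {gregory_newton_partial_sum(math.cos, h, n, x):+.3f},"
          f"  cos x = {math.cos(x):+.3f}")
```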

I like the simple Newton's interpolation formula with test points $x_0, x_1, \dots, x_n$. This polynomial can be computed with a neat dynamic programming route, and as $n$ tends to infinity it converges to the original function.
One thing to note is that taking uniform steps is not optimal. The Chebyshev points generated by a cos function are pseudo-random and can be used as test points for edge-case functions.
Sorry, I was confused. The essential thing is to have a nonlinear mapping that takes a sequence of uniformly spaced numbers to another sequence that is dense toward the endpoints, like you said.
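To make this concrete, here is a sketch of that dynamic programming route (the standard divided-difference table) together with Chebyshev nodes; the Runge function and the interval are only an example:

```python
import math

def newton_interpolant(xs, ys):
    """Return a callable Newton-form interpolant through the points (xs[k], ys[k]).

    The divided-difference table is filled in place: after step j, coef[k]
    holds f[x_{k-j}, ..., x_k], so at the end coef[k] = f[x_0, ..., x_k].
    """
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for k in range(n - 1, j - 1, -1):
            coef[k] = (coef[k] - coef[k - 1]) / (xs[k] - xs[k - j])

    def p(x):
        # Horner-like evaluation of the Newton form
        acc = coef[-1]
        for k in range(n - 2, -1, -1):
            acc = acc * (x - xs[k]) + coef[k]
        return acc

    return p

def chebyshev_nodes(a, b, n):
    """n Chebyshev points on [a, b]: images of uniformly spaced angles under cos."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

# Example: interpolate the Runge function 1/(1 + x^2) on [-5, 5] at 15 Chebyshev nodes.
f = lambda t: 1.0 / (1.0 + t * t)
xs = chebyshev_nodes(-5.0, 5.0, 15)
p = newton_interpolant(xs, [f(t) for t in xs])
print(p(0.5), f(0.5))   # the two values should be close
```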