This post is related to Extremal Taylor polynomials, where it was important to observe that the Taylor polynomials of the function considered there do not have zeros in the unit disk. Let’s see how far this generalizes.
The function $1/(1+z)$ has the rare property that all zeros of its Taylor polynomials have unit modulus. This is clear from $\sum_{k=0}^n (-z)^k = \frac{1-(-z)^{n+1}}{1+z}$: the zeros of the numerator are $(n+1)$-th roots of $(-1)^{n+1}$, all of unit modulus.
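A quick numerical check with NumPy (the function here is $1/(1+z)$, whose Taylor coefficients alternate between $1$ and $-1$):

```python
import numpy as np

# Degree-n Taylor polynomial of 1/(1+z): coefficients 1, -1, 1, -1, ...
n = 7
coeffs = [(-1.0) ** k for k in range(n + 1)]
zeros = np.roots(coeffs[::-1])  # np.roots wants the highest degree first

print(np.abs(zeros))  # all moduli equal 1
```

The degree-7 polynomial is $(1-z^8)/(1+z)$, so its zeros are the eighth roots of unity other than $-1$.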
In this and subsequent illustrations, the zeros of the first 50 Taylor polynomials are shown as blue dots, with the unit circle in red for reference.
When the exponent is less than -1, the zeros move inside the unit disk and begin forming nice patterns in there.
When the exponent is strictly between -1 and 1, the zeros are all outside of the unit disk. Some of them get quite large, forcing a change of scale in the image.
Why does this happen as the exponent $p$ approaches 1? The function $1+z$ is its own Taylor polynomial, and its only zero is at $-1$. So, when $p \approx 1$, the Taylor polynomials are small perturbations of $1+z$. These perturbations of the coefficients have to create additional zeros, but being small, they require a large value of $|z|$ to help them.
For a specific example, the quadratic Taylor polynomial of $(1+z)^p$ is $1 + pz + \frac{p(p-1)}{2}z^2$, with roots $z = \frac{-1 \pm \sqrt{(2-p)/p}}{p-1}$. When $p \approx 1$, one of these roots is near $-1$ (as it has to be) and the other is large.
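The behavior of the two roots is easy to observe numerically (the value $p = 0.99$ is chosen here just for illustration):

```python
import numpy as np

p = 0.99  # exponent close to 1
# quadratic Taylor polynomial of (1+z)^p:  1 + p z + p(p-1)/2 z^2
roots = sorted(np.roots([p * (p - 1) / 2, p, 1]), key=abs)

print(roots)  # one root near -1, the other far outside the unit disk
```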
Finally, when $p > 1$ and $p$ is not an integer, we get zeros on both sides of the unit circle. The majority of them are still outside. A prominent example of an interior zero is produced by the first-degree Taylor polynomial $1 + pz$, which vanishes at $z = -1/p$.
The choice of piecewise polynomials of degree 3 for interpolation is justifiably popular: even-degree splines are algebraically awkward to construct, degree 1 is simply piecewise linear interpolation (not smooth), and degree 5, while feasible, entails juggling too many coefficients. Besides, a cubic polynomial minimizes the amount of wiggling (the integral of second derivative squared) for given values and slopes at the endpoints of an interval. (Recall Connecting dots naturally.)
But the derivative of a cubic spline is a quadratic spline. And one needs the derivative to find the critical points. This results in an awkward example in SciPy documentation, annotated with “(NB: sproot only works for order 3 splines, so we fit an order 4 spline)”.
Although not implemented in SciPy, the task of computing the roots of a quadratic spline is a simple one. Obtaining the roots from the internal representation of a quadratic spline in SciPy (as a linear combination of B-splines) would take some work and reading. But a quadratic polynomial is determined by three values, so sampling it at three points, such as two consecutive knots and their average, is enough.
Quadratic formula with values instead of coefficients
Suppose we know the values of a quadratic polynomial q at -1, 0, 1, and wish to find out whether it has roots between -1 and 1. Let’s normalize so that q(0)=1, and let x = q(-1), y = q(1). If either x or y is negative, there is definitely a root on the interval. If both are positive, there is still a chance: we need the parabola to be concave up, have its minimum within [-1, 1], and have a negative minimum value. All of this is easily determined once we note that the coefficients of the polynomial are a = (x+y)/2 − 1, b = (y-x)/2, and c = 1.
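These observations can be packaged into a small root test; here is a sketch (the function name is made up, and np.roots stands in for the case analysis):

```python
import numpy as np

def has_root_in_unit_interval(x, y):
    """Does the quadratic q with q(-1) = x, q(0) = 1, q(1) = y
    have a real root in [-1, 1]?  (Name illustrative.)"""
    if x < 0 or y < 0:
        return True  # sign change against q(0) = 1
    # coefficients recovered from the three values, as derived above
    a = (x + y) / 2 - 1
    b = (y - x) / 2
    c = 1.0
    t = np.roots([a, b, c])
    t = t[np.isreal(t)].real
    return bool(np.any(np.abs(t) <= 1))

print(has_root_in_unit_interval(9, 1))  # q(t) = 4t^2 - 4t + 1, double root at 1/2
print(has_root_in_unit_interval(4, 4))  # q(t) = 3t^2 + 1, no real roots
```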
The inequality $(x-y)^2 < 8(x+y) - 16$ ensures the suitable sign of the discriminant. Its boundary is a parabola with vertex (1, 1) and focus (2, 2), contained in the first quadrant and tangent to the axes at (4, 0) and (0, 4). Within the orange region there are no real roots.
The line x+y=2, tangent to the parabola at its vertex, separates convex and concave parabolas. While concavity in conjunction with x, y being positive definitely precludes having roots in [-1, 1], slight convexity is not much better: it results in real roots outside of the interval. Here is the complete picture: green means there is a root in [-1, 1], orange means no real roots, red covers the rest.
Back to splines
Since the derivative of a spline is implemented in SciPy (B-splines have a nice formula for derivatives), all we need is a root-finding routine for quadratic splines. Here is one, based on the above observations but using the built-in NumPy polynomial solver np.roots to avoid dealing with various special cases for the coefficients.
def quadratic_spline_roots(spl):
    roots = []
    knots = spl.get_knots()
    for a, b in zip(knots[:-1], knots[1:]):
        u, v, w = spl(a), spl((a+b)/2), spl(b)
        t = np.roots([u+w-2*v, w-u, 2*v])
        t = t[np.isreal(t) & (np.abs(t) <= 1)].real
        roots.extend(t*(b-a)/2 + (b+a)/2)
    return np.array(roots)
A demonstration, plotting the spline (blue), its critical points (red), and the original data points (black):
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import InterpolatedUnivariateSpline
x = np.arange(7)
y = np.array([3, 1, 1, 2, 2, 4, 3])
f = InterpolatedUnivariateSpline(x, y, k=3)
crit_pts = quadratic_spline_roots(f.derivative())
t = np.linspace(x[0], x[-1], 500)
plt.plot(t, f(t), 'b')
plt.plot(x, y, 'kd')
plt.plot(crit_pts, f(crit_pts), 'ro')
plt.show()
The more terms of Taylor series we use, the more resemblance we see between the Taylor polynomial and the sine function itself. The first-degree polynomial matches one zero of the sine, and gets the slope right. The third-degree polynomial has three zeros in about the right places.
The fifth-degree polynomial will of course have … wait a moment.
Since all four critical points are in the window, there are no real zeros outside of our view. Adding the fifth-degree term not only fails to increase the number of zeros to five, it even drops their count back to the level of $T_1(x) = x$. How odd.
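The disappearance can be checked directly: writing $T_5(x) = x\,q(x^2)$ with $q(u) = 1 - u/6 + u^2/120$, the quadratic $q$ has a negative discriminant, so $x = 0$ is the only real zero. In code:

```python
# T5(x) = x - x^3/6 + x^5/120 = x * q(x^2), where q(u) = 1 - u/6 + u^2/120.
# q has real roots only if its discriminant b^2 - 4ac is nonnegative.
disc = (-1 / 6) ** 2 - 4 * (1 / 120) * 1

print(disc)  # negative (equals -1/180), so x = 0 is the only real zero of T5
```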
Since the sine Taylor series converges uniformly on bounded intervals, for every $A > 0$ there exists $n$ such that $\max_{[-A,A]} |\sin x - T_n(x)| < 1$. Then $T_n$ will have the same sign as $\sin x$ at the maxima and minima of the latter. Consequently, it will have about $2A/\pi$ zeros on the interval $[-A, A]$. Indeed, the intermediate value theorem guarantees that many; and the fact that $T_n'$, which is a Taylor polynomial of the cosine, stays close to $\cos x$ on $[-A, A]$ will not allow for extraneous zeros within this interval.
Using the Taylor remainder estimate and Stirling's approximation, we find $\max_{[-A,A]} |\sin x - T_n(x)| \le \frac{A^{n+1}}{(n+1)!} \approx \left(\frac{eA}{n+1}\right)^{n+1}$, which is small when $A < (n+1)/e$. Therefore, $T_n$ will have about $\frac{2}{\pi}\cdot\frac{n+1}{e}$ real zeros at about the right places. What happens where $|x|$ is too large for the Taylor remainder estimate to be effective, we can't tell.
Let's just count the zeros, then. Sage online makes it very easy:
sineroots = [[2*n-1,len(sin(x).taylor(x,0,2*n-1).roots(ring=RR))] for n in range(1,51)]
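For those without Sage at hand, the same count can be reproduced in NumPy (a sketch: np.roots works through a companion matrix and is numerically trustworthy only for moderate degrees, so the range is kept small here):

```python
import numpy as np
from math import factorial, pi, e

def real_zero_count(n):
    """Count the real zeros of the degree-n Taylor polynomial of sin(x)."""
    # coefficient of x^k is (-1)^((k-1)/2) / k! for odd k, zero for even k
    coeffs = [((-1.0) ** (k // 2)) / factorial(k) if k % 2 == 1 else 0.0
              for k in range(n + 1)]
    r = np.roots(coeffs[::-1])  # highest degree first
    return int(np.sum(np.abs(r.imag) < 1e-9))

# actual count vs the predicted 2(n+1)/(pi*e)
for n in range(1, 14, 2):
    print(n, real_zero_count(n), round(2 * (n + 1) / (pi * e), 2))
```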
The up-and-down pattern in the number of zeros makes for a neat scatter plot. How close is this data to the predicted number $\frac{2}{\pi e}n \approx 0.2342\,n$? Pretty close.
The slope of the blue line is $2/(\pi e) \approx 0.2342$; the (ir)rationality of this number is unknown. Thus, just under a quarter of the zeros of $T_n$ are expected to be real when $n$ is large.
The actual number of real zeros tends to exceed the prediction (by only a few) because some Taylor polynomials have real zeros in the region where they no longer follow the function. For example, one of them does this:
Richard S. Varga and Amos J. Carpenter wrote a series of papers titled Zeros of the partial sums of $\cos(z)$ and $\sin(z)$, in which they classify the real zeros into Hurwitz zeros (which follow the corresponding trigonometric function) and spurious ones. They give a precise count of the Hurwitz zeros for both the sine and the cosine partial sums. The total number of real roots does not appear to admit such an explicit formula; it is the sequence A012264 in the OEIS.