Zeros of Taylor polynomials of (1+z)^p

This post is related to Extremal Taylor polynomials, where it was important to observe that the Taylor polynomials of the function {(1+z)^{-1/2}} do not have zeros in the unit disk. Let’s see how far this generalizes.

The function {f(z)=(1+z)^{-1}} has the rare property that all zeros of its Taylor polynomials have unit modulus. This is clear from

{\displaystyle T_n(z) = \sum_{k=0}^n (-z)^k = (1-(-z)^{n+1})/(1+z)}.
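A quick numerical sanity check (my own, not part of the original post): the factorization above shows that the zeros of {T_n} are the {(n+1)}-st roots of unity other than 1, reflected through the origin, so their moduli are exactly 1.

import numpy as np

n = 7
coeffs = [(-1)**k for k in range(n, -1, -1)]   # coefficients of T_n, highest degree first
print(np.abs(np.roots(coeffs)))                # every entry is 1, up to rounding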

The Taylor zeros of (1+z)^(-1)

In this and subsequent illustrations, the zeros of the first 50 Taylor polynomials are shown as blue dots, with the unit circle in red for reference.
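Here is a minimal sketch (my own code, not necessarily how these figures were produced) of one way to make such a picture: generate the Taylor coefficients of {(1+z)^p} from the recursion {c_k = c_{k-1}(p-k+1)/k}, collect the roots of each partial sum with np.roots, and plot them together with the unit circle. The exponent p = -1/2 from the opening paragraph is used as the example.

import numpy as np
import matplotlib.pyplot as plt

def taylor_zeros(p, max_degree=50):
    # Zeros of the Taylor polynomials of (1+z)^p, degrees 1 through max_degree
    c = [1.0]
    zeros = []
    for k in range(1, max_degree + 1):
        c.append(c[-1] * (p - k + 1) / k)   # next binomial coefficient binom(p, k)
        zeros.extend(np.roots(c[::-1]))     # np.roots expects the leading coefficient first
    return np.array(zeros)

z = taylor_zeros(-1/2)
t = np.linspace(0, 2*np.pi, 200)
plt.plot(np.cos(t), np.sin(t), 'r')    # unit circle for reference
plt.plot(z.real, z.imag, 'b.')         # Taylor zeros
plt.gca().set_aspect('equal')
plt.show()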

When the exponent is less than -1, the zeros move inside the unit disk and begin forming nice patterns in there.

(1+z)^(-4/3)
(1+z)^(-2)
(1+z)^(-3)
(1+z)^(-4)

When the exponent is strictly between -1 and 1, the zeros are all outside of the unit disk. Some of them get quite large, forcing a change of scale in the image.

(1+z)^(-2/3)
(1+z)^(-1/3)
(1+z)^(1/3)
(1+z)^(2/3)

Why does this happen when the exponent approaches 1? The function {1+z} is its own Taylor polynomial, and its only zero is at -1.  So, when {p\approx 1}, the Taylor polynomials are small perturbations of {1+z}. The perturbed coefficients do create additional zeros, but being small, they need a large value of {|z|} to make themselves felt.

For a specific example, the quadratic Taylor polynomial of {(1+z)^p} is {1 + pz + p(p-1)z^2/2}, with roots {(1\pm \sqrt{(2-p)/p})/(1-p) }. When {p\approx 1}, one of these roots is near {-1} (as it has to be) and the other is large.
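A quick numerical illustration of this (my own; the value p = 0.99 is an arbitrary choice close to 1):

import numpy as np

p = 0.99
print(np.roots([p*(p - 1)/2, p, 1]))   # approximately 201 and -1.005: one root near -1, the other far outside the unit disk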

Finally, when {p>1} is not an integer, we get zeros on both sides of the unit circle. The majority of them are still outside. A prominent example of an interior zero is {-1/p}, produced by the first-degree polynomial {1 + pz}.

(1+z)^(4/3)
(1+z)^(7/3)
(1+z)^(10/3)

Another related post: Real zeros of sine Taylor polynomials.

Critical points of a cubic spline

The choice of piecewise polynomials of degree 3 for interpolation is justifiably popular: even-degree splines are algebraically awkward to construct, degree 1 is simply piecewise linear interpolation (not smooth), and degree 5, while feasible, entails juggling too many coefficients. Besides, a cubic polynomial minimizes the amount of wiggling (the integral of second derivative squared) for given values and slopes at the endpoints of an interval. (Recall Connecting dots naturally.)

But the derivative of a cubic spline is a quadratic spline. And one needs the derivative to find the critical points. This results in an awkward example in the SciPy documentation, annotated with “(NB: sproot only works for order 3 splines, so we fit an order 4 spline)”.

Although not implemented in SciPy, the task of computing the roots of a quadratic spline is a simple one. Obtaining the roots from the internal representation of a quadratic spline in SciPy (as a linear combination of B-splines) would take some work and reading. But a quadratic polynomial is determined by three values, so sampling it at three points, such as two consecutive knots and their average, is enough.

Quadratic formula with values instead of coefficients

Suppose we know the values of a quadratic polynomial q at -1, 0, 1, and wish to find whether it has roots between -1 and 1. Let’s normalize so that q(0)=1, and let x = q(-1), y = q(1). If either x or y is negative, there is definitely a root on the interval. If they are positive, there is still a chance: we need the parabola to open upward, attain its minimum within [-1, 1], and have a negative minimum value. All of this is easily determined once we note that the coefficients of the polynomial are a = (x+y)/2 - 1, b = (y-x)/2, and c = 1.
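A quick check of these coefficient formulas (my own, with arbitrarily chosen sample values): fitting a parabola through the points (-1, x), (0, 1), (1, y) with np.polyfit recovers exactly a, b, c.

import numpy as np

x, y = 0.3, 2.5                                   # arbitrary sample values of q(-1) and q(1)
a, b, c = np.polyfit([-1, 0, 1], [x, 1.0, y], 2)  # interpolating parabola through the three points
print(np.allclose([a, b, c], [(x + y)/2 - 1, (y - x)/2, 1.0]))   # True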

The inequality {(x-y)^2 \ge 8(x+y-2)} says precisely that the discriminant {b^2-4ac} is nonnegative, i.e., that q has real roots. The boundary curve is a parabola with vertex (1, 1) and focus (2, 2), contained in the first quadrant and tangent to the axes at (4, 0) and (0, 4). Within the orange region, where the inequality fails, there are no real roots.

No real roots in the orange region

The line x+y=2, tangent to the parabola at its vertex, separates convex and concave parabolas. While concavity in conjunction with x, y being positive definitely precludes having roots in [-1, 1], slight convexity is not much better: it results in real roots outside of the interval. Here is the complete picture: green means there is a root in [-1, 1], orange means no real roots, red covers the rest.

Green = there is a root in the interval [-1, 1]
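A sketch of how such a classification picture can be generated (my own code; the grid range and colors are arbitrary choices): for each point (x, y), solve the corresponding quadratic with np.roots and record which of the three cases occurs.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

def classify(x, y):
    # 0 = root in [-1, 1] (green), 1 = no real roots (orange), 2 = real roots, all outside (red)
    r = np.roots([(x + y)/2 - 1, (y - x)/2, 1.0])
    real = r[np.isreal(r)].real
    if np.any(np.abs(real) <= 1):
        return 0
    return 1 if real.size == 0 else 2

xs = np.linspace(-1, 7, 200)
grid = np.array([[classify(x, y) for x in xs] for y in xs])
plt.imshow(grid, origin='lower', extent=[-1, 7, -1, 7], vmin=0, vmax=2,
           cmap=ListedColormap(['green', 'orange', 'red']))
plt.xlabel('x = q(-1)')
plt.ylabel('y = q(1)')
plt.show()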

Back to splines

Since the derivative of a spline is implemented in SciPy (B-splines have a nice formula for derivatives), all we need is a root-finding routine for quadratic splines. Here it is, based on the above observations but using the built-in NumPy polynomial solver np.roots to avoid dealing with various special cases for the coefficients.

def quadratic_spline_roots(spl):
    roots = []
    knots = spl.get_knots()
    for a, b in zip(knots[:-1], knots[1:]):
        # Sample the quadratic at the endpoints and midpoint of the knot interval,
        # which determines it completely; work on the interval rescaled to [-1, 1].
        u, v, w = spl(a), spl((a+b)/2), spl(b)
        t = np.roots([u+w-2*v, w-u, 2*v])
        # Keep the real roots that fall in [-1, 1] and map them back to [a, b].
        t = t[np.isreal(t) & (np.abs(t) <= 1)]
        roots.extend(t*(b-a)/2 + (b+a)/2)
    return np.array(roots)

Here is a demonstration that plots the spline (blue), its critical points (red), and the original data points (black):

There can be 0, 1, or 2 critical points between two knots
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.arange(7)
y = np.array([3, 1, 1, 2, 2, 4, 3])
f = InterpolatedUnivariateSpline(x, y, k=3)
crit_pts = quadratic_spline_roots(f.derivative())

t = np.linspace(x[0], x[-1], 500)
plt.plot(t, f(t))
plt.plot(x, y, 'kd')
plt.plot(crit_pts, f(crit_pts), 'ro')
plt.show()


Real zeros of sine Taylor polynomials

The more terms of Taylor series {\displaystyle \sin x = x-\frac{x^3}{3!}+ \frac{x^5}{5!}- \cdots } we use, the more resemblance we see between the Taylor polynomial and the sine function itself. The first-degree polynomial matches one zero of the sine, and gets the slope right. The third-degree polynomial has three zeros in about the right places.

Third degree, three zeros

The fifth-degree polynomial will of course have … wait a moment.

Fifth degree, only one zero

Since all four critical points are in the window, there are no real zeros outside of our view. Adding the fifth-degree term not only fails to increase the number of zeros to five, it even drops it back to the level of {T_1(x)=x}. How odd.

Since the sine Taylor series converges uniformly on bounded intervals, for every { A } there exists { n } such that {\max_{[-A,A]} |\sin x-T_n(x)|<1 }. Then { T_n } will have the same sign as { \sin x } at the maxima and minima of the latter. Consequently, it will have about { 2A/\pi } zeros on the interval {[-A,A] }. Indeed, the intermediate value theorem guarantees that many; and the fact that {T_n'(x) \approx \cos x } on { [-A,A]} will not allow for extraneous zeros within this interval.

Using the Taylor remainder estimate and Stirling's approximation, we find {A\approx (n!)^{1/n} \approx n/e }. Therefore, { T_n } will have about { 2n/(\pi e) } real zeros at about the right places. We can't tell what happens when {|x|} is too large for the Taylor remainder estimate to be effective.

Let's just count the zeros, then. Sage online makes it very easy:

# [degree, number of real roots] for the Taylor polynomials of odd degree 1 through 99
sineroots = [[2*n - 1, len(sin(x).taylor(x, 0, 2*n - 1).roots(ring=RR))] for n in range(1, 51)]
scatter_plot(sineroots)
Roots of sine Taylor polynomials

The up-and-down pattern in the number of zeros makes for a neat scatter plot. How close is this data to the predicted number { 2n/(\pi e) }? Pretty close.

n = var('n')  # declare n as a symbolic variable for the plot
scatter_plot(sineroots, facecolor='#eeee66') + plot(2*n/(pi*e), (n, 1, 100))
Compared to 2n/(πe)

The slope of the blue line is { 2/(\pi e) \approx 0.2342 }; the (ir)rationality of this number is unknown. Thus, just under a quarter of the zeros of { T_n } are expected to be real when { n } is large.

The actual number of real zeros tends to exceed the prediction (by only a few) because some Taylor polynomials have real zeros in the region where they no longer follow the function. For example, { T_{11} } does this:

Spurious zero around x=7

Richard S. Varga and Amos J. Carpenter wrote a series of papers titled Zeros of the partial sums of { \cos z } and {\sin z } in which they classify real zeros into Hurwitz (which follow the corresponding trigonometric function) and spurious. They give the precise count of the Hurwitz zeros: {1+2\lfloor n/(\pi e)\rfloor } for the sine and {2\lfloor n/(\pi e)+1/2\rfloor } for the cosine. The total number of real roots does not appear to admit such an explicit formula. It is the sequence A012264 in the OEIS.