Partitions of unity and monotonicity-preserving approximation

There are many ways to approximate a given continuous function {f\colon [a, b]\to \mathbb R} (I will consider the interval {[a, b]=[0, 1]} for convenience). For example, one can use piecewise linear interpolation through the points {(k/n, f(k/n))}, where {k=0, 1, \dots, n}. The resulting piecewise linear function {g} has some nice properties: for example, it is increasing if {f} is increasing. But it is not smooth.

A convenient way to represent piecewise linear interpolation is the sum {g(x) = \sum_{k=0}^n f(k/n) \varphi_k(x)} where the functions {\varphi_k} are the triangles shown below: {\varphi_k(x) = \max(0, 1 - |nx-k|)}.

Triangular basis functions

The functions {{\varphi_k}} form a partition of unity, meaning that {\sum_k \varphi_k \equiv 1} and all {\varphi_k} are nonnegative. This property leads to the estimate

{\displaystyle |f(x) - g(x)| = \left| \sum_{k=0}^n (f(x) - f(k/n)) \varphi_k(x)\right| \le \sum_{k=0}^n |f(x) - f(k/n)| \varphi_k(x) }

The latter sum is small because when {x} is close to {k/n}, the first factor {|f(x) - f(k/n)|} is small by virtue of continuity, while the second factor {\varphi_k(x)} is bounded by {1}. When {x} is far from {k/n}, the second factor {\varphi_k(x)} is zero, so the first one is irrelevant. The upshot is that {f-g} is uniformly small.
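For concreteness, here is a minimal NumPy sketch of this construction (the test function and grid size are arbitrary choices):

import numpy as np

def hat(x, k, n):
    # Triangular basis function: equals 1 at x = k/n, supported on [(k-1)/n, (k+1)/n]
    return np.maximum(0, 1 - np.abs(n*x - k))

def interpolate(f, n, x):
    # g(x) = sum_k f(k/n) hat_k(x), the piecewise linear interpolant
    return sum(f(k/n) * hat(x, k, n) for k in range(n + 1))

x = np.linspace(0, 1, 1001)
f = lambda t: np.sin(3*t)
print(np.abs(f(x) - interpolate(f, 10, x)).max())  # uniform error; shrinks as n grows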

But if we want a smooth approximation {g}, we need a smooth partition of unity {{\varphi_k}}. Not just any set of smooth nonnegative functions that add up to {1} is equally good, though. One desirable property is preserving monotonicity: if {f} is increasing, then {g} should be increasing, just as it does for piecewise linear interpolation. What does this condition require of our partition of unity?

An increasing function can be expressed as a limit of sums of the form {\sum_{j} c_j [x \ge t_j]} where {c_j>0} and {[\cdots ]} is the Iverson bracket: 1 if true, 0 if false. By linearity, it suffices to have increasing {g} for the case {f(x) = [x \ge t]}. In this case {g} is simply {s_m := \sum_{k=m}^n \varphi_k} for some {m}, {0\le m\le n}. So we want all {s_m} to be increasing functions. This is the case for the triangular partition of unity, where each {s_m} looks like this:

Also known as a piecewise linear activation function

One smooth choice is Bernstein basis polynomials: {\displaystyle \varphi_k(x) = \binom{n}{k} x^k (1-x)^{n-k}}. These are nonnegative on {[0, 1]}, and the binomial formula shows {\displaystyle \sum_{k=0}^n \varphi_k(x) = (x + 1-x)^n \equiv 1}. Are the sums {\displaystyle s_m(x) = \sum_{k=m}^n \binom{n}{k} x^k (1-x)^{n-k}} increasing with {x}? Let’s find out. By the product rule,

{\displaystyle s_m'(x) = \sum_{k=m}^n \binom{n}{k} k x^{k-1} (1-x)^{n-k} - \sum_{k=m}^n \binom{n}{k} (n-k) x^{k} (1-x)^{n-k - 1}}

In the second sum the term with {k=n} vanishes, and the terms with {k<n} can be rewritten as {\displaystyle \frac{n!}{k! (n-k)!} (n-k) x^{k} (1-x)^{n-k - 1}}, which is {\frac{n!}{(k+1)! (n-k-1)!} (k+1) x^{k} (1-x)^{n-k - 1}}, which is {\binom{n}{k+1} (k+1) x^{k} (1-x)^{n-k - 1} }. After the index shift {k+1\mapsto k} these become identical to the terms of the first sum and cancel them all, except for the term with {k=m}. Thus,

{\displaystyle s_m'(x) = \binom{n}{m} m x^{m-1} (1-x)^{n-m} \ge 0 }

To summarize: the Bernstein polynomials {\displaystyle B_n(x) = \sum_{k=0}^n f(k/n) \binom{n}{k} x^k (1-x)^{n-k}} are monotone whenever {f} is. On the other hand, the proof that {B_n\to f} uniformly is somewhat complicated by the fact that the polynomial basis functions {\varphi_k} are not localized the way that the triangle basis functions are: the factors {\varphi_k(x)} do not vanish when {x} is far from {k/n}. I refer to Wikipedia for a proof of convergence (the convergence, by the way, is quite slow).

Bernstein polynomial basis
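Here is a sketch of the Bernstein construction (the test function, an increasing one with a corner, is an arbitrary choice):

import numpy as np
from scipy.special import comb

def bernstein(f, n, x):
    # B_n(x) = sum_k f(k/n) C(n, k) x^k (1-x)^(n-k)
    k = np.arange(n + 1)
    terms = f(k/n)[:, None] * comb(n, k)[:, None] * x**k[:, None] * (1 - x)**(n - k)[:, None]
    return terms.sum(axis=0)

x = np.linspace(0, 1, 501)
f = lambda t: np.minimum(2*t, 1)
print(np.all(np.diff(bernstein(f, 20, x)) >= -1e-12))  # monotone, as proved above
for n in (10, 100):
    print(n, np.abs(f(x) - bernstein(f, n, x)).max())   # the error decreases slowly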

Is there some middle ground between non-smooth triangles and non-localized polynomials? Yes, of course: piecewise polynomials, that is, splines. More specifically, B-splines, which can be defined as follows: B-splines of degree {1} are the triangle basis functions shown above; a B-spline of degree {d+1} is the moving average of a B-spline of degree {d} with a window of length {h = 1/n}. The moving average of {f} can be written as {\frac{1}{h} \int_{x-h/2}^{x+h/2} f}. We get a partition of unity because the sum of moving averages is the moving average of the sum, and averaging a constant function does not change it.

The splines of even degree are awkward to work with… they are obtained from the triangles by applying the moving average an odd number of times, which makes their knots fall at the midpoints of the uniform grid instead of the grid points themselves. But I will use {d=2} anyway, because this degree is enough for {C^1}-smooth approximation.

Recall that a triangular basis function {\varphi_k} has slope {\pm n} and is supported on an interval {[(k-1)h, (k+1)h]} where {h=1/n}. Accordingly, its moving average {\psi_k} will be supported on {[(k-3/2)h, (k+3/2)h]}. Since {\psi_k'(x) = n(\varphi_k(x+h/2) - \varphi_k(x-h/2))}, the second derivative {\psi_k''} is {n^2} when {-3/2< nx-k< -1/2}, is {-2n^2} when {|nx-k| < 1/2}, and is {n^2} again when {1/2 < nx-k < 3/2}. This is enough to figure out the formula for {\psi_k}:

{\displaystyle \psi_k(x) = \begin{cases} (nx-k+3/2)^2 / 2, & -3/2\le nx -k\le -1/2 \\ 3/4 -(nx-k)^2, & -1/2\le nx-k \le 1/2 \\ (nx-k-3/2)^2 / 2, & 1/2\le nx -k \le 3/2 \\ \end{cases} }

These look like:

Is this right?

Nice! But wait a moment: the sum near the endpoints is not constant, being less than 1 because we do not get the contributions of the two splines just outside the interval. To correct for this boundary effect, replace {\psi_0} with {\psi_0 + \psi_{-1}} and {\psi_n} with {\psi_n + \psi_{n+1}}, using “ghost” elements of the basis that lie outside of the actual grid. Now the quadratic B-spline basis is correct:

A smooth partition of unity by quadratic splines
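A quick numerical check that the corrected basis is a partition of unity (a sketch; the formula for psi_k is the one displayed above):

import numpy as np

def psi(x, k, n):
    # Quadratic B-spline centered at k/n, supported on [(k-3/2)/n, (k+3/2)/n]
    t = n*x - k
    return np.where(np.abs(t) <= 1/2, 3/4 - t**2,
                    np.where(np.abs(t) <= 3/2, (3/2 - np.abs(t))**2 / 2, 0))

n = 7
x = np.linspace(0, 1, 1000)
basis = [psi(x, k, n) for k in range(n + 1)]
basis[0] += psi(x, -1, n)     # fold the ghost psi_{-1} into psi_0
basis[n] += psi(x, n + 1, n)  # fold the ghost psi_{n+1} into psi_n
print(np.abs(sum(basis) - 1).max())  # ~1e-16: the corrected basis sums to 1 on [0, 1]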

Does this partition of unity preserve monotonicity? Yes, it does: {\displaystyle \sum_{k\ge m}\psi_k'(x) = n\sum_{k\ge m} (\varphi_k(x+h/2) - \varphi_k(x-h/2)) = n(s(x+h/2) - s(x-h/2))}, which is nonnegative because the sum {s := \sum_{k\ge m} \varphi_k} is an increasing piecewise linear function, as noted previously. The same logic works for B-splines of higher degree.

In conclusion, here is a quadratic B-spline approximation (orange) to a tricky increasing function (blue).

Smooth approximation

One may wonder why the orange curve deviates from the line at the end – did we miss some boundary effect there? Yes, in a way… the spline actually approximates the continuous extension of our original function by constant values on the left and right. Imagine the blue graph continuing to the right as a horizontal line: this creates a corner at {x=1} and the spline is smoothing that corner. To avoid this effect, one may want to extend {f} in a better way and then work with the extended function, not folding the ghosts {\psi_{-1}, \psi_{n+1}} into {\psi_0, \psi_n}.

But even so, the B-spline achieves a better approximation than the Bernstein polynomial with the same number of basis functions (eight):

Bernstein polynomial

The reason is the non-local nature of the polynomial basis {\varphi_k}, which was noted above. Bernstein polynomials do match the function perfectly at the endpoints, but this is small consolation.

Points of maximal curvature

In Calculus I, students are taught how to find the points at which the graph of a function has zero curvature (that is, the points of inflection). The points of maximal curvature are usually not discussed. This post attempts to rectify that omission.

The (signed) curvature of {y=f(x)} is

{\displaystyle \kappa = \frac{y''}{(1+(y')^2)^{3/2}}}

We want to maximize the absolute value of {\kappa}, whatever its sign (so both maxima and minima of {\kappa} can be of interest). The critical points of {\kappa} are the zeros of

{\displaystyle \kappa' = \frac{y''' (1+(y')^2) - 3 y' (y'')^2 }{(1+(y')^2)^{5/2}}}

So we are led to consider a polynomial in the first three derivatives of {y}, namely {p := y''' (1+(y')^2) - 3 y' (y'')^2 }.
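A short SymPy sketch of this computation, which reproduces the examples below:

import sympy as sp

x = sp.symbols('x')

def curvature_critical(y):
    # p = y'''(1 + y'^2) - 3 y' (y'')^2; the curvature is critical where p = 0
    y1, y2, y3 = sp.diff(y, x), sp.diff(y, x, 2), sp.diff(y, x, 3)
    p = sp.expand(y3*(1 + y1**2) - 3*y1*y2**2)
    return p, sp.solve(p, x)

print(curvature_critical(x**2))  # p = -24x: maximal curvature at the vertex
print(curvature_critical(x**3))  # real roots ±(1/45)^(1/4) ≈ ±0.386 (complex ones are listed too)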

Begin with some simple examples:

{y = x^2} has {p = -24x} so the curvature of a parabola is maximal at its vertex. No surprise there.

{y = x^3} has {p = 6(1 - 45x^4)}, indicating two symmetric points of maximal curvature, {x\approx \pm 0.386}, pretty close to the point of inflection.

y = x^3


{y=x^4} has {p = 24 x (1 - 56 x^6)}. This has three real roots, but {x=0} actually minimizes curvature (it vanishes there).

y = x^4

More generally, {y=x^n} with positive integer {n} yields {\displaystyle p = n(n-1)x^{n-3} (n - 2 - (2n^3-n^2) x^{2n-2})} indicating two points {\displaystyle x = \pm \left(\frac{n-2}{2n^3-n^2} \right)^{1/(2n-2)}} which tend to {\pm 1} as {n} grows.

The graph of a polynomial of degree {n} can have at most {n-2} points of zero curvature, because the second derivative vanishes at those. How many points of maximal curvature can it have? The degree of the expression {p} above is {3n-5}, but it is not obvious whether all of its roots can be real and distinct, and also be maxima of {|\kappa|} (unlike {x=0} for {y=x^4}). For {n=2} we do get {3n-5 = 1} point of maximal curvature. But for {n=3}, there can be at most {2} such points, not {3n-5 = 4}. Edwards and Gordon (Extreme curvature of polynomials, 2004) conjectured that the graph of a polynomial of degree {n} has at most {n-1} points of maximal curvature. This remains open despite several partial results: see the recent paper Extreme curvature of polynomials and level sets (2017).

A few more elementary functions:

{y = e^x} has {p = e^x(1-2e^{2x})}, so the curvature is maximal at {x=-\log(2)/2}. Did I expect the maximum of curvature to occur for a negative {x}? Not really.

y = exp(x)

{y=\sin x} has {p = (\cos 2x - 3)\cos x}. The first factor is irrelevant: the points of maximum curvature of a sine wave are at its extrema, as one would guess.

{y = \tan x} has {p = -6\tan^8(x) - 16\tan^6(x) - 6\tan^4(x) + 8\tan^2(x) + 4}, which is zero at… ahem. The expression factors as

{\displaystyle p = -2(\tan^2 x+1)(3\tan^6(x) + 5\tan^4(x) - 2\tan^2(x) - 2)}

Writing {u = \tan^2(x)} we can get a cubic equation in {u}, but it is not a nice one. Or we could do some trigonometry and reduce {p=0} to the equation {8 - 41\cos 2x + \cos 6x =0}. Either way, a numerical solution is called for: {x \approx \pm 0.6937} (and {+\pi n} for other periods).

y = tan(x)
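For the record, the numerical solution takes one line with SciPy, using the cubic-in-{\tan^2 x} factor from above (the bracketing interval is an arbitrary choice):

import numpy as np
from scipy.optimize import brentq

g = lambda x: 3*np.tan(x)**6 + 5*np.tan(x)**4 - 2*np.tan(x)**2 - 2
print(brentq(g, 0.1, 1.0))  # approximately 0.6937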

Discrete maximum principle for polynomials

The polynomially convex hull of a compact set {K\subset \mathbb C} is defined as the set of all points {z\in \mathbb C} such that the inequality {|p(z)|\le \sup_K |p|} (a form of the maximum principle) holds for every polynomial {p}. For example, the polynomially convex hull of a simple closed curve is the union of that curve with its interior region. In general, this process fills up the holes in the set {K}, resulting in the complement of the unbounded connected component of {\mathbb C\setminus K}.

We can recover the usual convex hull from this construction by restricting {p} to the polynomials of first degree. Indeed, when {p} is linear, the set {|p|\le M} is a closed disk, and we end up with the intersection of all closed disks that contain {K}. This is precisely the convex hull of {K}.

What if we restrict {p} to the polynomials of degree at most {n}? Let’s call the resulting set the degree-{n} convex hull of {K}, denoted {K_n}. By definition, it is contained in the convex hull and contains the polynomially convex hull. Computing {K_n} exactly for a general {K} appears to be difficult even when {n=2}.

Consider finite sets. When {K} has at most {n} points, we have {K_n=K} because there is a polynomial of degree {n} whose zero set is precisely {K}. So, the first nontrivial case is of {K} having {n+1} points. Let us write {K=\{z_0, \dots, z_n\}}.

Depending on the location of the points, {K_n} may be strictly larger than {K}. For example, if {K} consists of the vertices of a regular {(n+1)}-gon, then {K_n} also contains its center. Here is why. By a linear transformation, we can make sure that {K=\{\zeta^k\colon k=0, \dots, n\}} where {\zeta = \exp(2\pi i/(n+1))}. For {j=1, \dots, n} we have {\sum_{k=0}^n \zeta^{kj} = (\zeta^{(n+1)j}-1)/(\zeta^j - 1) = 0}. Hence, for any polynomial {p} of degree at most {n}, the sum {\sum_{k=0}^n p(\zeta^k)} is equal to {(n+1)p(0)}. This implies {|p(0)|\le \max_{0\le k\le n}|p(\zeta^k)|}, a kind of a discrete maximum principle.

A more systematic approach is to use the Lagrange basis polynomials, that is {L_j(z) = \prod_{k\ne j} (z-z_k)/(z_j-z_k)}, which satisfy {L_j(z_k) = \delta_{jk}}. Since {p = \sum_j p(z_j)L_j} for any polynomial of degree at most {n}, it follows that {z\in K_n} if and only if {\left|\sum_j c_j L_j(z)\right|\le \max |c_j| } holds for every choice of scalars {c_0, \dots, c_n}. The latter is equivalent to the inequality {\sum_j |L_j(z)|\le 1}.

This leads us to consider the function {S=\sum_j |L_j|}, the sum of the absolute values of the Lagrange basis polynomials. (Remark: S is called a Lebesgue function for this interpolation problem.) Since {\sum_j L_j\equiv 1}, it follows that {S\ge 1} everywhere. By construction, {S=1} on {K}. At a point {z\notin K}, the equality {S(z)=1} holds if and only if {\arg L_j(z)} is the same for all {j}.

In the trivial case {K=\{z_0, z_1\}}, the function {S(z) = (|z-z_0|+|z-z_1|)/|z_0-z_1|} is equal to {1} precisely on the linear segment with endpoints {z_0, z_1}. Of course, this only repeats what we already knew: the degree-1 convex hull is the ordinary convex hull.

If {K=\{x_0, \dots, x_n\}} with {x_0<\cdots<x_n} real and {n\ge 2}, then {K_n=K}. Indeed, if {x\in K_n\setminus K}, then {x} lies in the convex hull of {K}, and therefore {x_{j-1}<x<x_j} for some {j}. The basis polynomial {L_j} is positive at {x}, since it is equal to {1} at {x_j} and does not vanish between {x_{j-1}} and {x_j}. Since a polynomial changes its sign at every simple zero, it follows that {L_{j+1}(x) < 0}. Well, there is no {L_{j+1}} if {j=n}, but in that case the same reasoning gives {L_{j-2}(x)<0}. Either way, the conclusion is that {\arg L_k(x)} cannot be the same for all {k}.

At this point one might guess that the vertex set of a regular polygon is the only kind of finite sets that admit a nontrivial discrete maximum principle for polynomials. But this is not so: the vertices of a rhombus work as well. Indeed, if {K=\{a, -a, ib, -ib\}} with {a, b>0}, then {L_j(0)>0} for all {j}, hence {S(0)=1}.

The vertices of a non-square rectangle do not work: if {K} is the set of these vertices, the associated function {S} is strictly greater than 1 on the complement of {K}.
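These claims are easy to test numerically. A sketch (the node sets are the examples discussed above, with arbitrary dimensions):

import numpy as np

def S(z, nodes):
    # S(z) = sum of |L_j(z)| over the Lagrange basis; z lies in K_n iff S(z) <= 1
    return sum(abs(np.prod([(z - zk)/(zj - zk) for zk in nodes if zk != zj]))
               for zj in nodes)

print(S(0, [1, -1, 2j, -2j]))            # rhombus: exactly 1 at the center
print(S(0, [1+1j, -1+1j, -1-1j, 1-1j]))  # square, a regular 4-gon: also 1
print(S(0, [2+1j, -2+1j, -2-1j, 2-1j]))  # non-square rectangle: greater than 1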

Are there any other finite sets that support a discrete maximum principle for polynomials?

Extremal Taylor polynomials

Suppose {f(z)=a_0+a_1z+a_2z^2+\cdots} is a holomorphic function in the unit disk {|z|<1} such that {|f|\le 1} in the disk. How large can its Taylor polynomial {T_n(z)=a_0+a_1z+\cdots +a_n z^n} be in the disk?

We should not expect {T_n} to be bounded by 1 as well. Indeed, the Möbius transformation {f(z)=(z+1/2)/(1+z/2)} has Taylor expansion {(z+1/2)(1-z/2+O(z^2)) = 1/2 + (3/4)z + O(z^2)}, so {T_1(1)=5/4} in this case. This turns out to be the worst case: in general {T_1} is bounded by 5/4 in the disk.

For the second-degree polynomial {T_2} the sharp bound is {89/64}, attained when {f(z) = (8z^2 + 4z + 3)/(3z^2 + 4z + 8)}; the image of the unit circle under the extremal {T_2} is shown below. Clearly, there is something nontrivial going on.

Extremal T_2 attains 89/64 > 1.39

Edmund Landau established the sharp bound for {|T_n|} in his paper Abschätzung der Koeffizientensumme einer Potenzreihe, published in Archiv der Mathematik und Physik (3) 21 in 1913. Confusingly, there are two papers with the same title in the same issue of the journal: one on pages 42-50, the other on pages 250-255, and they appear in different volumes of Landau’s Collected Works. The sharp bound is in the second paper.

First steps

By rotation, it suffices to bound {|T_n(1)|}, which is {|a_0+\cdots +a_n|}. As is often done, we rescale {f} a bit so that it’s holomorphic in a slightly larger disk, enabling the use of the Cauchy integral formula on the unit circle {\mathbb T}. The Cauchy formula says {2\pi i a_k = \int_{\mathbb T} z^{-k-1} f(z) \,dz}. Hence

{\displaystyle 2\pi |T_n(1)| = \left| \int_{\mathbb T} z^{-n-1}(1+z+\dots+z^n) f(z) \,dz \right|}

It is natural to use {|f(z)|\le 1} now, which leads to

{\displaystyle 2\pi |T_n(1)| \le \int_{\mathbb T} |1+z+\dots+z^n|\, |dz| }

Here we can use the geometric sum formula and try to estimate the integral of {|(1-z^{n+1})/(1-z)|} on the unit circle. This is what Landau does in the first of two papers; the result is {O(\log n)} which is the correct rate of growth (this is essentially the Dirichlet kernel estimate from the theory of Fourier series). But there is a way to do better and get the sharp bound.

Key ideas

First idea: the factor {1+z+\dots+z^n} could be replaced by any polynomial {Q} as long as the coefficients of powers up to {n} stay the same. Higher powers contribute nothing to the integral that evaluates {T_n(1)}, but they might reduce the integral of {|Q|}.

Second idea: we should choose {Q} to be the square of some polynomial, {Q=P^2}, because {(2\pi)^{-1}\int_{\mathbb T} |P(z)|^2\, |dz|} can be computed exactly: it is just the sum of squares of the coefficients of {P}, by Parseval’s formula.

Implementation

Since {1+z+\dots+z^n} is the {n}-th degree Taylor polynomial of {(1-z)^{-1}}, it is natural to choose {P} to be the {n}-th degree Taylor polynomial of {(1-z)^{-1/2}}. Indeed, if {P_n(z) = (1-z)^{-1/2} + O(z^{n+1})}, then {P_n(z)^2 = (1-z)^{-1} + O(z^{n+1}) = 1+z+\dots+z^n + O(z^{n+1})} as desired (asymptotics as {z\to 0}). The binomial formula tells us that
{\displaystyle P_n(z)=\sum_{k=0}^n (-1)^k\binom{-1/2}{k}z^k }

The coefficient of {z^k} here can be written out as {(2k-1)!!/(2k)!!} or rewritten as {4^{-k}\binom{2k}{k}} which shows that in lowest terms, its denominator is a power of 2. To summarize, {|T_n(1)|} is bounded by the sum of squares of the coefficients of {P_n}. Such sums are referred to as the Landau constants,

{\displaystyle G_n = 1+ \left(\frac{1}{2}\right)^2 + \left(\frac{1\cdot 3}{2\cdot 4}\right)^2 + \cdots + \left(\frac{(2n-1)!!}{(2n)!!}\right)^2 }

A number of asymptotic and non-asymptotic formulas have been derived for {G_n}; for example, Brutman (1982) shows that {G_n - (1/\pi)\log(n+1)} is between 1 and 1.0663.
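The constants are easy to compute from the ratio {(2k-1)/(2k)} of consecutive coefficients; a sketch:

import numpy as np

def landau_G(n):
    # G_n = sum of squares of (2k-1)!!/(2k)!! for k = 0, ..., n
    c, G = 1.0, 0.0
    for k in range(n + 1):
        if k > 0:
            c *= (2*k - 1) / (2*k)  # update (2k-1)!!/(2k)!! from the previous value
        G += c**2
    return G

print(landau_G(1))  # 1.25 = 5/4, the bound for T_1
print(landau_G(2))  # 1.390625 = 89/64, the bound for T_2
for n in (10, 100, 1000):
    print(n, landau_G(n) - np.log(n + 1)/np.pi)  # stays between 1 and 1.0663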

Sharpness

To demonstrate the sharpness of the bound {|T_n|\le G_n}, we want {|f|\equiv 1} and {P_n(z)^2f(z)/z^n\ge 0} on the unit circle. Both are arranged by taking {f(z) = z^n P_n(1/z) / P_n(z)} which is a Blaschke product of degree {n}. Note that the term {P_n(1/z)} can also be written as {\overline{P_n(1/\bar z)}}. Hence {P_n(z)^2f(z)/z^n = P_n(z) \overline{P_n(1/\bar z)}} which is simply {|P_n(z)|^2} when {|z|=1}. Equality holds in all the estimates above, so they are sharp.
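This equality case can be checked numerically: build the Taylor coefficients of the extremal {f} by power-series division and compare {T_n(1)} with {G_n}. A sketch:

import numpy as np
from scipy.special import comb

n = 5
k = np.arange(n + 1)
c = comb(2*k, k) / 4.0**k   # coefficients of P_n, ascending powers

inv = np.zeros(n + 1)       # series inverse of P_n modulo z^(n+1)
inv[0] = 1/c[0]
for m in range(1, n + 1):
    inv[m] = -np.dot(c[1:m+1], inv[m-1::-1]) / c[0]

a = np.convolve(c[::-1], inv)[:n+1]  # Taylor coefficients of f = z^n P_n(1/z) / P_n(z)
print(a.sum(), (c**2).sum())         # T_n(1) and G_n agree

For {n=1} this recovers the Möbius example: the coefficients come out as 1/2 and 3/4.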

Here are the images of the unit circle under extremal Taylor polynomials {T_5} and {T_{20}}.

Extremal Taylor polynomial of 5th degree

Extremal Taylor polynomial of 20th degree

These polynomials attain large values only on a short subarc of the circle; most of the time they oscillate at levels less than 1. Indeed, the mean value of {|T_n|^2} cannot exceed the mean of {|f|^2}, which is at most 1. Here are the plots of the roots of extremal {T_n}: they are nearly uniform around the circle, except for a gap near 1.

Roots of extremal T_10

Roots of extremal T_20

But we are not done…

Wait a moment. Does {f(z) = z^n P_n(1/z) / P_n(z)} define a holomorphic function in the unit disk? We are dividing by {P_n} here. Fortunately, {P_n} has no zeros in the unit disk, because its coefficients are positive and decreasing as the exponent {k} increases. Indeed, if {p(z)=c_0+c_1z+\cdots + c_nz^n} with {c_0>c_1>\dots>c_n > 0}, then {(1-z)p(z)} has constant term {c_0} and other coefficients {c_1-c_0}, {c_2-c_1}, … {c_n-c_{n-1}}, {-c_n}. Summing the absolute values of the coefficients of nonconstant terms we get {c_0}. So, when these coefficients are attached to {z^k} with {|z|<1}, the sum of nonconstant terms is strictly less than {c_0} in absolute value. This proves {P_n\ne 0} in the unit disk. Landau credits Adolf Hurwitz with this proof.

In fact, the zeros of {P_n} (Taylor polynomials of {(1-z)^{-1/2}}) lie just outside of the unit disk.
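This is easy to see numerically (a sketch):

import numpy as np
from scipy.special import comb

for n in (20, 50):
    k = np.arange(n + 1)
    c = comb(2*k, k) / 4.0**k      # coefficients of P_n, ascending powers
    zeros = np.roots(c[::-1])      # np.roots expects descending powers
    print(n, np.abs(zeros).min())  # slightly greater than 1, approaching 1 as n grows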

Zeros of P_20

Zeros of P_50

The zeros of the Blaschke products formed from {P_n} are the reciprocals of the zeros of  {P_n}, so they lie just inside the unit circle, much like the zeros of {T_n} (though they are different).

Extreme values of a reproducing kernel for polynomials

For every nonnegative integer {n} there exists a (unique) polynomial {K_n(x, y)} of degree {n} in {x} and {y} separately with the following reproducing property:

{\displaystyle p(x) = \int_{-1}^1 K_n(x, y)p(y)\,dy}

for every polynomial {p} of degree at most {n}, and for every {x}. For example, {K_1(x, y)= (3xy+1)/2}; other examples are found in the post Polynomial delta function.

This fact gives an explicit pointwise bound on a polynomial in terms of its integral on an interval:

{\displaystyle |p(x)| \le M_n(x) \int_{-1}^1 |p(y)|\,dy}

where {M_n(x) = \sup\{|K_n(x, y)| \colon y\in [-1, 1]\}}. For example, {M_1(x) = (3|x|+1)/2}.

Although in principle {x} could be any real or complex number, it makes sense to restrict attention to {x\in [-1, 1]}, where integration takes place. This leads to the search for extreme values of {K_n} on the square {Q=[-1, 1]\times [-1, 1]}. Here is how this function looks for {n=1, 2, 3}:

Degree 1

Degree 2

Degree 3

The symmetries {K_n(x, y)=K_n(-x, -y) = K_n(y, x)} are evident here.

Explicitly,

{\displaystyle K_n(x, y) = \sum_{k=0}^n \frac{2k+1}{2} P_k(x)P_k(y)}

where {P_k} is the Legendre polynomial of degree {k} and the factor {(2k+1)/2} is included to make the polynomials an orthonormal set in {L^2(-1, 1)}. Since {P_k} oscillates between {-1} and {1}, it follows that

{\displaystyle |K_n(x, y)|\le \sum_{k=0}^n \frac{2k+1}{2} = \frac{(n+1)^2}{2}}

and this bound is attained at the corners: {K_n(1, 1)=K_n(-1,-1)=(n+1)^2/2}, because {P_k(1)=1} and {P_k(-1)=(-1)^k}.
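A sketch of the kernel together with a quadrature check of the reproducing property (using numpy.polynomial.legendre; the test polynomial is an arbitrary choice):

import numpy as np
from numpy.polynomial import legendre as L

def K(n, x, y):
    # K_n(x, y) = sum over k of (2k+1)/2 * P_k(x) * P_k(y)
    return sum((2*k + 1)/2 * L.legval(x, [0]*k + [1]) * L.legval(y, [0]*k + [1])
               for k in range(n + 1))

n = 3
p = lambda t: 2*t**3 - t + 1        # any polynomial of degree at most n
nodes, weights = L.leggauss(n + 1)  # Gauss-Legendre rule, exact up to degree 2n+1
print(np.sum(weights * K(n, 0.3, nodes) * p(nodes)), p(0.3))  # both equal p(0.3)
print(K(n, 1, 1), (n + 1)**2 / 2)   # the bound is attained at the corner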

Is

{\displaystyle K_n(-1, 1) =\sum_{k=0}^n (-1)^k\frac{2k+1}{2} = (-1)^n \frac{n+1}{2}}

the minimum value of {K_n} on the square {Q}? Certainly not for even {n}. Indeed, differentiating the sum

{\displaystyle S_n(x) = K_n(x, 1) = \sum_{k=0}^n \frac{2k+1}{2} P_k(x)}

with respect to {x} and using {P_k'(-1) =(-1)^{k-1}k(k+1)/2}, we arrive at

{\displaystyle S_n'(-1) = (-1)^{n-1} \frac{n(n^2+3n+2)}{4}}

which is negative if {n} is even, ruling out this point as a minimum.

What about odd {n}, then: is it true that {K_n \ge -(n+1)/2} on the square {Q}?

{n=1}: yes, {K_1(x, y) = (3xy+1)/2 \ge (-3+1)/2 = -1} is clear enough.

{n=3}: the inequality {K_3\ge -2} is also true… at least numerically. It can be simplified to {35(xy)^3 + 9(xy)^2 + 15xy + 5 \ge (21xy+3)(x^2+y^2)} but I do not see a way forward from there. At least on the boundary of {Q} it can be shown without much work:

{\displaystyle K_3(x, 1) + 2 = \frac{5}{4}(x+1)(7x^2-4x+1)}

The quadratic term has no real roots, which is easy to check.

{n=5}: similar story, the inequality {K_5\ge -3} appears to be true but I can only prove it on the boundary, using

{\displaystyle K_5(x, 1)+3 = \frac{21}{16}(x + 1)(33 x^4 - 18x^3 - 12x^2 + 2x + 3)}

The quartic term has no real roots, which is not so easy to check.

{n=7}: surprisingly, {K_7(4/5, 1) = -2229959/500000} which is about {-4.46}, disproving the conjectural bound {K_7\ge -4}. This fact is not at all obvious from the plot.

Degree 7 kernel
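The value is easy to confirm from the explicit formula, since {P_k(1)=1}:

import numpy as np
from numpy.polynomial import legendre as L

# K_7(4/5, 1) = sum over k of (2k+1)/2 * P_k(4/5)
coeffs = (2*np.arange(8) + 1) / 2
print(L.legval(0.8, coeffs))  # about -4.46, below the conjectured bound -4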

Questions:

  • Is {K_n \ge -Cn} on the square {Q = [-1, 1]\times [-1, 1]} with some universal constant {C}?
  • Is the minimum of {K_n} on {Q} always attained on the boundary of {Q}?
  • Can {M_n(x) = \sup\{|K_n(x, y)| \colon y\in [-1, 1]\}} be expressed in closed form, at least for small degrees {n}?

Critical points of a cubic spline

The choice of piecewise polynomials of degree 3 for interpolation is justifiably popular: even-degree splines are algebraically awkward to construct, degree 1 is simply piecewise linear interpolation (not smooth), and degree 5, while feasible, entails juggling too many coefficients. Besides, a cubic polynomial minimizes the amount of wiggling (the integral of the second derivative squared) for given values and slopes at the endpoints of an interval. (Recall Connecting dots naturally.)

But the derivative of a cubic spline is a quadratic spline. And one needs the derivative to find the critical points. This results in an awkward example in the SciPy documentation, annotated with “(NB: sproot only works for order 3 splines, so we fit an order 4 spline)”.

Although not implemented in SciPy, the task of computing the roots of a quadratic spline is a simple one. Obtaining the roots from the internal representation of a quadratic spline in SciPy (as a linear combination of B-splines) would take some work and reading. But a quadratic polynomial is determined by three values, so sampling it at three points, such as two consecutive knots and their average, is enough.

Quadratic formula with values instead of coefficients

Suppose we know the values of a quadratic polynomial q at -1, 0, 1, and wish to find if it has roots between -1 and 1. Let’s normalize so that q(0)=1, and let x = q(-1), y = q(1). If either x or y is negative, there is definitely a root on the interval. If they are positive, there is still a chance: we need the parabola to be concave up, have a minimum within [-1, 1], and for the minimum to be negative. All of this is easily determined once we note that the coefficients of the polynomial are a = (x+y)/2 - 1, b = (y-x)/2, and c = 1.

The inequality {(x-y)^2 \ge 8(x+y-2)} ensures the suitable sign of the discriminant. It describes a parabola with vertex (1, 1) and focus (2, 2), contained in the first quadrant and tangent to the axes at (4, 0) and (0, 4). Within the orange region there are no real roots.

No real roots in the orange region

The line x+y=2, tangent to the parabola at its vertex, separates convex and concave parabolas. While concavity in conjunction with x, y being positive definitely precludes having roots in [-1, 1], slight convexity is not much better: it results in real roots outside of the interval. Here is the complete picture: green means there is a root in [-1, 1], orange means no real roots, red covers the rest.

Green = there is a root in the interval [-1, 1]
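The classification is a few lines of code; a sketch (the function name is hypothetical, and the thresholds follow the derivation above):

def classify(x, y):
    # q(t) = a t^2 + b t + 1 with q(-1) = x, q(0) = 1, q(1) = y
    a, b = (x + y)/2 - 1, (y - x)/2
    if x <= 0 or y <= 0:
        return 'root in [-1, 1]'          # green: sign change against q(0) = 1
    if (x - y)**2 < 8*(x + y - 2):
        return 'no real roots'            # orange: negative discriminant
    if a > 0 and abs(b) <= 2*a and b*b >= 4*a:
        return 'root in [-1, 1]'          # green: convex, vertex inside, minimum <= 0
    return 'real roots outside [-1, 1]'   # red

print(classify(-0.5, 2.0))  # green
print(classify(3.0, 3.0))   # orange
print(classify(0.1, 0.1))   # red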

Back to splines

Since the derivative of a spline is implemented in SciPy (B-splines have a nice formula for derivatives), all we need is a root-finding routine for quadratic splines. Here it is, based on the above observations but using built-in NumPy polynomial solver np.roots to avoid dealing with various special cases for the coefficients.

def quadratic_spline_roots(spl):
    roots = []
    knots = spl.get_knots()
    for a, b in zip(knots[:-1], knots[1:]):
        # Sample the quadratic at the endpoints and the midpoint of [a, b]
        u, v, w = spl(a), spl((a+b)/2), spl(b)
        # In the variable t on [-1, 1]: q(-1) = u, q(0) = v, q(1) = w,
        # so 2*q(t) has coefficients u+w-2v, w-u, 2v
        t = np.roots([u+w-2*v, w-u, 2*v])
        # Keep real roots in [-1, 1] and map them back to [a, b]
        t = t[np.isreal(t) & (np.abs(t) <= 1)].real
        roots.extend(t*(b-a)/2 + (b+a)/2)
    return np.array(roots)

A demonstration, plotting the spline (blue), its critical points (red), and the original data points (black):

There can be 0, 1, or 2 critical points between two knots

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import InterpolatedUnivariateSpline

x = np.arange(7)
y = np.array([3, 1, 1, 2, 2, 4, 3])
f = InterpolatedUnivariateSpline(x, y, k=3)
crit_pts = quadratic_spline_roots(f.derivative())

t = np.linspace(x[0], x[-1], 500)
plt.plot(t, f(t))
plt.plot(x, y, 'kd')
plt.plot(crit_pts, f(crit_pts), 'ro')
plt.show()


Pisot constant beyond 0.843

In a 1946 paper Charles Pisot proved a theorem involving a curious constant {\gamma_0= 0.843\dots}. It can be defined as follows:

{\gamma_0= \sup\{r \colon \exists } monic polynomial {p} such that {|p(e^z)| \le 1} whenever {|z|\le r \}}

Equivalently, {\gamma_0} is determined by the requirement that the set {\{e^z\colon |z|\le \gamma_0\}} have logarithmic capacity 1; this won’t be used here. The theorem is stated below, although this post is really about the constant.

Theorem: If an entire function takes integer values at nonnegative integers and is {O(e^{\gamma |z|})} for some {\gamma < \gamma_0}, then it is a finite linear combination of terms of the form {z^n \alpha^z}, where each {\alpha } is an algebraic integer.

The value of {\gamma_0} is best possible; thus, in some sense Pisot’s theorem completed a line of investigation that began with a 1915 theorem by Pólya which had {\log 2} in place of {\gamma_0}, and where the conclusion was that {f} is a polynomial. (Informally speaking, Pólya proved that {2^z} is the “smallest” entire function that is integer-valued on nonnegative integers.)

Although the constant {\gamma_0} was mentioned in later literature (here, here, and here), no further digits of it have been stated anywhere, as far as I know. So, let it be known that the decimal expansion of {\gamma_0} begins with 0.84383.


A lower bound on {\gamma_0} can be obtained by constructing a monic polynomial that is bounded by 1 on the set {E(r) = \{e^z \colon |z|\le r \}}. Here is E(0.843):

E(0.843)

It looks pretty round, except for that flat part on the left. In fact, E(0.82) is covered by a disk of unit radius centered at 1.3, which means that the choice {p(z) = z-1.3} shows {\gamma_0 > 0.82}.

p(z) = z-1.3 gives lower bound 0.82

How to get an upper bound on {\gamma_0}? Turns out, it suffices to exhibit a monic polynomial {q} that has all zeros in {E(r)} and satisfies {|q|>1} on the boundary of {E(r)}. The existence of such {q} shows {\gamma_0 < r}. Indeed, suppose that {p} is monic and {|p|\le 1} on {E(r)}. Consider the function {\displaystyle u(z) = \frac{\log|p(z)|}{\deg p} - \frac{\log|q(z)|}{\deg q}}. By construction {u<0} on the boundary of {E(r)}. Also, {u} is subharmonic in the complement of {E(r)}, including {\infty}, where the singularities of both logarithms cancel out, leaving {u(\infty)=0}. This contradicts the maximum principle for subharmonic functions, according to which {u(\infty)} cannot exceed the maximum of {u} on the boundary.

The choice of {q(z) = z-1.42} works for {r=0.89}.

q(z) = z-1.42 gives upper bound 0.89
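Both disk bounds are easy to verify by sampling the circle {|z|=r} (a sketch; the boundary of E(r) is contained in the image of the circle):

import numpy as np

def abs_p_on_circle(c, r, samples=10000):
    # |e^z - c| sampled over the circle |z| = r
    z = r * np.exp(1j * np.linspace(0, 2*np.pi, samples))
    return np.abs(np.exp(z) - c)

print(abs_p_on_circle(1.3, 0.82).max())   # < 1: p(z) = z - 1.3 certifies the lower bound
print(abs_p_on_circle(1.42, 0.89).min())  # > 1: q(z) = z - 1.42 certifies the upper bound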


So we have {\gamma_0} boxed between 0.82 and 0.89; how to get more precise bounds? I don’t know how Pisot achieved the precision of 0.843… it’s possible that he strategically picked some linear and quadratic factors, raised them to variable integer powers and optimized the latter. Today it is too tempting to throw an optimization routine at the problem and let it run for a while.

But what to optimize? The straightforward approach is to minimize the maximum of {|p(e^z)|} on the circle {|z|=r}, approximated by sampling the function at a sufficiently fine uniform grid {\{z_k\}} and picking the maximal value. This works… unspectacularly. One problem is that the objective function is non-differentiable. Another is that taking maximum throws out a lot of information: we are not using the values at other sample points to better direct the search. After running optimization for days, trying different optimization methods, tolerance options, degrees of the polynomial, and starting values, I was not happy with the results…

Turns out, the optimization is much more effective if one minimizes the variance of the set {\{|p(\exp(z_k))|^2\}}. Now we are minimizing a polynomial function of the values {p(\exp(z_k))}, which pushes them toward having the same absolute value, the behavior that we want the polynomial to have. It took from seconds to minutes to produce the polynomials shown below, using the BFGS method as implemented in SciPy.

As the arguments for the optimization function I took the real and imaginary parts of the zeros of the polynomial. The symmetry about the real axis was enforced automatically: the polynomial was the product of quadratic terms {(z-x_k-iy_k) (z-x_k+iy_k)}. This eliminated the potentially useful option of having real zeros of odd order, but I did not feel like special-casing those.

Three digits

Degree 8, lower bound 0.843

Real part: 0.916, 1.186, 1.54, 1.783
Imaginary part: 0.399, 0.572, 0.502, 0.199

Here and below, only the zeros with positive imaginary part are listed (in the left-to-right order), the others being their conjugates.

Degree 10, upper bound 0.844

Real part: 0.878, 1.0673, 1.3626, 1.6514, 1.8277
Imaginary part: 0.3661, 0.5602, 0.6005, 0.4584, 0.171

Four digits

Degree 14, lower bound 0.8438

Real part: 0.8398, 0.9358, 1.1231, 1.357, 1.5899, 1.776, 1.8788
Imaginary part: 0.3135, 0.4999, 0.6163, 0.637, 0.553, 0.3751, 0.1326

Degree 14, upper bound 0.8439

Real part: 0.8397, 0.9358, 1.1231, 1.3571, 1.5901, 1.7762, 1.879
Imaginary part: 0.3136, 0.5, 0.6164, 0.6372, 0.5531, 0.3751, 0.1326

No, I didn’t post the same picture twice. The polynomials are just that similar. But as the list of zeros shows, there are tiny differences…

Five digits

Degree 20, lower bound 0.84383

Real part: 0.81527, 0.8553, 0.96028, 1.1082, 1.28274, 1.46689, 1.63723, 1.76302, 1.82066, 1.86273
Imaginary part: 0.2686, 0.42952, 0.556, 0.63835, 0.66857, 0.63906, 0.54572, 0.39701, 0.23637, 0.08842

Degree 20, upper bound 0.84384

Real part: 0.81798, 0.85803, 0.95788, 1.09239, 1.25897, 1.44255, 1.61962, 1.76883, 1.86547, 1.89069
Imaginary part: 0.26631, 0.4234, 0.54324, 0.62676, 0.66903, 0.65366, 0.57719, 0.44358, 0.26486, 0.07896

Again, nearly the same polynomial works for upper and lower bounds. The fact that the absolute value of each of these polynomials is below 1 (for lower bounds) or greater than 1 (for upper bounds) can be ascertained by sampling them and using an upper estimate on the derivative; there is enough margin to trust computations with double precision.

Finally, the Python script I used. The function "obj" is being minimized, while the function "values" returns the actual quantities of interest: the minimum and maximum of the polynomial on the sample points. The degree of the polynomial is 2n, and the radius under consideration is r. The sample points are collected in the array s. To begin with, the roots are chosen randomly. After each minimization run (inevitably ending in a local minimum, of which there are myriads), the new starting point is obtained by randomly perturbing the local minimum found. (The perturbation is smaller if the minimization was particularly successful.)

import numpy as np
from scipy.optimize import minimize

def obj(r):
    # First n entries of r are the real parts of the zeros, the last n the
    # imaginary parts; conjugate pairs enforce symmetry about the real axis
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc)**2, axis=0)  # |p|^2 at each sample point
    return np.var(p)

def values(r):
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc), axis=0)     # |p| at each sample point
    return [np.min(p), np.max(p)]

r = 0.84384
n = 10
record = 2
s = np.exp(r * np.exp(1j*np.arange(0, np.pi, 0.01)))  # samples of e^z on |z| = r (upper half, by symmetry)
xr = np.random.uniform(0.8, 1.8, size=(n,))
xi = np.random.uniform(0, 0.7, size=(n,))
x0 = np.concatenate((xr, xi))

while True:
    res = minimize(obj, x0, method = 'BFGS')
    if res['fun'] < record:
        record = res['fun']
        print(repr(res['x']))
        print(values(res['x']))
        # small perturbation: explore near a particularly good local minimum
        x0 = res['x'] + np.random.uniform(-0.001, 0.001, size=x0.shape)
    else:
        # larger perturbation: move on from an unpromising local minimum
        x0 = res['x'] + np.random.uniform(-0.05, 0.05, size=x0.shape)