This post is related to Extremal Taylor polynomials, where it was important to observe that the Taylor polynomials of the function do not have zeros in the unit disk. Let’s see how far this generalizes.

The function $\frac{1}{1+z}$ has the rare property that all zeros of its Taylor polynomials have unit modulus. This is clear from

$$1-z+z^2-\cdots+(-z)^n=\frac{1-(-z)^{n+1}}{1+z},$$

whose zeros satisfy $(-z)^{n+1}=1$ and therefore lie on the unit circle.

In this and subsequent illustrations, the zeros of the first 50 Taylor polynomials are shown as blue dots, with the unit circle in red for reference.
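These plots are easy to reproduce numerically. Below is a minimal numpy sketch, under the assumption (my reading of the stripped formula) that the baseline function is $1/(1+z)$, whose Taylor coefficients are $(-1)^k$; note that `np.roots` expects coefficients listed from the highest degree down.

```python
import numpy as np

n = 50
# Taylor coefficients of 1/(1+z) are (-1)^k (an assumption about the function
# discussed above); the partial sum equals (1 - (-z)^(n+1)) / (1 + z),
# so its zeros satisfy (-z)^(n+1) = 1 and lie on the unit circle.
coeffs = [(-1.0) ** k for k in range(n + 1)]
zeros = np.roots(coeffs[::-1])  # np.roots wants the highest degree first

print(np.allclose(np.abs(zeros), 1.0, atol=1e-6))  # True
```

Plotting `zeros.real` against `zeros.imag` for a range of degrees reproduces the blue-dot pictures.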

When the exponent is less than -1, the zeros move inside the unit disk and begin forming nice patterns in there.

When the exponent is strictly between -1 and 1, the zeros are all outside of the unit disk. Some of them get quite large, forcing a change of scale in the image.

Why does this happen as the exponent $p$ approaches 1? The function $1+z$ is its own Taylor polynomial, and its only zero is $z=-1$. So, when $p\approx 1$, the Taylor polynomials are small perturbations of $1+z$. These perturbations of the coefficients have to create additional zeros, but being small, they require a large value of $|z|$ to help them.

For a specific example, the quadratic Taylor polynomial of $(1+z)^p$ is $1+pz+\frac{p(p-1)}{2}z^2$, with roots $z=\dfrac{-p\pm\sqrt{p(2-p)}}{p(p-1)}$. When $p\to 1$, one of these roots is near $-1$ (as it has to be) and the other is large.
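This is easy to confirm numerically, again under the assumption that the function in question is $(1+z)^p$:

```python
import numpy as np

p = 0.99  # exponent close to 1
# Quadratic Taylor polynomial of (1+z)^p (an assumed reading of the text):
# 1 + p z + p(p-1)/2 z^2, with coefficients listed from highest degree down
roots = np.roots([p * (p - 1) / 2, p, 1.0])
near, far = sorted(roots, key=abs)

print(abs(near + 1) < 0.05, abs(far) > 100)  # True True: one root near -1, one huge
```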

Finally, when $p>1$ and $p$ is not an integer, we get zeros on both sides of the unit circle. The majority of them are still outside. A prominent example of an interior zero is produced by the first-degree Taylor polynomial $1+pz$, whose zero $z=-1/p$ lies inside the disk when $p>1$.

Let $G$ be a graph with vertices $1,\dots,n$. The degree of vertex $i$ is denoted $d_i$. Let $L$ be the Laplacian matrix of $G$, so that $L_{ii}=d_i$, $L_{ij}=-1$ when the vertices $i,j$ are adjacent, and $L_{ij}=0$ otherwise. The eigenvalues of $L$ are written as $\mu_1\le\cdots\le\mu_n$.

The graph is regular if all vertices have the same degree: $d_1=\cdots=d_n$. How can this property be seen from its Laplacian eigenvalues $\mu_1,\dots,\mu_n$?

Since the sum of eigenvalues is equal to the trace, we have $\sum\mu_i=\sum d_i$. Moreover, $\sum\mu_i^2$ is the trace of $L^2$, which is equal to the sum of the squares of all entries of $L$. This sum is $\sum d_i^2+\sum d_i$ because the $i$th row of $L$ contains one entry equal to $d_i$ and $d_i$ entries equal to $-1$. In conclusion, $\sum\mu_i^2=\sum d_i^2+\sum d_i$.

The Cauchy-Schwarz inequality says that $n\sum d_i^2\ge\big(\sum d_i\big)^2$, with equality if and only if all numbers $d_i$ are equal, i.e., the graph is regular. In terms of eigenvalues, this means that the difference

$$n\sum\mu_i^2-\Big(\sum\mu_i\Big)^2-n\sum\mu_i$$

is always nonnegative, and is equal to zero precisely when the graph is regular. This is how one can see the regularity of a graph from its Laplacian spectrum.
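Both trace identities and the Cauchy-Schwarz gap can be sanity-checked with numpy. The sketch below (the `laplacian` helper is ad hoc, not from the post) uses the 4-vertex path, which is not regular:

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian matrix L = D - A of a graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return L

# Path on 4 vertices: degrees 1, 2, 2, 1 (not regular)
L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
deg = np.diag(L)
mu = np.linalg.eigvalsh(L)

# trace identities: sum mu = sum d_i, sum mu^2 = sum d_i^2 + sum d_i
print(np.isclose(mu.sum(), deg.sum()))                            # True
print(np.isclose((mu ** 2).sum(), (deg ** 2).sum() + deg.sum()))  # True

# Cauchy-Schwarz gap: n * sum d_i^2 - (sum d_i)^2, zero iff regular
gap = len(deg) * (deg ** 2).sum() - deg.sum() ** 2
print(gap)  # 4.0: positive, since the path is not regular
```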

As an aside, $\sum\mu_i^2$ is an even integer. Indeed, the sum $\sum d_i$ is even because it double-counts the edges. Hence the number of vertices of odd degree is even, which implies that $\sum d_i^k$ is even for every positive integer $k$.

Up to a constant factor, $n\sum d_i^2-\big(\sum d_i\big)^2$ is simply the degree variance: the variance of the sequence $d_1,\dots,d_n$. What graph maximizes it for a given $n$? We want to have some very large degrees and some very small ones.

Let $G$ be the union of the complete graph $K_m$ on $m$ vertices and $n-m$ isolated vertices. The sum of degrees is $m(m-1)$ and the sum of squares of degrees is $m(m-1)^2$. Hence, the degree variance is

$$\frac{m(m-1)^2}{n}-\frac{m^2(m-1)^2}{n^2}=\frac{m(m-1)^2(n-m)}{n^2}.$$

For the maximum is attained by , that is, there is one isolated vertex. For the maximum is . In general it is attained by .

The graph $G$ is disconnected. But any graph has the same degree variance as its complement. And the complement is always connected: it consists of a “center”, a complete graph on $n-m$ vertices, and a “periphery”, a set of $m$ vertices that are connected to each central vertex. Put another way, it is obtained from the complete bipartite graph $K_{m,\,n-m}$ by connecting all vertices of the second group together.

Tom A. B. Snijders (1981) proved that this graph and its complement are the only graphs maximizing the degree variance; in particular, the complement is the unique maximizer among connected graphs. It is pictured below.
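The invariance of the degree variance under complementation is immediate from the fact that the complement degrees are $n-1-d_i$, and a quick numerical check confirms it (the degree sequence of $K_4$ plus three isolated vertices is used only as an illustration):

```python
import numpy as np

def degree_variance(deg):
    """Variance of a degree sequence."""
    deg = np.asarray(deg, dtype=float)
    return np.mean((deg - deg.mean()) ** 2)

n = 7
deg = np.array([3, 3, 3, 3, 0, 0, 0])   # K_4 plus three isolated vertices
comp_deg = (n - 1) - deg                # complement degrees: 3 -> 3, 0 -> 6

print(degree_variance(deg), degree_variance(comp_deg))  # equal values
```

Shifting every degree by the same constant and flipping the sign leaves the variance unchanged, which is all the check exercises.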

Given a vector space $V$ and a map $F\colon V\to V$ (linear or not), consider the displacement set of $F$: $D(F)=\{F(x)-x\colon x\in V\}$. For linear maps this is simply the range of the operator $F-I$ and therefore is a subspace.

The essentially nonlinear operations of taking the inverse or composition of maps become almost linear when the displacement set is considered. Specifically, if $F$ has an inverse, then $D(F^{-1})=-D(F)$, which is immediate from the definition. Also, $D(F\circ G)\subseteq D(F)+D(G)$, because $F(G(x))-x=\big(F(G(x))-G(x)\big)+\big(G(x)-x\big)$.

When $V$ is a topological vector space, the maps for which $D(F)$ has compact closure are of particular interest: these are compact perturbations of the identity, for which degree theory can be developed. The consideration of $D(F)$ makes it very clear that if $F$ is an invertible compact perturbation of the identity, then $F^{-1}$ is in this class as well.

It is also of interest to consider the maps for which $D(F)$ is either bounded, or bounded away from $0$. Neither case can occur for linear operators, so this is essentially nonlinear analysis. In the nonlinear case, the boundedness assumption for linear operators is usually replaced by the Lipschitz condition. Let us say that $F$ is $(L,\ell)$-bi-Lipschitz if $\ell\|x-y\|\le\|F(x)-F(y)\|\le L\|x-y\|$ for all $x,y$ in the domain of $F$.

Brouwer’s fixed point theorem fails in infinite-dimensional Hilbert spaces, but it is not yet clear how hard it can fail. The strongest possible counterexample would be a bi-Lipschitz automorphism of the unit ball with displacement bounded away from 0. The existence of such a map is unknown. If it does not exist, that would imply that the unit ball and the unit sphere in the Hilbert space are not bi-Lipschitz equivalent, because the unit sphere does have such an automorphism: the antipodal map $x\mapsto -x$, whose displacement has constant norm 2.

Concerning the maps with bounded displacement, here is a theorem from Patrick Biermann’s thesis (Theorem 3.3.2): if $F$ is an $(L,\ell)$-bi-Lipschitz map in a Hilbert space with the ratio $L/\ell$ sufficiently close to 1, and $F$ has bounded displacement, then $F$ is onto. The importance of bounded displacement is illustrated by the forward shift map $S(x_1,x_2,\dots)=(0,x_1,x_2,\dots)$ in $\ell^2$: an isometry with unbounded displacement for which surjectivity nonetheless fails.

It would be nice to get rid of the assumption in the preceding paragraph. I guess any bi-Lipschitz map with bounded displacement should be surjective, at least in Hilbert spaces, but possibly in general Banach spaces as well.

For a vector $x$ in a normed space $X$, define the orthogonal complement $x^\perp$ to be the set of all vectors $y$ such that $\|x+ty\|\ge\|x\|$ for all scalars $t$. In an inner product space (real or complex), this agrees with the usual definition of orthogonality because $\|x+ty\|^2-\|x\|^2=2\,\mathrm{Re}\,\big(t\langle y,x\rangle\big)+|t|^2\|y\|^2$, and the right hand side can be nonnegative for all $t$ only if $\langle y,x\rangle=0$.

Let’s see what properties of the orthogonal complement survive in a general normed space. For one thing, $x^\perp=X$ if and only if $x=0$. Another trivial property is that $ty\in x^\perp$ for all scalars $t$ whenever $y\in x^\perp$. More importantly, $x^\perp$ is a closed set that contains some nonzero vectors.

Closed because the complement is open: if $\|x+ty\|<\|x\|$ for some $t$, the same will be true for vectors close to $y$.

Contains a nonzero vector because the Hahn-Banach theorem provides a norming functional for $x$, i.e., a unit-norm linear functional $f$ such that $f(x)=\|x\|$. Any $y\in\ker f$ is orthogonal to $x$, because $\|x+ty\|\ge|f(x+ty)|=|f(x)|=\|x\|$.

In general, $x^\perp$ is not a linear subspace; it need not even have empty interior. For example, consider the orthogonal complement of the first basis vector $e_1=(1,0)$ in the plane with the $\ell^1$ (taxicab) norm: it is the set $\{(y_1,y_2)\colon|y_2|\ge|y_1|\}$.

This example also shows that orthogonality is not symmetric in general normed spaces: $(1,1)\in e_1^\perp$ but $e_1\notin(1,1)^\perp$. This is why I avoid using the notation $x\perp y$ here.
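The taxicab example can be checked numerically. The sketch below tests the defining inequality $\|x+ty\|_1\ge\|x\|_1$ on a grid of $t$ values (the grid bounds and the helper name are ad hoc):

```python
import numpy as np

def in_complement(x, y, ts=np.linspace(-3, 3, 6001)):
    """Grid test of ||x + t y||_1 >= ||x||_1 for all scalars t."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    norms = np.abs(x[None, :] + ts[:, None] * y[None, :]).sum(axis=1)
    return norms.min() >= np.abs(x).sum() - 1e-12

e1 = [1.0, 0.0]
print(in_complement(e1, [0.0, 1.0]))  # True
print(in_complement(e1, [1.0, 1.0]))  # True: the complement has nonempty interior
print(in_complement(e1, [1.0, 0.1]))  # False: here |y_2| < |y_1| fails the test
```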

In fact, $x^\perp$ is the union of the kernels of all norming functionals of $x$, so it is a linear subspace only when the norming functional is unique. Containment in one direction was already proved. Conversely, suppose $y\in x^\perp$ and define a linear functional $f$ on the span of $x$ and $y$ by $f(ax+by)=a\|x\|$. By construction, $f$ has norm 1: indeed, $|f(ax+by)|=|a|\,\|x\|\le|a|\,\|x+(b/a)y\|=\|ax+by\|$ by orthogonality. Its Hahn-Banach extension is a norming functional for $x$ that vanishes on $y$.

Consider $L^p$ as an example. A function $f$ satisfies $\mathbf 1\in f^\perp$ precisely when its $p$th moment $\int|f|^p$ is minimal among all translates $\int|f-c|^p$. This means, by definition, that its “$L^p$ estimator” of location is zero. In the special cases $p=1,2,\infty$ the estimator is known as the median, mean, and midrange, respectively. Increasing $p$ gives more influence to outliers, so $1\le p\le2$ is the more useful range for it.
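The median/mean/midrange trichotomy can be illustrated by brute-force minimization over a grid (`m_estimate` is a hypothetical helper, and the grid resolution is ad hoc):

```python
import numpy as np

def m_estimate(data, p):
    """Minimize sum |x_i - c|^p over c, by brute force on a grid."""
    data = np.asarray(data, dtype=float)
    grid = np.linspace(data.min(), data.max(), 100001)
    cost = (np.abs(data[:, None] - grid[None, :]) ** p).sum(axis=0)
    return grid[cost.argmin()]

data = [0.0, 1.0, 2.0, 3.0, 10.0]
print(m_estimate(data, 1))   # ~ 2.0, the median
print(m_estimate(data, 2))   # ~ 3.2, the mean
print(m_estimate(data, 20))  # large p: approaches the midrange 5.0
```

The outlier 10 drags the estimate further to the right as $p$ grows, which is the "more influence to outliers" remark in action.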

The market isn’t so hot for Iraq war movies. And it’s nearly impossible to adapt such an introspective novel into film. I still respect the effort and its outcome, even if all references to the normal distribution got left out of it.

I spent a lot of time trying to identify the exact point at which I noticed a change in Murph, somehow thinking that if I could figure out where he had begun to slide down the curve of the bell that I could do something about it. But these are subtle shifts, and trying to distinguish them is like trying to measure the degrees of gray when evening comes. It’s impossible to identify the cause of anything, and I began to see the war as a big joke, for how cruel it was, for how desperately I wanted to measure the particulars of Murph’s new, strange behavior and trace it back to one moment, to one cause, to one thing I would not be guilty of. And I realized very suddenly one afternoon while throwing rocks into a bucket in a daze that the joke was in fact on me. Because how can you measure deviation if you don’t know the mean? There was no center in the world. The curves of all our bells were cracked.

Two actors with (essentially) the same first name and over 50 years of age difference (Anthony Hopkins 1937-, Anton Yelchin 1989-2016) make this Stephen King adaptation well worth watching.

He made another circuit of his room, working the tingles out of his legs, feeling like a prisoner pacing his cell. The door had no lock on it—no more than his mom’s did—but he felt like a jailbird just the same. He was afraid to go out. She hadn’t called him for supper, and although he was hungry—a little, anyway—he was afraid to go out. He was afraid of how he might find her… or of not finding her at all. Suppose she had decided she’d finally had enough of Bobby-O, stupid lying little Bobby-O, his father’s son? Even if she was here, and seemingly back to normal… was there even such a thing as normal? People had terrible things behind their faces sometimes. He knew that now.

Sure, it’s not as good as the first film in the series (which does not qualify for the challenge, scoring 65% on RT), but a major improvement on the mindless zombie chases of the second part. I like to think of it as a parable illustrating ethical issues in public health… allowing for the customary movie-science vs actual-science differences.

How can one measure the nonlinearity of a function $f\colon I\to\mathbb R$, where $I\subset\mathbb R$ is an interval? A natural way is to consider the smallest possible deviation from a line, that is, $\inf_{k,b}\sup_{x\in I}|f(x)-kx-b|$. It turns out to be convenient to divide this by $|I|$, the length of the interval $I$. So, let $\omega(f,I)=|I|^{-1}\inf_{k,b}\sup_{x\in I}|f(x)-kx-b|$. (This is similar to the β-numbers of Peter Jones, except that the deviation from a line is measured only in the vertical direction.)

Relation with derivatives

The definition of derivative immediately implies that if $f'(a)$ exists, then $\omega(f,I)\to0$ as $I$ shrinks to $a$ (that is, $|I|\to0$ while $a\in I$). A typical construction of a nowhere differentiable continuous function is based on making $\omega(f,I)$ bounded from below; it is enough to do this for dyadic intervals, and that can be done by adding wiggly terms like $2^{-n}\operatorname{dist}(2^nx,\mathbb Z)$: see the blancmange curve.

The converse is false: if as shrinks to , the function may still fail to be differentiable at . The reason is that the affine approximation may have different slopes at different scales. An example is in a neighborhood of . Consider a small interval . The line with is a good approximation to because on most of the interval except for a very small part near , and on that part is very close to anyway.

Why the root of logarithm? Because has a fixed amount of change on a fixed proportion of , independently of . We need a function slower than the logarithm, so that as decreases, there is a smaller amount of change on a larger part of the interval .

Nonlinearity of Lipschitz functions

Suppose $f$ is a Lipschitz function, that is, there exists a constant $L$ such that $|f(x)-f(y)|\le L|x-y|$ for all $x,y\in I$. It’s easy to see that $\omega(f,I)\le L/2$, by taking the constant mid-range approximation $b=\frac12\big(\max_If+\min_If\big)$. But the sharp bound is $\omega(f,I)\le L/4$, whose proof is not as trivial. The sharpness is shown by $f(x)=|x|$ with $I=[-1,1]$.
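Both bounds are easy to probe numerically. For a fixed slope $k$, the best intercept is the mid-range of $f(x)-kx$, so the infimum over lines reduces to a one-parameter search (grid sizes below are ad hoc):

```python
import numpy as np

def omega(f, a, b, num=1001, slopes=np.linspace(-2.0, 2.0, 401)):
    """Nonlinearity: inf over lines kx+c of sup |f - (kx+c)|, divided by b - a."""
    x = np.linspace(a, b, num)
    r = f(x)[None, :] - slopes[:, None] * x[None, :]
    # for fixed slope k, the best intercept is the mid-range of f(x) - kx,
    # and the resulting sup-error is half the oscillation of f(x) - kx
    best = ((r.max(axis=1) - r.min(axis=1)) / 2).min()
    return best / (b - a)

# f(x) = |x| on [-1, 1] is 1-Lipschitz and attains the sharp bound L/4
print(omega(np.abs, -1.0, 1.0))  # ~ 0.25
```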

Proof. Let $k$ be the slope of the linear function that agrees with $f$ at the endpoints of $I$; note that $|k|\le L$. Subtracting this linear function from $f$ gives us a Lipschitz function $g$ such that $g=0$ at both endpoints of $I$ and the slopes of $g$ lie in $[-L-k,\ L-k]$. Let $M=\max_Ig$ and $m=\min_Ig$. Chebyshev’s inequality gives lower bounds for the measures of the sets $\{g>t\}$ and $\{g<-s\}$ (for $0<t<M$ and $0<s<-m$): namely, $|\{g>t\}|\ge 2(M-t)/L$ and $|\{g<-s\}|\ge 2(-m-s)/L$, since $g$ rises at rate at most $L-k$, falls at rate at most $L+k$, and $\frac1{L-k}+\frac1{L+k}\ge\frac2L$. These sets are disjoint subsets of $I$, so by adding the bounds and letting $t,s\to0$, we find that $M-m\le L|I|/2$. Since $g$ takes values in $[m,M]$, the mid-range approximation to $g$ has error at most $(M-m)/2\le L|I|/4$. Hence $\omega(f,I)\le L/4$.

Reducing nonlinearity

Turns out, the graph of every Lipschitz function has relatively large almost-flat pieces. That is, there are subintervals of nontrivial size where the measure of nonlinearity is much smaller than the Lipschitz constant. This result is a special (one-dimensional) case of Theorem 2.3 in Affine approximation of Lipschitz functions and nonlinear quotients by Bates, Johnson, Lindenstrauss, Preiss, and Schechtman.

Theorem AA (for “affine approximation”): For every $\epsilon>0$ there exists $\delta>0$ with the following property. If $f\colon[0,1]\to\mathbb R$ is an $L$-Lipschitz function, then there exists an interval $J\subset[0,1]$ with $|J|\ge\delta$ and $\omega(f,J)\le\epsilon L$.

Theorem AA should not be confused with Rademacher’s theorem, which says that a Lipschitz function is differentiable almost everywhere. The point here is a lower bound on the size of the interval. Differentiability does not provide that. In fact, if we knew that $f$ is smooth, or even a polynomial, the proof of Theorem AA would not become any easier.

Proof of Theorem AA

We may assume $I=[0,1]$ and $L=1$. For $t>0$ let $L(t)=\sup\{|f(x)-f(y)|/|x-y|\colon|x-y|\ge t\}$. That is, $L(t)$ is the restricted Lipschitz constant, one that applies for distances at least $t$. It is a decreasing function of $t$, and $L(t)\le1$.

Note that and that every value of is within of either or . Hence, the oscillation of on is at most . If , then the constant mid-range approximation on gives the desired conclusion, with . From now on .

The sequence is increasing toward , which implies for some . Pick an interval that realizes , that is and . Without loss of generality (otherwise consider ). Let be the middle half of . Since each point of is within distance of both and , it follows that for all .

So far we have pinched between two affine functions of equal slope. Let us consider their difference. Recall that , which gives a bound of for the difference. Approximating by the average of the two affine functions, we conclude that as required.

It remains to consider the size of , about which we only know so far. Naturally, we want to take the smallest such that holds. Let be this value; then . Here and . The conclusion is that , hence . This finally yields as an acceptable choice, completing the proof of Theorem AA.

This is a collection of entirely unoriginal remarks about the Laplacian spectrum of graphs. For an accessible overview of the subject I recommend the M.S. thesis The Laplacian Spectrum of Graphs by Michael William Newman. It also includes a large table of graphs with their spectra. Here I will avoid introducing matrices and enumerating vertices.

Let $V$ be the vertex set of a graph. Write $u\sim v$ if $u,v$ are adjacent vertices. Given a function $f\colon V\to\mathbb R$, define $Lf(v)=\sum_{u\sim v}\big(f(v)-f(u)\big)$.
This is a linear operator (the graph Laplacian) on the Euclidean space $\ell^2(V)$ of all functions $f\colon V\to\mathbb R$ with the norm $\|f\|^2=\sum_{v\in V}f(v)^2$. It is symmetric: $\langle Lf,g\rangle=\langle f,Lg\rangle$, and positive semidefinite: $\langle Lf,f\rangle=\sum_{u\sim v}\big(f(u)-f(v)\big)^2\ge0$, the sum being taken over edges. Since equality is attained for constant $f$, 0 is always an eigenvalue of $L$.

This is the standard setup, but I prefer to change things a little and replace $\ell^2(V)$ by the smaller space $\ell^2_0(V)$ of functions with zero mean: $\sum_{v\in V}f(v)=0$. Indeed, $L$ maps $\ell^2(V)$ to $\ell^2_0(V)$ anyway, and since it kills the constants, it makes sense to focus on $\ell^2_0(V)$. It is a vector space of dimension $n-1$, where $n=|V|$.

One advantage is that the smallest eigenvalue on $\ell^2_0(V)$ is 0 if and only if the graph is disconnected: indeed, $\langle Lf,f\rangle=0$ is equivalent to $f$ being constant on each connected component. We also gain better symmetry between $L$ and the Laplacian of the graph complement, denoted $L'$. Indeed, since $Lf(v)+L'f(v)=\sum_{u\ne v}\big(f(v)-f(u)\big)=nf(v)-\sum_uf(u)$, it follows that $(L+L')f=nf$ for every $f\in\ell^2_0(V)$. So, the identity $L+L'=nI$ holds on $\ell^2_0(V)$ (it does not hold on $\ell^2(V)$). Hence the eigenvalues of $L'$ are obtained by subtracting the eigenvalues of $L$ from $n$. As a corollary, the largest eigenvalue of $L$ is at most $n$, with equality if and only if the graph complement is disconnected. More precisely, the multiplicity of the eigenvalue $n$ is one less than the number of connected components of the graph complement.
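The relation between the spectra of a graph and its complement is easy to verify numerically. Here is a numpy sketch using the 5-vertex path (the zero eigenvalue, which corresponds to constant functions, is dropped; the `laplacian` helper is ad hoc):

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian matrix of a graph on vertices 0..n-1."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # the path on 5 vertices
comp = [(i, j) for i in range(n) for j in range(i + 1, n)
        if (i, j) not in edges]                    # its complement

mu = np.sort(np.linalg.eigvalsh(laplacian(n, edges)))[1:]   # drop the 0
mu_c = np.sort(np.linalg.eigvalsh(laplacian(n, comp)))[1:]

# on the mean-zero subspace, complement eigenvalues are n minus the originals
print(np.allclose(mu_c, np.sort(n - mu)))  # True
```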

Let $D$ denote the diameter of the graph. Then the number of distinct Laplacian eigenvalues on $\ell^2_0(V)$ is at least $D$. Indeed, let $u,v$ be two vertices at distance $D$ from each other. Define $f_0(u)=1$ and $f_0=0$ elsewhere. Also let $f_k=L^kf_0$ for $k=1,2,\dots$. One can prove by induction that $f_k(w)=0$ when the distance from $w$ to $u$ is greater than $k$, and $f_k(w)\ne0$ when the distance from $w$ to $u$ is equal to $k$. In particular, $f_k(v)=0$ when $k<D$ and $f_D(v)\ne0$. This shows that $f_D$ is not a linear combination of $f_0,\dots,f_{D-1}$. Since $f_k\in\ell^2_0(V)$ for $k\ge1$, it follows that $f_D$ is not a linear combination of $f_1,\dots,f_{D-1}$. Hence the minimal polynomial of $L$ restricted to $\ell^2_0(V)$ has degree at least $D$, which implies the claim.

Let’s consider a few examples of connected graphs.

3 vertices

There are two connected graphs: the 3-path (D=2) and the 3-cycle (D=1). In both cases we get D distinct eigenvalues. The spectra are [1, 3] and [3, 3], respectively.
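These spectra are quick to confirm with numpy; note that, per the convention above, the listed spectra omit the zero eigenvalue coming from constant functions:

```python
import numpy as np

# Laplacians of the 3-path and the 3-cycle, written out directly
L_path = np.array([[ 1, -1,  0],
                   [-1,  2, -1],
                   [ 0, -1,  1]], dtype=float)
L_cycle = np.array([[ 2, -1, -1],
                    [-1,  2, -1],
                    [-1, -1,  2]], dtype=float)

# eigvalsh returns eigenvalues in ascending order
print(np.linalg.eigvalsh(L_path).round(6) + 0.0)   # [0. 1. 3.]
print(np.linalg.eigvalsh(L_cycle).round(6) + 0.0)  # [0. 3. 3.]
```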

4 vertices

One graph of diameter 3, the path. Its spectrum is $[2-\sqrt2,\ 2,\ 2+\sqrt2]$.

One graph of diameter 1, the complete graph $K_4$. Its spectrum is $[4,4,4]$. This pattern continues for other complete graphs: since the complement of $K_n$ is the empty graph ($n$ components), all $n-1$ eigenvalues of $K_n$ are equal to $n$.

Four graphs of diameter 2, which are shown below, with each caption being the spectrum.

Remarks:

The graph [1, 3, 4] has more distinct eigenvalues than its diameter.

The graph [2, 2, 4] is regular (all vertices have the same degree).

The smallest eigenvalue of graphs [1, 1, 4] and [2, 2, 4] is a multiple eigenvalue, due to the graphs having a large group of automorphisms (here rotations); applying some of these automorphisms to an eigenfunction for the smallest eigenvalue yields another eigenfunction.

[1, 3, 4] and [2, 4, 4] also have automorphisms, but their automorphisms preserve the eigenfunction for the lowest eigenvalue, up to a constant factor.

5 vertices

One graph of diameter 4, the path. Its spectrum is related to the golden ratio $\varphi=(1+\sqrt5)/2$: it consists of $2-\varphi,\ 3-\varphi,\ 1+\varphi,\ 2+\varphi$.

One graph of diameter 1, the complete one: [5, 5, 5, 5]

Five graphs of diameter 3. All have connected complement, with the highest eigenvalue strictly between 4 and 5. None are regular. Each has 4 distinct eigenvalues.

14 graphs of diameter 2. Some of these are noted below.

Two have connected complement, so their eigenvalues are less than 5 (spectrum shown on hover):

1.382, 1.382, 3.618, 3.618

1.382, 2.382, 3.618, 4.618

One is the smallest graph with both integers and non-integers in its spectrum (spectrum shown on hover).

Two have eigenvalues of multiplicity 3, indicating a high degree of symmetry (spectrum shown on hover).

1, 1, 1, 5

3, 5, 5, 5

Two have all eigenvalues integer and distinct:

1, 2, 4, 5

2, 3, 4, 5

The 5-cycle and the complete graph are the only regular graphs on 5 vertices.

6 vertices

This is where we first encounter isospectral graphs: the Laplacian spectrum cannot tell them apart.

Both of these have spectrum but they are obviously non-isomorphic (consider the vertex degrees):

Both of these have a common spectrum and are non-isomorphic as well.

Indeed, the second pair is obtained from the first by taking graph complement.

Also notable are regular graphs on 6 vertices, all of which have integer spectrum.

Here [3, 3, 3, 3, 6] (complete bipartite) and [2, 3, 3, 5, 5] (prism) are both regular of degree 3, but the spectrum allows us to tell them apart.

The prism is the smallest regular graph for which the first eigenvalue is a simple one. It has plenty of automorphisms, but the relevant eigenfunction (1 on one face of the prism, -1 on the other face) is compatible with all of them.

7 vertices

There are four regular graphs on 7 vertices. Two of them are by now familiar: 7-cycle and complete graph. Here are the other two, both regular of degree 4 but with different spectra.

There are lots of isospectral pairs of graphs on 7 vertices, so I will list only the isospectral triples, of which there are five.

Spectrum 0.676596, 2, 3, 3.642074, 5, 5.681331:

Spectrum 0.726927, 2, 3.140435, 4, 4, 6.132637:

Spectrum 0.867363, 3, 3, 3.859565, 5, 6.273073:

Spectrum 1.318669, 2, 3.357926, 4, 5, 6.323404:

All of the triples mentioned so far have connected complement: for example, taking the complement of the triple with the spectrum [0.676596, 2, 3, 3.642074, 5, 5.681331] turns it into the triple with the spectrum [1.318669, 2, 3.357926, 4, 5, 6.323404].

Last but not least, an isospectral triple with an integer spectrum: 3, 4, 4, 6, 6, 7. This one has no counterpart since the complement of each of these graphs is disconnected.

8 vertices

Regular graphs, excluding the cycle (spectrum 0.585786, 0.585786, 2, 2, 3.414214, 3.414214, 4) and the complete one.