This is a meta-post which collects links to other posts on this blog with (sometimes implicit) questions that were left unanswered there. This does not necessarily mean that nobody has an answer, just that I did not have one when writing the post. The collection is in reverse chronological order.
Multiple kinds of branching here. First, the motorsport content has been moved to formula7.blog. Two blogs? Well, it became clear that my Stack Exchange activity, already on hiatus since 2018, is not going to resume (context: January 14, January 15, January 17). But typing words in boxes is still a hobby of mine.
There may be yet more branching in the knowledge market space, with Codidact and TopAnswers attempting to rise from the ashes of Stack Exchange. (I do not expect either project to have much success.)
Also, examples of branching in complex analysis are often limited to the situations where any two branches differ either by an additive constant, as for $\log z$, or by a multiplicative constant, as for $\sqrt{z}$. But different branches can even have different branch sets. Consider the dilogarithm, which has a very nice power series in the unit disk: $\mathrm{Li}_2(z)=\sum_{n=1}^{\infty} \frac{z^n}{n^2}$.
The series even converges on the unit circle $|z|=1$, providing a continuous extension there. But this circle is also the boundary of the disk of convergence, so some singularity has to appear. And it does, at $z=1$. Going around this singularity and coming back to the unit disk, we suddenly see a function with a branch point at $z=0$, where there was no branching previously.
What gives? Consider the derivative: $\mathrm{Li}_2'(z) = -\frac{\log(1-z)}{z}$.
As long as the principal branch of the logarithm is considered, there is no singularity at $z=0$, since the zero of $\log(1-z)$ there cancels the denominator. But once we move around $z=1$, the logarithm acquires a multiple of $2\pi i$, and so $\mathrm{Li}_2'$ gets an additional term of the form $\frac{2\pi i k}{z}$, and integrating that results in logarithmic branching at $z=0$.
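As a quick numerical sanity check (a plain-Python sketch of mine, not from the post): the partial sums of the series converge to $\pi^2/6$ at the boundary point $z=1$, and a central difference reproduces the derivative formula $-\log(1-z)/z$ at an interior point.

```python
import math

def li2(z, terms=200000):
    """Partial sum of the dilogarithm series: sum of z^n / n^2."""
    return sum(z**n / n**2 for n in range(1, terms + 1))

# The series converges even at z = 1 on the boundary circle, to pi^2/6.
print(li2(1.0))  # ~ 1.6449

# Derivative check at z = 1/2: Li2'(z) = -log(1 - z)/z, which here equals 2 log 2.
h = 1e-6
numeric = (li2(0.5 + h) - li2(0.5 - h)) / (2 * h)
print(numeric, 2 * math.log(2))
```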
Of course, this does not even begin the story of the dilogarithm, so I refer to Zagier’s expanded survey which has a few branch points itself.
Thus the dilogarithm is one of the simplest non-elementary functions one can imagine. It is also one of the strangest. It occurs not quite often enough, and in not quite an important enough way, to be included in the Valhalla of the great transcendental functions—the gamma function, Bessel and Legendre functions, hypergeometric series, or Riemann’s zeta function. And yet it occurs too often, and in far too varied contexts, to be dismissed as a mere curiosity. First defined by Euler, it has been studied by some of the great mathematicians of the past—Abel, Lobachevsky, Kummer, and Ramanujan, to name just a few—and there is a whole book devoted to it. Almost all of its appearances in mathematics, and almost all the formulas relating to it, have something of the fantastical in them, as if this function alone among all others possessed a sense of humor.
The three fundamental limits (as they were taught to me) are $\lim_{x\to 0}\frac{e^x-1}{x}=1$, $\lim_{x\to 0}\frac{\ln(1+x)}{x}=1$, and $\lim_{x\to 0}\frac{\sin x}{x}=1$ (exponential, logarithmic, trigonometric). Usually they come with pictures like
which show that the graph of the numerator in each limit, say $y=\sin x$, is indeed close to the line $y=x$.
But there are so many different degrees of “close”: two quantities can agree to several decimal places while their reciprocals still differ appreciably. As a stress-test of the approximation $f(x)\approx x$, let us consider the behavior of the difference $\frac{1}{f(x)}-\frac{1}{x}$ for each of the fundamental limits.
Expressed as a difference of reciprocals, the exponential limit gives $\frac{1}{e^x-1}-\frac{1}{x}$.
Always negative, which is obvious once we recall that the graph of $e^x$ lies above its tangent line $y=1+x$, so that $e^x-1>x$. The graph has central symmetry about the point $(0,-\frac12)$, which is not as obvious but is an easy algebra exercise: $\frac{1}{e^x-1}+\frac{1}{e^{-x}-1}=-1$. Since the asymptote on the right is $y=0$, the other asymptote is $y=-1$. This looks a lot like a shifted logistic function $\frac{1}{1+e^{-x}}-1$… but it is not, because the logistic function approaches its asymptotes exponentially fast, while our function does so at the rate $1/|x|$.
For comparison, here is the logistic curve (in green) scaled to match the behavior at $x=0$.
This could potentially be useful if one needs a logistic-type function that behaves like a rational function at infinity. Simply using a rational function for this purpose would not do: it cannot have two distinct horizontal asymptotes.
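The claims above are easy to test numerically (my own sketch, assuming the difference in question is $\frac{1}{e^x-1}-\frac{1}{x}$): negativity, the two asymptotes, and the central-symmetry identity $f(x)+f(-x)=-1$.

```python
import math

def f(x):
    """Reciprocal difference for the exponential limit: 1/(e^x - 1) - 1/x."""
    return 1 / (math.exp(x) - 1) - 1 / x

# Approaching the asymptotes y = 0 (right) and y = -1 (left), only at rate 1/|x|:
print(f(20.0), f(-20.0))

# Central symmetry about (0, -1/2) amounts to the identity f(x) + f(-x) = -1.
for x in (0.3, 1.0, 5.0):
    print(x, f(x) + f(-x))
```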
This one, the difference $\frac{1}{\ln(1+x)}-\frac{1}{x}$ for the logarithmic limit, is always positive, and has the horizontal asymptote $y=0$ on the right; on the left its domain ends at $x=-1$, where the graph arrives at height $1$ with a vertical tangent. At a glance it may look like a shifted/scaled hyperbola. Indeed, $\frac{1}{x+2}$ is a decent approximation to it, shown in green below.
Next, the trigonometric limit, with the difference $\frac{1}{\sin x}-\frac{1}{x}$. Unlike in the previous cases, the difference of reciprocals vanishes at $x=0$ due to the approximation $\sin x\approx x$ being of higher order: the error term is cubic rather than quadratic. The graph looks like the tangent function, but it cannot be exactly that, since the nearest vertical asymptotes are $\pm\pi$ rather than $\pm\pi/2$. (Not to mention other reasons such as non-periodicity.) A stretched and rescaled tangent, namely $\frac13\tan\frac{x}{2}$, sort of fits:
There is also the tangent limit $\lim_{x\to 0}\frac{\tan x}{x}=1$, with the reciprocal difference being $\frac{1}{\tan x}-\frac{1}{x}$.
The limit adds nothing new compared to the previous one, since $\tan x=\frac{\sin x}{\cos x}$ and $\cos x\to 1$. But the difference of reciprocals is another story. For one thing, its principal error term is twice as large as that of the sine limit: $-\frac{x}{3}$ versus $\frac{x}{6}$. Accordingly, the graph looks more like $-\frac{2}{3}\tan\frac{x}{2}$.
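The factor of two between the error terms can be checked numerically (a sketch, under the assumption that the two differences are $\frac{1}{\sin x}-\frac1x$ and $\frac{1}{\tan x}-\frac1x$):

```python
import math

def ds(x):
    """Reciprocal difference for the sine limit."""
    return 1 / math.sin(x) - 1 / x

def dt(x):
    """Reciprocal difference for the tangent limit (equals cot x - 1/x)."""
    return 1 / math.tan(x) - 1 / x

x = 0.01
print(ds(x) / x)  # ~  1/6
print(dt(x) / x)  # ~ -1/3: twice as large, with the opposite sign
```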
All of the above differences have derivatives of all orders at the origin, which is not easy to prove with the standard calculus tools. Complex analysis makes the situation clear: the reciprocal of a holomorphic function with a simple zero at $0$ can be expanded into a Laurent series $\frac{1}{x}+c_0+c_1x+c_2x^2+\cdots$, and subtracting $\frac{1}{x}$ eliminates the principal part of that series, leaving a convergent power series, i.e., another holomorphic function. Let us take a look at these series:
$\frac{1}{e^x-1}-\frac{1}{x} = -\frac12+\frac{x}{12}-\frac{x^3}{720}+\frac{x^5}{30240}-\cdots$. Nice, the coefficients have alternating signs and all of them have numerator $1$… oh no, the $x^{11}$ term is $-\frac{691}{1307674368000}\,x^{11}$. The signs do continue to alternate. Apart from the constant term, only odd powers of $x$ appear, in accordance with the central symmetry noted above. The coefficient of $x^{2k-1}$ in this series is $\frac{B_{2k}}{(2k)!}$, where $B_{2k}$ is the $2k$th Bernoulli number. These are the “modern” Bernoulli numbers, with $B_1=-\frac12$ rather than $B_1=\frac12$.
$\frac{1}{\ln(1+x)}-\frac{1}{x} = \frac12-\frac{x}{12}+\frac{x^2}{24}-\frac{19x^3}{720}+\cdots$. Also alternating, but not omitting the even powers, and not decaying nearly as fast as the coefficients of the previous series. (Of course: this function has a singularity at $x=-1$, while the nearest singularities of the exponential thing are at $\pm 2\pi i$.) These are the Gregory coefficients, which according to Wikipedia “often appear in works of modern authors who do not recognize them”. I would not recognize them either without the OEIS.
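Both sets of coefficients can be computed exactly by inverting the power series of $\frac{e^x-1}{x}$ and $\frac{\ln(1+x)}{x}$ in rational arithmetic. A sketch (the helper `inverse_series` is my own, not a library function):

```python
from fractions import Fraction
from math import factorial

def inverse_series(a, n):
    """Coefficients c_0..c_{n-1} with (sum a_k x^k)(sum c_k x^k) = 1; needs a[0] != 0."""
    c = [Fraction(1) / a[0]]
    for k in range(1, n):
        c.append(-sum(a[j] * c[k - j] for j in range(1, k + 1)) / a[0])
    return c

N = 13
# x/(e^x - 1): its coefficient of x^k is the coefficient of x^(k-1) in 1/(e^x-1) - 1/x.
exp_c = inverse_series([Fraction(1, factorial(k + 1)) for k in range(N)], N)
# x/ln(1+x): the Gregory coefficients.
log_c = inverse_series([Fraction((-1) ** k, k + 1) for k in range(N)], N)

print(exp_c[1:6])  # -1/2, 1/12, 0, -1/720, 0: only odd powers after the constant
print(exp_c[12])   # the numerator 691 shows up
print(log_c[1:5])  # 1/2, -1/12, 1/24, -19/720
```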
The polynomially convex hull $\widehat{K}$ of a compact set $K\subset\mathbb{C}$ is defined as the set of all points $z$ such that the inequality $|p(z)|\le\max_{K}|p|$ (a form of the maximum principle) holds for every polynomial $p$. For example, the polynomially convex hull of a simple closed curve is the union of that curve with its interior region. In general, this process fills up the holes in the set $K$, resulting in the complement of the unbounded connected component of $\mathbb{C}\setminus K$.
We can recover the usual convex hull from this construction by restricting to the polynomials of first degree. Indeed, when $p$ is linear, the set $\{z\colon |p(z)|\le\max_K|p|\}$ is a closed disk, and we end up with the intersection of all closed disks that contain $K$. This is precisely the convex hull of $K$.
What if we restrict to the polynomials of degree at most $d$? Let’s call the resulting set the degree-$d$ convex hull of $K$, denoted $\widehat{K}_d$. By definition, it is contained in the convex hull and contains the polynomially convex hull. To exactly compute $\widehat{K}_d$ for general sets appears to be difficult even when $d=2$.
Consider finite sets. When $K$ has at most $d$ points, we have $\widehat{K}_d=K$ because there is a polynomial of degree at most $d$ whose zero set is precisely $K$. So, the first nontrivial case is of $K$ having $d+1$ points. Let us write $K=\{z_0,\dots,z_d\}$.
Depending on the location of the points, $\widehat{K}_d$ may be strictly larger than $K$. For example, if $K$ consists of the vertices of a regular $(d+1)$-gon, then $\widehat{K}_d$ also contains its center. Here is why. By a linear transformation, we can make sure that $K=\{1,\omega,\dots,\omega^d\}$ where $\omega=\exp\left(\frac{2\pi i}{d+1}\right)$. For $m=1,\dots,d$ we have $\sum_{k=0}^d \omega^{km}=0$. Hence, for any polynomial $p$ of degree at most $d$, the sum $\sum_{k=0}^d p(\omega^k)$ is equal to $(d+1)\,p(0)$. This implies $|p(0)|\le\max_K|p|$, a kind of a discrete maximum principle.
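The averaging identity over the polygon's vertices is easy to test numerically (my own sketch, with an arbitrary choice of degree bound and random coefficients):

```python
import cmath
import random

random.seed(1)
d = 6  # degree bound; the regular polygon has d + 1 vertices
w = cmath.exp(2j * cmath.pi / (d + 1))
vertices = [w**k for k in range(d + 1)]

# A random polynomial of degree at most d.
coeffs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(d + 1)]
def p(z):
    return sum(c * z**k for k, c in enumerate(coeffs))

# The average of p over the vertices equals the value at the center.
avg = sum(p(v) for v in vertices) / (d + 1)
print(abs(avg - p(0)))  # ~ 0 up to rounding
```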
A more systematic approach is to use the Lagrange basis polynomials, that is, the polynomials $\ell_j$ of degree $d$ which satisfy $\ell_j(z_k)=\delta_{jk}$. Since $p=\sum_{j=0}^d p(z_j)\,\ell_j$ for any polynomial $p$ of degree at most $d$, it follows that $z\in\widehat{K}_d$ if and only if $\left|\sum_{j=0}^d c_j\ell_j(z)\right|\le\max_j|c_j|$ holds for every choice of scalars $c_0,\dots,c_d$. The latter is equivalent to the inequality $\sum_{j=0}^d|\ell_j(z)|\le 1$.
This leads us to consider the function $S(z)=\sum_{j=0}^d|\ell_j(z)|$, the sum of the absolute values of the Lagrange basis polynomials. (Remark: $S$ is called a Lebesgue function for this interpolation problem.) Since $\sum_{j=0}^d \ell_j\equiv 1$, it follows that $S\ge 1$ everywhere. By construction, $S=1$ on $K$. At a point $z\notin K$, the equality $S(z)=1$ holds if and only if the argument of $\ell_j(z)$ is the same for all $j$.
In the trivial case $d=1$, the function $S$ is equal to $1$ precisely on the line segment with endpoints $z_0,z_1$. Of course, this only repeats what we already knew: the degree-1 convex hull is the ordinary convex hull.
If $K=\{x_0<x_1<\dots<x_d\}$ with real $x_j$ and $d\ge 2$, then $\widehat{K}_d=K$. Indeed, if $x\in\widehat{K}_d\setminus K$, then $x$ lies in the convex hull of $K$, which is the real segment $[x_0,x_d]$, and therefore $x_j<x<x_{j+1}$ for some $j$. The basis polynomial $\ell_j$ is positive at $x$, since it is equal to $1$ at $x_j$ and does not vanish on $(x_j,x_{j+1})$. Since a polynomial changes its sign at every simple zero, it follows that $\ell_{j+2}(x)<0$. Well, there is no $\ell_{j+2}$ if $j+2>d$, but in that case, the same reasoning applies to $\ell_{j-1}$. In any case, the conclusion is that the argument of $\ell_j(x)$ cannot be the same for all $j$, so $S(x)>1$.
At this point one might guess that the vertex set of a regular polygon is the only kind of finite sets that admit a nontrivial discrete maximum principle for polynomials. But this is not so: the vertices of a rhombus work as well. Indeed, if $K=\{a,-a,ib,-ib\}$ with $a,b>0$, then $b^2\left(p(a)+p(-a)\right)+a^2\left(p(ib)+p(-ib)\right)=2(a^2+b^2)\,p(0)$ for all polynomials $p$ of degree at most $3$ (the odd-degree terms cancel within each pair, and the quadratic terms cancel between the pairs), hence $|p(0)|\le\max_K|p|$.
The vertices of a non-square rectangle do not work: if $K$ is the set of these vertices, the associated function $S$ is strictly greater than $1$ on the complement of $K$.
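Both the rhombus claim and the rectangle claim can be checked by evaluating the Lebesgue function $S$ at the center (my own sketch; the sample rhombus has half-diagonals $a=1$, $b=\frac12$, and the sample rectangle is $2\times 4$):

```python
def lebesgue_S(nodes, z):
    """S(z): the sum of |ell_j(z)| over the Lagrange basis for the given nodes."""
    total = 0.0
    for j, zj in enumerate(nodes):
        ell = 1 + 0j
        for k, zk in enumerate(nodes):
            if k != j:
                ell *= (z - zk) / (zj - zk)
        total += abs(ell)
    return total

rhombus = [1, -1, 0.5j, -0.5j]                  # a = 1, b = 1/2
rectangle = [1 + 2j, 1 - 2j, -1 + 2j, -1 - 2j]  # non-square

print(lebesgue_S(rhombus, 0))    # = 1: the center belongs to the degree-3 hull
print(lebesgue_S(rectangle, 0))  # = 1.25 > 1: the center fails the test
```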
Are there any other finite sets that support a discrete maximum principle for polynomials?
Only those with a count of 4 or greater are included. Not counting the deceased. Considering CUNY as a single institution. Source of data.
Top 10 changes compared to the 2019 ranking: MIT takes sole possession of 3rd place with Berkeley dropping to 4th; UIUC rises from 8th to 6th; Princeton drops from 6th to 8th; Stanford rises from 11th to 9th; Wisconsin-Madison drops from 8th to 10th; Illinois-Chicago rises from 13th to 10th. Due to ties, the “top 10” are actually the top 12.
Honorable mention: Texas A&M rises from 19th to 16th. Having once set “top 20” as the goal of their “Vision 2020” campaign, they achieved it, at least by this measure.
Rutgers, The State University of New Jersey (New Brunswick): 44
This post is related to Extremal Taylor polynomials, where it was important to observe that the Taylor polynomials of the function considered there do not have zeros in the unit disk. Let’s see how far this generalizes.
The function $\frac{1}{1+z}$ has the rare property that all zeros of its Taylor polynomials have unit modulus. This is clear from $\sum_{k=0}^n(-z)^k=\frac{1-(-z)^{n+1}}{1+z}$: the zeros of the $n$th Taylor polynomial are the $(n+1)$st roots of unity multiplied by $-1$, excluding $z=-1$ itself.
In this and subsequent illustrations, the zeros of the first 50 Taylor polynomials are shown as blue dots, with the unit circle in red for reference.
When the exponent $p$ in the more general family $(1+z)^p$ is less than $-1$, the zeros move inside the unit disk and begin forming nice patterns in there.
When the exponent is strictly between -1 and 1, the zeros are all outside of the unit disk. Some of them get quite large, forcing a change of scale in the image.
Why does this happen when the exponent approaches 1? The function $(1+z)^1=1+z$ is its own Taylor polynomial, and has its only zero at $z=-1$. So, when $p\approx 1$, the Taylor polynomials are small perturbations of $1+z$. These perturbations of coefficients have to create additional zeros, but being small, they require a large value of $|z|$ to help them.
For a specific example, the quadratic Taylor polynomial of $(1+z)^p$ is $1+pz+\frac{p(p-1)}{2}z^2$, with roots $\frac{-p\pm\sqrt{2p-p^2}}{p(p-1)}$. When $p\approx 1$, one of these roots is near $-1$ (as it has to be) and the other is large.
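A quick check of the two roots (a sketch; it assumes, as above, that the family under consideration is $(1+z)^p$):

```python
import cmath

def quad_taylor_roots(p):
    """Roots of 1 + p z + p(p-1)/2 z^2, the quadratic Taylor polynomial of (1+z)^p."""
    a, b, c = p * (p - 1) / 2, p, 1
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

small, large = sorted(quad_taylor_roots(0.99), key=abs)
print(small)  # close to -1
print(large)  # about 201: far outside the unit circle
```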
Finally, when $p>1$ and $p$ is not an integer, we get zeros on both sides of the unit circle. The majority of them are still outside. A prominent example of an interior zero is produced by the first-degree polynomial $1+pz$, which vanishes at $z=-1/p$.
Let $G$ be a graph with vertices $v_1,\dots,v_n$. The degree of vertex $v_i$ is denoted $d_i$. Let $L$ be the Laplacian matrix of $G$, so that $L_{ii}=d_i$, $L_{ij}=-1$ when the vertices $v_i$ and $v_j$ are adjacent, and $L_{ij}=0$ otherwise. The eigenvalues of $L$ are written as $\lambda_1\le\lambda_2\le\dots\le\lambda_n$.
The graph is regular if all vertices have the same degree: $d_1=d_2=\dots=d_n$. How can this property be seen from its Laplacian eigenvalues?
Since the sum of eigenvalues is equal to the trace, we have $\sum\lambda_i=\sum d_i$. Moreover, $\sum\lambda_i^2$ is the trace of $L^2$, which is equal to the sum of the squares of all entries of $L$ (recall that $L$ is symmetric). This sum is $\sum(d_i^2+d_i)$ because the $i$th row of $L$ contains one entry equal to $d_i$ and $d_i$ entries equal to $-1$. In conclusion, $\sum\lambda_i^2=\sum d_i^2+\sum d_i$.
The Cauchy-Schwarz inequality says that $n\sum d_i^2\ge\left(\sum d_i\right)^2$, with equality if and only if all numbers $d_i$ are equal, i.e., the graph is regular. In terms of eigenvalues, this means that the difference $\sum\lambda_i^2-\sum\lambda_i-\frac{1}{n}\left(\sum\lambda_i\right)^2$
is always nonnegative, and is equal to zero precisely when the graph is regular. This is how one can see the regularity of a graph from its Laplacian spectrum.
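Since both sums of powers of eigenvalues are traces, this regularity test can be run without computing a single eigenvalue. A minimal sketch (graph examples are my own):

```python
def laplacian(n, edges):
    """Laplacian matrix of a graph on vertices 0..n-1 with the given edge list."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][j] -= 1
        L[j][i] -= 1
        L[i][i] += 1
        L[j][j] += 1
    return L

def regularity_gap(n, edges):
    """sum(lambda^2) - sum(lambda) - (1/n)(sum lambda)^2, computed via traces."""
    L = laplacian(n, edges)
    tr = sum(L[i][i] for i in range(n))                          # = sum of degrees
    tr2 = sum(L[i][j] ** 2 for i in range(n) for j in range(n))  # = tr L^2 (L is symmetric)
    return tr2 - tr - tr * tr / n

cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]  # 2-regular
path3 = [(0, 1), (1, 2)]                   # degrees 1, 2, 1
print(regularity_gap(4, cycle4))  # 0.0 for a regular graph
print(regularity_gap(3, path3))   # positive otherwise
```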
As an aside, $\sum\lambda_i=\sum d_i$ is an even integer. Indeed, the sum of degrees is even because it double-counts the edges. Hence the number of vertices of odd degree is even, which implies that $\sum d_i^k$ is even for every positive integer $k$.
Up to a constant factor, the above difference is simply the degree variance: the variance of the sequence $d_1,\dots,d_n$. What graph maximizes it for a given $n$? We want to have some very large degrees and some very small ones.
Let $G$ be the union of the complete graph $K_m$ on $m$ vertices and $n-m$ isolated vertices. The sum of degrees is $m(m-1)$ and the sum of squares of degrees is $m(m-1)^2$. Hence, the degree variance is $\frac{m(m-1)^2}{n}-\left(\frac{m(m-1)}{n}\right)^2=\frac{m(m-1)^2(n-m)}{n^2}$.
For $n\le 6$ the maximum is attained by $m=n-1$, that is, there is one isolated vertex. For $n=7$ the maximum is attained by $m=5$. In general it is attained by $m\approx 3n/4$.
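A brute-force check of the optimal clique size (my own sketch, assuming the degree variance of a clique on $m$ vertices plus isolated vertices works out to $m(m-1)^2(n-m)/n^2$, as in the computation above):

```python
def best_clique_size(n):
    """The m in 1..n maximizing the degree variance m(m-1)^2 (n-m) / n^2."""
    return max(range(1, n + 1), key=lambda m: m * (m - 1) ** 2 * (n - m))

for n in range(4, 13):
    print(n, best_clique_size(n))  # m = n - 1 up to n = 6, then roughly 3n/4
```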
The graph $K_m\cup\overline{K_{n-m}}$ is disconnected. But any graph has the same degree variance as its complement. And the complement is always connected: it consists of a “center”, a complete graph on $n-m$ vertices, and a “periphery”, a set of $m$ vertices that are connected to each central vertex but not to one another. Put another way, the complement is obtained from the complete bipartite graph $K_{m,\,n-m}$ by connecting all vertices of the $(n-m)$-vertex group together.
Tom A. B. Snijders (1981) proved that these graphs, $K_m\cup\overline{K_{n-m}}$ and its complement, are the only graphs maximizing the degree variance; in particular, the complement is the unique maximizer among the connected graphs. It is pictured below.