The definition of uniform continuity (if it’s done right) can be phrased as: $f$ is uniformly continuous if there exists a function $\omega\colon [0,\infty)\to[0,\infty)$, with $\omega(\delta)\to 0$ as $\delta\to 0^+$, such that $\operatorname{diam} f(E)\le \omega(\operatorname{diam} E)$ for every set $E$. Indeed, when $E$ is a two-point set $\{a,b\}$ this is the same as $|f(a)-f(b)|\le \omega(|a-b|)$, with $\omega$ a modulus of continuity. Allowing general sets $E$ does not change anything, since the diameter is determined by two-point subsets.
Does it make a difference if we ask for $\operatorname{diam} f(E)\le \omega(\operatorname{diam} E)$ only for connected sets $E$? For functions defined on the real line, or on an interval of the line, there is no difference: we can just consider the intervals $E=[a,b]$ and obtain
$$|f(b)-f(a)| \le \operatorname{diam} f([a,b]) \le \omega(b-a)$$
as before.
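For instance, for $f(x)=\sqrt{x}$ on $[0,\infty)$ one can take $\omega(\delta)=\sqrt{\delta}$, since for $0\le a\le b$
$$\operatorname{diam} f([a,b])=\sqrt{b}-\sqrt{a}=\frac{b-a}{\sqrt{b}+\sqrt{a}}\le\frac{b-a}{\sqrt{b-a}}=\sqrt{b-a}.$$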
However, the situation does change for maps defined on a non-convex domain. Consider the principal branch of the square root, $f(z)=\sqrt{z}$, defined on the slit plane $D=\mathbb{C}\setminus(-\infty,0]$.
Conformal map of a slit domain is not uniformly continuous
This function is continuous on $D$ but not uniformly continuous, since $f(-1+i\varepsilon)-f(-1-i\varepsilon)\to 2i$ as $\varepsilon\to 0^+$ even though the distance between the two points tends to $0$. Yet it satisfies $\operatorname{diam} f(E)\le \omega(\operatorname{diam} E)$ for connected subsets $E\subset D$, where one can take $\omega(\delta)=C\sqrt{\delta}$. I won’t do the estimates; let’s just note that although the points $-1\pm i\varepsilon$ are close to each other, any connected subset of $D$ containing both of them has diameter greater than 1, since it must cross the real axis at a positive point.
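A quick numeric sketch of the failure of uniform continuity (numpy’s square root of a complex argument is the principal branch):

import numpy as np

for eps in [1e-1, 1e-3, 1e-5]:
    a, b = -1 + 1j*eps, -1 - 1j*eps
    # the first number tends to 0 while the second stays near 2
    print(abs(a - b), abs(np.sqrt(a) - np.sqrt(b)))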
These points are far apart with respect to the inner diameter metric
In a way, this is still uniform continuity, just with respect to a different metric. Given a metric space $(X,d)$, one can define the inner diameter metric $\rho$ on $X$ by letting $\rho(a,b)$ be the infimum of diameters of connected sets that contain both $a$ and $b$. This is indeed a metric if the space is reasonable enough (e.g., if any two points are contained in some bounded connected set). On a convex subset of $\mathbb{R}^n$, the inner diameter metric coincides with the Euclidean metric: any connected set containing $a$ and $b$ has diameter at least $|a-b|$, while the line segment $[a,b]$ has diameter exactly $|a-b|$.
One might think that the equality $\rho=d$ should imply that the domain is convex, but this is not so. Indeed, consider the union of three quadrants of the plane, say $U=\{(x,y): x>0 \text{ or } y>0\}$. Any two points of $U$ can be connected by going up from whichever point is lower, and then moving horizontally; this path consists of the two legs of a right triangle whose hypotenuse joins the two points. The diameter of a right triangle is equal to its hypotenuse, which is the Euclidean distance between the points we started with.
A non-convex domain where inner diameter metric is the same as Euclidean
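Here is a small numeric sanity check of my own, sampling pairs in $U$ and comparing the diameter of the up-then-horizontal path with the Euclidean distance:

import numpy as np

rng = np.random.default_rng(0)

def sample_U():
    # U is the complement of the closed third quadrant
    while True:
        p = rng.uniform(-1, 1, size=2)
        if p[0] > 0 or p[1] > 0:
            return p

for _ in range(5):
    a, b = sample_U(), sample_U()
    lo, hi = (a, b) if a[1] <= b[1] else (b, a)
    corner = np.array([lo[0], hi[1]])  # go up from the lower point, then move horizontally
    # the path is two legs of a right triangle; its diameter is the largest
    # pairwise distance among the three vertices
    diam = max(np.linalg.norm(lo - hi), np.linalg.norm(lo - corner), np.linalg.norm(hi - corner))
    print(np.isclose(diam, np.linalg.norm(a - b)))  # True: path diameter = Euclidean distance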
Inner diameter metric comes up (often implicitly) in complex analysis. By the Riemann mapping theorem, every simply connected domain $\Omega\subset\mathbb{C}$, other than $\mathbb{C}$ itself, admits a conformal map onto the unit disk $\mathbb{D}$. This map need not be uniformly continuous in the Euclidean metric (the slit plane is one example), but it is uniformly continuous with respect to the inner diameter metric on $\Omega$.
Furthermore, by normalizing the situation in a natural way (say, $f(z_0)=0$ and $\operatorname{dist}(z_0,\partial\Omega)=1$), one can obtain a uniform modulus of continuity for all conformal maps onto the unit disk, whatever the domain is. This uniform modulus of continuity can be taken of the form $\omega(\delta)=C\sqrt{\delta}$ for some universal constant $C$. Informally speaking, this means that a slit domain is the worst that can happen to the continuity of a conformal map. This fact isn’t often mentioned in complex analysis books. A proof can be found in the book Conformally Invariant Processes in the Plane by Gregory Lawler, Proposition 3.85. A more elementary proof, with a rougher estimate for the modulus of continuity, is on page 15 of lecture notes by Mario Bonk.
In a 1946 paper Charles Pisot proved a theorem involving a curious constant $\theta=0.843\ldots$. It can be defined as follows:
$$\theta=\sup\{\,r>0 : \text{there exists a monic polynomial } p \text{ such that } |p(e^z)|\le 1 \text{ whenever } |z|\le r\,\}$$
Equivalently, $\theta$ is determined by the requirement that the set $E(\theta)$ have logarithmic capacity 1, where $E(r)=\{e^z : |z|\le r\}$; this won’t be used here. The theorem is stated below, although this post is really about the constant.
Theorem: If an entire function $f$ takes integer values at nonnegative integers and is $O(e^{\beta|z|})$ for some $\beta<\theta$, then it is a finite linear combination of terms of the form $z^n\alpha^z$, where each $\alpha$ is an algebraic integer.
The value of $\theta$ is best possible; thus, in some sense Pisot’s theorem completed a line of investigation that began with a 1915 theorem by Pólya, which had $\log 2\approx 0.693$ in place of $\theta$, and where the conclusion was that $f$ is a polynomial. (Informally speaking, Pólya proved that $2^z$ is the “smallest” transcendental entire function that is integer-valued on nonnegative integers.)
Although the constant was mentioned in later literature (here, here, and here), no further digits of it have been stated anywhere, as far as I know. So, let it be known that the decimal expansion of $\theta$ begins with 0.84383.
A lower bound on $\theta$ can be obtained by constructing a monic polynomial that is bounded by 1 in absolute value on the set $E(r)$. Here is $E(0.843)$:
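(The picture is easy to reproduce; here is a short matplotlib sketch of my own, drawing the boundary curve $\{e^z : |z|=r\}$.)

import numpy as np
import matplotlib.pyplot as plt

r = 0.843
theta = np.linspace(0, 2*np.pi, 1000)
boundary = np.exp(r * np.exp(1j*theta))  # image of the circle |z| = r under exp
plt.plot(boundary.real, boundary.imag)
plt.gca().set_aspect('equal')
plt.title('Boundary of E(0.843)')
plt.show()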
It looks pretty round, except for that flat part on the left. In fact, $E(0.82)$ is covered by a disk of unit radius centered at $1.3$, which means that the choice $p(z)=z-1.3$ shows $\theta>0.82$.
p(z) = z-1.3 gives lower bound 0.82
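This containment is easy to check numerically; a quick sketch (by symmetry it suffices to sample the upper half of the boundary):

import numpy as np

theta = np.linspace(0, np.pi, 100000)
boundary = np.exp(0.82 * np.exp(1j*theta))
print(np.abs(boundary - 1.3).max())  # largest distance to 1.3; comes out just below 1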
How to get an upper bound on $\theta$? Turns out, it suffices to exhibit a monic polynomial $q$ that has all of its zeros in $E(r)$ and satisfies $|q|>1$ on the boundary of $E(r)$. The existence of such $q$ shows $\theta<r$. Indeed, suppose that $p$ is monic and $|p|\le 1$ on $E(r)$. Consider the function $u=\deg q\cdot\log|p| - \deg p\cdot\log|q|$. By construction, $u<0$ on the boundary of $E(r)$. Also, $u$ is subharmonic in its complement, including $\infty$, where the singularities of both logarithms cancel out, leaving $u(\infty)=0$. This contradicts the maximum principle for subharmonic functions, according to which $u(\infty)$ cannot exceed the maximum of $u$ on the boundary.
A linear polynomial again suffices: for instance, the choice of $q(z)=z-1.42$ works for $r=0.89$.
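(The specific center $1.42$ is my reconstruction; the check is the same one-liner as before, now taking the minimum over the boundary.)

import numpy as np

theta = np.linspace(0, np.pi, 100000)
boundary = np.exp(0.89 * np.exp(1j*theta))
print(np.abs(boundary - 1.42).min())  # smallest distance to 1.42; comes out just above 1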
So we have $\theta$ boxed between $0.82$ and $0.89$; how to get more precise bounds? I don’t know how Pisot achieved the precision of $0.843\ldots$; it’s possible that he strategically picked some linear and quadratic factors, raised them to variable integer powers, and optimized those powers. Today it is too tempting to throw an optimization routine at the problem and let it run for a while.
But what to optimize? The straightforward approach is to minimize the maximum of $|p|$ over the boundary of $E(r)$, approximated by sampling the function on a sufficiently fine uniform grid and picking the maximal value. This works… unspectacularly. One problem is that the objective function is not differentiable. Another is that taking the maximum throws away a lot of information: we are not using the values at the other sample points to better direct the search. After running the optimization for days, trying different optimization methods, tolerance options, degrees of the polynomial, and starting values, I was not happy with the results…
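For concreteness, the straightforward objective would look like this (a sketch of mine, relying on the sample array s and the integer n from the full script below):

def obj_max(x):
    # x holds the n real parts followed by the n imaginary parts of the zeros
    rc = np.concatenate((x[:n] + 1j*x[n:], x[:n] - 1j*x[n:])).reshape(-1, 1)
    return np.max(np.prod(np.abs(s - rc), axis=0))  # worst sampled value of |p|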
Turns out, the optimization is much more effective if one minimizes the variance of the set $\{|p(\zeta_k)|^2\}$, where $\zeta_k$ are the sample points. Now we are minimizing a polynomial function of the real and imaginary parts of the zeros, and the objective pushes the sampled values toward having the same absolute value, the behavior that we want the polynomial to have. It took from seconds to minutes to produce the polynomials shown below, using the BFGS method as implemented in SciPy.
As the arguments of the objective function I took the real and imaginary parts of the zeros of the polynomial. The symmetry about the real axis was enforced automatically: the polynomial was the product of quadratic terms $(z-z_k)(z-\bar{z}_k)$. This eliminated the potentially useful option of having real zeros of odd order, but I did not feel like special-casing those.
Again, nearly the same polynomial works for the upper and lower bounds. The fact that the absolute value of each of these polynomials stays below 1 (for lower bounds) or above 1 (for upper bounds) on the entire boundary can be ascertained by sampling the polynomial and using an upper estimate on its derivative between sample points; there is enough margin to trust computations with double precision.
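In more detail (notation mine: $h$ is the arclength spacing of consecutive sample points $\zeta_k$, and $M$ is an upper bound for $|p'|$ along the curve): any boundary point $\zeta$ lies within arclength $h/2$ of some sample $\zeta_k$, so
$$\bigl|\,|p(\zeta)|-|p(\zeta_k)|\,\bigr|\le |p(\zeta)-p(\zeta_k)|\le M\cdot\frac{h}{2}.$$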
Finally, the Python script I used. The function obj is what gets minimized, while the function values returns the actual quantities of interest: the minimum and maximum of $|p|$ over the sample set. The degree of the polynomial is 2n, and the radius under consideration is r. The sample points are collected in the array s. To begin with, the roots are chosen randomly. After each minimization run (inevitably ending in a local minimum, of which there are myriads), the new starting point is obtained by randomly perturbing the local minimum just found. (The perturbation is smaller if the minimization was particularly successful.)
import numpy as np
from scipy.optimize import minimize

def obj(x):
    # x holds the n real parts followed by the n imaginary parts of the zeros;
    # the zeros are paired with their complex conjugates
    rc = np.concatenate((x[:n] + 1j*x[n:], x[:n] - 1j*x[n:])).reshape(-1, 1)
    p = np.prod(np.abs(s - rc)**2, axis=0)  # |p(zeta)|^2 at each sample point
    return np.var(p)

def values(x):
    rc = np.concatenate((x[:n] + 1j*x[n:], x[:n] - 1j*x[n:])).reshape(-1, 1)
    p = np.prod(np.abs(s - rc), axis=0)  # |p(zeta)| at each sample point
    return [np.min(p), np.max(p)]

r = 0.84384  # radius under consideration
n = 10  # half the degree of the polynomial
record = 2  # best objective value seen so far
# sample points on the boundary of E(r); by symmetry, the upper half suffices
s = np.exp(r * np.exp(1j*np.arange(0, np.pi, 0.01)))
xr = np.random.uniform(0.8, 1.8, size=(n,))
xi = np.random.uniform(0, 0.7, size=(n,))
x0 = np.concatenate((xr, xi))

while True:
    res = minimize(obj, x0, method='BFGS')
    if res['fun'] < record:
        record = res['fun']
        print(repr(res['x']))
        print(values(res['x']))
        # small perturbation after a successful run
        x0 = res['x'] + np.random.uniform(-0.001, 0.001, size=x0.shape)
    else:
        # larger perturbation to escape an unpromising local minimum
        x0 = res['x'] + np.random.uniform(-0.05, 0.05, size=x0.shape)
This is a marvelous exercise in complex analysis; I heard it from Steffen Rohde but don’t remember the original source.
Let $\mathbb{D}=\{z\in\mathbb{C} : |z|<1\}$. Suppose that a function $f\colon\mathbb{D}\to\mathbb{D}$ satisfies the following property: for every three points $z_1,z_2,z_3\in\mathbb{D}$ there exists a holomorphic function $g\colon\mathbb{D}\to\mathbb{D}$ such that $g(z_k)=f(z_k)$ for $k=1,2,3$. Prove that $f$ is holomorphic.
No solution here, just some remarks.
The domain does not matter, because holomorphicity is a local property.
The codomain matters: $\mathbb{D}$ cannot be replaced by $\mathbb{C}$. Indeed, for any function $f\colon\mathbb{D}\to\mathbb{C}$ and any finite set $S\subset\mathbb{D}$ there is a holomorphic function that agrees with $f$ at $S$, namely an interpolating polynomial. (A numeric sketch of this follows the remarks.)
Two points would not be enough. For example, $f(z)=\operatorname{Re} z$ passes the two-point test but is not holomorphic.
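To illustrate the interpolation remark, here is a quick numeric sketch of mine (the function and the three points are arbitrary choices):

import numpy as np

f = lambda z: np.conj(z)  # not holomorphic
zs = np.array([0.1, 0.2 + 0.3j, -0.4j])  # three points in the unit disk
# the interpolating polynomial of degree 2 is holomorphic and matches f at zs
coeffs = np.linalg.solve(np.vander(zs), f(zs))
print(np.abs(np.polyval(coeffs, zs) - f(zs)).max())  # ~0 (machine precision)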
Perhaps the last item is not immediately obvious. Given two points $a,b\in\mathbb{D}$, let $\rho$ denote the hyperbolic metric of the disk. The hyperbolic distance between $a$ and $b$ is the infimum of the hyperbolic length $\int_\gamma \frac{2\,|dz|}{1-|z|^2}$ taken over all curves $\gamma$ connecting $a$ to $b$. Projecting onto the real axis, we obtain a parametrized curve $\operatorname{Re}\gamma$ connecting $\operatorname{Re} a$ to $\operatorname{Re} b$.
Projection does not increase the hyperbolic distance.
Since
$$\int \frac{2\,|\operatorname{Re}\gamma'(t)|}{1-(\operatorname{Re}\gamma(t))^2}\,dt \le \int \frac{2\,|\gamma'(t)|}{1-|\gamma(t)|^2}\,dt,$$
it follows that $\rho(\operatorname{Re} a,\operatorname{Re} b)\le\rho(a,b)$. That is, $f(z)=\operatorname{Re} z$ is a nonexpanding map in the hyperbolic metric of the disk.
We can assume that $\operatorname{Re} a=0$; the general case reduces to this one by postcomposing with a Möbius automorphism of $\mathbb{D}$ that preserves the real line and sends $\operatorname{Re} a$ to $0$. So we need $g$ with $g(a)=0$ and $g(b)=\beta$, where $\beta\in(-1,1)$ and $\rho(0,\beta)\le\rho(a,b)$. There is a Möbius map $\phi$ such that $\phi(a)=0$; moreover, we can arrange that $\phi(b)$ is a real number greater than $0$, by applying a hyperbolic rotation about $0$. Since $\phi$ is a hyperbolic isometry, $\rho(0,\phi(b))=\rho(a,b)\ge\rho(0,\beta)$, which implies $|\beta|\le\phi(b)$. Let $h(z)=\frac{\beta}{\phi(b)}\,z$; this is a Euclidean homothety such that $h(0)=0$ and $h(\phi(b))=\beta$. By convexity of $\mathbb{D}$, $h(\mathbb{D})\subset\mathbb{D}$. The map $g=h\circ\phi$ achieves $g(z_k)=f(z_k)$ for $k=1,2$.
The preceding can be immediately generalized: $f$ passes the two-point test if and only if it is a nonexpanding map in the hyperbolic metric. Such maps need not be differentiable even in the real-variable sense.
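A quick numeric check of the nonexpansion claim for $f(z)=\operatorname{Re} z$ (my own sketch; rho implements the standard distance formula $\rho(a,b)=2\operatorname{artanh}\left|\frac{a-b}{1-\bar{a}b}\right|$):

import numpy as np

def rho(a, b):  # hyperbolic distance in the unit disk
    t = abs((a - b) / (1 - np.conj(a)*b))
    return 2*np.arctanh(t)

rng = np.random.default_rng(1)
for _ in range(5):
    a, b = (complex(p, q) for p, q in rng.uniform(-0.7, 0.7, size=(2, 2)))
    print(rho(a.real, b.real) <= rho(a, b))  # True: projection is nonexpanding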
However, the three-point test is a different story.