In a 1946 paper Charles Pisot proved a theorem involving a curious constant $r_0 = 0.843\dots$. It can be defined as follows: $r_0$ is the supremum of all $r > 0$ for which there exists a monic polynomial $p$ such that $|p(e^z)| \le 1$ whenever $|z| \le r$. Equivalently, $r_0$ is determined by the requirement that the set $\{e^z \colon |z| \le r_0\}$ have logarithmic capacity 1; this won’t be used here. The theorem is stated below, although this post is really about the constant.
Theorem: If an entire function takes integer values at nonnegative integers and is $O(e^{c|z|})$ for some $c < r_0$, then it is a finite linear combination of terms of the form $z^k \alpha^z$, where each $\alpha$ is an algebraic integer.
The value of $r_0$ is best possible; thus, in some sense Pisot’s theorem completed a line of investigation that began with a 1915 theorem by Pólya which had $\log 2 \approx 0.693$ in place of $r_0$, and where the conclusion was that the function is a polynomial. (Informally speaking, Pólya proved that $2^z$ is the “smallest” entire function that is integer-valued on nonnegative integers.)
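For illustration: $2^z = e^{z\log 2}$ satisfies $|2^z| = e^{(\operatorname{Re} z)\log 2} \le e^{|z|\log 2}$, so it meets the growth bound with exponent exactly $\log 2$, takes integer values at $0, 1, 2, \dots$, and is not a polynomial; this is what makes $\log 2$ sharp in Pólya’s theorem.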
Although the constant was mentioned in later literature (here, here, and here), no further digits of it have been stated anywhere, as far as I know. So, let it be known that the decimal expansion of $r_0$ begins with 0.84383.
A lower bound on $r_0$ can be obtained by constructing a monic polynomial that is bounded by 1 on the set $E(r) = \{e^z \colon |z| \le r\}$. Here is E(0.843):
It looks pretty round, except for that flat part on the left. In fact, E(0.82) is covered by a disk of unit radius centered at 1.3, which means that the choice $p(z) = z - 1.3$ shows $r_0 \ge 0.82$.
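The covering claim is easy to test numerically. Here is a small sanity check (the grid step is my choice; a sampled maximum is not a proof, but the margin is comfortable):

import numpy as np

# boundary of E(0.82): the image of the circle |z| = 0.82 under the exponential map;
# the maximum of |z - 1.3| over E(0.82) is attained on this boundary
t = np.arange(0, 2*np.pi, 0.001)
boundary = np.exp(0.82 * np.exp(1j*t))

# distance to the center 1.3; the covering claim says this never exceeds 1
print(np.max(np.abs(boundary - 1.3)))   # comes out just under 1, around 0.998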

How to get an upper bound on $r_0$? Turns out, it suffices to exhibit a monic polynomial $q$ that has all zeros in $E(r)$ and satisfies $|q| > 1$ on the boundary of $E(r)$. The existence of such $q$ shows $r_0 \le r$. Indeed, suppose that $p$ is monic and $|p| \le 1$ on $E(r)$. Consider the function $u = \frac{1}{\deg p}\log|p| - \frac{1}{\deg q}\log|q|$. By construction $u < 0$ on the boundary of $E(r)$. Also, $u$ is subharmonic in its complement (the first term is subharmonic everywhere, while the second is harmonic off the zeros of $q$, all of which lie in $E(r)$), including $\infty$, where the singularities of both logarithms cancel out, leaving $u(\infty) = 0$. This contradicts the maximum principle for subharmonic functions, according to which $u(\infty)$ cannot exceed the maximum of $u$ on the boundary.
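In one line, the contradiction reads $0 = u(\infty) \le \max_{\partial E(r)} u < 0$.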
A suitable choice of $q$ works for $r = 0.89$, giving $r_0 \le 0.89$.
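Such a certificate is easy to test numerically. Below is a sketch (the function name, the sampling step, and the use of the principal logarithm to test membership in $E(r)$ are my choices; a sampled minimum only flags a candidate, it is not a proof by itself):

import numpy as np

def check_upper_bound_certificate(zeros, r, step=0.001):
    # zeros: all zeros of the monic polynomial q, conjugates included
    zeros = np.asarray(zeros, dtype=complex)
    # for r < pi, a point w lies in E(r) = {e^z : |z| <= r} exactly when |Log w| <= r
    inside = np.all(np.abs(np.log(zeros)) <= r)
    # sample the boundary of E(r) and evaluate |q| there
    boundary = np.exp(r * np.exp(1j * np.arange(0, 2*np.pi, step)))
    q_abs = np.prod(np.abs(boundary[:, None] - zeros[None, :]), axis=1)
    return inside, q_abs.min()   # want: True, and a minimum comfortably above 1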

So we have $r_0$ boxed between 0.82 and 0.89; how to get more precise bounds? I don’t know how Pisot achieved the precision of 0.843…; it’s possible that he strategically picked some linear and quadratic factors, raised them to variable integer powers, and optimized those powers. Today it is too tempting to throw some optimization routine at the problem and let it run for a while.
But what to optimize? The straightforward approach is to minimize the maximum of $|p(e^z)|$ on the circle $|z| = r$, approximated by sampling the function at a sufficiently fine uniform grid of points and picking the maximal value. This works… unspectacularly. One problem is that the objective function is non-differentiable. Another is that taking the maximum throws out a lot of information: we are not using the values at the other sample points to better direct the search. After running the optimization for days, trying different optimization methods, tolerance options, degrees of the polynomial, and starting values, I was not happy with the results…
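For reference, here is a minimal sketch of that straightforward objective, written in the style of the script at the end of the post (the function name is mine; s holds the boundary samples and n the number of zeros in the upper half-plane, as in that script):

import numpy as np

def obj_max(x, s, n):
    # x packs the real parts x[:n] and imaginary parts x[n:] of the zeros;
    # each zero is paired with its complex conjugate
    zeros = np.concatenate((x[:n] + 1j*x[n:], x[:n] - 1j*x[n:])).reshape(-1, 1)
    # |p| at each boundary sample, p being the monic polynomial with these zeros
    p_abs = np.prod(np.abs(s - zeros), axis=0)
    # non-smooth objective: the largest sampled value of |p|
    return np.max(p_abs)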
Turns out, the optimization is much more effective if one minimizes the variance of the set $\{|p(e^{z_k})|^2\}$, where $z_k$ are the grid points. Now we are minimizing a polynomial function of the values $p(e^{z_k})$, which pushes them toward having the same absolute value — the behavior that we want the polynomial to have. It took from seconds to minutes to produce the polynomials shown below, using the BFGS method as implemented in SciPy.
As the arguments of the objective function I took the real and imaginary parts of the zeros of the polynomial. The symmetry about the real axis was enforced automatically: the polynomial was the product of quadratic terms $(z - x_k - iy_k)(z - x_k + iy_k) = (z - x_k)^2 + y_k^2$. This eliminated the potentially useful option of having real zeros of odd order, but I did not feel like special-casing those.
Three digits

Real part: 0.916, 1.186, 1.54, 1.783
Imaginary part: 0.399, 0.572, 0.502, 0.199
Here and below, only the zeros with positive imaginary part are listed (in left-to-right order), the others being their conjugates.

Real part: 0.878, 1.0673, 1.3626, 1.6514, 1.8277
Imaginary part: 0.3661, 0.5602, 0.6005, 0.4584, 0.171
Four digits

Real part: 0.8398, 0.9358, 1.1231, 1.357, 1.5899, 1.776, 1.8788
Imaginary part: 0.3135, 0.4999, 0.6163, 0.637, 0.553, 0.3751, 0.1326

Real part: 0.8397, 0.9358, 1.1231, 1.3571, 1.5901, 1.7762, 1.879
Imaginary part: 0.3136, 0.5, 0.6164, 0.6372, 0.5531, 0.3751, 0.1326
No, I didn’t post the same picture twice. The polynomials are just that similar. But as the list of zeros shows, there are tiny differences…
Five digits

Real part: 0.81527, 0.8553, 0.96028, 1.1082, 1.28274, 1.46689, 1.63723, 1.76302, 1.82066, 1.86273
Imaginary part: 0.2686, 0.42952, 0.556, 0.63835, 0.66857, 0.63906, 0.54572, 0.39701, 0.23637, 0.08842

Real part: 0.81798, 0.85803, 0.95788, 1.09239, 1.25897, 1.44255, 1.61962, 1.76883, 1.86547, 1.89069
Imaginary part: 0.26631, 0.4234, 0.54324, 0.62676, 0.66903, 0.65366, 0.57719, 0.44358, 0.26486, 0.07896
Again, nearly the same polynomial works for the upper and lower bounds. The fact that the absolute value of each of these polynomials stays below 1 on $E(r)$ (for lower bounds) or above 1 on the boundary of $E(r)$ (for upper bounds) can be ascertained by sampling it and using an upper estimate on the derivative; there is enough margin to trust computations with double precision.
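As an illustration of the sampling part of that check, here is a sketch that rebuilds the monic polynomial from one of the zero lists above and reports its smallest and largest absolute values on the sampled boundary of $E(r)$ (the zeros are copied from the first “Three digits” list; the radius 0.843 is my choice for the example):

import numpy as np

# zeros with positive imaginary part, taken from the first "Three digits" list
x = np.array([0.916, 1.186, 1.54, 1.783])
y = np.array([0.399, 0.572, 0.502, 0.199])
zeros = np.concatenate((x + 1j*y, x - 1j*y))   # add the conjugate zeros back

r = 0.843                                       # radius of the disk defining E(r)
boundary = np.exp(r * np.exp(1j*np.arange(0, 2*np.pi, 0.001)))

# |p| at the sample points, p being the monic polynomial with these zeros
p_abs = np.prod(np.abs(boundary[:, None] - zeros[None, :]), axis=1)
print(p_abs.min(), p_abs.max())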
Finally, the Python script I used. The function “obj” is being minimized, while the function “values” returns the actual values of interest: the minimum and maximum of the absolute value of the polynomial at the sample points. The degree of the polynomial is 2n, and the radius under consideration is r. The sample points are collected in the array s. To begin with, the roots are chosen randomly. After a minimization run (inevitably ending in a local minimum, of which there are myriads), the new starting point is obtained by randomly perturbing the local minimum found. (The perturbation is smaller if the minimization was particularly successful.)
import numpy as np
from scipy.optimize import minimize

def obj(r):
    # r here is the vector of optimization variables (it shadows the global radius r):
    # real parts r[:n] and imaginary parts r[n:] of the zeros, plus their conjugates
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    # squared absolute value of the monic polynomial at each sample point
    p = np.prod(np.abs(s-rc)**2, axis=0)
    # the objective: variance of these squared values
    return np.var(p)

def values(r):
    # minimum and maximum of the absolute value of the polynomial at the sample points
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc), axis=0)
    return [np.min(p), np.max(p)]

r = 0.84384       # the radius under consideration
n = 10            # the polynomial has degree 2n
record = 2        # best objective value found so far

# sample points: the upper half of the boundary of E(r)
s = np.exp(r * np.exp(1j*np.arange(0, np.pi, 0.01)))

# random starting zeros
xr = np.random.uniform(0.8, 1.8, size=(n,))
xi = np.random.uniform(0, 0.7, size=(n,))
x0 = np.concatenate((xr, xi))

while True:
    res = minimize(obj, x0, method='BFGS')
    if res['fun'] < record:
        record = res['fun']
        print(repr(res['x']))
        print(values(res['x']))
        # small perturbation after an improvement
        x0 = res['x'] + np.random.uniform(-0.001, 0.001, size=x0.shape)
    else:
        # larger perturbation to escape an unpromising local minimum
        x0 = res['x'] + np.random.uniform(-0.05, 0.05, size=x0.shape)