Retraction by contraction

The Euclidean space {\mathbb R^n} has a nice property: every closed convex subset {C\subset \mathbb R^n} is the image of the whole space under a map {f} that is simultaneously:

  • a contraction, meaning {\|f(a)-f(b)\|\le \|a-b\|} for all {a,b\in\mathbb R^n};
  • a retraction, meaning {f(a)=a} for all {a\in C}.

Indeed, we can simply define {f(x)} to be the (unique) nearest point of {C} to {x}; it takes a bit of work to verify that {f} is a contraction, but not much.


In other normed spaces, this nearest point projection does not work that well. For example, take {X=\ell_1^2}, the two-dimensional space with the Manhattan metric. Consider the line {C=\{(2t,t)\colon t\in\mathbb R\}} which is a closed and convex set. The nearest point of {C} to {(2,0)} is {(2,1)}: moving straight up, since changing the first coordinate doesn’t pay off. Since {(0,0)} remains fixed, the nearest point projection increases some pairwise distances, in this case from {2} to {3}.

However, there is a contractive retraction onto this line, given by the formula {T(x_1,x_2) =    \left(\frac23(x_1+x_2), \frac13(x_1+x_2)\right)}. Indeed, this is a linear map that fixes the line {x_1=2x_2} pointwise and has norm {1} because

\displaystyle \|T(x)\|_1 = |x_1+x_2|\le \|x\|_1


More generally, in every normed plane, every closed convex subset admits a contractive retraction. To prove this, it suffices to consider closed halfplanes, since a closed convex set is an intersection of such, and contractions form a semigroup. Furthermore, it suffices to consider lines, because having a contractive retraction onto a line, we can redefine it to be the identity map on one side of the line, and get a contractive retraction onto a halfplane.

Such a retraction onto a line, which is a linear map, is illustrated below.

Every line in a normed plane is 1-complemented

Given the unit circle (black) and a line (blue), draw supporting lines (red) to the unit circle at the points where it meets the line. Project onto the blue line along the red ones. By construction, the image of the unit disk under this projection is contained in the unit disk. This precisely means that the map has operator norm {1}.


In spaces of dimension {3} or higher, there are closed convex subsets without a contractive retraction. For example, consider the plane in {\ell_\infty^3} passing through the points {A = (2,2,0)}, {B= (2,0,2)}, and {C= (0,2,2)}. This plane has equation {x_1+x_2+x_3=4}. The point {D=(1,1,1)} is at distance {1} from each of {A, B, C}, and it does not lie on the plane. For any point {E} other than {D}, at least one of the distances {AE, BE, CE} exceeds {1}. More precisely, the best place to project {D} is {(4/3, 4/3, 4/3)}, which is at distance {4/3} from {A}, {B}, and {C}. Hence any retraction onto this plane must send {D} to a point whose distance to one of {A, B, C} is at least {4/3}, while {D} itself is at distance {1} from each of them; such a retraction cannot be a contraction.
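
As a quick sanity check of the last claim, here is a small Python sketch (an illustration added here, not part of the original argument) that samples points {E=(u,v,4-u-v)} of the plane on a grid and records the smallest achievable value of {\max(\|E-A\|_\infty, \|E-B\|_\infty, \|E-C\|_\infty)}; it comes out close to {4/3}.

import numpy as np

A = np.array([2.0, 2.0, 0.0])
B = np.array([2.0, 0.0, 2.0])
C = np.array([0.0, 2.0, 2.0])

# Sample plane points E = (u, v, 4-u-v) and compute, for each of them,
# the largest of the three sup-norm distances to A, B, C.
u, v = np.meshgrid(np.linspace(0, 3, 601), np.linspace(0, 3, 601))
E = np.stack([u, v, 4 - u - v], axis=-1)
worst = np.max([np.abs(E - P).max(axis=-1) for P in (A, B, C)], axis=0)
print(worst.min())   # about 4/3: no point of the plane is within distance 1 of all of A, B, C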


Two natural questions: (a) is there a nice characterization of normed spaces that admit a contractive retraction onto every closed convex subset? (b) what is the smallest constant {L=L(n)} such that every {n}-dimensional normed space admits an {L}-Lipschitz retraction onto every closed convex subset?

(The answers may well be known, but not to me at present.)

Spam-fighting strategy: rationing API requests

Based on a true story

Given: {n} websites on which users can post stuff. For {k=1,\dots, n} the {k}th site gets {p_k} posts per day on average. Some of the posts are spam or otherwise harmful.

One of the ways to detect spam automatically is to send an API request to any of the sites, get post content and analyze it. The total number of such requests cannot exceed {10000} per day, while the total number of posts, namely {\sum p_k}, is substantially larger.

This necessitates rationing. The current strategy is to wait until a site has {a_k} new posts, and then request all of them at once. The upside is that only {p_k/a_k} requests per day are needed. The downside is that spam may sit undetected for a period of time proportional to {a_k/p_k}.

One would like to minimize the mean lifetime of spam. Assuming for now that spam is uniformly distributed among the sites, this lifetime is proportional to

\displaystyle \sum p_k \frac{a_k}{p_k} = \sum a_k

So, we need to minimize {\sum a_k} subject to {\sum p_k/a_k \le 10000}. The method of Lagrange multipliers says that the gradient of the objective function should be proportional to the gradient of the constraint. Hence,

\displaystyle    \frac{p_k}{a_k^2} = \lambda

with {\lambda} independent of {k}. This leads to {a_k = \sqrt{p_k/\lambda}}.

That is, the size of the API queue should be proportional to the square root of the level of activity.


An excellent proxy for overall activity on a site is QPD, questions per day. For example, the QPD ratio of Mathematics to Computer Science is currently 452:11, so the ratio of their queue sizes should be about {\sqrt{452/11} \approx 6.4} (in reality, the queue sizes are currently {10} and {2}, respectively).

The value of {\lambda} is easily found from the condition {\sum p_k/a_k = 10000} (pushing the API quota to the limit), with the final result

\displaystyle    a_k =\sqrt{p_k} \, \frac{\sum_{i=1}^n\sqrt{p_i}}{ 10000 }


In practice, spam is unevenly distributed. Let’s say that {s_k} is the proportion of spam on the {k}th site. Then the mean lifetime is proportional to

\displaystyle \sum p_k s_k \frac{a_k}{p_k} = \sum a_k s_k

This time, the method of Lagrange multipliers yields

\displaystyle    \frac{p_k}{a_k^2} = \lambda s_k

hence {a_k = \sqrt{p_k/(\lambda s_k)}}. That is, the size of the API queue should be proportional to {\sqrt{p_k/s_k}}.

Theoretically, the optimal size is

\displaystyle    a_k = \sqrt{p_k/s_k}\, \frac{\sum_{i=1}^n \sqrt{p_i s_i}}{10000}

In practice, one does not know {s_k} beyond some general idea of how spammy a site is. Drupal Answers gets a lot more spam than Computer Science; so even though its QPD exceeds that of CS by a factor of about four, its API queue size is set to the minimal possible value of {1}.
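
For illustration, here is a small Python sketch of the computation above (added here; the post volumes and spam proportions are made-up numbers, and the quota of 10000 requests per day is the one from the text):

import math

def queue_sizes(posts, spam=None, quota=10000):
    # a_k proportional to sqrt(p_k/s_k), scaled so that sum(p_k/a_k) equals the quota
    if spam is None:
        spam = [1.0] * len(posts)          # uniform spam: reduces to a_k ~ sqrt(p_k)
    scale = sum(math.sqrt(p * s) for p, s in zip(posts, spam)) / quota
    return [math.sqrt(p / s) * scale for p, s in zip(posts, spam)]

p = [50000, 4520, 110]                     # hypothetical posts per day on three sites
a = queue_sizes(p)
print([round(x, 2) for x in a])
print(sum(pk / ak for pk, ak in zip(p, a)))   # equals the quota, 10000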

Oscillatory explosion

Nonlinear differential equations don’t necessarily have globally defined solutions, even if the equation is autonomous (no time is involved explicitly). The simplest example is {y'=y^2} with initial condition {y(0)=1}: the solution is {y(t) = 1/(1-t )}, which ceases to exist at {t=1}, escaping to infinity.


This kind of behavior can often be demonstrated without solving the ODE. Consider {y''=y^3} with {y(0)=1}, {y'(0)=0}. I’ve no idea what the explicit solution is, but it’s clear that the quantity {E(t) = \frac12 (y')^2 - \frac14 y^4} remains constant: indeed, {E'(t) = y''y' - y^3 y' = 0}. (Here, {E} is analogous to the sum of kinetic and potential energy in physics.)

The solution {y} is increasing and convex. Let {t_n} be such that {y(t_n)=n}; in particular, {t_1=0}. The invariance of {E} yields {y'(t_n) = 2^{-1/2}\sqrt{n^4-1} \ge 2^{-1/2}(n^2 - 1)}. Since {y'} is increasing, the mean value theorem gives {t_{n+1}-t_n \le \sqrt{2}(n^2-1)^{-1}} for {n=2,3,\dots}. Since the series on the right converges, the limit {T = \lim_{n\rightarrow\infty }t_n} is finite; this is the blow-up time.
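
In fact, the same energy identity determines the blow-up time exactly: since {\frac12 (y')^2 = \frac14(y^4-1)}, separating variables gives {T=\int_1^\infty \sqrt{2/(y^4-1)}\,dy}. Here is a short Python check that this integral is finite (a sketch added for illustration, not part of the original post):

import numpy as np
from scipy.integrate import quad

# Blow-up time from the energy identity (y')^2 = (y^4 - 1)/2:
# dt = dy / sqrt((y^4 - 1)/2), integrated from y = 1 to infinity.
f = lambda y: np.sqrt(2.0 / (y**4 - 1.0))
T = quad(f, 1.0, 2.0)[0] + quad(f, 2.0, np.inf)[0]   # split at y = 2: integrable singularity at 1, infinite upper limit
print(T)   # a finite number, so the solution blows up in finite time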


But no blow-up occurs for the equation {y''=-y^3}, where the nonlinear term pushes back toward equilibrium. Indeed, the invariant energy is now {E= \frac12 (y')^2 + \frac14 y^4}, which implies that {y} and {y'} stay bounded. In the phase space {(y,y')} the solution with initial values {y(0)=1}, {y'(0)=0} traces out this curve:

Closed trajectory in the phase space (y, y')

Accordingly, the solution is a periodic function of {t} (although this is not a trigonometric function):

Periodic solution, globally defined

Everything so far has been pretty straightforward. But here is a stranger animal: {y'''=-y^3}. As in the previous example, the nonlinear term pushes toward equilibrium. Using initial conditions {y(0)=1}, {y'(0)=y''(0)=0}, I get this numerical solution up to time {T=5}:

0<t<5

As in the previous example, {y} oscillates. But the speed and amplitude of oscillation are increasing.

0<t<5.4
0<t<5.7

Rapidly increasing:

0<t<5.9
0<t<6

In the plane {(y,y')} the solution spirals out:

The plane (y, y')

The plots make it clear that the solution ceases to exist in finite time, but I don’t have a proof. The issue is that the function {y} does not escape to infinity in just one direction. Indeed, if {y(t_0)>0}, then regardless of the values of {y'} and {y''} at {t_0}, the negative third derivative will eventually make the function {y} decrease, and so {y} will attain the value {0} at some {t_1>t_0}. After that, the third derivative is positive, guaranteeing the existence of a time {t_2>t_1} when {y} returns to {0} again. I have not succeeded in proving that the limit {\lim t_n} is finite, which would be the time at which the oscillatory explosion occurs.


The plots were made in SageMathCloud:

var('t y y1 y2')
# Solve y''' = -y^3 as the first-order system (y, y1, y2)' = (y1, y2, -y^3)
# with y(0)=1, y'(0)=0, y''(0)=0, using a 4th-order Runge-Kutta method
P = desolve_system_rk4([y1,y2,-y^3],[y,y1,y2], ics=[0,1,0,0],ivar=t,end_points=5,step=0.01)
Q = [[i,j] for i,j,k,l in P]   # keep only the (t, y) pairs for plotting
list_plot(Q, plotjoined=True)

Between a cubic spline and the line of regression

Given some data points {(x_i,y_i)} one can devise various functions that extract information from them. Such as

  • Regression line (considered in When the digits of {\pi} go to 11): it detects a linear trend {y=ax+b}, minimizing the sum of squares of residuals {y_i-(ax_i+b)}.
  • Natural cubic spline (considered in Connecting dots naturally), which passes through every data point while minimizing the amount of wiggling, measured by the integral of the square of its second derivative. Like this:
Natural cubic spline

A smoothing spline is something in between the above extremes: it insists neither on being a line (i.e., having zero second derivative) nor on passing through the given points (i.e., having zero residuals). Instead, it minimizes a weighted sum of both things: the integral of the second derivative squared, and the sum of the residuals squared. Like this plot, where the red points are given but the spline chooses to interpolate the green ones:

Smoothing spline

I’ll demonstrate a version of a smoothing spline that might not be exactly canonical, but is very easy to implement in Matlab or Scilab (which I prefer to use). As in Connecting dots naturally, assume the knots {a=x_0<x_1<\dots<x_n=b} are equally spaced at distance {h=(b-a)/n}. The natural cubic spline is determined by the values of its second derivative at {x_1,\dots,x_{n-1}}, denoted {z_1,\dots,z_{n-1}}. As explained earlier, these can be found from the linear system

\displaystyle    \frac{h}{6}\begin{pmatrix} 4 & 1 & 0 & 0 & \ldots & 0 & 0 \\ 1 & 4 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & 4 & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ldots & \ldots \\ 0 & 0 & 0 & 0 & \ldots & 1 & 4 \end{pmatrix} \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_{n-1}\end{pmatrix}   = - \frac{1}{h} \begin{pmatrix} \Delta^2 y_1 \\ \Delta^2 y_2 \\ \vdots \\ \Delta^2 y_{n-1}\end{pmatrix}

where the column on the right contains the amounts by which the derivative of the piecewise linear interpolant through given points jumps at every knot. The notation {\Delta^2y } means the second difference of {(y_i)}, for example {\Delta^2y_1 =y_2-2y_1+y_0}.

A smoothing spline is also a natural spline, but for a different set of points {(x_i,\tilde y_i)}. One has to find {\tilde y_i} that minimize a weighted sum of {\sum_i (\tilde y_i-y_i)^2} and of {\int_a^b (f''(x))^2\,dx}. The latter integral is easily computed in terms of {z_i}: it is equal to {\frac{h}{3}\sum_{i=1}^{n}(z_i^2+z_{i-1}^2+z_iz_{i-1})}. Since this quadratic form is comparable to {\sum_{i=1}^{n}z_i^2}, I’ll work with the latter instead.

The idea is to set up an underdetermined system for {z_i} and {\tilde y_i-y_i}, and let Scilab find a least squares solution of that. Let’s introduce a weight parameter {\lambda>0} that will determine the relative importance of curvature vs residuals. It is convenient to let {\tilde y_i=y_i+\lambda h^2 \delta_i}, so that both {\delta_i} and {z_i} (second derivative) scale the same way. The terms {\delta_i} contribute to the linear system for {z_i}, since the right-hand side now has {\tilde y_i} instead of {y_i}. This contribution is {- \lambda h \Delta^2 \delta}. Moving it to the left-hand side (since {\delta_i} are unknowns), we obtain the following system.

\displaystyle    \begin{pmatrix} A & | & B \end{pmatrix} \begin{pmatrix} z \\ \delta \end{pmatrix}   = - \frac{1}{h} \Delta^2 y

where {A} is the same tridiagonal matrix as above, and {B} is the rectangular Laplacian-type matrix

\displaystyle    B = \lambda h \begin{pmatrix} 1 & -2 & 1 & 0 & \ldots & 0 & 0 \\ 0 & 1 & -2 & 1 & \ldots & 0 & 0 \\ 0 & 0 & 1 & -2 & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \ldots & \ldots \\ 0 & 0 & 0 & 0 & \ldots & -2 & 1 \end{pmatrix}

All in all, the system has {2n } unknowns {z_1,\dots,z_{n-1},\delta_0,\dots,\delta_n}, and {(n-1)} equations, reflecting the continuity of first derivative at each interior knot. The lsq command of Scilab finds the solution with the least sum of squares of the unknowns, which is what we are after.

Time for some examples. {\lambda=0} and {\lambda=0.05} can be seen above. Here are more:

lambda = 0.1
lambda = 1

As {\lambda} increases, the spline approaches the regression line:

lambda = 10
lambda = 100

Finally, the Scilab code. It is almost the same as for the natural spline; the difference is in the five lines from B=... to newy=... The code after that merely evaluates and plots the spline.

a = 0; b = 10
y = [3 1 4 1 5 9 2 6 5 3 5]
lambda = 0.1

n = length(y)-1
h = (b-a)/n
jumps = diff(y,2)/h
A = (h/6)*(diag(4*ones(1,n-1))+diag(ones(1,n-2),1)+diag(ones(1,n-2),-1))

B = lambda*h*(diag(-2*ones(1,n+1))+diag(ones(1,n),1)+diag(ones(1,n),-1))   // rectangular Laplacian-type block
C = [A, B(2:$-1,:)]              // underdetermined system [A | B][z; delta] = -jumps'
sol = lsq(C,-jumps')'            // least-squares solution of minimal norm
z = [0,sol(1:n-1),0]             // second derivatives; zero at the endpoints (natural spline)
newy = y + lambda*h^2*sol(n:$)   // adjusted data values, interpolated by the smoothing spline

allx = []
spl = []
for i=1:n  
   xL = a+h*(i-1)
   xR = a+h*i
   x = linspace(xL,xR,100)
   linear = newy(i)*(xR-x)/h + newy(i+1)*(x-xL)/h
   correction = ((z(i+1)+2*z(i))*(xR-x)+(2*z(i+1)+z(i))*(x-xL)).*(xR-x).*(x-xL)/(6*h)
   allx = [allx, x]   
   spl = [spl, linear+correction] 
end

plot(allx, spl) 
plot(a:h:b, newy, 'g*')
plot(a:h:b, y, 'r*')

Stack Exchange sites: questions and visits

After a year, it is time to update the Hertzsprung–Russell diagram of Stack Exchange sites. Here is the new and much improved version: a bubble chart made with Google Spreadsheets. The data is imported with the IMPORTXML function, which apparently means it will be automatically refreshed every two hours. Just in case, here is a (heavily zoomed out) static image. It’s not as usable as the interactive chart, because one can’t read the text on obscured bubbles by hovering over them.

Static snapshot of the diagram

As a rule of thumb, the sites with at least 10000 visits/day and 10 questions/day are those that “made it”: they either graduated from beta or will probably graduate in the foreseeable future (assuming the quality of content is decent and the community is somewhat capable of self-organization).


A year ago, I joked about Beer, Italian Language, and Expatriates being either closed or merged into one site. Neither has happened yet. Of the three, the Expatriates site now has the largest number of visits and questions, though the quality is often an issue. Italian Language has caught up with Russian Language (not to be confused with Русский язык, a site recently imported from Hashcode.ru that differentiates itself by targeting native speakers rather than language learners). Of the three, Beer appears to have the least amount of energy at present.

Some of the sites at the bottom, around 100 visits/day, are very new and will not stay there for long. The future of Community Building seems less certain.


Here’s the technical part. The data is/are obtained from this list of sites with the command

=IMPORTXML("http://stackexchange.com/sites?view=list", "//div[contains(@class,'category')]/@class|//input/@value")

The XPath parameter consists of two parts: //div[contains(@class,'category')]/@class picks up the class of every div element with “category” in its list of classes. This is later used to categorize the sites as Technology, Science, Life/Arts, etc. The other part is //input/@value, which grabs all the other data: number of questions, number of answers, number of users… most of which I have no use for. To sort this column by data type I put =MOD(ROW()-2,9) next to it:

+------------------------------------------------+------+
|                      Raw                       | Item |
+------------------------------------------------+------+
| lv-item category-technology site-stackoverflow |    0 |
| 1217462400                                     |    1 |
| 9410733                                        |    2 |
| 15707906                                       |    3 |
| 73.95805406                                    |    4 |
| 4258590                                        |    5 |
| 7232805                                        |    6 |
| 7956.285714                                    |    7 |
| Stack Overflow                                 |    8 |
| lv-item category-technology site-serverfault   |    0 |
| 1241049600                                     |    1 |
+------------------------------------------------+------+

The columns C-F filter the content of A by item type. For example, D is the site name: =FILTER(A:A,B:B=8), etc. The Category column needs a bit of extra (routine) work to make it user-friendly. Otherwise, the table is finished:

+---------------------+-------------------+----------------+
|        Name         | Questions per day | Visits per day |
+---------------------+-------------------+----------------+
| Stack Overflow      | 7956.3            |        7232805 |
| Server Fault        | 103.1             |         353739 |
| Super User          | 162.9             |         734077 |
| Meta Stack Exchange | 21.8              |           8680 |
| Web Applications    | 12.7              |          44341 |
| Arqade              | 36.5              |         247448 |
| Webmasters          | 13.6              |          12433 |
| Seasoned Advice     | 6.9               |          80308 |
| Game Development    | 21.1              |          25836 |
| Photography         | 8.5               |          26560 |
| Cross Validated     | 84.4              |          63317 |
| Mathematics         | 616.4             |         150744 |
| Home Improvement    | 17.3              |          71148 |
+---------------------+-------------------+----------------+

The bubble chart uses this table for the log-log scale plot that you see above.

The volume of the intersection of balls

Fix two positive numbers {R} and {r}. The volume of the intersection of balls of radii {R} and {r} is a function of the distance {d} between their centers. Is it a decreasing function?

Intersecting balls

Wolfram MathWorld gives a formula for this volume in the three-dimensional case

\displaystyle    V(d) = \frac{\pi}{12 d} (R+r-d)^2(d^2+2rd-3r^2+2Rd+6Rr-3R^2)

(Here and in the sequel it is assumed that {d\le R+r}.) From this formula it is not at all obvious that {V(d)} is decreasing. And this is just for three dimensions.
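
As a quick numerical sanity check (an added sketch, not from the original post), one can evaluate this formula on a grid of {d} values in the range {|R-r|\le d\le R+r}, where the two boundary spheres actually intersect (for smaller {d} the intersection is just the smaller ball), and confirm that it is nonincreasing:

import numpy as np

def V(d, R, r):
    # MathWorld formula for the volume of the intersection of two balls
    # of radii R and r whose centers are at distance d
    return np.pi / (12 * d) * (R + r - d)**2 * (
        d**2 + 2*r*d - 3*r**2 + 2*R*d + 6*R*r - 3*R**2)

R, r = 2.0, 1.0
d = np.linspace(abs(R - r), R + r, 10000)
values = V(d, R, r)
print(np.all(np.diff(values) <= 1e-12))   # True: the volume does not increase with d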

Let’s begin with the one-dimensional case, then. Consider the intervals {[-R,R]} and {[d-r,d+r]} where {d\le R+r}. A point {x} belongs to their intersection if and only if

\displaystyle  \max(-R,d-r)\le x \le \min(R,d+r)

Hence, the length of the intersection is {L(d)=\min(R,d+r)-\max(-R,d-r)}. Even this simple formula does not quite make it evident that the function is decreasing (non-strictly) for {d>0}. If {d+r\ge R}, this is clear enough: the first term is constant while the second one is nondecreasing. And if {d+r<R}, then {r<R}, hence {d-r>-R}, so that {L(d)=(d+r)-(d-r)=2r} is constant.

Fortunately, Fubini’s theorem reduces the general case to the one-dimensional one. Take any line {L} parallel to the line connecting the centers. The intersection of each ball with {L} is a line segment whose length is independent of {d}. The distance between the midpoints of these segments is {d}; thus, the length of the intersection of the two segments is a decreasing function of {d}. Integrating these lengths over all such lines {L}, we conclude that the volume of the intersection is decreasing as well.

Gromov’s note Monotonicity of the volume of intersection of balls (1987) deals with the more general case of {k\le n+1} balls in {\mathbb R^n}: if two sets of balls {B(x_i,r_i)} and {B(x_i',r_i)} satisfy {\|x_i'-x_j'\|\le \|x_i-x_j\|} for all {i,j}, then the volume of {\bigcap B(x_i,r_i)} does not exceed the volume of {\bigcap B(x_i',r_i)}.

The paper ends with

Question: Who is the author of [this result]? My guess is this was known to Archimedes. Undoubtedly the theorem can be located […] somewhere in the 17th century.

Meanwhile, the corresponding problem for the volume of the union remains open.

Implicit Möbius strip

Reading the Wolfram MathWorld™ article on the Möbius strip I came across an equation that was news to me.

After the familiar parametric form

\displaystyle    x=(1+s\cos(t/2))\cos t,\quad y=(1+s\cos(t/2))\sin t, \quad z = s\sin(t/2) \qquad (1)

it is said that “In this parametrization, the Möbius strip is therefore a cubic surface with equation…”

\displaystyle  y(x^2+y^2+z^2-1)-2z(x^2+y^2+x) =0 \qquad\qquad\qquad(2)

That’s a neat cubic equation for sure. But… wouldn’t the sign of the left hand side of (2) (call it F) distinguish the “positive” and “negative” sides of the surface, contrary to its non-orientability?

I went to SageMathCloud™ to check and got this plot of (2):

Implicit plot, looks complicated

That’s… not exactly the Möbius strip as I know it. But it’s true that (1) implies (2): the cubic surface contains the Möbius strip. Here is the plot of (1) superimposed on the plot of (2).

Same with the Möbius band superimposed

The self-intersection arises where the “positive side” {F>0} suddenly transitions to the negative side {F<0}.
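
Here is a quick numerical check (an added illustration, not part of the original post) that the parametrization (1) indeed satisfies the cubic equation (2); the width parameter {s} is taken in {[-1/2,1/2]}, though the exact range does not matter since the identity holds for all {s}.

import numpy as np

# Evaluate F(x,y,z) = y(x^2+y^2+z^2-1) - 2z(x^2+y^2+x) along the parametrized strip (1)
s, t = np.meshgrid(np.linspace(-0.5, 0.5, 101), np.linspace(0, 2*np.pi, 401))
x = (1 + s*np.cos(t/2)) * np.cos(t)
y = (1 + s*np.cos(t/2)) * np.sin(t)
z = s * np.sin(t/2)
F = y*(x**2 + y**2 + z**2 - 1) - 2*z*(x**2 + y**2 + x)
print(np.abs(F).max())   # zero up to rounding error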

Trigonometric approximation and the Clenshaw-Curtis quadrature

Trying to approximate a generic continuous function on {[-1,1]} with the Fourier trigonometric series of the form {\sum_n (A_n\cos \pi nx+B_n\sin \pi nx)} is in general not very fruitful. Here’s such an approximation to {f(x)=\exp(x)}, with the sum over {n\le 4}:

Poor approximation to exp(x)

It’s better to make a linear change of variable: consider {f(2x-1)} on the interval {[0,1]}, and use the formula for the cosine series. This results in {\exp(2x-1)}, which is approximated by the partial sum {\sum_{n=0}^4 A_n\cos \pi nx} of its cosine Fourier series as follows.

Better approximation after a change of variable

But one can do much better with a different, nonlinear change of variable. Consider {f(\cos x)} on the interval {[0,\pi]}, and again use the formula for the cosine series. This results in {\exp(\cos x)}, which is approximated by the partial sum {\sum_{n=0}^4 A_n\cos nx} of its cosine Fourier series as follows.

Excellent approximation after nonlinear change of variable

Yes, I can’t see any difference either: the error is less than {10^{-3}}.

The composition with cosine improves approximation because {f(\cos t)} is naturally a periodic function, with no jumps or corners in its graph. Fourier series, which are periodic by nature, follow such functions more easily.


A practical implication of this approximation of {f(\cos t)} is the Clenshaw-Curtis integration method. It can be expressed in one line:

\displaystyle    \int_{-1}^1 f(x)\,dx = \int_0^\pi f(\cos t)\sin t\,dt \approx \int_0^\pi \sum_{n=0}^N a_n \cos nt \sin t\,dt   = \sum_{n=0}^N \frac{(1+(-1)^n) a_n}{1-n^2}

The integral {\int_{-1}^1 f(x)\,dx} is approximated by summing {2a_{2k}/(1-4k^2)}, where {a_{2k}} are even-numbered cosine coefficients of {f(\cos t)}. In the example with {f(x)=\exp(x)} using just three coefficients yields

\displaystyle    \frac{2a_0}{1}+\frac{2a_2}{-3}+\frac{2a_4}{-15} \approx 2.350405

while the exact integral is {\approx 2.350402}.


At first this doesn’t look practical at all: after all, the Fourier coefficients are themselves found by integration. But since {f(\cos t)} is so close to a trigonometric polynomial, one can sample it at equally spaced points and apply the Fast Fourier transform to the result, quickly obtaining many coefficients at once. This is what the Clenshaw-Curtis quadrature does (at least in principle; the practical implementation may streamline these steps).
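
Here is a minimal Python sketch of this recipe (an added illustration, not the canonical implementation): sample {f(\cos t)} at equally spaced points, obtain the cosine coefficients with a discrete cosine transform, and sum the even-numbered ones with weights {2/(1-n^2)}.

import numpy as np
from scipy.fft import dct

def clenshaw_curtis(f, N):
    # Approximate the integral of f over [-1, 1] from N+1 samples (N even)
    t = np.pi * np.arange(N + 1) / N          # equally spaced points in [0, pi]
    g = f(np.cos(t))                          # f(cos t): smooth and periodic
    a = dct(g, type=1) / N                    # trapezoid-rule cosine coefficients a_0, ..., a_N
    a[0] /= 2
    n = np.arange(0, N + 1, 2)                # only even n contribute
    return np.sum(2 * a[n] / (1 - n**2))

print(clenshaw_curtis(np.exp, 8))             # about 2.3504, close to the exact e - 1/e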

Normalizing to zero mean or median

Pick two random numbers {x_1,x_2} from the interval {[0,1]}; independent, uniformly distributed. Normalize them to have mean zero, which simply means subtracting their mean {(x_1+x_2)/2} from each. Repeat many times. Plot the histogram of all numbers obtained in the process.

Two random numbers normalized to zero mean

No surprise here. In effect this is the distribution of {Y=(X_1-X_2)/2} with {X_1,X_2} independent and uniformly distributed over {[0,1]}. The probability density function of {Y} is found via convolution, and the convolution of {\chi_{[0,1]}} with itself is a triangular function.
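
A quick simulation (an added Python sketch, not from the original post) reproduces this: the histogram of the normalized pairs matches the triangular density {2(1-2|y|)} on {[-1/2,1/2]}.

import numpy as np
import matplotlib.pyplot as plt

x = np.random.rand(200000, 2)
centered = x - x.mean(axis=1, keepdims=True)   # subtract the mean of each pair
plt.hist(centered.ravel(), bins=100, density=True)
y = np.linspace(-0.5, 0.5, 200)
plt.plot(y, 2*(1 - 2*np.abs(y)))               # triangular density of (X1 - X2)/2
plt.show()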

Repeat the same with four numbers {x_1,\dots,x_4}, again subtracting the mean. Now the distribution looks vaguely bell-shaped.

Four random numbers normalized to zero mean

With ten numbers or more, the distribution is not so bell-shaped anymore: the top is too flat.

Ten random numbers normalized to zero mean

The mean now follows an approximately normal distribution, but the fact that it’s subtracted from uniformly distributed {x_1,\dots,x_{10}} amounts to convolving the Gaussian with {\chi_{[0,1]}}. Hence the flattened top.


What if we use the median instead of the mean? With two numbers there is no difference: the median is the same as the mean. With four there is.

Four random numbers normalized to zero median

That’s an odd-looking distribution, with convex curves on both sides of a pointy maximum. And with {10} points it becomes even more strange.

Ten random numbers normalized to zero median

Scilab code:

k = 10                                    // how many numbers in each sample
A = rand(200000,k)                        // 200000 samples of k uniform random numbers
A = A - median(A,'c').*.ones(1,k)         // subtract the median of each sample (row)
histplot(100,A(:))

Lattice points in a disk

The closed disk of radius {72} has area {\pi \cdot 72^2\approx 16286}. But it happens to contain only {16241} points with integer coordinates. Here is a picture of one quarter of this disk.

Unusually few lattice points in the disk

The radius {72} is somewhat notable in that the discrepancy of {45} between the area and the number of integer points is unusually large. Here is the plot of the absolute value of the difference |area − points| as a function of the integer radius {n}. The curve in red is {y = 1.858\, n^{0.745}}, which is an experimentally found upper bound for the discrepancy in this range of {n}.

Radius from 1 to 100

On the scale up to {n=1000}, the upper bound is {4.902\, n^{0.548}}, and the radii bumping against this bound are {449} and {893}. The exponent {0.548} begins to resemble the conjectural {0.5+\epsilon} in the Gauss circle problem.

Radius up to 1000

Finally, over the interval {1000\le n\le 3000} the upper bound comes out as {6.607n^{0.517}}. The exponent {0.517} looks good.

Radius from 1000 to 3000

This little numerical experiment in Matlab involved using the convex hull function convhull on log-log scale. The function identifies the vertices of the convex hull, which is a polygon. I pick the side of the polygon lying over the midpoint of the range; this yields a linear upper bound for the set of points. On the normal scale, this line becomes a power function. Matlab code is given below; it’s divided into three logical steps.

% Find the difference between the area and the number of lattice points
a = 1000; 
b = 3000;
R = a:b;
E = [];
for n = a:b
    [X,Y] = meshgrid(1:n, 1:n);
    pts = 4*n + 1 + 4*nnz(X.^2+Y.^2<=n^2);
    E = [E, max(1,abs(pts - pi*n^2))];
end
% Pick a suitable side of the log-log convex hull
ix = convhull(log(R), log(E));
k = numel(ix);
while (R(ix(k))<(a+b)/2)
    k = k-1;
end
% Plot the result and output the parameters of the upper bound
R1 = R(ix(k)); E1 = E(ix(k));
R2 = R(ix(k+1)); E2 = E(ix(k+1));
b = log(E1/E2)/log(R1/R2);
a = E1/R1^b;
plot(R, E, '.');
hold on
plot(R, a*R.^b , 'r-');
axis tight
hold off
fprintf('a = %.3f, b = %.3f\n', a, b);