Calculating 2*2 with high precision

The definition of derivative,

\displaystyle    f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h) - f(x)}{h}   \ \ \ \ \ \ \ \ \ \ (1)

is not such a great way to actually find the derivative numerically. Its symmetric version,

\displaystyle    f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h) - f(x-h)}{2h}   \ \ \ \ \ \ \ \ \ \ (2)

performs much better in computations. For example, consider the derivative of {f(x)=e^x } at the point {x=1}. We know that {f'(1)=2.718281828\dots}. Numerically, with {h=0.001}, we get

\displaystyle    \frac{f(1+h) - f(1)}{h} \approx \mathbf{2.71}9 \dots

(error {>10^{-3}}) versus

\displaystyle    \frac{f(1+h) - f(1-h)}{2h} \approx \mathbf{2.71828}2 \dots

(error {<10^{-6}}).
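
A quick numerical check (a Python sketch; any language with floating point would do):

from math import exp, e

h = 0.001
forward = (exp(1 + h) - exp(1)) / h              # one-sided quotient (1)
symmetric = (exp(1 + h) - exp(1 - h)) / (2 * h)  # symmetric quotient (2)
print(abs(forward - e))    # about 1.36e-03
print(abs(symmetric - e))  # about 4.53e-07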

Considering this, why don’t we ditch (1) altogether and adopt (2) as the definition of derivative? Just say that by definition,

\displaystyle    f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h)-f(x-h)}{2h}

whenever the limit exists.

This expands the class of differentiable functions: for example, {f(x)=|x|} becomes differentiable with {f'(0)=0}. Which looks more like a feature than a bug: after all, {f} has a minimum at {0}, and the horizontal line through the minimum is the closest thing to the tangent line that it has.

Another example: the function

\displaystyle     f(x) = \begin{cases} x ,\quad & x\le 0 \\ 3x,\quad & x>0 \end{cases}

has {f'(0)=2} under this definition, because

\displaystyle    \lim_{h\rightarrow 0^+}\frac{f(x+h)-f(x-h)}{2h} = \lim_{h\rightarrow 0^+}\frac{3h-(-h)}{2h} = 2

and

\displaystyle    \lim_{h\rightarrow 0^-}\frac{f(x+h)-f(x-h)}{2h} = \lim_{h\rightarrow 0^-}\frac{h-3(-h)}{2h} = 2

This example also makes sense: since {f(x)=|x|+2x}, getting {f'(0)=0+2} is expected. In fact, with the new definition we still have basic derivative rules: if {f,g} are differentiable, then {f+g}, {f-g}, {fg}, {f/g} are also differentiable (with the usual caveat about {g\ne 0}) and the familiar formulas hold.

Let’s test the chain rule on the function {g = f\circ f}. The rule says

\displaystyle     g'(0) = f'(f(0)) f'(0)    \ \ \ \ \ \ \ \ \ \ (3)

Since {f(0)=0}, the product on the right is {2\cdot 2}. On the other hand,

\displaystyle     g(x) = \begin{cases} x ,\quad & x\le 0 \\ 9x,\quad & x>0 \end{cases}

which implies, by a computation similar to the above, that {g'(0)=5}. So, if we want to have the chain rule (3), we must accept that

\displaystyle     \mathbf{2\cdot 2 = 5}

This is where the desire for high numerical precision leads.
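
Here is a short Python sketch (the helper names are mine) confirming both symmetric derivatives computed above:

def f(x):
    return x if x <= 0 else 3*x

def sym(g, x, h=1e-6):
    # symmetric difference quotient
    return (g(x + h) - g(x - h)) / (2 * h)

print(sym(f, 0))                  # 2.0
print(sym(lambda x: f(f(x)), 0))  # 5.0, while f'(f(0)) f'(0) = 4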

Plenty of other things go wrong with the symmetric definition:

  • A maximum or minimum of {f} may be attained at a point where {f'} exists and is nonzero.
  • A differentiable function may be discontinuous.
  • Having {f'>0} everywhere does not imply that {f} is increasing.
  • The Mean Value Theorem fails.

Polygonal inequalities: beyond the triangle

(Related to previous post but can be read independently). The triangle inequality, one of the axioms of a metric space, can be visualized by coloring the vertices of a triangle red and blue.

Colored Triangle

The inequality says that the sum of monochromatic distances must not exceed the sum of dichromatic distances. That is, for every assignment of the vertices to points in the space, the sum of all red-red and blue-blue distances does not exceed the sum of red-blue distances. An assignment is just a map from the set of vertices {V} to the metric space {X}; it need not be injective.

But why stop at the triangle? We can take any number of points (vertices), color them in some way, and require the same polygonal inequality:

\displaystyle \text{monochromatic } \le \text{ dichromatic}

Already for the square we have two distinct plausible colorings to explore: an even red-blue split

Evenly Colored Square

and a predominantly red square

(mostly) Red Square

But it turns out that the second coloring is useless: the inequality it defines fails in every metric space with more than one point. More generally, suppose we have {R} red points and {B} blue ones, and {R- B\ge 2}. Pick two distinct points {a,b\in X}. Assign one red point to {a} and all others to {b}. The sum of monochromatic distances is {(R-1)\,d(a,b)} while the sum of dichromatic distances is {B\,d(a,b)}, which is strictly less.

So, we are limited to nearly-even colorings: those with {|R-B|\le 1}. For even numbers of vertices this means even split, while odd numbers should be divided as evenly as possible: like 3+2 for the pentagon.

Here is the pentagram again.
Colored Pentagon

The inequalities turn out to be related. For every {n}, the {n}-gonal inequality implies the {(n-2)}-gonal inequality, because if we assign two opposite-colored vertices to the same point, their contributions cancel out. More interestingly: when {n} is odd, the {n}-gonal inequality implies the {(n-1)}-gonal inequality. Indeed, suppose we have {(n-1)} points, evenly colored. Add an arbitrary {n}th point. Whether the added point is blue or red, the {n}-gonal inequality holds. Averaging these two inequalities, we see that the contributions of the added point cancel out, leaving the {(n-1)}-gonal inequality.

So, if the {n}-gonal inequality holds for all odd {n}, it holds for all {n}. This property is exactly the hypermetric property from the previous post. Except it was stated there in a different form:

\displaystyle  \sum_{i,j}b_i b_j d(x_i , x_j ) \le 0

for every finite sequence of points {x_i\in X} and every choice of integers {b_i} such that {\sum_i b_i=1}. But if the point {x_i} is repeated {|b_i|} times, we can replace {b_i} by {\mathrm{sign}\,b_i}. Then represent +1 as blue and -1 as red.

The hypermetric inequalities were introduced by John B. Kelly (a student of William Ted Martin) in the late 1960s. He showed they are necessary for the space to be embeddable into the space {\ell^1}. It would be great if they were also sufficient (and for some classes of spaces they are), but this is not so: a counterexample was given by Patrice Assouad in 1977.

It is also interesting to consider the {n}-gonal inequalities for even {n} only. By repetition of vertices, they are equivalent to requiring

\displaystyle  \sum_{i,j}b_i b_j d(x_i , x_j ) \le 0 \quad\quad \quad (1)

for every finite sequence of points {x_i\in X} and every choice of integers {b_i} such that {\sum_i b_i=0}. But of course then we have (1) for rational {b_i} (just clear the denominators), hence for all real {b_i} (by approximation), as long as they sum to {0}. So, the requirement amounts to the matrix {(d(x_i,x_j))} being negative semidefinite on the subspace {\sum_i b_i=0}. Such metrics are called metrics of negative type.
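
Metrics of negative type are easy to test numerically. Here is a Python sketch (NumPy assumed; the helper name is mine) that restricts the quadratic form to the subspace {\sum_i b_i=0} and checks the sign of its eigenvalues:

import numpy as np

def is_negative_type(points, metric, tol=1e-9):
    n = len(points)
    dmat = np.array([[metric(p, q) for q in points] for p in points])
    # columns of B span the subspace {sum b_i = 0}
    B = np.vstack([np.eye(n - 1), -np.ones(n - 1)])
    Q = B.T @ dmat @ B                  # the form restricted to the subspace
    return bool(np.all(np.linalg.eigvalsh((Q + Q.T) / 2) <= tol))

pts = np.random.randn(7, 2)
euclid = lambda p, q: np.linalg.norm(p - q)
print(is_negative_type(pts, euclid))    # True: the plane is of negative type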

Their relation to embeddability of the space is well-known: {(X,d)} is of negative type if and only if the “snowflake” {(X,\sqrt{d})} isometrically embeds into a Hilbert space. In other words, we can “draw” any finite metric space of negative type in a Euclidean space, with the understanding that Euclidean distances represent the square roots of the actual distances. This embedding result is a 1935 theorem of Isaac Schoenberg, who is also known for connecting dots naturally (introducing splines).

Pentagrams and hypermetrics

The Wikipedia article Metric (mathematics) offers plenty of flavors of metrics, from common to obscure: ultrametric, pseudometric, quasimetric, semimetric, premetric, hemimetric and pseudoquasimetric (I kid you not).

One flavor it does not mention is a hypermetric. This is a metric {d} on a set {X} such that the inequality

\displaystyle  \sum_{i,j}b_i b_j d(x_i , x_j ) \le 0 \ \ \ \ \ \ \ \ \ (1)

holds for every finite sequence of points {x_i\in X} and every choice of integers {b_i} such that {\sum_i b_i=1}. The requirement that {b_i} be integers gives some combinatorial meaning to (1); this is not just some quadratic form being negative semidefinite.

As a warm-up, observe that (1) contains the triangle inequality: with {b_1=b_2=1} and {b_3=-1} we get {d(x_1,x_2)-d(x_1,x_3)-d(x_2,x_3)\le 0}. But it appears that (1) says something about “polygons” with more vertices too.

To make (1) worth thinking about, it should be satisfied by some important metric space. Such as the real line {\mathbb R}, for example. It is not quite obvious that the inequality

\displaystyle  \sum_{i,j}b_i b_j |a_i - a_j| \le 0 \ \ \ \ \ \ \ \ \ (2)

holds for all reals {a_i} and all integers {b_i} adding up to {1}. It helps to order the numbers: {a_1\le \dots\le a_m} and focus on the contribution of a particular gap {[a_k,a_{k+1}]} to the sum (2). The amount it contributes is {|a_k-a_{k+1}|} multiplied by

\displaystyle    \sum_{i\le k<j} b_i b_j = \left(\sum_{i\le k} b_i \right) \left(\sum_{j > k} b_j \right) = \left(\sum_{i\le k} b_i \right) \left(1-\sum_{i\le k} b_i \right) \le 0

because {n(1-n)\le 0} for every integer {n}. This proves (2).
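
The computation is easy to stress-test; a brute-force Python sketch over random data:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    m = rng.integers(2, 8)
    a = rng.normal(size=m)               # arbitrary reals
    b = rng.integers(-5, 6, size=m)
    b[0] += 1 - b.sum()                  # force the integers to sum to 1
    s = sum(b[i]*b[j]*abs(a[i]-a[j]) for i in range(m) for j in range(m))
    assert s <= 1e-9                     # inequality (2)
print("no counterexamples found")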

Now that we have one hypermetric space, {\mathbb R}, other such spaces can be created easily. If {X} is any set and {f \colon X\rightarrow\mathbb R} any function, consider {d(x,y) = |f(x)-f(y)|}, the pullback pseudometric on {X}. By applying (2) to the numbers {f(x_i)}, we see that {d} satisfies the hypermetric inequality. Since (1) is additive in {d}, we can take any family of functions {f_\alpha \colon X\rightarrow\mathbb R} and add together the corresponding pseudometrics. Or even integrate them against a positive measure: {d(x,y)=\int |f_\alpha(x)-f_\alpha(y)|\,d\mu(\alpha)}.

For example, the plane {\mathbb R^2} is a hypermetric space, because the distance between two points {(x_1,y_1)} and {(x_2,y_2)}, besides the familiar form

\displaystyle  \sqrt{(x_1-x_2)^2 + (y_1-y_2)^2}

can also be represented as an integral of the aforementioned pullbacks:

\displaystyle  \frac12 \int_0^\pi \big| (x_1-x_2)\cos \alpha + (y_1-y_2) \sin\alpha \big| \,d\alpha

A similar integral representation holds in all dimensions; thus, all Euclidean spaces are hypermetric.
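
The planar representation is easy to verify numerically; a Python sketch using the trapezoidal rule:

import numpy as np

p, q = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
alpha = np.linspace(0, np.pi, 10001)
g = np.abs((p[0]-q[0])*np.cos(alpha) + (p[1]-q[1])*np.sin(alpha))
integral = 0.5 * np.sum((g[1:] + g[:-1]) / 2 * np.diff(alpha))
print(integral)               # ~ 1.802776
print(np.linalg.norm(p - q))  # sqrt(3.25) = 1.802776...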

Okay, what is not a hypermetric then? For example, the cube distance induced by the norm {\|x\|_\infty=\max |x_i|} is not, in dimensions 3 and higher. Specifically, (1) fails as the five-point inequality with {(b_1,\dots,b_5) =(1,1,1,-1,-1)}. I’ll call it the pentagram inequality:

also known as K_5 in mystical literature

It says that for any five points in the space, the sum of monochromatic distances does not exceed the sum of all bichromatic (red-blue) distances.

The pentagram inequality fails when {x_1,\dots,x_5} are the columns of the matrix

\displaystyle  \begin{pmatrix}    1& 1& 1& 2& 0\\    0& 2& 2& 1& 1\\    0& 1& 0& 1& 0\\    \end{pmatrix}

(first three columns blue, the last two red). Indeed, the sum of monochromatic distances is {2+1+2+2=7} while the sum of bichromatic distances is {1+1+1+1+1+1=6}.
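
A few lines of Python (a sketch, with the coloring hard-coded) confirm the arithmetic:

import numpy as np

cols = np.array([[1, 1, 1, 2, 0],
                 [0, 2, 2, 1, 1],
                 [0, 1, 0, 1, 0]], dtype=float)
pts = list(cols.T)                        # five points in R^3
d = lambda p, q: np.max(np.abs(p - q))    # the ell-infinity distance
blue, red = [0, 1, 2], [3, 4]

mono = sum(d(pts[i], pts[j]) for g in (blue, red)
           for i in g for j in g if i < j)
dichro = sum(d(pts[i], pts[j]) for i in blue for j in red)
print(mono, dichro)   # 7.0 6.0: the pentagram inequality fails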

If the above example does not look conceptual enough, it’s because I found it via computer search. I don’t have much intuition for the pentagram inequality.

Anyway, the example delivers another proof that taking the maximum of three numbers is hard. More precisely, there is no isometric embedding of {\mathbb R^3} with the maximum metric into {\ell_1}. Unlike the earlier proof, this one does not assume the embedding is linear.

A good reference for hypermetric inequalities is the book Geometry of cuts and metrics by Deza and Laurent.

Maximum of three numbers: it’s harder than it sounds

This simple identity holds for any two real numbers {x,y}:

\displaystyle   \max(|x|,|y|) = \frac12\,(|x+y|+|x-y|)   \ \ \ \ \  \ \ \ \ \ (1)

Indeed, if {|x|} realizes the maximum, then both {x+y} and {x-y} have the same sign as {x}. After opening the absolute value signs, we get {y} to cancel out.

So, (1) represents {\max(|x|,|y|)}, also known as the {\ell_\infty} norm, as the sum of absolute values of linear functions. Let’s try the same with {\max(|x|,|y|,|z|)}. Since the right hand side of (1) is just the average of {|\pm x \pm y|} over all possible choices of {\pm } signs, the natural thing to do is to average {|\pm x \pm y \pm z|} over all eight choices. The sign in front of {x} can be taken to be {+}, which simplifies the average to

\displaystyle    \frac14\,(|x+y+z|+|x+y-z|+|x-y+z|+|x-y-z|)    \ \ \ \ \  \ \ \ \ \ (2)

Does this formula give {\max(|x|,|y|,|z|)}? Instead of trying random numbers, let’s just plot the unit ball for the norm given by (2). If the identity works, it will be a cube. I used Maple:

with(plots): f:=(abs(x+y+z)+abs(x+y-z)+abs(x-y-z)+abs(x-y+z))/4:
implicitplot3d(f=1,x=-5/4..5/4,y=-5/4..5/4,z=-5/4..5/4,grid=[25,25,25]);
My precious!

Close, but no cube. Luckily, this is my favorite Archimedean solid, the cuboctahedron.
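
A spot check in plain Python confirms the picture:

def avg4(x, y, z):
    return (abs(x+y+z) + abs(x+y-z) + abs(x-y+z) + abs(x-y-z)) / 4

print(avg4(1, 0, 0))   # 1.0 = max(|x|,|y|,|z|)
print(avg4(1, 1, 0))   # 1.0 = max
print(avg4(1, 1, 1))   # 1.5 > max = 1: the cube's corner sticks out of the unit ball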

Although all terms of (2) look exactly the same, the resulting shape has both triangular and square faces. Where does the difference of shapes come from?

More importantly, is (2) really the best we can do? Is there some other sum of moduli of linear functions that will produce {\max(|x|,|y|,|z|)}?

— No.

Even if negative coefficients are allowed?

— Even then. (But you can come arbitrarily close.)

What if we allow integrals with respect to an arbitrary (signed) measure, as in

\displaystyle    \iiint |\alpha x+\beta y+\gamma z|\,d \mu(\alpha, \beta, \gamma)    \ \ \ \ \  \ \ \ \ \ (3)

— Still no. But if {\mu} is allowed to be a distribution of higher order (an object more singular than a measure), then a representation exists (W. Weil, 1976). Yes, one needs the theory of distributions to write the maximum of three numbers as a combination of linear functions.

I’ll only prove that there is no identity of the form

\displaystyle  \max(|x|,|y|,|z|) = \sum_{i=1}^N |\alpha_i x+\beta_i y+ \gamma_i z|

Indeed, such an identity amounts to having an isometric embedding {T\colon \ell_\infty^3 \rightarrow \ell_1^N}. The adjoint operator {T^* \colon \ell_\infty^N \rightarrow \ell_1^3} is a submetry, meaning that it maps the unit ball of {\ell_\infty^N } onto the unit ball of {\ell_1^3}. The unit ball of {\ell_\infty^N } is just a cube: all of its faces are centrally symmetric, and every face of its image under a linear map, being the image of some face of the cube, is centrally symmetric as well. But the unit ball of {\ell_1^3} is an octahedron, with triangular faces. A contradiction. {\ \Box}

An aside: what if instead of averaging {|\pm x \pm y|} over all {\pm } choices (i.e., unimodular real coefficients) we take the average over all unimodular complex coefficients? This amounts to {\|(x,y)\| = \frac{1}{2\pi} \int_0^{2\pi} |x+e^{it}y|\,dt}. I expected something nice from this norm, but

Complex averaging in real space

it’s a strange shape whose equation involves the complete elliptic integral of the second kind. Yuck.
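
Still, the norm itself is easy to evaluate numerically; a small Python sketch (the helper name is mine):

import numpy as np

def cnorm(x, y, m=100000):
    t = np.linspace(0, 2*np.pi, m, endpoint=False)
    return np.mean(np.abs(x + np.exp(1j*t)*y))

print(cnorm(1, 0))   # 1.0
print(cnorm(0, 1))   # 1.0
print(cnorm(1, 1))   # ~ 1.2732, which is 4/pi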

Connecting dots naturally

How to draw a nice curve through given points {(x_i,y_i)}?

Some data

One way is to connect them with straight lines, creating a piecewise linear function:

Piecewise linear interpolant

This is the shortest graph of a function that interpolates the data. In other words, the piecewise linear function minimizes the integral

\displaystyle    \int_0^{10} \sqrt{1+(f'(x))^2}\,dx

among all functions with {f(x_i)=y_i}. As is often the case, the length functional can be replaced with the elastic energy

\displaystyle    \mathcal{E}(f) = \int_0^{10} (f'(x))^2\,dx

because the piecewise linear {f} (and only it) minimizes this energy too.

Of course, it is not natural for the connecting curve to take such sharp turns at the data points. One could try to fit a polynomial function to these points, which is guaranteed to be smooth. With 11 points we need a 10th degree polynomial. The result is disappointing:

Interpolating polynomial

It is not natural for a curve connecting the points with {1\le y\le 9} to shoot up over {y=40}. We want a connecting curve that does not wiggle more than necessary.

To reduce the wiggling and remove sharp turns at the same time, one can minimize the bending energy of the function, thinking of its graph as a thin metal rod. This energy is

\displaystyle    \mathcal{B}(f) = \int_0^{10} (f''(x))^2\,dx

and the function that minimizes it subject to conditions {f(x_i)=y_i} looks very nice indeed:

Natural cubic spline

The Euler-Lagrange equation for the functional {\mathcal{B}} dictates that the fourth derivative of {f} is zero in the intervals between the knots {x_i}. Thus, {f} is a piecewise cubic polynomial. Also, both {f} and {f'} must be continuous for any function with integrable second derivative. More delicate analysis is required for {f''}, but it can also be shown to be continuous for the minimizing function {f}; moreover, {f''} must vanish at the endpoints {0} and {10}. Taken together, these properties (all derived from the variational problem) complete the description of a natural cubic spline.

It remains to actually construct one. I prefer to think of this process as adding a correction term {C} to the piecewise linear interpolant {L}. Here the spline is shown together with {L} (green) and {C} (magenta).

PL interpolant, correction term, and their sum: the cubic spline

On each interval {[x_i,x_{i+1}]} the correction term {C} is a cubic polynomial vanishing at both endpoints. The space of such polynomials is two-dimensional: thus,

\displaystyle    C(x) = \alpha_i (x_{i+1}-x)^2(x-x_i) + \beta_i (x_{i+1}-x)(x-x_i)^2

on this interval. There are 20 coefficients {\alpha_i}, {\beta_i} to find. At each of the 9 interior knots {1, 2, \dots, 9} we have two conditions: {C''} must have a removable singularity and {C'} must jump by the amount opposite to the jump of {L'}. Since {C''} also vanishes at {0} and {10}, there are 20 linear equations for 20 unknowns.

It is easier to set up a linear system in terms of {z_i=-C''(x_i)}. Indeed, the values of {C''} at two consecutive knots determine the correction term within: {\alpha_i= \dfrac{z_{i+1}+2z_i}{6} } and {\beta_i = \dfrac{2z_{i+1}+z_i}{6}}. This leaves {n-1} equations (from the jumps of {C'}) for {n-1} unknowns. The best part is that the matrix of this system is really nice: tridiagonal with dominant diagonal.

\displaystyle    \frac{1}{6}\begin{pmatrix} 4 & 1 & 0 & 0 & \ldots & 0 & 0 \\    1 & 4 & 1 & 0 & \ldots & 0 & 0 \\    0 & 1 & 4 & 1 & \ldots & 0 & 0 \\    \vdots & \vdots & \vdots & \vdots & \ddots & \ldots & \ldots \\    0 & 0 & 0 & 0 & \ldots & 1 & 4    \end{pmatrix}

One can solve the system for {z_i} with a for loop, but I used the Scilab solver instead. Here is the Scilab code for the most interesting part: the spline. The jumps of the derivative of the piecewise linear interpolant are obtained from the second-order difference of the sequence of y-values.

a = 0; b = 10                     // the interval [a,b]
y = [3 1 4 1 5 9 2 6 5 3 5]       // data values at equally spaced knots
n = length(y)-1                   // number of subintervals
h = (b-a)/n                       // distance between knots
jumps = diff(y,2)/h               // jumps of L' at the interior knots
A = (h/6)*(diag(4*ones(1,n-1))+diag(ones(1,n-2),1)+diag(ones(1,n-2),-1))
z = [0,-jumps/A,0]                // -jumps/A solves the system z*A = -jumps; z=0 at the endpoints
allx = []; spl = []
for i=1:n
   xL = a+h*(i-1)                 // left endpoint of the i-th interval
   xR = a+h*i                     // right endpoint
   x = linspace(xL,xR,100)
   linear = y(i)*(xR-x)/h + y(i+1)*(x-xL)/h     // piecewise linear interpolant L
   cor = ((z(i+1)+2*z(i))*(xR-x)+(2*z(i+1)+z(i))*(x-xL)).*(xR-x).*(x-xL)/(6*h)   // correction term C
   allx = [allx, x]
   spl = [spl, linear + cor]
end
plot(allx, spl)                   // the natural cubic spline
plot(a:h:b, y, 'r*')              // the data points

Rectangular boxes: handle with care

A rectangular box (aka parallelepiped) looks like a sturdy object:

Rainbow sheep inside

But this particular box, with dimensions 7.147 by 6.021 by 4.095, took me the better part of an hour to figure out.

It was part of a numerical methods assignment: find the dimensions of a box with given volume {V}, surface area {S}, and diameter {D} (i.e., space diagonal). The algebraic approach leads to pretty ugly expressions, which is the motivation for a numerical method. Specifically, the task was to apply the Newton-Raphson method to the map

\displaystyle    F(x,y,z) = \begin{pmatrix} xyz-V  \\ 2(xy+yz+xz)-S  \\ x^2+y^2+z^2-D^2 \end{pmatrix}

Of course, I understood that not every triple {(V,S,D)} is attainable. I also realized that the Jacobian of {F} is degenerate when two of the coordinates coincide, which is a problem for the method. So I thought: let’s generate some random {x,y,z} values that are not too close to one another, and give students the resulting parameters {V,S,D}.

With {x=7.147}, {y=6.021}, and {z=4.095} the parameters are {V=176.216}, {S=193.91}, and {D=10.203}. Sure, a little rounding can’t hurt when numbers are of this size and we are quite far from the critical points of {F}. So I put {V=176}, {S=194} and {D=10} in one of the versions of the assignment.

But the Newton-Raphson method would not converge… because no such box exists! The rounding did hurt after all.
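
For the record, here is a minimal Python sketch of the Newton-Raphson iteration for {F} (NumPy assumed; the function name is mine). It converges quickly from a reasonable starting point for the unrounded parameters, and fails for the rounded ones:

import numpy as np

def newton_box(V, S, D, x0, steps=100, tol=1e-12):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        X, Y, Z = x
        F = np.array([X*Y*Z - V,
                      2*(X*Y + Y*Z + X*Z) - S,
                      X**2 + Y**2 + Z**2 - D**2])
        J = np.array([[Y*Z, X*Z, X*Y],
                      [2*(Y+Z), 2*(X+Z), 2*(X+Y)],
                      [2*X, 2*Y, 2*Z]])
        try:
            step = np.linalg.solve(J, F)
        except np.linalg.LinAlgError:
            return None                  # singular Jacobian
        x -= step
        if np.max(np.abs(step)) < tol:
            return x                     # converged
    return None                          # no convergence

print(newton_box(176.216, 193.91, 10.203, (7, 6, 4)))  # ~ (7.147, 6.021, 4.095)
print(newton_box(176, 194, 10, (7, 6, 4)))             # expected: None (no such box)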

This motivated me to describe all attainable triples {(V,S,D)} explicitly, which ended up being less of a chore than I expected. It helps to realize that {(x+y+z)^2 = D^2+S}, which reduces the search to the intersection of the sphere {x^2+y^2+z^2=D^2} with the plane {x+y+z=\sqrt{D^2+S}}. This is a circle (called {C} below), and the allowed range for {V} is between the minimum and maximum of {xyz} on {C}.

This goes into Calculus 3 territory. Using Lagrange multipliers with two constraints looks like a tedious job. Instead, I decided to parametrize {C}. Its center is {(c,c,c)} where {c=\dfrac13\sqrt{D^2+S}}. The radius is {\displaystyle r = \sqrt{ D^2 - \frac13(D^2+S)}=\frac13 \sqrt{6D^2-3S}}. We also need an orthonormal basis of the subspace {x+y+z=0}: the vectors

\displaystyle  \frac{1}{\sqrt{6}} \langle 2, -1, -1\rangle \quad \text{and}\quad \frac{1}{\sqrt{2}} \langle 0, 1, -1\rangle

do the job.

So, the circle {C} is parametrized by

\displaystyle    x  = c+\frac{2r}{\sqrt{6}} \cos t \\   y  = c-\frac{r}{\sqrt{6}} \cos t +\frac{r}{\sqrt{2}} \sin t \\   z  = c-\frac{r}{\sqrt{6}} \cos t -\frac{r}{\sqrt{2}} \sin t

This is not as bad as it looks: the product {xyz} simplifies to

\displaystyle xyz = c^3 - \frac{cr^2}{2} + \frac{r^3\sqrt{6}}{18} \cos 3t

which tells us right away that the volume {V} lies within

\displaystyle  c^3 - \frac{cr^2}{2} \pm \frac{r^3\sqrt{6}}{18}

In terms of the original data {S,D} the bounds for {V} take the form

\displaystyle    \frac{5S-4D^2}{54}\sqrt{S+D^2} \pm \frac{\sqrt{2}}{54} (2D^2-S)^{3/2}

(And of course, {V} cannot be negative even if the lower bound is.) It is easy to see that {2D^2-S\ge 0} with equality only for a cube; however {2D^2-S} can be relatively small even when the box does not look very cube-ish. For the box pictured above, the tolerance {\frac{\sqrt{2}}{54} (2D^2-S)^{3/2}} is approximately {1.4}; after rounding {S\approx 194 } and {D\approx 10} this drops to {0.38}, and the desired volume of {176} is way out of the allowable range {181\pm 0.38}.
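
The bounds are one short function away; a Python sketch following the formulas above:

import numpy as np

def volume_range(S, D):
    c = np.sqrt(D**2 + S) / 3          # center of the circle C is (c,c,c)
    r = np.sqrt(6*D**2 - 3*S) / 3      # radius of C
    mid = c**3 - c * r**2 / 2
    tol = np.sqrt(6) * r**3 / 18
    return mid - tol, mid + tol

print(volume_range(193.91, 10.203))   # ~ (175.4, 178.3): contains V = 176.216
print(volume_range(194, 10))          # ~ (180.6, 181.4): V = 176 is unattainable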

Yes, the set of attainable triples {(V,S,D)} is quite thin. Parallelepipeds are fragile objects: handle them with care.

Attainable (V,S,D)

From boring to puzzling in 30 iterative steps

The function {f(x)=2\cos x } may be nice and important as part of the trigonometric basis, but there is nothing exciting in its graph:

f(x) = 2 cos x

Let’s look at its iterations {f^n=f\circ f\circ \dots \circ f} where {n } is the number of iterations, not an exponent. Here is the graph of {f^{14}}:

14th iteration

A rectangular pattern is already visible above; further iterations only make it stronger. For example, {f^{30} }:

30 iterations

It may be impossible to see on the graph, but the rectangles are slightly apart from one another (though of course they are connected by the graph of a continuous function). This is easier to see on the histogram of the values {f^{n}(0) } for {n=0,\dots, 10000 }, which contains two small gaps in addition to a large one:

Histogram of an orbit of f

What goes on here? The range of {f} on {[-2,2]}, as well as the range of any of its iterates, is of course connected: it is the closed interval {[f^{2}(0),f(0)] = [2 \cos 2, 2]}. But the second iterate {f^2=f\circ f} also has two invariant subintervals, marked here by horizontal lines:

Second iterate

Namely, they are {I_1=[f^{2}(0), f^{4}(0)]} and {I_2=[f^{3}(0),2]}. It is easy to see that {f(I_1)=I_2} and {f(I_2)=I_1}. The gap between {I_1} and {I_2} contains the repelling fixed point of {f}, approximately {x=1.03}. Every orbit except for the fixed point itself is pushed away from this point and is eventually trapped in the cycle between {I_1} and {I_2}.
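
This is easy to watch numerically; a Python sketch of the orbit of {0}:

import numpy as np

f = lambda x: 2*np.cos(x)
orbit, x = [], 0.0
for n in range(10000):
    orbit.append(x)
    x = f(x)
tail = np.array(orbit[100:])          # discard a transient

even, odd = tail[::2], tail[1::2]     # even and odd iterates
print(even.min(), even.max())         # ~ [-0.832, 0.445] = I1
print(odd.min(), odd.max())           # ~ [ 1.347, 2.000] = I2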

But there is more. A closer look at the fourth iterate reveals smaller invariant subintervals of {f^4}. Here is what it does on {I_2}:

Fourth iterate

Here the gap contains a repelling fixed point of {f^2}, approximately {1.8}. The invariant subintervals of {I_2} are {I_{21}=[f^{3}(0), f^{7}(0)]} and {I_{22}=[f^9(0), 2]}. Also, {I_1} contains invariant subintervals {I_{11}=[f^{2}(0), f^{6}(0)]} and {I_{12}=[f^8(0), f^4(0)]}. These are the projections of the rectangles in the graph of {f^{30}} onto the vertical axis.

No more such splitting occurs. The histogram of the values of iterates of {f} indeed consists of four disjoint intervals. Can one get a Cantor-type set in this way, starting from some boring function?

Selecting email recipients in WeBWork

I like WeBWork for many reasons besides being free and open-source. One minor feature is customization of class roster emails based on the recipient’s name and other information. Including grades via spreadsheet merge would not be FERPA-compliant, but an email saying that an exam has been graded might be worded differently based on the score that the student received. I used conditional operators in a spreadsheet for this purpose, then merged the data into email.

On the other hand, there is no way to send email only to an algorithmically selected subset of the class, for example to those who have not yet started on the homework that’s due tonight, or those that appear to need extra help. (At least not in the version of WeBWork I am using, which is not the latest one.)

So I wrote a jQuery script that selects email recipients based on the data in a Google spreadsheet. The spreadsheet need not contain sensitive information: only usernames of addressees.

A couple of years ago I found the JSON encoding of a Google Spreadsheet completely opaque. With experience, reading it became easy. It helps to request /feeds/list/ (rather than /feeds/cells/), so that the spreadsheet is treated as a list of rows rather than a list of cells. Each row becomes an object with “title” being the first column entry, and “content” being the rest of the row data (an object with column headers as keys and cell content as values). The relevant part of the JSON string is actually quite simple:

"title":{"type":"text","$t":"username"}

I process it by creating an object selectUser in which the keys are usernames of addressees. (Could also be an array, of course.) This is what the command

selectUser[user.title.$t] = 1

does below. Then, parsing the elements of the relevant <select> list, the script selects those for which selectUser[$(item).attr('value')] is defined.

function selectStudents() {
    // worksheet "od6" is the first sheet of an old-format spreadsheet
    $.get('https://spreadsheets.google.com/feeds/list/__spreadsheet ID here___/od6/public/basic?alt=json', function (data) {
        var userList = data.feed.entry;
        var i, user, selectUser = {}, item;
        // collect the usernames listed in the first column
        for (i = 0; user = userList[i]; i++) {
            selectUser[user.title.$t] = 1;
        }
        // select every option whose value is one of the collected usernames
        $('select.ScrollingRecordList > option').each(function () {
            item = this;
            if (selectUser[$(item).attr('value')]) {
                $(item).attr('selected', 'selected');
            }
        });
    }, 'json');
}

To have this function called only when needed, my Chrome extension inserts a link “Select recipients” under “Email” in the WeBWork dashboard.

$('<li><strong>Select recipients</strong></li>').insertAfter($('#Email')).click(function () {
        selectStudents();
});

That’s all.

Oh, and you need to use the old format of Google Spreadsheet for this to work. The new (otherwise far superior) format does not currently offer JSON, as far as I know. If you have switched to the new format, any new spreadsheets will be created in it; however the existing ones remain in the old format, and so do their copies. The solution is to keep some old-format spreadsheet for this purpose, just as a source of copies.

Since the id of the spreadsheet is fixed in the script, all I have to do is locate the .csv file generated by the WeBWork scoring tool in Drive and import it as a new sheet. Then sort and filter the data by the desired criteria, and copy the recipients of the message to the first sheet.

2013 syllabus

A sample of what you could have learned by taking Calculus VII in 2013.

Quadrature rules and quadrature domains

The midpoint rule of numerical integration

\displaystyle \int_a^b f(x)\,dx \approx (b-a)f\left(\frac{a+b}{2}\right)

is approximate in general, but exact for linear functions (polynomials of degree at most one).

Midpoint Rule

With two sample points we can integrate any cubic polynomial exactly. The choice of sample points is not obvious: they are to be placed at distance {\dfrac{1}{2\sqrt{3}}(b-a)} from the midpoint. In the example below, {a=0} and {b=1}, so the sample points are {x_{1,2} = \dfrac12 \pm \dfrac{1}{2\sqrt{3}}}. The integral is equal to {\dfrac12 f(x_1)+\dfrac12 f(x_2)}. One can say that each sample point has weight {1/2} in this rule.

Two-point quadrature: areas of yellow rectangles add up to the integral

Three sample points are enough for polynomials of degrees up to and including five. This time, the weights attached to the sample points are not equal. The midpoint is used again, now with weight {4/9}. The two other sample points are at distance {\dfrac{\sqrt{3}}{2\sqrt{5}}(b-a)} from the midpoint, and their weights are {5/18} each.

Three-point quadrature: rectangles of unequal width

Compare this with Simpson’s rule, which also uses three sample points but is exact only up to degree three.

The above are examples of Gaussian quadrature: for each positive integer {n}, one can integrate polynomials of degree up to {2n-1} by taking {n} samples at the right places, and weighing them appropriately.
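
The claim is easy to test with NumPy's built-in Gauss-Legendre nodes; a Python sketch:

import numpy as np

def gauss(f, a, b, n):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes and weights on [-1,1]
    xs = (a + b)/2 + (b - a)/2 * x              # rescale to [a,b]
    return (b - a)/2 * np.dot(w, f(xs))

p = lambda x: x**5 - 3*x**2 + 1                 # degree 5
print(gauss(p, 0, 1, 3))                        # 0.16666... with only 3 samples
print(1/6 - 1 + 1)                              # exact integral: 1/6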


Let’s move from the real line to the complex plane. If one accepts that the analog of an interval {(a,b)} is a disk in the plane, then quadrature becomes very simple: for any disk {D} and any complex polynomial {p},

\displaystyle   \iint_D p = \pi r^2 p(c)

where {c} is the center of the disk and {r} is its radius. One sample point is enough for all degrees! The proof is easy: rewrite {p} in terms of powers of {(z-c)} and integrate them in polar coordinates. The same works for any holomorphic function, as long as it is integrable in {D}.

Disk: a quadrature domain with one node

But maybe the disk is a unique such shape? Not at all: there are other such quadrature domains. A simple family of examples is Neumann ovals (Neumann as in “boundary condition”). Geometrically, they are ellipses inverted in a concentric circle. Analytically, they are (up to linear transformations) images of the unit disk {\mathbb{D}} under

\displaystyle  \varphi(z)=\frac{z}{1-c^2 z^2}\quad (0\le c<1)

This image, denoted {\Omega } below, looks much like an ellipse when {c} is small:

Neumann oval with c=0.3

Then it becomes peanut-shaped:

Neumann oval with c=0.6

For {c\approx 1} it looks much like the union of two disks (but the boundary is smooth, contrary to what the plot suggests):

Neumann oval with c=0.95

In each of these images, the marked points are the quadrature nodes. Let’s find out what they are and how they work.

Suppose {f} is holomorphic and integrable in {\Omega}. By a change of variables,

\displaystyle  \iint_{\Omega } f = \iint_{\mathbb D} (f\circ \varphi)\, \varphi' \,\overline{\varphi'}

Here { (f\circ \varphi) \varphi'} is holomorphic, but { \overline{\varphi'} } is anti-holomorphic. We want to know what { \overline{\varphi'} } does when integrated against something holomorphic. Power series to the rescue:

\displaystyle    \varphi(z) = z\sum_{n=0}^\infty c^{2n} z^{2n} = \sum_{n=0}^\infty c^{2n} z^{2n+1}

hence

\displaystyle  \overline{\varphi'(z)} = \sum_{n=0}^\infty c^{2n} (2n+1) \bar z^{2n}

Multiply this by {z^{k}} and integrate over {\mathbb D} in polar coordinates: the result is {0} if {k} is odd and

\displaystyle    2\pi c^{k} (k+1) \int_0^1 r^{2k} r\,dr= \pi c^{k}

if {k} is even. So, integration of {\overline{\varphi'(z)}} against a power series {g(z) = \sum a_k z^k} produces the sum of {a_{k} c^{k}} over even powers {k} only (with the factor of {\pi}). The process of dropping odd powers amounts to taking the even part of {g}:

\displaystyle  \iint_{\mathbb D} g\, \overline{\varphi'} = \frac{\pi}{2} ( g(c)+ g(-c))

Yes, there’s something about { \overline{\varphi'} } that’s magic (key words: reproducing kernel, specifically Bergman kernel). Plug {g= (f\circ \varphi)\, \varphi' } to conclude

\displaystyle  \iint_{\Omega } f = \frac{\pi}{2} \Big\{f(\varphi(c))\, \varphi'(c) + f(\varphi(-c) ) \,\varphi'(-c)\Big\}

So, the nodes are {\varphi(\pm c) = \pm c/(1-c^4)}. They have equal weight, because {\varphi'(\pm c) = \dfrac{1+c^4}{(1-c^4)^2}}. Final result:

\displaystyle  \iint_{\Omega } f = \frac{\pi (1+c^4)}{2 (1-c^4)^2}\, \left\{f \left( \frac{c}{1-c^4} \right) + f \left(- \frac{c}{1-c^4} \right) \right\}

Again, this is a two-point quadrature formula that is exact for all complex polynomials.

As a bonus, put {f\equiv 1} to find that the area of {\Omega} is {\dfrac{\pi (1+c^4)}{ (1-c^4)^2}}.
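
As a sanity check, here is a Monte Carlo sketch in Python (the setup and names are mine) comparing direct integration over {\Omega}, via the change of variables, with the two-node formula:

import numpy as np

c = 0.6
phi  = lambda z: z / (1 - c**2 * z**2)
dphi = lambda z: (1 + c**2 * z**2) / (1 - c**2 * z**2)**2

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, (2, 2_000_000))
z = w[0] + 1j*w[1]
z = z[np.abs(z) < 1]                 # uniform sample of the unit disk (area pi)

f = lambda u: u**2                   # try any polynomial
mc = np.pi * np.mean(f(phi(z)) * np.abs(dphi(z))**2)

node = c / (1 - c**4)
weight = np.pi * (1 + c**4) / (2 * (1 - c**4)**2)
print(mc.real, weight * (f(node) + f(-node)))   # agree to roughly 3 digits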