## Condition number and maximal rotation angle

The condition number ${K(A)}$ of an invertible matrix ${A}$ is the product of norms ${\|A\|\,\|A^{-1}\|}$, or, in deciphered form, the maximum of the ratio ${|Au|/|Av|}$ taken over all unit vectors ${u,v}$. It comes up a lot in numerical linear algebra and in optimization problems. One way to think of the condition number is in terms of the image of the unit ball under ${A}$, which is an ellipsoid. The ratio of longest and shortest axes of the ellipsoid is ${K(A)}$.

But for positive definite matrices ${A}$ the condition number can also be understood in terms of rotation. First, note that the positivity of ${\langle Av,v\rangle }$ says precisely that the angle between ${v}$ and ${Av}$ is always less than ${\pi/2}$. Let ${\gamma}$ be the maximum of such angles taken over all ${v\ne 0}$ (or over all unit vectors, which is the same thing). Then

$\displaystyle K(A)=\frac{1+\sin\gamma }{1-\sin\gamma} \ \ \ \ \ \ \ \ \ (1)$

One can also make (1) a little less geometric by introducing ${\delta=\delta(A)}$ as the largest number such that ${\langle Av,v\rangle \ge \delta|Av|\,|v|}$ holds for all vectors ${v}$. Then ${\delta=\cos \gamma}$, and (1) takes the form

$\displaystyle K=\frac{1+\sqrt{1-\delta^2}}{1-\sqrt{1-\delta^2}} = \left(\frac{1+\sqrt{1-\delta^2}}{\delta}\right)^2 \ \ \ \ \ \ \ \ \ \ (2)$
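Identities (1) and (2) are easy to sanity-check numerically. Here is a sketch in Python/numpy; the sampling-based estimate of ${\delta}$ over random unit vectors is my own device, not part of the argument:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T + 3*np.eye(3)          # symmetric positive definite

s = np.linalg.svd(A, compute_uv=False)
K = s.max() / s.min()              # condition number ||A|| ||A^{-1}||

# estimate delta = min over unit vectors v of <Av,v>/(|Av||v|)
V = rng.standard_normal((3, 100000))
V /= np.linalg.norm(V, axis=0)
AV = A @ V
delta = (np.sum(V*AV, axis=0) / np.linalg.norm(AV, axis=0)).min()

sin_gamma = np.sqrt(1 - delta**2)
print(K, (1 + sin_gamma)/(1 - sin_gamma))   # identity (1), up to sampling error
print(K, ((1 + sin_gamma)/delta)**2)        # identity (2), up to sampling error
```

The sampled minimum only approximates ${\delta}$, so the two sides agree up to a small sampling error rather than exactly.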

Could this be of any use? The inequality

$\displaystyle \langle Av,v\rangle \ge \delta\,|Av|\,|v| \ \ \ \ \ \ \ \ \ \ (3)$

is obviously preserved under addition of matrices. Therefore, it is preserved by integration. In particular, if the Hessian of a twice differentiable convex function ${u}$ satisfies (3) at every point, then integration along a line segment from ${x}$ to ${y}$ yields

$\displaystyle \langle \nabla u(x)- \nabla u(y),x-y\rangle \ge \delta\,|\nabla u(x)-\nabla u(y)|\,|x-y| \ \ \ \ \ \ \ \ \ \ (4)$

Conversely, if ${u}$ is a twice differentiable convex function such that (4) holds, then (by differentiation) its Hessian satisfies (3), and therefore admits a uniform bound on the condition number by virtue of (2). Thus, for such functions inequality (4) is equivalent to uniform boundedness of the condition number of the Hessian.

But the Hessian itself does not appear in (4). Condition (4) expresses “uniform boundedness of the condition number of the Hessian” without requiring ${u}$ to be twice differentiable. As a simple example, take ${u(x_1,x_2)=|x|^{4/3}}$. The Hessian matrix is

$\displaystyle \frac{4}{9}|x|^{-8/3} \begin{pmatrix} x_1^2+3x_2^2 & -2x_1x_2 \\ -2x_1x_2 & 3x_1^2+x_2^2 \end{pmatrix}$

The eigenvalues are ${\frac{4}{3}|x|^{-2/3}}$ and ${\frac{4}{9}|x|^{-2/3}}$. Thus, even though the eigenvalues blow up at the origin and decay at infinity, the condition number of the Hessian remains equal to ${3}$. Well, except that the second derivatives do not exist at the origin. But if we use the form (4) instead, with ${\delta = \sqrt{3}/2}$, then non-differentiability becomes a non-issue.
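The constant condition number can be checked without any symbolic work, by forming a finite-difference Hessian at a few points away from the origin (a sketch; the step size and sample points are arbitrary choices of mine):

```python
import numpy as np

def u(x):
    return np.hypot(x[0], x[1]) ** (4/3)

def hessian_fd(f, x, h=1e-5):
    # central-difference approximation of the Hessian
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej)
                       - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    return H

ratios = []
for point in ([1.0, 0.5], [-0.3, 2.0], [5.0, -7.0]):
    eig = np.linalg.eigvalsh(hessian_fd(u, np.array(point)))
    ratios.append(eig.max() / eig.min())
print(ratios)  # each ratio equals 3, up to finite-difference error
```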

Let’s prove (2). It suffices to work in two dimensions, because both ${K(A)}$ and ${\delta(A)}$ are determined by the restrictions of ${A}$ to two-dimensional subspaces. In two dimensions we can represent the linear map ${A}$ as ${z\mapsto \alpha z+\beta \bar z}$ for some complex numbers ${\alpha,\beta}$. Actually, ${\alpha}$ is real and positive because ${A}$ is symmetric positive definite. As ${z}$ runs through unimodular complex numbers, the maximum of ${|\alpha z+\beta \bar z|}$ is ${\alpha+|\beta|}$ and the minimum is ${\alpha-|\beta|}$. Therefore, ${K(A)=\frac{1+|\beta|/\alpha}{1-|\beta|/\alpha}}$.

When ${|z|=1}$, the angle ${\gamma}$ that the vector ${\alpha z+\beta \bar z}$ forms with ${z}$ is equal to the argument of ${\bar z (\alpha z+\beta \bar z)=\alpha+\beta \bar z^2}$. The argument is maximized when ${0}$, ${\alpha}$, and ${\alpha+\beta \bar z^2}$ form a right triangle whose hypotenuse is the segment from ${0}$ to ${\alpha}$.

Proof by picture

Hence, ${\sin\gamma = |\beta|/\alpha}$. Combined with the formula for ${K(A)}$ above, this proves (1), and (2) follows.

There are similar observations in the literature on quasiconformality of monotone maps, including an inequality similar to (2) (for general matrices), but I have not seen either (1) or (2) stated as an identity for positive definite matrices.

## Convexity of polar curves

Everybody knows the second derivative test for the convexity of Cartesian curves ${y=y(x)}$. What is the convexity test for polar curves ${r=r(\theta)}$? Google search brought up Robert Israel’s answer on Math.SE: the relevant inequality is

$\displaystyle \mathcal{C}[r]:=r^2+2(r')^2-r\,r''\ge 0 \ \ \ \ \ (1)$

But when using it, one should be aware of the singularity at the origin. For example, ${r=1+\cos \theta}$ satisfies $\mathcal{C}[r] = 3(1+\cos \theta)\ge 0$ but the curve is not convex: it’s the cardioid.

FooPlot graphics today: fooplot.com

The formula (1) was derived for ${r> 0}$; the points with ${r=0}$ must be investigated directly. Actually, it is easy to see that when ${r }$ has a strict local minimum with value ${0}$, the polar curve has an inward cusp and therefore is not convex.

As usual, theoretical material is followed by an exercise.

Exercise: find all real numbers ${p}$ such that the polar curve ${r=(1+\cos 2\theta)^p}$ is convex.

All values ${p>0}$ are ruled out by the cusp formed at ${\theta=\pi/2}$. For ${p=0}$ we get a circle, obviously convex. When ${p<0}$, some calculations are in order:

$\displaystyle \mathcal{C}[r] = (1+4p+4p^2+(1-4p^2)\cos 2\theta)(1+\cos 2\theta)^{2p-1}$

For this to be nonnegative for all ${\theta}$, we need ${1+4p+4p^2\ge |1-4p^2|}$, which amounts to two inequalities: ${p(4+8p) \ge 0}$ and ${2+4p\ge 0}$. Since ${p<0}$, the first inequality requires ${p\le -1/2}$, while the second requires ${p\ge -1/2}$; together they leave only ${p=-1/2}$.

Answer: ${p=0}$ and ${p=-1/2}$.
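The answer can be confirmed numerically by evaluating ${\mathcal{C}[r]=r^2+2(r')^2-r\,r''}$ with finite differences (a sketch; the step size and sample range are arbitrary choices of mine). Note that at ${p=-1/2}$ both coefficients in the displayed formula vanish, so ${\mathcal{C}[r]\equiv 0}$: the borderline case.

```python
import numpy as np

def C(p, theta, h=1e-5):
    # C[r] = r^2 + 2 r'^2 - r r'' via central differences
    r = lambda t: (1 + np.cos(2*t)) ** p
    r0 = r(theta)
    r1 = (r(theta + h) - r(theta - h)) / (2*h)
    r2 = (r(theta + h) - 2*r0 + r(theta - h)) / h**2
    return r0**2 + 2*r1**2 - r0*r2

# keep away from theta = pi/2, where r blows up for p < 0
theta = np.linspace(0.1, np.pi/2 - 0.2, 400)
print(C(-0.5, theta).min())  # essentially zero: borderline convex
print(C(-1.0, theta).min())  # clearly negative: not convex
```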

This exercise is relevant to the problem from the previous post, finding the “right” interpolation method in polar coordinates. Given a set of ${(r,\theta)}$ values, I interpolated ${(r^{1/p},\theta)}$ with a trigonometric polynomial, and then raised that polynomial to power ${p}$.

If the given points had Cartesian coordinates ${(\pm a,0), (0,\pm b)}$ then this interpolation yields ${r=(\alpha+\beta \cos \theta)^p}$ where ${\alpha,\beta}$ depend on ${a,b}$ and satisfy ${\alpha>|\beta|}$. Using the exercise above, one can deduce that ${p=-1/2}$ is the only nonzero power for which the interpolated curve is convex for any given points of the form ${(\pm a,0), (0,\pm b)}$.

In general, curves of the form ${r=P(\theta)^{-1/2}}$, with ${P}$ a trigonometric polynomial, need not be convex. But even then they look more natural than their counterparts with powers ${p\ne -1/2}$. Perhaps this is not surprising: the equation ${r^2P(\theta)=1}$ has a decent chance of being algebraic when the degree of ${P}$ is low.

Random example at the end: ${r=(3-\sin \theta +2\cos \theta+\cos 2\theta-2\sin 2\theta)^{-1/2}}$

Celebration of power -1/2

## Peanut allergy and Polar interpolation

Draw a closed curve passing through the points ${(\pm 3,0)}$ and ${(0,\pm 1)}$: more specifically, from (3,0) to (0,1) to (-3,0) to (0,-1) and back to (3,0).

How hard could it be?

I’ll wait.

… Done?

This is an interpolation problem. The geometry of the problem does not allow for curves of the form ${y=f(x)}$. But polar graphs ${\rho=f(\theta)}$ should work. (Aside: in ${(x,y)}$ the variable usually taken to be independent is listed first; in ${(\rho,\theta)}$ it is listed second. Is consistent notation too much to ask for? Yes, I know the answer…) Since ${f}$ must be ${2\pi}$-periodic, it is natural to interpolate ${(3,0),(1,\pi/2),(3,\pi),(1,3\pi/2)}$ by a trigonometric polynomial. The polynomial ${f(\theta)=2+\cos 2\theta}$ does the job. But does it do it well?

I did not order peanuts

This is not what I would draw. My picture would be more like the ellipse with the given points as vertices:

Ellipse

The unnecessary concavity of the peanut graph comes from the second derivative ${f''}$ being too large at the points of minimum (and too small at the points of maximum). We need a function with flat minima and sharp maxima. Here is the comparison between ${f(\theta)=2+\cos 2\theta}$ (red) and the function that describes the ellipse in polar coordinates (blue):

We want blue, not red.

I thought of pre-processing: apply some monotone function ${\varphi\colon (0,\infty)\rightarrow \mathbb R}$ to the ${\rho}$-values, interpolate, and then compose the interpolant with ${\varphi^{-1}}$. The function ${\varphi}$ should have a large derivative near ${0}$, so that its inverse has a small derivative there, flattening the minima.

The first two candidates ${\varphi(r)=\log r}$ and ${\varphi(r)=1/r}$ did not improve the picture enough. But when I tried ${\varphi(r)=1/r^2}$, the effect surprised me. Interpolation between ${(1/9,0),(1,\pi/2),(1/9,\pi),(1,3\pi/2)}$ yields ${(5-4\cos 2\theta)/9}$, which after application of ${\varphi^{-1}}$ produces ${\rho = 3/\sqrt{5-4\cos 2\theta}}$: exactly the ellipse pictured above.
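The computation can be replayed in a few lines (numpy sketch; by symmetry of the data the interpolant has the form ${a+b\cos 2\theta}$, which is what the code below uses):

```python
import numpy as np

rho_nodes = np.array([3.0, 1.0, 3.0, 1.0])   # radii at theta = 0, pi/2, pi, 3pi/2

# interpolate the preprocessed values 1/rho^2 by a + b cos(2 theta)
vals = 1 / rho_nodes**2                      # 1/9, 1, 1/9, 1
a = vals.mean()                              # 5/9
b = vals[0] - a                              # -4/9

theta = np.linspace(0, 2*np.pi, 1000)
rho = (a + b*np.cos(2*theta)) ** (-0.5)      # = 3/sqrt(5 - 4 cos 2theta)

# compare with the ellipse x^2/9 + y^2 = 1
x, y = rho*np.cos(theta), rho*np.sin(theta)
print(np.max(np.abs(x**2/9 + y**2 - 1)))     # zero up to rounding
```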

Next, I tried a less symmetric example: curve through ${(5,0), (0,2), (-3,0), (0,-2)}$.

Egg without preprocessing

Preprocessed egg

Again, the second approach produces a more natural looking curve.

Finally, a curve with five-fold symmetry, with ${\rho}$ ranging from ${2}$ to ${5}$.

Star without preprocessing

Star with preprocessing

The interpolation and plotting were done in Scilab, by invoking the functions given below with commands polar1([3,1,3,1]) or polar2([2,5,2,5,2,5,2,5,2,5]), etc. Because I considered equally spaced interpolation nodes, the coefficients of interpolated polynomials are nothing but the (inverse) discrete Fourier transform of the given values. The first function interpolates the radius ${\rho}$ itself.

function polar1(R)
    clf()
    p = fft(R,1)            // inverse DFT: coefficients of the interpolating polynomial
    t = 0:.01:2*%pi
    rho = zeros(t)
    for i = 1:length(p)
        rho = rho + real(p(i)*exp(-%i*(i-1)*t))   // evaluate the trigonometric polynomial
    end
    polarplot(t,rho,style=5)
    t = resize_matrix(linspace(0,2*%pi,length(R)+1),1,length(R))  // interpolation nodes
    plot(R.*cos(t), R.*sin(t), 'o')               // mark the given points
endfunction

The second function includes preprocessing: it interpolates ${1/\rho^2}$ and then raises the interpolant to power ${-1/2}$.

function polar2(R)
    clf()
    p = fft(R.^(-2),1)      // interpolate the preprocessed values 1/rho^2
    t = 0:.01:2*%pi
    rho = zeros(t)
    for i = 1:length(p)
        rho = rho + real(p(i)*exp(-%i*(i-1)*t))
    end
    rho = max(rho,0.01*ones(rho))    // clamp away from zero before the negative power
    polarplot(t,rho.^(-1/2),style=5)
    t = resize_matrix(linspace(0,2*%pi,length(R)+1),1,length(R))  // interpolation nodes
    plot(R.*cos(t), R.*sin(t), 'o')  // mark the given points
endfunction
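For readers without Scilab, here is a rough numpy translation of the second routine (plotting omitted; the function name is mine, and the clamping constant is carried over from the Scilab code):

```python
import numpy as np

def polar_interp(R, power=-2.0, n=628):
    """Interpolate the preprocessed radii R**power at equally spaced angles
    by a trigonometric polynomial, then undo the preprocessing."""
    R = np.asarray(R, dtype=float)
    coeffs = np.fft.ifft(R ** power)      # inverse DFT of preprocessed values
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    vals = sum(np.real(c * np.exp(-1j * k * t)) for k, c in enumerate(coeffs))
    vals = np.maximum(vals, 0.01)         # clamp, as in the Scilab code
    return t, vals ** (1/power)

t, rho = polar_interp([3, 1, 3, 1])
# the peanut example becomes the ellipse x^2/9 + y^2 = 1
x, y = rho*np.cos(t), rho*np.sin(t)
print(np.max(np.abs(x**2/9 + y**2 - 1)))
```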

(Idea taken from a Math.SE question).

Added: one more comparison, ${\rho^{1}}$ vs ${\rho^{-2}}$ vs ${\rho^{-3}}$; the last one loses convexity again.

Raising to -3 is going too far.

And then to ${\rho^{-7}}$:

Power -7

## Golden ratio in stereometry

Is there really such a thing as icosahedron?

Euclid found this problem difficult enough to be placed near the end of the Elements, and few of his readers ever mastered his solution. A beautiful direct construction was given by Luca Pacioli, a friend of Leonardo da Vinci, in his book De divina proportione (1509).

from Mathematics and its History by John Stillwell

The model consists of three golden-ratio rectangles passing one through another cyclically. Besides the central slits, a topological obstruction requires a temporary cut in one rectangle, which is then taped over. The convex hull of the union is an icosahedron, its vertices being the ${3\cdot 4=12}$ vertices of the rectangles. Indeed, if the rectangles have dimensions ${2\varphi\times 2}$, then the vertices are ${(\pm \varphi,\pm 1,0)}$, ${(0,\pm \varphi,\pm 1)}$, ${(\pm 1,0,\pm \varphi)}$. To prove that the faces are regular triangles, it suffices to check that ${\varphi^2 + (1-\varphi)^2+1^2 =4}$, which quickly turns into ${\varphi^2=\varphi+1}$.
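The arithmetic behind "the faces are regular triangles" can also be checked by brute force (a Python sketch; with these coordinates the edge length is ${2}$, the short side of a rectangle, so every vertex should have exactly five neighbors at distance ${2}$):

```python
import numpy as np
from itertools import product

phi = (1 + np.sqrt(5)) / 2

# the 12 vertices: cyclic permutations of (±phi, ±1, 0)
verts = []
for s1, s2 in product([1, -1], repeat=2):
    verts += [(s1*phi, s2, 0), (0, s1*phi, s2), (s2, 0, s1*phi)]
verts = np.array(verts)

# pairwise distances; count neighbors at the edge length 2
D = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
edge_counts = np.sum(np.isclose(D, 2.0), axis=1)
print(edge_counts)  # five edges at each of the 12 vertices
```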

Proof of the existence of icosahedron

I used the Fibonacci approximation ${\varphi \approx F_{n+1}/F_n}$ to draw the rectangles (specifically, their dimensions are ${89\times 55}$). The central slit in rectangle of size ${F_{n+1}\times F_{n}}$ should begin at distance ${F_{n-1}/2}$ from the shorter side.

The group of rotational symmetries of the paper model is smaller than the icosahedral group ${A_5}$: it has order ${12}$ and acts freely on the vertices. Come to think of it, the group is ${A_4}$.

The second model is my favorite Catalan solid, rhombic triacontahedron. It is formed by 30 golden-ratio rhombi. I folded it from a net created by Robert Webb. The net looks pretty cool itself:

Robert Webb’s net

Assembly took a lot more scotch tape (and patience) than the first model.

Rhombic triacontahedron

## Three-point test for being holomorphic

This is a marvelous exercise in complex analysis; I heard it from Steffen Rohde but don’t remember the original source.

Let ${D=\{z\in \mathbb C\colon |z|<1\}}$. Suppose that a function ${f\colon D\rightarrow D}$ satisfies the following property: for every three points ${z_1,z_2,z_3\in D}$ there exists a holomorphic function ${g\colon D\rightarrow D}$ such that ${f(z_k)=g(z_k)}$ for ${k=1,2,3}$. Prove that ${f}$ is holomorphic.

No solution here, just some remarks.

• The domain does not matter, because holomorphicity is a local property.
• The codomain matters: ${D}$ cannot be replaced by ${\mathbb C}$. Indeed, for any function ${f\colon D\rightarrow\mathbb C}$ and any finite set ${z_1,\dots, z_n\in D}$ there is a holomorphic function that agrees with ${f}$ at ${z_1, \dots, z_n}$ — namely, an interpolating polynomial.
• Two points ${z_1,z_2}$ would not be enough. For example, ${f(z)=\mathrm{Re}\,z}$ passes the two-point test but is not holomorphic.

Perhaps the last item is not immediately obvious. Given two points ${z_1,z_2\in D}$, let ${x_k=\mathrm{Re}\,z_k}$. The hyperbolic distance ${\rho}$ between ${z_1}$ and ${z_2}$ is the infimum of ${\displaystyle \int_\gamma \frac{|dz|}{1-|z|^2}}$ taken over all curves ${\gamma}$ connecting ${z_1}$ to ${z_2}$. Projecting ${\gamma}$ onto the real axis, we obtain a parametrized curve ${\tilde \gamma}$ connecting ${x_1}$ to ${x_2}$.

Projection does not increase the hyperbolic distance.

Since

$\displaystyle \int_{\tilde \gamma} \frac{|dz|}{1-|z|^2} = \int_{\tilde \gamma} \frac{|dz|}{1-|\mathrm{Re}\,z|^2}\le \int_{\gamma} \frac{|dz|}{1-|\mathrm{Re}\,z|^2}\le \int_\gamma \frac{|dz|}{1-|z|^2}$

it follows that ${\rho(x_1,x_2)\le \rho(z_1,z_2)}$. That is, ${f}$ is a nonexpanding map in the hyperbolic metric of the disk.

We can assume that ${x_1\le x_2}$. There is a Möbius map ${\phi}$ such that ${\phi(z_1)=x_1}$; moreover, we
can arrange that ${\phi(z_2)}$ is a real number greater than ${x_1}$, by applying a hyperbolic rotation about ${x_1}$. Since ${\phi}$ is a hyperbolic isometry, ${\rho(x_1,\phi(z_2))\ge \rho(x_1,x_2)}$, which implies ${\phi(z_2)\ge x_2}$. Let ${\lambda(z)=x_1+(z-x_1)\dfrac{x_2-x_1}{\phi(z_2)-x_1}}$; this is a Euclidean homothety such that ${\lambda(x_1)=x_1}$ and ${\lambda(\phi(z_2))= x_2}$. By convexity of ${D}$, ${\lambda(D)\subset D}$. The map ${g=\lambda\circ \phi}$ achieves ${g(z_k)=x_k}$ for ${k=1,2}$.

The preceding can be immediately generalized: ${f}$ passes the two-point test if and only if it is a nonexpanding map in the hyperbolic metric. Such maps need not be differentiable even in the real-variable sense.
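A numerical spot check that ${z\mapsto \mathrm{Re}\,z}$ is nonexpanding in the hyperbolic metric (a sketch; `hyp_dist` uses the density ${1/(1-|z|^2)}$, for which the distance is ${\mathrm{arctanh}}$ of the pseudo-hyperbolic distance):

```python
import numpy as np

def hyp_dist(a, b):
    # hyperbolic distance in the unit disk for the density 1/(1-|z|^2)
    return np.arctanh(np.abs((a - b) / (1 - np.conj(a) * b)))

rng = np.random.default_rng(1)
z = rng.uniform(-0.7, 0.7, 1000) + 1j * rng.uniform(-0.7, 0.7, 1000)
z1, z2 = z[:500], z[500:]

# projection onto the real axis never increases the distance
ok = hyp_dist(z1.real + 0j, z2.real + 0j) <= hyp_dist(z1, z2) + 1e-12
print(ok.all())
```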

However, the three-point test is a different story.

## Dirichlet vs Fejér

Convolution of a continuous function ${f}$ on the circle ${\mathbb T=\mathbb R/(2\pi \mathbb Z)}$ with the Fejér kernel ${\displaystyle F_N(x)=\frac{1-\cos (N+1)x}{(N+1)(1-\cos x)}}$ is guaranteed to produce trigonometric polynomials that converge to ${f}$ uniformly as ${N\rightarrow\infty}$. For the Dirichlet kernel ${\displaystyle D_N(x)=\frac{\sin((N+1/2)x)}{\sin(x/2)}}$ this is not the case: the sequence may fail to converge to ${f}$ even pointwise. The underlying reason is that ${\int_{\mathbb T} |D_N|\rightarrow \infty }$, while the Fejér kernel, being positive, has constant ${L^1}$ norm. Does this mean that Fejér’s kernel is to be preferred for approximation purposes?
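These facts about the kernels are easy to confirm numerically, along with the standard identity that ${F_N}$ is the arithmetic mean of ${D_0,\dots,D_N}$ (a sketch; the grid size is an arbitrary choice of mine):

```python
import numpy as np

# grid on (-pi, pi) that happens to avoid x = 0, where the formulas are 0/0
x = np.linspace(-np.pi, np.pi, 200000)
dx = x[1] - x[0]

def D(N):
    return np.sin((N + 0.5)*x) / np.sin(x/2)

def F(N):
    return (1 - np.cos((N + 1)*x)) / ((N + 1)*(1 - np.cos(x)))

# F_N is the average of D_0, ..., D_N
print(np.max(np.abs(F(5) - sum(D(n) for n in range(6))/6)))

fejer_norm = np.sum(np.abs(F(200))) * dx / (2*np.pi)   # stays equal to 1
dirichlet_norms = [np.sum(np.abs(D(N))) * dx / (2*np.pi) for N in (2, 20, 200)]
print(fejer_norm, dirichlet_norms)   # the Lebesgue constants grow with N
```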

Let’s compare the performance of both kernels on the function ${f(x)=2\pi^2 x^2-x^4}$, which is reasonably nice: ${f\in C^2(\mathbb T)}$. Convolution with ${D_2}$ yields ${\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)D_2(x-t)\,dt = \frac{7\pi^4}{15} -48 \cos x +3 \cos 2x }$. The trigonometric polynomial is in blue, the original function in red:

Convolution with D_2

I’d say this is a very good approximation.

Now try the Fejér kernel, also with ${N=2}$. The polynomial is ${\displaystyle \frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)F_2(x-t)\,dt = \frac{7\pi^4}{15} - 32 \cos x + \cos 2x }$

Convolution with F_2

This is not good at all.

And even with ${N=20}$ terms the Fejér approximation is not as good as Dirichlet with merely ${N=2}$.

Convolution with F_20

The performance of ${F_{50}}$ is comparable to that of ${D_2}$. Of course, a ${50}$-term approximation is not what one normally wants to use. And it still has visible deviation near the origin, where the function ${f}$ is ${C^\infty}$ smooth:

Convolution with F_50

In contrast, the Dirichlet kernel with ${N=4}$ gives a low-degree polynomial
${\displaystyle \frac{7\pi^4}{15} -48 \cos x +3 \cos 2x -\frac{16}{27}\cos 3x+\frac{3}{16}\cos 4x}$ that approximates ${f}$ to within the resolution of the plot:

Convolution with D_4

What we have here is the trigonometric version of Biased and unbiased mollification. Convolution with ${D_N}$ amounts to truncation of the Fourier series at index ${N}$. Therefore, it reproduces the trigonometric polynomials of low degrees precisely. But ${F_N}$ performs soft thresholding: it multiplies the ${n}$th Fourier coefficient of ${f}$ by ${(1-|n|/(N+1))^+}$. In particular, it transforms ${\cos x}$ into ${(N/(N+1))\cos x}$, introducing the error of order ${1/N}$ — a pretty big one. Since this error is built into the kernel, it limits the rate of convergence no matter how smooth the function ${f}$ is. Such is the price that must be paid for positivity.
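The whole comparison can be reproduced directly from the Fourier coefficients ${c_n = 48(-1)^n/n^4}$ visible in the convolutions above (Python sketch; partial sums stand in for convolution with ${D_N}$, damped sums for ${F_N}$):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4001)
f = 2*np.pi**2*x**2 - x**4

# Fourier cosine coefficients of f: c_0 = 7 pi^4/15, c_n = 48(-1)^n/n^4
def c(n):
    return 7*np.pi**4/15 if n == 0 else 48*(-1)**n / n**4

def dirichlet(N):   # partial sum = convolution with D_N
    return sum(c(n)*np.cos(n*x) for n in range(N+1))

def fejer(N):       # Cesàro mean: coefficients damped by (1 - n/(N+1))
    return sum((1 - n/(N+1))*c(n)*np.cos(n*x) for n in range(N+1))

err_D2  = np.max(np.abs(f - dirichlet(2)))
err_F20 = np.max(np.abs(f - fejer(20)))
print(err_D2, err_F20)  # Dirichlet with N=2 already beats Fejér with N=20
```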

This reminds me of a parenthetical remark by G. B. Folland in Real Analysis (2nd ed., page 264):

if one wants to approximate a function ${f\in C(\mathbb T)}$ uniformly by trigonometric polynomials, one should not count on partial sums ${S_mf}$ to do the job; the Cesàro means work much better in general.

Right, for ugly “generic” elements of ${C(\mathbb T)}$ the Fejér kernel is a safer option. But for decently behaved functions the Dirichlet kernel wins by a landslide. The function above was ${C^2}$-smooth; as a final example I take ${f(x)=x^2}$ which is merely Lipschitz on ${\mathbb T}$. The original function is in red, ${f*D_4}$ is in blue, and ${f*F_4}$ is in green.

Dirichlet wins again

Added: the Jackson kernel ${J_{2N}}$ is the square of ${F_{N}}$, normalized. I use ${2N}$ as the index because squaring doubles the degree. Here is how it approximates ${f(x)=2\pi^2 x^2-x^4}$:

Convolution with J_2

Convolution with J_4

Convolution with J_20

The Jackson kernel performs somewhat better than ${F_N}$, because the coefficient of ${\cos x}$ is off by ${O(1/N^2)}$. Still not nearly as good as the non-positive Dirichlet kernel.


## Student mortality

The death of a former student brought to mind the question: what is the expected number of students that a professor will have outlived after N years of teaching? The obvious source of data is the CDC mortality table. However, a recent study found (perhaps unsurprisingly) that the mortality rate of U.S. college students is much lower than among the general U.S. population of same age. I don’t have access to the article itself, but the numbers quoted in the abstract add up to 19.44 deaths per 100,000. Compare this to the CDC rates for general population:

| Age | Mortality rate (per 100,000) |
|-----|------------------------------|
| 18  | 64.4 |
| 19  | 71.1 |
| 20  | 76.5 |
| 21  | 87.2 |
| 22  | 86.8 |
| 23  | 88.4 |

The rate jumps up so much at 21 that it then stays essentially flat for the next two years.

I decided to use the 19.44 rate for ages up to 22, and switched to CDC rates afterwards. Unfortunately, CDC does not group results by educational level; presumably, college graduates would have a lower rate even after leaving the protective shell of their campus. To simplify the computation, I assumed the student age to be 20 at the time of taking my class. Over my first 8 years of full-time teaching I averaged 170 undergraduate students per academic year. Assuming the trend continues, I came up with the following prediction:

| Years of teaching | Deceased former students |
|-------------------|--------------------------|
| 8  | 2   |
| 10 | 4   |
| 15 | 12  |
| 20 | 25  |
| 25 | 45  |
| 30 | 74  |
| 35 | 120 |
| 40 | 188 |

Also in the graph form:

Deaths vs years of teaching

The computation is straightforward: let ${M(k,n)}$ be the number of former students of age ${k}$ after ${n}$ years. Then ${M(20,n)=170}$ and ${M(k+1,n+1)=M(k,n)(1-r_k)}$ for ${k\ge 20}$. It remains to sum ${M(k,n)}$ over ${k}$ and subtract the result from ${170\,n}$.
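A sketch of this recurrence in Python. The mortality curve below is a placeholder I made up for illustration (19.44 per 100,000 through age 22, then an invented increasing rate), not the actual CDC table, so it will not reproduce the table above:

```python
def expected_deaths(years, per_year=170):
    def rate(age):
        # hypothetical mortality curve, NOT the CDC data
        if age <= 22:
            return 19.44 / 100000
        return min(0.001 * 1.08 ** (age - 23), 1.0)

    M = {}              # M[age] = living former students of a given age
    deaths = 0.0
    for _ in range(years):
        new_M = {}
        for age, count in M.items():        # every cohort ages by one year
            survivors = count * (1 - rate(age))
            deaths += count - survivors     # M(k+1, n+1) = M(k, n)(1 - r_k)
            new_M[age + 1] = survivors
        new_M[20] = float(per_year)         # fresh class of 20-year-olds
        M = new_M
    return deaths

print([round(expected_deaths(n), 1) for n in (8, 20, 40)])
```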