## Bounded convex function with no continuous boundary extension

Suppose ${f\colon \Omega\rightarrow\mathbb R}$ is a convex function on a bounded convex domain ${\Omega\subset\mathbb R^n}$. Does it have a continuous extension to ${\overline{\Omega}}$?

Of course not if ${f}$ is unbounded, like ${f(x)=1/x}$ on the interval ${(0,1)}$. So let’s assume ${f}$ is bounded. Then the answer is positive in one dimension, and easy to prove: ${f}$ is monotone in some neighborhood of a boundary point and, being bounded, must have a finite limit there.

In higher dimensions, the above argument does not work anymore. And indeed, a bounded convex function on a nice domain (like a disk) may fail to have a limit at some boundary points.

The example I describe is geometrically natural, but doesn’t seem to have a neat closed form equation. Let ${U}$ be the upper half of the open unit disk. Its boundary consists of the diameter ${I}$ and the semicircle ${C}$. Define

$\displaystyle f(x,y) = \sup \{\ell(x,y) : \ell \text{ is affine }, \ \ell \le 0 \text{ on }I, \ \ell\le 1 \text{ on }C\}$

Equivalently, take the convex hull of the set ${(I\times \{0\} )\cup( C\times \{1\})}$ in ${\mathbb R^3}$, and let ${z=f(x,y)}$ be the equation of its bottom surface.

This is a convex function by construction. It takes a bit of (routine) work to show that ${f}$ has limit ${0}$ everywhere on ${I}$, except the endpoints, and that it has limit ${1}$ on ${C}$, again except the endpoints. Consequently, there is no limit of ${f(x,y)}$ as ${(x,y)\rightarrow (1,0)}$.

Here’s a plot of ${f}$:

To obtain it in a reasonably efficient way, I had to narrow down the class of affine functions without changing the supremum. Note that if ${\sup_U \ell<1}$, then dividing ${\ell}$ by ${\sup_U \ell}$ gives a better contributor to the supremum. (This increases ${\ell}$ where it is positive, and the parts where it is negative do not matter anyway since ${f\ge0}$.)

Let ${(\cos t, \sin t)}$ be the point of ${C}$ where ${\ell }$ attains the value ${1}$. Then ${\nabla \ell}$ is parallel to the radius at that point. To fulfill the condition ${\ell\le 0}$ on ${I}$, the function must decay quickly enough along the radius toward the center. The required rate is found by projecting ${(\pm 1,0)}$ onto the radius: it is ${(1-|\cos t|)^{-1}}$. Hence, we only need to consider

$\displaystyle \ell(x,y) = \frac{x\cos t+y\sin t-|\cos t|}{1-|\cos t|}$

The one-parameter supremum over ${0<t<\pi}$ is not hard to evaluate numerically. Here is the Matlab code that produced the plot above.

```matlab
% polar grid on the upper half-disk, denser near the boundary
[R,T] = meshgrid(0:.01:1, 0:.01:pi);
X = (2-R).*R.*cos(T); Y = (2-R).*R.*sin(T);
Z = zeros(size(X));
% supremum over the one-parameter family of affine functions
t = .001:.001:pi-.001;
for k = 1:length(t)
    Z = max(Z, (X*cos(t(k))+Y*sin(t(k))-abs(cos(t(k))))./(1-abs(cos(t(k)))));
end
Z = min(Z, 1);   % cut off floating point errors above 1
surf(X, Y, Z)
```

The truncation step Z = min(Z, 1); would be unnecessary in exact arithmetic, but it helps to cut off floating point errors.

The factor (2-R) is attached to the polar coordinate transformation so that there are more polar circles near the boundary, where the function changes most rapidly.
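For readers who prefer Python, here is a NumPy translation of the same computation (the grid sizes are my choice, not tuned), used to check the claimed boundary behavior near the corner ${(1,0)}$: the values of ${f}$ stay near ${0}$ just above the diameter and near ${1}$ just inside the semicircle.

```python
import numpy as np

def f(x, y, n=20000):
    # maximize the one-parameter family of affine minorants over t in (0, pi)
    t = np.linspace(1e-4, np.pi - 1e-4, n)
    ell = (x*np.cos(t) + y*np.sin(t) - np.abs(np.cos(t))) / (1 - np.abs(np.cos(t)))
    return min(max(ell.max(), 0.0), 1.0)   # f >= 0; clip rounding errors above 1

# Two points close to the corner (1,0): one just above the diameter I,
# one just inside the semicircle C. The values of f stay far apart.
f_near_I = f(0.9999, 0.001)
f_near_C = f((1 - 1e-7)*np.cos(0.01), (1 - 1e-7)*np.sin(0.01))
print(f_near_I, f_near_C)   # small vs close to 1
```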

Finally, note that the nonsmoothness of the domain ${U}$ is not an issue here. The function ${f}$ naturally extends to a convex function on the open unit disk, by letting ${f=0}$ on the bottom half.

(Adapted from Extension of bounded convex function to boundary)

## Condition number and maximal rotation angle

The condition number ${K(A)}$ of an invertible matrix ${A}$ is the product of norms ${\|A\|\,\|A^{-1}\|}$, or, in deciphered form, the maximum of the ratio ${|Au|/|Av|}$ taken over all unit vectors ${u,v}$. It comes up a lot in numerical linear algebra and in optimization problems. One way to think of the condition number is in terms of the image of the unit ball under ${A}$, which is an ellipsoid. The ratio of longest and shortest axes of the ellipsoid is ${K(A)}$.

But for positive definite matrices ${A}$ the condition number can also be understood in terms of rotation. First, note that the positivity of ${\langle Av,v\rangle }$ says precisely that the angle between ${v}$ and ${Av}$ is always less than ${\pi/2}$. Let ${\gamma}$ be the maximum of such angles taken over all ${v\ne 0}$ (or over all unit vectors, the same thing). Then

$\displaystyle K(A)=\frac{1+\sin\gamma }{1-\sin\gamma} \ \ \ \ \ \ \ \ \ (1)$

One can also make (1) a little less geometric by introducing ${\delta=\delta(A)}$ as the largest number such that ${\langle Av,v\rangle \ge \delta|Av|\,|v|}$ holds for all vectors ${v}$. Then ${\delta=\cos \gamma}$, and (1) takes the form

$\displaystyle K=\frac{1+\sqrt{1-\delta^2}}{1-\sqrt{1-\delta^2}} = \left(\frac{1+\sqrt{1-\delta^2}}{\delta}\right)^2 \ \ \ \ \ \ \ \ \ \ (2)$
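Both identities are easy to test numerically. Here is a quick Python check (the sample matrix is an arbitrary choice of mine), estimating ${\gamma}$ by sampling unit vectors:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # a symmetric positive definite matrix

# condition number from the eigenvalues (for SPD matrices K = lambda_max/lambda_min)
w = np.linalg.eigvalsh(A)                # ascending eigenvalues
K = w[-1] / w[0]

# maximal rotation angle gamma: max angle between v and Av over unit vectors v
theta = np.linspace(0, 2*np.pi, 200001)
V = np.stack([np.cos(theta), np.sin(theta)])        # unit vectors as columns
AV = A @ V
cosang = np.sum(V * AV, axis=0) / np.linalg.norm(AV, axis=0)
gamma = np.arccos(cosang.min())

print(K, (1 + np.sin(gamma)) / (1 - np.sin(gamma)))  # the two sides of (1)
```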

Could this be of any use? The inequality

$\displaystyle \langle Av,v\rangle \ge \delta\,|Av|\,|v| \ \ \ \ \ \ \ \ \ \ (3)$

is obviously preserved under addition of matrices. Therefore, it is preserved by integration. In particular, if the Hessian of a twice differentiable convex function ${u}$ satisfies (3) at every point, then integration along a line segment from ${x}$ to ${y}$ yields

$\displaystyle \langle \nabla u(x)- \nabla u(y),x-y\rangle \ge \delta\,|\nabla u(x)-\nabla u(y)|\,|x-y| \ \ \ \ \ \ \ \ \ \ (4)$

Conversely, if ${u}$ is a twice differentiable convex function such that (4) holds, then (by differentiation) its Hessian satisfies (3), and therefore admits a uniform bound on the condition number by virtue of (2). Thus, for such functions inequality (4) is equivalent to uniform boundedness of the condition number of the Hessian.

But the Hessian itself does not appear in (4). Condition (4) expresses “uniform boundedness of the condition number of the Hessian” without requiring ${u}$ to be twice differentiable. As a simple example, take ${u(x_1,x_2)=|x|^{4/3}}$. The Hessian matrix is

$\displaystyle \frac{4}{9}|x|^{-8/3} \begin{pmatrix} x_1^2+3x_2^2 & -2x_1x_2 \\ -2x_1x_2 & 3x_1^2+x_2^2 \end{pmatrix}$

The eigenvalues are ${\frac{4}{3}|x|^{-2/3}}$ and ${\frac{4}{9}|x|^{-2/3}}$. Thus, even though the eigenvalues blow up at the origin and decay at infinity, the condition number of the Hessian remains equal to ${3}$. Well, except that the second derivatives do not exist at the origin. But if we use the form (4) instead, with ${\delta = \sqrt{3}/2}$, then non-differentiability becomes a non-issue.
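Both claims can be checked numerically. The Python sketch below (the finite-difference step and sample sizes are my choices) computes the condition number of the Hessian of ${|x|^{4/3}}$ at random points and then tests inequality (4) with ${\delta=\sqrt{3}/2}$ on random pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_u(x):
    # gradient of u(x) = |x|^(4/3)
    return (4.0/3.0) * np.linalg.norm(x)**(-2.0/3.0) * x

# 1. Condition number of the Hessian at random points, via central differences
conds = []
for _ in range(100):
    x = rng.normal(size=2)
    h = 1e-6 * np.linalg.norm(x)
    H = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        H[:, j] = (grad_u(x + e) - grad_u(x - e)) / (2*h)
    w = np.linalg.eigvalsh((H + H.T) / 2)
    conds.append(w[-1] / w[0])
print(min(conds), max(conds))    # both close to 3

# 2. Inequality (4) with delta = sqrt(3)/2, checked on random pairs
ratios = []
for _ in range(1000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    g = grad_u(x) - grad_u(y)
    ratios.append(g @ (x - y) / (np.linalg.norm(g) * np.linalg.norm(x - y)))
print(min(ratios))               # never below sqrt(3)/2 ~ 0.866
```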

Let’s prove (2). It suffices to work in two dimensions, because both ${K(A)}$ and ${\delta(A)}$ are determined by the restrictions of ${A}$ to two-dimensional subspaces. In two dimensions we can represent the linear map ${A}$ as ${z\mapsto \alpha z+\beta \bar z}$ for some complex numbers ${\alpha,\beta}$. Actually, ${\alpha}$ is real and positive because ${A}$ is symmetric positive definite. As ${z}$ runs through unimodular complex numbers, the maximum of ${|\alpha z+\beta \bar z|}$ is ${\alpha+|\beta|}$ and the minimum is ${\alpha-|\beta|}$. Therefore, ${K(A)=\frac{1+|\beta|/\alpha}{1-|\beta|/\alpha}}$.

When ${|z|=1}$, the angle ${\gamma}$ that the vector ${\alpha z+\beta \bar z}$ forms with ${z}$ is equal to the argument of ${\bar z (\alpha z+\beta \bar z)=\alpha+\beta \bar z^2}$. The latter is maximized when ${0, \alpha, \alpha+\beta \bar z^2}$ form a right triangle with the right angle at ${\alpha+\beta \bar z^2}$ and hypotenuse the segment from ${0}$ to ${\alpha}$.

Hence, ${\sin\gamma = |\beta|/\alpha}$, the ratio of the leg ${|\beta|}$ to the hypotenuse ${\alpha}$. Together with ${K(A)=\frac{1+|\beta|/\alpha}{1-|\beta|/\alpha}}$, this proves (1), and (2) follows.
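As a sanity check of the complex representation, here is a short Python snippet (random matrix, seed arbitrary): for ${A=\begin{pmatrix} a & b \\ b & d\end{pmatrix}}$ one has ${\alpha=(a+d)/2}$ and ${\beta=(a-d)/2+ib}$, and the eigenvalues of ${A}$ are ${\alpha\pm|\beta|}$.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2))
A = M.T @ M + 0.1*np.eye(2)       # a random symmetric positive definite matrix
(a, b), (_, d) = A

# z -> alpha*z + beta*conj(z) represents A on C = R^2
alpha = (a + d) / 2               # real, since A is symmetric
beta = (a - d) / 2 + 1j * b

w = np.linalg.eigvalsh(A)
K = w[-1] / w[0]
print(K, (alpha + abs(beta)) / (alpha - abs(beta)))   # equal
```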

There are similar observations in the literature on quasiconformality of monotone maps, including an inequality similar to (2) (for general matrices), but I have not seen either (1) or (2) stated as an identity for positive definite matrices.

## Convexity of polar curves

Everybody knows the second derivative test for the convexity of Cartesian curves ${y=y(x)}$. What is the convexity test for polar curves ${r=r(\theta)}$? Google search brought up Robert Israel’s answer on Math.SE: the relevant inequality is

$\displaystyle \mathcal{C}[r]:=r^2+2(r')^2-r\,r''\ge 0 \ \ \ \ \ (1)$

But when using it, one should be aware of the singularity at the origin. For example, ${r=1+\cos \theta}$ satisfies ${\mathcal{C}[r] = 3(1+\cos \theta)\ge 0}$, but the curve is not convex: it’s the cardioid.

The formula (1) was derived for ${r> 0}$; the points with ${r=0}$ must be investigated directly. Actually, it is easy to see that when ${r }$ has a strict local minimum with value ${0}$, the polar curve has an inward cusp and therefore is not convex.
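A quick numerical illustration (the sample points near the cusp are my choice): the cardioid satisfies (1) everywhere, yet the midpoint of a chord joining two points of the region near the cusp lies outside the region, so the region is not convex.

```python
import numpy as np

theta = np.linspace(0, 2*np.pi, 10001)
r, rp, rpp = 1 + np.cos(theta), -np.sin(theta), -np.cos(theta)
C = r**2 + 2*rp**2 - r*rpp       # equals 3(1 + cos theta)
print(C.min())                   # nonnegative: inequality (1) holds everywhere

def point(t):                    # Cartesian point of the cardioid
    return np.array([(1 + np.cos(t))*np.cos(t), (1 + np.cos(t))*np.sin(t)])

# chord between two boundary points near the cusp, symmetric about the x-axis
m = (point(np.pi - 0.5) + point(np.pi + 0.5)) / 2
ang = np.arctan2(m[1], m[0])     # the midpoint sits at angle ~ pi
# midpoint radius ~ 0.107 exceeds the boundary radius 1 + cos(pi) = 0
print(np.linalg.norm(m), 1 + np.cos(ang))
```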

As usual, theoretical material is followed by an exercise.

Exercise: find all real numbers ${p}$ such that the polar curve ${r=(1+\cos 2\theta)^p}$ is convex.

All values ${p>0}$ are ruled out by the cusp formed at ${\theta=\pi/2}$. For ${p=0}$ we get a circle, obviously convex. When ${p<0}$, some calculations are in order:

$\displaystyle \mathcal{C}[r] = (1+4p+4p^2+(1-4p^2)\cos 2\theta)(1+\cos 2\theta)^{2p-1}$

For this to be nonnegative for all ${\theta}$, we need ${1+4p+4p^2\ge |1-4p^2|}$, which is equivalent to two inequalities: ${p(4+8p) \ge 0}$ and ${2+4p\ge 0}$. Since ${p<0}$, the first inequality says that ${p\le -1/2}$, while the second says that ${p\ge -1/2}$.

Answer: ${p=0}$ and ${p=-1/2}$.
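The algebra above can be sanity-checked in Python (the grid and sample exponents are my choice): compare the displayed formula for ${\mathcal{C}[r]}$ with a direct evaluation from exact derivatives of ${r=(1+\cos 2\theta)^p}$, then look at the sign of the first factor.

```python
import numpy as np

def C_direct(p, t):
    # C[r] = r^2 + 2 r'^2 - r r'' for r = (1 + cos 2t)^p, from exact derivatives
    s = 1 + np.cos(2*t)
    r   = s**p
    rp  = -2*p*np.sin(2*t)*s**(p - 1)
    rpp = -4*p*np.cos(2*t)*s**(p - 1) + 4*p*(p - 1)*np.sin(2*t)**2*s**(p - 2)
    return r**2 + 2*rp**2 - r*rpp

t = np.linspace(0.1, np.pi/2 - 0.1, 1000)   # stay away from r = 0
err, qmin = {}, {}
for p in (-1.0, -0.5, -0.25):
    formula = (1 + 4*p + 4*p**2 + (1 - 4*p**2)*np.cos(2*t)) * (1 + np.cos(2*t))**(2*p - 1)
    err[p] = np.max(np.abs(C_direct(p, t) - formula) / (np.abs(formula) + 1))
    # the sign of C[r] is the sign of the first factor
    qmin[p] = np.min(1 + 4*p + 4*p**2 + (1 - 4*p**2)*np.cos(2*t))
print(err)    # all tiny: the displayed formula checks out
print(qmin)   # nonnegative only for p = -1/2
```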

This exercise is relevant to the problem from the previous post, finding the “right” interpolation method in polar coordinates. Given a set of ${(r,\theta)}$ values, I interpolated ${(r^{1/p},\theta)}$ with a trigonometric polynomial, and then raised that polynomial to power ${p}$.

If the given points had Cartesian coordinates ${(\pm a,0), (0,\pm b)}$ then this interpolation yields ${r=(\alpha+\beta \cos 2\theta)^p}$ where ${\alpha,\beta}$ depend on ${a,b}$ and satisfy ${\alpha>|\beta|}$. Using the exercise above, one can deduce that ${p=-1/2}$ is the only nonzero power for which the interpolated curve is convex for any given points of the form ${(\pm a,0), (0,\pm b)}$.
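To see why ${p=-1/2}$ behaves so well here, note that the interpolation through ${(\pm a,0), (0,\pm b)}$ with ${p=-1/2}$ can be written down explicitly: ${r^{-2}=\alpha+\beta\cos 2\theta}$ with ${\alpha+\beta=a^{-2}}$ and ${\alpha-\beta=b^{-2}}$, and the resulting curve is exactly the ellipse ${(x/a)^2+(y/b)^2=1}$. A short check (sample ${a,b}$ are mine):

```python
import numpy as np

a, b = 2.0, 1.0
alpha = (a**-2 + b**-2) / 2
beta  = (a**-2 - b**-2) / 2        # alpha > |beta| since a, b > 0

t = np.linspace(0, 2*np.pi, 1000)
r = (alpha + beta*np.cos(2*t))**-0.5
x, y = r*np.cos(t), r*np.sin(t)
err = np.max(np.abs((x/a)**2 + (y/b)**2 - 1))
print(err)   # ~0: the interpolated curve is the ellipse with semi-axes a, b
```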

In general, curves of the form ${r=P(\theta)^{-1/2}}$, with ${P}$ a trigonometric polynomial, need not be convex. But even then they look more natural than their counterparts with powers ${p\ne -1/2}$. Perhaps this is not surprising: the equation ${r^2P(\theta)=1}$ has a decent chance of being algebraic when the degree of ${P}$ is low.

Random example at the end: ${r=(3-\sin \theta +2\cos \theta+\cos 2\theta-2\sin 2\theta)^{-1/2}}$