Nearest point projection, part II: beware of two-dimensional gifts

To avoid the complications from the preceding post, let’s assume that X is a uniformly convex Banach space. Such a space is automatically reflexive, which guarantees that every closed subspace A contains a nearest point to any given x\in X; uniform convexity also implies strict convexity, which makes that nearest point unique. Therefore A has a well-defined nearest point projection P\colon X\to A.

Recalling that in a Hilbert space P is a linear operator (and a self-adjoint one at that), we might ask whether our P is linear as well. Some of its properties point that way: the invariance of distance under translations by elements of A shows that P(x+a)=P(x)+a for all x\in X, a\in A. Consequently, all fibers P^{-1}(a) are translates of one another. The map P is also homogeneous: P(\lambda x)=\lambda P(x) for all scalars \lambda, which follows from the homogeneity of the norm together with the fact that A is closed under scalar multiplication. In particular, P^{-1}(0) is a two-sided cone: it is closed under multiplication by scalars.
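
To spell out the translation argument: for any b\in A,

\displaystyle \|(x+a)-(P(x)+a)\| = \|x-P(x)\| \le \|x-b\| = \|(x+a)-(b+a)\|

and since b\mapsto b+a maps A onto A, the point P(x)+a is indeed the nearest point of A to x+a.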

In the special case \dim (X/A)=1 we conclude that P^{-1}(0) is a line: by the translation property, P^{-1}(0) meets each coset x+A exactly once, and a two-sided cone with this property must be a line through the origin when \dim(X/A)=1. The direct sum decomposition X= A\oplus P^{-1}(0) then identifies P as a (possibly skewed) linear projection.

Well, one of the things that the geometry of Banach spaces teaches us is that 2-dimensional examples are often too simple to show what is really going on, while a 3-dimensional example may suffice. For instance, the \ell_1 and \ell_\infty norms define isometric spaces in 2 dimensions (the map (x_1,x_2)\mapsto (x_1+x_2,\, x_1-x_2) is an isometry from \ell_1^2 onto \ell_\infty^2), but not in 3 or more.

So, let’s take X to be 3-dimensional with the \ell_p norm \|x\|^p=|x_1|^p+|x_2|^p+|x_3|^p, where 1<p<\infty. Let A=\lbrace x_1=x_2=x_3\rbrace, so that the codimension of A is 2. What is the set P^{-1}(0)? We know it is a ruled surface: with each point it contains the line through that point and the origin. More precisely, P(x)=0 exactly when the minimum of d(t):=|x_1-t|^p+|x_2-t|^p+|x_3-t|^p is attained at t=0. (The minimum point is unique, since the function is strictly convex.) Since d is differentiable, this happens exactly when d'(0)=0, and computing the derivative reveals that

\displaystyle P^{-1}(0)=\lbrace x\colon |x_1|^{p-2}x_1+|x_2|^{p-2}x_2+|x_3|^{p-2}x_3 = 0\rbrace

which is a plane only when p=2. Here is this surface for p=4, when the equation simplifies to x_1^3+x_2^3+x_3^3=0:

[Image: a fiber of the nearest point projection]
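
For p=4 the surface is the graph x_3=-\sqrt[3]{x_1^3+x_2^3}, so it is easy to redraw; here is a minimal matplotlib sketch (the plotting range is an arbitrary choice of mine, not from the post):

```python
# Draw the fiber P^{-1}(0) for p = 4: the surface x1^3 + x2^3 + x3^3 = 0,
# written as a graph x3 = -cbrt(x1^3 + x2^3). The grid range is arbitrary.
import numpy as np
import matplotlib.pyplot as plt

x1, x2 = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
x3 = -np.cbrt(x1**3 + x2**3)   # np.cbrt takes real cube roots of negative numbers

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(x1, x2, x3, cmap="viridis")
ax.set_xlabel("$x_1$"); ax.set_ylabel("$x_2$"); ax.set_zlabel("$x_3$")
plt.show()
```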

The entire 3-dimensional space is foliated by the translates of this surface in the direction of the vector (1,1,1).
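
A quick numerical sanity check, as a sketch (the helper proj and the test points below are my own illustrative choices): compute P by one-dimensional minimization of d and verify both the fiber equation for p=4 and the failure of additivity.

```python
# Nearest point projection onto A = {(t,t,t)} in the l_p norm on R^3,
# computed by minimizing d(t) = sum |x_i - t|^p over t.
import numpy as np
from scipy.optimize import minimize_scalar

p = 4

def proj(x):
    d = lambda t: np.sum(np.abs(x - t) ** p)
    return minimize_scalar(d).x * np.ones(3)

x = np.array([1.0, 1.0, -2.0 ** (1/3)])   # x1^3 + x2^3 + x3^3 = 0
print(proj(x))                  # ~ (0, 0, 0): x lies in the fiber P^{-1}(0)

# Nonlinearity: P(x + y) need not equal P(x) + P(y).
y = np.array([1.0, 0.0, 0.0])
print(proj(x + y))              # approx 0.385 * (1, 1, 1)
print(proj(x) + proj(y))        # approx 0.443 * (1, 1, 1)
```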

The nearest point projection is likely to be the first nonlinear map one encounters in functional analysis. It is not even Lipschitz in general, although in decent spaces such as \ell_p for 1<p<\infty it is Hölder continuous (I think the optimal exponent is \frac{p\wedge 2}{p\vee 2}).

After a little thought, the nonlinearity of the nearest point projection is not so surprising: minimizing the distance amounts to solving an equation that involves the gradient of the norm, and this gradient is nonlinear unless the norm is a quadratic function, i.e., unless we are in a Hilbert space.
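
In coordinates, for X=\ell_p^n with 1<p<\infty and a subspace A, the first-order condition for a=P(x) reads

\displaystyle \sum_{i=1}^n |x_i-a_i|^{p-2}(x_i-a_i)\, v_i = 0 \quad \text{ for all } v\in A,

which generalizes the fiber equation above and depends linearly on x only when p=2.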
