The condition number of an invertible matrix $A$ is the product of norms $\kappa(A) = \|A\|\cdot\|A^{-1}\|$, or, in deciphered form, the maximum of the ratio $|Au|/|Av|$ taken over all unit vectors $u, v$. It comes up a lot in numerical linear algebra and in optimization problems. One way to think of the condition number is in terms of the image of the unit ball under $A$, which is an ellipsoid. The ratio of the longest and shortest axes of the ellipsoid is $\kappa(A)$.
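This is easy to check numerically. A quick sketch in Python/NumPy (the matrix below is an arbitrary invertible example): the product of spectral norms agrees with the ratio of the longest and shortest axes of the ellipsoid, estimated by sampling unit vectors.

```python
import numpy as np

# An arbitrary invertible matrix (hypothetical example values).
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Condition number as a product of (spectral) norms.
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)

# "Deciphered" form: the ratio of the longest and shortest axes of the
# ellipsoid A(unit ball), estimated by sampling unit vectors.
theta = np.linspace(0, 2 * np.pi, 10001)
units = np.stack([np.cos(theta), np.sin(theta)])   # unit vectors in the plane
lengths = np.linalg.norm(A @ units, axis=0)
ratio = lengths.max() / lengths.min()

print(kappa, ratio)   # the two values agree up to sampling error
```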

But for positive definite matrices the condition number can also be understood in terms of *rotation*. First, note that the positivity of $\langle Av, v\rangle$ says precisely that the angle between $v$ and $Av$ is always less than $\pi/2$. Let $\gamma$ be the maximum of such angles taken over all $v \ne 0$ (or over all unit vectors, the same thing). Then

$$\kappa(A) = \frac{1+\sin\gamma}{1-\sin\gamma} \tag{1}$$

One can also make (1) a little less geometric by introducing $\delta$ as the largest number such that $\langle Av, v\rangle \ge \delta\, |Av|\, |v|$ holds for all vectors $v$. Then $\delta = \cos\gamma$, and (1) takes the form

$$\kappa(A) = \frac{1+\sqrt{1-\delta^2}}{1-\sqrt{1-\delta^2}} \tag{2}$$
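Identity (2) can be sanity-checked numerically by sampling unit vectors; a NumPy sketch, with an arbitrary positive definite matrix as the example:

```python
import numpy as np

# An arbitrary positive definite matrix (hypothetical example values).
A = np.array([[2.0, 1.0],
              [1.0, 4.0]])

# delta = min of <Av, v> / (|Av| |v|) over unit vectors v, i.e. cos(gamma).
theta = np.linspace(0, 2 * np.pi, 200001)
v = np.stack([np.cos(theta), np.sin(theta)])
Av = A @ v
delta = (np.sum(v * Av, axis=0) / np.linalg.norm(Av, axis=0)).min()

kappa = np.linalg.cond(A, 2)
s = np.sqrt(1 - delta ** 2)        # sin(gamma)
print(kappa, (1 + s) / (1 - s))    # identity (2): the two values agree
```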

Could this be of any use? The inequality

$$\langle Av, v\rangle \ge \delta\, |Av|\, |v| \tag{3}$$

is obviously preserved under addition of matrices (by the triangle inequality $|(A+B)v| \le |Av| + |Bv|$). Therefore, it is preserved by integration. In particular, if the Hessian of a twice differentiable convex function $u$ satisfies (3) at every point, then integration along the line segment from $y$ to $x$ yields

$$\langle \nabla u(x) - \nabla u(y),\, x-y\rangle \ge \delta\, |\nabla u(x) - \nabla u(y)|\, |x-y| \tag{4}$$

Conversely, if $u$ is a twice differentiable convex function such that (4) holds, then (by differentiation) its Hessian satisfies (3), and therefore admits a uniform bound on its condition number by virtue of (2). Thus, for such functions inequality (4) is equivalent to uniform boundedness of the condition number of the Hessian.

But the Hessian itself **does not appear** in (4). Condition (4) expresses “uniform boundedness of the condition number of the Hessian” without requiring $u$ to be twice differentiable. As a simple example, take $u(x) = |x|^{3/2}$. The Hessian matrix is

$$\frac{3}{2}\,|x|^{-1/2}\, I \;-\; \frac{3}{4}\,|x|^{-5/2}\, x x^T$$

The eigenvalues are $\frac{3}{2}|x|^{-1/2}$ (on the orthogonal complement of $x$) and $\frac{3}{4}|x|^{-1/2}$ (in the direction of $x$). Thus, even though the eigenvalues blow up at the origin and decay at infinity, the condition number of the Hessian remains equal to $2$. Well, except that the second derivatives do not exist at the origin. But if we use the form (4) instead, with $\delta = \frac{2\sqrt{2}}{3}$, then non-differentiability becomes a non-issue.
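Here is a quick numerical sanity check of (4) for $u(x) = |x|^{3/2}$, whose condition number $2$ corresponds to $\delta = \cos\gamma = \frac{2\sqrt 2}{3} \approx 0.9428$; the sample points are arbitrary random pairs, including points far from and close to the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_u(x):
    # gradient of u(x) = |x|^(3/2), namely (3/2) |x|^(-1/2) x
    r = np.linalg.norm(x)
    return 1.5 * r ** (-0.5) * x

delta = 2 * np.sqrt(2) / 3   # cos(gamma) for condition number 2

worst = 1.0
for _ in range(10000):
    x, y = rng.normal(size=2), rng.normal(size=2)
    g = grad_u(x) - grad_u(y)
    lhs = np.dot(g, x - y)
    rhs = np.linalg.norm(g) * np.linalg.norm(x - y)
    if rhs > 0:
        worst = min(worst, lhs / rhs)

print(worst)   # never drops below delta = 2*sqrt(2)/3
```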

Let’s prove (2). It suffices to work in two dimensions, because both $\kappa$ and $\delta$ are determined by the restrictions of $A$ to two-dimensional subspaces. In two dimensions we can represent the linear map $A$ as $z \mapsto pz + q\bar z$ for some complex numbers $p, q$. Actually, $p$ is real and positive because $A$ is symmetric positive definite. As $z$ runs through unimodular complex numbers, the maximum of $|pz + q\bar z|$ is $p + |q|$ and the minimum is $p - |q|$. Therefore, $\kappa(A) = \dfrac{p+|q|}{p-|q|}$.

When $|z| = 1$, the angle that the vector $pz + q\bar z$ forms with $z$ is equal to the argument of $\bar z(pz + q\bar z) = p + q\bar z^{\,2}$. The latter is maximized when $0$, $p$, and $p + q\bar z^{\,2}$ form a right triangle with hypotenuse $[0, p]$.

Hence, $\sin\gamma = |q|/p$. This proves $\kappa(A) = \dfrac{p+|q|}{p-|q|} = \dfrac{1+\sin\gamma}{1-\sin\gamma}$, which is (1), and (2) follows because $\delta = \cos\gamma$.
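The bookkeeping in this proof is easy to verify numerically. A NumPy sketch with arbitrary hypothetical values of $p$ and $q$ (subject to $p > |q|$): the real matrix of $z \mapsto pz + q\bar z$ has condition number $(p+|q|)/(p-|q|)$, and the maximal rotation angle satisfies $\sin\gamma = |q|/p$.

```python
import numpy as np

# Hypothetical values with p > |q|, so the matrix is positive definite.
p, q = 3.0, 1.0 + 0.5j

# Real 2x2 matrix of the map z -> p z + q conj(z).
A = np.array([[p + q.real, q.imag],
              [q.imag, p - q.real]])

kappa = np.linalg.cond(A, 2)
print(kappa, (p + abs(q)) / (p - abs(q)))   # these agree

# sin(gamma) = |q|/p, with gamma estimated by sampling unit vectors.
theta = np.linspace(0, 2 * np.pi, 200001)
v = np.stack([np.cos(theta), np.sin(theta)])
Av = A @ v
gamma = np.arccos((np.sum(v * Av, axis=0) / np.linalg.norm(Av, axis=0)).min())
print(np.sin(gamma), abs(q) / p)            # these agree too
```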

There are similar observations in the literature on quasiconformality of monotone maps, including an inequality similar to (2) (for general matrices), but I have not seen either (1) or (2) stated as an identity for positive definite matrices.