Derivations and the curvature tensor

Let {M} be a Riemannian manifold with Riemannian connection {\nabla}. A connection is a thing that knows how to differentiate a vector field {Y} in the direction of a vector field {X}; the result is denoted by {\nabla_X Y} and is also a vector field. For consistency of notation, it is convenient to write {\nabla_X f } for the derivative of scalar function {f} in the direction {X}, even though this derivative does not need a connection: vector fields are born with the ability to differentiate functions.
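In other words, on scalar functions the connection reduces to the ordinary directional derivative:

\displaystyle  \nabla_X f = Xf = df(X)

so the extended notation is pure convenience.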

The pairs {(f,Y)}, with {f} a scalar function and {Y} a vector field, form a funky nonassociative algebra {\mathcal{A}} described in the previous post. And {\nabla_X} is a derivation on this algebra, because

  • {\nabla_X(fY) = f\nabla_X Y + (\nabla_X f)Y } by the definition of a connection
  • {\nabla_X\langle Y, Z\rangle = \langle \nabla_X Y, Z\rangle + \langle Y, \nabla_X Z\rangle} by the metric property of the Riemannian connection.
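Combining the two bullets, with the product on {\mathcal{A}} from the previous post (formula (2) below, with {\langle Y,Z\rangle} in place of {u\cdot v}) and writing {\nabla_X(f,Y)=(\nabla_X f,\nabla_X Y)}, both rules merge into a single Leibniz identity:

\displaystyle  \nabla_X\left\{(f,Y)(g,Z)\right\} = \left(\nabla_X(f,Y)\right)(g,Z)+(f,Y)\,\nabla_X(g,Z)

as one checks by expanding both components.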

Recall that the commutator of two derivations is a derivation. Or just check this again: if {{}'} and {{}^\dag} are derivations, then

{ {(ab)'}^\dag = (a'b+ab')^\dag = {a'}^\dag b+a'b^\dag +a^\dag b'+a{b'}^\dag }
{  {(ab)^\dag}' = (a^\dag b+ab^\dag)' = {a^\dag }' b+a^\dag b' +a' b^\dag +a{b^\dag}' }

and the difference {{(ab)'}^\dag-{(ab)^\dag}'} simplifies to what it should be.
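Explicitly, the mixed terms {a'b^\dag} and {a^\dag b'} cancel in the subtraction, leaving

{ {(ab)'}^\dag-{(ab)^\dag}' = ({a'}^\dag-{a^\dag}')b+a({b'}^\dag-{b^\dag}') }

which is the Leibniz rule for the commutator.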

Thus, for any pair {X,Y} of vector fields the commutator {\nabla_X \nabla_Y-\nabla_Y\nabla_X} is a derivation on {\mathcal{A}}. The torsion-free property of the connection tells us how it works on functions:

\displaystyle  (\nabla_X \nabla_Y-\nabla_Y\nabla_X) f =   \nabla_{[X,Y]}f=\nabla_{\nabla_XY}f -\nabla_{\nabla_YX}f
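Indeed, on scalar functions {\nabla_X\nabla_Y f=X(Yf)}, so

\displaystyle  (\nabla_X \nabla_Y-\nabla_Y\nabla_X) f = X(Yf)-Y(Xf) = [X,Y]f

and the torsion-free property {[X,Y]=\nabla_X Y-\nabla_Y X} gives the second equality above.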

Subtracting {\nabla_{[X,Y]}} from the commutator, we get a derivation that kills scalar functions:

\displaystyle  R(X,Y) = \nabla_X \nabla_Y-\nabla_Y\nabla_X - \nabla_{[X,Y]}

But a derivation that kills scalar functions is linear over functions:

\displaystyle R(X,Y)(fZ) = R(X,Y)(f)\, Z + f\,R(X,Y)Z = f\,R(X,Y)Z

In plain terms, {R(X,Y)} processes any given vector field {Z} pointwise, applying some linear operator {L_p} to the vector {Z_p} at every point {p} of the manifold. No first- or second-order derivatives of {Z} are actually taken.

Moreover, the derivation property immediately implies that {R(X,Y)} is a skew-symmetric operator: for any vector fields {Z,W}

\displaystyle  \langle R(X,Y)Z,W\rangle + \langle R(X,Y)W,Z\rangle  = R(X,Y)\langle Z,W\rangle =0

because {R(X,Y)} kills scalar functions.

The other kind of skew-symmetry was evident from the beginning: {R(X,Y)=-R(Y,X)} by definition.

What is not yet evident is that {R(X,Y)} is also a tensor in {X} and {Y}, that is, it does not differentiate the direction fields themselves. To prove this, write {R(X,Y)=\nabla_{X,Y}^2-\nabla_{Y,X}^2} where

\displaystyle  \nabla_{X,Y}^2 = \nabla_X \nabla_Y - \nabla_{\nabla_X Y}

should be thought of as the pointwise second-order derivative in the directions {X,Y} (i.e., the result of plugging two direction vectors into the Hessian matrix). Indeed, {\nabla_{X,Y}^2-\nabla_{Y,X}^2 = \nabla_X\nabla_Y-\nabla_Y\nabla_X-\nabla_{\nabla_X Y-\nabla_Y X}}, and torsion-freeness turns the last subscript into {[X,Y]}. By symmetry, it suffices to show that {\nabla_{X,Y}^2} is a tensor in {X} and {Y}. For {X}, this is clear from the definition of a connection. Concerning {Y}, we have

{ \nabla_{X,fY}^2 = \nabla_X (f\nabla_{Y}) - \nabla_{(\nabla_X f) Y+f\nabla_X Y} }
{= f \nabla_X \nabla_{Y} + (\nabla_X f )\nabla_{Y} - (\nabla_X f) \nabla_{ Y} - f \nabla_{\nabla_X Y}  }
{= f \nabla_{X,Y}^2  }

That’s it: we have a tensor that takes three vector fields {X,Y,Z} and produces a fourth one, denoted {R(X,Y)Z}. Now I wonder if there is a way to use the language of derivations to give a slick proof of the first Bianchi identity, {R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0}.

To avoid having two picture-less posts in a row, here is something completely unrelated:

[Image: New design of the Command key?]

This is the image of the unit circle {|z|=1} under the harmonic polynomial {z^3-\sqrt{3}\,\bar z}. Which area is larger: red or green? The answer is below.

They are equal.
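A sketch of why, assuming the red and green regions are the ones swept with opposite orientations: parametrize the curve as {z(t)=e^{3it}-\sqrt{3}\,e^{-it}} and compute the signed area via Green's theorem. Each term {c\,e^{int}} contributes {\pi n|c|^2}, so

\displaystyle  \frac12\int_0^{2\pi} \mathrm{Im}(\bar z\,\dot z)\,dt = \pi\left(3\cdot 1^2+(-1)\cdot(\sqrt{3})^2\right)=0

and zero signed area means the positively and negatively covered areas coincide.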

Derivations

A map {D} is a derivation if it satisfies the Leibniz rule: {D(ab)=D(a)b+aD(b)}. To make sense out of this, we need to be able to

  • multiply arguments of {D} together
  • multiply values of {D} by arguments of {D}
  • add the results

For example, if {D\colon R\to M} where {R} is a ring and {M} is a two-sided module over {R}, then all of the above makes sense. In practice it often happens that {M=R}. In this case, the commutator (Lie bracket) of two derivations {D_1,D_2} is defined as {[D_1,D_2]=D_1\circ D_2-D_2\circ D_1} and turns out to be a derivation as well. If {R} is also an algebra over a field {K}, then {K}-linearity of {D} can be added to the requirements of being a derivation, but I am not really concerned about that.
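For example, with {R=M=C^\infty({\mathbb R})} and {D(f)=f'}, the Leibniz rule is just the ordinary product rule {(fg)'=f'g+fg'}.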

What I am concerned about is that two of my favorite instances of the Leibniz rule are not explicitly covered by the ring-to-module derivations. Namely, for smooth functions {\varphi\colon{\mathbb R}\rightarrow{\mathbb R}}, {F\colon{\mathbb R}\rightarrow{\mathbb R}^n} and {G\colon{\mathbb R}\rightarrow{\mathbb R}^n} we have

\displaystyle     (\varphi F)' = \varphi' F + \varphi F' \quad \text{and} \quad (F\cdot G)' = F'\cdot G+F\cdot G'    \ \ \ \ \ (1)

Of course, {{\mathbb R}^n} could be any {{\mathbb R}}-vector space {V} with an inner product.

It seems that the most economical way to fit (1) into the algebraic concept of derivation is to equip the vector space {{\mathbb R}\oplus V} with the product

\displaystyle   (\alpha,u)(\beta,v)= (\alpha\beta+u\cdot v, \alpha v+\beta u)  \ \ \ \ \ (2)

making it a commutative algebra over {{\mathbb R}}. Something tells me to put {-u\cdot v} there, but I resist. Actually, I should have said “commutative nonassociative algebra”:

\displaystyle    \{(\alpha,u)(\beta,v)\}(\gamma,w) = (\alpha\beta+u\cdot v, \alpha v+\beta u) (\gamma,w)

{ = (\alpha\beta\gamma+\gamma\, u\cdot v+ \alpha\, v\cdot w+\beta\, u\cdot w,\ \alpha\beta w + \gamma \alpha v+ \gamma \beta u +(u\cdot v) w) }

Everything looks nice, except for the last term {(u\cdot v) w}, which destroys associativity.
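A concrete failure, taking {V={\mathbb R}^2} with standard basis vectors {e_1,e_2}: on one hand {\{(0,e_1)(0,e_1)\}(0,e_2)=(1,0)(0,e_2)=(0,e_2)}, while on the other {(0,e_1)\{(0,e_1)(0,e_2)\}=(0,e_1)(0,0)=(0,0)}.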

Now we can consider maps {{\mathbb R}\rightarrow {\mathbb R}\oplus V}, which are formal pairs of scalar functions and vector-valued functions. The derivative acts component-wise {(\varphi,F)'=(\varphi',F')} and according to (1), it is indeed a derivation:

\displaystyle     \left\{(\varphi,F)(\psi,G)\right\}'= (\varphi,F)'(\psi,G)+(\varphi,F)(\psi,G)'   \ \ \ \ \ (3)

Both parts of (1) are included in (3) as special cases {(\varphi,0)(0,F)} and {(0,F)(0,G)}.
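Written out, (3) amounts to the componentwise identity

\displaystyle  \left((\varphi\psi+F\cdot G)',\ (\varphi G+\psi F)'\right) = \left(\varphi'\psi+F'\cdot G,\ \varphi' G+\psi F'\right)+\left(\varphi\psi'+F\cdot G',\ \varphi G'+\psi' F\right)

where each slot is verified by the rules in (1) together with the ordinary product rule for {\varphi\psi}.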

If (2) has a name, I do not know it. Clifford algebras do a similar thing and are associative, but they are also larger. If I just want to say that (1) is a particular instance of a derivation on an algebra, (2) looks like the right algebra structure to use (maybe with {-u\cdot v} if you insist). If {V} has no inner product, the identity {(\varphi F)' = \varphi' F + \varphi F'} can still be expressed via (2) using the trivial inner product {u\cdot v=0}.