# Derivations and the curvature tensor

Let ${M}$ be a Riemannian manifold with Riemannian connection ${\nabla}$. A connection is a thing that knows how to differentiate a vector field ${Y}$ in the direction of a vector field ${X}$; the result is denoted by ${\nabla_X Y}$ and is also a vector field. For consistency of notation, it is convenient to write ${\nabla_X f}$ for the derivative of a scalar function ${f}$ in the direction ${X}$, even though this derivative does not need a connection: vector fields are born with the ability to differentiate functions.

The pairs ${(f,Y)}$, with ${f}$ a scalar function and ${Y}$ a vector field, form a funky nonassociative algebra ${\mathcal{A}}$ described in the previous post. And ${\nabla_X}$ is a derivation on this algebra, because

• ${\nabla_X(fY) = f\nabla_X Y + (\nabla_X f)Y }$ by the definition of a connection
• ${\nabla_X\langle Y, Z\rangle = \langle \nabla_X Y, Z\rangle + \langle Y, \nabla_X Z\rangle}$ by the metric property of the Riemannian connection.
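Both derivation rules can be checked symbolically in the simplest setting: on ${\mathbb{R}^2}$ with the flat connection, ${\nabla_X Y}$ is just the componentwise directional derivative. A small sympy sketch (the concrete fields and function below are my own choices for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def Df(X, f):
    """Derivative of a scalar function f along the field X = (X1, X2)."""
    return sum(X[i] * sp.diff(f, coords[i]) for i in range(2))

def D(X, F):
    """Flat connection on R^2: componentwise directional derivative."""
    return sp.Matrix([Df(X, Fj) for Fj in F])

# arbitrary concrete fields and a scalar function
X = sp.Matrix([y, x*y])
Y = sp.Matrix([sp.sin(x), x + y])
Z = sp.Matrix([x**2, y])
f = sp.exp(x*y)

# product rule: nabla_X (f Y) = f nabla_X Y + (nabla_X f) Y
assert sp.simplify(D(X, f*Y) - f*D(X, Y) - Df(X, f)*Y) == sp.zeros(2, 1)

# metric rule: nabla_X <Y, Z> = <nabla_X Y, Z> + <Y, nabla_X Z>
assert sp.simplify(Df(X, Y.dot(Z)) - D(X, Y).dot(Z) - Y.dot(D(X, Z))) == 0
```

The flat case hides nothing essential here: both identities are pointwise algebraic consequences of the Leibniz rule, which is exactly what makes ${\nabla_X}$ a derivation on ${\mathcal{A}}$.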

Recall that the commutator of two derivations is a derivation. Or just check this again: if ${{}'}$ and ${{}^\dag}$ are derivations, then

$\displaystyle {(ab)'}^\dag = (a'b+ab')^\dag = {a'}^\dag b+a'b^\dag +a^\dag b'+a{b'}^\dag$

$\displaystyle {(ab)^\dag}' = (a^\dag b+ab^\dag)' = {a^\dag}' b+a^\dag b' +a' b^\dag +a{b^\dag}'$

and the difference ${{(ab)'}^\dag-{(ab)^\dag}'}$ simplifies to what it should be: the mixed terms cancel, leaving ${({a'}^\dag-{a^\dag}')b + a({b'}^\dag-{b^\dag}')}$.

Thus, for any pair ${X,Y}$ of vector fields the commutator ${\nabla_X \nabla_Y-\nabla_Y\nabla_X}$ is a derivation on ${\mathcal{A}}$. The torsion-free property of the connection tells us how it works on functions: $\displaystyle (\nabla_X \nabla_Y-\nabla_Y\nabla_X) f = \nabla_{[X,Y]}f=\nabla_{\nabla_XY}f -\nabla_{\nabla_YX}f$
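On functions this identity is just the coordinate computation of the Lie bracket; in the flat case ${[X,Y]=\nabla_X Y-\nabla_Y X}$ with ${\nabla}$ the componentwise directional derivative, so it can be verified directly (concrete fields below are my own choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def Df(X, f):
    """Derivative of a scalar function f along the field X = (X1, X2)."""
    return sum(X[i] * sp.diff(f, coords[i]) for i in range(2))

def D(X, F):
    """Flat connection on R^2: componentwise directional derivative."""
    return sp.Matrix([Df(X, Fj) for Fj in F])

X = sp.Matrix([y, x])
Y = sp.Matrix([x*y, sp.cos(x)])
f = x**3 + sp.sin(y)

# (nabla_X nabla_Y - nabla_Y nabla_X) f = nabla_{[X,Y]} f,
# with [X,Y] = nabla_X Y - nabla_Y X by torsion-freeness
lhs = Df(X, Df(Y, f)) - Df(Y, Df(X, f))
rhs = Df(D(X, Y) - D(Y, X), f)
assert sp.simplify(lhs - rhs) == 0
```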

Subtracting ${\nabla_{[X,Y]}}$ from the commutator, we get a derivation that kills scalar functions, $\displaystyle R(X,Y) = \nabla_X \nabla_Y-\nabla_Y\nabla_X - \nabla_{[X,Y]}$

But a derivation that kills scalar functions is linear over functions: $\displaystyle R(X,Y)(fZ) = R(X,Y)(f)\, Z + f\,R(X,Y)Z = f\,R(X,Y)Z$

In plain terms, ${R(X,Y)}$ processes any given vector field ${Z}$ pointwise, applying some linear operator ${L_p}$ to the vector ${Z_p}$ at every point ${p}$ of the manifold. No derivatives of ${Z}$ are actually taken, either of first or of second order.

Moreover, the derivation property immediately implies that ${R(X,Y)}$ is a skew-symmetric operator: for any vector fields ${Z,W}$ $\displaystyle \langle R(X,Y)Z,W\rangle + \langle R(X,Y)W,Z\rangle = R(X,Y)\langle Z,W\rangle =0$

because ${R(X,Y)}$ kills scalar functions.

The other kind of skew-symmetry was evident from the beginning: ${R(X,Y)=-R(Y,X)}$ by definition.

What is not yet evident is that ${R(X,Y)}$ is also a tensor in ${X}$ and ${Y}$, that is, it does not differentiate the direction fields themselves. To prove this, write ${R(X,Y)=\nabla_{X,Y}^2-\nabla_{Y,X}^2}$, where

$\displaystyle \nabla_{X,Y}^2 = \nabla_X \nabla_Y - \nabla_{\nabla_X Y}$

should be thought of as the pointwise second-order derivative in the directions ${X,Y}$ (i.e., the result of plugging two direction vectors into the Hessian matrix). By symmetry, it suffices to show that ${\nabla_{X,Y}^2}$ is a tensor in ${X}$ and ${Y}$. For ${X}$, this is clear from the definition of a connection. Concerning ${Y}$, we have

$\displaystyle \nabla_{X,fY}^2 = \nabla_X (f\nabla_{Y}) - \nabla_{(\nabla_X f) Y+f\nabla_X Y}$

$\displaystyle = f \nabla_X \nabla_{Y} + (\nabla_X f )\nabla_{Y} - (\nabla_X f) \nabla_{Y} - f \nabla_{\nabla_X Y} = f \nabla_{X,Y}^2$
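Tensoriality of ${\nabla_{X,Y}^2}$ can again be checked in the flat model, where the computation involves no Christoffel symbols. A sympy sketch (fields and function are my own illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def Df(X, f):
    """Derivative of a scalar function f along the field X = (X1, X2)."""
    return sum(X[i] * sp.diff(f, coords[i]) for i in range(2))

def D(X, F):
    """Flat connection on R^2: componentwise directional derivative."""
    return sp.Matrix([Df(X, Fj) for Fj in F])

def D2(X, Y, Z):
    """Second covariant derivative: nabla_X nabla_Y Z - nabla_{nabla_X Y} Z."""
    return D(X, D(Y, Z)) - D(D(X, Y), Z)

X = sp.Matrix([y, x**2])
Y = sp.Matrix([sp.sin(y), x])
Z = sp.Matrix([x*y, sp.exp(x)])
f = x**2 + y

# tensorial in Y: second derivative in the directions X, fY
assert sp.simplify(D2(X, f*Y, Z) - f*D2(X, Y, Z)) == sp.zeros(2, 1)
# and tensorial in X as well
assert sp.simplify(D2(f*X, Y, Z) - f*D2(X, Y, Z)) == sp.zeros(2, 1)
```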

That’s it, we have a tensor that takes three vector fields ${X,Y,Z}$ and produces another one, denoted ${R(X,Y)Z}$. Now I wonder if there is a way to use the language of derivations to give a slick proof of the first Bianchi identity, ${R(X,Y)Z+R(Y,Z)X+R(Z,X)Y=0}$
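While this is not the slick algebraic proof, the symmetries of ${R}$, first Bianchi included, can at least be confirmed on a concrete curved example. A sympy sketch on the round sphere ${S^2}$ (the metric, coordinates, and index conventions below are my own, not from the post); for coordinate fields the brackets vanish, so the ${\nabla_{[X,Y]}}$ term drops out of the coordinate formula:

```python
import sympy as sp
from itertools import product

th, ph = sp.symbols('theta phi')
coords = (th, ph)
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])  # round metric on S^2
ginv = g.inv()
n = 2

# Christoffel symbols Gamma[l][i][j] = Gamma^l_{ij} of the Levi-Civita connection
Gamma = [[[sum(ginv[l, m] * (sp.diff(g[m, i], coords[j])
                             + sp.diff(g[m, j], coords[i])
                             - sp.diff(g[i, j], coords[m])) / 2
               for m in range(n))
           for j in range(n)] for i in range(n)] for l in range(n)]

def R(l, k, i, j):
    """l-th component of R(d_i, d_j) d_k (coordinate fields commute)."""
    val = sp.diff(Gamma[l][j][k], coords[i]) - sp.diff(Gamma[l][i][k], coords[j])
    val += sum(Gamma[l][i][m]*Gamma[m][j][k] - Gamma[l][j][m]*Gamma[m][i][k]
               for m in range(n))
    return sp.simplify(val)

# sanity check: for the unit sphere, R^theta_{phi theta phi} = sin^2(theta)
assert sp.simplify(R(0, 1, 0, 1) - sp.sin(th)**2) == 0

for l, k, i, j in product(range(n), repeat=4):
    # skew-symmetry in the direction fields: R(X,Y) = -R(Y,X)
    assert sp.simplify(R(l, k, i, j) + R(l, k, j, i)) == 0
    # skew-symmetry of <R(X,Y)Z, W> in Z, W: lower an index with g
    Rlow = lambda a, b: sum(g[a, m]*R(m, b, i, j) for m in range(n))
    assert sp.simplify(Rlow(l, k) + Rlow(k, l)) == 0
    # first Bianchi identity: R(X,Y)Z + R(Y,Z)X + R(Z,X)Y = 0
    assert sp.simplify(R(l, k, i, j) + R(l, i, j, k) + R(l, j, k, i)) == 0
```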

To avoid having two picture-less posts in a row, here is something completely unrelated:

This is the image of the unit circle ${|z|=1}$ under the polynomial ${z^3-\sqrt{3}\,\bar z}$. Which area is larger: red or green? Answer hidden below.
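The curve is easy to reproduce: parametrize the circle as ${z=e^{it}}$ and push it through the map. A quick numpy/matplotlib sketch (the sampling and output filename are my own choices):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, just writes the file
import matplotlib.pyplot as plt

# image of the unit circle under z^3 - sqrt(3) * conj(z)
t = np.linspace(0, 2*np.pi, 2000)
z = np.exp(1j*t)
w = z**3 - np.sqrt(3)*np.conj(z)

plt.plot(w.real, w.imag)
plt.gca().set_aspect('equal')
plt.savefig('circle_image.png')
```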