Derivations

A map {D} is a derivation if it satisfies the Leibniz rule: {D(ab)=D(a)b+aD(b)}. To make sense out of this, we need to be able to

  • multiply arguments of {D} together
  • multiply values of {D} by arguments of {D}
  • add the results

For example, if {D\colon R\to M} where {R} is a ring and {M} is a two-sided module over {R}, then all of the above makes sense. In practice it often happens that {M=R}. In this case, the commutator (Lie bracket) of two derivations {D_1,D_2} is defined as {[D_1,D_2]=D_1\circ D_2-D_2\circ D_1} and turns out to be a derivation as well. If {R} is also an algebra over a field {K}, then {K}-linearity of {D} can be added to the requirements of being a derivation, but I am not really concerned about that.
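As a quick illustration (my own sketch, not part of the original argument), take {R} to be polynomials in one variable and the concrete derivations {D_1 f = f'} and {D_2 f = x f'}; a few lines of sympy confirm that their commutator again obeys the Leibniz rule. The test polynomials a, b below are arbitrary choices.

    import sympy as sp

    x = sp.symbols('x')

    def D1(f):          # D1 f = f'
        return sp.diff(f, x)

    def D2(f):          # D2 f = x f'
        return x * sp.diff(f, x)

    def bracket(f):     # [D1, D2] f = D1(D2 f) - D2(D1 f)
        return sp.expand(D1(D2(f)) - D2(D1(f)))

    a = x**3 + 2*x      # arbitrary test polynomials
    b = x**2 - 5

    # Leibniz rule for the commutator:
    # [D1,D2](ab) = [D1,D2](a) b + a [D1,D2](b)
    assert sp.simplify(bracket(a*b) - (bracket(a)*b + a*bracket(b))) == 0

(Here {[D_1,D_2]f = f'}, so the commutator is itself a familiar derivation.)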

What I am concerned about is that two of my favorite instances of the Leibniz rule are not explicitly covered by the ring-to-module derivations. Namely, for smooth functions {\varphi\colon{\mathbb R}\rightarrow{\mathbb R}}, {F\colon{\mathbb R}\rightarrow{\mathbb R}^n} and {G\colon{\mathbb R}\rightarrow{\mathbb R}^n} we have

\displaystyle     (\varphi F)' = \varphi' F + \varphi F' \quad \text{and} \quad (F\cdot G)' = F'\cdot G+F\cdot G'    \ \ \ \ \ (1)

Of course, {{\mathbb R}^n} could be any {{\mathbb R}}-vector space {V} with an inner product.
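For a numerical sanity check of (1) (my own addition; the functions {\varphi}, {F}, {G} below are arbitrary picks with {n=3}), one can compare central difference quotients against the right-hand sides:

    import numpy as np

    phi  = lambda t: np.sin(t)
    dphi = lambda t: np.cos(t)
    F  = lambda t: np.array([t, t**2, np.exp(t)])
    dF = lambda t: np.array([1.0, 2*t, np.exp(t)])
    G  = lambda t: np.array([np.cos(t), t**3, 1.0])
    dG = lambda t: np.array([-np.sin(t), 3*t**2, 0.0])

    t, h = 0.7, 1e-6

    # (phi F)' = phi' F + phi F'
    lhs1 = (phi(t+h)*F(t+h) - phi(t-h)*F(t-h)) / (2*h)
    assert np.allclose(lhs1, dphi(t)*F(t) + phi(t)*dF(t))

    # (F.G)' = F'.G + F.G'
    lhs2 = (F(t+h) @ G(t+h) - F(t-h) @ G(t-h)) / (2*h)
    assert np.isclose(lhs2, dF(t) @ G(t) + F(t) @ dG(t))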

It seems that the most economical way to fit (1) into the algebraic concept of derivation is to equip the vector space {{\mathbb R}\oplus V} with the product

\displaystyle   (\alpha,u)(\beta,v)= (\alpha\beta+u\cdot v, \alpha v+\beta u)  \ \ \ \ \ (2)

making it a commutative algebra over {{\mathbb R}}. Something tells me to put {-u\cdot v} there, but I resist. Actually, I should have said “commutative nonassociative algebra”:

\displaystyle    \{(\alpha,u)(\beta,v)\}(\gamma,w) = (\alpha\beta+u\cdot v,\ \alpha v+\beta u)(\gamma,w)

\displaystyle    = (\alpha\beta\gamma+\gamma\, u\cdot v+ \alpha\, v\cdot w+\beta\, u\cdot w,\ \alpha\beta w + \gamma \alpha v+ \gamma \beta u +(u\cdot v) w)

Everything looks nice, except for the last term {(u\cdot v) w}, which destroys associativity: grouping the product the other way, as {(\alpha,u)\{(\beta,v)(\gamma,w)\}}, produces {(v\cdot w) u} in its place, and {(u\cdot v)w \ne (v\cdot w)u} in general.
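To make the failure concrete (my own sketch), here is the product (2) in numpy together with the standard basis triple {e_1, e_1, e_2} in {{\mathbb R}^2}, for which the two groupings disagree:

    import numpy as np

    def mul(p, q):
        # product (2) on R (+) V:  (a,u)(b,v) = (ab + u.v, av + bu)
        (a, u), (b, v) = p, q
        return (a*b + u @ v, a*v + b*u)

    p = (0.0, np.array([1.0, 0.0]))   # (0, e1)
    q = (0.0, np.array([1.0, 0.0]))   # (0, e1)
    r = (0.0, np.array([0.0, 1.0]))   # (0, e2)

    print(mul(mul(p, q), r))   # (0.0, array([0., 1.]))  -- (pq)r = (0, e2)
    print(mul(p, mul(q, r)))   # (0.0, array([0., 0.]))  -- p(qr) = (0, 0)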

Now we can consider maps {{\mathbb R}\rightarrow {\mathbb R}\oplus V}, which are formal pairs of scalar functions and vector-valued functions. The derivative acts component-wise {(\varphi,F)'=(\varphi',F')} and according to (1), it is indeed a derivation:

\displaystyle     \left\{(\varphi,F)(\psi,G)\right\}'= (\varphi,F)'(\psi,G)+(\varphi,F)(\psi,G)'   \ \ \ \ \ (3)

Both parts of (1) are included in (3) as special cases {(\varphi,0)(0,F)} and {(0,F)(0,G)}.
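Here is a symbolic check of (3) (again my own sketch; the functions {\varphi}, {\psi}, {F}, {G} and the choice {V={\mathbb R}^2} are arbitrary):

    import sympy as sp

    t = sp.symbols('t')

    def mul(p, q):
        # product (2), with V = R^2 modeled as sympy column vectors
        (a, u), (b, v) = p, q
        return (a*b + u.dot(v), a*v + b*u)

    def d(p):
        # componentwise derivative: (phi, F)' = (phi', F')
        return (sp.diff(p[0], t), p[1].diff(t))

    P = (sp.sin(t), sp.Matrix([t, t**2]))          # (phi, F), arbitrary
    Q = (sp.exp(t), sp.Matrix([sp.cos(t), t**3]))  # (psi, G), arbitrary

    lhs = d(mul(P, Q))                 # {(phi,F)(psi,G)}'
    r1, r2 = mul(d(P), Q), mul(P, d(Q))

    assert sp.simplify(lhs[0] - r1[0] - r2[0]) == 0
    assert sp.simplify(lhs[1] - r1[1] - r2[1]) == sp.zeros(2, 1)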

If (2) has a name, I do not know it. Clifford algebras do a similar thing and are associative, but they are also larger. If I just want to say that (1) is a particular instance of a derivation on an algebra, (2) looks like the right algebra structure to use (maybe with {-u\cdot v} if you insist). If {V} has no inner product, the identity {(\varphi F)' = \varphi' F + \varphi F'} can still be expressed via (2) using the trivial inner product {u\cdot v=0}.
