A map $D$ is a derivation if it satisfies the Leibniz rule: $D(fg) = D(f)\,g + f\,D(g)$. To make sense out of this, we need to be able to

- multiply arguments of $D$ together
- multiply values of $D$ by arguments of $D$
- add the results

For example, if $D\colon A\to M$ where $A$ is a ring and $M$ is a two-sided module over $A$, then all of the above makes sense. In practice it often happens that $M=A$. In this case, the commutator (Lie bracket) of two derivations is defined as $[D_1,D_2]=D_1D_2-D_2D_1$ and turns out to be a derivation as well. If $A$ is also an algebra over a field $K$, then $K$-linearity of $D$ can be added to the requirements of being a derivation, but I am not really concerned about that.
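
For the record, here is the standard computation showing that the commutator is again a derivation (it also uses additivity of $D_1$ and $D_2$):

$$\begin{aligned}
[D_1,D_2](fg) &= D_1\bigl(D_2(f)\,g + f\,D_2(g)\bigr) - D_2\bigl(D_1(f)\,g + f\,D_1(g)\bigr)\\
&= D_1D_2(f)\,g + D_2(f)\,D_1(g) + D_1(f)\,D_2(g) + f\,D_1D_2(g)\\
&\quad - D_2D_1(f)\,g - D_1(f)\,D_2(g) - D_2(f)\,D_1(g) - f\,D_2D_1(g)\\
&= [D_1,D_2](f)\,g + f\,[D_1,D_2](g).
\end{aligned}$$

The mixed terms cancel in pairs, which is exactly why the difference of $D_1D_2$ and $D_2D_1$ works while the sum does not.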

What I am concerned about is that two of my favorite instances of the Leibniz rule are not explicitly covered by the ring-to-module derivations. Namely, for smooth functions $f,g\colon \mathbb{R}\to\mathbb{R}$ and $u,v\colon \mathbb{R}\to\mathbb{R}^n$ we have

$$(fg)' = f'g + fg', \qquad (u\cdot v)' = u'\cdot v + u\cdot v'. \tag{1}$$

Of course, $\mathbb{R}^n$ could be any $\mathbb{R}$-vector space with an inner product.

It seems that the most economical way to fit (1) into the algebraic concept of derivation is to equip the vector space $\mathbb{R}\times\mathbb{R}^n$ with the product

$$(a,u)\,(b,v) = (ab + u\cdot v,\ av + bu), \tag{2}$$

making it a commutative algebra over $\mathbb{R}$. Something tells me to put $ab - u\cdot v$ there, but I resist. Actually, I should have said “commutative *nonassociative* algebra”:

$$\bigl((a,u)(b,v)\bigr)(c,w) = \bigl(abc + c\,u\cdot v + a\,v\cdot w + b\,u\cdot w,\ abw + acv + bcu + (u\cdot v)\,w\bigr).$$

Everything looks nice, except for the last term $(u\cdot v)\,w$, which destroys associativity: grouping the factors as $(a,u)\bigl((b,v)(c,w)\bigr)$ turns it into $(v\cdot w)\,u$ instead.
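
A quick numerical sanity check of this failure of associativity; `mul` and the sample points are ad-hoc names, not anything standard:

```python
# Check that the product (2) is commutative but not associative.
import numpy as np

def mul(X, Y):
    """The product (2): (a, u)(b, v) = (ab + u.v, av + bu) on R x R^n."""
    a, u = X
    b, v = Y
    return (a * b + np.dot(u, v), a * v + b * u)

x = (1.0, np.array([1.0, 0.0]))
y = (2.0, np.array([0.0, 1.0]))
z = (0.0, np.array([1.0, 1.0]))

# Commutativity holds...
assert mul(x, y)[0] == mul(y, x)[0]
assert np.allclose(mul(x, y)[1], mul(y, x)[1])

# ...but associativity fails: the second components differ by
# (v.w)u - (u.v)w, exactly the offending last term.
left = mul(mul(x, y), z)
right = mul(x, mul(y, z))
print(left[1], right[1])  # [2. 2.] vs [3. 2.]
```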

Now we can consider maps $(f,u)\colon \mathbb{R}\to\mathbb{R}\times\mathbb{R}^n$, which are formal pairs of scalar functions and vector-valued functions. The derivative acts component-wise and, according to (1), it is indeed a derivation:

$$\bigl((f,u)(g,v)\bigr)' = (f,u)'\,(g,v) + (f,u)\,(g,v)'. \tag{3}$$

Both parts of (1) are included in (3) as special cases $u=v=0$ and $f=g=0$.
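
One can also let a computer algebra system confirm (3); this is a sketch using sympy, with `mul` and `d` as ad-hoc names for the product (2) and the componentwise derivative:

```python
# Symbolic verification of (3): the componentwise derivative is a
# derivation for the product (2), here with n = 2.
import sympy as sp

t = sp.symbols('t')
f, g = sp.Function('f')(t), sp.Function('g')(t)
u = sp.Matrix([sp.Function('u1')(t), sp.Function('u2')(t)])
v = sp.Matrix([sp.Function('v1')(t), sp.Function('v2')(t)])

def mul(X, Y):
    """The product (2): (a, p)(b, q) = (ab + p.q, aq + bp)."""
    (a, p), (b, q) = X, Y
    return (a * b + p.dot(q), a * q + b * p)

def d(X):
    """Componentwise derivative in t."""
    a, p = X
    return (sp.diff(a, t), p.diff(t))

F, G = (f, u), (g, v)
lhs = d(mul(F, G))
P, Q = mul(d(F), G), mul(F, d(G))

# Both components of (FG)' - F'G - FG' reduce to zero.
assert sp.expand(lhs[0] - P[0] - Q[0]) == 0
assert (lhs[1] - P[1] - Q[1]).expand() == sp.zeros(2, 1)
print("(3) holds")
```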

If (2) has a name, I do not know it. Clifford algebras do a similar thing and are associative, but they are also larger. If I just want to say that (1) is a particular instance of a derivation on an algebra, (2) looks like the right algebra structure to use (maybe with $ab - u\cdot v$ if you insist). If $\mathbb{R}^n$ has no inner product, the identity $(fv)' = f'v + fv'$ can still be expressed via (2) using the trivial inner product $u\cdot v \equiv 0$.