Let’s admit it: it’s hard to keep track of signs when multiplying numbers. Being lazy people, mathematicians seek ways to avoid this chore. One popular way is to work in the enchanted world of $\mathbb{Z}_2$, where $-1 = 1$. I’ll describe another way, which is to redefine multiplication by letting the factors reach a consensus on what the sign of their product should be.
If both $a$ and $b$ are positive, let their product be positive. And if they are both negative, the product should also be negative. Finally, if the factors can’t agree on which sign they like, they compromise at 0.
In a formula, this operation can be written as $a \odot b = \frac{a|b| + |a|b}{2}$, but who wants to see that kind of formula? Just try using it to check that the operation is associative (which it is).
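If you don’t feel like checking by hand, here is a quick Python sketch (my own, just for illustration) that implements the consensus product and spot-checks associativity on a grid of sample values:

```python
import itertools

def consensus(a, b):
    """Consensus product: the usual magnitude |ab|, with the sign
    the two factors agree on; 0 if they disagree."""
    return (a * abs(b) + abs(a) * b) / 2

# Spot-check associativity on a small grid of values.
values = [-3, -1, -0.5, 0, 0.5, 2]
for a, b, c in itertools.product(values, repeat=3):
    assert consensus(consensus(a, b), c) == consensus(a, consensus(b, c))
print("associative on all sampled triples")
```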
But I hear someone complaining that $\odot$ is just an arbitrary operation that does not make any sense. So I’ll reformulate it. Represent real numbers by ordered pairs of nonnegative numbers: for example, $2$ becomes $(2,0)$ and $-3$ becomes $(0,3)$. Define multiplication component-wise. Better now? You don’t have to keep track of minus signs because there aren’t any.
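Here is the same reformulation as a Python sketch (again mine, with made-up names): encode a number as a pair of nonnegative parts, multiply component-wise, and decode. The result agrees with the consensus product above.

```python
def consensus(a, b):
    return (a * abs(b) + abs(a) * b) / 2

def encode(x):
    """Represent a real number as a pair of nonnegative numbers."""
    return (x, 0) if x >= 0 else (0, -x)

def decode(p):
    """Interpret the pair (positive part, negative part) as a number."""
    return p[0] - p[1]

def pair_mul(p, q):
    """Component-wise product of pairs -- no minus signs in sight."""
    return (p[0] * q[0], p[1] * q[1])

# Component-wise multiplication of pairs agrees with the consensus product.
for a in [-3, -1, 0, 2, 5]:
    for b in [-4, 0, 1, 3]:
        assert decode(pair_mul(encode(a), encode(b))) == consensus(a, b)
```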
This comes in handy when multiplying the adjacency matrices of quivers. The Wikipedia article on Quiver illustrates the concept with this picture:
[picture: an archer’s quiver, holding arrows]
But in mathematics, a quiver is a directed graph such as this one:
[picture: a directed graph]
Recall that the adjacency matrix of a graph on vertices $1, \dots, n$ has $a_{ij} = 1$ if there is an edge between $i$ and $j$, and $a_{ij} = 0$ otherwise. For a directed graph we modify this definition by letting $a_{ij} = 1$ if the arrow goes from $i$ to $j$, and $a_{ij} = -1$ if it goes in the opposite direction. So, for the quiver shown above we get its signed adjacency matrix.
For an undirected graph, the $(i,j)$ entry of the square $A^2$ counts the number of ways to get from $i$ to $j$ in exactly 2 steps (and one can replace 2 by $n$). To make this work for the directed graph, we represent the entries as pairs, $1 = (1,0)$ and $-1 = (0,1)$, and carry on multiplying and adding component-wise. For instance, there are two ways to get from 3 to 4 in two steps, but none in the opposite direction. This works for any powers, and also for multigraphs (with more than one edge between the same vertices). Logically, this is the same as separating the adjacency matrix into its positive and negative parts, and multiplying them separately.
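To make this concrete, here is a Python sketch. The four-vertex quiver with arrows $3 \to 1 \to 4$ and $3 \to 2 \to 4$ is made up for illustration (not necessarily the one pictured above), but it does have two 2-step paths from 3 to 4 and none in the opposite direction:

```python
import numpy as np

# A made-up quiver on vertices 1..4 with arrows 3->1, 1->4, 3->2, 2->4.
# Signed adjacency matrix: entry is 1 if the arrow goes i -> j,
# -1 if it goes j -> i, and 0 if there is no edge.
A = np.array([
    [ 0,  0, -1,  1],
    [ 0,  0, -1,  1],
    [ 1,  1,  0,  0],
    [-1, -1,  0,  0],
])

# Split into positive and negative parts: A = P - N with P, N >= 0.
P = np.maximum(A, 0)
N = np.maximum(-A, 0)

# Squaring the pair (P, N) component-wise is just squaring each part.
P2, N2 = P @ P, N @ N

print(P2[2, 3], N2[2, 3])  # 2 0 : two 2-step paths from 3 to 4, none back
```

A walk that mixes one forward and one backward arrow contributes $(1,0) \otimes (0,1) = (0,0)$, so only consistently oriented walks survive the count.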

The last example is matrix mutation from the theory of cluster algebras. Given a (usually, integer) matrix $B$ and a positive integer $k$, we can mutate $B$ in the direction $k$ by doing the following:
- Dump nuclear waste on the $k$th row and $k$th column.
- To each non-radioactive element $b_{ij}$ add $b_{ik} \odot b_{kj}$, that is, the $\odot$-product of the radioactive elements to which $b_{ij}$ is exposed.
- Clean up by flipping the signs of all radioactive elements.
The properties of $\odot$ should make it clear that each mutation is an involution: mutating for the second time in the same direction recovers the original matrix. However, applying mutations in different directions, one can obtain a large, or even infinite, class of mutation-equivalent matrices.
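Here is my own Python rendering of the three steps above (an integer version of the consensus product included), together with a check of the involution claim on a random integer matrix:

```python
import numpy as np

def consensus(a, b):
    """Integer consensus product; a*|b| + |a|*b is always even."""
    return (a * abs(b) + abs(a) * b) // 2

def mutate(B, k):
    """Mutate the integer matrix B in direction k (0-based index)."""
    n = B.shape[0]
    M = B.copy()
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                M[i, j] = -B[i, j]  # flip the radioactive entries
            else:
                M[i, j] = B[i, j] + consensus(B[i, k], B[k, j])
    return M

# Involution check: mutating twice in the same direction is the identity.
rng = np.random.default_rng(0)
B = rng.integers(-3, 4, size=(4, 4))
for k in range(4):
    assert np.array_equal(mutate(mutate(B, k), k), B)
print("each mutation is an involution")
```

The cancellation works for any matrix, not just the skew-symmetrizable ones used in cluster algebras: after the signs flip, the second mutation adds $(-b_{ik}) \odot (-b_{kj}) = -(b_{ik} \odot b_{kj})$, undoing the first.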