Let's say there was something. And you could either do something to it or leave it alone.
Doing something to it would change it, and we could make this look mathy by describing the change as new = f(old). But there are a great many things we could do that would result in changes. Let us parameterize f by some set listing the things we could do. Then we have new_A = f_A(old). But we could also do nothing. Let us call the f-that-means-do-nothing f_I. Then f_I(old) = new_I = old. In fact, f_I(anything) = anything. So f_I is an identity transform.
Now we could also do multiple things multiple times. Clearly f_I(f_A(anything)) = f_A(f_I(anything)). But f_B(f_A(something)) might not be the same as f_A(f_B(something)). And there might be a way to say "do A then B" and we could call that C. In that case f_B(f_A(anything)) = f_C(anything).
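The do-something / do-nothing story can be sketched with ordinary function composition. This is just an illustration; the names f_A, f_B, f_C, f_I follow the text, and the particular transforms (add 3, double) are arbitrary choices:

```python
# Transforms on a "something" (here: a number), following the text's f_A notation.
def f_I(x):          # do nothing: the identity transform
    return x

def f_A(x):          # one possible thing to do (arbitrary choice: add 3)
    return x + 3

def f_B(x):          # another possible thing to do (arbitrary choice: double)
    return x * 2

# Doing nothing before or after A changes nothing:
assert f_I(f_A(10)) == f_A(f_I(10)) == 13

# But the order of A and B can matter:
assert f_B(f_A(10)) == 26   # (10 + 3) * 2
assert f_A(f_B(10)) == 23   # (10 * 2) + 3

# "Do A then B" deserves its own name, C: f_C = f_B after f_A.
def f_C(x):
    return f_B(f_A(x))

assert f_C(10) == f_B(f_A(10)) == 26
```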
Mathematicians would call this the action of a monoid on a set. The set would be the old's and the new's. And the monoid would be I, A, B, C, ... together with the rules that let you talk about combining them. But most of the interesting math is on the side of how the combining works. The rules for a monoid are simple:
- For any two elements of a monoid, A and B, you can always get an element by combining them, which we write as A ⊗ B.
- There is an element, I, such that I ⊗ A = A ⊗ I = A for any element of the monoid, A.
- For any three (not necessarily distinct) elements of the monoid, A ⊗ ( B ⊗ C ) = ( A ⊗ B ) ⊗ C.
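For a finite candidate set, the three rules can be checked exhaustively by brute force. Here is a minimal sketch (the helper name check_monoid is made up), tried out on addition modulo 4:

```python
from itertools import product

def check_monoid(elements, op, identity):
    """Exhaustively verify the three monoid rules on a finite set."""
    # Rule 1: closure -- combining any two elements stays inside the set.
    assert all(op(a, b) in elements for a, b in product(elements, repeat=2))
    # Rule 2: identity -- I ⊗ A = A ⊗ I = A for every element A.
    assert all(op(identity, a) == a == op(a, identity) for a in elements)
    # Rule 3: associativity -- A ⊗ (B ⊗ C) = (A ⊗ B) ⊗ C.
    assert all(op(a, op(b, c)) == op(op(a, b), c)
               for a, b, c in product(elements, repeat=3))
    return True

# Addition modulo 4 on {0, 1, 2, 3} is a monoid with I = 0.
assert check_monoid({0, 1, 2, 3}, lambda a, b: (a + b) % 4, 0)
```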
The first monoid we learn about is called the natural numbers under addition. A ⊗ B is just a new name for A + B and I = 0, because 0 + A = A + 0 = A.
Another monoid is the natural numbers under multiplication. Of course, here I = 1. It is still a monoid if you remove 0, because if neither A nor B is zero then AB is not zero either. But you can't remove just 6, because 2 * 3 = 6 and the first rule (closure) would fail.
Concatenation of strings is a monoid with I = the empty string, and it is one where the order of combining matters:
"12" ⊗ "2" = "122" but "2" ⊗ "12" = "212"
Multiplication of quaternionic integers, (a,b,c,d)⊗(A,B,C,D) = (aA - bB - cC - dD, aB + bA + cD - dC, aC - bD + cA + dB, aD + bC - cB + dA), is an example of a monoid with I = (1,0,0,0) in which X⊗Y might not equal Y⊗X.
(0,1,0,1)⊗(1,0,1,0)=(0,0,0,2) but (1,0,1,0)⊗(0,1,0,1)=(0,2,0,0)
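The quaternionic product above transcribes directly into code, and the worked example can be checked (a sketch following the text's 4-tuple convention (a, b, c, d) = a + bi + cj + dk):

```python
def quat_mul(x, y):
    """Multiply quaternions written as 4-tuples (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = x
    A, B, C, D = y
    return (a*A - b*B - c*C - d*D,
            a*B + b*A + c*D - d*C,
            a*C - b*D + c*A + d*B,
            a*D + b*C - c*B + d*A)

I = (1, 0, 0, 0)
X = (0, 1, 0, 1)
Y = (1, 0, 1, 0)

assert quat_mul(I, X) == quat_mul(X, I) == X     # (1,0,0,0) really is the identity
assert quat_mul(X, Y) == (0, 0, 0, 2)            # the example from the text ...
assert quat_mul(Y, X) == (0, 2, 0, 0)            # ... and order matters
```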
For a final example, not all monoids have an infinite number of elements. Boolean AND works on a universe of just TRUE and FALSE, with I = TRUE.

General (Groups)
A group is a monoid where every element A has an inverse element A^-1, meaning A ⊗ A^-1 = A^-1 ⊗ A = I.
Often we can glue on inverses where none exist.
The natural numbers under addition + inverse = the integers under addition. I = 0 is its own inverse.
The natural numbers (excluding zero) under multiplication = the positive rational numbers under multiplication. Not only do we have to glue on 1/2 to be the inverse of 2, but 3 times 1/2 also needs to be included, etc.
Back to our original example. Inverses let us undo what has been done.
f_1/2(f_2(anything)) = f_I(anything) = anything.
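In code, undoing reads as: composing a transform with its inverse behaves like the identity transform. The names follow the text, with f_half standing in for f_1/2:

```python
def f_2(x):      # do: multiply by 2
    return x * 2

def f_half(x):   # undo: multiply by 1/2, the inverse element we glued on
    return x / 2

# f_1/2 after f_2 behaves like f_I: it leaves anything unchanged.
for anything in (1, 7.5, -3):
    assert f_half(f_2(anything)) == anything
```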
So a group is a monoid that lets us describe undoing.

General (Lie Group)
(Pronounce Lie as if it was spelled "Lee")
What if we had a group where, no matter what we wanted to do, we could always break it up into two equal steps from the identity? If A was in the group, then so was B, such that B ⊗ B = A, and C was in the group such that C ⊗ C ⊗ C ⊗ C = A, etc. Then would we be well set? Not quite, since we would like at least one more property. We would like I < C < B < A in some sense. Generalizing, we would always like to write A = scale_up(r, Ã), where Ã is a direction and r is a distance. Then B = scale_up(r/2, Ã) and C = scale_up(r/4, Ã), and we would like scale_up(0, Ã) = I, and a concept of smoothness. This is called a Lie group, and it introduces a concept of continuity which allows us to bring in concepts from differential calculus.

General (Special Orthogonal Group)
The special orthogonal group in two dimensions, SO(2), is like multiplication of the complex numbers with absolute value 1. We can write these numbers as e^iθ = cos θ + i sin θ = scale_up(θ, e^i). Each multiplication by these numbers is like a rotation about 0. So a smaller θ implies a smaller rotation, at least when θ is small.
Clearly scale_up(0, e^i) = e^0 = 1, which makes sense. And θ fits our idea of breaking things up: B = scale_up(θ/2, e^i) satisfies B ⊗ B = A.
But we can also define (because of smoothness) lim x->0 (scale_up(x, Ã) - I)/x = lim x->0 (e^ix - 1)/x = i. So that for very small θ, scale_up(θ, Ã) ≈ 1 + iθ.
So not only do small values of r (or θ) add like real numbers, but in a certain sense there is a well-defined direction near I (or near r=0). That's what we meant by smoothness.
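Python's cmath module makes these claims easy to poke at numerically: e^iθ has absolute value 1 (so multiplying by it is a rotation), small rotations compose like real-number addition, and near the identity e^iθ ≈ 1 + iθ:

```python
import cmath
import math

theta = 0.001
z = cmath.exp(1j * theta)            # scale_up(theta, e^i) in the text's notation

# Rotations live on the unit circle ...
assert math.isclose(abs(z), 1.0)

# ... the distances add when rotations are composed ...
w = cmath.exp(1j * 0.002)
assert cmath.isclose(z * w, cmath.exp(1j * (0.001 + 0.002)))

# ... and near the identity, scale_up(theta, e^i) is close to 1 + i*theta
# (the error is of order theta squared).
assert abs(z - (1 + 1j * theta)) < theta**2
```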
The special orthogonal group in three dimensions, SO(3), can be thought of as rotating three-dimensional objects every which way. Here, instead of using complex numbers, we can use multiplication of 3x3 real matrices. Instead of only having one possible direction, we have three independent directions.
But given a 3x3 rotation matrix A, we can parameterize A = scale_up(θ, Ã) = P + (I - P) cos θ + Q sin θ, where I is the identity matrix and P and Q are special matrices built from three normalized numbers (a unit vector) which indicate a direction.
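Reading that parameterization as Rodrigues' rotation formula — with P the projection onto the axis (P = n nᵀ for a unit axis vector n) and Q the cross-product matrix of n, an interpretation the text only sketches — the group story can be checked numerically without any matrix library:

```python
import math

def rotation(axis, theta):
    """scale_up(theta, axis): R = P + (I - P) cos(theta) + Q sin(theta),
    with P = n n^T (projection onto the axis, assumed to be a unit vector)
    and Q the cross-product matrix of the axis."""
    x, y, z = axis
    P = [[x*x, x*y, x*z], [y*x, y*y, y*z], [z*x, z*y, z*z]]
    Q = [[0, -z, y], [z, 0, -x], [-y, x, 0]]
    c, s = math.cos(theta), math.sin(theta)
    return [[P[i][j] + ((i == j) - P[i][j]) * c + Q[i][j] * s
             for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def close(A, B, eps=1e-12):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(3) for j in range(3))

n = (0.0, 0.0, 1.0)                      # rotate about the z-axis

# scale_up(0, n) is the identity transform I ...
assert close(rotation(n, 0.0), [[1, 0, 0], [0, 1, 0], [0, 0, 1]])

# ... two equal steps make the whole: B ⊗ B = A with B = scale_up(θ/2, n) ...
half = rotation(n, math.pi / 2)
assert close(matmul(half, half), rotation(n, math.pi))

# ... and every rotation can be undone: scale_up(-θ, n) is the inverse.
R = rotation(n, 0.7)
assert close(matmul(R, rotation(n, -0.7)), rotation(n, 0.0))
```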
Yadda, yadda, yadda. The important thing is that the universe of rotation matrices forms a Lie group, and every Lie group is a group, which means every rotation can be undone.
The other thing is that rotations in 2 or 3 dimensions preserve the length of all line segments, the areas of all triangles, and the measure of all angles. All rotations are rigid motions in the Euclidean sense.
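A quick numerical check of the rigidity claim in two dimensions, treating points of the plane as complex numbers and a rotation as multiplication by e^iθ:

```python
import cmath
import math

theta = 1.234
rot = cmath.exp(1j * theta)              # a rotation of the plane about 0

p, q = 3 + 4j, -1 + 2j                   # two arbitrary points

# Rotating both endpoints leaves the length of the segment unchanged,
# because |e^(i*theta)| = 1 and lengths multiply: |rz - rw| = |r| * |z - w|.
assert math.isclose(abs(rot * p - rot * q), abs(p - q))
```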