I am reading the book Representation Theory of Finite Groups by Benjamin Steinberg due to my interest in probability theory on groups and other algebraic structures. This is related to my earlier post on random walks on n-gons, which I found to be very interesting. This post will be my workspace for reading through this book and other related material, notably Group Representations in Probability and Statistics by Persi Diaconis.
Representation theory studies how a group acts on vector spaces. Another way to think about it: for any group $G$, what are the ways to embed or map it into a general linear group $GL(V)$? We first need to review what a group action is. The action of a group $G$ on a set $X$ is a homomorphism $\sigma: G \to S_X$, assigning to each group element a permutation of the set in a way that is compatible with the structure of $G$. This is always possible, because one can do it trivially (the trivial action) by setting $\sigma_g = \mathrm{id}$ for all $g \in G$, mapping every element of $G$ to the identity permutation. If we take $X = G$, then a faithful action is also always possible because of Cayley's Theorem, which states that every group is isomorphic to a group of permutations, i.e. a subgroup of the symmetric group $S_G$. The set $X$ can carry additional structure, and if we take $X$ to be a vector space $V$ over a field $k$ and require that each $\sigma_g$ respect the structure of the vector space as well (i.e. be linear), then the group action is called a group representation.
A representation of a group $G$ is a homomorphism $\varphi: G \to GL(V)$ from $G$ to the general linear group of a vector space $V$. The degree of $\varphi$ is the dimension of $V$.
We write the linear map $\varphi(g)$ as $\varphi_g$, and $\varphi_g(v)$ is sometimes written as $gv$ for the action of $g$ on $v$. Below is an example of representations of $S_3$.
Example (Representations of $S_3$)
Immediately from the definition we see the analogue of the trivial action as a representation: the trivial representation $\varphi: G \to \mathbb{C}^*$ given by $\varphi(g) = 1$ for all $g \in G$. We also see that the degree of the trivial representation is 1. It is interesting to see some other representations of a small group such as $S_3$, the group of all permutations on 3 elements. Aside from the trivial representation, we can compute two others: the alternating representation, and the standard (or permutation) representation.
Let $\sigma$ be a permutation on 3 elements. The alternating representation is the function $\varphi: S_3 \to \mathbb{C}^*$ where $\varphi(\sigma) = 1$ if $\sigma$ can be written as a product of an even number of transpositions, and $\varphi(\sigma) = -1$ otherwise. More precisely, define the signum of $\sigma$ by

$$\mathrm{sgn}(\sigma) = \begin{cases} 1 & \text{if } \sigma \text{ is a product of an even number of transpositions,} \\ -1 & \text{otherwise;} \end{cases}$$
then the map $\sigma \mapsto \mathrm{sgn}(\sigma)$ is a group homomorphism from $S_n$ to the subgroup $\{1, -1\}$ of the multiplicative group $\mathbb{C}^*$. In particular, the number of transpositions in any factorization of $\sigma$ is either always even or always odd, so the signum is well defined. We can check that this is a representation: for $\sigma, \tau \in S_3$ written as products of $m$ and $n$ transpositions respectively, we have

$$\mathrm{sgn}(\sigma\tau) = (-1)^{m+n} = (-1)^m(-1)^n = \mathrm{sgn}(\sigma)\,\mathrm{sgn}(\tau),$$
so $\varphi(\sigma\tau) = \varphi(\sigma)\varphi(\tau)$. This representation has degree 1.
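As a quick sanity check (my own, not from the book), we can verify the homomorphism property of the signum by brute force over all of $S_3$, representing a permutation as a tuple of images of $0, 1, 2$ and computing the sign from the parity of its inversions:

```python
from itertools import permutations

def sgn(p):
    """Sign of a permutation p of {0,...,n-1}, given as a tuple of images;
    computed from the parity of the number of inversions."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

# Check sgn(στ) = sgn(σ)sgn(τ) for every pair in S3.
S3 = list(permutations(range(3)))
assert all(sgn(compose(s, t)) == sgn(s) * sgn(t) for s in S3 for t in S3)
```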
The standard representation takes each permutation $\sigma$ to a matrix in $GL_3(\mathbb{C})$ whose rows are those of the identity matrix permuted according to $\sigma$. In other words, $\varphi(\sigma)e_i = e_{\sigma(i)}$ on the standard basis. For example,

$$\varphi\big((1\,2)\big) = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
The standard representation has degree 3.
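A minimal sketch (my own helper names) that builds these permutation matrices and checks that $\varphi$ is indeed a homomorphism, i.e. that the matrix of a composite permutation is the product of the matrices:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix M with M e_i = e_{p(i)}: entry (p[i], i) is 1."""
    n = len(p)
    M = np.zeros((n, n))
    for i in range(n):
        M[p[i], i] = 1
    return M

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
# Homomorphism property: φ(στ) = φ(σ)φ(τ) as matrices.
assert all(
    np.array_equal(perm_matrix(compose(s, t)), perm_matrix(s) @ perm_matrix(t))
    for s in S3 for t in S3
)
```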
If we start with a representation $\varphi: G \to GL(V)$, where $B$ is a basis for $V$, and we have an isomorphism $T: V \to W$ carrying $B$ to a basis $B'$ of $W$, then $\psi_g = T \varphi_g T^{-1}$ defines another representation $\psi: G \to GL(W)$. These two representations are intuitively the "same". More generally, we have the definition of equivalence.
Two representations $\varphi: G \to GL(V)$ and $\psi: G \to GL(W)$ are said to be equivalent if there exists an isomorphism $T: V \to W$ such that $\psi_g = T \varphi_g T^{-1}$ for all $g \in G$, or $\psi_g T = T \varphi_g$, or in pictures, the following diagram commutes:

$$\begin{array}{ccc} V & \xrightarrow{\ \varphi_g\ } & V \\ {\scriptstyle T}\big\downarrow & & \big\downarrow{\scriptstyle T} \\ W & \xrightarrow{\ \psi_g\ } & W \end{array}$$
If $\varphi$ is equivalent to $\psi$, we write $\varphi \sim \psi$.
An example of equivalent representations is given in Steinberg as follows:
Define $\varphi, \psi: \mathbb{Z}/n\mathbb{Z} \to GL_2(\mathbb{C})$ by

$$\varphi([m]) = \begin{pmatrix} \cos(2\pi m/n) & -\sin(2\pi m/n) \\ \sin(2\pi m/n) & \cos(2\pi m/n) \end{pmatrix}, \qquad \psi([m]) = \begin{pmatrix} e^{2\pi i m/n} & 0 \\ 0 & e^{-2\pi i m/n} \end{pmatrix}.$$
To show that $\varphi \sim \psi$, we need to find an invertible matrix $T$ such that $\psi_{[m]} = T^{-1} \varphi_{[m]} T$ for all $m$. This matrix also represents a simple change of basis in $\mathbb{C}^2$. Let us find the eigenvectors of the two operators. The eigenvectors of $\varphi([1])$ are $(1, -i)^\top$ and $(1, i)^\top$, while the eigenvectors of $\psi([1])$ are the standard basis vectors; the eigenvalues of $\varphi([1])$ are $\cos(2\pi/n) \pm i\sin(2\pi/n)$, and those of $\psi([1])$ are $e^{\pm 2\pi i/n}$. By Euler's formula, the two sets of eigenvalues are the same, so we can deduce that the change of basis matrix is the matrix whose columns are the eigenvectors of $\varphi([1])$, namely,

$$T = \begin{pmatrix} 1 & 1 \\ -i & i \end{pmatrix}.$$
Indeed, $\psi_{[m]} = T^{-1} \varphi_{[m]} T$ after some calculation.
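Instead of doing that calculation by hand, we can let NumPy check it numerically. This is a sketch under the assumption that $\varphi$ and $\psi$ are the rotation/diagonal pair above (here with $n = 5$, though any $n$ works):

```python
import numpy as np

n = 5                       # the representations are of Z/nZ
theta = 2 * np.pi / n

def phi(m):
    """Rotation of the plane by 2πm/n."""
    c, s = np.cos(m * theta), np.sin(m * theta)
    return np.array([[c, -s], [s, c]])

def psi(m):
    """Diagonal representation with eigenvalues e^{±2πim/n}."""
    return np.diag([np.exp(1j * m * theta), np.exp(-1j * m * theta)])

# Columns of T are the eigenvectors (1, -i) and (1, i) of phi(1).
T = np.array([[1, 1], [-1j, 1j]])
Tinv = np.linalg.inv(T)

# T^{-1} φ_m T = ψ_m for every m, so φ ~ ψ.
assert all(np.allclose(Tinv @ phi(m) @ T, psi(m)) for m in range(n))
```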
The notion of equivalent representations is the same as matrix similarity, except that the former describes the equivalence between operators and the latter between their matrix representations. Also note that in the example above there are two eigenspaces for $\psi$, namely $\mathbb{C}e_1$ and $\mathbb{C}e_2$. For each such eigenspace $W$, we have $\psi_{[m]} W \subseteq W$ for all $m$ as well. This motivates the definition of a $G$-invariant subspace.
Definition ($G$-Invariant Subspace)
Let $\varphi: G \to GL(V)$ be a representation. A subspace $W \leq V$ is $G$-invariant if, for all $g \in G$ and $w \in W$, one has $\varphi_g w \in W$.
We can use this definition to find a $G$-invariant subspace of the standard representation $\varphi: S_3 \to GL_3(\mathbb{C})$. Note that the 1-dimensional space spanned by $(1, 1, 1)^\top$ is $S_3$-invariant, since for any $\sigma \in S_3$,

$$\varphi(\sigma)\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}$$
is again in the span. Also note that its complement $\{(x_1, x_2, x_3) \in \mathbb{C}^3 : x_1 + x_2 + x_3 = 0\}$ is also $S_3$-invariant, since permuting coordinates preserves the coordinate sum. Indeed, if $\varphi: G \to GL(V)$ is a representation and $W$ is a $G$-invariant subspace, then restricting $\varphi$ to $W$ yields a subrepresentation $\varphi|_W: G \to GL(W)$. The following computation of the restriction of the standard representation to this complement follows that of Diaconis.
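Both invariance claims are easy to confirm numerically; this quick check (my own, using permutation matrices as above) verifies that every $\varphi(\sigma)$ fixes $(1,1,1)^\top$ and maps zero-sum vectors to zero-sum vectors:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix with M e_i = e_{p(i)}."""
    M = np.zeros((len(p), len(p)))
    for i in range(len(p)):
        M[p[i], i] = 1
    return M

mats = [perm_matrix(p) for p in permutations(range(3))]

# The line spanned by (1,1,1) is fixed pointwise, hence S3-invariant.
ones = np.array([1.0, 1.0, 1.0])
assert all(np.array_equal(M @ ones, ones) for M in mats)

# The complement {x : x1 + x2 + x3 = 0} is invariant: permuting
# coordinates preserves the coordinate sum.
rng = np.random.default_rng(0)
for M in mats:
    x = rng.normal(size=3)
    x -= x.mean()              # project onto the zero-sum subspace
    assert abs((M @ x).sum()) < 1e-12
```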
Computation of the 2-Dimensional Representation of $S_3$
Let $V = \{(x_1, x_2, x_3) \in \mathbb{C}^3 : x_1 + x_2 + x_3 = 0\}$ and let $\varphi$ be the standard representation. Let $v = (x_1, x_2, x_3) \in V$; then since the components of $v$ add to $0$, we have $x_3 = -x_1 - x_2$, which yields $v = x_1(1, 0, -1) + x_2(0, 1, -1)$. Therefore $b_1 = (1, 0, -1)$ and $b_2 = (0, 1, -1)$ are a basis for $V$. Let's consider the action on this basis.
Computing the action of the generators $(1\,2)$ and $(1\,2\,3)$ on $b_1, b_2$ gives the 2-dimensional representation $\rho$ of $S_3$:

$$\rho\big((1\,2)\big) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \rho\big((1\,2\,3)\big) = \begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix}.$$
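The change to the basis $b_1, b_2$ can be automated. A sketch (assuming the basis $b_1 = (1,0,-1)$, $b_2 = (0,1,-1)$ above; `rho` is my own helper) that computes the $2 \times 2$ matrix of each restricted $\varphi_\sigma$ and confirms the result is again a homomorphism:

```python
import numpy as np
from itertools import permutations

def perm_matrix(p):
    """Permutation matrix with M e_i = e_{p(i)}."""
    M = np.zeros((3, 3))
    for i in range(3):
        M[p[i], i] = 1
    return M

# Columns are the basis vectors b1 = (1,0,-1), b2 = (0,1,-1) of V.
B = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])

def rho(p):
    """2x2 matrix of φ_σ restricted to V, in the basis (b1, b2):
    solve B R = φ_σ B (B has full column rank, so R is exact)."""
    R, *_ = np.linalg.lstsq(B, perm_matrix(p) @ B, rcond=None)
    return np.round(R).astype(int)

def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
# ρ is a homomorphism of degree 2.
assert all(np.array_equal(rho(compose(s, t)), rho(s) @ rho(t)) for s in S3 for t in S3)
```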
The 2-dimensional representation of $S_3$ is also irreducible, which means:
Definition: Irreducible Representation
A non-zero representation $\varphi: G \to GL(V)$ of a group $G$ is said to be irreducible if the only $G$-invariant subspaces of $V$ are $\{0\}$ and $V$.
To show that, assume a non-zero vector $v = (x_1, x_2, x_3) \in V$ and let $W$ be the span of this vector. If $W$ is a subrepresentation, then $\varphi_\sigma v \in W$ for all $\sigma$. Since $(1\,2)v = (x_2, x_1, x_3) \in W$, their difference $(x_1 - x_2)(e_1 - e_2)$ is also in $W$. If $x_1 \neq x_2$, then $e_1 - e_2 \in W$. Since $W$ is invariant, $(2\,3)(e_1 - e_2) = e_1 - e_3$ must be in $W$ as well, and these two vectors span $V$, so $W = V$. If $x_1 = x_2$, then $v = (x_1, x_1, -2x_1)$ with $x_1 \neq 0$ (otherwise $v = 0$). Now $(2\,3)v = (x_1, -2x_1, x_1) \in W$. Take the difference and we have $3x_1(e_2 - e_3) \in W$, so $e_2 - e_3 \in W$. Then $(1\,2)(e_2 - e_3) = e_1 - e_3$ must also be in $W$, so again $W = V$. Either way $W$ is not 1-dimensional, so $V$ has no invariant line. This irreducible representation is 2-dimensional. We will see that the restriction of the permutation representation of $S_n$ to $\{x \in \mathbb{C}^n : x_1 + \cdots + x_n = 0\}$ is an irreducible $(n-1)$-dimensional representation.
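For a degree-2 complex representation, reducibility is equivalent to the existence of a single line invariant under every $\rho_\sigma$, i.e. a common eigenvector of all the matrices (it suffices to check a generating set). A numerical sketch of that criterion, assuming the two generator matrices computed above:

```python
import numpy as np

# Matrices of the 2-dimensional representation in the basis b1, b2:
# a transposition and a 3-cycle, which together generate S3.
A = np.array([[0.0, 1.0], [1.0, 0.0]])    # ρ((1 2))
B = np.array([[-1.0, -1.0], [1.0, 0.0]])  # ρ((1 2 3))

def eigvecs(M):
    """Rows are the eigenvectors of M."""
    return np.linalg.eig(M)[1].T

def parallel(u, v):
    """True iff u and v span the same line (Cauchy-Schwarz with equality)."""
    return abs(abs(np.vdot(u, v)) - np.linalg.norm(u) * np.linalg.norm(v)) < 1e-9

# No eigenvector of A is parallel to an eigenvector of B, so there is no
# common invariant line: the representation is irreducible.
common = any(parallel(u, v) for u in eigvecs(A) for v in eigvecs(B))
assert not common
```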
Since we can form direct sums of vector spaces, we can similarly define a direct sum of representations. We want to do this because, if $V_1$ and $V_2$ are two $G$-invariant subspaces with $V = V_1 \oplus V_2$ and $\varphi: G \to GL(V)$ is a representation, it makes sense to talk about a representation that maps from $G$ to $GL(V_1 \oplus V_2)$.
Definition: Direct Sum of Representations
Let $\varphi^{(1)}: G \to GL(V_1)$ and $\varphi^{(2)}: G \to GL(V_2)$ be two representations; then their external direct sum

$$\varphi^{(1)} \oplus \varphi^{(2)}: G \to GL(V_1 \oplus V_2)$$

is defined by

$$\big(\varphi^{(1)} \oplus \varphi^{(2)}\big)_g (v_1, v_2) = \big(\varphi^{(1)}_g v_1,\ \varphi^{(2)}_g v_2\big)$$

and has degree $\dim V_1 + \dim V_2$.
If we restrict ourselves to matrices, suppose $\varphi^{(1)}: G \to GL_m(\mathbb{C})$ and $\varphi^{(2)}: G \to GL_n(\mathbb{C})$. Then $\varphi^{(1)} \oplus \varphi^{(2)}: G \to GL_{m+n}(\mathbb{C})$ has the block matrix form

$$\big(\varphi^{(1)} \oplus \varphi^{(2)}\big)_g = \begin{pmatrix} \varphi^{(1)}_g & 0 \\ 0 & \varphi^{(2)}_g \end{pmatrix}.$$
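Because block-diagonal matrices multiply blockwise, $g \mapsto \mathrm{diag}(\varphi^{(1)}_g, \varphi^{(2)}_g)$ is again a homomorphism. A small sketch (the helper `direct_sum` is my own) demonstrating that blockwise multiplication identity:

```python
import numpy as np

def direct_sum(A, B):
    """Block-diagonal matrix diag(A, B) realizing the external direct sum."""
    m, n = A.shape[0], B.shape[0]
    out = np.zeros((m + n, m + n), dtype=np.result_type(A, B))
    out[:m, :m] = A
    out[m:, m:] = B
    return out

# diag(A1, B1) · diag(A2, B2) = diag(A1·A2, B1·B2): multiplication is blockwise,
# so the direct sum of two homomorphisms is a homomorphism.
A1, A2 = np.array([[0, 1], [1, 0]]), np.array([[0, -1], [1, 0]])
B1, B2 = np.array([[2]]), np.array([[3]])
assert np.array_equal(direct_sum(A1 @ A2, B1 @ B2),
                      direct_sum(A1, B1) @ direct_sum(A2, B2))
```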
The representation $\psi$ in the example about equivalence is a good demonstration of how to form a direct sum of representations: being diagonal, it is the direct sum of two degree-1 representations. It should also be clear that if a representation can be written as a direct sum of non-zero subrepresentations, then it is not irreducible. Therefore, the representations in our equivalence example are not irreducible.
Related to the notion of an irreducible representation are the decomposable and completely reducible representations.
To Be Continued…