I have decided to take a detour from the ongoing series on operator algebra to briefly discuss the titular question, which I think has confounded most of us physical/inorganic chemists at some point. For some, perhaps it still does! There is no shame in this – in my opinion, chemists do a pretty lousy job teaching the representation theory of point groups. Do not take this as an indictment of chemical education, however. This is just a consequence of the fact that, as chemists, we are more interested in applications of group theory than the nuts and bolts. Many are perfectly happy treating group theory as a black box, and this is fine if your work demands no more than an undergraduate level of sophistication. For ‘higher-level’ manipulations involving applied group theory, however, I think it is important (on principle!) to appreciate the nuts and bolts. Here, I will try to offer a lucid explanation of what representation theory is all about. Note that my explanation will assume a working understanding of rings and homomorphisms. These concepts are both very important and relatively friendly, so if they are foreign, I encourage you to take a few minutes to familiarize yourself before moving forward. It will be worth the effort!
What is a representation?
What I want to do is pick apart the definition of ‘representation’. I will toss in some examples along the way to make things concrete. An $n$-dimensional real/complex representation of a group $G$ is a group homomorphism $\rho$ from $G$ into the general linear group $GL(n, \mathbb{R})$ / $GL(n, \mathbb{C})$ (respectively). $\rho$ therefore identifies each element of $G$ with an $n \times n$ invertible matrix, and the resulting collection of matrices (the image of $G$ under $\rho$) obeys the group multiplication law. From the definition of group homomorphism: for $g, h \in G$, $\rho(gh) = \rho(g)\rho(h)$.
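If you like to see definitions in action, here is a minimal numerical sketch (plain Python; helper names like `rot` and `matmul` are mine, invented for this post): the cyclic group $C_3$ represented by $2 \times 2$ rotation matrices, with the homomorphism property $\rho(gh) = \rho(g)\rho(h)$ checked for every pair of group elements.

```python
import math

def rot(k):
    """rho(k): the 2x2 rotation matrix for a rotation by k * 120 degrees."""
    t = 2 * math.pi * k / 3
    return [[math.cos(t), -math.sin(t)],
            [math.sin(t),  math.cos(t)]]

def matmul(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    """Entrywise comparison of two 2x2 matrices, up to floating-point noise."""
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

# Group elements are k = 0, 1, 2 (rotations by k * 120 deg); the group law is
# addition mod 3.  The homomorphism property rho(gh) = rho(g) rho(h) then
# reads rot((g + h) % 3) == matmul(rot(g), rot(h)) for every pair (g, h).
for g in range(3):
    for h in range(3):
        assert close(rot((g + h) % 3), matmul(rot(g), rot(h)))
print("rho is a homomorphism on C3")
```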
I want to emphasize right off the bat that we are concerned with $\rho$, the homomorphism, and not the image of $G$ under $\rho$! While $\rho$ and its graph ultimately provide the same information, I find it much easier to wrap my head around the former. Note that this is very much akin to how we can study a function $f$ without reference to its argument (e.g. Dirac formalism). Let me illustrate my point with an example. The spherical harmonics $Y_\ell^m$ sharing the same value of $\ell$ span the $(2\ell + 1)$-dimensional irreducible representation of the special orthogonal group $SO(3)$. This means that the $Y_\ell^m$ provide a homomorphism $\rho$ from $SO(3)$ into $GL(2\ell + 1, \mathbb{C})$: for $R \in SO(3)$,

$$\hat{R}\, Y_\ell^m = \sum_{m' = -\ell}^{\ell} Y_\ell^{m'}\, [\rho(R)]_{m'm}.$$
This example illustrates why $\hat{R}\, Y_\ell^m$ necessarily transforms like (i.e. ‘looks’ like) the $Y_\ell^m$, and how we can study the transformation properties of the $Y_\ell^m$ with no knowledge of the image $\rho(SO(3))$. I think it also provides some nice insight into why we so often visualize representations as symmetrical objects, or collections of symmetrical objects. It follows that the statement
“$f$ spans the irreducible representation $\Gamma$ of $G$“

really means that, for every $g \in G$, $\hat{g} f = \Gamma(g)\, f$ (with $f$ understood as a column of basis functions when $\Gamma$ is multidimensional). To make this more concrete, consider an example: the $p_z$ orbital spans the totally symmetric irreducible representation $A_1$ of $C_{3v}$, meaning that for every $g \in C_{3v}$, $\hat{g}\, p_z = A_1(g)\, p_z = p_z$, since $A_1(g) = 1$ for all $g$.

Note that there was nothing particularly special about $p_z$ in all of this – we just need an object that transforms like $A_1$ to construct the representation, and some inner product structure.
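To check this numerically, here is a small sketch (plain Python again; `rz`, `sigma_v`, and `apply` are names I made up): the six operations of $C_{3v}$ realized as $3 \times 3$ matrices, acting on the function $p_z(\mathbf{r}) = z$ via $(\hat{g} f)(\mathbf{r}) = f(g^{-1}\mathbf{r})$. Every operation leaves $p_z$ untouched, which is exactly what ‘$p_z$ spans $A_1$’ asserts.

```python
import math

def rz(t):
    """Rotation by angle t about the z axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def sigma_v(t):
    """Mirror plane containing the z axis, at angle t in the xy plane."""
    c, s = math.cos(2 * t), math.sin(2 * t)
    return [[c, s, 0], [s, -c, 0], [0, 0, 1]]

# The six operations of C3v: E, two C3 rotations, three vertical mirrors.
ops = [rz(2 * math.pi * k / 3) for k in range(3)] + \
      [sigma_v(math.pi * k / 3) for k in range(3)]

def apply(g, r):
    """Matrix-vector product g r."""
    return [sum(g[i][j] * r[j] for j in range(3)) for i in range(3)]

p_z = lambda r: r[2]

# For orthogonal g, the inverse is the transpose; (g . p_z)(r) = p_z(g^-1 r)
# equals p_z(r) for every operation, i.e. p_z transforms as A1.
point = [0.3, -1.2, 0.7]
for g in ops:
    g_inv = [[g[j][i] for j in range(3)] for i in range(3)]  # transpose
    assert abs(p_z(apply(g_inv, point)) - p_z(point)) < 1e-12
print("p_z transforms as A1 under all 6 operations of C3v")
```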
Where do representations ‘live’?
Vectors live in vector spaces, and operators corresponding to physical observables inhabit a Lie algebra. So where do representations live? Anybody who has manipulated representations using a character table has probably realized that representations behave suspiciously like vectors. As you might have guessed, this is no coincidence! Let $\rho$ be an $m$-dimensional complex representation of the finite group $G = \{g_1, g_2, \ldots, g_n\}$, and consider the graph $\{(g, \rho(g)) : g \in G\}$. We can identify the graph (and therefore $\rho$) in a natural way with the ordered $n$-tuple

$$(\rho(g_1), \rho(g_2), \ldots, \rho(g_n)).$$
This vector-like quantity, which is equivalent to $\rho$, is an element of the free module $M_m(\mathbb{C})^n$, where $M_m(\mathbb{C})$ denotes the ring of $m \times m$ complex matrices (edit: thanks u/AngelTC for pointing out that this is a free module). Free modules are very much like vector spaces. The crucial difference is that their elements are expressed in terms of coordinates taken from a ring (e.g. the integers, $n \times n$ matrices, etc.), not (necessarily) a field. For example, let $\mathbb{Z}$ denote the integers. Then $\mathbb{Z} \times \mathbb{Z} \times \mathbb{Z}$ is a free module, while $\mathbb{R} \times \mathbb{R} \times \mathbb{R}$ is a vector space. It follows that all vector spaces are free modules (since all fields are rings), but not all free modules are vector spaces.
The operations we use to combine representations (i.e. the direct sum “$\oplus$” and Kronecker product “$\otimes$”) do not rely on modular structure beyond what is provided by the base ring, however. For two complex representations of $G$ of dimension $n$ and $m$ ($\rho$ and $\sigma$, respectively), we have $\dim(\rho \oplus \sigma) = n + m$ and $\dim(\rho \otimes \sigma) = nm$. In other words, these operations take us into a different module than the ones we started in.
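Both operations are easy to spell out on matrices. Here is a sketch (pure Python, helper names mine) of the block-diagonal direct sum and the Kronecker product, confirming the dimension counts above:

```python
def direct_sum(a, b):
    """Block-diagonal direct sum of an n x n and an m x m matrix."""
    n, m = len(a), len(b)
    out = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            out[i][j] = a[i][j]
    for i in range(m):
        for j in range(m):
            out[n + i][n + j] = b[i][j]
    return out

def kron(a, b):
    """Kronecker product of an n x n and an m x m matrix (result is nm x nm)."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

rho   = [[0.0, -1.0], [1.0, 0.0]]   # a 2-dimensional example matrix (n = 2)
sigma = [[1.0]]                      # a 1-dimensional example matrix (m = 1)

assert len(direct_sum(rho, sigma)) == 3   # dim(rho (+) sigma) = n + m
assert len(kron(rho, sigma)) == 2         # dim(rho (x) sigma) = n * m
```

Applying `direct_sum` or `kron` element-by-element to two representations gives the new (higher-dimensional) representation, which is why the result lands in a different module.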
The integers form a ring under addition and multiplication, and the representations of a group $G$ under $\oplus$ and $\otimes$ have similar structure (thanks to u/eruonna for pointing out that they only form a semi-ring, however, as there is no additive inverse defined in this case). The details can really give you a headache here, but we have a tool at our disposal that makes everything nice and tidy: the character function $\chi_\rho$, given as

$$\chi_\rho(g) = \mathrm{Tr}\, \rho(g),$$
where $\mathrm{Tr}$ denotes the matrix trace. $\chi$ has some nice properties, including: $\chi_{\rho \oplus \sigma} = \chi_\rho + \chi_\sigma$, $\chi_{\rho \otimes \sigma} = \chi_\rho \cdot \chi_\sigma$, and $\chi_\rho$ is constant on each conjugacy class of $G$.
It follows that $\chi$ is a map from the semi-ring of representations of the group $G$ (under the operations “$\oplus$” and “$\otimes$”) into the ring of complex-valued functions on $G$ (thanks again to u/eruonna for your correction). If $\rho$ is an $m$-dimensional complex representation of the group $G = \{g_1, g_2, \ldots, g_n\}$, then $\chi_\rho$ can be identified with the $n$-tuple

$$(\chi_\rho(g_1), \chi_\rho(g_2), \ldots, \chi_\rho(g_n)) \in \mathbb{C}^n.$$
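The two algebraic properties are just facts about the trace, and they are quick to verify numerically. A sketch (pure Python; `direct_sum` and `kron` are my own illustrative helpers) showing that $\mathrm{Tr}$ converts $\oplus$ into $+$ and $\otimes$ into $\times$:

```python
def trace(a):
    """Sum of the diagonal entries of a square matrix."""
    return sum(a[i][i] for i in range(len(a)))

def direct_sum(a, b):
    """Block-diagonal direct sum of square matrices."""
    n, m = len(a), len(b)
    out = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            out[i][j] = a[i][j]
    for i in range(m):
        for j in range(m):
            out[n + i][n + j] = b[i][j]
    return out

def kron(a, b):
    """Kronecker product of square matrices."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]

# chi(rho (+) sigma) = chi(rho) + chi(sigma); chi(rho (x) sigma) = chi(rho) chi(sigma)
assert abs(trace(direct_sum(a, b)) - (trace(a) + trace(b))) < 1e-12
assert abs(trace(kron(a, b)) - trace(a) * trace(b)) < 1e-12
```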
What does this buy us? Well, for one, this is where character tables come from! And the inner product on $\mathbb{C}^n$ (which formally should be taken as a module in this case) gives us a simple way to define geometric relationships between representations of different dimension. This structure provides the basis for the ‘great orthogonality theorem’, aka the Schur orthogonality relations – the machinery that we use to painlessly decompose reducible representations into direct sums of irreducible representations. And to think, all of this wonderful structure was hiding below the surface, just waiting to be revealed by $\chi$! Of course there is even more to be seen, but I will save it for another day. In my next post, we will return to the scheduled programming. Thanks for reading!
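As a parting sketch, here is that decomposition machinery in action (pure Python; the dictionary layout and function names are mine). Using the standard $C_{3v}$ character table and the characters of the reducible representation carried by $(x, y, z)$, the reduction formula $n_i = \frac{1}{|G|} \sum_g \chi(g)\, \chi_i(g)^*$ (a direct consequence of Schur orthogonality) recovers the familiar result $(x, y, z) \to A_1 \oplus E$:

```python
# Classes of C3v in the order E, 2C3, 3sigma_v, with their sizes.
class_sizes = [1, 2, 3]
order = sum(class_sizes)          # |C3v| = 6

# Characters of the irreducible representations (all real for C3v).
irreps = {
    "A1": [1,  1,  1],
    "A2": [1,  1, -1],
    "E":  [2, -1,  0],
}

# Characters of the reducible representation carried by (x, y, z).
chi_red = [3, 0, 1]

def multiplicity(chi, chi_irrep):
    """n_i = (1/|G|) * sum over classes of size * chi(g) * chi_i(g)."""
    return round(sum(k * a * b for k, a, b in
                     zip(class_sizes, chi, chi_irrep)) / order)

decomp = {name: multiplicity(chi_red, chi_i) for name, chi_i in irreps.items()}
assert decomp == {"A1": 1, "A2": 0, "E": 1}   # (x, y, z) spans A1 (+) E
print(decomp)
```

(For groups with complex characters, the second factor in the sum should be conjugated; $C_{3v}$ happens to have all-real characters, so the sketch skips it.)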