[Math] Structure constants, the adjoint representation, and their meaning in $sl(2,F)$

lie-algebras, representation-theory

First, what I know is that given the basis:

$$e = \left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right),f = \left(\begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array}\right),h = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)$$

I want to find the 'structure constants', and furthermore to check that for the adjoint representation of $sl(2,F)$, with respect to the given basis, we get $$ad \, h = \left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{array}\right)$$ and then similarly to find the matrices representing $ad \, e$ and $ad \, f$.

Now I know the structure constants (at least, the answer given) are captured by $[e,f] = h$, $[e,h] = -2e$, and $[f,h] = 2f$. If I look at the structure constant formula $[x_i,x_j] = \sum_{k = 1}^{3} a_{ij}^kx_k$, where I let $x_1 = e$, $x_2 = f$, $x_3 = h$, so $i,j \in \{1,2,3\}$, I get things such as $(ad \, x_3)(x_1) = a_{31}^1x_1+a_{31}^2x_2+ a_{31}^3x_3 = 2x_1$.

So we must have $a_{31}^1 = 2$. But if the structure constants are just numbers of the form $a_{ij}^k$, how come we come up with $2$ instead of $[h,e]=2e$? Furthermore, I also get $(ad \, h)(h) = [h,h]=0$ and $(ad \, h)(f) = -2f$, so the numbers $0, 2, -2$ that appear in the matrix $ad \, h$ do show up, but how is it that they are arranged as they are?
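For concreteness, here is the bracket behind that $2$, computed directly from the matrices above (it is just the commutator $he-eh$):

$$[h,e]=\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)\left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right)-\left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right)\left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)=\left(\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right)-\left(\begin{array}{cc} 0 & -1 \\ 0 & 0 \end{array}\right)=\left(\begin{array}{cc} 0 & 2 \\ 0 & 0 \end{array}\right)=2e.$$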

As a bonus, what are the major points of these "structure constants"? Why are they useful, especially as I can seemingly just calculate the Lie bracket to figure them out and don't need to figure out some summation?

Thanks for any help.

Best Answer

The adjoint ${\rm ad}\,h$ is the linear map $x\mapsto[h,x]$, written $[h,-]$ for short. To determine the matrix of this linear map, we calculate its effect on the basis vectors $e,f,h$:

$$\color{Red}{[h,}e\color{Red}{]}=\color{Blue}{2}e+\color{Blue}{0}f+\color{Blue}{0}h$$

$$\color{Red}{[h,}f\color{Red}{]}=\color{Blue}{0}e\color{Blue}{-2}f+\color{Blue}{0}h \tag{$\circ$}$$

$$\color{Red}{[h,}h\color{Red}{]}=\color{Blue}{0}e+\color{Blue}{0}f+\color{Blue}{0}h $$

Therefore the matrix of this linear map is given by

$${\rm ad}\,h=\begin{pmatrix}2 & \,0 & 0 \\ 0 & -2 & 0 \\ 0 & \,0 & 0\end{pmatrix} $$
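If it helps to double-check this (and to produce ${\rm ad}\,e$ and ${\rm ad}\,f$ at the same time, as the question asks), here is a small numerical sketch using numpy; the basis ordering $(e,f,h)$ is the one above, and the helper names `bracket`, `coords`, `ad` are my own:

```python
# A small sanity check with numpy, using the basis ordering (e, f, h) from above.
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [e, f, h]

def bracket(x, y):
    # The Lie bracket of matrices is the commutator xy - yx.
    return x @ y - y @ x

def coords(m):
    # Write a traceless 2x2 matrix m = a*e + b*f + c*h and return (a, b, c).
    return np.array([m[0, 1], m[1, 0], m[0, 0]])

def ad(x):
    # Column j of ad(x) holds the coordinates of [x, basis[j]] in the basis (e, f, h).
    return np.column_stack([coords(bracket(x, b)) for b in basis])

print(ad(h))  # diag(2, -2, 0), as above
print(ad(e))  # [[0, 0, -2], [0, 0, 0], [0, 1, 0]]
print(ad(f))  # [[0, 0, 0], [0, 0, 2], [-1, 0, 0]]
```

Reading off the columns of ${\rm ad}\,e$, for instance, just records $[e,e]=0$, $[e,f]=h$, and $[e,h]=-2e$ in coordinates.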

how come we come up with $2$ instead of $[h,e]=2e$?

Constants and equations are different things. If we compute the Lie bracket of two basis vectors, the result will be expressible as a linear combination of basis vectors. The "structure constants" are the coefficients of the basis vectors in such sums.

The coordinates of the vector $(1,0,0)\in\Bbb C^3$ are not $(1,0,0)$; the coordinates are the actual scalars $1,0,0$, in that order. Similarly, the structure constants that appear when writing $[h,e]$ as a linear combination of $e,f,h$ are $2,0,0$, in that order.
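In the notation of the question, with $x_1=e$, $x_2=f$, $x_3=h$, this reads

$$[h,e]=[x_3,x_1]=\underbrace{2}_{a_{31}^1}\,e+\underbrace{0}_{a_{31}^2}\,f+\underbrace{0}_{a_{31}^3}\,h, \qquad\text{so}\qquad (a_{31}^1,a_{31}^2,a_{31}^3)=(2,0,0).$$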

what are the major points of these "structure constants"? Why are they useful, especially as I can seemingly just calculate the Lie bracket to figure them out and don't need to figure out some summation?

Do you really want to compute $(\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix}) (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) - (\begin{smallmatrix}0 & 1 \\ 0 & 0\end{smallmatrix}) (\begin{smallmatrix} 1 & 0 \\ 0 & -1\end{smallmatrix})$ every single time you need $[h,e]$? That's a lot of superfluous matrix multiplication when all you have to do instead is memorize the simple fact that $[h,e]=2e$. What if the elements of the Lie algebra are $8\times8$ matrices: would you rather compute every Lie bracket over and over again by hand for the rest of your life, or compute them once and be done with it? What if the elements of the Lie algebra aren't matrices at all, but just abstract vectors? In what sense are you "calculating" the Lie brackets then?

Not to mention, if you want to write the product of basis vectors as a linear combination of the basis vectors, then yes, you do need to "figure out some summation" one way or another. One might as well figure it out once, write down the appropriate coefficients of the basis vectors (the structure constants), and then reuse that information whenever it comes up again.

Suppose $R$ is a not necessarily associative or unital $S$-algebra (I am thinking in particular of commutative domains or fields like $S=\Bbb Z,\Bbb Q,\Bbb R,\Bbb C$, but these facts are more general) which has basis elements $r_1,\cdots,r_n$. That is, every element is uniquely expressible as a sum $s_1r_1+\cdots+s_nr_n$ for scalars $s_1,\cdots,s_n\in S$. Then for each $1\le i,j\le n$ we can write the product $r_ir_j$ as an $S$-linear combination of basis elements, say as $r_ir_j=\sum_{k=1}^n c_{ij}^k r_k$. These structure constants $c_{ij}^k$ completely determine the structure of the ring. To let another person compute anything in the ring, all you would need to do is write down the structure constants. They would know every element is a combination of basis elements, and they would be able to compute the product of two such combinations using distributivity and these structure constants.
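To spell that last sentence out as a formula (nothing new here, just $S$-bilinearity of the product written down once and for all):

$$\Bigl(\sum_{i=1}^n s_i r_i\Bigr)\Bigl(\sum_{j=1}^n t_j r_j\Bigr)=\sum_{i,j} s_i t_j\,(r_i r_j)=\sum_{k=1}^n\Bigl(\sum_{i,j} s_i t_j\, c_{ij}^k\Bigr) r_k.$$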

For example, suppose I told you the structure constants of some nonassociative $\Bbb Z$-algebra I have, every element of which is $ax+by$ for some $a,b\in\Bbb Z$, are given by the following equations:

$$\begin{array}{ll} xx=x+y & xy=x \\ yx=y & yy=x-y \end{array}$$

If you want to see the constants more clearly, write them out like this:

$$\begin{array}{ll} xx=\color{Blue}{1}x+\color{Blue}{1}y & xy=\color{Blue}{1}x+\color{Blue}{0}y \\ yx=\color{Blue}{0}x+\color{Blue}{1}y & yy=\color{Blue}{1}x\color{Blue}{-1}y \end{array}$$

Notice how this time the product of two basis elements can be a nontrivial combination of basis elements, instead of at most a single term as in our nice $e,f,h$ situation. The act of rewriting the product of two basis elements as a linear combination of basis elements is the act of using structure constants. The constants and the equations are literally the same information. Indeed, writing a linear map as a matrix is the same idea: a priori you have a bunch of equations describing how applying the operator to a basis element yields something that can be written as a sum of basis elements, and then you collect all of the coefficients together in a matrix.

Can you use these equations to compute $(3x+2y)(2x-3y)$ as $ax+by$ for some $a,b\in\Bbb Z$? Sure you can; distribute and then use the equations. If I had omitted any of the four equations, would you still be able to calculate the product? Nope. So you see these structure constants are necessary and sufficient for doing calculations in the ring. In particular this applies to Lie algebras, since they are nonassociative, nonunital algebras over a field (the Lie bracket is the "multiplication" in the ring). If you wanted to store a Lie algebra in a computer and then query it later to do Lie bracket calculations, you would store the structure constants and then program the computer to distribute and evaluate products of basis elements using the structure constants.
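For what it's worth, here is a minimal sketch of that last idea in Python, for the toy $\Bbb Z$-algebra above, assuming we index the basis $(x,y)$ as $0,1$ and store each product as a coordinate list; the names `constants` and `multiply` are my own:

```python
# Structure constants of the toy Z-algebra, basis indexed as x -> 0, y -> 1.
# constants[(i, j)] is the coordinate list of the product r_i * r_j.
constants = {
    (0, 0): [1, 1],   # xx = 1x + 1y
    (0, 1): [1, 0],   # xy = 1x + 0y
    (1, 0): [0, 1],   # yx = 0x + 1y
    (1, 1): [1, -1],  # yy = 1x - 1y
}

def multiply(u, v):
    # u and v are coordinate lists [a, b], meaning a*x + b*y.
    # Distribute, then replace each product of basis elements by its stored expansion.
    result = [0, 0]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                result[k] += u[i] * v[j] * constants[(i, j)][k]
    return result

print(multiply([3, 2], [2, -3]))  # [-9, 16], i.e. (3x + 2y)(2x - 3y) = -9x + 16y
```

Exactly the same kind of table, now indexed by a basis of size three with the bracket as the product, is all a computer needs to do Lie bracket calculations in $sl(2,F)$.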
