[Physics] Proof that the $(1/2,1/2)$ representation of the Lorentz group is the 4-vector representation

group-representations, group-theory, lie-algebra, lorentz-symmetry, representation-theory

Taken from Quantum Field Theory in a Nutshell by Zee, problem II.3.1:

Show by explicit computation that $(\frac{1}{2},\frac{1}{2})$ is indeed the Lorentz vector.

This has been asked here:

How do I construct the $SU(2)$ representation of the Lorentz Group using $SU(2)\times SU(2)\sim SO(3,1)$ ?

but I can't really digest the formality of this answer with only a little knowledge of groups and representations.

By playing around with the Lorentz group generators, it is possible to find a basis $J_{\pm i}$ whose two sets separately satisfy the Lie algebra of $SU(2)$, and which can therefore each be given a spin representation.

My approach has been to write $$J_{+i}=\frac{1}{2}(J_{i}+iK_{i})=\frac{1}{2}\sigma_{i}$$ $$J_{-i}=\frac{1}{2}(J_{i}-iK_{i})=\frac{1}{2}\sigma_{i}$$ which would imply that $$J_{i}=\sigma_{i}$$ $$K_{i}=0.$$ However, I don't really see where to go from here.

Best Answer

First, let's recall how to construct the finite-dimensional irreducible representations of the Lorentz group. Say $J_i$ are the three rotation generators and $K_i$ are the three boost generators. \begin{align*} J_x = &\begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&-1 \\ 0&0&1&0 \end{pmatrix}& J_y = &\begin{pmatrix} 0&0&0&0 \\ 0&0&0&1 \\ 0&0&0&0 \\ 0&-1&0&0 \end{pmatrix}& J_z = &\begin{pmatrix} 0&0&0&0 \\ 0&0&-1&0 \\ 0&1&0&0 \\ 0&0&0&0 \end{pmatrix}\\ K_x = &\begin{pmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \end{pmatrix}& K_y = &\begin{pmatrix} 0&0&1&0 \\ 0&0&0&0 \\ 1&0&0&0 \\ 0&0&0&0 \end{pmatrix}& K_z = &\begin{pmatrix} 0&0&0&1 \\ 0&0&0&0 \\ 0&0&0&0 \\ 1&0&0&0 \end{pmatrix}\\ \end{align*} They satisfy $$ [J_i, J_j] = \varepsilon_{ijk} J_k \hspace{1 cm} [K_i, K_j] = -\varepsilon_{ijk} J_k \hspace{1 cm} [J_i, K_j] = \varepsilon_{ijk}K_k. $$ (Note that I am using the skew-adjoint convention for Lie algebra elements, where I do not multiply by $i$.)
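As a sanity check (not part of the original answer), here is a minimal numpy sketch that builds these six $4\times 4$ generators and verifies the stated commutation relations numerically; the names `J`, `K`, `comm`, and `eps` are my own bookkeeping.

```python
import numpy as np

# Vector-representation generators in the (t, x, y, z) basis,
# matching the 4x4 matrices written above.
Jx = np.array([[0,0,0,0],[0,0,0,0],[0,0,0,-1],[0,0,1,0]], dtype=complex)
Jy = np.array([[0,0,0,0],[0,0,0,1],[0,0,0,0],[0,-1,0,0]], dtype=complex)
Jz = np.array([[0,0,0,0],[0,0,-1,0],[0,1,0,0],[0,0,0,0]], dtype=complex)
Kx = np.array([[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]], dtype=complex)
Ky = np.array([[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]], dtype=complex)
Kz = np.array([[0,0,0,1],[0,0,0,0],[0,0,0,0],[1,0,0,0]], dtype=complex)
J, K = [Jx, Jy, Jz], [Kx, Ky, Kz]

def comm(a, b):
    """Matrix commutator [a, b]."""
    return a @ b - b @ a

# Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
eps[0,1,2] = eps[1,2,0] = eps[2,0,1] = 1
eps[0,2,1] = eps[2,1,0] = eps[1,0,2] = -1

for i in range(3):
    for j in range(3):
        sumJ = sum(eps[i,j,k] * J[k] for k in range(3))
        sumK = sum(eps[i,j,k] * K[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]),  sumJ)  # [J_i, J_j] =  eps_ijk J_k
        assert np.allclose(comm(K[i], K[j]), -sumJ)  # [K_i, K_j] = -eps_ijk J_k
        assert np.allclose(comm(J[i], K[j]),  sumK)  # [J_i, K_j] =  eps_ijk K_k
```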

We then define $$ A_i = \frac{1}{2} (J_i - i K_i) \hspace{2 cm} B_i = \frac{1}{2}(J_i + i K_i) $$ which satisfy the commutation relations $$ [A_i, A_j] = \varepsilon_{ijk} A_k \hspace{2cm} [B_i, B_j] = \varepsilon_{ijk} B_k \hspace{2cm} [A_i, B_j] = 0. $$
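Continuing the numerical sketch above (again my own illustration, reusing `J`, `K`, `comm`, and `eps`), one can check that the $A_i$ and $B_i$ really do close into two mutually commuting copies of $\mathfrak{su}(2)$:

```python
# Form A_i and B_i from the 4x4 J_i, K_i of the previous snippet
# and check the two commuting su(2) algebras.
A = [0.5 * (J[i] - 1j * K[i]) for i in range(3)]
B = [0.5 * (J[i] + 1j * K[i]) for i in range(3)]

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(A[i], A[j]), sum(eps[i,j,k] * A[k] for k in range(3)))
        assert np.allclose(comm(B[i], B[j]), sum(eps[i,j,k] * B[k] for k in range(3)))
        assert np.allclose(comm(A[i], B[j]), np.zeros((4, 4)))  # A's commute with B's
```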

Here is how you construct the representation of the Lorentz group: first, choose two non-negative half-integers $j_1$ and $j_2$. These correspond to the spin $j_1$ and spin $j_2$ representations of $\mathfrak{su}(2)$, which I will label $\pi'_{j_1}$ and $\pi'_{j_2}$. Recall that $$ \mathfrak{su}(2) = \mathrm{span}_\mathbb{R} \{ -\tfrac{i}{2} \sigma_x, -\tfrac{i}{2} \sigma_y, -\tfrac{i}{2} \sigma_z \} $$ where $$ [-\tfrac{i}{2} \sigma_i, -\tfrac{i}{2} \sigma_j] = -\tfrac{i}{2}\varepsilon_{ijk} \sigma_k. $$ For this question, we only need the spin $\tfrac{1}{2}$ representation of $\mathfrak{su}(2)$, which is given by $$ \pi'_{\tfrac{1}{2}}( -\tfrac{i}{2}\sigma_i) = -\tfrac{i}{2} \sigma_i. $$
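For completeness, the same kind of numerical check works for the $-\tfrac{i}{2}\sigma_i$ basis of $\mathfrak{su}(2)$; the list `sigma` below is my own naming, and `comm`/`eps` come from the earlier sketch.

```python
# Pauli matrices; the basis -i/2 sigma_i of su(2) satisfies the stated bracket.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z
su2 = [-0.5j * s for s in sigma]

for i in range(3):
    for j in range(3):
        assert np.allclose(comm(su2[i], su2[j]),
                           sum(eps[i,j,k] * su2[k] for k in range(3)))
```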

So okay, how do we construct the $(j_1, j_2)$ representation of the Lorentz group? Any Lie algebra element $X \in \mathfrak{so}(1,3)$ can be written as a linear combination of the $A_i$ and $B_i$: $$ X = \sum_{i = 1}^3 (\alpha_i A_i + \beta_i B_i). $$ (Note that we are actually dealing with the complexification of the Lie algebra $\mathfrak{so}(1,3)$, because our definitions of $A_i$ and $B_i$ involve factors of $i$, so $\alpha_i, \beta_i \in \mathbb{C}$.)

$A_i$ and $B_i$ form their own independent $\mathfrak{su}(2)$ algebras.

The Lie algebra representation $\pi'_{(j_1, j_2)}$ is then given by \begin{align*} \pi'_{(j_1, j_2)}(X) &= \pi'_{(j_1, j_2)}(\alpha_i A_i + \beta_i B_i) \\ &\equiv \pi'_{j_1}(\alpha_i A_i) \otimes I + I \otimes \big( \pi'_{j_2}(\beta_i B_i) \big)^* \end{align*} where the star denotes complex conjugation and $I$ is the identity on the other tensor factor.

Sometimes people forget to mention that you have to include the complex conjugation, but it won't work otherwise!

If $j_1 = 1/2$ and $j_2 = 1/2$, we have \begin{equation*} \pi_{\frac{1}{2}}'(A_i) \otimes I = -\frac{i}{2}\sigma_i \otimes I \hspace{2cm} I \otimes \big(\pi_{\frac{1}{2}}' (B_i)\big)^* = \frac{i}{2} I \otimes\sigma_i^*. \end{equation*} We can explicitly write out these tensor products as matrices in the $2 \times 2 = 4$ dimensional product basis. (Here I am using the so-called "Kronecker product" to do this. That's just a fancy name for the block construction in which every entry of the first $2\times 2$ matrix multiplies a copy of the second matrix, giving a $4 \times 4$ matrix.) \begin{align*} \pi_{(\frac{1}{2},\frac{1}{2})}'(A_x) &= -\frac{i}{2}\begin{pmatrix} 0&0&1&0 \\ 0&0&0&1 \\ 1&0&0&0 \\ 0&1&0&0 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_x)\big)^* &= \frac{i}{2}\begin{pmatrix} 0&1&0&0 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&1&0 \end{pmatrix} \\ \pi_{(\frac{1}{2},\frac{1}{2})}'(A_y) &= \frac{1}{2}\begin{pmatrix} 0&0&-1&0 \\ 0&0&0&-1 \\ 1&0&0&0 \\ 0&1&0&0 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_y)\big)^*&= \frac{1}{2}\begin{pmatrix} 0&-1&0&0 \\ 1&0&0&0 \\ 0&0&0&-1 \\ 0&0&1&0 \end{pmatrix} \\ \pi_{(\frac{1}{2},\frac{1}{2})}'(A_z) &= -\frac{i}{2}\begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&-1&0 \\ 0&0&0&-1 \end{pmatrix} & \big(\pi_{(\frac{1}{2},\frac{1}{2})}'(B_z)\big)^* &= \frac{i}{2}\begin{pmatrix} 1&0&0&0 \\ 0&-1&0&0 \\ 0&0&1&0 \\ 0&0&0&-1 \end{pmatrix} \end{align*} We can then write out the matrices of the rotations and boosts $J_i$ and $K_i$ using $$ J_i = A_i + B_i \hspace{2cm} K_i = i(A_i - B_i). $$ \begin{align*} \pi'_{(\frac{1}{2},\frac{1}{2})}(J_x) &= \frac{i}{2}\begin{pmatrix} 0&1&-1&0 \\ 1&0&0&-1 \\ -1&0&0&1 \\ 0&-1&1&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_x) &= \frac{1}{2}\begin{pmatrix} 0&1&1&0 \\ 1&0&0&1 \\ 1&0&0&1 \\ 0&1&1&0 \end{pmatrix} \\ \pi'_{(\frac{1}{2},\frac{1}{2})}(J_y) &= \frac{1}{2}\begin{pmatrix} 0&-1&-1&0 \\ 1&0&0&-1 \\ 1&0&0&-1 \\ 0&1&1&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_y) &= \frac{i}{2}\begin{pmatrix} 0&1&-1&0 \\ -1&0&0&-1 \\ 1&0&0&1 \\ 0&1&-1&0 \end{pmatrix} \\ \pi'_{(\frac{1}{2},\frac{1}{2})}(J_z) &= \begin{pmatrix} 0&0&0&0 \\ 0&-i&0&0 \\ 0&0&i&0 \\ 0&0&0&0 \end{pmatrix} & \pi'_{(\frac{1}{2},\frac{1}{2})}(K_z) &= \begin{pmatrix} 1&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&-1 \end{pmatrix} \\ \end{align*} These are strange-looking matrices, but we can make them look much more suggestive in another basis. Define the matrix \begin{equation*} U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 0 & 0 & 1 \\ 0 & 1 & -i & 0\\ 0 & 1 & i & 0 \\ 1 & 0 & 0 &-1 \end{pmatrix}. \end{equation*} Amazingly, \begin{equation*} U^{-1} \big( \pi'_{(\frac{1}{2},\frac{1}{2})}(J_i) \big) U = J_i \hspace{1cm}U^{-1} \big( \pi'_{(\frac{1}{2},\frac{1}{2})}(K_i) \big) U = K_i, \end{equation*} where $J_i$ and $K_i$ on the right-hand sides are the $4 \times 4$ matrices written at the start. Therefore, the $(\tfrac{1}{2}, \tfrac{1}{2})$ representation is equivalent to the regular "vector" representation of $SO^+(1,3)$. However, these "vectors" live in $\mathbb{C}^4$, not $\mathbb{R}^4$, which people usually don't mention.
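Finally, here is a sketch (my own, not part of the answer) that assembles the $(\tfrac{1}{2},\tfrac{1}{2})$ generators as Kronecker products, conjugates by $U$, and compares the result against the $4\times 4$ vector-representation matrices; it reuses `J`, `K`, and `sigma` from the snippets above.

```python
# Build the (1/2,1/2) generators via Kronecker products, then check that
# conjugating by U reproduces the 4x4 vector-representation matrices.
I2 = np.eye(2)
piA = [-0.5j * np.kron(s, I2) for s in sigma]          # pi'(A_i)     = -i/2 sigma_i (x) I
piB = [ 0.5j * np.kron(I2, s.conj()) for s in sigma]   # (pi'(B_i))*  =  i/2 I (x) sigma_i*

piJ = [piA[i] + piB[i] for i in range(3)]              # J_i = A_i + B_i
piK = [1j * (piA[i] - piB[i]) for i in range(3)]       # K_i = i (A_i - B_i)

U = np.array([[1, 0,   0,  1],
              [0, 1, -1j,  0],
              [0, 1,  1j,  0],
              [1, 0,   0, -1]], dtype=complex) / np.sqrt(2)
Uinv = np.linalg.inv(U)

for i in range(3):
    assert np.allclose(Uinv @ piJ[i] @ U, J[i])  # rotations match the vector rep
    assert np.allclose(Uinv @ piK[i] @ U, K[i])  # boosts match the vector rep
```

If the algebra above is right, all the assertions pass silently, which is exactly the "explicit computation" Zee asks for.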