Question on the decomposition of a representation of $D_n$ into irreducible representations.

dihedral-groups, group-theory, representation-theory

For simplicity I'll just consider the case in which $n$ is even.

$D_n$ is given by $\langle r,s : r^n=s^2=1,\ (rs)^2=1\rangle$. We can construct a representation $\rho: D_n \rightarrow GL(\mathbb{C}^n)$ on $\mathbb{C}^n$ by permuting the basis vectors in the obvious way:

$\rho(r)= \begin{bmatrix}
0 & 1 & & \\
& \ddots & \ddots & \\
& & \ddots & 1 \\
1 & & & 0\\
\end{bmatrix}, \quad \rho(s)=
\begin{bmatrix}
 & & & 1 \\
 & & ⋰ & \\
 & ⋰ & & \\
1 & & & \\
\end{bmatrix} $
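As a quick sanity check, here is a small NumPy sketch (the size $n = 6$ is just my example) confirming that these two matrices satisfy the defining relations:

```python
import numpy as np

n = 6  # my example size; the relations below hold for every n

# rho(r): cyclic shift (ones on the superdiagonal, one in the bottom-left corner)
R = np.roll(np.eye(n), 1, axis=1)

# rho(s): the reversal permutation (ones on the antidiagonal)
S = np.eye(n)[::-1]

I = np.eye(n)
assert np.allclose(np.linalg.matrix_power(R, n), I)  # r^n = 1
assert np.allclose(S @ S, I)                         # s^2 = 1
assert np.allclose(R @ S @ R @ S, I)                 # (rs)^2 = 1
```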

The entire representation is fixed by these two choices. What I don’t get now is how this is related to the decomposition into irreducible representations.

The irreps of $D_n$ ($n$ even) are given by:

1-dim: $\rho_{++}(r^as^b)=1, \quad \rho_{+-}(r^as^b)=(-1)^b, \quad \rho_{-+}(r^as^b)=(-1)^a, \quad \rho_{--}(r^as^b)=(-1)^{a+b}$

2-dim: $\rho_j(r^a)=
\begin{bmatrix}
\xi^j & 0\\
0 & \xi^{-j}\\
\end{bmatrix}^a \quad \rho_j(r^a s)=
\begin{bmatrix}
\xi^j & 0\\
0 & \xi^{-j}\\
\end{bmatrix}^a \begin{bmatrix}
0 & 1\\
1 &0\\
\end{bmatrix}$
, with $\xi=e^{\frac{2 \pi i}{n}} $ and $j \in \{1, \dots, \frac{n}{2} -1 \}$
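These formulas can likewise be checked directly; here is a small sketch (assuming NumPy; the helper name `rho_j` is mine) verifying that each $\rho_j$ satisfies the defining relations:

```python
import numpy as np

n = 6  # my example; any even n works
xi = np.exp(2j * np.pi / n)
F = np.array([[0, 1], [1, 0]])  # the swap matrix from the formula above

def rho_j(j, a, b):
    """rho_j(r^a s^b), following the formulas above."""
    D = np.diag([xi ** j, xi ** (-j)])
    return np.linalg.matrix_power(D, a) @ np.linalg.matrix_power(F, b)

I2 = np.eye(2)
for j in range(1, n // 2):
    Rj, Sj = rho_j(j, 1, 0), rho_j(j, 0, 1)
    assert np.allclose(np.linalg.matrix_power(Rj, n), I2)  # r^n = 1
    assert np.allclose(Sj @ Sj, I2)                        # s^2 = 1
    assert np.allclose(Rj @ Sj @ Rj @ Sj, I2)              # (rs)^2 = 1
```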

My professor now states:
$\mathbb{C}^n = V_0 \oplus V_k \oplus \left(\bigoplus_{j=1}^{k-1}V_j\right)$ with $n=2k$, where $V_0 \simeq \rho_{++}, \quad V_k \simeq \rho_{--}, \quad V_j \simeq \rho_{j}$
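For reference, the multiplicities in this statement can be verified numerically with characters, since the multiplicity of an irrep in $\rho$ is $\frac{1}{|G|}\sum_g \chi_\rho(g)\overline{\chi(g)}$. A sketch assuming NumPy (the helper name `mult` is mine):

```python
import numpy as np

n, k = 6, 3  # n = 2k; my example size
xi = np.exp(2j * np.pi / n)
R = np.roll(np.eye(n), 1, axis=1)  # rho(r) from the beginning of the post
S = np.eye(n)[::-1]                # rho(s) likewise

def mult(chi):
    """Multiplicity of an irrep with character chi in rho."""
    total = sum(np.trace(np.linalg.matrix_power(R, a) @ np.linalg.matrix_power(S, b))
                * np.conj(chi(a, b))
                for a in range(n) for b in range(2))
    return round((total / (2 * n)).real, 6)  # all characters here are real

print(mult(lambda a, b: 1))                # rho_++  -> 1
print(mult(lambda a, b: (-1) ** b))        # rho_+-  -> 0
print(mult(lambda a, b: (-1) ** a))        # rho_-+  -> 0
print(mult(lambda a, b: (-1) ** (a + b)))  # rho_--  -> 1
for j in range(1, k):                      # each rho_j -> 1
    print(mult(lambda a, b, j=j: (xi ** (j*a) + xi ** (-j*a)) * (b == 0)))
```

This prints $1, 0, 0, 1$, and then $1$ for each $j$, matching the stated decomposition.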

I don't really get how I should interpret the last part with the direct sums and these "isomorphic" symbols. I also don't get how these irreps are related to the big matrices at the beginning of my post: in linear algebra, if a homomorphism was the direct sum of other homomorphisms, then its matrix "consisted of blocks of the other matrices", but I don't see that here (I hope it's clear what I mean by that). Also, why don't $\rho_{-+}, \rho_{+-}$ show up in the sum? I'd be really happy if someone could clear up my confusion.

Best Answer

As @user8675309 aptly comments, the "blocks" will only be visible when a suitable basis is chosen. The more elementary analogue is that a matrix with distinct eigenvalues need not be diagonal as written, but can be diagonalized with respect to a suitable basis (of eigenvectors, after all).

Further, in many cases there is scant purpose in writing out the matrix that does the change of coordinates; rather, one just gives the good basis and describes how the linear transformations act on it.

In your example, the first matrix (of order $n$) has distinct eigenvalues, namely all the $n$th roots of unity, so it can be diagonalized (with all $n$th roots of unity on the diagonal). The second matrix maps the $\mu$-eigenspace $V_\mu$ to the $\mu^{-1}$-eigenspace $V_{\mu^{-1}}$, as forced by the relation $srs^{-1}=r^{-1}$. So it stabilizes the one-dimensional eigenspaces $V_1$ and $V_{-1}$, and stabilizes the two-dimensional spaces $V_\mu\oplus V_{\mu^{-1}}$ for $n$th roots of unity $\mu\neq\pm 1$.

Thus, we know there exists a change-of-basis matrix producing blocks of $1$ on $V_1$, of $-1$ on $V_{-1}$, and two-by-two blocks on which the rotation acts by the diagonal matrix $\begin{pmatrix}\mu & 0\\ 0 & \mu^{-1}\end{pmatrix}$, where $\mu$ runs through the $n$th roots of unity modulo inverses.
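To make that concrete, here is a sketch of such a change of basis (assuming NumPy; rescaling the second vector of each pair by $\mu^{-1}$ is my own choice, made so that the reflection acts on each pair by the exact swap matrix from the irreps above):

```python
import numpy as np

n, k = 6, 3  # n = 2k; my example size
xi = np.exp(2j * np.pi / n)
R = np.roll(np.eye(n), 1, axis=1)  # rho(r)
S = np.eye(n)[::-1]                # rho(s)

def v(mu):
    """Eigenvector of R for eigenvalue mu: components (1, mu, mu^2, ...)."""
    return mu ** np.arange(n)

# Basis ordered as V_1, V_{-1}, then the pair (V_mu, V_{mu^{-1}}) for mu = xi^j.
# The second vector of each pair is rescaled by mu^{-1} so that the reflection
# acts on the pair by the exact swap matrix.
cols = [v(1 + 0j), v(-1 + 0j)]
for j in range(1, k):
    mu = xi ** j
    cols += [v(mu), v(1 / mu) / mu]
P = np.column_stack(cols)
Pinv = np.linalg.inv(P)

np.set_printoptions(precision=2, suppress=True, linewidth=120)
print(np.round(Pinv @ R @ P, 10))  # diagonal: 1, -1, xi^j, xi^{-j}, ...
print(np.round(Pinv @ S @ P, 10))  # blocks: 1, -1, then 2x2 swap blocks
```

In this basis $\rho(r)$ is diagonal and $\rho(s)$ is block diagonal with blocks $1$, $-1$, and $\begin{pmatrix}0&1\\1&0\end{pmatrix}$, i.e. exactly $\rho_{++}\oplus\rho_{--}\oplus\bigoplus_{j=1}^{k-1}\rho_j$.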