[Math] Subgroup of $GL_n$ isomorphic to $S_n$

abstract-algebra, group-theory

I am trying to show that the group of $n\times n$ permutation matrices is isomorphic to $S_n$, the symmetric group on $n$ elements. It seems clear to me that they would be isomorphic: we can associate each row with a number, and then each permutation matrix would correspond to some rearrangement of $n$ letters. However, I'm having difficulty coming up with a homomorphism.
Related Solutions
Edit: I've revised the answer to make it more elementary, and to fix the error YACP pointed out (thank you).
Suppose $X\in N_{GL_n(K)}(S_n)$. Then for every permutation matrix $P\in S_n$ we have $XPX^{-1}\in S_n$, so conjugation by $X$ is an automorphism of $S_n$. If $n\ne 2, 6$, then as YACP noted it must be an inner automorphism, i.e. we have some $P'\in S_n$ such that for every $P\in S_n$, $XPX^{-1}=P'P{P'}^{-1}$. Thus $(X^{-1}P')P(X^{-1}P')^{-1}=P$, so $X^{-1}P'\in C_{GL_n(K)}(S_n)$ (the centralizer of $S_n$). Thus $X\in C_{GL_n(K)}(S_n)\cdot S_n$, so all we have to do is find $C_{GL_n(K)}(S_n)$, as $C_{GL_n(K)}(S_n)\cdot S_n\subseteq N_{GL_n(K)}(S_n)$ holds trivially.
Let $\mathcal C$ denote the set of all matrices (including non-invertible ones) $X$ such that $PXP^{-1}=X$ for all $P\in S_n$. Note that conjugation is linear, i.e. $A(X+Y)A^{-1}=AXA^{-1}+AYA^{-1}$ for any $X,Y\in M_{n\times n}(K)$ and invertible $A$, so $\mathcal C$ is closed under addition. Conjugation also respects scalar multiplication, i.e. $A(cX)A^{-1}=c\,AXA^{-1}$, so $\mathcal C$ is closed under scalar multiplication. Recall that $M_{n\times n}(K)$ is a vector space over $K$, so this makes $\mathcal C$ a subspace of $M_{n\times n}(K)$. The point of introducing $\mathcal C$ is that $C_{GL_n(K)}(S_n)=\mathcal C\cap GL_n(K)$, yet unlike $C_{GL_n(K)}(S_n)$ it is a vector subspace, and vector subspaces are nice to work with.
It is easy to see that $\mathcal C$ contains every diagonal matrix $D$ with constant diagonal, as well as every matrix $M$ whose entries $m_{ij}$ are all equal. Since $\mathcal C$ is a vector subspace, it contains all sums of such matrices as well. We want to show that every matrix in $\mathcal C$ can be written as $D+M$ where $D$ and $M$ are as above. If $X\in \mathcal C$ then we can subtract a diagonal matrix $D$ and a matrix $M$ of the second kind to make the upper-left and upper-right entries $0$: $$X-D-M=\begin{pmatrix} 0 & x_{12} & \cdots & x_{1,n-1} & 0\\ x_{21} & x_{22} & \cdots & x_{2,n-1} & x_{2n}\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ x_{n1} & x_{n2} & \cdots & x_{n,n-1} & x_{nn}\\ \end{pmatrix}$$ Call this matrix $X'$; we wish to show $X'=0$. Since $X'$ commutes with every permutation matrix, exchanging the second and last columns must have the same effect as exchanging the second and last rows; the first action switches $x_{12}$ and $x_{1n}$ while the second leaves the first row unchanged, so $x_{12}=x_{1n}=0$. Continuing in this manner we see that the whole first row is $0$. Likewise, exchanging the first and second rows has the same effect as exchanging the first and second columns, so the whole second row must be $0$ as well. Continuing in this manner we get $X'=0$ as desired. Thus $\mathcal C$ is exactly the set of matrices of the form $D+M$, i.e. with some constant $a$ on the diagonal and some constant $b$ off the diagonal.
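As a sanity check, here is a small brute-force computation (my own sketch, not part of the original answer, working numerically over $\mathbb R$): it verifies that the space of matrices commuting with all $n\times n$ permutation matrices is exactly $2$-dimensional for a small $n$, matching the span of $I$ and the all-ones matrix $J$.

```python
# Sketch: numerically verify dim C = 2 for small n, where C is the space
# of matrices commuting with every n x n permutation matrix (over R).
import itertools
import numpy as np

def perm_matrix(p):
    """Permutation matrix with a 1 in position (p[i], i) for each column i."""
    n = len(p)
    P = np.zeros((n, n))
    P[list(p), range(n)] = 1
    return P

n = 4
perms = [perm_matrix(p) for p in itertools.permutations(range(n))]

# Stack the linear conditions PX - XP = 0 in the n^2 unknowns of X:
# with column-major vec, vec(PX - XP) = (I (x) P - P^T (x) I) vec(X).
A = np.vstack([np.kron(np.eye(n), P) - np.kron(P.T, np.eye(n)) for P in perms])

# Nullspace dimension = n^2 - rank(A); expect 2 (the span of I and J).
print(n * n - np.linalg.matrix_rank(A))  # -> 2
```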
$C_{GL_n(K)}(S_n)$ is the set of such matrices with nonzero determinant. Let $X\in \mathcal C$ have entries $a$ on the diagonal and $b$ off it. Clearly if $a=b$ then the determinant is $0$ (all rows are equal), so suppose $a\ne b$. Then we can write $X=(a-b)(I_n+cr)$ where $c$ is a column consisting entirely of $1$'s and $r$ is a row consisting entirely of entries $\frac{b}{a-b}$. By Sylvester's determinant theorem the determinant of this is $(a-b)^n(1+rc)$, and $rc=\frac{nb}{a-b}$, which gives us $\det(X)=(a-b)^{n-1}(a-b+nb)=(a-b)^{n-1}(a+(n-1)b)$. Thus for any $X\in \mathcal C$, $\det(X)=0$ iff either $a=b$ or $a=(1-n)b$.
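Purely as an illustration (again my own numeric sketch, not from the answer), one can spot-check this determinant formula with random values of $a$, $b$, and $n$:

```python
# Sketch: spot-check det(X) = (a-b)^(n-1) * (a + (n-1)b) for the matrix
# with a on the diagonal and b everywhere else.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    n = int(rng.integers(2, 8))
    a, b = rng.normal(size=2)
    X = b * np.ones((n, n)) + (a - b) * np.eye(n)
    expected = (a - b) ** (n - 1) * (a + (n - 1) * b)
    assert np.isclose(np.linalg.det(X), expected)
print("determinant formula checked")
```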
Putting this all together, we get that $$N_{GL_n(K)}(S_n)=\left\{\begin{pmatrix} a & b & \cdots & b \\ b & a & \ddots &\vdots \\ \vdots &\ddots & \ddots & b\\ b & \cdots & b & a\\ \end{pmatrix}P: a\neq b, a\neq (1-n)b, P\in S_n \right\}$$
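To close the loop, here is a randomized check (my own sketch, under the same numeric setup as above) that matrices of the displayed form really do normalize the permutation matrices:

```python
# Sketch: check that X = (bJ + (a-b)I) P normalizes the permutation
# matrices, i.e. X Q X^{-1} is again a permutation matrix for every Q.
import itertools
import numpy as np

def perm_matrix(p):
    n = len(p)
    P = np.zeros((n, n))
    P[list(p), range(n)] = 1
    return P

n, a, b = 4, 2.0, 0.5                    # a != b and a != (1-n)b
perms = [perm_matrix(p) for p in itertools.permutations(range(n))]
M = b * np.ones((n, n)) + (a - b) * np.eye(n)
X = M @ perms[7]                         # an arbitrary permutation factor P
Xinv = np.linalg.inv(X)
for Q in perms:
    R = np.round(X @ Q @ Xinv)
    assert np.allclose(X @ Q @ Xinv, R)  # entries are (numerically) 0/1
    assert (R.sum(axis=0) == 1).all() and (R.sum(axis=1) == 1).all()
print("X normalizes the permutation matrices")
```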
The proof is correct, but one can generalize and shorten it as follows:
Let $G$ be a finite group and assume that the intersection of all non-trivial subgroups of $G$ is non-trivial, i.e. contains some $g \neq 1$. If $G$ acts on a set $A$ with $|A|<|G|$, then for every $a \in A$ we have $|G:G_a|=|Ga| \leq |A| < |G|$, so the stabilizer $G_a$ is non-trivial. Therefore $g \in \bigcap_{a \in A} G_a = \ker(G \to \mathrm{Sym}(A))$, so no action of $G$ on $A$ is faithful. Since an embedding $G \hookrightarrow \mathrm{Sym}(A)$ would give a faithful action, $G$ doesn't embed into $\mathrm{Sym}(A)$.
For the quaternion group $Q_8$ we can take $g=-1$.
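A quick computational illustration of that hypothesis (my own sketch): every non-trivial subgroup of $Q_8$ contains the cyclic subgroup generated by one of its non-identity elements, so it suffices to check that $-1 \in \langle x\rangle$ for every $x \ne 1$. Representing $Q_8$ by $2\times 2$ complex matrices:

```python
# Sketch: verify that -1 lies in <x> for every non-identity x in Q8,
# hence every non-trivial subgroup of Q8 contains -1.
import numpy as np

I2 = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j
Q8 = [s * m for s in (1, -1) for m in (I2, i, j, k)]

for x in Q8:
    if np.allclose(x, I2):
        continue
    powers, p = [], x
    while not np.allclose(p, I2):        # collect x, x^2, ... up to the order of x
        powers.append(p)
        p = p @ x
    assert any(np.allclose(q, -I2) for q in powers)
print("every non-trivial subgroup of Q8 contains -1")
```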
Best Answer
The homomorphism is indeed a rather trivial one.
Consider the following map $\psi : S_n \to \mathbb P_{n \times n}$, where the latter is the set of $n \times n$ permutation matrices. The map is defined as follows: given $\phi \in S_n$, the $i$th column of $\psi(\phi)$ is the column vector with a $1$ in the $\phi(i)$th position and $0$ elsewhere.
It is easy to see that $\psi(\phi)$ is indeed a permutation matrix: a $1$ occurs in position $(j,i)$ if and only if $j = \phi(i)$, so each column contains exactly one $1$, and since $\phi$ is a bijection each row does as well.
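In code the definition is a one-liner. The following sketch (my own, with permutations acting on $\{0,\dots,n-1\}$ rather than $\{1,\dots,n\}$) builds $\psi(\phi)$ and confirms it is a permutation matrix:

```python
# Sketch of psi: column i of psi(phi) is the standard basis vector e_{phi(i)}.
import numpy as np

def psi(phi):
    """phi is a tuple with phi[i] = phi(i), a permutation of 0..n-1."""
    n = len(phi)
    M = np.zeros((n, n), dtype=int)
    for col in range(n):
        M[phi[col], col] = 1      # the single 1 in column i sits in row phi(i)
    return M

P = psi((2, 0, 3, 1))             # the permutation 0->2, 1->0, 2->3, 3->1
print(P)
# Exactly one 1 in every row and every column:
assert (P.sum(axis=0) == 1).all() and (P.sum(axis=1) == 1).all()
```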
This map is clearly multiplicative. The proof is awkward to write, but easy to understand. We must prove that $\psi(\alpha \circ \beta)=\psi(\alpha)\psi(\beta)$ for any permutations $\alpha,\beta$.
Note that since these are matrices, it's enough to show that each corresponding entry is equal. So let us take the entry $(i,j)$ of each matrix.
Then $\psi(\alpha \circ \beta)_{ij} = 1$ if and only if $i = (\alpha \circ \beta)(j)$, as I said earlier; otherwise it is zero.
Note that by ordinary matrix multiplication, $(\psi(\alpha)\psi(\beta))_{ij} = \sum_{k=1}^n \psi(\alpha)_{ik}\psi(\beta)_{kj}$.
Now, we know that $\psi(\alpha)_{ik} = 1$ only when $i = \alpha(k)$. Similarly, $\psi(\beta)_{kj} = 1$ only when $k = \beta(j)$. Hence, their product is $1$ precisely when both of these happen: $i = \alpha(k)$ and $k = \beta(j)$. If they don't happen simultaneously, then whenever one of $\psi(\alpha)_{ik},\psi(\beta)_{kj}$ is $1$ the other is $0$, so every term, and hence the whole sum, is zero. In other words, the sum is $1$ exactly when $i = (\alpha \circ\beta)(j)$. This matches the description given for $\psi(\alpha \circ \beta)_{ij}$ earlier.
Hence, entry by entry these matrices are the same, so the matrices are equal and $\psi$ is a homomorphism between the two groups. It is in fact an isomorphism: its kernel is trivial, so it is injective, and since both groups have exactly $n!$ elements, it is surjective as well.
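Finally, a randomized sanity check of the homomorphism property (again my own sketch; composition is right-to-left, so $(\alpha\circ\beta)(i)=\alpha(\beta(i))$):

```python
# Sketch: check psi(alpha . beta) == psi(alpha) @ psi(beta) on random pairs.
import numpy as np

def psi(phi):
    n = len(phi)
    M = np.zeros((n, n), dtype=int)
    M[list(phi), range(n)] = 1
    return M

rng = np.random.default_rng(1)
n = 6
for _ in range(100):
    alpha = tuple(rng.permutation(n))
    beta = tuple(rng.permutation(n))
    composed = tuple(alpha[beta[i]] for i in range(n))  # alpha after beta
    assert np.array_equal(psi(composed), psi(alpha) @ psi(beta))
print("psi is multiplicative on all sampled pairs")
```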