The short answer is that $J^2$ is a scalar under all rotations (it commutes with every generator), and the only matrix with that property in an irreducible representation is a multiple of the identity, so you have to have $J^2\propto I$.
I think that the process and physics are a bit easier to understand if we start from the elemental rotation matrices in $3$-d about each axis
\begin{align}
R_x & = \left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta
\end{array}\right] & R_y & = \left[\begin{array}{ccc}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{array}\right] & R_z & = \left[\begin{array}{ccc}
\cos\theta & -\sin\theta & 0\\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{array}\right].
\end{align}
where the pattern of signs is chosen so that a positive rotation gives a rotation around the axis in the sense defined to be positive by the right hand rule.
A matrix/operator is said to generate a transformation if, for an infinitesimal transformation, it satisfies
$$R(\theta) = I - i \frac{J\theta}{\hbar} + \mathcal{O}(\theta^2),$$
where the transformation is $R$ and the generator is $J$. From that we get that
\begin{align}
J_x & = \left[\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & -i\hbar \\
0 & i\hbar & 0
\end{array}\right] & J_y & = \left[\begin{array}{ccc}
0 & 0 & i\hbar \\
0 & 0 & 0 \\
-i\hbar & 0 & 0
\end{array}\right] & J_z & = \left[\begin{array}{ccc}
0 & -i\hbar & 0 \\
i\hbar & 0 & 0 \\
0 & 0 & 0
\end{array}\right].
\end{align}
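The relationship between the finite rotations and these generators can be checked numerically. Below is a minimal sketch in Python/NumPy (working in units where $\hbar = 1$, and assuming SciPy is available for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

hbar = 1.0  # work in units where hbar = 1

# Generators read off from R(theta) = I - i*J*theta/hbar + O(theta^2)
Jx = hbar * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Jy = hbar * np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Jz = hbar * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

def Rz(theta):
    """Elemental rotation about the z axis, as in the text."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

theta = 0.7
# Exponentiating the generator recovers the finite rotation exactly
assert np.allclose(expm(-1j * theta * Jz / hbar), Rz(theta))

# and for an infinitesimal angle, R(eps) = I - i*eps*Jz/hbar + O(eps^2)
eps = 1e-6
assert np.allclose(Rz(eps), np.eye(3) - 1j * eps * Jz / hbar, atol=1e-9)
```

The same check goes through for $J_x$ and $J_y$ against $R_x$ and $R_y$.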
We now look for the eigenvectors and eigenvalues of $J_z$. The defining equation for the eigen-problem is
$$J_z \vec{v} = \lambda \vec{v}$$
for some eigenvalue $\lambda$ and its corresponding eigenvector $\vec{v}$. We get that the eigenvalue/eigenvector pairs of $J_z$ are
\begin{align}
\lambda_{-1} & = -\hbar \Rightarrow & \vec{v}_{-1} & = \frac{1}{\sqrt{2}}\left[\begin{array}{c}
i \\
1 \\
0 \end{array}\right], \\
\lambda_0 & = 0 \Rightarrow & \vec{v}_0 & = \left[\begin{array}{c}
0 \\
0 \\
i \end{array}\right],\ \mathrm{and} \\
\lambda_1 & = \hbar \Rightarrow & \vec{v}_1 & = \frac{1}{\sqrt{2}}\left[\begin{array}{c}
-i \\
1 \\
0 \end{array}\right],
\end{align}
where I have taken care to normalize the eigenvectors to unit length (i.e. $\vec{v}^* \cdot \vec{v} = 1$), and to set the overall phases of the unit vectors to make the results match certain other conventions.
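This eigen-problem is easy to confirm numerically. A short sketch (with $\hbar = 1$; note that a numerical solver is free to return the eigenvectors with different overall phases than the hand-chosen ones above):

```python
import numpy as np

hbar = 1.0  # units where hbar = 1
Jz = hbar * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

# eigh handles Hermitian matrices and sorts eigenvalues in ascending order
evals, evecs = np.linalg.eigh(Jz)
assert np.allclose(evals, [-hbar, 0.0, hbar])

# Each column of evecs satisfies Jz v = lambda v and has unit norm
for lam, v in zip(evals, evecs.T):
    assert np.allclose(Jz @ v, lam * v)
    assert np.isclose(np.vdot(v, v).real, 1.0)
```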
It is a standard property of Hermitian matrices that if we define $V \equiv \left[\vec{v}_1,\ \vec{v}_0,\ \vec{v}_{-1}\right]$, the matrix with the normalized eigenvectors in its columns, then
\begin{align}
V & = \left[\begin{array}{ccc}
-\frac{i}{\sqrt{2}} & 0 & \frac{i}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \\
0 & i & 0
\end{array}\right],\ \mathrm{and} &
V^\dagger J_z V &= \hbar \left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1
\end{array}\right].
\end{align}
Note that $V^\dagger V=I$, meaning $V$ is a unitary matrix. That fact means that $V$ is a transformation from one orthonormal basis to another. If we say that the above matrices are just representations of a more abstract operator in a particular basis, and use a $\rightarrow$ to signify representation, then we can say that in this basis
$$J_z\rightarrow \hbar \left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & -1
\end{array}\right].$$
If we examine $J_x$ and $J_y$ in this basis, we get
\begin{align}
J_x &\rightarrow \frac{\hbar}{\sqrt{2}} \left[\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 1 \\
0 & 1 & 0
\end{array}\right],\ \mathrm{and} & J_y &\rightarrow i\frac{\hbar}{\sqrt{2}} \left[\begin{array}{ccc}
0 & -1 & 0 \\
1 & 0 & -1 \\
0 & 1 & 0
\end{array}\right].
\end{align}
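The whole change of basis can be verified in a few lines. A sketch (with $\hbar = 1$; the columns of `V` are $\vec{v}_1$, $\vec{v}_0$, $\vec{v}_{-1}$ with the phases chosen above):

```python
import numpy as np

hbar = 1.0
Jx = hbar * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Jy = hbar * np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Jz = hbar * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

# Columns are v_1, v_0, v_{-1}
V = np.array([[-1j, 0, 1j],
              [1, 0, 1],
              [0, 1j * np.sqrt(2), 0]]) / np.sqrt(2)

assert np.allclose(V.conj().T @ V, np.eye(3))  # V is unitary
assert np.allclose(V.conj().T @ Jz @ V, hbar * np.diag([1, 0, -1]))
assert np.allclose(V.conj().T @ Jx @ V,
                   hbar / np.sqrt(2) * np.array([[0, 1, 0],
                                                 [1, 0, 1],
                                                 [0, 1, 0]]))
assert np.allclose(V.conj().T @ Jy @ V,
                   1j * hbar / np.sqrt(2) * np.array([[0, -1, 0],
                                                      [1, 0, -1],
                                                      [0, 1, 0]]))
```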
Now you can evaluate $J^2 = J_x^2 + J_y^2 + J_z^2$ explicitly.
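Doing that evaluation numerically (with $\hbar = 1$) shows the expected $J^2 = j(j+1)\hbar^2 I = 2\hbar^2 I$ for $j = 1$; since $J^2$ is basis independent, it makes no difference which of the two bases you use:

```python
import numpy as np

hbar = 1.0
Jx = hbar * np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Jy = hbar * np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Jz = hbar * np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
# j = 1, so J^2 = j(j+1) hbar^2 I = 2 hbar^2 I
assert np.allclose(J2, 2 * hbar**2 * np.eye(3))
```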
Notice that our original basis consisted of the three states that are invariant under rotations about their respective axes: the $m_x=0$, $m_y=0$, and $m_z=0$ states, respectively.
Note also that the choice of phases made won't affect $J^2$. The phases were chosen so that the $J_x$ and $J_y$ constructed here match those generated by following the development in Wikipedia's ladder operator article.
You can make a more general construction if you look at the commutation relations among $J_x$, $J_y$ and $J_z$ as defined above, and rename the $x$ through $z$ axes as $1$ through $3$ to get that
$$[J_i,\, J_j] = i\hbar \sum_{k=1}^3 \epsilon_{ijk} J_k, \tag1$$
with $\epsilon_{ijk}$ the Levi-Civita symbol.
We then say that any collection of three Hermitian matrices satisfying the commutation relations in (1) is a set of generators of the symmetry transformation we call rotations in physics, in some particular representation/basis. The next step is to prove that $$[J^2,\, J_i] = 0,$$ which shows that $J^2$ and $J_z$ can be simultaneously diagonalized. I have forgotten the details of the next step (it is the ladder-operator construction mentioned above), but this more general approach works for all possible values of orbital angular momentum and spin, making it much more powerful.
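Both (1) and $[J^2, J_i] = 0$ can be checked directly for the $3\times 3$ generators above. A sketch (with $\hbar = 1$; `eps` is the Levi-Civita symbol built by hand):

```python
import numpy as np

hbar = 1.0
J = [hbar * np.array(m) for m in (
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],   # J_1 = J_x
    [[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]],   # J_2 = J_y
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],   # J_3 = J_z
)]

def comm(A, B):
    return A @ B - B @ A

# Levi-Civita symbol: +1 on even permutations, -1 on odd, 0 otherwise
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1, -1

# [J_i, J_j] = i hbar sum_k eps_ijk J_k
for i in range(3):
    for j in range(3):
        rhs = 1j * hbar * sum(eps[i, j, k] * J[k] for k in range(3))
        assert np.allclose(comm(J[i], J[j]), rhs)

# J^2 commutes with every generator
J2 = sum(Ji @ Ji for Ji in J)
assert all(np.allclose(comm(J2, Ji), 0) for Ji in J)
```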
You dramatically misunderstood and misquoted Ernie's formula (4.17)!
The whole point of his formula is the term you skipped, $O(\theta^2)$ on the right-hand side,
$$e^{-i\theta\vec{J}\cdot\hat{n}}=\exp{(-i\theta\, n_x J_x)}\exp{(-i\theta\, n_y J_y)}\exp{(-i\theta\, n_z J_z)} + O(\theta^2),
$$
which is to say his rotations only compose like this for infinitesimally small rotations, a point he stresses in (4.28)! This is the essence of Lie groups, namely that the non-commutativity first emerges at $O(\theta^2)$.
You may see this by two sequential applications of the CBH composition of exponentials of noncommuting operators, (4.25), namely
$$
\exp{(-i\theta\ n_x J_x)}\exp{(-i\theta\ n_y J_y)}\exp{(-i\theta n_z J_z)} \\ = \exp \bigl( -i\theta ~\hat n\cdot \vec J -i (\theta^2 /2)(n_xn_y J_z- n_x n_zJ_y + n_y n_z J_x )+ O(\theta^3)\bigr) \\ =
\exp(-i\theta ~\hat n\cdot \vec J ) + O(\theta^2).
$$
cf. (4.29).
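The $O(\theta^2)$ scaling of the discrepancy can be checked numerically. A sketch (spin-1 generators with $\hbar = 1$; the axis $\hat n$ is an arbitrary choice of mine, and the factor-of-four test assumes the error is dominated by the $\theta^2$ term):

```python
import numpy as np
from scipy.linalg import expm

# Spin-1 generators in units where hbar = 1
Jx = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
Jy = np.array([[0, 0, 1j], [0, 0, 0], [-1j, 0, 0]])
Jz = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])

n = np.array([1.0, 2.0, 2.0]) / 3.0   # an arbitrary unit axis

def err(theta):
    """Norm of (product of elemental exponentials) - exp(-i theta n.J)."""
    prod = (expm(-1j * theta * n[0] * Jx)
            @ expm(-1j * theta * n[1] * Jy)
            @ expm(-1j * theta * n[2] * Jz))
    return np.linalg.norm(prod - expm(-1j * theta * (n[0]*Jx + n[1]*Jy + n[2]*Jz)))

# Halving theta should roughly quarter the error if it scales as theta^2
assert 3.5 < err(1e-3) / err(5e-4) < 4.5
```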
As far as non-commutativity goes, he threw out the baby with the bathwater, but he is illustrating mappings of sums of matrices, not their noncommutative multiplication...
Best Answer
The two representations are unitarily equivalent to each other, except for an overall factor of $i$.
To be clear, I'll write $J$ and $\tilde J$ for the generators in the two different representations. One representation is $$ J_x = \left( \begin{matrix} 0&0&0\\ 0&0&-1 \\ 0&1&0\end{matrix} \right) \hskip1cm J_y = \left( \begin{matrix} 0&0&1\\ 0&0&0 \\ -1&0&0\end{matrix} \right) \hskip1cm J_z = \left( \begin{matrix} 0&-1&0\\ 1&0&0 \\ 0&0&0\end{matrix} \right) $$ and the other is $$ \tilde J_x = \frac{1}{\sqrt{2}}\left( \begin{matrix} 0&1&0\\ 1&0&1 \\ 0&1&0\end{matrix} \right) \hskip1cm \tilde J_y = \frac{i}{\sqrt{2}}\left( \begin{matrix} 0&-1&0\\ 1&0&-1 \\ 0&1&0\end{matrix} \right) \hskip1cm \tilde J_z = \left( \begin{matrix} 1&0&0\\ 0&0&0 \\ 0&0&-1\end{matrix} \right). $$ The $J$s are anti-hermitian and $\tilde J$s are hermitian. That's just a matter of convention, because we can multiply the $J$s by $i$ to make them hermitian. The unitary matrix $$ U = \frac{1}{\sqrt{2}} \left( \begin{matrix} 1&0&-1\\ i&0&i \\ 0&-\sqrt{2}&0\end{matrix} \right) $$ satisfies $$ i\,J_x U = U\tilde J_x \hskip2cm i\,J_y U = U\tilde J_y \hskip2cm i\,J_z U = U\tilde J_z, $$ which proves that the two representations are equivalent except for the overall factor of $i$.
These identities could be written in the form $i\,J=U\tilde J U^{-1}$ instead, but the way I wrote them above makes them easier to check.
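Indeed, the check takes only a few lines of NumPy. A sketch (the names `Tx`, `Ty`, `Tz` for the $\tilde J$ matrices are my own):

```python
import numpy as np

# Anti-hermitian generators (the "J" representation)
Jx = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex)
Jy = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
Jz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)

# Hermitian spin-1 generators (the "tilde J" representation)
s = 1 / np.sqrt(2)
Tx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Ty = 1j * s * np.array([[0, -1, 0], [1, 0, -1], [0, 1, 0]], dtype=complex)
Tz = np.diag([1.0, 0.0, -1.0]).astype(complex)

U = np.array([[1, 0, -1],
              [1j, 0, 1j],
              [0, -np.sqrt(2), 0]]) / np.sqrt(2)

assert np.allclose(U.conj().T @ U, np.eye(3))  # U is unitary
# i J_a U = U tilde J_a for each axis a
for J, T in [(Jx, Tx), (Jy, Ty), (Jz, Tz)]:
    assert np.allclose(1j * J @ U, U @ T)
```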