To show that
$$
\left(\sigma\cdot\mathbf{n}\right)^2=\mathbf n\cdot\mathbf n+i\sigma\cdot\left(\mathbf n\times\mathbf n\right)\tag{1}
$$
consider writing the above as
\begin{align}
\left(\sigma\cdot\mathbf a\right)\left(\sigma\cdot\mathbf b\right)&=\sum_j\sigma_ja_j\sum_k\sigma_kb_k\\
&=\sum_j\sum_k\left(\frac12\{\sigma_j,\,\sigma_k\}+\frac12[\sigma_j,\,\sigma_k]\right)a_jb_k\\
&=\sum_j\sum_k\left(\delta_{jk}+i\sum_l\epsilon_{jkl}\sigma_l\right)a_jb_k\tag{2}
\end{align}
where the second line uses the anticommutation and commutation relations for the Pauli matrices, $\{\sigma_j,\,\sigma_k\}=2\delta_{jk}\mathbb1$ and $[\sigma_j,\,\sigma_k]=2i\epsilon_{jkl}\sigma_l$, which give the Kronecker delta and Levi-Civita symbol in the third line. The result (1) follows from writing (2) in vector notation and setting $\mathbf a=\mathbf b=\mathbf n$.
It remains to show that this equals $\mathbb 1$. For that, the following two hints should suffice:
- Note that for any two vectors $\mathbf a$ and $\mathbf b$, $\mathbf a\times\mathbf b=-\mathbf b\times\mathbf a$. What does this antisymmetry imply when $\mathbf b=\mathbf a$, i.e. what is $\mathbf a\times\mathbf a$?
- For a unit vector, e.g. $\mathbf n=(1,\,0,\,0)^T$, what is the dot product $\mathbf n\cdot\mathbf n$?
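Both facts are easy to check numerically. Here is a minimal sketch with NumPy (the specific vector $\mathbf n$ is my own arbitrary choice; any normalized 3-vector works):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# an arbitrary normalized 3-vector n
n = np.array([1.0, 2.0, 2.0])
n /= np.linalg.norm(n)

# since n.n = 1 and n x n = 0, (sigma.n)^2 should be the 2x2 identity
sn = n[0] * sx + n[1] * sy + n[2] * sz
print(np.allclose(sn @ sn, np.eye(2)))  # True
```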
The identity you used,
$$
\exp(i\theta \, \hat s)=\cos(\theta)+i\sin(\theta)\,\hat s,
\tag{$\ast$}
$$
is crucially dependent on the operator $\hat s$ being an involution, i.e. on the fact that $\hat{s}^2=\mathbb1$. This is generally not the case for angular momenta other than spin-1/2. In general, the squared total angular momentum is a scalar within the representation (i.e. $\hat{\mathbf J}^2=\hbar^2j(j+1)\,\mathbb 1$), but outside of spin-1/2 it is no longer the case that each component squares to the identity, so if $\hat{\mathbf n}$ is a unit vector, $\hat{\mathbf n}\cdot\hat{\mathbf J}$ no longer squares to a multiple of $\mathbb 1$. In that case, the rotation operator is still given by
$$
\hat U(\vec \alpha)=\exp(-i\vec \alpha\cdot\hat{\mathbf J}/\hbar)
$$
but no further simplifications are possible.
That said, if you have a specific $j$ you want to investigate, there is probably an analogous formula to $(\ast)$: by the Cayley–Hamilton theorem, the $(2j+1)$th and higher powers must be linear combinations of the identity and the first $2j$ powers, so you can again fold the exponential series into $2j+1$ terms,
$$
\exp(-i\theta \hat{\mathbf n}\cdot\hat{\mathbf J})=\sum_{k=0}^{2j}f_k(\theta) (\hat{\mathbf n}\cdot\hat{\mathbf J})^k,
$$
where the $f_k$ are given by appropriate series. Depending on what comes out, this may or may not be useful, but again it is a lot of work for each specific $j$ you're interested in.
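As a quick sanity check of this folding claim, one can verify for $j=1$ that the third power of $\hat{\mathbf n}\cdot\hat{\mathbf J}$ already collapses onto a lower power. The matrices below follow the standard ladder-operator construction in the $m=1,0,-1$ basis; the unit vector is my own arbitrary choice:

```python
import numpy as np

hbar = 1.0
# spin-1 matrices in the m = 1, 0, -1 basis (standard ladder construction)
sq2 = np.sqrt(2)
Jx = hbar / sq2 * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / sq2 * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

# arbitrary unit vector n
n = np.array([0.6, 0.0, 0.8])
A = n[0] * Jx + n[1] * Jy + n[2] * Jz

# the eigenvalues of A are hbar, 0, -hbar, so A^3 = hbar^2 * A:
# the (2j+1)th power is a combination of lower powers, as claimed
print(np.allclose(np.linalg.matrix_power(A, 3), hbar**2 * A))  # True
```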
To be a bit more explicit, let me show how this works for the simplest nontrivial orbital angular momentum, $j=1$. If you work in the $z$ direction you get, for the different powers of $\hat J_z$,
$$
\mathbb 1=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}
,\quad
\hat J_z=\hbar\begin{pmatrix}1&0&0\\0&0&0\\0&0&-1\end{pmatrix}
,\quad
\hat J_z^2=\hbar^2\begin{pmatrix}1&0&0\\0&0&0\\0&0&1\end{pmatrix}
$$
and it's only then that the powers start to loop: $\hat J_z^3=\hbar^2\hat J_z$, and hence $\hat J_z^{k+2}=\hbar^2\hat J_z^{k}$ for all $k\geq1$. This means that the exponential folds again, but not as neatly:
$$
\exp(-i\theta \hat J_z/\hbar)
=\sum_{n=0}^\infty\frac{(-i\theta \hat J_z/\hbar)^n}{n!}
=
\mathbb 1
-\frac{i\sin(\theta)}{\hbar} \hat J_z
+\frac{\cos(\theta)-1}{\hbar^2}\hat J_z^2.
$$
This is similar to, but not quite the same as, the original $(\ast)$. It does yield the desired invariance after $2\pi$ rotations, but it required a fair amount of work for a single $j$.
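The folded formula is easy to confirm numerically: since $\hat J_z$ is diagonal, its exact exponential is just the elementwise exponential of the diagonal. The angle below is an arbitrary choice:

```python
import numpy as np

hbar = 1.0
Jz = hbar * np.diag([1.0, 0.0, -1.0])  # spin-1 J_z from the text
theta = 0.7  # arbitrary rotation angle

# exact exponential: Jz is diagonal, so exponentiate the diagonal entries
lhs = np.diag(np.exp(-1j * theta * np.diag(Jz) / hbar))

# folded three-term expression from above
rhs = (np.eye(3)
       - 1j * np.sin(theta) / hbar * Jz
       + (np.cos(theta) - 1) / hbar**2 * (Jz @ Jz))

print(np.allclose(lhs, rhs))  # True
```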
This does leave you with the need to prove that for general integer $j$ all states return to themselves after a $2\pi$ rotation. The proof is somewhat different, though, and it relies on the fact that the eigenvalues of $\hat{\mathbf n}\cdot\hat{\mathbf J}$ are all integer multiples of $\hbar$. In particular, for any given unit vector $\hat{\mathbf n}$ there will be a basis $\{|m⟩=|j,m,\hat{\mathbf n}⟩\}_m$ of eigenstates of $\hat{\mathbf n}\cdot\hat{\mathbf J}$ with integer eigenvalues:
$$
\hat{\mathbf n}\cdot\hat{\mathbf J}|m⟩=\hbar m|m⟩,\quad m=-j,\ldots,j.
$$
This lets you calculate the action of $\hat U(\vec \alpha)$ on each eigenstate:
$$
\hat U(\vec \alpha)|m⟩
=\exp(-i\vec \alpha\cdot\hat{\mathbf J}/\hbar)|m⟩
=\exp(-i\alpha\hat{\mathbf n}\cdot\hat{\mathbf J}/\hbar)|m⟩
=\exp(-im\alpha)|m⟩.
$$
For $\alpha=2\pi$ and integer $m$, this is exactly $|m⟩$, with no added phase. Since any arbitrary state $|\psi⟩$ can be expressed as a linear combination of the $|m⟩$ and those are unchanged by $\hat U(\vec \alpha)$, it follows that $|\psi⟩$ itself is also unchanged by $\hat U(\vec \alpha)$.
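In the eigenbasis of $\hat{\mathbf n}\cdot\hat{\mathbf J}$ this is a one-line check; for $j=1$, say:

```python
import numpy as np

# j = 1: the eigenvalues m = 1, 0, -1 are integers, so a 2*pi
# rotation acts as exp(-i*2*pi*m) = 1 on each eigenstate
m = np.array([1.0, 0.0, -1.0])
U = np.diag(np.exp(-1j * 2 * np.pi * m))
print(np.allclose(U, np.eye(3)))  # True
```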
Finally, you should note that this proof also shows that half-integer $j$ produces a phase of $-1$ upon $2\pi$ rotations, since then every $m$ is a half-integer, and $e^{-i2\pi m}=-1$ regardless of $m$. Your original proof, though, also fails for $j\geq 3/2$, since $(\hat{\mathbf n}\cdot\hat{\mathbf J})^2$ is again not proportional to the identity.
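The half-integer case can be checked the same way; for $j=3/2$:

```python
import numpy as np

# j = 3/2: every m is a half-integer, so exp(-i*2*pi*m) = -1 for
# each eigenstate and the 2*pi rotation equals minus the identity
m = np.array([1.5, 0.5, -0.5, -1.5])
U = np.diag(np.exp(-1j * 2 * np.pi * m))
print(np.allclose(U, -np.eye(4)))  # True
```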
Best Answer
The same, except that the $\sigma_k$ are now not Pauli matrices but the generators of an $\mathfrak{su}(2)$ representation of the desired spin. For example, the $3\times 3$ matrices $$ \sigma_\ell:=(2\epsilon_{jk\ell})_{j,k=1:3}$$ define the spin-1 representation on 3-vectors. [Maybe the factor 2 should take a different value.] The corresponding explicit formula comes from the Rodrigues formula $$e^{X(a)}=1+\frac{\sin|a|}{|a|}X(a)+\frac{1-\cos|a|}{|a|^2}X(a)^2,$$ where $X(a)$ is the matrix mapping a vector $b$ to $X(a)b=a \times b$.
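A quick numerical check of the Rodrigues formula (the rotation vector $a$ is an arbitrary choice of mine; note the $|a|^2$ in the last denominator): the result should be a proper rotation matrix.

```python
import numpy as np

def cross_matrix(a):
    """Matrix X(a) such that X(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a = np.array([0.3, -0.4, 1.2])  # arbitrary rotation vector
t = np.linalg.norm(a)
X = cross_matrix(a)

# Rodrigues formula for exp(X(a))
R = np.eye(3) + np.sin(t) / t * X + (1 - np.cos(t)) / t**2 * (X @ X)

# a rotation matrix is orthogonal with determinant 1
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```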
For higher spin, the corresponding formula will depend on how you write the representation. Numerically, one would just diagonalize the matrix in the exponent; then computing the exponential is trivial. I don't know whether, for general spin, there is any advantage in having an explicit formula.