You are correct about the spectral radius:
$$\rho (A) = max |\lambda|,$$
where $\lambda$ is an eigenvalue of $A$.
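To make this concrete (a quick NumPy sketch of my own, not part of the original post), the spectral radius of the matrix $A$ from this question can be computed directly from its eigenvalues:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])

# Eigenvalues of A; for this matrix they are 1 + sqrt(6) and 1 - sqrt(6).
eigvals = np.linalg.eigvals(A)
rho = max(abs(eigvals))   # spectral radius = max |lambda|
print(rho)                # ≈ 3.449
```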
To write the Jacobi iteration, we solve each equation in the system as:
$E1: x_1 = -2x_2 + 1$
$E2: x_2 = -3x_1 + 0$
This is typically written as, $Ax = (D - L - U)x = b$,
where $D$ is the diagonal part of $A$, $-L$ is the strictly lower triangular part, and $-U$ is the strictly upper triangular part. Solving this system for $x$ results in:
$x = D^{-1}(L + U)x + D^{-1}b$ and the matrix form of the Jacobi iterative technique is:
$x_{k} = D^{-1}(L + U)x_{k-1} + D^{-1}b, k = 1, 2, \ldots$
Writing these out gives:
$$A = \begin{pmatrix} 1&2 \\ 3&1\end{pmatrix} = D - L - U = \begin{pmatrix} 1&0 \\ 0&1\end{pmatrix} - \begin{pmatrix} 0&0 \\ -3&0\end{pmatrix} - \begin{pmatrix} 0&-2 \\ 0&0\end{pmatrix}.$$
This results in an iteration formula of (compare this to $E1$ and $E2$ above):
$$x_{k} = D^{-1}(L + U)x_{k-1} + D^{-1}b = \begin{pmatrix} 0&-2 \\ -3&0\end{pmatrix}x_{k-1} + \begin{pmatrix} 1 \\ 0\end{pmatrix}$$
This can also be written component-wise: $x_1^{(k)} = -2x_2^{(k-1)} + 1$ and $x_2^{(k)} = -3x_1^{(k-1)}$, which is just $E1$ and $E2$ with iteration indices attached.
We know the exact solution here is $\displaystyle x = \left(-\frac{1}{5}, \frac{3}{5}\right)$, but no choice of initial $x_{0}$ gives convergence: the iteration matrix $D^{-1}(L + U)$ has eigenvalues $\pm\sqrt{6}$, so its spectral radius is $\sqrt{6} > 1$. Note also that $A$ is not diagonally dominant, so that sufficient condition for convergence fails as well (it is easy to manually crank tables for different starting $x_0$'s and watch the iterates blow up).
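The divergence is easy to verify numerically. Here is a minimal sketch (my own, not from the original answer) of the Jacobi iteration for this system:

```python
import numpy as np

T = np.array([[0.0, -2.0],
              [-3.0, 0.0]])   # iteration matrix D^{-1}(L + U); here D = I
c = np.array([1.0, 0.0])      # D^{-1} b

x = np.array([0.0, 0.0])      # any starting guess behaves the same way
for k in range(10):
    x = T @ x + c             # x_k = D^{-1}(L + U) x_{k-1} + D^{-1} b

print(x)                                   # iterates grow without bound
print(max(abs(np.linalg.eigvals(T))))      # spectral radius sqrt(6) > 1
```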
Note: See the nice comment below from Elmar Zander, which is an oversight on my part! Thanks Elmar!
Regards
Dima Railguner
If I remember correctly, you can determine the sign of the dominant eigenvalue by comparing the iterates $x_n$ and $x_{n+1}$: the signs alternate when the dominant eigenvalue is negative and stay the same when it is positive. That is at least what some books say.
Also, my power method gets $-3.09\ldots$, so at least for this problem it does not have a problem with the sign.
Looks like we are not using the same method. I can give you a link to mine if you want it.
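That sign heuristic can be sketched in a few lines (an assumed implementation with a test matrix of my own choosing, not the poster's code): run the power method and check whether the signs of consecutive iterates flip.

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])   # eigenvalues -3 and -1; the dominant one is negative

v = np.array([1.0, 0.0])
for _ in range(50):
    w = A @ v
    estimate = v @ w          # Rayleigh quotient: carries the eigenvalue's sign
    flipped = np.sign(w[0]) != np.sign(v[0])   # consecutive iterates alternate?
    v = w / np.linalg.norm(w)

print(estimate, flipped)      # ≈ -3.0, True (signs alternate: eigenvalue < 0)
```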
Best Answer
In the power iteration we compute $ \vec{v}_{n+1} = A \vec{v}_n$ and then normalize each vector: $\vec{v}_{n+1} \leftarrow \vec{v}_{n+1} / \| \vec{v}_{n+1} \|$. Note that
\begin{equation} A^2 = \lambda^2 \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \lambda^2 I \end{equation}
Thus, $\vec{v}_{2n} = A^{2n} \vec{v}_0 = \lambda^{2n} \vec{v}_0$ with $\vec{v}_0 = [1,1]^{\intercal}$. Normalizing this vector gives
\begin{equation} \vec{v}_{2n} \leftarrow \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix} \end{equation}
Meanwhile, $\vec{v}_{2n+1} = A^{2n+1} \vec{v}_0 = A A^{2n} \vec{v}_0 = \lambda^{2n} A \vec{v}_0 = \lambda^{2n}[\lambda+1,-\lambda]^{\intercal}$. Normalizing gives
\begin{equation} \vec{v}_{2n+1} \leftarrow \frac{1}{\sqrt{(\lambda+1)^2+\lambda^2}} \begin{bmatrix}(\lambda+1)\\-\lambda \end{bmatrix} \end{equation}
Iterations oscillate between these two vectors, thus the iterations do not converge.
The convergence rate of the power method is determined by the ratio $\rho = |\lambda_1| / |\lambda_2| \geq 1$, where $\lambda_1$ and $\lambda_2$ are the largest and second-largest eigenvalues in magnitude. The larger this ratio is, the faster the iteration converges.
In the case of your matrix the eigenvalues are $\lambda_1 = \lambda$ and $\lambda_2 = -\lambda$, so $|\lambda_1| = |\lambda_2|$ and the ratio is $\rho = 1$. Hence the iteration does not converge.
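The oscillation is easy to reproduce numerically. Here is a sketch (the specific matrix, chosen so that $A^2 = \lambda^2 I$ with $\lambda = 2$ and $A\vec{v}_0 = [\lambda+1, -\lambda]^{\intercal}$, is my own pick for illustration, not from the question):

```python
import numpy as np

lam = 2.0
A = np.array([[2.0, 1.0],
              [0.0, -2.0]])        # satisfies A @ A == lam**2 * I

v = np.array([1.0, 1.0])           # v_0 = [1, 1]^T
v = v / np.linalg.norm(v)
history = []
for _ in range(6):
    v = A @ v
    v = v / np.linalg.norm(v)      # normalized power iterate
    history.append(v.copy())

even = np.array([1.0, 1.0]) / np.sqrt(2.0)                       # v_{2n}
odd = np.array([lam + 1.0, -lam]) / np.hypot(lam + 1.0, lam)     # v_{2n+1}
print(history[-2], history[-1])    # iterates bounce between `odd` and `even`
```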