- find its eigenvalues using the equation $|\lambda I -A| = 0$; you should find $a$ and $-a$.
Hint: $\lambda^2-a^2 = (\lambda-a)(\lambda+a)$
- find their corresponding eigenvectors $e_1$ and $e_2$, using the defining relation between an eigenvalue and its eigenvector,
$\lambda_1 e_1 = Ae_1$
$$a\begin{pmatrix}
x \\
y
\end{pmatrix}=\begin{pmatrix}
0 & 1 \\
a^2 & 0
\end{pmatrix}\begin{pmatrix}
x \\
y
\end{pmatrix}$$
$$\begin{cases}
ax=y \\
ay=a^2x
\end{cases}$$
so we notice that the solutions are of the form $\begin{pmatrix} x \\ ax \end{pmatrix}$; let's pick the vector $e_1=\begin{pmatrix} 1 \\ a \end{pmatrix}$
(Do the same to find $e_2$)
according to my calculations
$$e_1=\begin{pmatrix} 1 \\ a \end{pmatrix}, \qquad e_2=\begin{pmatrix} 1 \\ -a \end{pmatrix}$$
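The two eigenpairs are easy to sanity-check symbolically. Here is a minimal sketch using sympy (the variable names are my own, not from the original post):

```python
# Check that e1 and e2 are eigenvectors of A = [[0, 1], [a^2, 0]]
# with eigenvalues a and -a, respectively.
import sympy as sp

a = sp.symbols('a', nonzero=True)
A = sp.Matrix([[0, 1], [a**2, 0]])
e1 = sp.Matrix([1, a])
e2 = sp.Matrix([1, -a])

# A e1 - a e1 and A e2 + a e2 should both be the zero vector.
print(sp.simplify(A * e1 - a * e1))  # Matrix([[0], [0]])
print(sp.simplify(A * e2 + a * e2))  # Matrix([[0], [0]])
```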
- construct the matrix $M$ whose columns are those eigenvectors
$$M=\begin{pmatrix}
1 & 1 \\
a & -a
\end{pmatrix}$$
And you are almost done: compute $M^{-1}$ (note that $M$ is invertible only when $a \neq 0$),
$$M^{-1}=\begin{pmatrix}
\frac{1}{2} & \frac{1}{2a} \\
\frac{1}{2} & \frac{-1}{2a}
\end{pmatrix}$$
Now:
$$M^{-1}AM=\begin{pmatrix}
\frac{1}{2} & \frac{1}{2a} \\
\frac{1}{2} & \frac{-1}{2a}
\end{pmatrix}\begin{pmatrix}
0 & 1 \\
a^2 & 0
\end{pmatrix}\begin{pmatrix}
1 & 1 \\
a & -a
\end{pmatrix}$$
$$M^{-1}AM=\begin{pmatrix}
a & 0 \\
0 & -a
\end{pmatrix}$$
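If you want to double-check the whole computation, here is a sketch of the same diagonalization in sympy (assuming $a \neq 0$; the code is my own illustration):

```python
# Verify M^{-1} A M = diag(a, -a) symbolically.
import sympy as sp

a = sp.symbols('a', nonzero=True)
A = sp.Matrix([[0, 1], [a**2, 0]])
M = sp.Matrix([[1, 1], [a, -a]])

D = sp.simplify(M.inv() * A * M)
print(D)  # diagonal matrix with entries a and -a
```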
Before talking about multiplication of two matrices, let's see another way to interpret matrix $A$. Say we have a matrix $A$ as below,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix}
$$
we can easily see that the column $\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}$ is a linear combination of the first two columns.
$$
1\begin{bmatrix} 1 \\ 1 \\ 1\\\end{bmatrix} +
1\begin{bmatrix} 2 \\ 1 \\ 2\\\end{bmatrix} =
\begin{bmatrix} 3 \\ 2 \\ 3 \\\end{bmatrix}
$$
And you can say $\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}$ and $\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}$ form a basis for the column space of $A$.
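This column relation, and the resulting rank, can be confirmed numerically. A quick sketch with numpy (my own illustration):

```python
# The third column of A equals the sum of the first two,
# so A has rank 2.
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])

assert np.array_equal(A[:, 2], A[:, 0] + A[:, 1])
print(np.linalg.matrix_rank(A))  # 2
```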
Set aside for a moment the reason why you would want to decompose matrix $A$ like this in the first place,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix} +
\begin{bmatrix}
0 & 2 & 2 \\
0 & 1 & 1 \\
0 & 2 & 2 \\
\end{bmatrix}
$$
but you can, and in the end it will look reasonable.
If you view this equation column-wise, each column $j$ of $A$ is the sum of the corresponding column $j$ of each matrix on the RHS.
What's special about the matrices on the RHS is that each of them is a rank-1 matrix whose column space is the line spanned by one basis vector of the column space of $A$, e.g.
$
\begin{bmatrix}
1 & 0 & 1 \\
1 & 0 & 1 \\
1 & 0 & 1 \\
\end{bmatrix}
$
has a column space spanned by $\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}$ alone. And people say rank-1 matrices are the building blocks of all matrices.
If you now revisit the idea of viewing $A$ column by column, this decomposition actually emphasizes the concept of a linear combination of basis vectors.
If this makes sense, you can take the RHS a step further,
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \\ 1 \\\end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\\end{bmatrix} +
\begin{bmatrix} 2 \\ 1 \\ 2 \\\end{bmatrix}
\begin{bmatrix} 0 & 1 & 1 \\\end{bmatrix}
$$
Each term on the RHS says: take this basis vector and stretch it into a rank-1 $3 \times 3$ matrix.
And we can massage this a little bit, namely put the RHS into matrix form, and you get
$$
\begin{bmatrix}
1 & 2 & 3 \\
1 & 1 & 2 \\
1 & 2 & 3 \\
\end{bmatrix} =
\begin{bmatrix}
1 & 2 \\
1 & 1 \\
1 & 2 \\
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
\end{bmatrix}
$$
Now you can forget matrix $A$ and imagine that what you have are just the two matrices on the RHS. When you read this text backward (I mean logically), I hope matrix multiplication in this fashion makes sense to you now. Or, if you prefer, you can start with the two matrices in the question.
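The decomposition above can be checked numerically. A minimal sketch with numpy (the variable names are mine):

```python
# Rebuild A both as a sum of rank-1 outer products and as the
# two-matrix product shown above.
import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 2],
              [1, 2, 3]])

b1, b2 = np.array([1, 1, 1]), np.array([2, 1, 2])  # basis columns
r1, r2 = np.array([1, 0, 1]), np.array([0, 1, 1])  # row factors

# Sum of rank-1 pieces: b1 r1^T + b2 r2^T = A
assert np.array_equal(np.outer(b1, r1) + np.outer(b2, r2), A)

# Same thing packed into matrix form: C (3x2) times R (2x3)
C = np.column_stack([b1, b2])
R = np.vstack([r1, r2])
assert np.array_equal(C @ R, A)
print("decomposition verified")
```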
Best Answer
Try \begin{align} A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{bmatrix} \end{align}
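Two standard properties of this cyclic permutation matrix are easy to verify; since the original question isn't quoted here, this check is just my own illustration using sympy. The matrix cyclically shifts coordinates, so $A^3 = I$, and its characteristic polynomial is $x^3 - 1$:

```python
# Properties of the 3x3 cyclic permutation matrix.
import sympy as sp

A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]])

print(A**3 == sp.eye(3))          # True
x = sp.symbols('x')
print(A.charpoly(x).as_expr())    # x**3 - 1
```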