I think your difficulty is purely "conceptual". A vector space is ANY set that obeys the axioms of a vector space (such a definition assumes an associated "field of scalars"). In particular, over a given field $F$, the set:
$\text{Hom}_F(U,V)$ = all linear transformations $U \to V$ is a vector space.
So "matrices" are vectors, too! Look, we can add them: if $A = (a_{ij}), B = (b_{ij})$ then $A + B = (c_{ij})$, where for each $i,j: c_{ij} = a_{ij} + b_{ij}$, and we can "multiply by a scalar":
$rA = (ra_{ij})$ (sometimes this is written as $(rI)A$).
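A quick numerical sanity check (a sketch using NumPy, with two arbitrarily chosen matrices) that these entrywise operations really behave like vector operations:

```python
import numpy as np

# Two arbitrary matrices in Mat_2(R), viewed as "vectors"
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, -1.0], [2.0, 0.0]])
r = 2.5

# Entrywise addition and scalar multiplication, exactly as defined above
print(A + B)   # entrywise sum
print(r * A)   # entrywise scalar multiple

# Two of the vector-space axioms, checked numerically
assert np.array_equal(A + B, B + A)                # commutativity of addition
assert np.array_equal(r * (A + B), r * A + r * B)  # distributivity
```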
One way to "ease the conceptual transition" is called the "vectorization" of matrices: we just string the columns "head-to-toe" into one long column, so that:
$A = \begin{bmatrix}1&2\\0&3\end{bmatrix}$ becomes:
$A = \begin{bmatrix}1\\0\\2\\3\end{bmatrix}$, transforming an element of $\text{Mat}_2(F)$ into an element of $F^4$ (you can take $F = \Bbb R$, for concreteness, if you primarily deal with real vector spaces).
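In NumPy, stringing the columns head-to-toe is just column-major ("Fortran-order") flattening; a sketch using the matrix above:

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])

# "vec" stacks the columns head-to-toe: column 1 first, then column 2
vec_A = A.flatten(order='F')   # 'F' = Fortran (column-major) order
print(vec_A)                   # [1 0 2 3]
```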
Seen this way, it becomes clear that the second coordinate of your basis vectors is "unnecessary baggage", as it is always $0$. This is no different than identifying the subspace:
$U = \{(x,0,y,z) \in \Bbb R^4\}$ with $\Bbb R^3 = \{(x,y,z):x,y,z \in \Bbb R\}$
It should be clear that $\phi:U \to \Bbb R^3$ given by $\phi(x,0,y,z) = (x,y,z)$ is a bijective linear transformation.
So, in your situation, you want to find $a,b,c$ such that:
$A = \begin{bmatrix}1\\0\\2\\3\end{bmatrix} = a\begin{bmatrix}1\\0\\0\\0\end{bmatrix} + b\begin{bmatrix}0\\0\\1\\0\end{bmatrix} + c\begin{bmatrix}0\\0\\1\\1\end{bmatrix}$
which is just a system of 3 linear equations in 3 unknowns:
$a = 1\\b+c = 2\\c = 3$
that you should be proficient in solving by now.
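If you want to check your hand computation, here is a sketch in NumPy. It uses the identification $\phi$ from above to drop the always-zero second coordinate, leaving a $3\times 3$ system:

```python
import numpy as np

# Columns: the vectorized basis matrices, with the always-zero
# second coordinate dropped (the map phi from above)
E = np.array([[1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
target = np.array([1, 2, 3], dtype=float)  # phi applied to vec(A) = (1,0,2,3)

coeffs = np.linalg.solve(E, target)
print(coeffs)  # a = 1, b = -1, c = 3
```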
You're wrong: the coordinates in basis $\mathcal A$ of the vectors of basis $\mathcal B$ are the column vectors of $P_{\mathcal A\to\mathcal B}$, not of its inverse.
If $P=\begin{pmatrix}2&1&0\\0&1&2\\3&2&4 \end{pmatrix}$ and $\mathcal B=\{w_1,w_2,w_3\}$, we have, for instance:
$$w_1=2v_1+3v_3=2\begin{pmatrix}2&4\\0&2\end{pmatrix}+3\begin{pmatrix}0&1\\3&0\end{pmatrix}=\begin{pmatrix}4&11\\9&4\end{pmatrix}$$
and so on.
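The same computation as a NumPy sketch. Note that $v_2$ is not written out in the answer; since the first column of $P$ is $(2,0,3)^T$, its coefficient is $0$ and $w_1$ only needs $v_1$ and $v_3$:

```python
import numpy as np

# The basis matrices v_1 and v_3 given above (v_2 is not needed for w_1,
# because its coefficient in the first column of P is 0)
v1 = np.array([[2, 4], [0, 2]])
v3 = np.array([[0, 1], [3, 0]])

# First column of P is (2, 0, 3): w_1 = 2*v_1 + 0*v_2 + 3*v_3
w1 = 2 * v1 + 3 * v3
print(w1)  # [[ 4 11]
           #  [ 9  4]]
```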
Your procedure seems correct, but it depends on how you write down these matrices.
Let $(e_i)_{i=1,2,3}$ and $(e'_j)_{j=1,2}$ denote the canonical bases in the source and the image. Let $B=(u_\ell)_{\ell=1,2,3}$ and $B'=(u'_k)_{k=1,2}$ denote the stated basis vectors, and finally let $A=(A_{k\ell})$ be the matrix calculated in the bases $B,B'$. (As a parenthetical remark: one of my predilections concerning change-of-basis calculations is to write an abstract vector $x$ in the form $x=\sum_k u_k x_k$ in the basis $B$, i.e. with the scalars to the right. This helps keep correct track of indices and matrix multiplications. Parenthesis closed.)
The abstract linear transformation acts as follows: $$ f(u_\ell) = \sum_{k=1}^2 u'_k A_{k\ell} $$ Writing $u_\ell= \sum_{i=1}^3 e_i u_{i\ell}$ and $u'_k= \sum_{j=1}^2 e'_j u'_{jk}$ we get the identity: $$ \sum_i f(e_i) u_{i\ell} = \sum_j e'_j \sum_k u'_{jk} A_{k\ell} $$ Let $M$ be the matrix of $f$ between the canonical bases. Then $$ f(e_i) = \sum_j e'_j M_{ji}$$ Comparing, we get $\sum_i M_{ji} u_{i\ell} = \sum_k u'_{jk} A_{k\ell}$, or with your data:
$$ M \left[\begin{matrix} 1 & 0 & 1\\ 1 & 1 & 1 \\ 1 & 0 & 0 \end{matrix} \right]= \left[\begin{matrix} 1 & 0\\ 1 & 1 \end{matrix} \right] \left[\begin{matrix} 2 & 1 & 3\\ 3 & 1 & -3 \end{matrix} \right] $$ which you then have to solve for $M$; the answer to the exercise is then $M^T$, the usual transpose of $M$. Using Scilab I got: $$ M= \left[\begin{matrix} 2 & 1 & -1\\ -2 & 2 & 5 \end{matrix} \right]$$ so the answer should be its transpose. You may now tell me whether this is compatible with the answer to the exercise?
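The same computation can be reproduced in NumPy (a sketch, transcribing the matrices from the equation above): since the basis matrix on the left is invertible, $M = (U'A)\,U^{-1}$.

```python
import numpy as np

# Matrices transcribed from the equation above
U  = np.array([[1, 0, 1],
               [1, 1, 1],
               [1, 0, 0]], dtype=float)   # columns: coordinates of u_1, u_2, u_3
Up = np.array([[1, 0],
               [1, 1]], dtype=float)      # columns: coordinates of u'_1, u'_2
A  = np.array([[2, 1, 3],
               [3, 1, -3]], dtype=float)  # matrix of f in the bases B, B'

# Solve M U = U' A for M, i.e. M = (U' A) U^{-1}
M = Up @ A @ np.linalg.inv(U)
print(M)   # [[ 2.  1. -1.]
           #  [-2.  2.  5.]]
```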