This is the same map. Perhaps the square diagram below shows in the "simplest" way why.
First I have to say something about the convention used for vectors, because this is the "canonical impediment" when dealing with a change of basis.
We work with column vectors, and matrices act on them by left multiplication. The linear map given by left multiplication with a matrix $A$ will also (abusively) be denoted by $A$ below. So $x$ goes via $A$ to $A\cdot x=Ax$, displayed as
$$
x\overset{A}\longrightarrow Ax\ .
$$
"Most of the world" uses column vectors. (Some authors write notes or books, e.g. in Word, and find it handier to use row vectors, since those fit more easily into lines of text; in that case the linear map induced by a matrix acts by multiplication from the right. As long as computations involve only linear combinations the convention hardly matters, but it becomes important as soon as linear maps induced by matrices enter.)
We will work in the "category" of finite-dimensional vector spaces over $\Bbb R$ with a fixed basis. The space $V:=\Bbb R^3$ comes with the canonical basis $\mathcal E=(e_1,e_2,e_3)$, where $e_1,e_2,e_3$ are the columns of the matrix $E$ below,
$$
E=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}\ .
$$
We write this object as $(V,\mathcal E)$. By abuse, we may also write $(V,E)$ instead.
We start with two objects in this category. For our purposes let them have the same underlying vector space $V=W=\Bbb R^3$; the first object is $(V,\mathcal B=(b_1,b_2,b_3))$, and the second object is $(W,\mathcal C=(c_1,c_2,c_3))$.
A linear map $g:V\to W$ is defined "abstractly" and needs no choice of bases. In practice, however, $g$ is usually given in a basis-specific way, as follows. Let $v$ be a vector in $V$. We write it w.r.t. $\mathcal B$ as $v=x_1b_1+x_2b_2+x_3b_3$, and record this data as a column vector:
$$
v = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}_{\mathcal B}
:=x_1b_1+x_2b_2+x_3b_3
\ .
$$
Then we take a matrix $M=M_{\mathcal B, \mathcal C}$ and form the matrix-vector product:
$$
\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}
=
M
\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\ .
$$
Then we consider the vector $w\in W$ whose components in the basis $\mathcal C$ are the $y$-components, so
$$
w =
\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}_{\mathcal C}
:=y_1c_1+y_2c_2+y_3c_3
\ ,
$$
and the map $g$ sends $v$ linearly to $w$.
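The coordinate convention above can be sketched in plain Python. The helper `from_coords` and the numerical data below are made-up illustrations, not part of the problem:

```python
def from_coords(basis_cols, coords):
    # [x1, ..., xn]_B := x1*b1 + ... + xn*bn, the b_j being the columns of basis_cols.
    return [sum(basis_cols[i][j] * coords[j] for j in range(len(coords)))
            for i in range(len(basis_cols))]

# Made-up illustration data: a basis matrix B and B-coordinates x of a vector v.
B = [[1, 0, 1],
     [1, 1, 1],
     [0, -1, 1]]   # columns b1, b2, b3 of a basis
x = [1, 2, 3]      # B-coordinates of v

v = from_coords(B, x)      # v = 1*b1 + 2*b2 + 3*b3 in ambient coordinates
assert v == [4, 6, 1]
```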
This concludes the section on conventions and notation.
Let $\mathcal C$ be the basis from the OP, the basis whose vectors are the columns of
$$
C=
\begin{bmatrix}
1 & 0 & 1\\
1 & 1 & 1\\
0 & -1 & 1
\end{bmatrix}\ .
$$
Let $A$ be the matrix of the given linear map $f$ w.r.t. the canonical basis $\mathcal E$:
$$
A=\begin{bmatrix}1&-1&2\\-2&1&0\\1&0&1\end{bmatrix}\ .
$$
Consider now the diagram:
$\require{AMScd}$
\begin{CD}
(V,E) @>A>f> (V,E) \\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
(V,C) @>f>{C^{-1}AC}> (V,C) \\
\end{CD}
Indeed, $C$ is the matrix of the identity seen as a map $(V,\mathcal C)\to(V,\mathcal E)$. For instance,
$$
c_1=\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal C}
\qquad\text{ goes to }\qquad
c_1
=\begin{bmatrix}1\\1\\0\end{bmatrix}_{\mathcal E}
=C\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal E}\ .
$$
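This observation can be checked numerically in a couple of lines of plain Python (a side check, not part of the original Sage session):

```python
C = [[1, 0, 1],
     [1, 1, 1],
     [0, -1, 1]]
e1 = [1, 0, 0]

# C sends C-coordinates to E-coordinates: the C-coordinate vector of c1 is e1,
# and C * e1 picks out the first column of C, which is c1 in E-coordinates.
c1 = [sum(C[i][j] * e1[j] for j in range(3)) for i in range(3)]
assert c1 == [1, 1, 0]
```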
It remains to compute explicitly the matrix $C^{-1}AC$.
Computer, of course:
sage: A = matrix(3, 3, [1, -1, 2, -2, 1, 0, 1, 0, 1])
sage: C = matrix(3, 3, [1, 0, 1, 1, 1, 1, 0, -1, 1])
sage: A
[ 1 -1 2]
[-2 1 0]
[ 1 0 1]
sage: C
[ 1 0 1]
[ 1 1 1]
[ 0 -1 1]
sage: C.inverse() * A * C
[ 0 -6 3]
[-1 4 -3]
[ 0 3 -1]
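For readers without Sage, the same computation can be reproduced in plain Python with exact rational arithmetic; the helpers `mat_mul` and `mat_inv` below are ad-hoc sketches, not library functions:

```python
from fractions import Fraction

def mat_mul(P, Q):
    # Naive matrix product, enough for 3x3 checks.
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def mat_inv(M):
    # Gauss-Jordan elimination over the rationals.
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)]
           + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

A = [[1, -1, 2], [-2, 1, 0], [1, 0, 1]]
C = [[1, 0, 1], [1, 1, 1], [0, -1, 1]]
M = mat_mul(mat_inv(C), mat_mul(A, C))
assert M == [[0, -6, 3], [-1, 4, -3], [0, 3, -1]]
```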
And we check the result both ways:
$\bf(1)$
$\require{AMScd}$
\begin{CD}
c_j=\begin{bmatrix}1\\1\\0\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}0\\1\\-1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}1\\1\\1\end{bmatrix}_{\mathcal E}
@>A>f>
fc_j=\begin{bmatrix}0\\-1\\1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}-3\\1\\-1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}2\\-1\\2\end{bmatrix}_{\mathcal E}
\\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
c_j=\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}0\\1\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}0\\0\\1\end{bmatrix}_{\mathcal C}
@>f>{C^{-1}AC}>
fc_j=\begin{bmatrix}0\\-1\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}-6\\4\\3\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}3\\-3\\-1\end{bmatrix}_{\mathcal C}
\end{CD}
Or simpler, using block matrices, and ignoring the knowledge of the bases:
$\require{AMScd}$
\begin{CD}
C
@>A>f>
AC
\\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
E
@>f>{C^{-1}AC}>
C^{-1}AC
\end{CD}
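Commutativity of this block diagram amounts to the matrix identity $AC=C\,(C^{-1}AC)$, which can be checked directly (a plain-Python sketch; `mat_mul` is an ad-hoc helper):

```python
def mat_mul(P, Q):
    # Naive matrix product, enough for 3x3 checks.
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

A = [[1, -1, 2], [-2, 1, 0], [1, 0, 1]]
C = [[1, 0, 1], [1, 1, 1], [0, -1, 1]]
M = [[0, -6, 3], [-1, 4, -3], [0, 3, -1]]   # C^{-1} A C from the Sage session

left = mat_mul(A, C)    # going right, then up: A applied to the columns of C
right = mat_mul(C, M)   # going up, then right: C applied to the columns of C^{-1}AC
assert left == right
```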
$\bf(2)$ In the spirit of the OP, using copy+pasted+corrected row vector computations:
$$
\begin{aligned}
(0,-1,1)
&=\boxed{0}\cdot (1,1,0)+\boxed{-1}\cdot (0,1,-1)+\boxed{0}\cdot (1,1,1)
\ ,
\\
\\
(-3,1,-1)
&=\boxed{-6}\cdot (1,1,0)+\boxed{4}\cdot (0,1,-1)+\boxed{3}\cdot (1,1,1)
\ ,
\\
\\
(2,-1,2)
&=\boxed{3}\cdot (1,1,0)+\boxed{-3}\cdot (0,1,-1)+\boxed{-1}\cdot (1,1,1)
\ .
\end{aligned}
$$
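The boxed coefficients, i.e. the columns of $C^{-1}AC$, can be verified mechanically; here is a short plain-Python check of the three linear combinations above:

```python
# Columns c_j of C (as tuples), their images under f, and the boxed coefficients:
c = [(1, 1, 0), (0, 1, -1), (1, 1, 1)]
images = [(0, -1, 1), (-3, 1, -1), (2, -1, 2)]   # f(c_1), f(c_2), f(c_3)
coeffs = [(0, -1, 0), (-6, 4, 3), (3, -3, -1)]   # columns of C^{-1}AC

for img, k in zip(images, coeffs):
    # Recombine: k1*c1 + k2*c2 + k3*c3 should reproduce the image.
    combo = tuple(sum(k[j] * c[j][i] for j in range(3)) for i in range(3))
    assert combo == img
```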
Best Answer
We can represent a linear map $F:\mathbb{R}^{2} \to \mathbb{R}^{2}$ using a matrix $\mathbf{A}\in\mathbb{R}^{2\times2}$; generically, we can write:
$$\mathbf{A}=\begin{pmatrix}a & b \\ c & d\end{pmatrix}$$
Applying this to the vector $\begin{pmatrix}1 & 3\end{pmatrix}^{T}$, we have:
$$\begin{pmatrix}a & b \\ c & d\end{pmatrix}\begin{pmatrix}1 \\ 3\end{pmatrix}=\begin{pmatrix}a+3b \\ c + 3d\end{pmatrix} = \begin{pmatrix}3 \\ 1\end{pmatrix}$$
Similarly, applying it to the vector $\begin{pmatrix}-1 & 3\end{pmatrix}^{T}$, we have:
$$\begin{pmatrix}a & b \\ c & d\end{pmatrix}\begin{pmatrix}-1 \\ 3\end{pmatrix}=\begin{pmatrix}-a+3b \\ -c+3d\end{pmatrix} = \begin{pmatrix}3 \\ 2\end{pmatrix}$$
We therefore have a system of 4 simultaneous linear equations in the 4 unknowns $a, b, c$ and $d$:
$$\begin{align*}a+3b &= 3 \\ c+3d &= 1 \\ 3b-a &= 3 \\ 3d - c &= 2\end{align*}$$
Solving these we get:
$$a=0,\quad b=1,\quad c = -\frac{1}{2},\quad d=\frac{1}{2}$$
We can thus write our transformation matrix describing the linear map as:
$$\mathbf{A} = \frac{1}{2}\begin{pmatrix}0 & 2 \\ -1 & 1\end{pmatrix}$$
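The result can be confirmed by applying $\mathbf{A}$ back to the two given vectors, here in plain Python with exact rationals:

```python
from fractions import Fraction

# A = (1/2) * [[0, 2], [-1, 1]] as exact rationals
A = [[Fraction(0), Fraction(1)],
     [Fraction(-1, 2), Fraction(1, 2)]]

def mat_vec(M, x):
    # 2x2 matrix times column vector, y = M x.
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

assert mat_vec(A, [1, 3]) == [3, 1]    # (1, 3)^T  maps to (3, 1)^T
assert mat_vec(A, [-1, 3]) == [3, 2]   # (-1, 3)^T maps to (3, 2)^T
```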