This is the same; perhaps the square diagram below shows in the "simplest" way why.
First I have to say something about the convention used for vectors, because this is the "canonical impediment" when dealing with change of basis.
We work with column vectors, and matrices act on them by left multiplication. The linear map given by left multiplication with a matrix $A$ will also be denoted below (abusively) by $A$. So $x$ goes via $A$ to $A\cdot x=Ax$, displayed as
$$
x\overset{A}\longrightarrow Ax\ .
$$
"Most of the world" uses column vectors. (Some authors write notes or books (e.g. in Word), and find it handy to use row vectors, so they can be simpler displayed in the book rows. In this case linear maps induced by matrices use the multiplication from the right with such matrices. As long as we need in computations only linear combinations the convention is not so important, but it becomes when we use linear maps induced by matrices.)
We will work in the "category" of (finite-dimensional) vector spaces (over $\Bbb R$) with a fixed basis. The space $V:=\Bbb R^3$ comes with the canonical basis $\mathcal E=(e_1,e_2,e_3)$, where $e_1,e_2,e_3$ are the columns of the matrix $E$ below,
$$
E=
\begin{bmatrix}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{bmatrix}\ .
$$
We write this object as $(V,\mathcal E)$. By abuse, we may also write $(V,E)$ instead.
We start with two objects in this category.
For our purposes let them have the same underlying vector space $V=W=\Bbb R^3$; the first object is $(V,\mathcal B=(b_1,b_2,b_3))$, and the second object is
$(W,\mathcal C=(c_1,c_2,c_3))$.
A linear map $g:V\to W$ is defined "abstractly", and has no need for chosen bases. But in practice, $g$ is usually given in a basis-specific way, as follows. Let $v$ be a vector in $V$. We write it w.r.t. $\mathcal B$ as $v=x_1b_1+x_2b_2+x_3b_3$, and record this data as a column vector:
$$
v = \begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}_{\mathcal B}
:=x_1b_1+x_2b_2+x_3b_3
\ .
$$
Then we take a matrix $M=M_{\mathcal B, \mathcal C}$ and form the matrix-vector product:
$$
\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}
=
M
\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\ .
$$
Then we consider the vector $w\in W$ which, written in the basis $\mathcal C$, has the $y$-components, so
$$
w =
\begin{bmatrix}y_1\\y_2\\y_3\end{bmatrix}_{\mathcal C}
:=y_1c_1+y_2c_2+y_3c_3
\ ,
$$
and the map $g$ sends $v$ linearly to $w$. We then say that $M$ is the matrix of $g$ w.r.t. the bases $\mathcal B$ and $\mathcal C$.
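In particular, taking $v=b_j$ (so the $x$-column is the $j$-th standard column), the $j$-th column of $M$ lists the $\mathcal C$-coordinates of $g(b_j)$:
$$
g(b_j)=\begin{bmatrix}M_{1j}\\M_{2j}\\M_{3j}\end{bmatrix}_{\mathcal C}\ .
$$
We use this reading of the columns repeatedly below.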
This concludes the section related to conventions and notations.
Let $\mathcal C$ be the basis from the OP, the basis whose vectors are the columns of
$$
C=
\begin{bmatrix}
1 & 0 & 1\\
1 & 1 & 1\\
0 & -1 & 1
\end{bmatrix}\ .
$$
Let $A$ be the matrix of the given linear map $f$ w.r.t. the canonical basis $\mathcal E$:
$$
A=\begin{bmatrix}1&-1&2\\-2&1&0\\1&0&1\end{bmatrix}\ .
$$
Consider now the diagram:
$\require{AMScd}$
\begin{CD}
(V,E) @>A>f> (V,E) \\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
(V,C) @>f>{C^{-1}AC}> (V,C) \\
\end{CD}
Indeed, $C$ is the matrix of the identity seen as a map $(V,\mathcal C)\to(V,\mathcal E)$. For instance,
$$
c_1=\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal C}
\qquad\text{ goes to }\qquad
c_1
=\begin{bmatrix}1\\1\\0\end{bmatrix}_{\mathcal E}
=C\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal E}\ .
$$
It remains to compute explicitly the matrix $C^{-1}AC$.
Computer, of course:
sage: A = matrix(3, 3, [1, -1, 2, -2, 1, 0, 1, 0, 1])
sage: C = matrix(3, 3, [1, 0, 1, 1, 1, 1, 0, -1, 1])
sage: A
[ 1 -1 2]
[-2 1 0]
[ 1 0 1]
sage: C
[ 1 0 1]
[ 1 1 1]
[ 0 -1 1]
sage: C.inverse() * A * C
[ 0 -6 3]
[-1 4 -3]
[ 0 3 -1]
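As a quick sanity check that the square above commutes, still in the same Sage session (the name `D` for $C^{-1}AC$ is introduced only here):
sage: D = C.inverse() * A * C
sage: A * C == C * D    # the two paths around the square agree
True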
And we check the result both ways:
$\bf(1)$
$\require{AMScd}$
\begin{CD}
c_j=\begin{bmatrix}1\\1\\0\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}0\\1\\-1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}1\\1\\1\end{bmatrix}_{\mathcal E}
@>A>f>
fc_j=\begin{bmatrix}0\\-1\\1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}-3\\1\\-1\end{bmatrix}_{\mathcal E}\ ,\
\begin{bmatrix}2\\-1\\2\end{bmatrix}_{\mathcal E}
\\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
c_j=\begin{bmatrix}1\\0\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}0\\1\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}0\\0\\1\end{bmatrix}_{\mathcal C}
@>f>{C^{-1}AC}>
fc_j=\begin{bmatrix}0\\-1\\0\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}-6\\4\\3\end{bmatrix}_{\mathcal C}\ ,\
\begin{bmatrix}3\\-3\\-1\end{bmatrix}_{\mathcal C}
\end{CD}
Or, more simply, using block matrices and suppressing the bases:
$\require{AMScd}$
\begin{CD}
C
@>A>f>
AC
\\
@A C A {\operatorname{id}}A
@A {\operatorname{id}}A C A\\
E
@>f>{C^{-1}AC}>
C^{-1}AC
\end{CD}
$\bf(2)$ In the spirit of the OP, using copied, pasted, and corrected row-vector computations:
$$
\begin{aligned}
(0,-1,1)
&=\boxed{0}\cdot (1,1,0)+\boxed{(-1)}\cdot (0,1,-1)+\boxed{0}\cdot (1,1,1)
\ ,
\\
\\
(-3,1,-1)
&=\boxed{-6}\cdot (1,1,0)+\boxed{4}\cdot (0,1,-1)+\boxed{3}\cdot (1,1,1)
\ ,
\\
\\
(2,-1,2)
&=\boxed{3}\cdot (1,1,0)+\boxed{(-3)}\cdot (0,1,-1)+\boxed{-1}\cdot (1,1,1)
\ .
\end{aligned}
$$
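The boxed coefficients are exactly the entries of $C^{-1}AC$, read column by column, and they can be recovered in one line of the same Sage session (a sketch using `solve_right`, which solves $CX=AC$ for $X$):
sage: C.solve_right(A * C)    # the C-coordinates of the columns of A*C
[ 0 -6  3]
[-1  4 -3]
[ 0  3 -1]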
Best Answer
You're looking for a linear map $T$ such that $T \begin{pmatrix}1\\1\\0\end{pmatrix} = \begin{pmatrix}2\\1\\1\end{pmatrix}$, $T \begin{pmatrix}1\\0\\1\end{pmatrix} = \begin{pmatrix}1\\2\\1\end{pmatrix}$, and $T \begin{pmatrix}0\\1\\1\end{pmatrix} = \begin{pmatrix}-1\\1\\1\end{pmatrix}$. To find a matrix for $T$ with respect to the standard basis using the change of basis formula, you'd note that $[T]^{\beta_1}_{\beta_2}$ is the identity matrix (the notation means "the matrix of $T$ with respect to initial basis $\beta_1$ and final basis $\beta_2$") and the change of basis formula says $$ [T]^\mathcal{E}_\mathcal{E} = [\operatorname{id}]^{\beta_2}_\mathcal{E} [T]^{\beta_1}_{\beta_2} [\operatorname{id}]^\mathcal{E}_{\beta_1}$$ where $\mathcal{E}$ is the standard basis.

The matrix in the middle is the identity, so the matrix of $T$ with respect to the standard basis is $[\operatorname{id}]^{\beta_2}_\mathcal{E} [\operatorname{id}]^\mathcal{E}_{\beta_1}$. The first of these matrices is easy: it records how to express the basis elements from $\beta_2$ as linear combinations of the standard basis, so it is just the matrix $\begin{pmatrix}2&1&-1\\1&2&1\\1&1&1\end{pmatrix}$ whose columns are the vectors from $\beta_2$.

To find $[\operatorname{id}]^\mathcal{E}_{\beta_1}$ you could note that it is the inverse of $[\operatorname{id}]^{\beta_1}_\mathcal{E}$, which you can calculate like before then invert, or you could compute it directly using the definition of the matrix of a linear map. Either way you'd get $\begin{pmatrix}1/2&1/2&-1/2\\1/2&-1/2&1/2\\-1/2&1/2&1/2\end{pmatrix}$, and the product is $A=\begin{pmatrix}2&0&-1\\1&0&1\\1/2&1/2&1/2\end{pmatrix}$. You can check that this matrix really does map the vectors in $\beta_1$ to those in $\beta_2$, e.g. $A \begin{pmatrix}1\\1\\0\end{pmatrix} = \begin{pmatrix}2\\1\\1/2\end{pmatrix}+\begin{pmatrix}0\\0\\1/2\end{pmatrix} = \begin{pmatrix}2\\1\\1\end{pmatrix}$.
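A minimal Sage check of this computation (the names `M1`, `M2`, `T` are ad hoc, with columns as indicated in the comments):
sage: M1 = matrix(3, 3, [1, 1, 0, 1, 0, 1, 0, 1, 1])   # columns: the beta_1 vectors
sage: M2 = matrix(3, 3, [2, 1, -1, 1, 2, 1, 1, 1, 1])  # columns: the beta_2 vectors
sage: T = M2 * M1.inverse()   # [id]^{beta_2}_E * [id]^E_{beta_1}
sage: T
[   2    0   -1]
[   1    0    1]
[ 1/2  1/2  1/2]
sage: all(T * M1.column(j) == M2.column(j) for j in range(3))
True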
It's not necessary to use the change of basis machinery unless you want to practise doing it. You can just proceed directly by solving the system of linear equations for the entries of $[T]^\mathcal{E}_\mathcal{E}$ given by the requirements that it maps the entries of $\beta_1$ to $\beta_2$. Alternatively, and this is the method shown in the first link, if you can find scalars $a_{ij}$ such that $\mathbf{e}_j = \sum_i a_{ij}\beta_{1i}$ (where $\mathbf{e}_j$ is the $j$th standard basis vector and $\beta_{1i}$ means the $i$th basis vector from $\beta_1$) then applying $T$ to both sides and using linearity you have $T(\mathbf{e}_j) = \sum_i a_{ij}\beta_{2i}$, and this column vector is the $j$th column of $[T]^\mathcal{E}_\mathcal{E}$. Of course, these scalars $a_{ij}$ are exactly the entries of $[\operatorname{id}]^\mathcal{E}_{\beta_1}$ and $\sum_i a_{ij}\beta_{2i}$ is the $j$th column of the matrix product $[\operatorname{id}]^{\beta_2}_\mathcal{E}[\operatorname{id}]^\mathcal{E}_{\beta_1}$, so this is equivalent to the method above.
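For instance, a worked instance of this direct method for $j=1$: one checks that $\mathbf{e}_1=\tfrac12\begin{pmatrix}1\\1\\0\end{pmatrix}+\tfrac12\begin{pmatrix}1\\0\\1\end{pmatrix}-\tfrac12\begin{pmatrix}0\\1\\1\end{pmatrix}$, so applying $T$ and using linearity gives
$$
T(\mathbf{e}_1)=\tfrac12\begin{pmatrix}2\\1\\1\end{pmatrix}+\tfrac12\begin{pmatrix}1\\2\\1\end{pmatrix}-\tfrac12\begin{pmatrix}-1\\1\\1\end{pmatrix}=\begin{pmatrix}2\\1\\1/2\end{pmatrix}\ ,
$$
which is indeed the first column of the matrix $A$ above.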
In your second link, the answers don't look correct to me. One of them computes the two matrices $M=[\operatorname{id}]^{\beta_1}_\mathcal{E}$ and $N= [\operatorname{id}]^{\beta_2}_\mathcal{E}$ in my notation (unfortunately the other way round to theirs) then says that $[T]^{\beta_2}_{\beta_1} =N^{-1}M$, but $[T]^{\beta_2}_{\beta_1}$ is the identity and in any case $N^{-1}M$ is not the matrix that we want.
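For a quick numeric confirmation of that last claim (same ad hoc Sage names as above, so $M=$ `M1` and $N=$ `M2`):
sage: (M2.inverse() * M1) * vector([1, 1, 0])   # N^{-1}M applied to the first beta_1 vector
(1, 0, 0)
The result is $\mathbf{e}_1$ rather than $\begin{pmatrix}2\\1\\1\end{pmatrix}$, so $N^{-1}M$ does not map $\beta_1$ to $\beta_2$.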