To write $T$ in a diagonal form one has to find a basis $(e_1, e_2, e_3)$ in ${\mathbb R}^3$
and a basis $(w_1, w_2)$ in ${\mathbb R}^2$ such that
$$ Te_1=w_1, \quad Te_2=w_2, \quad \text{and}\quad Te_3=0. \tag1$$
(This is possible because $T$ has rank $2$, which is the dimension of the second space.)
Let $(i,j,k)$ be the standard basis in ${\mathbb R}^3$ and $(i,j)$ the standard basis in ${\mathbb R}^2$. Observe that
$$ T i=0,\quad Tj=i+j, \quad \text{and}\quad Tk=i-j. $$
Since $w_1=i+j$ and $w_2=i-j$ are linearly independent they form a basis of ${\mathbb R}^2$.
Hence if one sets $e_1=j$, $e_2=k$, and $e_3=i$ (the zero vector cannot belong to a basis, but $Ti=0$, so $e_3=i$ works and $(j,k,i)$ is indeed a basis of ${\mathbb R}^3$), then we have (1).
Note that the above diagonal form is not unique; the conditions in (1) can be replaced by
$$ Tf_1=\lambda u_1, \quad Tf_2=\mu u_2, \quad \text{and}\quad Tf_3=0 $$
where $\lambda, \mu$ are nonzero numbers and $(f_1, f_2, f_3)$ is a basis of ${\mathbb R}^3$ and $(u_1, u_2)$ is a basis of ${\mathbb R}^2$.
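For readers who want to double-check this numerically, here is a minimal Python/NumPy sketch (the matrix of $T$ is the one implied by $Ti=0$, $Tj=i+j$, $Tk=i-j$; the variable names are mine):
```python
import numpy as np

T = np.array([[0., 1.,  1.],
              [0., 1., -1.]])   # matrix of T in the standard bases

# e1 = j, e2 = k, e3 = i as coordinate columns, and w1 = i+j, w2 = i-j
e1, e2, e3 = np.array([0., 1., 0.]), np.array([0., 0., 1.]), np.array([1., 0., 0.])
w1, w2 = np.array([1., 1.]), np.array([1., -1.])

assert np.allclose(T @ e1, w1)            # T e1 = w1
assert np.allclose(T @ e2, w2)            # T e2 = w2
assert np.allclose(T @ e3, np.zeros(2))   # T e3 = 0
```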
Explanation
Let $V$ and $W$ be vector spaces over ${\mathbb R}$, $\dim(V)=m$ and $\dim(W)=n$, and let
$T:V\to W$ be a linear transformation (which means that for arbitrary $v_1, v_2 \in V$ and $\alpha_1, \alpha_2 \in {\mathbb R}$ one has $T(\alpha_1 v_1+\alpha_2 v_2)=\alpha_1 Tv_1+\alpha_2 Tv_2$). If $(e_1, \ldots, e_m)$ is a basis of $V$ and $(f_1, \ldots, f_n)$ is
a basis of $W$, then $T$ can be represented by an $n\times m$ matrix as follows. Let $j\in \{ 1, \ldots, m\}$ and consider the vector $Te_j$. It is a vector in $W$ and therefore can be uniquely represented as
$$ Te_j=t_{1j}f_1+\cdots+t_{nj}f_n $$
because $(f_1, \ldots, f_n)$ is a basis of $W$.
The numbers $t_{1j}, \ldots, t_{nj}$ form the $j$-th column of the matrix which represents $T$ with respect to the bases $(e_1, \ldots, e_m)$ and $(f_1, \ldots, f_n)$. Hence the whole matrix is
$$ \left[ \begin{array}{cccc}
t_{11} & t_{12} & \cdots & t_{1m}\\
t_{21} & t_{22} & \cdots & t_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
t_{n1} & t_{n2} & \cdots & t_{nm}
\end{array} \right]. $$
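As a sketch of this construction in code, one can compute the columns numerically; the helper below (its name and the whole NumPy setup are my own illustrative choices, not part of the answer) solves for the coordinates of $Te_j$ in the basis $(f_1, \ldots, f_n)$ and stacks them as columns:
```python
import numpy as np

def representing_matrix(apply_T, basis_V, basis_W):
    """Matrix of T with respect to basis_V (domain) and basis_W (codomain)."""
    F = np.column_stack(basis_W)                              # columns f_1, ..., f_n
    cols = [np.linalg.solve(F, apply_T(e)) for e in basis_V]  # coordinates of T e_j
    return np.column_stack(cols)                              # j-th column <-> T e_j

# With the standard bases this recovers the standard matrix itself:
T_std = np.array([[0., 1.,  1.],
                  [0., 1., -1.]])
std3 = [np.eye(3)[:, j] for j in range(3)]
std2 = [np.eye(2)[:, j] for j in range(2)]
assert np.allclose(representing_matrix(lambda v: T_std @ v, std3, std2), T_std)
```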
If one represents $v=\lambda_1 e_1+\cdots+\lambda_m e_m$ as a column vector
$$ \left[ \begin{array}{c} \lambda_1\\ \lambda_2\\ \vdots\\ \lambda_m\end{array}\right], $$
then
$$ \left[ \begin{array}{cccc}
t_{11} & t_{12} & \cdots & t_{1m}\\
t_{21} & t_{22} & \cdots & t_{2m}\\
\vdots & \vdots & \ddots & \vdots\\
t_{n1} & t_{n2} & \cdots & t_{nm}
\end{array} \right]\left[ \begin{array}{c} \lambda_1\\ \lambda_2\\ \vdots\\ \lambda_m\end{array}\right]=
\left[ \begin{array}{c} t_{11}\lambda_1+t_{12}\lambda_2+\cdots+t_{1m}\lambda_m\\ t_{21}\lambda_1+t_{22}\lambda_2+\cdots+t_{2m}\lambda_m\\ \vdots\\ t_{n1}\lambda_1+t_{n2}\lambda_2+\cdots+t_{nm}\lambda_m\end{array}\right]=\left[ \begin{array}{c} \mu_1\\ \mu_2\\ \vdots\\ \mu_n\end{array}\right], $$
where $\mu_1, \ldots, \mu_n$ are the coefficients in the expansion of the vector $Tv$ in the basis $(f_1, \ldots, f_n)$, i.e.,
$$ Tv=\mu_1 f_1+\cdots+\mu_n f_n.$$
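A quick numerical sanity check of this identity, with randomly chosen (hence almost surely invertible) bases; all names here are illustrative:
```python
import numpy as np
rng = np.random.default_rng(0)

n, m = 2, 3
T_std = rng.standard_normal((n, m))   # T in the standard bases
E = rng.standard_normal((m, m))       # columns: a basis e_1, ..., e_m of R^m
F = rng.standard_normal((n, n))       # columns: a basis f_1, ..., f_n of R^n

M = np.linalg.solve(F, T_std @ E)     # matrix of T w.r.t. the bases E and F
lam = rng.standard_normal(m)          # coordinates of v in the basis E
v = E @ lam
mu = np.linalg.solve(F, T_std @ v)    # coordinates of T v in the basis F
assert np.allclose(M @ lam, mu)       # matrix times column = column of T v
```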
In the case of the vector space ${\mathbb R}^k$ one has a natural basis, called the standard basis:
$$ \left[ \begin{array}{c} 1 \\ 0\\ \vdots \\ 0 \end{array} \right],\quad \left[ \begin{array}{c} 0 \\ 1\\ \vdots \\ 0 \end{array} \right],\quad \ldots, \quad
\left[ \begin{array}{c} 0 \\ 0\\ \vdots \\ 1 \end{array} \right]. $$
If an $n\times m$ matrix is given, then one considers it as a matrix which represents a linear transformation from ${\mathbb R}^m$ to ${\mathbb R}^n$ with respect to the standard bases in ${\mathbb R}^m$ and ${\mathbb R}^n$, respectively.
In the case above one has $m=3$ and $n=2$. The standard bases are usually denoted as
$$ i=\left[ \begin{array}{c}1\\ 0\\ 0\end{array}\right], \quad j=\left[ \begin{array}{c}0\\ 1\\ 0\end{array}\right], \quad k=\left[ \begin{array}{c}0\\ 0\\ 1\end{array}\right]\quad \text{in}\quad {\mathbb R}^3$$
and
$$ i=\left[ \begin{array}{c}1\\ 0\end{array}\right], \quad j=\left[ \begin{array}{c}0\\ 1\end{array}\right]\quad \text{in}\quad {\mathbb R}^2.$$
We have a matrix
$$ \left[ \begin{array}{rrr} 0 & 1 & 1\\0 & 1 & -1\end{array}\right] $$
which represents a linear transformation $T:{\mathbb R}^3 \to {\mathbb R}^2$ with respect to the standard bases $(i,j,k)$ in ${\mathbb R}^3$ and $(i,j)$ in ${\mathbb R}^2$. The question is to find bases $(e_1, e_2, e_3)$ in ${\mathbb R}^3$ and $(w_1, w_2)$ in ${\mathbb R}^2$ such that $T$ is represented by a diagonal matrix with respect to them. This means that the bases have to be chosen so that one has
$$ Te_1=\lambda w_1,\quad Te_2=\mu w_2,\quad Te_3=0 $$
for some numbers $\lambda$ and $\mu$. Since $T$ is surjective, $\lambda$ and $\mu$ are non-zero. One can also assume that $\lambda=\mu=1$: if there were an additional condition that the bases consist of vectors of norm $1$, we could not simply set these numbers to $1$ and would have to compute them; in our case, however, there is no condition on the norms, so we may take them to be $1$. Hence we would like to have
$$ Te_1=w_1,\quad Te_2=w_2,\quad Te_3=0. \tag2$$
Now we observe that
$$ T i=0,\quad Tj=i+j, \quad \text{and}\quad Tk=i-j. $$
Hence if we choose new bases as follows:
$e_1=j,\quad e_2=k,\quad e_3=i\quad$ (the new basis in ${\mathbb R}^3$)
and
$w_1=i+j, \quad w_2=i-j\quad$ (the new basis in ${\mathbb R}^2$),
then (2) is fulfilled. Hence with respect to these new bases $T$ is represented by the matrix
$$ \left[ \begin{array}{rrr} 1 & 0 & 0\\0 & 1 & 0\end{array}\right]. $$
This means that if we expand $v\in {\mathbb R}^3$ with respect to the basis $(e_1,e_2,e_3)$ as
$$ v=\alpha_1 e_1+\alpha_2 e_2+\alpha_3 e_3, $$
then $Tv$, which is a vector in ${\mathbb R}^2$, will have an expansion
$$ Tv=\alpha_1 w_1+\alpha_2 w_2 $$
with respect to the basis $(w_1,w_2)$ in ${\mathbb R}^2$:
$$ \left[ \begin{array}{rrr} 1 & 0 & 0\\0 & 1 & 0\end{array}\right] \left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{array}\right]=\left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \end{array}\right]. $$
$\left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{array}\right]$ is the column of $v$ with respect to the basis $(e_1, e_2, e_3)$ and $\left[ \begin{array}{c} \alpha_1 \\ \alpha_2 \end{array}\right]$ is the column of $Tv$ with respect to the basis $(w_1,w_2)$.
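The whole computation can be verified in a few lines of NumPy: writing the new bases as the columns of matrices $E$ and $W$, the matrix of $T$ with respect to them is $W^{-1}(TE)$, which should come out as the diagonal matrix above (a sketch with my own variable names):
```python
import numpy as np

T = np.array([[0., 1.,  1.],
              [0., 1., -1.]])
E = np.array([[0., 0., 1.],   # columns: e1 = j, e2 = k, e3 = i
              [1., 0., 0.],
              [0., 1., 0.]])
W = np.array([[1.,  1.],      # columns: w1 = i+j, w2 = i-j
              [1., -1.]])

M = np.linalg.solve(W, T @ E)  # matrix of T w.r.t. the new bases
assert np.allclose(M, [[1., 0., 0.],
                       [0., 1., 0.]])
```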
Let
$\mathcal{E}=\left\{e_1,e_2,e_3\right\}$
be our canonical basis. With respect to this basis, the transformation $T$ has the representation
$T=\left(
\begin{array}{ccc}
3 & 1 & 0 \\
1 & 0 & 1 \\
1 & 0 & -1 \\
\end{array}
\right)$.
Now we have a new basis:
$\mathcal{F}=\left\{e_1,e_1+e_2,e_1+e_2+e_3\right\}$.
Let
$M_{\mathcal{F}}=\left(
\begin{array}{ccc}
1 & 1 & 1 \\
0 & 1 & 1 \\
0 & 0 & 1 \\
\end{array}
\right)$
be the transition matrix between the two bases (its columns are the vectors of $\mathcal{F}$ written in the canonical basis).
Then canonical coordinates are transformed into new coordinates
(with respect to the basis $\mathcal{F}$) by the inverse matrix, which is
$N_{\mathcal{F}}=\left(
\begin{array}{ccc}
1 & -1 & 0 \\
0 & 1 & -1 \\
0 & 0 & 1 \\
\end{array}
\right)$.
Take a coordinate column
$A=\left(a_1,a_2,a_3\right)^{\mathsf T}$
and get the new coordinates
$B=N_{\mathcal{F}}.A$.
Then, with $S=T.M_{\mathcal{F}}$
we see:
$T.A=T.M_{\mathcal{F}}.N_{\mathcal{F}}.A=S.B$.
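Here is the same computation in NumPy (the coordinate column $A$ is an arbitrary sample of mine; everything else is taken from above):
```python
import numpy as np

T  = np.array([[3., 1.,  0.],
               [1., 0.,  1.],
               [1., 0., -1.]])
MF = np.array([[1., 1., 1.],
               [0., 1., 1.],
               [0., 0., 1.]])
NF = np.linalg.inv(MF)            # = [[1,-1,0],[0,1,-1],[0,0,1]]

A = np.array([2., -1., 5.])       # canonical coordinates of some vector
B = NF @ A                        # the same vector in F-coordinates
S = T @ MF                        # T acting on F-coordinates
assert np.allclose(T @ A, S @ B)  # T.A = T.MF.NF.A = S.B
```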
It's not a miracle, only linear algebra. The key is the change of basis, which implies a change of coordinates. That's all.
By the way: calculating without inverses is not possible. The transition matrices between the bases must be regular, i.e. invertible; otherwise this doesn't work.
Let's see another basis,
$\mathcal{B}=\left\{2 e_1+5 e_3,e_1+e_2+6 e_3,3 e_1+9 e_3\right\}$,
with another transition matrix:
$M_{\mathcal{B}}=\left(
\begin{array}{ccc}
2 & 1 & 3 \\
0 & 1 & 0 \\
5 & 6 & 9 \\
\end{array}
\right)$.
The inverse:
$N_{\mathcal{B}}=\left(
\begin{array}{ccc}
3 & 3 & -1 \\
0 & 1 & 0 \\
-\frac{5}{3} & -\frac{7}{3} & \frac{2}{3} \\
\end{array}
\right)$.
The old transformation $T$:
$T=\left(
\begin{array}{ccc}
3 & 1 & 0 \\
1 & 0 & 1 \\
1 & 0 & -1 \\
\end{array}
\right)$.
The transformed $T$:
$S=T.M_{\mathcal{B}}=\left(
\begin{array}{ccc}
6 & 4 & 9 \\
7 & 7 & 12 \\
-3 & -5 & -6 \\
\end{array}
\right)$
The transformed $A$:
$B=N_{\mathcal{B}}.A$.
$T.A=T.M_{\mathcal{B}}.N_{\mathcal{B}}.A=S.B$.
Like before.
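The corresponding NumPy check for the basis $\mathcal{B}$ (again, $A$ is just a sample coordinate column of mine):
```python
import numpy as np

T  = np.array([[3., 1.,  0.],
               [1., 0.,  1.],
               [1., 0., -1.]])
MB = np.array([[2., 1., 3.],
               [0., 1., 0.],
               [5., 6., 9.]])
NB = np.linalg.inv(MB)
S  = T @ MB
assert np.allclose(S, [[ 6.,  4.,   9.],
                       [ 7.,  7.,  12.],
                       [-3., -5.,  -6.]])

A = np.array([1., 2., 3.])
assert np.allclose(T @ A, S @ (NB @ A))  # T.A = S.B with B = NB.A
```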
Best Answer
Basically the way to think about your matrix product is as a series of transformations.
First you transform your vector from the new basis to the standard basis. Notice that $H$ transforms vectors from the new basis to their representations with respect to the standard basis. This is exactly what we want. So the first matrix (the one that goes on the right, and is thus the first to be multiplied by the vector) will be $H$.
Now that your vector is in the standard basis, you want to transform your vector with $T$. So that'll be your middle matrix in this product.
Finally, you'd like to transform your vector from its coordinates with respect to the standard basis to the new basis. This is just the inverse of $H$, i.e. $H^{-1}$.
So your matrix with respect to the new basis is just $T' = H^{-1}TH$.
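A short NumPy sketch of this conjugation, with an arbitrary invertible $H$ and an arbitrary $T$ of my choosing (not the ones from the question):
```python
import numpy as np
rng = np.random.default_rng(1)

T = rng.standard_normal((3, 3))   # T in the standard basis
H = rng.standard_normal((3, 3))   # columns of H: the new basis vectors

T_new = np.linalg.inv(H) @ T @ H  # T' = H^{-1} T H, w.r.t. the new basis

x_new = rng.standard_normal(3)    # a vector given in new-basis coordinates
lhs = T_new @ x_new               # apply T, staying in the new basis
rhs = np.linalg.inv(H) @ (T @ (H @ x_new))  # to standard, apply T, back
assert np.allclose(lhs, rhs)
```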
P.S. I don't see how solving for $x$ in $Hx=T$, where $x$ is presumably a column vector, has anything to do with getting the matrix which represents the same linear transformation as $T$ with respect to your new basis.