This is self-study? In an actual introductory LA course, you would most certainly have done a lot of Gaussian elimination as homework. It's worth constructing some exercises for yourself to get some proficiency (and it's not hard to construct initial examples: just write down some systems of linear equations with random coefficients and have at it! Then figure out how to deliberately construct some nontrivial examples of systems with less than maximal rank).
That said:
In the first question, you don't actually need to do any nontrivial calculations. You know (or should know) that you can specify a linear transformation $V\to W$ completely by giving the image of every element of a basis for $V$, and that each ordered set of elements of $W$ gives rise to a linear transformation. So if you can extend $(1,-1,1)$ and $(1,1,1)$ to a basis for $\mathbb R^3$, then it doesn't matter what the exercise wants you to do with them: you know it can be done. And they can be extended to a basis because they are linearly independent (a set of two vectors is independent if they are not parallel).
If you wanted to write the transformation down explicitly, the systematic approach would start by choosing a third vector to complete a basis. This is more a trial-and-error matter than a systematic procedure, because most other vectors will work. (In fact, one of the standard basis vectors will always work (why?).) In this case we see that our two vectors have the same first and last components, so this will be the case for any vector in their span. So if we choose any vector with different first and last components, we can open up the span to cover all of $\mathbb R^3$. Thus our basis consists of, for example, $(1,-1,1)$, $(1,1,1)$ and $(1,0,0)$. The matrix converting from our new basis to the standard basis is $P=\pmatrix{1&1&1\\-1&1&0\\1&1&0}$. Its inverse converts from the standard basis to our chosen one; compute it by Gaussian elimination.
Now, decide where in $\mathbb R^2$ our third basis vector will map to. The choice is immaterial, but for computational convenience we can take it to be $(0,0)$. Thus, the matrix from the chosen basis for $\mathbb R^3$ to $\mathbb R^2$ is $Q=\pmatrix{1&0&0\\0&1&0}$, and we can now compute the matrix for the entire transformation as $QP^{-1}$.
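To check the whole construction, here is a small Python sketch using exact `Fraction` arithmetic. It builds $P$ from the chosen basis, inverts it by Gauss-Jordan elimination (the same row operations you would do by hand), and verifies that $A = QP^{-1}$ sends the three basis vectors to $(1,0)$, $(0,1)$ and $(0,0)$, the images encoded by $Q$:

```python
from fractions import Fraction

def invert(M):
    """Invert a square matrix by Gauss-Jordan elimination on [M | I]."""
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)]
           + [Fraction(1 if i == j else 0) for j in range(n)]
           for i in range(n)]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Clear the rest of the column.
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Columns of P are the chosen basis vectors.
P = [[1, 1, 1],
     [-1, 1, 0],
     [1, 1, 0]]
Q = [[1, 0, 0],   # first basis vector -> (1, 0)
     [0, 1, 0]]   # second -> (0, 1), third -> (0, 0)

A = matmul(Q, invert(P))  # matrix of the full transformation

# Sanity check: the basis vectors land where we decided.
assert matmul(A, [[1], [-1], [1]]) == [[1], [0]]
assert matmul(A, [[1], [1], [1]]) == [[0], [1]]
assert matmul(A, [[1], [0], [0]]) == [[0], [0]]
```

The `invert` routine works for any invertible matrix with rational entries, so you can also use it to check Gaussian-elimination practice problems.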
In the second part, you have specifications for the images of too many vectors in $\mathbb R^2$ to be a basis, so there you do have to calculate. The general plan would be to arbitrarily declare two of the $\alpha_i$s to be a basis for $\mathbb R^2$, figure out how the third is a linear combination of the first two, and then see if the $\beta_i$s fit together in the same linear combination. If they do, all is well; if they don't, the three $(\alpha_i\mapsto\beta_i)$ pairs cannot possibly fit together into a linear transformation.
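This general plan is mechanical enough to sketch in code. The vectors below are hypothetical stand-ins (the exercise's actual $\alpha_i$ and $\beta_i$ are not reproduced here); `extends_to_linear_map` expresses $\alpha_3$ in terms of $\alpha_1,\alpha_2$ by Cramer's rule and then tests whether the $\beta_i$s satisfy the same relation:

```python
from fractions import Fraction

def solve2(a1, a2, target):
    """Solve c1*a1 + c2*a2 = target for vectors in R^2 by Cramer's rule.
    Assumes a1 and a2 are linearly independent (nonzero determinant)."""
    det = a1[0] * a2[1] - a1[1] * a2[0]
    c1 = Fraction(target[0] * a2[1] - target[1] * a2[0], det)
    c2 = Fraction(a1[0] * target[1] - a1[1] * target[0], det)
    return c1, c2

def extends_to_linear_map(alphas, betas):
    """Check whether alpha_i -> beta_i is consistent with linearity,
    taking alpha_1 and alpha_2 as the basis."""
    c1, c2 = solve2(alphas[0], alphas[1], alphas[2])
    expected = [c1 * b1 + c2 * b2 for b1, b2 in zip(betas[0], betas[1])]
    return expected == list(betas[2])

# Hypothetical data: here alpha_3 = alpha_1 + alpha_2, so beta_3 must
# equal beta_1 + beta_2 for a linear T to exist.
alphas = [(1, 0), (0, 1), (1, 1)]
consistent = extends_to_linear_map(alphas, [(2, 3), (1, -1), (3, 2)])  # True
broken = extends_to_linear_map(alphas, [(2, 3), (1, -1), (0, 0)])      # False
```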
However, this particular exercise happens to be easier to do in reverse: Since $\beta_1$ and $\beta_2$ span the entire $\mathbb R^2$, you know that $T$, if it exists, must have rank $2$ and therefore is a bijection. So we can ask instead whether its inverse can exist. What makes this easier is that it is very easy to see a linear relation between the $\beta_i$s, namely $\beta_1+\beta_2=\beta_3$. So if $T$ exists and has an inverse, then $T^{-1}(\beta_1)+T^{-1}(\beta_2)=T^{-1}(\beta_3)$, or, in other words, $\alpha_1+\alpha_2=\alpha_3$. But that is most definitely not the case, so $T$ cannot exist here.
Remember that $T$ is linear. That means that for any vectors $v,w\in\mathbb{R}^2$ and any scalars $a,b\in\mathbb{R}$,
$$T(av+bw)=aT(v)+bT(w).$$
So, let's use this information. Since
$$T \begin{bmatrix} 1 \\ 2 \\ \end{bmatrix} = \begin{bmatrix} 0 \\ 12 \\ -2 \end{bmatrix}, \qquad T\begin{bmatrix} 2 \\ -1 \\ \end{bmatrix} = \begin{bmatrix} 10 \\ -1 \\ 1 \end{bmatrix},$$
you know that
$$T\left(\begin{bmatrix} 1 \\ 2 \\ \end{bmatrix}+2\begin{bmatrix} 2 \\ -1 \\ \end{bmatrix}\right)=T\left(\begin{bmatrix} 1 \\ 2 \\ \end{bmatrix}+\begin{bmatrix} 4 \\ -2 \\ \end{bmatrix}\right)=T\begin{bmatrix} 5 \\ 0\\ \end{bmatrix}$$
must equal
$$T \begin{bmatrix} 1 \\ 2 \\ \end{bmatrix}+2\cdot T\begin{bmatrix} 2 \\ -1 \\ \end{bmatrix} =\begin{bmatrix} 0 \\ 12 \\ -2 \end{bmatrix}+2\cdot \begin{bmatrix} 10 \\ -1 \\ 1 \end{bmatrix}=\begin{bmatrix} 20 \\ 10 \\ 0 \end{bmatrix}.$$
So, we know $T\begin{bmatrix} 5 \\ 0\\ \end{bmatrix}$. Do you see how to find $T\begin{bmatrix} 1 \\ 0\\ \end{bmatrix}$? Then use the same process to figure out $T\begin{bmatrix} 0 \\ 1\\ \end{bmatrix}$.
After doing that, you should know how to make the (standard basis) matrix for $T$.
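Once you have worked the steps out by hand, a short Python sketch following exactly this process can be used to check your answer (all divisions below are exact, which is why integer arithmetic suffices):

```python
# Given data from the problem.
T_12 = (0, 12, -2)    # T(1, 2)
T_2m1 = (10, -1, 1)   # T(2, -1)

# T(5, 0) = T(1, 2) + 2*T(2, -1), by linearity.
T_50 = tuple(a + 2 * b for a, b in zip(T_12, T_2m1))

# T(1, 0) = T(5, 0) / 5 (exact division here).
T_e1 = tuple(x // 5 for x in T_50)

# (0, 1) = ((1, 2) - (1, 0)) / 2, so T(0, 1) = (T(1, 2) - T(1, 0)) / 2.
T_e2 = tuple((a - b) // 2 for a, b in zip(T_12, T_e1))

# The columns of the standard matrix are T(e1) and T(e2).
A = [[T_e1[i], T_e2[i]] for i in range(3)]

# Sanity check against the original data.
apply = lambda M, v: tuple(row[0] * v[0] + row[1] * v[1] for row in M)
assert apply(A, (1, 2)) == T_12
assert apply(A, (2, -1)) == T_2m1
```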
Take $\mathbf x$ to be the vector $[x,y,z]^T$ and multiply it by your matrix $A$. That is your linear transformation.
The 'image' of a vector under a function just means the value of the function when that vector is put in as an argument. So compute $T(v)$ and $T(u)$ once you have found $T$ explicitly.
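Concretely, computing an image is just a matrix-vector product. A minimal sketch (the matrix and vector here are made up, not from the question):

```python
# Hypothetical example: the image of v under T(x) = A x.
A = [[1, 0, 2],
     [0, 3, -1]]
v = [1, 2, 3]       # stands in for [x, y, z]^T

# Each entry of the image is the dot product of a row of A with v.
Tv = [sum(a * x for a, x in zip(row, v)) for row in A]  # [7, 3]
```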