Your method 1 should be written as
$$AT=B$$
where $T$ is the transformation matrix. In this expression, you are treating the columns of $A$ as basis vectors. This is equivalent to saying
$$A\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\-1\end{pmatrix}, \quad A\begin{pmatrix}y_1\\y_2\end{pmatrix}=\begin{pmatrix}1\\-1\end{pmatrix}.$$
This amounts to finding how the columns of $B$ can be expressed in terms of the basis.
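As a numerical sketch of method 1, here is how $T$ can be computed by solving $AT=B$ (the particular invertible matrix $A$ below is my own illustrative assumption; $B$'s columns are the two target vectors from the display above):

```python
import numpy as np

# Hypothetical basis matrix A (columns are the basis vectors); any
# invertible 2x2 matrix works -- this choice is purely illustrative.
A = np.array([[1.0, 1.0],
              [3.0, 2.0]])
# Target columns from the text: (0, -1) and (1, -1).
B = np.array([[0.0, 1.0],
              [-1.0, -1.0]])

# Method 1: solve A T = B, i.e. T = A^{-1} B.  np.linalg.solve avoids
# forming the inverse explicitly.
T = np.linalg.solve(A, B)

# Each column of T expresses the matching column of B in the basis
# formed by the columns of A.
assert np.allclose(A @ T, B)
```

Using `solve` rather than `inv` followed by a multiplication is the numerically preferred form of the same computation.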
Method 2 works fine, since it does exactly what method 1 does, but using Gauss–Jordan elimination instead of computing the inverse and multiplying.
Method 3 is not correct. You confused the image with the step afterwards. You already found the image, which is $B$. You are now trying to find an expression of $B$ in terms of the basis $A$. In the answer you referred to, they used rows instead of columns, but the result is the transpose, too. Also, that problem uses the standard basis, so it is different from yours.
To answer your question 3: $LA=B$ is a correct expression, but not what you intended; it is just what you already did in step 1.
Let $f:\mathbb{R}^n\rightarrow \mathbb{R}^m$ be a linear map. Suppose that $A\in \mathbb{R}^{m\times n}$ is the matrix of $f$ w.r.t. the standard bases. Since the rank of $A$ is equal to the column rank of $A$, it suffices to show that the columns of $A$ are vectors in the image of $f$. Now let $e_i$ be the $i$-th standard basis vector of $\mathbb{R}^n$. Then $f(e_i)=Ae_i=A_i$, where $A_i$ is the $i$-th column of $A$. This completes the proof.
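The key step of the argument, $f(e_i)=Ae_i=A_i$, can be sanity-checked numerically (the matrix $A$ below is an arbitrary example of mine, not from the question):

```python
import numpy as np

# With standard bases, f(e_i) = A e_i is exactly the i-th column of A,
# so every column of A lies in Im(f).  This A is purely illustrative.
A = np.array([[2.0, -1.0],
              [1.0,  3.0],
              [-1.0, 1.0]])

n = A.shape[1]
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    # Multiplying by e_i picks out the i-th column of A.
    assert np.allclose(A @ e_i, A[:, i])
```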
Can you figure out what happens when the matrix of $f$ is not given w.r.t. the standard bases?
Edit: You edited the question. The matrix $A$ is no longer the matrix of $f$ w.r.t. the standard bases. The above argument fails, since the columns of $A$ do not necessarily belong to the image. However, the columns represent the coordinates of the images of basis vectors. So what does belong to your image?
Second Edit: Let's work with your example. You know the matrix of $f$ w.r.t. the bases $B$ and $B'$. What information does this give us? We know that $f((1,3))=2(0,0,1)+1(1,0,-1)-1(0,1,0)$. Notice that the coordinates appearing in this last expression are $(2,1,-1)$, which is exactly the first column of $A$. If we want to write $(1,3)$ w.r.t. the basis $B$, then we get $(1,3)=1(1,3)+0(1,2)$. Notice that $A\begin{pmatrix}1\\0\end{pmatrix}=\begin{pmatrix}2\\1\\-1\end{pmatrix}$. This result is the coordinate vector of $f((1,3))$ w.r.t. the basis $B'$. We do not have $(2,1,-1)\in \text{Im}(f)$, but $f((1,3))=2(0,0,1)+1(1,0,-1)-1(0,1,0)=(1,-1,1)\in \text{Im}(f)$. In the same fashion, $f((1,2))=-1(0,0,1)+3(1,0,-1)+1(0,1,0)=(3,1,-4)\in \text{Im}(f)$. Hence $\text{Im}(f)=\text{Span}\left\{(1,-1,1),(3,1,-4)\right\}$.
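The whole conversion can be done in one matrix product: stack the vectors of $B'$ as columns of a change-of-basis matrix $P$; then $PA$ has the images $f(b_j)$ in standard coordinates as its columns. A short sketch using the numbers from the example:

```python
import numpy as np

# The matrix of f w.r.t. the bases B and B' from the example.
A = np.array([[2.0, -1.0],
              [1.0,  3.0],
              [-1.0, 1.0]])

# Columns of P are the vectors of B' in standard coordinates:
# (0,0,1), (1,0,-1), (0,1,0).
P = np.array([[0.0,  1.0, 0.0],
              [0.0,  0.0, 1.0],
              [1.0, -1.0, 0.0]])

# P @ A converts each column of A (the B'-coordinates of f(b_j))
# into an actual vector of Im(f) in standard coordinates.
images = P @ A
assert np.allclose(images[:, 0], [1.0, -1.0, 1.0])   # f((1,3)) = (1,-1,1)
assert np.allclose(images[:, 1], [3.0, 1.0, -4.0])   # f((1,2)) = (3,1,-4)
```

The columns of `images` are exactly the spanning set of $\text{Im}(f)$ found above.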
Best Answer
Let $u=(1,2,3,4)$ and $v=(0,1,1,1)$. Clearly, $(u,v)$ is an independent family of $\mathbb{R}^4$, and can be completed to a basis of $\mathbb{R}^4$ with two vectors. It's very easy to see that adding $e_1=(1,0,0,0)$ and $e_2=(0,1,0,0)$ to this family yields a basis of $\mathbb{R}^4$ (compute the rank of $(u,v,e_1,e_2)$).
Now a linear mapping is uniquely determined by the image of a basis of its domain. We already know the images of $u$ and $v$ (both the zero vector of $\mathbb{R}^3$), hence we only need to determine the images of $e_1$ and $e_2$ by $F$. A bit later on, we'll need the coordinates of a vector of $\mathbb{R}^4$ in the basis $(u,v,e_1,e_2)$, so we might as well do this right now. A straightforward system solving yields: $$\forall (x,y,z,t)\in\mathbb{R}^4,\quad (x,y,z,t)=(-z+t)u+(4z-3t)v+(x+z-t)e_1+(y-2z+t)e_2.$$
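The coordinate formula is easy to verify numerically; here is a minimal sketch that checks it on a few random vectors:

```python
import numpy as np

u  = np.array([1.0, 2.0, 3.0, 4.0])
v  = np.array([0.0, 1.0, 1.0, 1.0])
e1 = np.array([1.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
for _ in range(5):
    x, y, z, t = rng.normal(size=4)
    w = np.array([x, y, z, t])
    # Coordinates of w in the basis (u, v, e1, e2), per the formula above.
    recon = (-z + t)*u + (4*z - 3*t)*v + (x + z - t)*e1 + (y - 2*z + t)*e2
    assert np.allclose(recon, w)
```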
There's a caveat: we can't take just any vectors $a$ and $b$ of $\mathbb{R}^3$ for the images of $e_1$ and $e_2$, since we could create some other vectors in the kernel of $F$. The restriction is that the family $(a,b)$ should be independent (think of the Rank–Nullity Theorem).
We then have the general solution of your problem: a linear mapping $F:\mathbb{R}^4\longrightarrow\mathbb{R}^3$ satisfies all your requirements if and only if there exists an independent family $(a,b)$ of vectors of $\mathbb{R}^3$ such that: $$\forall(x,y,z,t)\in\mathbb{R}^4,\quad F(x,y,z,t)=(x+z-t)a+(y-2z+t)b.$$
For example, the linear mapping $F:\mathbb{R}^4\longrightarrow\mathbb{R}^3$ defined by: $$\forall(x,y,z,t)\in\mathbb{R}^4,\quad F(x,y,z,t)=(x+z-t,y-2z+t,0)$$ fulfills all your requirements. The matrix of this $F$ in the standard bases of $\mathbb{R}^4$ and $\mathbb{R}^3$ is: $$[F]_{\text{std}(\mathbb{R}^4),\text{std}(\mathbb{R}^3)}=\begin{pmatrix}1&0&1&-1\\0&1&-2&1\\0&0&0&0\end{pmatrix}.$$
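A quick check that this particular matrix meets the requirements ($u$ and $v$ in the kernel, and rank $2$ so the kernel is exactly their span):

```python
import numpy as np

# Matrix of the example F in the standard bases.
M = np.array([[1.0, 0.0,  1.0, -1.0],
              [0.0, 1.0, -2.0,  1.0],
              [0.0, 0.0,  0.0,  0.0]])

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([0.0, 1.0, 1.0, 1.0])

# u and v lie in the kernel of F ...
assert np.allclose(M @ u, 0)
assert np.allclose(M @ v, 0)
# ... and rank 2 guarantees, by rank-nullity, that the kernel is
# exactly Span(u, v) (dimension 4 - 2 = 2).
assert np.linalg.matrix_rank(M) == 2
```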
But there are lots more (I actually gave you all of them!).
The example given by YourAdHere corresponds to the case $a=(3,3,0)$ and $b=(1,2,0)$ (which is, in some respect, not the simplest).
Now, given a $3\times4$ matrix $M$, how can you determine whether it's the matrix (in the standard bases of $\mathbb{R}^4$ and $\mathbb{R}^3$) of a linear mapping satisfying your requirements? Easy: check that $Mu=0$ and $Mv=0$ (so that $u$ and $v$ lie in the kernel) and that $M$ has rank $2$ (so that, by rank–nullity, the kernel is exactly $\text{Span}(u,v)$).
This fact is easily seen, e.g., from the form of the matrix I gave above.
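As a sketch, the check can be automated. The helper name below is mine, and the test matrix is the one corresponding to YourAdHere's case $a=(3,3,0)$, $b=(1,2,0)$, obtained by reading off the columns of $F(x,y,z,t)=(x+z-t)a+(y-2z+t)b$:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([0.0, 1.0, 1.0, 1.0])

def satisfies_requirements(M):
    """True iff ker(M) is exactly Span(u, v): both vectors are
    in the kernel, and the rank is 2 (rank-nullity)."""
    M = np.asarray(M, dtype=float)
    return (np.allclose(M @ u, 0)
            and np.allclose(M @ v, 0)
            and np.linalg.matrix_rank(M) == 2)

# Matrix for a = (3,3,0), b = (1,2,0): columns are a, b, a-2b, -a+b.
M = np.array([[3.0, 1.0,  1.0, -2.0],
              [3.0, 2.0, -1.0, -1.0],
              [0.0, 0.0,  0.0,  0.0]])
assert satisfies_requirements(M)
```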