[Math] Explicit Linear Transformations from $\mathbb{R}^n$ to $\mathbb{R}^m$

linear algebra

I'm learning linear algebra from Hoffman's book, and I really love the abstract treatment and the focus on theory and proofs rather than computation, but I am having trouble with a few questions that actually require some sort of explicit calculation.

For an example of the style I am having trouble with (I know this is really easy):

Is there a linear transformation $T: \mathbb{R}^3 \rightarrow \mathbb{R}^2$ such that: $T(1,-1,1)=(1,0)$ and $T(1,1,1)=(0,1)$?

For this one I am not really sure where to start, as the book gives no hint at an actual example. All I can think of is giving some general definition like: let $\alpha \in \mathbb{R}^3$; then $T \alpha = \lambda_{1} \beta_{1} + \lambda_{2} \beta_{2}$, where $\beta_{1}, \beta_{2} \in \mathbb{R}^2$. How can I actually relate $(1,-1,1)$ to $(1,0)$ in any way?

As for another example,

If $\alpha_{1}=(1,-1), \alpha_{2}=(2,-1), \alpha_{3} =(-3,2)$ and $\beta_{1}=(1,0), \beta_{2}=(0,1), \beta_{3}=(1,1)$, is there a linear transformation $T: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ such that $T\alpha_{i} = \beta_{i}$, where $1 \leq i \leq 3$?

I'm just at a loss for how to start such a question with actual computations and numbers. We have only done proofs about general theory throughout the entire course, and I seem to be good at understanding how concepts interact "in the abstract", but unfortunately I don't know how to start these basic questions. A nudge in the right direction or a solution to either one of these problems would be very helpful. These aren't for homework; I just want to make sure I can actually do some calculations, but apparently I can't.

Best Answer

This is self-study? In an actual introductory linear algebra course, you would most certainly have done a lot of Gaussian elimination as homework. It's worth constructing some exercises for yourself to get some proficiency (and it's not hard to construct initial examples: just write down some systems of linear equations with random coefficients and have at it! Then figure out how to deliberately construct some nontrivial examples of systems with less-than-maximal rank).
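If you want something to check your hand computations against, here is a minimal Gauss–Jordan routine (a sketch of my own, not part of the answer) using exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form via Gauss-Jordan elimination, in exact arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot_row = 0
    for col in range(len(m[0])):
        # find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # scale the pivot row so the pivot becomes 1
        p = m[pivot_row][col]
        m[pivot_row] = [x / p for x in m[pivot_row]]
        # eliminate this column from every other row
        for r in range(len(m)):
            if r != pivot_row and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == len(m):
            break
    return m

# the system x + y = 3, x - y = 1 has solution x = 2, y = 1
assert rref([[1, 1, 3], [1, -1, 1]]) == [[1, 0, 2], [0, 1, 1]]
```

Feed it the augmented matrix of one of your practice systems and compare against your hand reduction.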

That said:

In the first question, you don't actually need to do any nontrivial calculations. You know (or should know) that you can specify a linear transformation $V\to W$ completely by giving the image of each element of a basis for $V$, and that each ordered tuple of elements of $W$ gives rise to a linear transformation this way. So if you can extend $(1,-1,1)$ and $(1,1,1)$ to a basis for $\mathbb R^3$, then it doesn't matter what the exercise wants you to do with them: you know it can be done. And they can be extended to a basis because they are linearly independent (two vectors are independent precisely when neither is a scalar multiple of the other).
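As a quick mechanical check of that independence (my own illustration, not part of the answer): in $\mathbb R^3$, the cross product of two vectors is nonzero exactly when they are linearly independent:

```python
def cross(u, v):
    """Cross product in R^3; nonzero iff u and v are linearly independent."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# the two given vectors are independent: their cross product is nonzero
assert cross((1, -1, 1), (1, 1, 1)) == (-2, 0, 2)
# by contrast, parallel vectors give the zero vector
assert cross((1, 2, 3), (2, 4, 6)) == (0, 0, 0)
```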

If you wanted to write the transformation down explicitly, the systematic approach would start by choosing a third vector to complete a basis. This is more a matter of trial and error than a systematic procedure, because most other vectors will work. (In fact, one of the standard basis vectors will always work; why?) In this case we see that our two vectors have the same first and last components, so this will be the case for any vector in their span. So if we choose any vector with different first and last components, we open up the span to cover all of $\mathbb R^3$. Thus our basis consists of, for example, $(1,-1,1)$, $(1,1,1)$ and $(1,0,0)$. The matrix converting from our new basis to the standard basis is $P=\pmatrix{1&1&1\\-1&1&0\\1&1&0}$; its inverse converts from the standard basis to our chosen one. Compute it by Gaussian elimination.

Now, decide where in $\mathbb R^2$ our third basis vector should map. The choice is immaterial, but for computational convenience we can take it to be $(0,0)$. Thus the matrix of the transformation from the chosen basis for $\mathbb R^3$ to the standard basis for $\mathbb R^2$ is $Q=\pmatrix{1&0&0\\0&1&0}$, and we can now compute the matrix of the entire transformation as $QP^{-1}$.
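Carrying the arithmetic through (my own computation, following the recipe above): eliminating on the augmented matrix $[P \mid I]$ gives $P^{-1}$, and $QP^{-1}$ is then just the top two rows of $P^{-1}$. A check in exact arithmetic:

```python
from fractions import Fraction as F

# P has the chosen basis vectors as columns; P_inv was computed by hand
# via Gaussian elimination on the augmented matrix [P | I]
P = [[1, 1, 1], [-1, 1, 0], [1, 1, 0]]
P_inv = [[F(0), F(-1, 2), F(1, 2)],
         [F(0), F(1, 2),  F(1, 2)],
         [F(1), F(0),     F(-1)]]
Q = [[1, 0, 0], [0, 1, 0]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def apply(M, v):
    return tuple(sum(m * x for m, x in zip(row, v)) for row in M)

# sanity check: P_inv really is the inverse of P
assert matmul(P, P_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

T = matmul(Q, P_inv)  # matrix of the transformation in standard coordinates
assert apply(T, (1, -1, 1)) == (1, 0)
assert apply(T, (1, 1, 1)) == (0, 1)
```

In closed form this works out to $T(x,y,z) = \left(\tfrac{z-y}{2}, \tfrac{y+z}{2}\right)$, which you can verify against both given conditions by hand.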

In the second part, you have specifications for the images of more vectors than a basis for $\mathbb R^2$ can contain, so there you do have to calculate. The general plan would be to arbitrarily declare two of the $\alpha_i$s to be a basis for $\mathbb R^2$, figure out how the third is a linear combination of the first two, and then see if the $\beta_i$s fit together in the same linear combination. If they do, all is well; if they don't, the three pairs $\alpha_i\mapsto\beta_i$ cannot possibly fit together into a linear transformation.
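A sketch of that plan (my own computation, not in the answer): declare $\alpha_1, \alpha_2$ to be the basis, write $\alpha_3$ in terms of them by Cramer's rule, and test whether the $\beta$s satisfy the same relation:

```python
from fractions import Fraction as F

a1, a2, a3 = (1, -1), (2, -1), (-3, 2)
b1, b2, b3 = (1, 0), (0, 1), (1, 1)

# solve a3 = x*a1 + y*a2 by Cramer's rule (a1 and a2 are independent)
det = a1[0] * a2[1] - a2[0] * a1[1]
x = F(a3[0] * a2[1] - a2[0] * a3[1], det)
y = F(a1[0] * a3[1] - a3[0] * a1[1], det)
assert (x, y) == (-1, -1)  # so a3 = -a1 - a2

# for T to exist, the betas must satisfy the same relation: x*b1 + y*b2 = b3?
candidate = (x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])
assert candidate == (-1, -1)
assert candidate != b3  # (-1, -1) != (1, 1): no such T exists
```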

However, this particular exercise happens to be easier to do in reverse: since $\beta_1$ and $\beta_2$ span all of $\mathbb R^2$, you know that $T$, if it exists, must have rank $2$ and therefore be a bijection. So we can ask instead whether its inverse can exist. What makes this easier is that it is very easy to see a linear relation between the $\beta_i$s, namely $\beta_1+\beta_2=\beta_3$. So if $T$ exists and has an inverse, then $T^{-1}(\beta_1)+T^{-1}(\beta_2)=T^{-1}(\beta_3)$, or in other words $\alpha_1+\alpha_2=\alpha_3$. But that is most definitely not the case, so $T$ cannot exist here.
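The reverse check is a one-liner to verify (my own sketch): the $\beta$s satisfy the relation but the $\alpha$s do not.

```python
a1, a2, a3 = (1, -1), (2, -1), (-3, 2)
b1, b2, b3 = (1, 0), (0, 1), (1, 1)

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

assert add(b1, b2) == b3    # the relation beta_1 + beta_2 = beta_3 holds
assert add(a1, a2) != a3    # but alpha_1 + alpha_2 = (3, -2) != alpha_3
```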
