I have two vectors – the input and output. I would like to find a matrix that multiplied by the input vector gives the output vector.
What algorithms should be used? What is their complexity?
Tags: linear-algebra, matrices, systems-of-equations, vectors
In general, the problem you describe is not well-defined.
For example, there are many matrices that will map $(1,1,1)$ to $(1,2,3)$: pick your favorite two vectors, $\mathbf{v}_1$ and $\mathbf{v}_2$, and there is a matrix that sends $(0,1,0)$ to $\mathbf{v}_1$, $(0,0,1)$ to $\mathbf{v}_2$, and sends $(1,1,1)$ to $(1,2,3)$ (its first column is then forced: it must be $(1,2,3)-\mathbf{v}_1-\mathbf{v}_2$).
For the second problem you describe (finding a matrix that sends three given vectors $A$, $B$, $C$ to prescribed vectors $D$, $E$, $F$), the answer depends on $A$, $B$, and $C$. If you are talking about 3-dimensional vectors, and $A$, $B$, and $C$ are linearly independent, then you can always find one and only one matrix that will do it (I explain how below). But if they are linearly dependent, then it may be impossible, depending on $D$, $E$, and $F$. For example, if $A=(1,0,0)$, $B=(0,1,0)$, and $C=(1,1,0)$, then there is no matrix that sends $A$ to $(1,0,0)$, $B$ to $(0,1,0)$, and $C$ to $(0,0,1)$ (because $C=A+B$, so a matrix that sends $A$ to $D$ and $B$ to $E$ must send $C$ to $D+E$).
Assuming that $A$, $B$, and $C$ are 3-dimensional vectors, and that they are linearly independent (there is no way to pick scalars $a,b,c$, not all of them zero, such that $aA + bB + cC = (0,0,0)$), then what you want can always be done. Here is one way of doing it:
Construct the matrix that has $A$, $B$, and $C$ as its columns. Call it $T$.
Then find $T^{-1}$, the inverse of $T$ (if $A$, $B$, and $C$ are linearly independent, this can be done). Now let $R$ be the matrix that has $D$, $E$, and $F$ as its columns.
The matrix you want is $RT^{-1}$.
For example, say $A=(1,1,1)$, $B=(1,0,1)$, $C=(1,2,3)$, and $D=(3,1,-1)$, $E=(0,0,1)$, $F=(3,4,0)$. Then $T$ will be $$T=\left(\begin{array}{ccc} 1& 1 & 1\\ 1 & 0 & 2\\ 1 & 1 & 3 \end{array}\right).$$ We find its inverse: $$\begin{align*} \left(\begin{array}{ccc|ccc} 1& 1 & 1 & 1 & 0 & 0\\ 1 & 0 & 2 & 0 & 1 & 0\\ 1 & 1 & 3 & 0 & 0 & 1 \end{array}\right) &\to \left(\begin{array}{rrr|rrr} 1 & 1 & 1 & 1 & 0 & 0\\ 0 & -1 & 1 & -1 & 1 & 0\\ 0 & 0 & 2 & -1 & 0 & 1 \end{array}\right) \to \left(\begin{array}{rrr|rrr} 1 & 1 & 1 & 1 & 0 & 0\\ 0 & 1 & -1 & 1 & -1 & 0\\ 0 & 0 & 1 & -\frac{1}{2} & 0 & \frac{1}{2} \end{array}\right)\\ &\to\left(\begin{array}{rrr|rrr} 1 & 1 & 0 & \frac{3}{2} & 0 & -\frac{1}{2}\\ 0 & 1 & 0 & \frac{1}{2} & -1 & \frac{1}{2}\\ 0 & 0 & 1 & -\frac{1}{2} & 0 & \frac{1}{2} \end{array}\right)\\ & \to \left(\begin{array}{rrr|rrr} 1 & 0 & 0 & 1 & 1 & -1\\ 0 & 1 & 0 & \frac{1}{2} & -1 & \frac{1}{2}\\ 0 &0 & 1 & -\frac{1}{2} & 0 & \frac{1}{2} \end{array}\right),\end{align*}$$ so $$T^{-1} = \left(\begin{array}{rrr} 1 & 1 & -1\\ \frac{1}{2} & -1 & \frac{1}{2}\\ -\frac{1}{2} & 0 & \frac{1}{2} \end{array}\right).$$
Then $$M= RT^{-1} = \left(\begin{array}{rrr} 3 & 0 & 3\\ 1 & 0 & 4\\ -1 & 1 & 0 \end{array}\right)\left(\begin{array}{rrr} 1 & 1 & -1\\ \frac{1}{2} & -1 & \frac{1}{2}\\ -\frac{1}{2} & 0 & \frac{1}{2} \end{array}\right) = \left(\begin{array}{rrr} \frac{3}{2} & 3 & -\frac{3}{2}\\ -1 & 1 & 1\\ -\frac{1}{2} & -2 & \frac{3}{2} \end{array}\right).$$ You can check directly that $MA=D$, $MB=E$, and $MC=F$.
Explanation. The matrix $T$ maps $(1,0,0)$ to $A$, $(0,1,0)$ to $B$, and $(0,0,1)$ to $C$. The matrix $R$ sends $(1,0,0)$ to $D$, $(0,1,0)$ to $E$, and $(0,0,1)$ to $F$.
That means that the matrix $T^{-1}$ maps $A$ to $(1,0,0)$, $B$ to $(0,1,0)$, and $C$ to $(0,0,1)$. When we multiply $RT^{-1}$, we are composing the functions. So $A$ is sent to $(1,0,0)$ by $T^{-1}$, and then to $D$ by $R$; in summary, $A$ is mapped to $D$. Similarly, $B$ is sent first to $(0,1,0)$ by $T^{-1}$ and then to $E$ by $R$. And finally, $C$ is sent to $(0,0,1)$ by $T^{-1}$, and to $F$ by $R$. That's why $M$ does what you want it to do.
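As a sanity check, the construction above can be reproduced numerically. Here is a short NumPy sketch using the example matrices from the worked computation:

```python
import numpy as np

# Columns of T are the input vectors A, B, C; columns of R are the
# desired images D, E, F (from the worked example above).
T = np.array([[1, 1, 1],
              [1, 0, 2],
              [1, 1, 3]], dtype=float)
R = np.array([[3, 0, 3],
              [1, 0, 4],
              [-1, 1, 0]], dtype=float)

# The matrix that sends each column of T to the corresponding column of R.
M = R @ np.linalg.inv(T)

# Verify: M applied to the input columns reproduces the output columns.
print(np.allclose(M @ T, R))  # True
```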
Caveat. If $A$, $B$, and $C$ are not linearly independent, then you need $D$, $E$, and $F$ to satisfy the same linear relations as $A$, $B$, and $C$ do in order for you to be able to find some map that works.
So, what do you do if $A$, $B$, and $C$ are not linearly independent?
If $A$ is the zero vector, then you need $D$ to be the zero vector as well; if it is not, there is no matrix $M$. If $A=D=\mathbf{0}$, then you can ignore them (any matrix will send $\mathbf{0}$ to $\mathbf{0}$); discard them, and proceed as described below.
If $A$ is not zero, but $B$ is a scalar multiple of $A$, $B=\lambda A$, then you need $E=\lambda D$. If this is not the case, then there is no matrix $M$. If this is the case, then you can discard $B$ and $E$, because if you find a matrix that sends $A$ to $D$, it will automatically send $B=\lambda A$ to $E=\lambda D$.
If $A$ is not zero, $B$ is not a scalar multiple of $A$, then $C$ is a linear combination of $A$ and $B$, $C=\alpha A + \beta B$. Then you need $F=\alpha D+\beta E$. If this is not true, then there is no matrix $M$. If this is true, then you can discard $C$ and $F$, because any matrix that sends $A$ to $D$ and $B$ to $E$ will automatically send $C=\alpha A + \beta B$ to $\alpha D+\beta E=F$.
So, now we are in a situation where we have two vectors and two images. Call them $A$ and $B$, with desired images $D$ and $E$.
If $A$ and $B$ are still linearly dependent, then one of them will be a multiple of the other; check to see that $D$ and $E$ also satisfy that condition; if they do not, there is no matrix $M$. If they do, then discard and ignore the vector that is a multiple of the other.
So now you are down to either 1 or 2 vectors that are linearly independent. Add vectors from among $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$ so that you get three vectors that are a basis. Then find a matrix $M$, as described above for the linearly independent case, that does exactly what you want with your original vectors, and does anything to the new vectors (for example, sends them to $(0,0,0)$).
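The padding step above (keep the independent vectors, extend to a basis with standard basis vectors, send the padding vectors anywhere, e.g. to $\mathbf{0}$, then form $RT^{-1}$) can be sketched in NumPy. The helper name `matrix_mapping` and the choice of sending the padding vectors to zero are illustrative assumptions, not part of the answer above:

```python
import numpy as np

def matrix_mapping(inputs, outputs, dim=3):
    """Build M sending each input vector to its output vector, padding the
    inputs to a basis with standard basis vectors that are mapped to 0."""
    cols_in = list(inputs)
    cols_out = list(outputs)
    for e in np.eye(dim):
        if len(cols_in) == dim:
            break
        # Keep e only if it is independent of the columns collected so far.
        candidate = np.column_stack(cols_in + [e])
        if np.linalg.matrix_rank(candidate) == len(cols_in) + 1:
            cols_in.append(e)
            cols_out.append(np.zeros(dim))  # send the padding vector to 0
    T = np.column_stack(cols_in)
    R = np.column_stack(cols_out)
    return R @ np.linalg.inv(T)

# Example: a single vector A with prescribed image D.
A = np.array([1.0, 1.0, 1.0])
D = np.array([3.0, 1.0, -1.0])
M = matrix_mapping([A], [D])
print(np.allclose(M @ A, D))  # True
```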
Best Answer
So you want a matrix which, given the input vector $a$, produces the output vector $b$.
Here is a rank-1 matrix which does that: $$M=\frac{ba^T}{a^Ta}=ba^+$$ where $a^+$ is the pseudoinverse of $a$.
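A quick numerical check of the rank-1 formula, with illustrative example vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])   # input vector (example values)
b = np.array([4.0, 0.0, -2.0])  # desired output vector (example values)

# M = b a^T / (a^T a); np.outer builds the rank-1 matrix b a^T.
M = np.outer(b, a) / (a @ a)

print(np.allclose(M @ a, b))  # True
```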
Later, you discover that you actually need to map a set of input vectors $\{a_k\}$ to a set of output vectors $\{b_k\}$. Here is a matrix which addresses that situation: $$M=BA^+$$ where $B$ is the matrix whose columns are the $\{b_k\}$ vectors, and $A$ is the matrix whose columns are the $\{a_k\}$.
For a more complete solution, you can include contributions from the nullspace of $A$ $$M=BA^+ + G\,(I-AA^+)$$ where $G$ is an arbitrary matrix.
This solution remains valid when $A$ reverts to being the vector $a$, and is likely the answer to your original question.
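A minimal numerical check of the last formula, using NumPy's `np.linalg.pinv`. The dimensions and random data are illustrative; the check that $MA=B$ holds exactly assumes the $a_k$ are linearly independent, so that $A^+A=I$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))  # columns are the input vectors a_k
B = rng.standard_normal((4, 2))  # columns are the output vectors b_k
G = rng.standard_normal((4, 4))  # arbitrary matrix (nullspace contribution)

A_pinv = np.linalg.pinv(A)
M = B @ A_pinv + G @ (np.eye(4) - A @ A_pinv)

# M maps each a_k to the corresponding b_k, for any choice of G,
# because (I - A A^+) annihilates the columns of A.
print(np.allclose(M @ A, B))  # True
```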