[Math] Why doesn’t the dot product give you the coefficients of the linear combination

linear-algebra, vector-spaces, vectors

So the setting is $\Bbb R^{2}$.

Let's pick two unit vectors that are linearly independent. Say: $v_{1}= \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix}$ and $v_{2} = \begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix}$.

Now, let's pick another vector with length smaller than $1$, say, $a = \begin{bmatrix} \frac{1}{2} \\ 0\end{bmatrix}$.

I've been trying to understand the dot product geometrically, and what I've read online has led me to believe that $a \cdot v_{1}$ is the scalar $c$ so that $cv_{1}$ is the "shadow" of $a$ on $v_{1}$. Similarly, $a \cdot v_{2}$ is the scalar $d$ so that $dv_{2}$ is the "shadow" of $a$ on $v_{2}$.

If this is true, then it should be that $cv_{1} + dv_{2} = a$, right? But this isn't the case.

We have $a \cdot v_{1} = \frac{1}{4}$ and $a \cdot v_{2} = \frac{\sqrt{3}}{4}$. So $$cv_{1} + d v_{2} = \frac{1}{4}\begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix} + \frac{\sqrt{3}}{4}\begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{4}\end{bmatrix} \neq a.$$
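As a sanity check, here is the same arithmetic as a minimal NumPy sketch (nothing here beyond the three vectors defined above), and it agrees:

```python
import numpy as np

# The vectors defined above.
v1 = np.array([1/2, np.sqrt(3)/2])
v2 = np.array([np.sqrt(3)/2, 1/2])
a  = np.array([1/2, 0.0])

c = a @ v1   # 0.25
d = a @ v2   # ~0.4330 = sqrt(3)/4

print(c * v1 + d * v2)                   # [0.5  0.4330...]
print(np.allclose(c * v1 + d * v2, a))   # False: the "shadows" don't add back up to a
```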

This means something is wrong with my intuition about the dot product, but I'm not sure what. Any help would be appreciated.

Best Answer

Your $v_1$ and $v_2$ need to be orthonormal. (Yours are already unit vectors, so what is missing is orthogonality.) To expand on learnmore's answer: the reason you need orthogonality for this to work is that if $v_1$ and $v_2$ are not orthogonal, then they have a non-zero dot product $v_1\cdot v_2$. This means that $v_2$ carries some weight "in the direction of" $v_1$. Your intuition that $c = a\cdot v_1$ is the "amount of $a$ in the direction $v_1$" is correct (for unit vectors) - keep that intuition! Similarly, $d=a\cdot v_2$ is the amount of $a$ in the direction of $v_2$.

However, since $v_1$ and $v_2$ are not perpendicular, the number $c$ has a "piece" of $v_2$ in it, and the number $d$ has a "piece" of $v_1$ in it. So, when you try to expand $a$ in the basis $\{v_1,v_2\}$, you need an extra term to compensate for the "non-orthogonal mixing" between $v_1$ and $v_2$.

The technical details are as follows. Since $v_1$ and $v_2$ are linearly independent, they form a basis of $\Bbb R^{2}$, so we can write

$$ a = \alpha v_1+\beta v_2 $$ for some scalars $\alpha, \beta$. Now, take the dot product of $a$ with $v_1$ and expand it out:

$$ a\cdot v_1 = (\alpha v_1+\beta v_2)\cdot v_1 = \alpha\, v_1\cdot v_1 + \beta\, v_1\cdot v_2 = \alpha + \beta\, v_1\cdot v_2 $$ (using $v_1\cdot v_1 = \|v_1\|^{2} = 1$, since $v_1$ is a unit vector). Similarly, expand out $a\cdot v_2$:

$$ a\cdot v_2 = \alpha v_1\cdot v_2 + \beta $$
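To make this concrete with the vectors from the question, where $v_1\cdot v_2 = \frac{\sqrt{3}}{2}$, these two equations become the system

$$ \frac{1}{4} = \alpha + \frac{\sqrt{3}}{2}\beta, \qquad \frac{\sqrt{3}}{4} = \frac{\sqrt{3}}{2}\alpha + \beta, $$

whose solution is $\alpha = -\frac{1}{2}$ and $\beta = \frac{\sqrt{3}}{2}$ - quite different from $c = \frac{1}{4}$ and $d = \frac{\sqrt{3}}{4}$. Indeed,

$$ -\frac{1}{2}\begin{bmatrix} \frac{1}{2} \\ \frac{\sqrt{3}}{2}\end{bmatrix} + \frac{\sqrt{3}}{2}\begin{bmatrix} \frac{\sqrt{3}}{2} \\ \frac{1}{2}\end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ 0\end{bmatrix} = a. $$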

The extra terms in those equations ($\beta\, v_1\cdot v_2$ and $\alpha\, v_1\cdot v_2$) express the non-orthogonality; they vanish exactly when $v_1\cdot v_2 = 0$, in which case $\alpha = a\cdot v_1$ and $\beta = a\cdot v_2$, recovering your intuition. Written another way, we have

$$ \alpha = a\cdot v_1 - \beta v_1\cdot v_2 $$ and

$$ \beta = a\cdot v_2 - \alpha\, v_1\cdot v_2 $$ which shows clearly that each correct expansion coefficient contains $a\cdot v_j$, plus another piece compensating for the non-orthogonality. I could go on - you can phrase all of this with matrices and such (see the sketch below) - but hopefully this is enough to convince you.
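For completeness, the two dot-product equations above are exactly the Gram-matrix system

$$ \begin{bmatrix} v_1\cdot v_1 & v_1\cdot v_2 \\ v_2\cdot v_1 & v_2\cdot v_2 \end{bmatrix}\begin{bmatrix}\alpha \\ \beta\end{bmatrix} = \begin{bmatrix} a\cdot v_1 \\ a\cdot v_2\end{bmatrix}. $$

Here is a minimal NumPy sketch (the variable names are mine, for illustration) that solves this system for the question's vectors:

```python
import numpy as np

# The vectors from the question.
v1 = np.array([1/2, np.sqrt(3)/2])
v2 = np.array([np.sqrt(3)/2, 1/2])
a  = np.array([1/2, 0.0])

V = np.column_stack([v1, v2])       # basis vectors as columns
G = V.T @ V                         # Gram matrix of dot products
rhs = np.array([a @ v1, a @ v2])    # [a.v1, a.v2]

alpha, beta = np.linalg.solve(G, rhs)
print(alpha, beta)                            # -0.5  0.8660...  (= -1/2, sqrt(3)/2)
print(np.allclose(alpha * v1 + beta * v2, a)) # True: these are the right coefficients
```

For an orthonormal basis the Gram matrix is the identity, and the solve reduces to $\alpha = a\cdot v_1$, $\beta = a\cdot v_2$, which is exactly the intuition you started with.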
