Define coordinates for a non-orthonormal basis

change-of-basis, linear-algebra, orthonormal, vectors

I have two non-orthonormal basis vectors, and I want to represent a third vector as a pair of coordinates in the aforementioned basis. How would I do that? The dot product, which usually turns a vector into a coordinate along each basis vector, doesn't work for non-orthonormal bases. How would I get coordinates in terms of my non-orthonormal basis vectors?

I should also mention that I am coding, and so all my vectors are currently represented as pairs of numbers on an (x, y) plane.

Please ask if you need any clarification; I explained it really poorly here.

Also, I do not know very much linear algebra, so it would be appreciated if you answer in simpler terms 🙂

Best Answer

What you're looking for is the reciprocal basis. If $\{e_1, e_2\}$ is a basis, then there is a unique reciprocal basis $\{e^1, e^2\}$ such that $$ e^1\cdot e_1 = e^2\cdot e_2 = 1, $$$$ e^1\cdot e_2 = e^2\cdot e_1 = 0. \tag{$*$} $$ If you know $e_1\cdot e_1$ and $e_2\cdot e_2$ and $e_1\cdot e_2$ then you can write $$ e^1 = ae_1 + be_2,\quad e^2 = ce_1 + de_2 \tag{$**$} $$ and plug this into ($*$) to get a system of equations to solve for $a,b,c,d$.
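Since you mentioned you're coding, here's what solving that system looks like numerically. This is a minimal Python/NumPy sketch; the example basis and all names (`e1`, `e2`, `M`, `rhs`) are my own choices, and any linear solver would do:

```python
import numpy as np

e1 = np.array([2.0, 0.0])   # example non-orthonormal basis vectors
e2 = np.array([1.0, 1.0])

X11, X22, X12 = e1 @ e1, e2 @ e2, e1 @ e2

# Unknowns (a, b, c, d).  Rows encode, in order:
#   a*X11 + b*X12 = 1   (e^1 . e_1 = 1)
#   a*X12 + b*X22 = 0   (e^1 . e_2 = 0)
#   c*X11 + d*X12 = 0   (e^2 . e_1 = 0)
#   c*X12 + d*X22 = 1   (e^2 . e_2 = 1)
M = np.array([[X11, X12, 0.0, 0.0],
              [X12, X22, 0.0, 0.0],
              [0.0, 0.0, X11, X12],
              [0.0, 0.0, X12, X22]])
rhs = np.array([1.0, 0.0, 0.0, 1.0])
a, b, c, d = np.linalg.solve(M, rhs)

r1 = a * e1 + b * e2   # reciprocal vector e^1
r2 = c * e1 + d * e2   # reciprocal vector e^2
```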

The advantage of this is that now the problem of finding $\{e_1, e_2\}$ components is completely solved: every vector $v$ can be written $$ v = (v\cdot e^1)e_1 + (v\cdot e^2)e_2 $$ meaning that $v\cdot e^1$ is the $e_1$ component of $v$ and $v\cdot e^2$ is the $e_2$ component.

This is easy to prove: just write $$ v = v_1e_1 + v_2e_2 $$ and apply ($*$): $$ v\cdot e^1 = (v_1)(1) + (v_2)(0) = v_1, $$$$ v\cdot e^2 = (v_1)(0) + (v_2)(1) = v_2. $$
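In code, once you have the reciprocal vectors (here `r1`, `r2` from the sketch above), reading off the components is just two dot products:

```python
v = np.array([3.0, 1.0])                    # any vector, as an (x, y) pair
v1, v2 = v @ r1, v @ r2                     # its coordinates in the {e1, e2} basis
assert np.allclose(v1 * e1 + v2 * e2, v)    # sanity check: v is reconstructed
```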

Now take note of something really important: an orthonormal basis is its own reciprocal!


This idea extends in the obvious way to any number of dimensions.


Jme has asked for an explicit calculation of the reciprocal basis, so I will provide one. We assume that the quantities $$ X_{11} = e_1\cdot e_1,\quad X_{22} = e_2\cdot e_2,\quad X_{12} = e_1\cdot e_2 $$ are known. Then combining ($**$) with ($*$) yields four linear equations in the four unknowns $a, b, c, d$: $$ 1 = e^1\cdot e_1 = aX_{11} + bX_{12},\quad 1 = e^2\cdot e_2 = cX_{12} + dX_{22}, $$$$ 0 = e^1\cdot e_2 = aX_{12} + bX_{22},\quad 0 = e^2\cdot e_1 = cX_{11} + dX_{12}. $$ In this 2D case the equations are easy to solve via substitution since the last two give us $$ b = \frac{-X_{12}}{X_{22}}a,\quad c = \frac{-X_{12}}{X_{11}}d. $$ (Note that $X_{11} \ne 0$ and $X_{22} \ne 0$ because with an inner product $v\cdot v = 0 \implies v = 0$.) Now we plug in to the first two equations to get $$ a = \frac{X_{22}}{X_{11}X_{22} - X_{12}^2},\quad d = \frac{X_{11}}{X_{11}X_{22} - X_{12}^2} $$ and then finally $$ b = c = \frac{-X_{12}}{X_{11}X_{22} - X_{12}^2}. $$ The denominator $X_{11}X_{22} - X_{12}^2$ is nonzero by Cauchy-Schwarz: the inequality $X_{12}^2 \le X_{11}X_{22}$ is strict because $e_1$ and $e_2$ are linearly independent. So we can write $$ e^1 = \frac{X_{22}e_1 - X_{12}e_2}{X_{11}X_{22} - X_{12}^2},\quad e^2 = \frac{-X_{12}e_1 + X_{11}e_2}{X_{11}X_{22} - X_{12}^2}. $$
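Those closed-form expressions translate directly into code (a sketch continuing the NumPy example above; the function name is my own):

```python
def reciprocal_2d(e1, e2):
    """Reciprocal basis from the closed-form 2D solution above.
    Assumes e1, e2 are linearly independent NumPy vectors, so det != 0."""
    X11, X22, X12 = e1 @ e1, e2 @ e2, e1 @ e2
    det = X11 * X22 - X12 ** 2          # nonzero by Cauchy-Schwarz
    r1 = (X22 * e1 - X12 * e2) / det    # e^1
    r2 = (-X12 * e1 + X11 * e2) / det   # e^2
    return r1, r2
```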

In arbitrary dimensions, it's probably simplest to consider the Gram matrix $G_{ij} = e_i\cdot e_j$. This will have an inverse $G^{-1}$, and then we see $$ \sum_j (G^{-1})_{ij}e_j\cdot e_k = \delta_{ik} = e^i\cdot e_k $$ and because all the $e_k$ form a basis this implies that $$ \sum_j (G^{-1})_{ij}e_j = e^i. $$ If $E = (e_1,\dotsc, e_n)$ is the matrix whose columns are the components of the $e_k$ in some other basis and similarly $E' = (e^1,\dotsc, e^n)$, then exploiting the symmetry of $G$ we can write $$ EG^{-1} = E'. $$ But $G = E^TE$ and $E$ is invertible since the $e_k$ form a basis, so this simplifies to $$ E' = E(E^TE)^{-1} = E^{-T} $$ where $E^{-T} = (E^{-1})^T = (E^T)^{-1}$. To demonstrate that this gives the same result as above, we see $$ G = \begin{pmatrix}X_{11}&X_{12} \\ X_{12}&X_{22}\end{pmatrix} \implies G^{-1} = \frac1{X_{11}X_{22} - X_{12}^2}\begin{pmatrix}X_{22}&-X_{12}\\-X_{12}&X_{11}\end{pmatrix} $$ from which you can easily read off the components of $e^1$ and $e^2$ and see that they match our previous equations.
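The $E' = E^{-T}$ result makes the general case a one-liner in code (a NumPy sketch under my own names; it assumes the columns of `E` really do form a basis, i.e. `E` is invertible):

```python
def reciprocal_basis(E):
    """Columns of E are e_1, ..., e_n in any dimension n; the columns of
    the result are the reciprocal vectors e^1, ..., e^n, via E' = E^{-T}."""
    return np.linalg.inv(E).T

# Continuing the 2D example above: coordinates of v in {e1, e2}.
E = np.column_stack([e1, e2])
Ep = reciprocal_basis(E)
coords = Ep.T @ v   # (v . e^1, v . e^2); equivalently np.linalg.inv(E) @ v
```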

To demonstrate the abbreviated formula, let $e_1 = (a_1, a_2)^T$ and $e_2 = (b_1, b_2)^T$. Then $$ \begin{pmatrix}a_1&b_1\\a_2&b_2\end{pmatrix}^{-T} = \frac1{a_1b_2-b_1a_2}\begin{pmatrix}b_2&-a_2\\-b_1&a_1\end{pmatrix} $$ thus $$ e^1 = \frac1{a_1b_2-b_1a_2}\begin{pmatrix}b_2\\-b_1\end{pmatrix},\quad e^2 = \frac1{a_1b_2-b_1a_2}\begin{pmatrix}-a_2\\a_1\end{pmatrix}. $$
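And because your vectors are plain (x, y) pairs, this final formula needs nothing but arithmetic. Here is a dependency-free sketch (hypothetical helper names; it assumes the basis vectors aren't parallel, so the determinant is nonzero):

```python
def reciprocal_2d_xy(e1, e2):
    """e1 = (a1, a2), e2 = (b1, b2) as plain (x, y) tuples.
    Assumes a1*b2 - b1*a2 != 0, i.e. the basis vectors aren't parallel."""
    (a1, a2), (b1, b2) = e1, e2
    det = a1 * b2 - b1 * a2
    return (b2 / det, -b1 / det), (-a2 / det, a1 / det)

def coordinates(v, r1, r2):
    """Coordinates of v = (x, y) in the original basis: two dot products
    with the reciprocal vectors."""
    return (v[0] * r1[0] + v[1] * r1[1], v[0] * r2[0] + v[1] * r2[1])
```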
