[Math] What are the rules for complex-component vectors and why

complex-numbers, complex-analysis, vector-analysis, vector-spaces

I want to take the inverse of a dot product, where both vectors have complex components. In other words, if $\textbf{A} \cdot \textbf{B} = d$, and I know $\textbf{A}$ and $d$, I want to find a $\textbf{B}$. I know that I cannot do so uniquely, which is fine; I have a procedure for creating a set of vectors that will satisfy $\textbf{A} \cdot \textbf{B} = d$. But it relies on finding vectors that are orthogonal to $\textbf{A}$ and to each other. Normally this would not be a problem: just take $\textbf{A}$, zero all but two of its components, swap those two, and negate one of them (sketched below), making sure to pick a different pair every time. For more orthogonal vectors, you can take cross products.
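For concreteness, here is the real-component version of that trick as a quick numpy sketch (the helper name `perp_via_swap` is just mine for illustration):

```python
import numpy as np

def perp_via_swap(a, i, j):
    """Zero all but components i and j of `a`, swap them, negate one.
    For real `a`, the result is orthogonal: a[i]*(-a[j]) + a[j]*a[i] = 0."""
    b = np.zeros_like(a)
    b[i] = -a[j]
    b[j] = a[i]
    return b

A = np.array([1.0, 2.0, 3.0])
B = perp_via_swap(A, 0, 1)   # array([-2., 1., 0.])
print(np.dot(A, B))          # 0.0
```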

The difficulties arise when I consider vectors with complex components. I want to normalize each of the orthogonal vectors, which means dividing by their magnitudes. I have read that you need to divide by $\sqrt{|v_x|^2 + |v_y|^2 + |v_z|^2 + \cdots}$. What is the justification for this? Since I am just normalizing vectors, do I absolutely have to take the magnitudes of the components? Also, my technique for finding the inverse dot product relies on the identity $\textbf{A} \cdot \textbf{B} = d = |\textbf{A}||\textbf{B}|$. What modifications might I need to make? I can post more details if people want. Also, if anyone has links to material to read, especially with regard to the reasons why, I would be most appreciative.

Best Answer

I wouldn't call this the "inverse of a dot product".

Given complex vectors $v = (v_{1}, \ldots, v_{n})$ and $w = (w_{1}, \ldots, w_{n})$, their scalar (= dot) product is given by $v \cdot w = \sum_{j=1}^{n} v_{j} \overline{w}_{j} = v_{1} \overline{w}_{1} + \cdots + v_{n} \overline{w}_{n}$. Why? Well, you want to generalize the usual dot product on $\mathbb{R}^{n}$, but you also want $v \cdot v \geq 0$ for all $v$, and the vector $(i,0,\ldots,0)$ shows that you can't do without the conjugation: without it, this vector's product with itself would be $i^{2} = -1$. You might ask: why should I care whether $v \cdot v \geq 0$ for all $v$? Most people do care: the expression $\|v\| = \sqrt{v \cdot v}$ should define a norm and $\|v - w\|$ a metric on $\mathbb{C}^{n}$, and taking square roots of negative numbers (or of general complex numbers) simply isn't well-defined. Note that $\|v\| = \sqrt{v \cdot v} = \sqrt{\sum_{j=1}^{n} |v_{j}|^{2}}$, which is exactly the normalization you read about.
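If a numerical check helps, here is a small numpy sketch of this product (the helper `hdot` is just my name for it; note that numpy's built-in `np.vdot` uses the other convention and conjugates its first argument instead):

```python
import numpy as np

def hdot(v, w):
    """Hermitian dot product with the convention above:
    v . w = sum_j v_j * conj(w_j), conjugate-linear in the second slot."""
    return np.sum(v * np.conj(w))

v = np.array([1j, 0.0, 0.0])
print(np.sum(v * v))             # (-1+0j): without conjugation, "v.v" goes negative
print(hdot(v, v))                # (1+0j):  with conjugation, v.v = sum |v_j|^2 >= 0
print(np.sqrt(hdot(v, v).real))  # ||v|| = 1.0
```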

Having settled this, let $v$ be a non-zero vector. Its orthogonal complement $U = v^{\perp} = \{u \in \mathbb{C}^{n} : u \cdot v = 0\}$ is the set of all vectors orthogonal to $v$. Since $U$ is determined by the single linear equation $u_{1}\overline{v}_{1} + \cdots + u_{n}\overline{v}_{n} = 0$, it is an $(n-1)$-dimensional subspace of $\mathbb{C}^{n}$. Finding solutions is easily achieved using Gaussian elimination; this gives you vectors $u_{1}, \ldots, u_{n-1}$, which you can turn into an orthonormal basis of $U$ using Gram–Schmidt (where $\langle u, v \rangle$ is understood to mean $u \cdot v$). The fact that you're working over $\mathbb{C}$ and not over $\mathbb{R}$ is immaterial; just be careful to note that $v \cdot (\lambda w) = \overline{\lambda} (v \cdot w)$, i.e., the dot product is conjugate-linear in the second variable. A sketch follows.
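Here is that step in numpy, reusing the illustrative `hdot` from above. Feeding Gram–Schmidt the vector $v$ followed by the standard basis (and discarding vectors that become numerically zero, i.e. linearly dependent ones) yields $v/\|v\|$ first and then an orthonormal basis of $U = v^{\perp}$:

```python
import numpy as np

def hdot(v, w):
    return np.sum(v * np.conj(w))  # conjugate-linear in the second slot

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt for the Hermitian product above.
    The projection coefficient of x onto a unit vector q is hdot(x, q);
    vectors that become (numerically) zero are dependent and skipped."""
    basis = []
    for x in vectors:
        x = np.asarray(x, dtype=complex)
        for q in basis:
            x = x - hdot(x, q) * q
        norm = np.sqrt(hdot(x, x).real)
        if norm > tol:
            basis.append(x / norm)
    return basis

v = np.array([1.0 + 2.0j, -1.0j, 3.0])
ortho = gram_schmidt([v] + list(np.eye(3, dtype=complex)))
u_basis = ortho[1:]                        # orthonormal basis of v-perp
print([abs(hdot(u, v)) for u in u_basis])  # both ~0.0
```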

Finally, in order to solve the equation $w \cdot v = d$, simply take any $u \in U = v^{\perp}$ and put $w = u + \frac{d}{v \cdot v} v$; then $w \cdot v = \left(u + \frac{d}{v \cdot v}v\right) \cdot v = (u \cdot v) + \frac{d}{v \cdot v}(v \cdot v) = 0 + d = d$.
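Putting the pieces together numerically (again with the illustrative `hdot` from above, and taking $u = 0$ for simplicity; any $u$ from the $v^{\perp}$ basis could be added in):

```python
import numpy as np

def hdot(v, w):
    return np.sum(v * np.conj(w))  # v . w = sum_j v_j * conj(w_j)

v = np.array([1.0 + 2.0j, -1.0j, 3.0])
d = 4.0 - 5.0j

w = (d / hdot(v, v)) * v   # the particular solution with u = 0
print(hdot(w, v))          # (4-5j), i.e. w . v = d as required
```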
