[Math] Algorithm to find an orthogonal basis (orthogonal to a given vector)

linear algebra, numerical linear algebra

Let $K$ be a given integer, with $K$ even (and "large"). Let $\mathbf{v} \in \mathbb{R}^{K \times 1}$ be a given non-zero (column) vector. Write a (possibly efficient) algorithm to construct a matrix $\mathbf{B} \in \mathbb{R}^{K \times (K-1)}$ such that:
$(1)$ the column vectors of $\mathbf{B}$ are orthogonal to each other; $(2)$ $\mathbf{B}^T \mathbf{v} = \mathbf{0}$, where $^T$ denotes transpose and $\mathbf{0}$ is a $(K-1)$-dimensional vector of all zeros.

Denoting with $\mathbf{b}_i$, $i=1,\dots,K-1$, the $i$-th column of $\mathbf{B}$, the first requirement can be rewritten as: $\mathbf{b}_i^T \mathbf{b}_j = 0$ for $i \neq j$.

Essentially, the problem asks for an algorithm that finds an orthogonal basis of the orthogonal complement of the (one-dimensional) subspace spanned by $\mathbf{v}$. The algorithm can be written in pseudo-code (or in MATLAB-like or C-like code). It takes as input the vector $\mathbf{v}$ and the integer $K$; its output is the matrix $\mathbf{B}$.

Note: the algorithm cannot use random trial-and-error, i.e., generate a random vector, test whether it is orthogonal to $\mathbf{v}$ and to the previously found columns of $\mathbf{B}$, discard it if it is not, and keep it if it is. This brute-force approach is explicitly forbidden. However, it is allowed as an initialization step, i.e., for the first column of the matrix or as a "random guess" at the start, should the need arise. "Normality", i.e., finding an orthonormal basis, is not required. Input checking (e.g., whether $K$ is an even integer) is not required.

EDIT: My thoughts and previous attempts: the problem is essentially an implementation of the Gram–Schmidt orthonormalization process. However, it cannot simply be used as stated, because the Gram–Schmidt process assumes that one starts with a basis (which we do not have). What we can actually construct is a spanning set of vectors, i.e., $\mathbf{v}, e_1, \dots, e_K$, where $e_i$ denotes the $i$-th canonical basis vector.

I have already implemented Gram–Schmidt, paying attention to numerical issues. The problem is a slight generalization of the process: one does not start with a basis, but with a spanning set, finds a suitable basis within that set (I don't know how to do this) which must contain $\mathbf{v}$ as its first element, and then applies the Gram–Schmidt process.
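For reference, here is a minimal MATLAB-style sketch of this idea: run Gram–Schmidt over the spanning set $\mathbf{v}, e_1, \dots, e_K$ and simply skip any vector that becomes numerically zero after orthogonalization, since such a vector is linearly dependent on the ones already kept. The function name `orth_complement_gs` and the tolerance choice are illustrative assumptions, not part of the original question.

```matlab
function B = orth_complement_gs(v, K)
% Gram-Schmidt over the spanning set {v, e_1, ..., e_K} (sketch).
% Vectors that are numerically zero after orthogonalization are
% linearly dependent on the ones already kept and are skipped.
    tol = K * eps(norm(v));            % dependence threshold (heuristic)
    Q = v / norm(v);                   % first basis vector: v, normalized
    for i = 1:K
        u = zeros(K, 1);  u(i) = 1;    % canonical basis vector e_i
        u = u - Q * (Q' * u);          % project out the kept directions
        if norm(u) > tol               % keep u only if it is independent
            Q = [Q, u / norm(u)];      %#ok<AGROW>
        end
        if size(Q, 2) == K, break; end % full orthonormal basis found
    end
    B = Q(:, 2:end);                   % drop v: the rest spans v's complement
end
```

In exact arithmetic exactly one of the $e_i$ is skipped, so $\mathbf{B}$ ends up with $K-1$ columns; the cost is $O(K^3)$, which the answer below improves on.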

P.S. Any help is appreciated; I am no master of linear algebra, especially of numerical implementations. Thanks.

Best Answer

By the way, most likely a much simpler and more efficient way to compute your matrix $B$ is to use a single Householder reflection. Consider $w=v\pm\|v\|_2e_1$ (the sign in "$\pm$" is usually chosen to be the same as the sign of the first component of $v$, for numerical reasons, to avoid cancellation). Then set $Q=I-2ww^T/w^Tw$. Since $Qv=\mp\|v\|_2e_1$ (depending on the choice of the sign in $w$), we have $v=\mp\|v\|_2Qe_1$ (multiply by $Q^T=Q$). This means that $v$ is a scalar multiple of the first column (row) of the orthogonal matrix $Q$. Let $Q=[q_1,\ldots,q_K]$. It follows that $q_i^Tv=0$ for $i=2,\ldots,K$. Hence $B=[q_2,\ldots,q_K]$ satisfies $B^Tv=0$ and $B^TB=I$.
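A minimal MATLAB sketch of this construction (the function name is mine; it assumes $v$ is a nonzero column vector):

```matlab
function B = orth_complement_householder(v)
% Build B whose columns form an orthonormal basis of the orthogonal
% complement of v, via a single Householder reflection.
    K = numel(v);
    s = sign(v(1));  if s == 0, s = 1; end    % sign choice avoids cancellation
    w = v + s * norm(v) * [1; zeros(K-1, 1)]; % w = v +/- ||v||_2 * e_1
    Q = eye(K) - 2 * (w * w') / (w' * w);     % Householder reflector, Q = Q'
    B = Q(:, 2:K);                            % columns 2..K are orthogonal to v
end
```

A quick sanity check: `norm(B' * v)` and `norm(B' * B - eye(K-1))` should both be at the level of rounding error.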

In contrast to the $O(K^3)$ complexity of the Gram–Schmidt approach (which is of course generally applicable for computing an orthonormal basis of any given set of vectors), this way you get your $B$ using $O(K^2)$ operations if you need the full matrix explicitly, or $O(K)$ if it is enough to have it implicitly represented by $w$.
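To illustrate the implicit representation (a sketch under the same assumptions as above): once $w$ is known, products with $Q$, and hence with $B$, can be formed in $O(K)$ without ever storing $Q$ or $B$.

```matlab
K = numel(v);
s = sign(v(1));  if s == 0, s = 1; end
w = v + s * norm(v) * [1; zeros(K-1, 1)];
beta = 2 / (w' * w);

x   = randn(K, 1);                        % arbitrary test vector
Qx  = x - beta * w * (w' * x);            % Q*x in O(K), without forming Q
Btx = Qx(2:end);                          % B'*x, since B = Q(:,2:end) and Q' = Q

y  = randn(K - 1, 1);
By = [0; y] - beta * w * (w' * [0; y]);   % B*y = Q*[0; y], also O(K)
```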