[Math] Gram-Schmidt and zero vector

linear-algebra, physics

I have a problem concerning the orthogonalization of a coordinate system, which I need for a normal mode analysis of molecular vibrations. I am working on H$_2$O, which gives me a $9$-dimensional vector space ($3N = 9$ Cartesian displacement coordinates for $N = 3$ atoms), with six (mutually orthogonal) basis vectors predetermined, describing the translational and rotational motion of the entire molecule. I want to determine the three remaining vectors by a modified Gram-Schmidt process, but in my case this fails because G-S constructs a zero vector.

As far as I understand, Gram-Schmidt produces a zero vector when the vector currently being orthogonalized lies in the span of the previously processed ones, i.e. when there is a linear dependency somewhere in the set. But given that my six vectors are mutually orthogonal, I don't see how that can be the case here (let alone how I could avoid it).
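
For reference, the core of my orthogonalization step looks essentially like the following (a minimal NumPy sketch of the kind of routine I am using, not my actual code, which works on the mass-weighted vectors shown below); the failure shows up at the norm check:

    import numpy as np

    def modified_gram_schmidt(vectors, tol=1e-10):
        """Orthonormalize `vectors` in order using modified Gram-Schmidt."""
        basis = []
        for k, v in enumerate(vectors):
            w = np.asarray(v, dtype=float).copy()
            # Subtract the projection onto each already-orthonormalized vector,
            # one at a time (this is what makes it the "modified" variant).
            for q in basis:
                w -= np.dot(q, w) * q
            norm = np.linalg.norm(w)
            if norm < tol:
                # The current vector lies (numerically) in the span of the
                # previous ones -- this is the zero-vector failure I am seeing.
                raise ValueError(f"vector {k} reduced to zero")
            basis.append(w / norm)
        return np.array(basis)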

The six predetermined vectors are:

trans-x   trans-y   trans-z   rot-xx    rot-yy    rot-zz
3.9994         0         0         0    0.2552         0
     0    3.9994         0   -0.2552         0         0
     0         0    3.9994         0         0         0
1.0039         0         0         0   -0.5084   -0.7839
     0    1.0039         0    0.5084         0         0
     0         0    1.0039    0.7839         0         0
1.0039         0         0         0   -0.5084    0.7839
     0    1.0039         0    0.5084         0         0
     0         0    1.0039   -0.7839         0         0

Can you see where the problem lies? I have been looking at this for a few days now, including trying alternative approaches to the orthogonalization problem, and I am starting to get frustrated. Given that my Gram-Schmidt algorithm produces a valid 9-dimensional orthogonal set if I use only the first three vectors (the translational coordinates), I assume my implementation is correct and the problem lies somewhere in the rotational coordinate vectors. But I am at a loss as to what exactly is going wrong here. (In the end, it's probably just a case of not seeing the forest for the trees …)

Regards

-M.

Best Answer

I realized I made a mistake right from the start. Gram-Schmidt orthogonalizes a linearly independent set of vectors. I had simply filled my set up with the remaining Cartesian basis vectors (which, in this case, would be the last three columns of a $9\times9$ identity matrix times the mass-weighting) to obtain the "intermediate" coordinates to which I then applied the G-S algorithm. However, this intermediate set is not linearly independent (just as the comments on the original question suggested), and thus G-S produces a zero vector. No surprise there.
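
To make the dependence concrete, here is a quick numerical check (a NumPy sketch; the padding columns are my reading of "the last three columns of a $9\times9$ identity matrix times the mass-weighting", i.e. the mass-weighted $e_7$, $e_8$, $e_9$):

    import numpy as np

    # The six predetermined (mass-weighted) vectors from the question, as columns:
    # trans-x, trans-y, trans-z, rot-xx, rot-yy, rot-zz.
    B = np.array([
        [3.9994, 0.0,    0.0,     0.0,    0.2552,  0.0    ],
        [0.0,    3.9994, 0.0,    -0.2552, 0.0,     0.0    ],
        [0.0,    0.0,    3.9994,  0.0,    0.0,     0.0    ],
        [1.0039, 0.0,    0.0,     0.0,   -0.5084, -0.7839 ],
        [0.0,    1.0039, 0.0,     0.5084, 0.0,     0.0    ],
        [0.0,    0.0,    1.0039,  0.7839, 0.0,     0.0    ],
        [1.0039, 0.0,    0.0,     0.0,   -0.5084,  0.7839 ],
        [0.0,    1.0039, 0.0,     0.5084, 0.0,     0.0    ],
        [0.0,    0.0,    1.0039, -0.7839, 0.0,     0.0    ],
    ])

    # Sanity check: the six columns really are (numerically) mutually orthogonal ...
    print(np.round(B.T @ B, 3))        # off-diagonal entries round to 0
    print(np.linalg.matrix_rank(B))    # rank 6, as expected

    # ... but padding with the last three mass-weighted Cartesian basis vectors,
    # as I did, gives a rank-deficient 9x9 set, so G-S has to hit a zero vector.
    A = np.hstack([B, 1.0039 * np.eye(9)[:, 6:]])
    print(np.linalg.matrix_rank(A))    # less than 9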

So the task is now to construct the remaining $3N-6$ vectors such that the entire set is linearly independent. Is there a reliable method to achieve this? (Bonus points if all vectors come out mutually orthogonal, so that I won't have to Gram-Schmidt them afterwards.)
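
One construction that seems to fit the bill is sketched below (assuming NumPy; B is the $9\times6$ matrix of predetermined vectors from the question): take an orthonormal basis of the orthogonal complement of their span from the SVD. I would still appreciate confirmation that this is a sound way to do it in the normal-mode context.

    import numpy as np

    def complete_basis(B, tol=1e-8):
        """Return an orthonormal basis of the orthogonal complement of the
        column space of B, i.e. the remaining 3N - rank(B) vectors."""
        U, s, _ = np.linalg.svd(B, full_matrices=True)
        rank = int(np.sum(s > tol))
        # The left singular vectors beyond the numerical rank span the
        # orthogonal complement of span(B).
        return U[:, rank:]

    # For the 9x6 matrix B of the six predetermined vectors, complete_basis(B)
    # returns three orthonormal columns that are also orthogonal to all of B,
    # so the combined set of nine vectors is linearly independent.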