Orthogonalization in GAP

I'm using Eigenvectors in GAP to diagonalize a matrix $M$. The matrix is guaranteed to be diagonalizable, and in fact it should be diagonalizable by an orthogonal transformation. This works fine, and $D$ is the desired diagonal matrix:

C:=Eigenvectors(CF(4),M);
D:=C*M*C^-1;

However, $C$ is usually not orthogonal. There are several methods to orthogonalize it. For example, this MATLAB code https://blogs.mathworks.com/cleve/2016/07/25/compare-gram-schmidt-and-householder-orthogonalization-algorithms/
should be translatable to GAP without much difficulty, but my guess is that it will be slow. Is there anything already built into GAP for this?

I put together an example before some of the comments and answers were written. I'll post it here for reference (I usually work with much larger matrices). I added a simple Gram-Schmidt orthogonalization translated from the linked page:

M:=[
[1,0,0,0,0,2,0,0,0,0,2,0,0,0,0,2],
[0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,2,0,0,2,0,0,2,0,0,0],
[0,0,0,0,-1,0,0,0,0,0,0,0,0,0,0,0],
[2,0,0,0,0,1,0,0,0,0,2,0,0,0,0,2],
[0,0,0,2,0,0,1,0,0,2,0,0,2,0,0,0],
[0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,-1,0,0,0,0,0,0,0],
[0,0,0,2,0,0,2,0,0,1,0,0,2,0,0,0],
[2,0,0,0,0,2,0,0,0,0,1,0,0,0,0,2],
[0,0,0,0,0,0,0,0,0,0,0,-1,0,0,0,0],
[0,0,0,2,0,0,2,0,0,2,0,0,1,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,0],
[2,0,0,0,0,2,0,0,0,0,2,0,0,0,0,1]];

C:=Eigenvectors(CF(4),M);

Print("C*M*C^-1 diagonal? : ",IsDiagonalMat(C*M*C^-1),"\n");
Print("C orthogonal?      : ",TransposedMat(C)=C^-1,"\n");

gramschmidt := function(V)
 local U, n, m, T, norm, k, h;
 norm := v -> Sqrt(v*v);
 T := DimensionsMat(V);
 m := T[1]; n := T[2];
 U := NullMat(m,n);
 U[1] := V[1]/norm(V[1]);
 for k in [2..m] do
  U[k] := V[k];
  for h in [1..k-1] do
   U[k] := U[k] - ((U[h]*U[k])/(U[h]*U[h]))*U[h];
  od;
  U[k] := U[k]/norm(U[k]);
 od;
 return U;
end;

R:=gramschmidt(C);

Print("R*M*R^-1 diagonal? : ",IsDiagonalMat(R*M*R^-1),"\n");
Print("R orthogonal?      : ",TransposedMat(R)=R^-1,"\n");

Best Answer

I'll try to answer without knowing exactly what part of Eigenvectors isn't what you want. The short version of my answer is:

Don't worry about the length of vectors. It is usually easy to adjust other algorithms to handle non-unit-length vectors in a way that never needs square roots. GAP defaults to exact square roots, and these are expensive in both time and space. You can use floating point in GAP, but at that point another program is usually better suited (and Sage would be an easy way to drive that other program from GAP).

Square roots in GAP (just don't)

GAP represents most numbers exactly. For example, $\sqrt{2}$ lives in the field $\mathbb{Q}[\sqrt{2}]$, which has an abelian Galois group, so it is a subfield of the cyclotomic field CF(8). Indeed, Sqrt(2) gives E(8)-E(8)^3 ${}=\zeta_8 - \zeta_8^3$. Notice that the decimal approximation, or any other property of the real embedding, is not important to GAP; only the algebraic properties of the groups acting on it matter.

Unfortunately, this means that some vectors have much more CPU- and memory-hungry lengths than others. For example, to compute the length of the vector $[100,1]$, GAP will work with $\sqrt{10001}$, which lives in the field $\mathbb{Q}(\zeta_{10001})$ of dimension 9792 over the rationals. Multiplication and division (and even addition and subtraction) will be at least 10,000 times slower than with a rational number.
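To see the cost concretely, here is a small illustrative sketch using GAP's built-in Conductor (the smallest $n$ with the value in CF(n)) and Phi:

v := [100,1];;
v*v;                    # 10001 -- the length squared is rational and cheap
Conductor(Sqrt(2));     # 8     -- Sqrt(2) = E(8)-E(8)^3 lives in CF(8)
Conductor(Sqrt(v*v));   # 10001 -- Sqrt(10001) needs all of CF(10001)
Phi(10001);             # 9792  -- the dimension of CF(10001) over the rationals

Every arithmetic operation on Sqrt(10001) then works in that 9792-dimensional field, which is where the slowdown comes from.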

Even worse, what if the length-squared is not a rational number? What if (gasp) the Galois group of the length is not abelian over the rationals? Then you will be forced to define the field extensions yourself. And what if you need two different square roots? Then you will need to form the composite field of the two extensions. At this point, I am usually using a GAP package that calls out to another computer algebra system that handles more number theory.

A sample session

I'm not sure whether you mean orthogonal or unitary. Set g:=1 to use orthogonal and g:=-1 to use unitary. Set n:=6 to use 6x6 matrices. Set c:=4 to use CF(4) as the field.

Step 0, choose your constants:

g := -1;;
n := 6;;
c := 4;;

Step 1, generate some sample data:

fld := CF(c);;
s := RandomMat(n,n,fld);;
s := (s-TransposedMat(GaloisCyc(s,g)))/2;;  # skew (g=1) or skew-Hermitian (g=-1)
o := (s^0+s)*(s^0-s)^(-1);;
d := DiagonalMat(RandomMat(1,n,fld)[1]);;
M := o*d*o^-1;;
IsDiagonalMat(d); # always true
IsDiagonalMat(o*TransposedMat(GaloisCyc(o,g))); # always true
IsOne(o*TransposedMat(GaloisCyc(o,g))); # always true

Step 2, recover an eigendecomposition:

C := Eigenvectors(fld,M);;
D := C*M*C^(-1);;
IsDiagonalMat(D); # always true
IsDiagonalMat(C*TransposedMat(GaloisCyc(C,g))); # usually true
IsOne(C*TransposedMat(GaloisCyc(C,g))); # usually false, but that is ok?

Step 3, make C orthogonal:

If the eigenvalues are distinct, then the eigenvectors are uniquely determined up to a scalar multiple, so the only correction needed (and the only correction possible) is to rescale them. However, it is very inefficient to take exact square roots of arbitrary cyclotomic numbers. For example, in GAP it is about 32 times more expensive to divide by the norm of a vector $\vec v$ with $\vec v \cdot \vec v = 30$ than it is to divide by the norm of a vector with $\vec v \cdot \vec v$ a perfect (rational) square. However, it is very cheap to divide by $\vec v \cdot \vec v$, so usually instead of requiring $\vec v \cdot \vec v = 1$, we just update our formulas to divide by it as required.
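One way to exploit this (a sketch continuing the session above, with C, g, and n as set there) is to work only up to row scaling: the rows of C are pairwise orthogonal exactly when their Gram matrix is diagonal, and both the test and the inverse need no square roots:

G := C*TransposedMat(GaloisCyc(C,g));;  # Gram matrix of the rows of C
IsDiagonalMat(G);                       # true when the eigenvalues are distinct
# Invert C without normalizing: divide each column of C^dagger by the
# corresponding row's length squared instead of taking a square root.
Cinv := TransposedMat(GaloisCyc(C,g)) * DiagonalMat(List([1..n], i -> 1/G[i][i]));;
IsOne(C*Cinv);                          # true whenever G is diagonal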

If the eigenvalues are not distinct, then you can apply a type of Gram-Schmidt orthogonalization just to the eigenvectors whose eigenvalues match. The only difference from the typical Gram-Schmidt is that you don't divide by the length. If you want the vectors to be "short" then you can use the built-in LLL, but if you only care about the rows being orthogonal, then the "root-free" Gram-Schmidt suffices.

Let me know if you want the code for Gram-Schmidt (zlattice.gi has it in a repeat loop, but both it and my own version assume a non-standard inner product, and so would need to be simplified for this answer).
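For reference, the root-free variant might look like the following sketch, written for the ordinary (g = 1) inner product and assuming the input rows are linearly independent; for the unitary case, one factor in each product would need a GaloisCyc:

rootfreegs := function(V)
 local U, k, h;
 U := List(V, ShallowCopy);  # work on a mutable copy of the rows
 for k in [2..Length(U)] do
  for h in [1..k-1] do
   # subtract the projection onto U[h]; dividing by U[h]*U[h] is cheap
   U[k] := U[k] - ((U[h]*U[k])/(U[h]*U[h]))*U[h];
  od;
 od;
 return U;  # rows are pairwise orthogonal, but not unit length
end;

This is the gramschmidt function from the question with the two norm(...) divisions deleted, so Sqrt is never called.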
