Doing row operations is equivalent to multiplying your matrix from the left by an elementary matrix; thus, reducing $A$ to the identity, you get
$$E_m E_{m-1}\cdots E_2 E_1 A = I$$
Now the above simply means $\;E_m \cdots E_1 = A^{-1}\;$.
You can also do the above with column operations, which is the same as multiplying your matrix from the right by elementary matrices, but never mix row and column operations: if you began with one kind, stick to it the whole time.
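As a concrete sketch (a hypothetical $2 \times 2$ example in MATLAB): augmenting $A$ with the identity and row-reducing performs exactly the operations $E_m \cdots E_1$ above.

% Gauss-Jordan sketch: row-reduce [A | I] to [I | A^-1]
A = [2 1; 1 1];
R = rref([A eye(2)]);   % rref applies the row operations E_m ... E_1
Ainv = R(:, 3:4);       % the right half now holds A^-1
A * Ainv                % should print the 2 x 2 identity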
The starting vectors in most eigenvalue algorithms are initialized as a random vector with entries $X \sim N(0,1)$. If you look in Trefethen, you should note that most of the algorithms in numerical linear algebra are randomly initialized.
It says "arbitrary", but what it means is random. Some code follows after an explanation. The Lanczos algorithm belongs to a suite of algorithms called the Krylov methods. The Krylov methods are a specific type of iterative eigenvalue algorithm, as opposed to direct methods; they include Arnoldi, Lanczos, conjugate gradients, and GMRES.
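For instance, drawing such a starting vector takes two lines in MATLAB (a quick sketch; the myLanczos code below does the same thing with randn):

n = 50;              % hypothetical problem size
v = randn(n,1);      % entries drawn i.i.d. from N(0,1)
v = v / norm(v);     % normalize onto the unit sphere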
To explain how Lanczos works, I am going to explain how Arnoldi works. The basis of both of these eigenvalue methods is a similarity transform reducing $A$ to Hessenberg form. That is the following:
$$A = QHQ^{*} $$
or
$$AQ = QH $$
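As a reference point for what Arnoldi approximates iteratively, MATLAB's built-in hess computes this full (direct) reduction; a quick sanity check on a small random matrix:

A = randn(6);
[Q, H] = hess(A);    % unitary Q, upper Hessenberg H
norm(A - Q*H*Q')     % ~ machine precision
norm(A*Q - Q*H)      % the equivalent form AQ = QH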
For large $m$ we cannot afford the full factorization. Instead, we compute $Q_n$, the $m \times n$ matrix whose columns are the first $n$ columns of $Q$. Additionally, let $\tilde{H}_n$ be the $(n+1) \times n$ upper-left section of $H$. The book calls it a Hessenberg matrix; however, my professor said it is "Hessenberg-ish", since the extra row means it is not square. We end up with
$$ A Q_n = Q_{n+1} \tilde{H}_n $$
The idea is simply that we are actually building Krylov subspaces
$$ \mathcal{K}_n = \langle b, Ab, \ldots, A^{n-1}b \rangle = \langle q_1, q_2, \ldots, q_n \rangle $$
We form these through the QR decomposition of the Krylov matrix $K_n$, whose columns are $b, Ab, \ldots, A^{n-1}b$:
$$ K_{n} = Q_{n}R_{n}$$
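A short sketch of this construction on a random test matrix (note that forming $K_n$ explicitly is numerically poor, since the columns $A^{j}b$ all lean toward the dominant eigenvector; this is exactly why Arnoldi builds the $q_j$ one at a time instead):

m = 100; n = 5;
A = randn(m); b = randn(m,1);
K = zeros(m,n);
K(:,1) = b;
for j = 2:n
    K(:,j) = A * K(:,j-1);   % columns b, Ab, ..., A^(n-1)*b
end
[Qn, Rn] = qr(K, 0);         % reduced QR: K_n = Q_n * R_n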
Now, recall that $\tilde{H}_n$ has an extra row. To remove this row, we note the following: $Q_n^* Q_{n+1}$ is the $n \times (n+1)$ identity, so $Q_n^* Q_{n+1} \tilde{H}_n$ is the $n \times n$ Hessenberg matrix obtained by removing the last row of $\tilde{H}_n$:
$$ H_{n} = Q_{n}^{*}AQ_{n} $$
As you can see, we are iteratively building projections of $A$ onto these Krylov subspaces.
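Here is a minimal sketch of one way to code the Arnoldi iteration (modified Gram-Schmidt, mirroring Trefethen's presentation; arnoldiSketch is a hypothetical name, and breakdown when $h_{n+1,n} = 0$ is ignored):

function [Q, H] = arnoldiSketch(A, b, n)
% Build Q_{n+1} and the (n+1) x n "Hessenberg-ish" matrix Htilde_n
% satisfying A*Q_n = Q_{n+1}*Htilde_n
m = size(A,1);
Q = zeros(m,n+1);
H = zeros(n+1,n);
Q(:,1) = b / norm(b);
for k = 1:n
    v = A * Q(:,k);
    for j = 1:k                  % orthogonalize against q_1, ..., q_k
        H(j,k) = Q(:,j)' * v;
        v = v - H(j,k) * Q(:,j);
    end
    H(k+1,k) = norm(v);
    Q(:,k+1) = v / H(k+1,k);     % next basis vector q_{k+1}
end
end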
The Lanczos iteration is simply the Arnoldi iteration when the matrix is symmetric (or Hermitian), in which case $H_n$ reduces to a tridiagonal matrix.
function [T varargout] = myLanczos(mat,varargin)
%--------------------------------------------------------------------------
% Syntax:       T = myLanczos(mat);
%               T = myLanczos(mat,k);
%               T = myLanczos(mat,v);
%               T = myLanczos(mat,k,v);
%               [T Q] = myLanczos(mat);
%               [T Q] = myLanczos(mat,k);
%               [T Q] = myLanczos(mat,v);
%               [T Q] = myLanczos(mat,k,v);
%
% Inputs:       mat is a symmetric or Hermitian N x N matrix
%
%               k is the number of Lanczos iterations to perform. The
%               default value is N
%
%               v is an N x 1 vector (internally forced to have unit norm)
%               specifying a particular starting vector to use. The default
%               vector is chosen uniformly at random from the unit sphere
%
% Outputs:      T is a k x k symmetric tridiagonal matrix. When k < N, T is
%               approximately similar to mat up to rank k. When k = N, T is
%               similar (in the linear algebra sense) to mat. That is, when
%               k = N, T has identical eigenvalues to mat
%
%               Q is the N x k similarity transformation such that
%               mat = Q * T * Q'. Note that equality holds only when k = N.
%               For k < N, mat ~ Q * T * Q'
%
% Description:  This function uses the Lanczos algorithm with full
%               reorthogonalization to compute the k x k symmetric
%               tridiagonal matrix T that approximates mat up to rank k
%               with respect to the transformation Q. That is,
%               mat = Q * T * Q'. Note that equality holds only for k = N.
%               For k < N, mat ~ Q * T * Q'
%
%               NOTE: In particular, when k = N, T has identical
%               eigenvalues to mat
%
% Author:       Brian Moore
%               [email protected]
%
% Date:         July 5, 2013
%--------------------------------------------------------------------------
% Knobs
symTol = 1e-8;
% Check input matrix size
[m,n] = size(mat);
if (m ~= n)
    error('Input matrix must be square');
end
% Make sure input matrix is symmetric
if max(max(abs(mat - mat'))) > symTol
    error('Input matrix is not symmetric to working precision');
end
% Parse user inputs
if (nargin == 2)
    if (length(varargin{1}) == n)
        k = n;
        v = varargin{1};
    else
        k = varargin{1};
        v = randn(n,1);
    end
elseif (nargin == 3)
    k = varargin{1};
    v = varargin{2};
else
    k = n;
    v = randn(n,1);
end
% Initialize variables
Q = nan(n,k);
q = v / norm(v);
Q(:,1) = q;
d = nan(k,1);
od = nan(k-1,1);
% Perform Lanczos iterations
for i = 1:k
    z = mat * q;
    d(i) = q' * z;
    %----------------------------------------------
    % Uncomment only one of the following 3 options
    %----------------------------------------------
    % Option 1: Full re-orthogonalization
    %z = z - Q(:,1:i) * (Q(:,1:i)' * z);
    % Option 2: Full re-orthogonalization (x2)
    z = z - Q(:,1:i) * (Q(:,1:i)' * z);
    z = z - Q(:,1:i) * (Q(:,1:i)' * z);
    % Option 3: No re-orthogonalization
    %z = z - d(i) * q;
    %if (i > 1)
    %    z = z - od(i-1) * Q(:,i-1);
    %end
    %----------------------------------------------
    if (i ~= k)
        od(i) = norm(z);
        q = z / od(i);
        Q(:,i + 1) = q;
    end
end
% Construct T
T = diag(d) + diag(od,-1) + diag(od,1);
% Return user-requested information
if (nargout == 2)
    varargout{1} = Q;
end
Note that if v is not specified, it is chosen by these lines:
else
    k = n;
    v = randn(n,1);
end
% Initialize variables
Q = nan(n,k);
q = v / norm(v);
This constructs a random unit starting vector $q_1$, and every subsequent orthonormal basis vector $q_2, q_3, \ldots$ is built from it by orthogonalization.
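A hypothetical usage example (assuming myLanczos.m is on the MATLAB path): after only a few iterations, the Ritz values, i.e. the eigenvalues of T, already capture the extreme eigenvalues of the matrix.

N = 200;
A = randn(N); A = (A + A')/2;   % symmetric test matrix
[T Q] = myLanczos(A, 20);       % 20 Lanczos steps
ritz = eig(T);                  % Ritz values
[max(eig(A)) max(ritz)]         % the largest eigenvalue is captured quickly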
Best Answer
It truly depends on the type of matrix you're going to compute the inverse of. Some methods are better for some classes of matrices than others.
But more importantly, why do you want to invert matrices? In many problems, you don't need to invert matrices; you only need to apply the inverse to some vectors. The latter problem is much easier to tackle, both from a computational complexity standpoint (e.g. if your matrix is very large) and from a stability point of view (a matrix could have bad conditioning, be numerically close to non-invertible, etc.). Does your application require that you compute the matrix inverse?
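As a quick illustration of the latter point (a MATLAB sketch on a random test system): if you only need $A^{-1}b$, solve the linear system instead of forming the inverse.

A = randn(500); b = randn(500,1);
x1 = inv(A) * b;   % forms the full inverse first: slower, less accurate
x2 = A \ b;        % backslash solves A*x = b via LU: faster, more stable
norm(x1 - x2)      % the two agree up to rounding error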