[Math] Power Method, Eigenvalues.

approximation · eigenvalues-eigenvectors · numerical-methods

OK, so I've recently studied the power method, which gives us the eigenvector for the eigenvalue of largest magnitude. We know that on the $k$th iteration the approximation to that eigenvector is $X_k = A^k X_0$, where $A$ is the matrix whose eigenvector we wish to find. So let's say $X_0 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$. The algorithm is iterative, which means that in order to find $X_m$ we iterate $m$ times.
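For concreteness, here is a minimal sketch of the standard power iteration with per-step renormalisation (the function name, the 50-iteration budget, and the choice of example matrix, taken from the code further down, are my own):

```python
import numpy as np

def power_iteration(A, x0, iterations):
    """Classic power method: repeatedly apply A and renormalise."""
    x = x0.astype(float)
    for _ in range(iterations):
        x = A @ x
        x = x / np.linalg.norm(x)   # renormalise so the entries stay bounded
    return x

# Example: dominant eigenvalue 4, with eigenvector proportional to (2, 3, 5)
A = np.array([[0.0, 11, -5], [-2, 17, -7], [-4, 26, -10]])
x = power_iteration(A, np.array([1.0, 1.0, 1.0]), 50)
print(x)   # ≈ [0.3244, 0.4867, 0.8111]
```

The renormalisation each step is what keeps the entries from blowing up (or underflowing) as the powers of $A$ grow.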
So my question is this. In each iteration we multiply the $k$th vector by the matrix $A$ from the left, but this method is kind of slow, so I thought to accelerate it as follows. Say we want to iterate 5 times. Instead of multiplying by $A$ from the left 5 times, I suggest multiplying $A$ by $A$, so that we get $A^2$ on iteration 1; on the next iteration we multiply $A^2$ by $A^2$ to get $A^4$; and so on, until on the 5th iteration we have $A^{32}$ instead of $A^5$.
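In floating point, the bookkeeping of this squaring scheme looks like the sketch below (variable names are mine; the matrix is the same one as in the code further down; note that after $k$ squarings the accumulated power is $A^{2^k}$, so 3 squarings give $A^8$):

```python
import numpy as np

A = np.array([[0.0, 11, -5], [-2, 17, -7], [-4, 26, -10]])  # float entries
x0 = np.array([1.0, 1.0, 1.0])

B = A.copy()
for _ in range(3):
    B = B @ B            # A^2, then A^4, then A^8

xn = B @ x0
xn = xn / np.linalg.norm(xn)   # normalise the resulting vector
print(xn)   # ≈ [0.3247, 0.4868, 0.8109]
```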
But when I tried to check this in Python I ran into an issue with the eigenvectors: when I ran the algorithm for 4 iterations ($A^{16}$) I got an eigenvector, but not the one for the largest eigenvalue, whereas with 3 iterations ($A^8$) I got the right eigenvector. I don't know what I'm missing here or why this happens, so if someone could clarify whether or not I'm allowed to square the matrix like this, I would be grateful.

EDIT: here is the code:

import numpy as np
from numpy import linalg as LA

A = np.matrix([[0,11,-5] , [-2,17,-7] , [-4,26,-10]])
X0 = np.matrix([1,1,1]).T
iterations = 3

for i in range(iterations):
    A = A*A

x = A*X0
norm = LA.norm(x)
print(x/norm)
print(LA.eig(A)[1])

$A = \begin{bmatrix} 0 & 11 & -5 \\ -2 & 17 & -7 \\ -4 & 26 & -10\end{bmatrix}$

When I set iterations to 3 ($A^8$) I get
$X = \begin{bmatrix} 0.32474385 \\ 0.48679802 \\ 0.81090636 \end{bmatrix},$
but when I set iterations to 4 ($A^{16}$) the eigenvector that I get is
$\begin{bmatrix} -0.21821789 \\ -0.43643578 \\ -0.87287156 \end{bmatrix}.$

The eigenvectors of $A$ (as columns, from `LA.eig`) are
$\begin{bmatrix} -0.21821789 & -0.40824829 & -0.32444284 \\ -0.43643578 & -0.40824829 & -0.48666426 \\ -0.87287156 & -0.81649658 & -0.81110711 \end{bmatrix},$
so you can clearly see that the algorithm converges to two different vectors.

Best Answer

The element type of your matrix is integer: NumPy infers the dtype from the input, and since none of the constants has a decimal point or is otherwise written as a float, it picks integers. Here these are 32-bit signed integers, whose largest value is $2^{31}-1 = 2147483647$. (You can get 64-bit signed integers by marking the constants as longs, as in `0L`, but that only postpones the problem.)

At some point in the matrix-squaring process your entries get so large that they overflow and wrap around modulo $2^{32}$, and the whole matrix changes. Thus for $A^{16}$ you get

[[ -65534  -65539   65537]
 [-131070 -131075  131073]
 [-262140 -262150  262146]]

instead of the correct

[[      -65534 17179803645 -8589869055]
 [     -131070 25769672701 -12884770815]
 [     -262140 42949410810 -21474574334]]

Change just one element to float to get the whole matrix in floating point,

A = np.matrix([[0.0,11,-5] , [-2,17,-7] , [-4,26,-10]])

and this problem disappears.
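A quick way to see both effects side by side, assuming NumPy's silent wraparound semantics for integer arrays (the `int32` dtype is forced explicitly here so the sketch does not depend on the platform's default integer size):

```python
import numpy as np

A_int = np.array([[0, 11, -5], [-2, 17, -7], [-4, 26, -10]], dtype=np.int32)
A_flt = A_int.astype(float)

B_int, B_flt = A_int, A_flt
for _ in range(4):              # four squarings: B = A**16
    B_int = B_int @ B_int       # wraps around silently modulo 2**32
    B_flt = B_flt @ B_flt       # exact, since the entries stay below 2**53

print(B_int[0, 1])   # -65539 after wraparound
print(B_flt[0, 1])   # 17179803645.0, the true entry
```

Note that the wrapped value is consistent: $17179803645 \bmod 2^{32}$, reinterpreted as a signed 32-bit integer, is exactly $-65539$.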
