For my numerical analysis class, I wanted to implement the rather simple, in-place LU decomposition algorithm. I did this the naive way and found my code was very, very slow, because I was doing every operation in good old interpreted Python.
So I looked in some books (mainly my trusty *Numerical Linear Algebra* by Trefethen and Bau) and found an easy way to speed up the computations by using NumPy's `np.dot` routine. However, the algorithm is still very slow, and the book cited above mentions an outer product reformulation of the LU algorithm, which uses only one loop.
The exact statement is the following:
Like most of the algorithms in this book, Gaussian elimination involves a triply nested loop. In Algorithm 20.1, there are two explicit `for` loops, and the third loop is implicit in the vectors $u_{j, k \colon m}$ and $u_{k,k \colon m}$. Rewrite this algorithm with just one explicit `for` loop indexed by $k$. Inside this loop, $U$ will be updated at each step by a certain rank-one outer product. This "outer product" form of Gaussian elimination may be a better starting point than Algorithm 20.1 if one wants to optimize computer performance.
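To make the hint concrete (using the book's notation, where $u_{j, k \colon m}$ denotes a row slice of the working matrix $U$ and $\ell_{jk}$ a multiplier): the inner loop over $j$ performs, for each $j > k$,

$$u_{j,\, k+1 \colon m} \leftarrow u_{j,\, k+1 \colon m} - \ell_{jk}\, u_{k,\, k+1 \colon m},$$

and collecting all rows $j = k+1, \dots, m$ at once turns this into a single rank-one update of the trailing block:

$$U_{k+1 \colon m,\, k+1 \colon m} \leftarrow U_{k+1 \colon m,\, k+1 \colon m} - \ell_{k+1 \colon m,\, k}\, u_{k,\, k+1 \colon m}.$$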
Now, I've been searching for this formulation for a while now, and can't seem to find it. The main things it needs to have are
- it needs to be in-place;
- it can't use SciPy routines (NumPy is fine, however).
The current code I'm using is
```python
def LUfactorize(A):
    lA = len(A)
    for k in range(0, lA - 1):
        for j in range(k + 1, lA):
            if abs(A[j, k]) > 1e-10:
                A[j, k] /= A[k, k]
                A[j, k+1:lA] -= A[j, k] * A[k, k+1:lA]
```
Best Answer
You can replace the inner loop over `j` with a single rank-one update of the trailing submatrix, which vectorizes your lines of code and thus should be faster.
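A sketch of that outer-product form, assuming (as your code already does) that no pivoting is needed; the function name is mine:

```python
import numpy as np

def LUfactorize_outer(A):
    """In-place LU factorization with one explicit loop.

    Afterwards the strict lower triangle of A holds L (unit diagonal
    implied) and the upper triangle holds U.
    """
    lA = len(A)
    for k in range(lA - 1):
        # Compute all multipliers for column k in one vectorized step.
        A[k+1:, k] /= A[k, k]
        # Rank-one (outer product) update of the trailing submatrix.
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
```

The two slice assignments do exactly what your inner `for j` loop did, but as whole-submatrix NumPy operations, so the per-element work happens in compiled code rather than in the interpreter.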
Edit: Here is my time measuring:
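(The original timing code did not survive; a minimal sketch of such a harness using `timeit`, with the helper name and matrix sizes being my own placeholders:)

```python
import timeit
import numpy as np

def time_factorization(fn, n, repeats=5):
    """Best-of-`repeats` wall time for factorizing one n-by-n matrix."""
    rng = np.random.default_rng(0)
    # Diagonally dominant test matrix, so no pivoting is required.
    M = rng.random((n, n)) + n * np.eye(n)
    # Copy inside the lambda: fn works in place and would otherwise
    # refactorize its own output on later repeats.
    return min(timeit.repeat(lambda: fn(M.copy()), number=1, repeat=repeats))
```

Called as, e.g., `time_factorization(LUfactorize, 200)` against the vectorized version to compare the two.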
And my measurements:
As you probably noticed, the speedup is better than I thought. :D