On my quest to find the holy grail of mathematics (or at least to become a little bit better at algebra), I have read up on matrix logarithms and exponentials and how useful they can be in investigating groups and algebras. I know that power methods are common for transcendental matrix functions. This question is about fast numerical ways to compute the matrix logarithm.
Own work: I know of and have implemented the Taylor expansion $$\log({\bf I + A}) = \sum_{i=1}^N\frac{(-1)^{i+1}}{i}{\bf A}^i$$ or equivalently $$\log({\bf A}) = \sum_{i=1}^N\frac{(-1)^{i+1}}{i}({\bf A-I})^i$$ in various ways. Storing $({\bf A-I})^{i-1}$ at iteration $i$ lets us multiply by $\bf (A-I)$ just once per iteration. I have read about a popular speed-up technique that uses the identity $$2\log({\bf A}^{1/2}) = \log({\bf A})$$ together with the fact that the Taylor series is more accurate close to its point of expansion, where the low-order terms dominate. However, as I don't know any fast matrix square root, I have not yet managed to exploit this. Maybe a Taylor expansion of the square root function would do?
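For reference, here is a minimal sketch of the truncated Taylor series described above, reusing the stored power of $\bf A - I$ so each iteration costs one matrix multiply. The function name and the default term count are my own choices, and the series only converges when the eigenvalues of $\bf A - I$ lie inside the unit disk:

```python
import numpy as np

def log_taylor(A, n_terms=50):
    """Truncated Taylor series log(A) = sum_{i=1}^N (-1)^(i+1)/i * (A - I)^i.

    Only converges when the spectral radius of A - I is below 1;
    n_terms = 50 is an illustrative default, not a tuned choice.
    """
    n = A.shape[0]
    X = A - np.eye(n)
    term = np.eye(n)            # holds X^(i-1); one multiply per iteration
    result = np.zeros_like(A, dtype=float)
    for i in range(1, n_terms + 1):
        term = term @ X          # now X^i
        result += ((-1) ** (i + 1) / i) * term
    return result
```

On a matrix close to the identity, e.g. `np.diag([1.1, 1.2])`, this agrees with the elementwise scalar logarithm, which is a quick sanity check.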
Best Answer
Power methods are actually not so common (anymore?) for computing matrix functions, especially if your matrix has a high condition number.
Try Padé approximants, inverse scaling-and-squaring, or a Schur algorithm instead. The matrix logarithm and the matrix square root are well-explored topics in the field of matrix function computation, so it's best to check out the established sources.
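To make the inverse scaling-and-squaring idea concrete, here is a rough sketch. It computes square roots with the Denman–Beavers iteration (one standard choice; this also answers the "fast matrix square root" question above), takes roots until $\bf A$ is close to $\bf I$, applies a short Taylor series, and scales back by $2^k$. The function names, tolerance, and iteration counts are illustrative assumptions, not a production algorithm (a real implementation would use a Padé approximant and a Schur decomposition):

```python
import numpy as np

def sqrtm_db(A, n_iter=30):
    """Denman-Beavers iteration for the principal matrix square root.

    Y -> A^(1/2) and Z -> A^(-1/2); n_iter = 30 is an illustrative cap,
    convergence is quadratic for matrices with no nonpositive real eigenvalues.
    """
    Y, Z = A.astype(float), np.eye(A.shape[0])
    for _ in range(n_iter):
        # Both right-hand sides use the old Y, Z (tuple assignment).
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y

def log_iss(A, tol=0.25, n_terms=16):
    """Inverse scaling-and-squaring sketch for log(A).

    Repeatedly applies log(A) = 2 log(A^(1/2)) until ||A - I|| is small,
    then evaluates a truncated Taylor series at the well-conditioned point.
    tol and n_terms are hypothetical choices, not tuned values.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    k = 0
    while np.linalg.norm(A - np.eye(n), ord='fro') > tol:
        A = sqrtm_db(A)
        k += 1
    # Short Taylor series: log(A) = sum (-1)^(i+1)/i * (A - I)^i
    X = A - np.eye(n)
    term = np.eye(n)
    L = np.zeros_like(A)
    for i in range(1, n_terms + 1):
        term = term @ X
        L += ((-1) ** (i + 1) / i) * term
    return (2 ** k) * L      # undo the k square roots
```

For serious use, `scipy.linalg.logm` and `scipy.linalg.sqrtm` implement the established Schur-based algorithms and should be preferred over hand-rolled iterations like the above.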