I am confused by a line in the textbook "Gaussian Processes for Machine Learning" by Rasmussen and Williams. On page 48, they write the following:
$|B| = |K| \cdot |K^{-1} + W| = |I_n + W^{\frac{1}{2}}KW^{\frac{1}{2}}|$
I don't understand how the second equality follows. I understand that the following works:
$|K| \cdot |K^{-1} + W| = |I_{n} + KW|$
due to the properties of determinants, but I can't work out why $|I_n + KW| = |I_n + W^{\frac{1}{2}}KW^{\frac{1}{2}}|$, since $KW \neq W^{\frac{1}{2}}KW^{\frac{1}{2}}$ in general.
More information that may be important:
- $W$ is a diagonal matrix whose entries are all positive
- $K$ is a symmetric positive definite matrix.
Thanks for your help!
Best Answer
Since the diagonal entries of $W$ are positive, $W$ has an invertible square root $W^{\frac12}$. Using $|AB| = |A||B|$, factor $W^{\frac12}$ out on the right and reinsert it on the left:
$$|I+KW|=\left|\left(W^{-\frac12}+KW^\frac12\right)W^\frac12\right|=\left|W^{-\frac12}+KW^\frac12\right|\left|W^\frac12\right|=\left|W^\frac12\right|\left|W^{-\frac12}+KW^\frac12\right|=\left|W^\frac12\left(W^{-\frac12}+KW^\frac12\right)\right|=\left|I+W^\frac12KW^\frac12\right|$$
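As a quick numerical sanity check, here is a short NumPy sketch (matrix size, seed, and the way $K$ and $W$ are generated are my own arbitrary choices, not from the book) confirming that all three determinants agree even though $KW$ and $W^{\frac12}KW^{\frac12}$ differ as matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# K: symmetric positive definite (Gram matrix plus a ridge term)
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)

# W: diagonal with positive entries, and its square root
w = rng.uniform(0.5, 2.0, size=n)
W = np.diag(w)
W_half = np.diag(np.sqrt(w))

I = np.eye(n)
d1 = np.linalg.det(K) * np.linalg.det(np.linalg.inv(K) + W)  # |K| |K^{-1} + W|
d2 = np.linalg.det(I + K @ W)                                # |I + K W|
d3 = np.linalg.det(I + W_half @ K @ W_half)                  # |I + W^{1/2} K W^{1/2}|

# The determinants coincide...
assert np.allclose([d1, d2], d3)
# ...even though the matrices themselves do not.
assert not np.allclose(K @ W, W_half @ K @ W_half)
print(d1, d2, d3)
```

The symmetrized form $|I + W^{\frac12}KW^{\frac12}|$ is preferred in practice because $W^{\frac12}KW^{\frac12}$ is symmetric positive definite, which makes the determinant numerically well behaved (e.g. via a Cholesky factorization).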