[Math] Convex Optimization: Gradient of $\log \det (X)$

convex-optimization, matrix-calculus, optimization, proof-verification

In Boyd and Vandenberghe's Convex Optimization book, there is a step-by-step derivation of the gradient of the so-called log-det function:

The excerpted step is the following first-order expansion (with $X \succ 0$ and $\Delta X$ symmetric):

$$
\begin{aligned}
\log\det(X+\Delta X) &= \log\det\left(X^{1/2}\left(I + X^{-1/2}\,\Delta X\,X^{-1/2}\right)X^{1/2}\right) \\
&= \log\det X + \log\det\left(I + X^{-1/2}\,\Delta X\,X^{-1/2}\right) \\
&= \log\det X + \sum_{i=1}^{n}\log(1+\lambda_i) \\
&\approx \log\det X + \sum_{i=1}^{n}\lambda_i = \log\det X + \operatorname{tr}\left(X^{-1}\Delta X\right),
\end{aligned}
$$

where $\lambda_1,\dots,\lambda_n$ are the eigenvalues of $X^{-1/2}\,\Delta X\,X^{-1/2}$; this gives $\nabla \log\det X = X^{-1}$.

Three points of confusion:

  1. Is the determinant of a positive definite matrix exactly the sum of its eigenvalues, in the same way that the trace is?

  2. There is the claim that because $\Delta X$ is small (what does "small" mean here?), the $\lambda_i$ are small. Is there any justification for this claim? After all, we are computing the eigenvalues of $X^{-1/2}\Delta X X^{-1/2}$, not simply of $\Delta X$.

  3. For the first-order approximation $\log(1+\lambda_i) \approx \lambda_i$, am I right that this is the first-order Maclaurin series?

Thanks!

Best Answer

  1. If $A \in S_{++}$ then $\det A$ is the product of the eigenvalues and $\log \det A$ is the sum of their logarithms.
  2. If $\Delta X$ is small then so is $X^{-1/2} \Delta X X^{-1/2}$: the map $\Delta X \mapsto X^{-1/2} \Delta X X^{-1/2}$ is just multiplication by two fixed matrices, so it is linear and sends small perturbations to small perturbations; and the eigenvalues of a symmetric matrix are bounded in absolute value by its norm, so the $\lambda_i$ are small as well. "Small" here simply means that we are looking at the limit $\Delta X \to 0$.
  3. You got the Maclaurin series wrong: $\log (1+x) = x - \frac{x^2}{2} + \dots$ for $|x| < 1$, whose first-order term is indeed $x$, so $\log(1+\lambda_i) \approx \lambda_i$. After that correction the argument is OK.
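
As a sanity check on the three points above and on the resulting formula $\nabla \log\det X = X^{-1}$, here is a small NumPy sketch; the matrices $X$ and $\Delta X$ below are just random examples, not anything from the book.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5

    # A random symmetric positive definite X.
    A = rng.standard_normal((n, n))
    X = A @ A.T + n * np.eye(n)

    # Point 1: det X is the product of the eigenvalues,
    # so log det X is the sum of their logarithms.
    w, V = np.linalg.eigh(X)
    print(np.isclose(np.log(np.linalg.det(X)), np.sum(np.log(w))))   # True

    # Point 2: for a small symmetric Delta, the eigenvalues of
    # X^{-1/2} Delta X^{-1/2} are small as well.
    B = rng.standard_normal((n, n))
    Delta = 1e-4 * (B + B.T)                        # a "small" perturbation
    X_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    lam = np.linalg.eigvalsh(X_inv_sqrt @ Delta @ X_inv_sqrt)
    print(np.max(np.abs(lam)))                      # tiny, comparable to ||Delta||

    # Point 3 / final formula: log det(X + Delta) ~ log det X + tr(X^{-1} Delta),
    # i.e. the gradient of log det at X is X^{-1}; the error is O(||Delta||^2).
    lhs = np.log(np.linalg.det(X + Delta))
    rhs = np.log(np.linalg.det(X)) + np.trace(np.linalg.solve(X, Delta))
    print(abs(lhs - rhs))                           # much smaller than ||Delta||

The check of point 2 forms $X^{-1/2}$ explicitly from the eigendecomposition of $X$ only to match the derivation; in practice one never needs that matrix to evaluate $\operatorname{tr}(X^{-1}\Delta X)$.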