Optimisation: How to compute the step $s^k$ for the Inexact Newton's Method

optimization, python

I need to implement the Inexact Newton's Method in Python. (For the sake of clarity, I have included the algorithm below.)

However, I do not understand how to implement the computation of the direction $s^k$ in step 4. My instructor told me to use the Conjugate Gradient Method via scipy.sparse.linalg.cg. As I understand it, that function solves a linear system of the form $Ax = b$ to a given tolerance, but how does this relate to step 4, which is an inequality involving norms? Could you please explain this to me?

Input: $x^0 \in \mathbb{R}^n, \varepsilon > 0$

  1. $k \leftarrow 0$
  2. while $\| \nabla f(x^k) \| > \varepsilon$
  3. $\qquad$ Choose tolerance $\eta_k > 0$
  4. $\qquad$ Compute $s^k$ such that $\| \nabla^2 f(x^k) s^k + \nabla f(x^k) \| \le \eta_k \| \nabla f(x^k) \|$
  5. $\qquad x^{k+1} \leftarrow x^k + s^k$
  6. $\qquad k \leftarrow k+1$

Best Answer

A good explanation of the Conjugate Gradient Method starts at 14:00 in this YouTube video.

It also shows a Python function conjGrad at 21:55, and from 22:05 onwards you will see an example of both Gauss-Seidel and conjGrad.
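To connect this to step 4 directly: if you call scipy.sparse.linalg.cg with $A = \nabla^2 f(x^k)$ and $b = -\nabla f(x^k)$, CG stops once the relative residual satisfies $\| As - b \| \le \text{rtol} \cdot \| b \|$. Since $\| As - b \| = \| \nabla^2 f(x^k) s + \nabla f(x^k) \|$ and $\| b \| = \| \nabla f(x^k) \|$, passing `rtol` $= \eta_k$ (with `atol=0`) enforces exactly the inequality in step 4. Here is a minimal sketch under that reading; note that the forcing-term choice $\eta_k = \min(0.5, \sqrt{\| \nabla f(x^k) \|})$ and the helper names `grad` and `hess` are my own assumptions, and on SciPy versions older than 1.12 the keyword is `tol` rather than `rtol`.

```python
import numpy as np
from scipy.sparse.linalg import cg

def inexact_newton(grad, hess, x0, eps=1e-8, max_iter=50):
    """Inexact Newton's method with CG as the inner solver.

    grad(x) returns the gradient and hess(x) the Hessian; CG assumes
    the Hessian is symmetric positive definite.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm <= eps:                     # step 2: stopping test
            break
        eta_k = min(0.5, np.sqrt(gnorm))     # step 3: one common forcing term
        # Step 4: solve hess(x) s = -g approximately. With rtol=eta_k and
        # atol=0, CG stops once ||hess(x) s + g|| <= eta_k * ||g||, which is
        # exactly the condition in step 4 (use tol=eta_k on older SciPy).
        s, info = cg(hess(x), -g, rtol=eta_k, atol=0.0)  # info == 0: tolerance met
        x = x + s                            # step 5: take the inexact Newton step
    return x

# Tiny check on f(x) = 0.5 x^T A x - b^T x, whose minimiser solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = inexact_newton(lambda x: A @ x - b, lambda x: A, np.zeros(2))
print(x_star, np.linalg.solve(A, b))         # the two should (nearly) agree
```

The loose tolerance in early iterations is the whole point of the "inexact" method: far from the solution there is no benefit in solving the Newton system accurately, while shrinking $\eta_k$ as $\| \nabla f(x^k) \|$ decreases recovers fast local convergence.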

I hope this helps.