Unconstrained Mirror Descent Algorithm


I want to know the update formula for mirror descent (I'm only looking at the unconstrained case). To set it up, I'm starting from the fact that each step can be thought of as minimising
$$
p_k(y) = f(x_k) + \langle\nabla{f(x_{k})},y-x_{k}\rangle + \dfrac{1}{h_k}B_w(y,x_k)
$$

where $B_w$ is the Bregman divergence.
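For reference, the Bregman divergence generated by a differentiable convex function $w$ is
$$
B_w(y, x) = w(y) - w(x) - \langle \nabla w(x), y - x \rangle.
$$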

But I'm not sure how to derive the update formula from this.

Would it just be

$$
\nabla w(x_{k+1}) = \nabla w(x_{k}) - \nabla f(x_k)
$$

If that is the case, how do I get it from the above minimisation problem?

Best Answer

Since $p_k$ is convex in $y$ (the Bregman divergence is convex in its first argument), you can get the update formula from the first-order optimality condition $\nabla_y p_k(y) = 0$:

$$ 0 = \nabla_y \left\{ f(x_k) + \langle \nabla f(x_k), y - x_k \rangle + \frac{1}{h_k} \left( w(y) - w(x_k) - \langle \nabla w(x_k), y - x_k \rangle \right) \right\} = \nabla f(x_k) + \frac{1}{h_k} \left( \nabla w(y) - \nabla w(x_k) \right), $$ where the Bregman divergence has been expanded using its definition. Multiplying by $h_k$ and rearranging gives the update formula (here $y$ is your $x_{k+1}$):

$$ \nabla w(y) = \nabla w(x_k) - h_k \nabla f(x_k). $$
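As a sanity check, with the mirror map $w(x) = \tfrac{1}{2}\|x\|^2$ you have $\nabla w(x) = x$, so the update reduces to plain gradient descent, $x_{k+1} = x_k - h_k \nabla f(x_k)$. Below is a minimal sketch in Python using that mirror map; `grad_w`, `grad_w_inverse`, and the quadratic objective are made-up names for illustration only, not any library's API.

```python
import numpy as np

def grad_w(x):
    # Gradient of the mirror map w(x) = 0.5 * ||x||^2 (the identity).
    return x

def grad_w_inverse(z):
    # Inverse of grad w; for this mirror map it is also the identity.
    return z

def mirror_descent(grad_f, x0, h=0.1, n_iters=100):
    """Iterate grad w(x_{k+1}) = grad w(x_k) - h * grad f(x_k)."""
    x = x0
    for _ in range(n_iters):
        z = grad_w(x) - h * grad_f(x)   # step in the dual (mirror) space
        x = grad_w_inverse(z)           # map back to the primal space
    return x

# Example: minimise f(x) = ||x - b||^2, whose gradient is 2 * (x - b).
b = np.array([1.0, -2.0])
x_star = mirror_descent(lambda x: 2.0 * (x - b), x0=np.zeros(2))
print(x_star)  # approximately [1.0, -2.0]
```

A different choice of $w$ (e.g. negative entropy) changes `grad_w` and `grad_w_inverse` but leaves the dual-space update untouched, which is the point of writing the method this way.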
