[Math] Please explain the intuition behind the dual problem in optimization.

Tags: convex-optimization, intuition, karush-kuhn-tucker, lagrange-multiplier, optimization

I've studied convex optimization pretty carefully, but don't feel that I have yet "grokked" the dual problem. Here are some questions I would like to understand more deeply/clearly/simply:

  1. How would somebody think of the dual problem? What thought process would lead someone to consider the dual problem and to recognize that it's valuable/interesting?
  2. In the case of a convex optimization problem, is there any obvious reason to expect that strong duality should (usually) hold?
  3. It often happens that the dual of the dual problem is the primal problem. However, this seems like a complete surprise to me. Is there any intuitive reason to expect that this should happen?
  4. Does the use of the word "dual" or "duality" in optimization have anything to do with the dual space in linear algebra? Or are they just different concepts that go by the same name? And is there a connection to the use of the word "dual" in projective geometry?
  5. You can define the dual problem and prove theorems about strong duality without ever mentioning the Fenchel conjugate. For example, Boyd and Vandenberghe prove a strong duality theorem without mentioning the Fenchel conjugate in their proof. And yet, people often talk as if the Fenchel conjugate is somehow the "essence" of duality, and make it sound as if the whole theory of duality is based on the Fenchel conjugate. Why is the Fenchel conjugate considered to have such fundamental importance?

Note: I will now describe my current level of understanding of the intuition behind the dual problem. Please tell me if you think I might be missing any basic insights.

I have read the excellent notes about convex optimization by Guilherme Freitas, and in particular the part about "penalty intuition". When we are trying to solve

\begin{align*}
\text{minimize} &\quad f(x) \\
\text{such that} & \quad h(x) \leq 0
\end{align*}

one might try to eliminate the constraints by introducing a penalty when constraints are violated. This gives us the new unconstrained problem

\begin{equation}
\text{minimize} \quad f(x) + \langle \lambda ,h(x) \rangle
\end{equation}

where $\lambda \geq 0$. It's not hard to see that for a given $\lambda \geq 0$, the optimal value of this unconstrained problem is less than or equal to the optimal value for the constrained problem. This gives us a new problem — find $\lambda$ so that the optimal value for the unconstrained problem is as large as possible. That is one way to imagine how somebody might have thought of the dual problem. Is this the best intuition for where the dual problem comes from?
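For a concrete instance of this intuition, take the one-dimensional problem with $f(x) = x^2$ and $h(x) = 1 - x$ (so the constraint is $x \geq 1$). For each $\lambda \geq 0$, \begin{equation*} \inf_{x \in \mathbb R} \quad x^2 + \lambda (1 - x) = \lambda - \frac{\lambda^2}{4}, \end{equation*} which is a lower bound on the constrained optimal value $1$. Maximizing this bound over $\lambda \geq 0$ gives $\lambda = 2$ and recovers the value $1$, and at $\lambda = 2$ the unconstrained minimizer is $x = 1$, the solution of the original problem.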

Another viewpoint: the KKT conditions can be derived using what Freitas calls the "geometric intuition". Then, if we knew the value of the multipliers $\lambda$, it would be (often) much easier to find $x$. So, a new problem is to find $\lambda$. And if we can somehow recognize that $\lambda$ is a maximizer for the dual problem, then this suggests that we might try solving the dual problem.

Please explain or give references to any intuition that you think I might find interesting, even if it's not directly related to what I asked.

Best Answer

Here's what's really going on with the dual problem. (This is my attempt to answer my own question, over a year after originally asking it.)

(A very nice presentation of this material is given in Ekeland and Temam. These ideas are also in Rockafellar.)

Let $V$ be a finite dimensional normed vector space over $\mathbb R$. (Working in an inner product space or just in $\mathbb R^N$ risks concealing the fundamental role that the dual space plays in duality in convex optimization.)

The basic idea behind duality in convex analysis is to think of a convex set in terms of its supporting hyperplanes. (A closed convex set $\Omega$ can be "recovered" from its supporting hyperplanes by taking the intersection of all closed half spaces containing $\Omega$. The set of all supporting hyperplanes to $\Omega$ is sort of a "dual representation" of $\Omega$.)

For a convex function $f$ (whose epigraph is a convex set), this strategy leads us to think about $f$ in terms of affine functions $\langle m^*, x \rangle - \alpha$ which are majorized by $f$. (Here $m^* \in V^*$ and we are using the notation $\langle m^*, x \rangle = m^*(x)$.)

For a given slope $m^* \in V^*$, we only need to consider the "best" choice of $\alpha$ -- the other affine minorants with slope $m^*$ can be disregarded. \begin{align*} & f(x) \geq \langle m^*,x \rangle - \alpha \quad \forall x \in V \\ \iff & \alpha \geq \langle m^*, x \rangle - f(x) \quad \forall x \in V \\ \iff & \alpha \geq \sup_{x \in V} \quad \langle m^*, x \rangle - f(x) \end{align*} so the best choice of $\alpha$ is \begin{equation} f^*(m^*) = \sup_{x \in V} \quad \langle m^*, x \rangle - f(x). \end{equation} If this supremum is finite, then $\langle m^*,x \rangle - f^*(m^*)$ is the best affine minorant of $f$ with slope $m^*$. If $f^*(m^*) = \infty$, then there is no affine minorant of $f$ with slope $m^*$.
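Two simple examples (with $V = \mathbb R$, so $V^* \cong \mathbb R$) may help fix ideas. If $f(x) = \tfrac12 x^2$, then \begin{equation*} f^*(m^*) = \sup_{x \in \mathbb R} \quad m^* x - \tfrac12 x^2 = \tfrac12 (m^*)^2, \end{equation*} so every slope admits a best affine minorant, and that minorant touches the parabola at $x = m^*$. If instead $f(x) = |x|$, then $f^*(m^*) = 0$ for $|m^*| \leq 1$ and $f^*(m^*) = \infty$ otherwise: slopes steeper than $1$ admit no affine minorant at all.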

The function $f^*$ is called the "conjugate" of $f$. The definition and basic facts about $f^*$ are all highly intuitive. For example, if $f$ is a proper closed convex function then $f$ can be recovered from $f^*$, because any closed convex set (in this case the epigraph of $f$) is the intersection of all the closed half spaces containing it. (I still find it surprising and mathematically beautiful that the "inversion formula" $f = f^{**}$ is so simple, but with this intuition it is not hard to derive or prove.)
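Continuing the toy example above: \begin{equation*} f^{**}(x) = \sup_{m^* \in \mathbb R} \quad m^* x - \tfrac12 (m^*)^2 = \tfrac12 x^2 = f(x), \end{equation*} as the inversion formula predicts. (For a nonconvex or non-closed $f$ that has at least one affine minorant, the same formula instead returns the closed convex envelope of $f$, i.e. its largest closed convex minorant, which is exactly what "intersect all closed half spaces containing the epigraph" should produce.)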

Because $f^*$ is defined on the dual space, we see already the fundamental role played by the dual space in duality in convex optimization.

Given an optimization problem, we don't obtain a dual problem until we specify how to perturb the optimization problem. This is why equivalent formulations of an optimization problem can lead to different dual problems: by reformulating the problem, we have in fact specified a different way to perturb it.

As is typical in math, the ideas become clear when we work at an appropriate level of generality. Assume that our optimization problem is \begin{equation*} \operatorname*{minimize}_{x} \quad \phi(x,0). \end{equation*} Here $X$ and $Y$ are finite dimensional normed vector spaces over $\mathbb R$ and $\phi:X \times Y \to \bar{\mathbb R}$ is convex. Standard convex optimization problems can be written in this form with an appropriate choice of $\phi$. The perturbed problems are \begin{equation*} \operatorname*{minimize}_{x} \quad \phi(x,y) \end{equation*} for nonzero values of $y \in Y$.

Let $h(y) = \inf_x \phi(x,y)$. (Because $\phi$ is jointly convex, $h$ is a convex function of $y$.) Our optimization problem is simply to evaluate $h(0)$.

From our knowledge of conjugate functions, we know that \begin{equation*} h(0) \geq h^{**}(0) \end{equation*} and that typically we have equality. For example, if $h$ is subdifferentiable at $0$ (which is typical for a convex function) then $h(0) = h^{**}(0)$.

The dual problem is simply to evaluate $h^{**}(0)$.

In other words, the dual problem is: \begin{equation*} \operatorname*{maximize}_{y^* \in Y^*} \quad - h^*(y^*). \end{equation*} We see again the fundamental role that the dual space plays here.

It is enlightening to express the dual problem in terms of $\phi$. It's easy to show that the dual problem is \begin{equation*} \operatorname*{maximize}_{y^* \in Y^*} \quad - \phi^*(0,y^*). \end{equation*}
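For completeness, the calculation is just \begin{align*} h^*(y^*) &= \sup_{y \in Y} \quad \langle y^*, y \rangle - h(y) = \sup_{y \in Y} \quad \langle y^*, y \rangle - \inf_{x \in X} \phi(x,y) \\ &= \sup_{x \in X, \, y \in Y} \quad \langle 0, x \rangle + \langle y^*, y \rangle - \phi(x,y) = \phi^*(0,y^*), \end{align*} where $\phi^*$ is the conjugate of $\phi$ on $X \times Y$ with respect to the natural pairing between $X \times Y$ and $X^* \times Y^*$.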

So the primal problem is \begin{equation*} \operatorname*{minimize}_{x \in X} \quad \phi(x,0) \end{equation*} and the dual problem (slightly restated) is \begin{equation*} \operatorname*{minimize}_{y^* \in Y^*} \quad \phi^*(0,y^*). \end{equation*} The similarity between these two problems is mathematically beautiful, and we can see that if we perturb the dual problem in the obvious way, then the dual of the dual problem will be the primal problem (assuming $\phi = \phi^{**}$). The natural isomorphism between $V$ and $V^{**}$ is of fundamental importance here.

The key facts about the dual problem -- strong duality, the optimality conditions, and the sensitivity interpretation of the optimal dual variables -- all become intuitively clear and even "obvious" from this viewpoint.
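To spell out the last of these: if $h(0)$ is finite, strong duality holds, and the dual optimum is attained at $\bar y^*$, then $h(0) = -h^*(\bar y^*)$, i.e. $h(0) + h^*(\bar y^*) = \langle \bar y^*, 0 \rangle$, which by the equality case of the Fenchel-Young inequality says that $\bar y^* \in \partial h(0)$: \begin{equation*} h(y) \geq h(0) + \langle \bar y^*, y \rangle \quad \text{for all } y \in Y. \end{equation*} The optimal dual variable is a subgradient of the value function at $0$, which is precisely the sensitivity interpretation.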

An optimization problem in the form \begin{align*} \operatorname*{minimize}_x & \quad f(x) \\ \text{subject to} & \quad g(x) \leq 0, \end{align*} can be perturbed as follows: \begin{align*} \operatorname*{minimize}_x & \quad f(x) \\ \text{subject to} & \quad g(x) + y \leq 0. \end{align*}

This perturbed problem has the form given above with \begin{equation*} \phi(x,y) = \begin{cases} f(x) \quad \text{if } g(x) + y \leq 0 \\ \infty \quad \text{otherwise}. \end{cases} \end{equation*} To find the dual problem, we need to evaluate $-\phi^*(0,y^*)$, which is a relatively straightforward calculation (in the second step we substitute $y = -g(x) - q$ with $q \geq 0$): \begin{align*} -\phi^*(0,y^*) &= -\sup_{\substack{ x,\, y \\ g(x) + y \leq 0 }} \quad \langle y^*,y \rangle - f(x) \\ &= -\sup_{\substack{ x \\ q \geq 0 }} \quad \langle y^*, -g(x) - q \rangle - f(x) \\ &= \inf_{\substack{ x \\ q \geq 0 }} \quad f(x) + \langle y^*, g(x) \rangle + \langle y^*, q \rangle. \end{align*} We can minimize first with respect to $q$, and we will get $-\infty$ unless $\langle y^*, q \rangle \geq 0$ for all $q \geq 0$. In other words, we will get $-\infty$ unless $y^* \geq 0$.

The dual function is \begin{equation*} -\phi^*(0,y^*) = \begin{cases} \inf_x \quad f(x) + \langle y^*, g(x) \rangle \quad \text{if } y^* \geq 0 \\ -\infty \quad \text{otherwise}. \end{cases} \end{equation*}

This is the expected result: it is exactly the Lagrangian dual function that the "penalty intuition" from the question produces.
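As a sanity check, specialize to a linear program by taking $f(x) = \langle c, x \rangle$ and $g(x) = Ax - b$ (with $x \in \mathbb R^n$, identifying the dual spaces with $\mathbb R^n$ and $\mathbb R^m$ in the usual way). The infimum over $x$ is $-\infty$ unless the linear term in $x$ vanishes, so \begin{equation*} -\phi^*(0,y^*) = \begin{cases} -\langle b, y^* \rangle \quad \text{if } A^\top y^* + c = 0 \text{ and } y^* \geq 0 \\ -\infty \quad \text{otherwise}, \end{cases} \end{equation*} and the dual problem is to maximize $-\langle b, y^* \rangle$ subject to $A^\top y^* + c = 0$ and $y^* \geq 0$, which is the classical LP dual.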