[Math] Converting standard constrained optimization problem into an unconstrained one

lagrange-multiplier · optimization · quadratic-programming

This question strikes me as something that must be a standard theorem, but I cannot find a good reference. I was fiddling with Lagrange multipliers and their use in converting constrained optimization problems into unconstrained ones, but I felt like I was missing the point of what seems to be the general case behind the questions in my review, so I pose this question. (This isn't homework, just review.)

Consider a quadratic $f : \mathbb{R}^{n} \rightarrow \mathbb{R}$ in standard form, $f(x) = c^{T}x + \frac{1}{2} x^{T}Cx$ with $C \in \mathbb{R}^{n \times n}$ symmetric, and let $b \in \mathbb{R}^{m}$ and $A \in \mathbb{R}^{m \times n}$, where $m < n$ and $A$ has full row rank.
How can we use QR decomposition (i.e. $A^{T} = QR$) to transform the constrained optimization problem
$$ \min_{x \in \mathbb{R}^{n}} \{f(x) : Ax=b\}$$ into an unconstrained optimization problem of a quadratic function on $n-m$ variables? What would this new unconstrained problem look like?

Best Answer

Let $ x_0$ be a particular solution to $ Ax = b $ and let $ M $ be a matrix whose columns form a basis of the null space of $ A $. Then every solution to $ Ax = b $ is equal to $ x_0 + My $ for some vector $ y $.

So your optimization problem is equivalent to minimizing $ f (x_0 + My) $ with respect to $ y $, which is an unconstrained problem.
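Expanding $f(x_0 + My)$ (assuming, as in the standard form above, that $C$ is symmetric) makes the reduced quadratic explicit:

$$ f(x_0 + My) = f(x_0) + (c + Cx_0)^{T}My + \frac{1}{2}\, y^{T}\!\left(M^{T}CM\right)y, $$

which is a quadratic in $y \in \mathbb{R}^{n-m}$ with Hessian $M^{T}CM$ (the reduced Hessian) and constant term $f(x_0)$, exactly the unconstrained problem in $n-m$ variables that the question asks for.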

You can express $M$ in terms of the QR factorization of $A^{T}$: if $A^{T} = QR$ with $Q \in \mathbb{R}^{n \times n}$ orthogonal and $R \in \mathbb{R}^{n \times m}$, then $A = R^{T}Q^{T}$, and since only the first $m$ columns of $Q$ enter this product, the last $n-m$ columns of $Q$ form an orthonormal basis of the null space of $A$. Take $M$ to be those $n-m$ columns.
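As a sketch of the whole construction, here is a NumPy implementation on randomly generated data (the matrices `A`, `C` and vectors `b`, `c` are made up for illustration; `C` is built symmetric positive definite so the reduced problem has a unique minimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
A = rng.standard_normal((m, n))      # full row rank with probability 1
b = rng.standard_normal(m)
c = rng.standard_normal(n)
B = rng.standard_normal((n, n))
C = B @ B.T + n * np.eye(n)          # symmetric positive definite

# Full QR of A^T: A^T = Q R with Q (n x n) orthogonal, R (n x m)
Q, R = np.linalg.qr(A.T, mode="complete")
Q1, Q2 = Q[:, :m], Q[:, m:]          # columns of Q2 span null(A)
R1 = R[:m, :]                        # upper-triangular m x m block

# Particular solution: A = R1^T Q1^T, so x0 = Q1 (R1^T)^{-1} b
x0 = Q1 @ np.linalg.solve(R1.T, b)
assert np.allclose(A @ x0, b)

# Reduced unconstrained quadratic in y ∈ R^{n-m}:
#   g(y) = f(x0) + (c + C x0)^T Q2 y + 1/2 y^T (Q2^T C Q2) y
H = Q2.T @ C @ Q2                    # reduced Hessian
g = Q2.T @ (c + C @ x0)              # reduced gradient at y = 0
y = np.linalg.solve(H, -g)           # minimizer of the reduced problem
x = x0 + Q2 @ y                      # minimizer of the constrained problem

# Checks: x is feasible, and ∇f(x) = c + Cx is orthogonal to null(A),
# i.e. ∇f(x) lies in range(A^T) (the KKT stationarity condition)
assert np.allclose(A @ x, b)
assert np.allclose(Q2.T @ (c + C @ x), 0)
```

Using the orthonormal `Q2` as $M$ is numerically convenient: the reduced Hessian $Q_2^{T}CQ_2$ inherits the conditioning of $C$ restricted to the null space, with no extra distortion from the basis.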
