Continuous water-filling

Tags: calculus, calculus-of-variations, convex-optimization, optimization

Disclaimer: I cross-posted this question on OR Stack Exchange, as suggested in the comments below, since I have not yet received a satisfactory answer on Math Stack Exchange.

Let $x\in\mathbb{R}^n$ be an optimization variable and $\alpha\in\mathbb{R}^n$ be a given $n$-dimensional vector. The standard water-filling problem is formulated as

\begin{equation}
\begin{array}{ll}
\underset{x}{\operatorname{minimize}} & -\displaystyle\sum_{i=1}^{n} \log \left(\alpha_{i}+x_{i}\right) \\
\text { subject to } & x \succeq 0, \quad \mathbf{1}^{T} x=1,
\end{array}
\end{equation}

and it has a well-known solution (see Boyd & Vandenberghe, Convex Optimization, p. 245).
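For concreteness, the known solution has the water-filling form $x_i=\max\left(0,\frac{1}{\lambda}-\alpha_i\right)$, where the water level $\frac{1}{\lambda}$ is fixed by the constraint $\mathbf{1}^Tx=1$. A minimal numerical sketch (the example $\alpha$ and the bisection tolerance are my own choices):

```python
import numpy as np

def water_fill(alpha, budget=1.0, tol=1e-12):
    """Solve  min -sum(log(alpha + x))  s.t.  x >= 0, sum(x) = budget.

    The optimum is x_i = max(0, nu - alpha_i); bisect on the water level nu.
    """
    alpha = np.asarray(alpha, dtype=float)
    lo, hi = alpha.min(), alpha.max() + budget   # the optimal nu lies in this bracket
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if np.maximum(0.0, nu - alpha).sum() > budget:
            hi = nu
        else:
            lo = nu
    return np.maximum(0.0, 0.5 * (lo + hi) - alpha)

x = water_fill([0.3, 0.8, 0.5, 1.2])   # example alpha_i, chosen arbitrarily
print(x, x.sum())                      # x sums to 1 up to the tolerance
```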

I was thinking about the case in which the communication channel slots are continuous. Intuitively, this may be thought of as the limit in which the sizes of the channel slots approach zero. How can this optimization problem be generalized to that case?

I believe that $\alpha$ and $x$, which from now on we denote by $\alpha(x)$ and $f(x)$ respectively, would in this case be continuous real-valued functions, and the problem would become:

\begin{equation}
\begin{array}{ll}
\underset{f \in \mathcal{F}}{\operatorname{minimize}} & -\displaystyle\int_{x} \log \left(\alpha(x)+f(x)\right)dx \\
\text { subject to } & f(x) \geq 0\; \forall x, \quad \int_x f(x)dx=1
\end{array}
\end{equation}

where $\mathcal{F}$ is a given class of functions (e.g. Hölder continuous, Lipschitz, etc.). We can also assume that the domain of the functions $f$ and $\alpha$ is compact, i.e. $x\in\mathcal{X}\subseteq \mathbb{R}$, with $\mathcal{X}$ a compact subset of $\mathbb{R}$.
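As a sanity check on this formulation (a sketch, assuming $\alpha$ and $f$ are regular enough for Riemann sums to converge): partition $\mathcal{X}$ into $n$ cells of width $\Delta=\frac{|\mathcal{X}|}{n}$ with midpoints $t_i$, and set $f_i=f(t_i)$, $\alpha_i=\alpha(t_i)$. The Riemann-sum discretization of the problem above is
$$ \underset{f_1,\dots,f_n}{\operatorname{minimize}}\ -\Delta\sum_{i=1}^{n}\log(\alpha_i+f_i) \quad\text{subject to}\quad f_i\ge0,\ \ \Delta\sum_{i=1}^{n}f_i=1\ , $$
which is exactly the finite water-filling problem, with budget $\frac{1}{\Delta}$ and a constant factor $\Delta$ in the objective, neither of which changes the form of the solution. So the continuous problem is at least the natural $\Delta\to0$ limit of the discrete one.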

Is this the right path to follow? Do you have any other ideas on how to solve these types of problems? It looks like it is related to the calculus of variations, but I have never seen problems of this type before and I have no idea how to solve them. Thanks!

Best Answer

You're correct that the problem falls into the class treated by the calculus of variations. A nice modern approach to this subject is to treat the objective as a function on a topological vector space, which contains the space of functions over which you want to optimise the objective. The optimality conditions are then often quite analogous to the Karush-Kuhn-Tucker conditions for optimisation over a finite-dimensional vector space, although there can be some messy technical details which depend on the specific class of functions over which you want to optimise. David Luenberger's Optimization by Vector Space Methods contains a nice exposition of this approach.

For the specific problem you've given, the optimality conditions are similar to those of the finite-dimensional version. You introduce Lagrange multipliers $\lambda\in\mathbb{R}$ for the constraint $$ \int_{\mathcal{X}}f(x)\,dx=1\ , $$ and $\mu\in\mathcal{F}^*$ for the constraint $$ f(x)\ge0\ , $$ where $\mathcal{F}^*$ is the function-space dual of $\mathcal{F}$. This is where some of the messy technical details mentioned above come in. The duals of spaces of continuous functions are typically spaces of finitely additive measures, which can be difficult to deal with. A way out, in this case, is to relax the conditions on $f$ by allowing it to be any function that's square-integrable with respect to Lebesgue measure over $\mathcal{X}$. The space $\mathcal{H}$ of functions square-integrable over $\mathcal{X}$ is a real Hilbert space, which is self-dual. The Lagrange multiplier $\mu\in\mathcal{H}^*$ corresponding to the constraint $f(x)\ge0$ is now itself just a square-integrable function $\mu:\mathcal{X}\rightarrow\mathbb{R}$.

The Lagrangian for the problem becomes $$ -\int_{\mathcal{X}}\log(\alpha(x)+f(x))\,dx+\lambda\left(\int_{\mathcal{X}}f(x)\,dx-1\right)-\int_{\mathcal{X}}\mu(x)f(x)\,dx\ , $$ and the optimality conditions are $$ \frac{-1}{\alpha(x)+f(x)}+\lambda-\mu(x)=0\\ \mu(x)\ge0,\ \ f(x)\ge0,\ \ \mu(x)f(x)=0,\ \ \int_{\mathcal{X}}f(x)\,dx=1\ , $$ which are very closely analogous to those of the finite-dimensional case.
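For completeness, here is a sketch of where the pointwise first condition comes from: setting the Gateaux derivative of the Lagrangian $L$ at $f$ in an arbitrary direction $h\in\mathcal{H}$ to zero gives
$$ \lim_{\epsilon\to0}\frac{L(f+\epsilon h)-L(f)}{\epsilon}=\int_{\mathcal{X}}\left(\frac{-1}{\alpha(x)+f(x)}+\lambda-\mu(x)\right)h(x)\,dx=0\ , $$
and since this must hold for every direction $h$, the bracketed integrand must vanish almost everywhere on $\mathcal{X}$ (the fundamental lemma of the calculus of variations).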

Now let $\mathcal{X}_1=\{ x\in\mathcal{X}\,|\,f(x)>0\,\}$. The complementary slackness condition, $\mu(x)f(x)=0$, tells us that $\mu(x)=0$ for $x\in\mathcal{X}_1$, and the first optimality condition then gives \begin{align} \frac{1}{\alpha(x)+f(x)}&=\lambda\ \text{, or}\\ f(x)&=\frac{1}{\lambda}-\alpha(x)>0 \end{align} for $x\in\mathcal{X}_1$. For $x\in\mathcal{X}\setminus\mathcal{X}_1$, where $f(x)=0$, we must have \begin{align} \lambda-\frac{1}{\alpha(x)}&=\mu(x)\ge0\ \text{, or}\\ \frac{1}{\lambda}&\le\alpha(x)\ , \end{align} since $\lambda$ must be positive. (Positivity of $\lambda$ follows from the first display above: $\mathcal{X}_1$ is non-empty, because $\int_{\mathcal{X}}f(x)\,dx=1$, and $\lambda=\frac{1}{\alpha(x)+f(x)}>0$ there.) It follows from this that $f(x)=\max\left(0,\frac{1}{\lambda}-\alpha(x)\right)$. The constraint $$ 1=\int_\mathcal{X}f(x)\,dx=\int_\mathcal{X}\max\left(0,\frac{1}{\lambda}-\alpha(x)\right)dx $$ now allows us to determine the value of $\lambda$. The function $\phi(\nu)=\int_{\mathcal{X}}\max\left(0,\nu-\alpha(x)\right)dx$ is a continuous, nondecreasing function of $\nu$, strictly increasing for $\nu>\min_{x\in\mathcal{X}}\alpha(x)$, with $\phi(0)=0$ and $$ \phi\left(\frac{1}{\int_{\mathcal{X}}1\,dx}+\max_{x\in\mathcal{X}}\alpha(x)\right)\ge1\ , $$ so there is a unique $\nu^*>0$ such that $\phi\big(\nu^*\big)=1$, and then $\lambda=\frac{1}{\nu^*}$.
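Here is a minimal numerical sketch of this last step, with $\mathcal{X}=[0,1]$ and $\alpha(x)=0.2+x^2$ chosen purely for illustration:

```python
import numpy as np

# Illustrative choices, not from the question: X = [0, 1], alpha(x) = 0.2 + x**2.
n = 100_000
xs = (np.arange(n) + 0.5) / n          # midpoint grid on [0, 1]
dx = 1.0 / n
alpha = 0.2 + xs**2

def phi(nu):
    """phi(nu) = integral over X of max(0, nu - alpha(x)) dx (midpoint rule)."""
    return np.maximum(0.0, nu - alpha).sum() * dx

# Bisect for the unique nu* with phi(nu*) = 1; phi is continuous and nondecreasing,
# phi(0) = 0, and phi(max alpha + 1/|X|) >= 1 by the bound in the answer.
lo, hi = 0.0, alpha.max() + 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 1.0 else (lo, mid)

nu_star = 0.5 * (lo + hi)
f = np.maximum(0.0, nu_star - alpha)   # the optimal water-filling density
print(nu_star, f.sum() * dx)           # the integral of f is 1 up to tolerance
```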

Because the problem is convex, with the objective $-\int_{\mathcal{X}}\log(\alpha(x)+f(x))\,dx$ strictly convex in $f$, the optimality conditions are both necessary and sufficient, so $f(x)=\max\left(0, \frac{1}{\nu^*}-\alpha(x)\right)$ achieves the minimum of the objective function over the class of square-integrable functions over $\mathcal{X}$. Since this $f$ happens to be continuous, and all continuous functions over the compact set $\mathcal{X}$ are square-integrable, it must also achieve the minimum of the objective over the set of continuous functions on $\mathcal{X}$.
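And a quick numerical check of the sufficiency claim, reusing the illustrative $\alpha$ from the previous snippet: the water-filling density should give a lower objective value than any other feasible density, e.g. the uniform one.

```python
import numpy as np

# Same illustrative alpha(x) = 0.2 + x**2 on X = [0, 1] as in the previous snippet.
n = 100_000
xs = (np.arange(n) + 0.5) / n
dx = 1.0 / n
alpha = 0.2 + xs**2

def phi(nu):
    return np.maximum(0.0, nu - alpha).sum() * dx

lo, hi = 0.0, alpha.max() + 1.0        # bisection for nu*, as before
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) < 1.0 else (lo, mid)
f_star = np.maximum(0.0, 0.5 * (lo + hi) - alpha)

def objective(f):
    return -np.log(alpha + f).sum() * dx

print(objective(f_star) < objective(np.ones(n)))   # True: f* beats the uniform density
```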
