Let $p=\operatorname{rank}(AB)$. Since $p \leq \min(\operatorname{rank}(A),\operatorname{rank}(B))$, the singular values $\sigma_i(AB)$ vanish for $i>p$ while the terms $\sigma_i(A)\sigma_i(B)$ are nonnegative, so it suffices to prove that $$\sum_{i=1}^p\sigma_i(AB) \leq \sum_{i=1}^p\sigma_i(A)\sigma_i(B).$$
Consider the SVD of $AB$,
$$AB=\left[\begin{array}{cc}U & \widetilde{U} \end{array}\right]\left[\begin{array}{cc} \Sigma & 0 \\ 0 & 0 \end{array}\right]\left[\begin{array}{c} V^T \\ \widetilde{V}^T \end{array}\right]=U\Sigma V^T$$
where $\Sigma=\operatorname{diag}(\sigma_1(AB),\ldots,\sigma_p(AB))$.
For any matrix $P$, let $P_k$ denote the submatrix consisting of the first $k$ columns of $P$. Fix $k \in \left\{ 1,\ldots,p\right\}$; then
$$U_k^T(AB)V_k=\operatorname{diag}(\sigma_1(AB),\ldots,\sigma_k(AB)).$$
Next, we consider the SVD of $U_k^TA,$
$$U_k^TA=R \left[\begin{array}{cc}S & 0 \end{array}\right]\left[\begin{array}{c} W^T\\ \widetilde{W}^T\end{array} \right]=RSW^T.$$
We have
$$U_k^TAA^TU_k=RS^2R^T$$
Since $U_k$ has orthonormal columns, $\sigma_i(U_k^TA) \leq \sigma_i(A)$ for each $i$ (Cauchy interlacing applied to $U_k^TAA^TU_k$ and $AA^T$); because $R$ is orthogonal, $\det(RS^2R^T)=\det(S^2)=\prod_{i=1}^k\sigma_i(U_k^TA)^2$, and hence
$$\det(RS^2R^T)\leq\prod_{i=1}^k \sigma_i(A)^2$$
and, as both sides are nonnegative, taking square roots yields
$$\det(RSR^T) \leq \prod_{i=1}^k \sigma_i(A).$$
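As a quick sanity check of this bound, here is a short NumPy sketch (entirely my own illustration; the matrix sizes and seed are arbitrary) that builds $U_k$ from the SVD of $AB$, takes the SVD of $U_k^TA$, and verifies $\det(RSR^T) \leq \prod_{i=1}^k \sigma_i(A)$:

```python
# Numerical check of det(R S R^T) <= prod_{i<=k} sigma_i(A).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
B = rng.standard_normal((5, 4))

U, _, _ = np.linalg.svd(A @ B)             # left singular vectors of AB
sA = np.linalg.svd(A, compute_uv=False)    # singular values of A, descending
p = np.linalg.matrix_rank(A @ B)

for k in range(1, p + 1):
    Uk = U[:, :k]                          # first k columns of U
    R, S, Wt = np.linalg.svd(Uk.T @ A)     # SVD of U_k^T A, as in the text
    lhs = np.linalg.det(R @ np.diag(S) @ R.T)  # equals prod(S); R is orthogonal
    assert lhs <= np.prod(sA[:k]) + 1e-10
```

Returning to the proof, for this fixed $k$ we compute: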
\begin{align*}
\prod_{i=1}^k \sigma_i(AB) &=\det(U_k^TABV_k)\\
&= \det(RSW^TBV_k) \\
&= \det(RSR^T)\det(RW^TBV_k)\\
&\leq \prod_{i=1}^k \sigma_i(A)\sigma_i(B).
\end{align*}
The last inequality combines $\det(RSR^T) \leq \prod_{i=1}^k \sigma_i(A)$ with the analogous bound $\det(RW^TBV_k) \leq \prod_{i=1}^k \sigma_i(B)$, which holds because $RW^T$ has orthonormal rows and $V_k$ has orthonormal columns. Since $k \leq p = \operatorname{rank}(AB)$, all the singular values involved are positive, so taking logarithms gives, for all $k \in \left\{1, \ldots, p\right\}$,
$$\sum_{i=1}^k \log(\sigma_i(AB)) \leq \sum_{i=1}^k \log(\sigma_i(A)\sigma_i(B)).$$
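This partial-sum-of-logs inequality can be probed numerically; the sketch below (my own, with arbitrary sizes and seed, not part of the proof) checks it for every $k \leq p$ on a random pair of square matrices:

```python
# Numerical check of the partial log-sum inequality above.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

sAB = np.linalg.svd(A @ B, compute_uv=False)   # sigma_i(AB), descending
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)

p = np.linalg.matrix_rank(A @ B)               # = 5 almost surely here
for k in range(1, p + 1):
    assert np.log(sAB[:k]).sum() <= np.log(sA[:k] * sB[:k]).sum() + 1e-9
```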
To remove the logarithms I will use a majorization trick: I construct two vectors $a,b \in \mathbb{R}^{p+1}$ such that $a \succ b$. The vectors satisfy the following conditions:
\begin{align*}
\text{1. }& a_1 \geq \ldots \geq a_{p+1},\\
\text{2. }& b_1 \geq \ldots \geq b_{p+1}, \\
\text{3. }& \sum_{i=1}^k b_i \leq \sum_{i=1}^k a_i, \forall k \in \left\{1,\ldots,p\right\}, \\
\text{4. }& \sum_{i=1}^{p+1} b_i = \sum_{i=1}^{p+1} a_i.
\end{align*}
Under the above conditions, $\sum_{i=1}^{p+1} \exp(b_i) \leq \sum_{i=1}^{p+1} \exp(a_i)$, since the exponential function is convex (this is Karamata's majorization inequality).
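As a concrete toy instance of this fact (numbers invented purely for illustration): $a=(3,1,0)$ majorizes $b=(2,1,1)$, and indeed the exponential sums compare the right way:

```python
# Toy check of "majorization + convexity of exp" on invented numbers.
import numpy as np

a = np.array([3.0, 1.0, 0.0])   # sorted descending
b = np.array([2.0, 1.0, 1.0])   # sorted descending
assert all(b[:k].sum() <= a[:k].sum() for k in (1, 2))   # partial sums
assert b.sum() == a.sum()                                # equal totals
assert np.exp(b).sum() <= np.exp(a).sum()                # Karamata for exp
```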
We let
$$b=\left(\log(\sigma_1(AB)),\ldots,\log(\sigma_p(AB)),\min(a_p,b_p)\right),$$
$$a=\left(\log(\sigma_1(A)\sigma_1(B)),\ldots,\log(\sigma_p(A)\sigma_p(B)),\sum_{i=1}^{p+1}b_i-\sum_{i=1}^p a_i\right).$$
To verify $\textbf{Condition 1}$: the entries $a_1 \geq \ldots \geq a_p$ are already nonincreasing because the singular values are, so I only need to show that $a_p \geq a_{p+1}=\sum_{i=1}^{p+1}b_i-\sum_{i=1}^p a_i$, which is equivalent to
$$a_p-\sum_{i=1}^{p+1}b_i+\sum_{i=1}^p a_i=(a_p-b_{p+1})+\left(\sum_{i=1}^pa_i-\sum_{i=1}^p b_i\right)\geq 0.$$
Here $b_{p+1} \leq a_p$ by the definition of $b_{p+1}$, and we have already proven that $\left(\sum_{i=1}^pa_i-\sum_{i=1}^p b_i\right) \geq 0$.
To verify $\textbf{Condition 2}$: the entries $b_1 \geq \ldots \geq b_p$ are nonincreasing, and $b_p \geq b_{p+1}$ again by the definition of $b_{p+1}$.
$\textbf{Condition 3}$ was proven earlier.
To check $\textbf{Condition 4}$,
$$\sum_{i=1}^{p+1}a_i=\sum_{i=1}^{p}a_i+a_{p+1}=\sum_{i=1}^{p}a_i+\left(\sum_{i=1}^{p+1}b_i-\sum_{i=1}^p a_i\right)=\sum_{i=1}^{p+1}b_i.$$
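All four conditions can also be confirmed numerically; the following sketch (my own) builds $a$ and $b$ exactly as above from a random pair of matrices and asserts Conditions 1-4:

```python
# Build the vectors a, b of the construction and check Conditions 1-4.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

b = np.log(np.linalg.svd(A @ B, compute_uv=False))   # log sigma_i(AB)
sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
a = np.log(sA * sB)                                  # log sigma_i(A) sigma_i(B)
p = len(b)                                           # rank(AB), almost surely 4

b = np.append(b, min(a[-1], b[-1]))    # b_{p+1} = min(a_p, b_p)
a = np.append(a, b.sum() - a.sum())    # a_{p+1} = sum b - sum_{i<=p} a_i

assert np.all(np.diff(a) <= 1e-12)                          # Condition 1
assert np.all(np.diff(b) <= 1e-12)                          # Condition 2
assert np.all(np.cumsum(b[:p]) <= np.cumsum(a[:p]) + 1e-9)  # Condition 3
assert abs(a.sum() - b.sum()) <= 1e-9                       # Condition 4
```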
As a result, we have
$$\sum_{i=1}^{p+1} \exp(b_i) \leq \sum_{i=1}^{p+1} \exp(a_i)$$
which is equivalent to
$$\sum_{i=1}^{p} \exp(b_i) \leq \sum_{i=1}^{p} \exp(a_i)+\exp(a_{p+1})-\exp(b_{p+1}).$$
Thus, recalling that $\exp(b_i)=\sigma_i(AB)$ and $\exp(a_i)=\sigma_i(A)\sigma_i(B)$ for $i \leq p$, we have
\begin{align*}
\sum_{i=1}^{p} \sigma_i(AB) &\leq \sum_{i=1}^{p} \sigma_i(A)\sigma_i(B)+\exp(a_{p+1})-\exp(b_{p+1})\\
&=\sum_{i=1}^{p} \sigma_i(A)\sigma_i(B)+\exp\left(\sum_{i=1}^p(b_i-a_i)+b_{p+1}\right)-\exp(b_{p+1})\\
&=\sum_{i=1}^{p} \sigma_i(A)\sigma_i(B)+\exp(b_{p+1})\left(\exp \left(\sum_{i=1}^p(b_i-a_i)\right)-1\right)\\
&\leq \sum_{i=1}^{p} \sigma_i(A)\sigma_i(B)+\exp(b_{p+1})\left(\exp(0)-1\right)\\
&=\sum_{i=1}^{p} \sigma_i(A)\sigma_i(B).
\end{align*}
Hence we are done.
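As an end-to-end sanity check of what we just proved, the sketch below (my own illustration; shapes and seed are arbitrary) tests $\sum_i \sigma_i(AB) \leq \sum_i \sigma_i(A)\sigma_i(B)$ on random rectangular factors:

```python
# End-to-end check: sum of singular values of AB vs. sum of products.
import numpy as np

rng = np.random.default_rng(3)
for _ in range(200):
    m, n, l = rng.integers(2, 7, size=3)
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, l))
    sAB = np.linalg.svd(A @ B, compute_uv=False)
    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)
    r = min(len(sA), len(sB))        # >= rank(AB); extra terms of sAB vanish
    assert sAB.sum() <= (sA[:r] * sB[:r]).sum() + 1e-9
```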
A bonus that I learned from answering this question: if the following conditions hold (note that no equality of the total sums is required),
\begin{align*}
\text{A. }& a_1 \geq \ldots \geq a_{p},\\
\text{B. }& b_1 \geq \ldots \geq b_{p}, \\
\text{C. }& \sum_{i=1}^k b_i \leq \sum_{i=1}^k a_i, \forall k \in \left\{1,\ldots,p\right\},
\end{align*}
then $$\sum_{i=1}^k \exp(b_i) \leq \sum_{i=1}^k \exp(a_i), \quad \forall k \in \left\{1,\ldots,p\right\},$$ because $\exp$ is convex and increasing (weak majorization suffices for increasing convex functions).
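This weak-majorization bonus can be probed empirically as well; the sketch below (my own) draws random sorted vectors, keeps the pairs satisfying Condition C, and checks the conclusion for every $k$:

```python
# Empirical probe of the bonus claim (weak majorization + exp).
import numpy as np

rng = np.random.default_rng(4)
for _ in range(5000):
    a = np.sort(rng.standard_normal(5))[::-1]   # Condition A
    b = np.sort(rng.standard_normal(5))[::-1]   # Condition B
    if np.all(np.cumsum(b) <= np.cumsum(a)):    # Condition C
        assert np.all(np.cumsum(np.exp(b)) <= np.cumsum(np.exp(a)) + 1e-9)
```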
It is not clear that your proof covers all possible cases. Also, it can be simplified a bit by using previous results instead of repeating arguments.
You say
> The problem is trivial (equality holds) when the value of both integrals is $0$.
but what you actually need is that equality holds if either of the integrals on the right-hand side is zero. That is still correct because it implies that either $f$ or $g$ is zero almost everywhere on $[a,b]$, so that the integral on the left-hand side is zero as well.
Now you can consider the case that both $\int_a^b f(x) \, dx$ and $\int_a^b g(x) \,dx$ are non-zero. You do that by considering two cases:
Case 1: $\int_a^b f(x) \, dx = \int_a^b g(x) \,dx = 1$.
Case 2: The general case, i.e., $\int_a^b f(x) \, dx$ and $\int_a^b g(x) \,dx \ne 1$.
If the second case means that both integrals are not equal to one, then your proof does not cover the cases that one integral is equal to one and the other is not.
Actually, your proof of “Case 2” works as long as both integrals on the right are different from zero, which means that “Case 1” is not needed.
Note also that you use Young's inequality in both cases, so you are repeating some arguments.
What you can do instead, to clarify and simplify your proof, is one of the following.
Either:
- Prove Hölder's inequality for the case that $\int_a^b f(x) \, dx = 0 $ or $\int_a^b g(x) \, dx = 0$.
- Then prove Hölder's inequality for the case that $\int_a^b f(x) \, dx \ne 0 $ and $\int_a^b g(x) \, dx \ne 0$. This would be what you wrote in your “Case 2,” using Young's inequality.
Or:
- Prove Hölder's inequality for the case that $\int_a^b f(x) \, dx = 0 $ or $\int_a^b g(x) \, dx = 0$.
- Then prove Hölder's inequality for the case that $\int_a^b f(x) \, dx = 1 $ and $\int_a^b g(x) \, dx = 1$. This would be what you wrote in your “Case 1,” using Young's inequality.
- Finally prove Hölder's inequality for the case that $\int_a^b f(x) \, dx \ne 0 $ and $\int_a^b g(x) \, dx \ne 0$. Here you should not repeat the argument from part 2, but use that result instead.
Let me clarify what I mean by “using that result instead.” If $\alpha > 0$ and $\beta > 0$ are defined such that
$$
\int_a^b f(x)^p \, dx = \alpha^p \, , \, \int_a^b g(x)^q \, dx = \beta^q
$$
then (as you say)
$$
\int_a^b \left(\frac{f(x)}{\alpha}\right)^p \, dx = \int_a^b \left(\frac{g(x)}{\beta}\right)^q \, dx = 1 \, .
$$
From the previous part we know that Hölder's inequality holds for the functions $f/\alpha$ and $g/\beta$, i.e. that
$$
\int_a^b \frac{f(x)g(x)}{\alpha \beta} \, dx \le 1 \, .
$$
It follows that
$$
\int_a^b f(x) g(x) \, dx \le \alpha \beta
$$
and that is exactly Hölder's inequality for $f$ and $g$.
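For completeness, here is a small numerical illustration of this rescaling step (my own sketch; the grid, the exponents $p=3$, $q=3/2$, and the test functions are arbitrary choices), approximating the integrals on $[0,1]$ by midpoint sums:

```python
# Midpoint-rule check of Hölder via the normalization f/alpha, g/beta.
import numpy as np

def integral(h):
    # Midpoint-rule approximation of the integral of h over [0, 1].
    return h.mean()

p, q = 3.0, 1.5                               # conjugate exponents: 1/p + 1/q = 1
x = (np.arange(100_000) + 0.5) / 100_000      # midpoints of [0, 1] subintervals

f = np.exp(x)                                 # arbitrary nonnegative test functions
g = 1.0 + x**2

alpha = integral(f**p) ** (1 / p)
beta = integral(g**q) ** (1 / q)

assert abs(integral((f / alpha)**p) - 1.0) < 1e-9   # normalized as in the text
assert abs(integral((g / beta)**q) - 1.0) < 1e-9
assert integral(f * g) <= alpha * beta + 1e-9       # Hölder for f and g
```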
Best Answer
Actually Young's inequality (the pointwise inequality behind Hölder's) $$uv\le{1\over p}u^p+{1\over q}v^q, \qquad u,v\ge 0,\ q={p\over p-1},$$ can be applied, but for proving that $e^x$ is convex. We can then use this to prove the claim, since the function in the OP is a sum of $n$ convex functions. Concerning $e^x$, we have $$e^{\alpha x+(1-\alpha)y}= e^{\alpha x}e^{(1-\alpha)y}.$$ Then for $p=\alpha^{-1}$, $u=e^{\alpha x}$, $v=e^{(1-\alpha)y}$, Young's inequality gives $$e^{\alpha x}e^{(1-\alpha)y}\le \alpha e^x+(1-\alpha)e^y.$$
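A tiny numerical check (my own) that the resulting inequality, i.e. the convexity of $e^x$, holds on random inputs:

```python
# Check e^{a x + (1-a) y} <= a e^x + (1-a) e^y on random (x, y, a).
import numpy as np

rng = np.random.default_rng(5)
for _ in range(10_000):
    x, y = rng.standard_normal(2)
    a = rng.uniform(0.0, 1.0)
    assert np.exp(a*x + (1-a)*y) <= a*np.exp(x) + (1-a)*np.exp(y) + 1e-12
```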