Give a bound of $\frac{\int_{[0,1]^n}\exp(2\beta f(x))d x_1\ldots d x_n}{\beta(\int_{[0,1]^n}\exp(\beta f(x))d x_1\ldots d x_n)^2} $ for large $\beta$

Tags: calculus, exponential-function, functional-analysis, integration

Let $f:\mathbb{R}^n\to\mathbb{R}$ be a continuous function on the compact set $[0,1]^n$, with $0<C_1\leq f(x)\leq C_2$ on this set.

For all $\beta\in[1,\infty)$, I am trying to give a bound (independent of $\beta$) on
\begin{equation}
\frac{\int_{[0,1]^n}\exp(2\beta f(x))d x_1\ldots d x_n}{\beta(\int_{[0,1]^n}\exp(\beta f(x))d x_1\ldots d x_n)^2}.
\tag{1}
\end{equation}

Or equivalently, to show that
\begin{equation}
\limsup_{\beta\rightarrow\infty}\frac{\int_{[0,1]^n}\exp(2\beta f(x))d x_1\ldots d x_n}{\beta(\int_{[0,1]^n}\exp(\beta f(x))d x_1\ldots d x_n)^2}<\infty.
\tag{2}
\end{equation}

When $n=1$, I can show that (1) is bounded by a constant depending only on $C_1$, $C_2$, and $n$. My proof sketch represents the numerator and the denominator each as a series, obtained by integration by parts and induction. However, the integration by parts is hard to carry out for $n>1$.

Now, I want to extend my result to the case $n>1$. I used to think the extension was easy. Here is my thought: $f$ is continuous and bounded on $[0,1]^n$, so we can rearrange the values of $f$ from small to large and project these increasing values onto a one-dimensional continuous function, transforming the multi-dimensional problem into a one-dimensional one. However, I have failed to find a reference for such a construction. Can someone give a bound on (1) using the fact that (1) is bounded when $n=1$? I think induction may be helpful.

I also have another thought. I try to prove that there exists a one-dimensional function $g$ ($g$ has to be independent of $\beta$) such that
$\int_{[0,1]}\exp(2\beta g(y))\,dy=\int_{[0,1]^n}\exp(2\beta f(x))\,d x_1\ldots d x_n$
and
$\int_{[0,1]}\exp(\beta g(y))\,dy=\int_{[0,1]^n}\exp(\beta f(x))\,d x_1\ldots d x_n$. Then the multi-dimensional problem again reduces to a one-dimensional one. This approach also seems feasible.
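As a quick numerical sanity check of this idea (not a proof), one can sort the grid values of an example $f$ into an increasing one-dimensional function $g$; since $g$ and $f$ then take the same values on sets of the same measure, the integrals of $\exp(\beta f)$ and $\exp(\beta g)$ agree for every $\beta$. The particular $f$ below is an arbitrary choice for illustration, not from the question:

```python
import numpy as np

# Numerical sanity check (not a proof) of the rearrangement idea:
# sorting the grid values of f into an increasing 1-D function g
# leaves every integral of the form ∫ exp(beta * f) unchanged,
# since g and f take the same values on sets of the same measure.
n = 400
x = (np.arange(n) + 0.5) / n               # midpoint grid on [0, 1]
X1, X2 = np.meshgrid(x, x)
f = 1.0 + X1**2 + np.sin(3.0 * X2) ** 2    # example f with 0 < C1 <= f <= C2
g = np.sort(f.ravel())                      # increasing rearrangement of f's values

for beta in (1.0, 5.0, 20.0):
    lhs = np.exp(beta * g).mean()           # ≈ ∫_0^1 exp(beta g(y)) dy
    rhs = np.exp(beta * f).mean()           # ≈ ∫_{[0,1]^2} exp(beta f(x)) dx
    assert abs(lhs - rhs) <= 1e-9 * rhs
```

Here $g$ is (a discretization of) the quantile function of $f$'s values, which is exactly the "rearrangement from small to large" mentioned above.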

Best Answer

I started out writing down my approach; it is not complete in any way, but it contains some ideas which might be helpful.

Let's define $$E[h(x)] = \int_{[0,1]^n} h(x)\,d^nx$$ to make the notation a little easier, and to hint that my approach is related to probability.

Note of interest: In probability theory, $[0,1]^n$ would be the probability space with $X$ indicating a random point drawn uniformly from $[0,1]^n$, and $H=h(X)$ a random variable where it is the probability distribution of $H$ that is of interest rather than the points $X$. The probability distribution is exactly the rearrangement of values from small to big that you refer to at the end. One would write $E[H]=E[h(X)]$ instead of $E[h(x)]$, but I'll stick with the latter for now.

For $f(x)$, we define the moment generating function and cumulant generating function as $$ M(s) = E\left[\exp(sf(x))\right] \quad\mbox{and}\quad m(s) = \ln M(s) \quad\mbox{respectively}. $$ These names reflect that the Taylor coefficients of $M(s)$ around $s=0$ are the moments $E[f(x)^k]/k!$, while those of $m(s)$ similarly correspond to mean, variance, skewness, kurtosis, etc. In particular, $m(0)=0$ and $m'(0)=\mu=E[f(x)]$.
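These properties can be illustrated numerically (with an arbitrarily chosen example $f$ on $[0,1]^2$, not from the post): $m(0)=0$, $m'(0)=\mu$, and, anticipating the next point, $m$ is convex.

```python
import numpy as np

# Numerical illustration of the CGF properties for an example f on [0,1]^2:
# m(0) = 0, m'(0) = mu = E[f(x)], and m is convex.
n = 300
x = (np.arange(n) + 0.5) / n
X1, X2 = np.meshgrid(x, x)
f = (X1 + 2 * X2**2).ravel()               # example f, arbitrary choice

def m(s):
    return np.log(np.mean(np.exp(s * f)))   # cumulant generating function

h = 1e-4
assert abs(m(0.0)) < 1e-12                              # m(0) = 0
assert abs((m(h) - m(-h)) / (2 * h) - f.mean()) < 1e-6  # m'(0) = mu
s_grid = np.linspace(-3, 3, 61)
second_diff = np.diff([m(s) for s in s_grid], 2)        # discrete convexity check
assert (second_diff >= -1e-12).all()
```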

More importantly, $m(s)$ is convex; i.e., $M(s)$ is log-convex.

Given this terminology, the question is to find bounds on $R(\beta)=M(2\beta)/(\beta M(\beta)^2)$. Taking the logarithm, this corresponds to finding bounds on $r(\beta)=m(2\beta)-2m(\beta)-\ln\beta$. So we can start off finding bounds on $r(\beta)$ using only that $m(s)$ is convex, and next try to construct cases that approach these bounds.
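As a concrete one-dimensional illustration (my example, $f(x)=x$ on $[0,1]$, not from the post): the MGF has the closed form $M(\beta)=(e^\beta-1)/\beta$, and $r(\beta)$ converges to $\ln(1/2)$, so $R(\beta)$ stays bounded for this $f$.

```python
import numpy as np

# For f(x) = x on [0,1] (n = 1) the MGF is M(beta) = (e^beta - 1)/beta,
# so r(beta) = m(2 beta) - 2 m(beta) - ln(beta) has a closed form.
# Evaluated stably below, it converges to ln(1/2) as beta grows,
# illustrating that R(beta) = M(2 beta)/(beta M(beta)^2) stays bounded.
def r(beta):
    # ln(e^a - 1) = a + log1p(-e^{-a}), stable for large a
    ln_M2 = (2 * beta + np.log1p(-np.exp(-2 * beta))) - np.log(2 * beta)
    ln_M1 = (beta + np.log1p(-np.exp(-beta))) - np.log(beta)
    return ln_M2 - 2 * ln_M1 - np.log(beta)

for beta in (1.0, 10.0, 100.0, 1000.0):
    print(beta, r(beta))
assert abs(r(1000.0) - np.log(0.5)) < 1e-6
```

Here the maximum of $f$ sits at the endpoint with $f'\ne0$, which matches the $k=1$ borderline case discussed at the end.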

If we let $A=\inf f$ and $B=\sup f$, we have $C_1\le A\le f(x)\le B\le C_2$, which is the strictest possible bound on $f$. Note that I will not use that these must be positive; in fact, adding any constant $C$ to $f$ has no effect on $r(\beta)$, so we could have assumed $\mu=E[f(x)]=0$ without loss of generality, except that $C_1$ would then be negative.

As $s$ increases, $m(s)/s$ will approach $B$; as $s$ decreases, $m(s)/s$ will approach $A$. To see this, note that for any $\epsilon>0$ the set where $f(x)>B-\epsilon$ has positive volume, and its contribution dominates as $s$ increases. Since $m(s)$ is convex, this also means $A\le m'(s)\le B$.

Since $m(s)$ is convex, $m(0)=0$, and $m'(0)=\mu$, we must have $m(s)\ge\mu s$ for all $s$.

Next, we can draw the tangent to $m(s)$ at $s=2\beta$, and because of convexity, $m(s)$ cannot go below this line: i.e., $m(s)\ge m(2\beta)+(s-2\beta)m'(2\beta)$. This implies that $m(\beta)\ge m(2\beta)-\beta m'(2\beta)$.

Adding $m(\beta)\ge\mu\beta$ to $m(\beta)\ge m(2\beta)-\beta m'(2\beta)$ gives us $2m(\beta)\ge m(2\beta)-\beta m'(2\beta)+\mu\beta$, which makes $m(2\beta)-2m(\beta)\le \beta\cdot[m'(2\beta)-\mu]$. This gives us the bounds $$ m(2\beta)-2m(\beta)\le \begin{cases} \beta\cdot(B-\mu)&\text{for $\beta\ge 0$,} \\ |\beta|\cdot(\mu-A)&\text{for $\beta\le 0$.} \end{cases} $$ These bounds may not be achievable by any actual $f(x)$. However, some limiting properties may be the same as for actual $f(x)$.
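The $\beta\ge0$ bound can be checked numerically for any concrete $f$; below is a sketch with an arbitrary example $f$ of my choosing, using a log-sum-exp evaluation of $m(s)$ for numerical stability at large $\beta$.

```python
import numpy as np

# Numerical check (for one example f, arbitrary choice) of the convexity
# bound m(2 beta) - 2 m(beta) <= beta * (B - mu) derived above, with
# m(s) = ln E[exp(s f(x))], B = sup f, mu = E[f(x)].
def log_mean_exp(a):
    amax = a.max()                          # shift to avoid overflow
    return amax + np.log(np.mean(np.exp(a - amax)))

n = 500
x = (np.arange(n) + 0.5) / n
X1, X2 = np.meshgrid(x, x)
f = (np.cos(2 * X1) + X2**3).ravel()        # example f; its sign does not matter
B, mu = f.max(), f.mean()

for beta in (0.5, 2.0, 10.0, 50.0):
    gap = log_mean_exp(2 * beta * f) - 2 * log_mean_exp(beta * f)
    assert 0.0 <= gap <= beta * (B - mu) + 1e-12
```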

Next, for $\beta>0$, $$ \ln\frac{M(2\beta)}{M(\beta)^2} = m(2\beta)-2m(\beta) = 2\,\int_0^\beta m'(2s)-m'(s)\,ds, $$ the size of which depends on how quickly $m'(s)$ approaches the limit $B$ as $s$ increases. This in turn depends on how quickly the volume of the set where $f(x)>B-\epsilon$ shrinks as $\epsilon\rightarrow0$.

In particular, if the volume of the set where $f(x)>B-\epsilon$ is proportional to $\epsilon^k$ for some $k\ge0$ as $\epsilon\rightarrow0$, we get $M(s)\sim e^{Bs}/s^k$ as $s$ increases, which makes $M(2s)/M(s)^2\sim s^k$. This indicates that $M(2s)/(sM(s)^2)$ will be bounded if $k<1$, possibly if $k=1$, but not when $k>1$.
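This scaling can be probed numerically. Taking $f(x)=-x^{1/k}$ on $[0,1]$ (my example, with $B=0$; the volume of $\{f>-\epsilon\}$ is exactly $\epsilon^k$), the substitution $u=x^{1/k}$ gives $M(s)=k\,\Gamma(k)\,P(k,s)/s^k$, where $P$ is the regularized lower incomplete gamma function, and $R(s)=M(2s)/(sM(s)^2)$ indeed grows like $s^{k-1}$:

```python
import numpy as np
from scipy.special import gammainc, gamma

# For f(x) = -x**(1/k) on [0,1] (so B = sup f = 0), the volume of
# {f > -eps} is exactly eps**k, and substituting u = x**(1/k) gives
# M(s) = k * Gamma(k) * gammainc(k, s) / s**k exactly
# (scipy's gammainc is the regularized lower incomplete gamma).
# Check that R(s) = M(2s) / (s * M(s)**2) grows like s**(k - 1).
def R(s, k):
    M = lambda t: k * gamma(k) * gammainc(k, t) / t**k
    return M(2 * s) / (s * M(s) ** 2)

for k in (0.5, 1.0, 2.0):
    growth = np.log2(R(2e6, k) / R(1e6, k))   # ≈ k - 1 for large s
    assert abs(growth - (k - 1)) < 1e-3
```

So $k=1/2$ (e.g. a nondegenerate interior maximum in one dimension) gives a bounded, in fact vanishing, ratio, $k=1$ gives a constant, and $k=2$ diverges linearly.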
