[Math] Real approximation to the maximum using Laplace’s method integral

approximate-integration, approximation, laplace-method, optimization

Laplace's method states that, under some conditions:

$ \sqrt{\frac{2\pi}{M(-g''(x_0))}} h(x_0) e^{M g(x_0)} \approx \int_a^b\! h(x) e^{M g(x)}\, dx \text { as } M\to\infty$

Here $g(x_0)$ is the maximum of $g$ and $g''(x_0)$ is the second derivative at that point (at the maximum $g''(x_0)<0$, and $x_0$ lies in $(a,b)$). Inspired by this, one can say that there is an approximation of the maximum given by

$ e^{M g(x_0)} \approx \int_a^b\! \sqrt{\frac{M(-g''(x_0))}{2\pi}} \frac{h(x)}{h(x_0)} e^{M g(x)}\,dx \text { as } M\to\infty$
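
(Just to fix ideas, here is a quick numerical sanity check of this statement. The concrete choices $g(x)=\ln x - x$, $h(x)=1$, $(a,b)=(0,\infty)$ and $M=50$ are mine, purely for illustration.)

```python
# Numerical check of the Laplace approximation above, with the illustrative
# choices g(x) = ln(x) - x, h(x) = 1, (a, b) = (0, inf), M = 50
# (so x0 = 1, g(x0) = -1, g''(x0) = -1).
import numpy as np
from scipy.integrate import quad

M = 50.0
g = lambda x: np.log(x) - x
gpp = lambda x: -1.0 / x**2
h = lambda x: 1.0
x0 = 1.0

# epsabs=0 forces quad to work to a relative tolerance, since the integral
# itself is ~1e-22, far below quad's default absolute tolerance.
rhs, _ = quad(lambda x: h(x) * np.exp(M * g(x)), 0, np.inf, epsabs=0)
lhs = np.sqrt(2 * np.pi / (M * (-gpp(x0)))) * h(x0) * np.exp(M * g(x0))

print(lhs, rhs, rhs / lhs)  # the ratio is 1 + O(1/M), here about 1.002
```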

Now, suppose that I only want a right-hand side that doesn't depend on knowing $x_0$ a priori. (That is, I want an approximation of the maximum based on the integral rather than the other way around.)

1) Is it valid to make the substitution (as a particular case) $h(x) \to \sqrt{-g''(x)}$?

Because if so, then

$ e^{M g(x_0)} \approx \int_a^b\! \sqrt{\frac{M (-g''(x))}{2\pi} } e^{M g(x)}\,dx \text { as } M\to\infty$

2) Is this a valid restatement of Laplace's method? (The problem is that I can't find a reference where this form is used.)

Furthermore, I want to use this formula in practice, to give a family of approximations to $g(x_0)$ (without knowing $x_0$ a priori) given by the integral at finite values of $M$. But each of these approximations has to evaluate to a real number for my application.
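
Concretely, what I have in mind is something like the following sketch (the concave test function $g(x)=\ln x - x$ on $(0,\infty)$, with true maximum $g(x_0)=-1$ at $x_0=1$, is only an illustration):

```python
# Sketch of the finite-M family of approximations
#   g(x0) ~ (1/M) * log( integral of sqrt(M*(-g''(x))/(2*pi)) * exp(M*g(x)) dx )
# with the illustrative choice g(x) = ln(x) - x on (0, inf), true maximum -1.
import numpy as np
from scipy.integrate import quad

g = lambda x: np.log(x) - x
gpp = lambda x: -1.0 / x**2   # negative everywhere, so the square root stays real

def g_max_estimate(M):
    integrand = lambda x: np.sqrt(M * (-gpp(x)) / (2 * np.pi)) * np.exp(M * g(x))
    val, _ = quad(integrand, 0, np.inf, epsabs=0)   # epsabs=0: the integral is tiny
    return np.log(val) / M

for M in (5, 20, 80):
    print(M, g_max_estimate(M))   # approaches the true maximum -1 as M grows
```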

The problem is that, in the formula above, for any finite $M$ and in the non-concave regions of $g$ (where $g''(x)$ is positive), the integrand evaluates to purely imaginary numbers. So in principle the integral could give a complex number, and then I would have a complex approximation to $g(x_0)$, which I want to avoid. But it could also be that the imaginary portions of the integral always cancel out for a general $g$, by some property of the form of the integral (for instance, through some contour-integral argument).

3) Is there a reason to think that the last integral always gives a real result if $g$ is real but not necessarily concave everywhere (under nice conditions on $g$)?

I don't mind assuming that $g$ has all the nice properties you want (e.g. that it is analytic) on the interval $(0, \infty)$, or that the integration interval is also $(0, \infty)$.
In fact (maybe this helps), in all my applications $g$ turns out to have some sort of singularity at $x=0$ and a logarithmic divergence as $x\to\infty$.

4) Is there a variant of the approximation of the maximum based on Laplace's method that I am missing?

For example:

$ e^{M g(x_0)} \approx \int_a^b\! \sqrt{\frac{M |g''(x)|}{2\pi} } e^{M g(x)}\,dx \text { as } M\to\infty$

but I am not sure about using the absolute value or blindly taking the real part, since I want the approximation to $g(x_0)$ to be differentiable (analytic?) with respect to $g$ for finite values of $M$.
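
For what it is worth, a small numerical experiment with the absolute-value variant, on a test function of my own choosing that is not concave everywhere, does seem to approach the true maximum, though of course this is not a proof:

```python
# Numerical experiment with the |g''| variant for a non-concave test function
# (my own choice): g(x) = -(x^2 - 1)^2 on (0, inf), maximum 0 at x0 = 1,
# with g''(x) = 4 - 12*x^2 > 0 on (0, 1/sqrt(3)).
import numpy as np
from scipy.integrate import quad

g = lambda x: -(x**2 - 1.0)**2
gpp = lambda x: 4.0 - 12.0 * x**2

def g_max_estimate_abs(M):
    integrand = lambda x: np.sqrt(M * np.abs(gpp(x)) / (2 * np.pi)) * np.exp(M * g(x))
    val, _ = quad(integrand, 0, np.inf)
    return np.log(val) / M

for M in (5, 20, 80):
    print(M, g_max_estimate_abs(M))   # drifts toward the true maximum 0 as M grows
```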

Best Answer

1.) Yes, but the result as you state it is not correct. Correctly: $$ \sqrt{\frac{2\pi}{M}} \cdot e^{M g(x_0)} \approx \int \sqrt{-g''(x)}\cdot e^{Mg(x)}\,\mathrm{d}x, $$ which is something, but it is not obvious to me how an approximation for $g(x_0)$ follows from it. You have to be careful with the estimates.

2.) 3.) 4.) I don't think it works in those cases. Here is why.

The reason Laplace's method works is that $\lvert e^x\rvert$ is very small when $\mathbb{R}\ni x\to-\infty$. If $g(x)$ has a global maximum at $x_0$, then writing the integral as $e^{M g(x_0)}\int e^{M (g(x)-g(x_0))}\,\mathrm{d}x$ uses the property that the remaining integrand is small everywhere except near the global maximum.
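
As a small illustration of this concentration (the choice $g(x)=\ln x - x$, with $x_0=1$, is mine and only for illustration):

```python
# Illustration of the concentration argument: after factoring out e^{M g(x0)},
# the remaining integrand e^{M (g(x) - g(x0))} is <= 1 and negligible away from x0.
# The choice g(x) = ln(x) - x (so x0 = 1, g(x0) = -1) is just for illustration.
import numpy as np
from scipy.integrate import quad

g = lambda x: np.log(x) - x
x0, gmax = 1.0, -1.0

for M in (5, 50, 500):
    ratio = lambda x, M=M: np.exp(M * (g(x) - gmax))
    total, _ = quad(ratio, 0, np.inf)
    near, _ = quad(ratio, x0 - 0.2, x0 + 0.2)
    print(M, near / total)   # fraction of the integral from |x - x0| < 0.2 tends to 1
```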

So the real, global maximum is required (however multivariate generalizations exist).

Note that in practice it is often harder to evaluate the integral than to find the maximum. For that, I recommend looking at statistical physics papers, for example: http://www3.eng.cam.ac.uk/research_db/publications/gc121

In the end you might find that, if you have enough points to approximate the integral, then you also have enough points to just take the maximum directly, so Laplace's method becomes superfluous.
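
For example, in a discretized setting (the grid, the test function and the value of $M$ below are my own choices), the grid maximum is already as good as the quadrature-based estimate:

```python
# The practical point above, in a discretized setting (grid, g and M are my choices):
# a grid fine enough to resolve the integral also locates the maximum directly.
import numpy as np
from scipy.integrate import trapezoid

g = lambda x: np.log(x) - x               # true maximum g(1) = -1
x = np.linspace(0.01, 10.0, 2001)         # same grid for both estimates
M = 50.0

integrand = np.sqrt(M * (1.0 / x**2) / (2 * np.pi)) * np.exp(M * g(x))  # -g''(x) = 1/x^2
laplace_estimate = np.log(trapezoid(integrand, x)) / M
grid_maximum = g(x).max()

print(laplace_estimate, grid_maximum)     # both close to -1; the max needs no quadrature
```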
