Are substitutions relating multiple integration variables in iterated integrals always valid?

calculus, complex-analysis, integration, jacobian, real-analysis

A standard technique of mine when evaluating iterated integrals, such as double integrals, is to invoke a substitution that relates the integration variables of both integrals. To demonstrate and clarify what I mean, I shall present three examples for double integrals:

Example 1: Gaussian integral, $\Gamma\left(\frac{1}{2}\right)=\sqrt{\pi}$
$$I=\Gamma \left(\frac{1}{2}\right)^2 = \int_{0}^{\infty} \int_{0}^{\infty} \frac{1}{\sqrt{x t}}e^{-(t+x)}\,dx\,dt$$
We now enforce the substitution $t=sx \implies dt=x\,ds$:
$$\implies I = \int_{0}^{\infty}\int_{0}^{\infty}\frac{1}{\sqrt{s}}e^{-x (s+1)}\,dx\,ds=\int_{0}^{\infty}\frac{1}{\sqrt{s}(s+1)} \,ds=\pi$$
$$\implies \Gamma\left(\frac{1}{2}\right)=\int_{-\infty}^{\infty}e^{-x^2}\,dx=\sqrt{\pi}$$
This is a much more elementary approach than the common Jacobian polar-coordinates method of evaluating the Gaussian integral, and it is how I originally discovered the solution for myself a few years ago, prior to learning about the Jacobian.
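For what it's worth, the values in this chain are easy to confirm numerically; here is a quick SciPy check of my own (just a sanity check, not part of the argument):

```python
import numpy as np
from scipy import integrate

# Gamma(1/2) as a 1-D integral, squared
g_half, _ = integrate.quad(lambda x: np.exp(-x) / np.sqrt(x), 0, np.inf)

# The reduced s-integral obtained after the substitution t = s*x
reduced, _ = integrate.quad(lambda s: 1 / (np.sqrt(s) * (s + 1)), 0, np.inf)

print(g_half**2, reduced, np.pi)  # ≈ 3.14159 in all three cases
```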

Example 2: In a previous answer of mine, I determined the following result:
$$J=\frac{1}{2\pi}\int_{-\infty}^{\infty} \exp\left(\frac{x^2}{2}-\frac{y^2}{2}\right)K_0\left(\sqrt{(x-y)^2+\left(\frac{y^2}{2}-\frac{x^2}{2}\right)^2}\right)\,dy = \frac{e}{2} \sqrt{\pi}\operatorname{erfc}(1),$$ which is independent of $x$. Here $K_0$ is the modified Bessel function of the second kind of order $0$.

A crucial part of my method of evaluation was to invoke a particular substitution on the integral:
$$J=\frac{1}{2\pi}\int_0^\infty\int_0^\infty \frac{\cosh\left(xv\left(\frac{v^2}{4t}+1\right)\right)}{t}\exp\left(-t-\frac{1+2t+x^2}{4t}v^2-\frac{v^4}{16t}\right)\,dv\,dt$$

We now enforce the substitution $t = s v^2 \implies dt = v^2 \,ds$:
$$\begin{align}J&=\frac{1}{2\pi} \int_{0}^{\infty} \int_{0}^{\infty}\frac{\cosh\left(x v \left(\frac{1}{4s}+1\right)\right)}{s} \exp\left(-sv^2-\frac{1+2sv^2+x^2}{4s}-\frac{v^2}{16s}\right) \, dv \, ds\\&=\frac{1}{2\pi}\int_{0}^{\infty}\int_{0}^{\infty}\frac{\cosh\left(xv\left(\frac{1}{4s}+1\right)\right)}{s}\exp\left(-\frac{1+x^2}{4s}\right)\exp\left(-v^2 \frac{(1+4s)^2}{16s}\right)\,dv\,ds\end{align}$$
This allows the result to fall out readily, since:
$$\int_{0}^{\infty}\cosh(a v)\exp\left(-v^2 b^2\right) \, dv=\frac{\sqrt{\pi}}{2b}\exp\left(\frac{a^2}{4b^2}\right)$$

$$\implies J = \frac{1}{2\pi} \int_{0}^{\infty} \frac{\exp \left(-\frac{1+x^2}{4s}\right)}{s} \cdot \frac{\sqrt{\pi}}{2\left(\frac{1+4s}{4\sqrt{s}}\right)}\exp \left(\frac{4s x^2 \left(\frac{1}{4s}+1\right)^2}{(1+4s)^2}\right)\, ds$$
This renders the integral independent of $x$, as we wanted, since the $x^2$ terms in the exponents cancel.

$$\implies J = \frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{\exp\left(-\frac{1}{4s}\right)}{\sqrt{s} (1+4s)} \, ds\stackrel{s\,\mapsto\frac{1}{s}}{=}\frac{1}{\sqrt{\pi}} \int_{0}^{\infty} \frac{\exp\left(-\frac{s}{4}\right)}{\sqrt{s} (4+s)} \, ds=\frac{e}{2} \sqrt{\pi} \, \operatorname{erfc} (1)$$
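As a quick numerical sanity check of my own (not part of the derivation), SciPy confirms both the cosh–Gaussian identity and the final closed form:

```python
import numpy as np
from scipy import integrate, special

# Spot-check the cosh–Gaussian identity at arbitrarily chosen a, b
a, b = 1.3, 0.7
lhs, _ = integrate.quad(lambda v: np.cosh(a * v) * np.exp(-(b * v) ** 2), 0, np.inf)
rhs = np.sqrt(np.pi) / (2 * b) * np.exp(a**2 / (4 * b**2))

# Final one-dimensional integral versus the closed form
J, _ = integrate.quad(lambda s: np.exp(-s / 4) / (np.sqrt(s) * (4 + s)), 0, np.inf)
J /= np.sqrt(np.pi)
closed = np.e / 2 * np.sqrt(np.pi) * special.erfc(1)

print(lhs, rhs)   # agree
print(J, closed)  # agree, ≈ 0.37892
```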

Example 3: Derivation of the relationship between the Beta function and the Gamma function, $\Gamma(x)\Gamma(y)=\Gamma(x+y)B(x,y)$:
$$\Gamma(x)\Gamma(y)=\int_{0}^{\infty}\int_{0}^{\infty} e^{-u-v}u^{x-1}v^{y-1}\,du\,dv$$
We now enforce the substitution $u=zt$ and $v=z(1-t)$:
$$\begin{align}\Gamma(x)\Gamma(y)&=\int_{0}^{\infty}\int_{0}^{1}e^{-z}(zt)^{x-1}(z(1-t))^{y-1} z\,dt\,dz\\ &=\int_{0}^{\infty}e^{-z}z^{x+y-1}\,dz \cdot \int_{0}^{1}t^{x-1}(1-t)^{y-1}\,dt\\&=\Gamma(x+y)B(x,y)\end{align}$$
Interestingly, this is the method mentioned in the derivation of this property on the Wikipedia page for the Beta function. In this question about that derivation, the comments and answer invoke the Jacobian, suggesting that such substitutions can be made rigorous using the Jacobian; however, I would appreciate seeing exactly how that works, as I only know about the Jacobian from my own self-teaching.
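A quick numerical spot-check of this identity, at arbitrarily chosen sample values:

```python
import numpy as np
from scipy import integrate, special

# Sample values chosen arbitrarily (> 1, to avoid boundary singularities)
x, y = 2.5, 1.7

lhs, _ = integrate.dblquad(
    lambda u, v: np.exp(-u - v) * u ** (x - 1) * v ** (y - 1),
    0, np.inf, 0, np.inf,
)
rhs = special.gamma(x + y) * special.beta(x, y)

print(lhs, rhs, special.gamma(x) * special.gamma(y))  # all agree
```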

So, to summarise: is it always valid to perform substitutions on iterated integrals, such as double or triple integrals, in this way, whereby one or more integration variables are written in terms of an integration variable of a different integral in the iteration? I have never run into any issue using this technique, but I have also never found a rigorous statement in the literature that such substitutions are valid; it surprises me that such a simple solution to the Gaussian integral is not used more often.

Best Answer

$\newcommand{\d}{\mathrm{d}}$The general substitution theorem is of this form:

$$\int_{\varphi(\Omega)}f\,\d\mu=\int_{\Omega}(f\circ\varphi)\cdot|\det\varphi'|\,\d\mu$$

With hypotheses (for some $n$): $\Omega\subseteq\Bbb R^n$ is open, $\varphi$ is a $C^1$ diffeomorphism between $\Omega$ and its image in $\Bbb R^n$, and $f$ is Lebesgue measurable. $\mu$ denotes $n$-dimensional Lebesgue measure (so, just "normal" integration). A good reference for this is peek-a-boo's great answer.

There is actually not all that much to say here. You pass from an iterated integral to an integral over the product space via Fubini's theorem; since the measure space under consideration is complete and $\sigma$-finite, the only consideration is that $f$ should be integrable on the product (for non-negative integrands, as in all of your examples, Tonelli's theorem gives the interchange with no integrability hypothesis at all). In almost every "normal" case you deal with, this will be true. From the product integral, you can use the substitution formula. Any of these substitutions that intertwine two or more of the variables of integration are fine, so long as the substitution function satisfies the above (or some other equivalent) hypotheses. I'll show this for your examples, but for the sake of getting the rigorous argument you do not actually need to read any further.
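Before turning to the examples, here is a throwaway numerical illustration of the substitution theorem (nothing rigorous, just SciPy quadrature) using the map $\varphi(x,s)=(x,sx)$ from Example 1, whose Jacobian determinant $|\det\varphi'|=x$ is computed below:

```python
import numpy as np
from scipy import integrate

def f(x, t):
    # Integrand from Example 1
    return np.exp(-(t + x)) / np.sqrt(x * t)

# Left side of the theorem: integrate f over phi(Omega) = (0, oo)^2
lhs, _ = integrate.dblquad(f, 0, np.inf, 0, np.inf)

# Right side: pull back along phi(x, s) = (x, s*x) and weight by |det phi'| = x
rhs, _ = integrate.dblquad(lambda x, s: f(x, s * x) * x, 0, np.inf, 0, np.inf)

print(lhs, rhs, np.pi)  # all three agree (mild quadrature warnings possible)
```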

For simplicity, consider two dimensions. You're integrating functions $f(x,t)$ over some open set in $\Bbb R^2$; in your examples, $(0,\infty)\times(0,\infty)$.

Example 1:

Examine $t=sx$. This is essentially expressing $(x,t)=\varphi(x,s)$, where $\varphi(x,s)=(x,sx)$. As $\varphi$ is indeed continuously differentiable, and so is its inverse on this domain (which excludes $0$), there are no problems whatsoever. Let's look at $\varphi'$: $$\varphi'=\begin{pmatrix}1&0\\s&x\end{pmatrix},\quad|\det\varphi'|=|x-0|=x,$$which is exactly what you get in your derivation.

Let's look at this in general. If you want to make the substitution $t=g(x,s)$ for some $C^1$ function $g$ (i.e. essentially every substitution you will ever consider), you can formally use $\varphi(x,s)=(x,g(x,s))$, so long as this has a continuously differentiable inverse, which it usually will if you make any normal, sensible substitution. In this instance: $$\varphi'=\begin{pmatrix}1&0\\\partial_xg(x,s)&\partial_sg(x,s)\end{pmatrix},\quad|\det\varphi'|=|\partial_sg(x,s)|,$$and this, written in Riemann notation as $\d t=\partial_sg(x,s)\,\d s$, is exactly what you're doing in your integrals. (The absolute value sign is present because Lebesgue integrals are orientation-ignorant; when using Riemann notation you should be careful not to drop minus signs.) There are no formal problems here.
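If you like, SymPy confirms this Jacobian computation symbolically, with $g$ left abstract (a throwaway check, not part of the argument):

```python
import sympy as sp

# Symbolic check that |det phi'| = |dg/ds| for phi(x, s) = (x, g(x, s))
x, s = sp.symbols('x s', positive=True)
g = sp.Function('g')(x, s)

phi = sp.Matrix([x, g])
J = phi.jacobian([x, s])  # [[1, 0], [dg/dx, dg/ds]]
print(J.det())            # Derivative(g(x, s), s)

# Specialising to Example 1's g(x, s) = s*x gives det = x
print(phi.subs(g, s * x).jacobian([x, s]).det())  # x
```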

Example 2: Exactly the same story: the substitution $t=sv^2$ corresponds to $\varphi(v,s)=(v,sv^2)$, with $$\varphi'=\begin{pmatrix}1&0\\2sv&v^2\end{pmatrix},\quad|\det\varphi'|=v^2,$$ which matches $\d t=v^2\,\d s$.

Example 3: ... exactly the same story, essentially. You only need to ensure an inverse exists and that the inverse is also continuously differentiable (the inverse function theorem will usually help here, provided the derivative is never singular, i.e. the determinant is never $0$). We are integrating $f(u,v)=e^{-(u+v)}u^{x-1}v^{y-1}$. Define $\varphi(z,t)=(zt,z(1-t))$. That $\varphi((0,\infty)\times(0,1))=(0,\infty)\times(0,\infty)$ is not too hard to see, and some simple algebra shows it is indeed invertible, with $C^1$ inverse $z=u+v$, $t=\frac{u}{u+v}$ (or use the inverse function theorem). You have: $$\int_{\varphi((0,\infty)\times(0,1))}f(u,v)\,\d u\,\d v=\int_{(0,\infty)\times(0,1)}f(\varphi(z,t))\cdot|\det\varphi'|\,\d t\,\d z$$We get: $$\varphi'=\begin{pmatrix}t&z\\1-t&-z\end{pmatrix},\quad|\det\varphi'|=|-zt-z+zt|=z$$So your integral, entirely formally, goes to: $$\int_0^\infty\int_0^1f(zt,z(1-t))\cdot z\,\d t\,\d z$$

And you proceed.
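Finally, a throwaway SymPy check of the Jacobian and the explicit inverse for Example 3's map:

```python
import sympy as sp

# Jacobian and inverse for phi(z, t) = (z*t, z*(1 - t))
z, t, u, v = sp.symbols('z t u v', positive=True)

phi = sp.Matrix([z * t, z * (1 - t)])
print(sp.simplify(phi.jacobian([z, t]).det()))  # -z, so |det phi'| = z

# The explicit inverse: z = u + v, t = u/(u + v), both C^1 on (0, oo)^2
sol = sp.solve([sp.Eq(u, z * t), sp.Eq(v, z * (1 - t))], [z, t], dict=True)
print(sol)  # e.g. [{t: u/(u + v), z: u + v}]
```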