[Math] Jacobian Determinant of Polar-Coordinate Transformation

differential-geometry, multivariable-calculus, real-analysis

Consider a Euclidean space of dimension $n\in\mathbb{Z}_+$ with $n\geq3$. The coordinates of any vector can be expressed in polar coordinates as follows (this example appears in Folland, 1999):
\begin{align*}
x_1=&\,r\cos\phi_1,\\
x_2=&\,r\sin\phi_1\cos\phi_2,\\
\vdots&\\
x_k=&\,r\left(\prod_{j=1}^{k-1}\sin\phi_j\right)\cos\phi_k,\\
\vdots&\\
x_{n-1}=&\,r\left(\prod_{j=1}^{n-2}\sin\phi_j\right)\cos\theta,\\
x_{n}=&\,r\left(\prod_{j=1}^{n-2}\sin\phi_j\right)\sin\theta,
\end{align*}
where $r\geq 0$, $\phi_j\in[0,\pi]$ for $j=1,\dots,n-2$, and $\theta\in[0,2\pi)$.

Now, the Jacobian determinant corresponding to this transformation is claimed to be given as follows:
\begin{align*}
\det\left(\frac{\partial (x_j)_{j=1}^n}{\partial(r,(\phi_j)_{j=1}^{n-2},\theta)}\right)=r^{n-1}\prod_{j=1}^{n-2}\sin^{n-j-1}\phi_j.
\end{align*}
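As a sanity check (not a proof, and not part of the original question), the claimed formula can be verified symbolically for small $n$ with SymPy. The variable names below follow the question's conventions; $n=3$ is an arbitrary small choice for speed (larger $n$ works too, just more slowly):

```python
# Symbolic sanity check of the claimed Jacobian determinant for small n.
# n = 3 is an illustrative choice; the construction is fully general.
import sympy as sp

n = 3
r = sp.symbols('r', positive=True)
angles = list(sp.symbols(f'phi1:{n-1}')) + [sp.symbols('theta')]

# x_k = r * sin(phi_1)...sin(phi_{k-1}) * cos(angle_k); the last coordinate
# replaces the final cosine with sin(theta).
x = [r * sp.Mul(*[sp.sin(a) for a in angles[:k]]) * sp.cos(angles[k])
     for k in range(n - 1)]
x.append(r * sp.Mul(*[sp.sin(a) for a in angles[:n - 2]]) * sp.sin(angles[-1]))

J = sp.Matrix(x).jacobian([r] + angles)
claimed = r**(n - 1) * sp.Mul(*[sp.sin(angles[j])**(n - j - 2)
                                for j in range(n - 2)])
print(sp.simplify(J.det() - claimed))  # 0
```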

While this result is “standard,” I have had a hard time proving it. Technically, the problem is that the Jacobian matrix of
\begin{align*}
\frac{\partial (x_j)_{j=1}^n}{\partial(r,(\phi_j)_{j=1}^{n-2},\theta)}
\end{align*}
is a Hessenberg matrix: it is “almost triangular,” with one extra diagonal of nonzero entries just above the main diagonal. This makes computing the determinant directly extremely tedious. I have also tried induction and the Laplace expansion, to little avail.
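For what it's worth, the Hessenberg structure is easy to confirm symbolically (a sketch, not from the original post; $n=4$ is an arbitrary small choice): since $x_i$ involves only $r$ and $\phi_1,\dots,\phi_i$ (writing $\theta=\phi_{n-1}$), every entry strictly above the first superdiagonal vanishes.

```python
# Check that the Jacobian of the polar map is upper Hessenberg: the entries
# strictly above the first superdiagonal are all identically zero.
import sympy as sp

n = 4
r = sp.symbols('r', positive=True)
angles = list(sp.symbols(f'phi1:{n-1}')) + [sp.symbols('theta')]
x = [r * sp.Mul(*[sp.sin(a) for a in angles[:k]]) * sp.cos(angles[k])
     for k in range(n - 1)]
x.append(r * sp.Mul(*[sp.sin(a) for a in angles[:n - 2]]) * sp.sin(angles[-1]))

J = sp.Matrix(x).jacobian([r] + angles)
# Row i corresponds to x_{i+1}; column c to the c-th variable (c = 0 is r).
above = [J[i, c] for i in range(n) for c in range(n) if c > i + 1]
print(all(e == 0 for e in above))  # True
```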

Are any of you familiar with the canonical proof of this result, or with a standard reference? Thank you in advance for your assistance.

Best Answer

If you look closely, you may notice that the determinant is the product of the lengths of the partial derivatives.

The "trick" is that the partial derivatives are orthogonal to each other. If one doesn't see it abstractly, consider

$$\frac{\partial x}{\partial\phi_j} = \begin{pmatrix}0\\\vdots\\0\\ -c\sin\phi_j\\c\cos\phi_j\cos\phi_{j+1}\\c\cos\phi_j\sin\phi_{j+1}\cos\phi_{j+2}\\\vdots\end{pmatrix}$$

where the $c$ is $r$ times the product of the $\sin\phi_i$ for $i < j$. If you look at the corresponding entries of $\frac{\partial x}{\partial \phi_k}$ for $k < j$, you find

$$\frac{\partial x}{\partial\phi_k} = \begin{pmatrix}\ast\\\vdots\\\ast\\ c'\cos\phi_j\\c'\sin\phi_j\cos\phi_{j+1}\\c'\sin\phi_j\sin\phi_{j+1}\cos\phi_{j+2}\\\vdots\end{pmatrix}$$

where the $c'$ contains a factor $\cos\phi_k$ in place of $\sin\phi_k$, while every factor from $\phi_j$ onward is the same as for $\frac{\partial x}{\partial \phi_j}$. The inner product therefore reduces to

$$cc'\left(-\sin\phi_j\cos\phi_j + \cos\phi_j\sin\phi_j\left(\cos^2\phi_{j+1} + \sin^2\phi_{j+1}\left(\cos^2\phi_{j+2} + \sin^2\phi_{j+2}\left(\cdots\right)\right)\right)\right) = 0,$$

since the nested expression multiplying $\cos\phi_j\sin\phi_j$ telescopes to $1$ by repeated application of $\cos^2 + \sin^2 = 1$.

The case of $\langle \frac{\partial x}{\partial r}\mid \frac{\partial x}{\partial \phi_j}\rangle$ is analogous. (Name $\theta = \phi_{n-1}$ for uniformity.)
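To make the two observations concrete, here is a numerical sketch (not part of the original answer; the sample point and $n=5$ are arbitrary choices): the Gram matrix of the Jacobian's columns is diagonal, so $|\det J|$ equals the product of the column lengths, which in turn matches the claimed $r^{n-1}\prod_j\sin^{n-j-1}\phi_j$.

```python
# Numerical illustration: the Jacobian's columns are pairwise orthogonal, so
# |det J| is the product of the column lengths, matching the claimed formula.
import numpy as np

def polar_to_cartesian(p):
    """Map (r, phi_1, ..., phi_{n-2}, theta) to (x_1, ..., x_n)."""
    r, angles = p[0], p[1:]
    x = np.empty(len(p))
    s = r                        # running product r * sin(phi_1) * ...
    for k in range(len(p) - 1):
        x[k] = s * np.cos(angles[k])
        s *= np.sin(angles[k])
    x[-1] = s                    # r * (product of all sines) * sin(theta)
    return x

def num_jacobian(f, p, h=1e-6):
    """Central-difference approximation to the Jacobian of f at p."""
    p = np.asarray(p, dtype=float)
    J = np.empty((len(p), len(p)))
    for i in range(len(p)):
        e = np.zeros(len(p)); e[i] = h
        J[:, i] = (f(p + e) - f(p - e)) / (2 * h)
    return J

p = np.array([1.3, 0.7, 1.1, 0.4, 2.0])  # (r, phi_1, phi_2, phi_3, theta), n = 5
J = num_jacobian(polar_to_cartesian, p)

# Off-diagonal Gram entries vanish: the columns are pairwise orthogonal.
G = J.T @ J
norms = np.linalg.norm(J, axis=0)
print(np.max(np.abs(G - np.diag(norms**2))))   # ≈ 0 (finite-difference error)

# Hence |det J| equals the product of the column lengths, and it matches
# r^4 * sin^3(phi_1) * sin^2(phi_2) * sin(phi_3) for n = 5.
claimed = p[0]**4 * np.sin(p[1])**3 * np.sin(p[2])**2 * np.sin(p[3])
print(abs(np.linalg.det(J)) - np.prod(norms))  # ≈ 0
print(abs(np.linalg.det(J)) - claimed)         # ≈ 0
```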
