Intuition about surface measure

Tags: analysis, lebesgue-measure, measure-theory, riemann-surfaces

I have been reading some papers where "surface measure" comes up. I wanted to develop some intuition as to what it is. For the purposes of this post, I am going to primarily restrict to two-dimensional Euclidean space $\mathbb{R}^2$. Let $\sigma$ denote surface measure throughout.

Pretend we are working in a closed rectangular domain $R\subseteq \mathbb{R}^2$, with faces $F_1$, $F_2$, $F_3$, and $F_4$. For example, consider the rectangle with vertices $v_1=(0,0)$, $v_2=(0, 1)$, $v_3=(1, 1)$, and $v_4=(1,0)$. Let $F_i\subseteq \mathbb{R}^2$ be the face connecting vertices $v_i$ and $v_{i+1}$, where $v_5 := v_1$.

My question is the following: Is surface measure on one of the faces really like one-dimensional Lebesgue measure restricted to the face? What I mean is the following. Consider the face $F_3$ connecting $(1,1)$ to $(1,0)$. Then, for example, is $$\sigma\left(\left\{(1,x): x\in [0,0.5]\right\}\right)=0.5?$$

My intuitive (and very imprecise) understanding of surface measure (at least in Euclidean space) is that surface measure is just like Lebesgue measure when the "surface" is projected down one dimension. As another example, when the boundary of a region in $\mathbb{R}^2$ is smooth, my intuitive sense of surface measure is to flatten out the boundary and then work with one-dimensional Lebesgue measure on that flattened surface. To be slightly more precise (but not completely rigorous), my understanding in $\mathbb{R}^n$ is that: $$\sigma(A) = \text{Leb}(\text{proj}(A)),$$
where $A\subseteq \mathbb{R}^n$ is a subset of the "surface," proj$(A)$ is the projection of $A$ down one dimension, and Leb is $(n-1)$-dimensional Lebesgue measure.

Is my intuition correct? Or am I off? References would be appreciated, too!

Best Answer

My professor gave us an intuition about the surface measure on submanifolds which I found very helpful. The main idea, similar to yours, is to look at small cubes of lower dimension (in your case $1$-dimensional cubes, i.e. line segments) under linear transformations, then to take a limit, making them "infinitely small" and taking the Jacobian matrix at a given point as the linear transformation.

Let $k < n$ ($k$ will be the dimension of your object (manifold) $M$ on which you will define a surface measure, and $n$ the dimension of the ambient space), let $A \in \mathbb{R}^{n \times k}$ be a linear transformation, and let $C := [0,1]^k$ be the $k$-dimensional unit cube. We now want to assign a volume to $AC$. It is plausible that this volume should stay the same if we apply an orthogonal transformation $P \in O(n)$ (since this only rotates/reflects the object). We can choose $P$ in a way that $PAC \subseteq \mathbb{R}^k \times \{0\}^{n-k}$. Now we are essentially working in $\mathbb{R}^k$ (we projected), so we can identify $PA$ with the $k \times k$ matrix obtained by dropping the $n-k$ zero rows and simply assign the transformed cube a volume using the $k$-dimensional Lebesgue measure. Therefore we define $\sigma(PAC) := \lambda_k(PAC) = |\det(PA)| \lambda_k(C) = |\det(PA)|$. This seems to depend on our choice of $P$, but it does not: since the dropped rows are zero and $P^T P = I$, we have $(PA)^T (PA) = A^T P^T P A = A^T A$, and therefore $$|\det(PA)|^2 = \det((PA)^T)\cdot \det(PA) = \det(A^T A)$$

And we get $\sigma(AC) = \sqrt{\det(A^TA)}$. Now for the limiting process:
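As a quick sanity check of $\sigma(AC) = \sqrt{\det(A^TA)}$, here is a minimal Python sketch for $k=1$, $n=2$ (the particular matrix $A$ is my own example, not from the answer): the unit "cube" $C = [0,1]$ is mapped to a segment whose Euclidean length the formula should recover.

```python
import math

# Example (assumed): k = 1, n = 2. The linear map A sends C = [0,1]
# to the segment from (0,0) to (3,4) in the plane.
A = [3.0, 4.0]  # the single column of the 2x1 matrix A

# A^T A is the 1x1 matrix [3^2 + 4^2], so sqrt(det(A^T A)) = 5.
gram = sum(a * a for a in A)  # A^T A (a scalar here)
sigma = math.sqrt(gram)       # sqrt(det(A^T A))

print(sigma)  # 5.0, the Euclidean length of the segment AC
```

Note that the naive projection onto the first coordinate would give length $3$, not $5$: the Gram determinant is exactly the correction factor the projection intuition is missing.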

The idea is to make $C$ small at a certain location. If your lower-dimensional object is sufficiently smooth and parametrized (like a submanifold), you can look at the multivariable derivative (linearization) of this parametrization. Let $p: \mathbb{R}^k \supseteq U \to M$ be such a parametrization of your surface $M$. Given $x_0 \in U$, you get $p(x) = p(x_0) + J_p(x_0)(x-x_0) + R(x)$ for a remainder term $R$ (total derivative). This means your surface looks (locally at $x_0$) approximately like $p(x_0) + J_p(x_0)([0,l]^k)$. By the considerations above, $J_p(x_0)([0,l]^k)$ should be assigned the volume $\sqrt{\det(J_p(x_0)^T J_p(x_0))}\cdot\lambda_k([0,l]^k)$.
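To see why the linear approximation is good enough for the limit, here is a small sketch (using the unit-circle parametrization $p(t) = (\cos t, \sin t)$ as an assumed example) checking that the remainder $R$ shrinks like $h^2$, i.e. faster than the cube size itself:

```python
import math

# Assumed example: p(t) = (cos t, sin t) parametrizes the unit circle,
# a 1-dimensional "surface" in R^2.
def p(t):
    return (math.cos(t), math.sin(t))

def jacobian(t):
    # J_p(t) is the 2x1 matrix with column (-sin t, cos t)
    return (-math.sin(t), math.cos(t))

t0 = 0.7  # arbitrary base point in the parameter domain
J = jacobian(t0)
for h in (1e-1, 1e-2, 1e-3):
    exact = p(t0 + h)
    linear = (p(t0)[0] + J[0] * h, p(t0)[1] + J[1] * h)
    err = math.dist(exact, linear)  # |R(t0 + h)|
    print(h, err / h**2)            # ratio stays bounded: R is O(h^2)
```

Because the error is $O(h^2)$ while the cube has side $h$, the linearization error vanishes relative to the assigned volume as $h \to 0$.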

By making $l$ smaller and smaller and summing over all possible $x_0$ this leads us to the intuition

$$\sigma(M) = \int_U \sqrt{\det(J_p(x)^T J_p(x))} d\lambda_k(x)$$

For a subset $N \subseteq M$ you can do the same thing by looking at $p^{-1}(N)$:

$$\sigma(N) = \int_{p^{-1}(N)} \sqrt{\det(J_p(x)^T J_p(x))} d\lambda_k(x)$$
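To make the integral formula concrete, here is a sketch (a circle of radius $r$ as an assumed example; for $k=1$ the Jacobian is a single column, so $\sqrt{\det(J^TJ)}$ is just its Euclidean norm) approximating $\sigma(N)$ for a quarter arc by a Riemann sum:

```python
import math

# Assumed example: p(t) = (r cos t, r sin t) parametrizes the circle
# of radius r; J_p(t) has column (-r sin t, r cos t).
r = 2.0

def integrand(t):
    jx, jy = -r * math.sin(t), r * math.cos(t)  # column of J_p(t)
    return math.sqrt(jx * jx + jy * jy)         # sqrt(det(J^T J)) = r

# sigma(N) for N = p([0, pi/2]), the quarter arc, via a midpoint
# Riemann sum over p^{-1}(N) = [0, pi/2]
a, b, n = 0.0, math.pi / 2, 100_000
h = (b - a) / n
sigma = sum(integrand(a + (i + 0.5) * h) for i in range(n)) * h

print(sigma)  # close to r * pi/2 = pi, the arc length
```

The integrand is constant here, but the same recipe applies to any smooth parametrization; only the Gram determinant under the square root changes.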

I'm not sure if this is what you were looking for, but I hope it helps.
