Electromagnetism – Understanding Faraday’s Law and Derivatives in the Integral

electromagnetism, maxwell-equations

In the derivation of Faraday's law written below there is a passage that I don't understand (the one in bold). My professor gave us only a short comment about it, which isn't enough for me to understand it. Can you give me a more detailed explanation? If you understand the comment, can you explain to me what the professor was trying to say?

The derivation starts with Lenz's law:

$$\oint \vec E \cdot d \vec l = -\frac {d} {dt} \int_S \vec B \cdot d\vec s$$
Then Stokes' theorem is used:
$$\int_S \nabla \times \vec E \cdot d \vec s = – \frac {d} {dt} \int_S \vec B \cdot d\vec s$$
And then the passage that I don't understand: **the derivative is brought inside the integral even though the surface $S$ is changing in time**. My professor's comment about this passage is: "if you use an inertial frame which is instantaneously linked to the surface then you can exchange $\frac d {dt}$ and $\int$ because it's instantaneously fixed."
$$\int_S \nabla \times \vec E \cdot d \vec s = - \int_S\frac {d} {dt} \vec B \cdot d\vec s$$
It follows that
$$ \nabla \times \vec E = -\frac {d} {dt} \vec B $$

Best Answer

You seem to be starting from the integral formulation of Faraday's law; i.e. your hypothesis is that for every "nice" $2$-dimensional surface $S$ (i.e. a compact, oriented, smooth $2$-dimensional submanifold of $\Bbb{R}^3$), we have \begin{align} \int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=-\frac{d}{dt}\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}. \end{align} I'm not sure why your professor invokes time-varying surfaces; doing so only obscures the matter, because if we consider a smoothly varying family of surfaces $\Sigma_t$, then $\frac{d}{dt}\int_{\Sigma_t}\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}$ is more complicated and picks up extra terms.
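For completeness, here is the standard flux transport (Leibniz) rule for a surface $\Sigma_t$ whose points move with a smooth velocity field $\mathbf{v}$; this is a well-known identity, stated as a supplement rather than something used in the derivation: \begin{align} \frac{d}{dt}\int_{\Sigma_t}\mathbf{B}\cdot d\mathbf{S}&=\int_{\Sigma_t}\left(\frac{\partial \mathbf{B}}{\partial t}+(\nabla\cdot\mathbf{B})\,\mathbf{v}\right)\cdot d\mathbf{S}-\oint_{\partial \Sigma_t}(\mathbf{v}\times\mathbf{B})\cdot d\mathbf{l}. \end{align} Since $\nabla\cdot\mathbf{B}=0$, the extra contribution is exactly the boundary line integral (the motional-EMF term), which is why fixing the surface avoids these complications.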

So, fix such a nice surface $S$ for the rest of the discussion. Then, Leibniz's integral rule (one of the simplest versions suffices here) tells us that the $\frac{d}{dt}$ can be brought inside the integral and converted into $\frac{\partial}{\partial t}$, i.e. \begin{align} \int_{\partial S}\mathbf{E}(\mathbf{r},t)\cdot d\mathbf{l}&=\int_{S}-\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}. \end{align} Using Stokes' theorem and rearranging, we get \begin{align} \int_S\left(\nabla\times\mathbf{E}+\frac{\partial \mathbf{B}}{\partial t}\right)\cdot d\mathbf{S}&= 0. \end{align} Note that, for a single surface, the vanishing of an integral tells us nothing about the integrand. In our case, however, the integral vanishes for EVERY such surface $S$. Therefore the integrand must vanish identically, and thus \begin{align} \nabla \times \mathbf{E}&=-\frac{\partial \mathbf{B}}{\partial t}. \end{align}
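As a concrete sanity check (my own illustration, not part of the argument above), one can verify the differential law numerically for a specific plane-wave pair, $\mathbf{E}=(0,\sin(x-t),0)$ and $\mathbf{B}=(0,0,\sin(x-t))$, using central finite differences:

```python
# Finite-difference check that curl E = -dB/dt for the plane-wave pair
# E = (0, sin(x - t), 0), B = (0, 0, sin(x - t)).
# (Illustrative fields chosen for this check, not taken from the answer.)
import math

def E(x, y, z, t):
    return (0.0, math.sin(x - t), 0.0)

def B(x, y, z, t):
    return (0.0, 0.0, math.sin(x - t))

def partial(F, comp, axis, x, y, z, t, h=1e-6):
    """Central-difference d(F_comp)/d(axis) at (x, y, z, t); axis 0,1,2 = x,y,z."""
    p = [x, y, z]; m = [x, y, z]
    p[axis] += h; m[axis] -= h
    return (F(*p, t)[comp] - F(*m, t)[comp]) / (2 * h)

def curl(F, x, y, z, t):
    return (
        partial(F, 2, 1, x, y, z, t) - partial(F, 1, 2, x, y, z, t),
        partial(F, 0, 2, x, y, z, t) - partial(F, 2, 0, x, y, z, t),
        partial(F, 1, 0, x, y, z, t) - partial(F, 0, 1, x, y, z, t),
    )

def minus_dB_dt(x, y, z, t, h=1e-6):
    # Central difference in time, with a minus sign: -(dB/dt)
    return tuple(-(b2 - b1) / (2 * h)
                 for b1, b2 in zip(B(x, y, z, t - h), B(x, y, z, t + h)))

# Evaluate both sides of curl E = -dB/dt at an arbitrary point and time.
lhs = curl(E, 0.3, 0.5, 0.7, 1.2)
rhs = minus_dB_dt(0.3, 0.5, 0.7, 1.2)
assert all(abs(a - b) < 1e-5 for a, b in zip(lhs, rhs))
```

Any other smooth solution pair would do equally well; the point is only that both sides agree pointwise, as the argument above demands.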


As for why we can swap the $\frac{d}{dt}$ with the integral to get a partial derivative, just observe the following. Define \begin{align} b(t)&=\int_S\mathbf{B}(\mathbf{r},t)\cdot d\mathbf{S}. \end{align} Then, writing out a difference quotient and using linearity of the integral, we have \begin{align} \frac{b(t+h)-b(t)}{h}&=\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}. \end{align} Thus, taking the limit $h\to 0$, we get \begin{align} b'(t)&=\lim\limits_{h\to 0}\frac{b(t+h)-b(t)}{h}\\ &=\lim\limits_{h\to 0}\int_S\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\\ &=\int_S\lim_{h\to 0}\frac{\mathbf{B}(\mathbf{r},t+h)-\mathbf{B}(\mathbf{r},t)}{h}\cdot d\mathbf{S}\tag{$*$}\\ &=\int_S\frac{\partial \mathbf{B}}{\partial t}(\mathbf{r},t)\cdot d\mathbf{S}. \end{align} In the step $(*)$, we of course have to justify interchanging the limit with the integral. For this one has to make some "regularity assumptions" on $\mathbf{B}$, which for all intents and purposes you can assume are satisfied in any application to physics. The most common justification is the dominated convergence theorem, a fairly advanced result of analysis; having said this, if we assume $\mathbf{B}$ is continuously differentiable, one can give a 'simple' proof using only the basic $\epsilon,\delta$ definition of limits.
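The same interchange can be seen numerically. Here is a small sketch (my own illustration, with an arbitrary example field $B_z=\sin(x)\cos(t)$ and $S$ the unit square in the $z=0$ plane): the difference quotient of the flux agrees with the flux of $\frac{\partial \mathbf{B}}{\partial t}$, because on a fixed surface the quadrature and the time derivative commute.

```python
# Numerically compare b'(t) (difference quotient of the flux) with the
# flux of dB/dt over a FIXED surface S: the unit square in the z = 0
# plane, oriented along +z.  Example field (an arbitrary choice):
# B_z = sin(x) * cos(t).
import math

N = 200            # grid resolution for the midpoint quadrature rule
t0 = 0.7           # arbitrary evaluation time

def Bz(x, y, t):
    return math.sin(x) * math.cos(t)

def dBz_dt(x, y, t):
    # Analytic time derivative of Bz.
    return -math.sin(x) * math.sin(t)

def surface_integral(f, t):
    """Midpoint-rule approximation of the flux of f through S."""
    dx = 1.0 / N
    return sum(f((i + 0.5) * dx, (j + 0.5) * dx, t) * dx * dx
               for i in range(N) for j in range(N))

# Difference quotient of b(t) at t0 (central difference)...
h = 1e-5
diff_quotient = (surface_integral(Bz, t0 + h) - surface_integral(Bz, t0 - h)) / (2 * h)
# ...versus the flux of dB/dt at t0.
flux_of_dBdt = surface_integral(dBz_dt, t0)

assert abs(diff_quotient - flux_of_dBdt) < 1e-6
```

For this field the flux is $b(t)=(1-\cos 1)\cos t$, so both quantities should approximate $b'(t_0)=-(1-\cos 1)\sin t_0$; the assertion checks that the two computations agree to well within the quadrature error.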