My recent article might be relevant: http://link.springer.com/content/pdf/10.1140%2Fepjc%2Fs10052-013-2371-4.pdf (published in the European Physical Journal C, open access). An excerpt from the abstract: "after introduction of a complex four-potential of electromagnetic field, which generates the same electromagnetic fields as the initial real four-potential, the spinor field is algebraically eliminated from the equations of spinor electrodynamics. It is proven that the resulting equations for electromagnetic field describe independent evolution of the latter"
The article may be relevant, although I do not use the formalism of differential forms, as you "take the Dirac wave function not to be observable in the classical Maxwell-[Dirac] theory" and "prefer to disavow unobservable fields unless there are absolutely clear reasons why they are unavoidable."
It is easy to show that the differential and integral forms of Maxwell's equations are equivalent using Gauss's and Stokes's theorems.
Correct, they are equivalent (assuming no GR and no QM) in the sense that if the integral versions hold for every surface/loop, then the differential versions hold at every point, and if the differential versions hold at every point, then the integral versions hold for every surface/loop. (This also assumes you write the integral versions in the complete and correct form, with the flux of the time derivatives of the fields and/or with stationary loops.)
Suppose there is a time-varying current in a wire $I(t)$ and I wish to find the fields a long way from the wire.
Ampere's Law is correct, but it is not always helpful. If you know the circulation of $\vec B$, you can use it to find the total current (conduction and displacement). If you know the total current (conduction and displacement), then you can find the circulation of $\vec B$. But solving for $\vec B$ itself is hard unless you have symmetry.
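The symmetric, steady-current case is the one where the circulation actually pins down $\vec B$. A minimal numerical sketch (Python; the current value is made up, and the field formula $B=\mu_0 I/2\pi r$ is the standard result for an infinite straight wire) checks that the circulation returns $\mu_0 I$ regardless of loop radius:

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability (SI)
I = 3.0             # steady current in an infinite straight wire (hypothetical value, amps)

def B(x, y):
    """Magnetostatic field of an infinite wire along z: magnitude mu0*I/(2*pi*r), azimuthal."""
    r2 = x**2 + y**2
    # azimuthal unit vector times 1/r is (-y, x)/r^2
    return mu0 * I / (2 * np.pi * r2) * np.array([-y, x])

# Numerical line integral of B around a circle of radius R centered on the wire
R, N = 0.5, 2000
dtheta = 2 * np.pi / N
circ = 0.0
for t in np.linspace(0, 2 * np.pi, N, endpoint=False):
    x, y = R * np.cos(t), R * np.sin(t)
    dl = R * dtheta * np.array([-np.sin(t), np.cos(t)])  # tangent line element
    circ += B(x, y) @ dl

print(circ, mu0 * I)  # circulation equals mu0*I, independent of R
```

Changing `R` leaves the circulation unchanged, which is exactly why, with symmetry, the one number $\mu_0 I$ determines $\vec B$ on the loop.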
What about using Ampere's law in integral form? What is the limit of its validity?
It is completely valid, but it might not be helpful. When you write:
$$ \oint \vec{B}(r,t)\cdot d\vec{l} = \mu_0 I(t) + \mu_0 \iint \epsilon_0 \frac{\partial \vec{E}(r,t)}{\partial t} \cdot d\vec{a}$$
then the $t$ that is used on each side of the equation is exactly the same.
When $I(t)$ changes, the $\vec B$ field nearby changes quickly, and a changing $\vec B$ field is accompanied by a circulating electric field. So as the region of changing $\vec B$ expands, the region of newly circulating electric fields expands with it. Eventually the expanding sphere of changing $\vec B$ and changing circulating electric fields reaches the Amperian loop (together), and only then does the circulation of $\vec B$ on the faraway loop change. If there was just one change in $I(t)$, the expanding shell of changing electric fields continues outward, and you are left with the new value of the circulating $\vec B$ field, set by the current as it was a while back.
So, to solve for $\vec B$, you'd need both $I$ and $\partial \vec E /\partial t$, and you need the latter at every point of empty space on a surface spanning the Amperian loop. Maxwell's equations don't have limited validity and do not need to be modified. They just aren't always as useful as you might want them to be.
Best Answer
The integral and differential versions are equivalent, so it sounds like your text simply doesn't know how to use the differential version in as general a way as it uses the integral version.
For instance, you do not need partial derivatives to define the divergence or the curl. If the partial derivatives do exist, there are simple formulas for the divergence and curl in terms of them. You might be used to taking those formulas as the definitions, but they are not the proper general definitions.
The two forms really are equivalent, because they aren't saying anything different whatsoever. The understanding for the integral forms is that they hold for every closed/open surface in any bounded region. The understanding for the differential forms is that they hold at every point in any bounded region.
It sounds like your text knows how to set up the integral forms in a fairly general way. The differential forms can be set up in an equally general way, since there are very general ways to define the divergence and the curl. One way is to take the divergence theorem and the curl theorem as the definitions of the operators (instead of assuming partial derivatives exist, combining them into particular expressions, and then proving a theorem). For instance
$$\left.\vec{\nabla}\cdot\vec{A}\,\right|_{(x,y,z)}=\lim_{\substack{\operatorname{diam}V\rightarrow 0\\ V\rightarrow\{(x,y,z)\}}}\frac{\iint_{\partial V}\hat{n}\cdot\vec{A}\,dS}{\iiint_V dV}$$
and
$$\hat{S}\cdot\left(\left.\vec{\nabla}\times\vec{A}\,\right|_{(x,y,z)}\right)=\lim_{\substack{\operatorname{diam}S\rightarrow 0\\ S\rightarrow\{(x,y,z)\}}}\frac{\oint_{\partial S}\vec{A}\cdot d\vec{l}}{\iint_S dS}$$
don't require partial derivatives, only that the fields be nice enough for these limits to exist, which is what the divergence and curl theorems require, nothing more, nothing less. But that level of generality is based on assuming your integral version is using Riemann integration. That is the simplest version of integration (but not the most general), and I want you to see that we don't have to arbitrarily make one version more general than the other.
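The flux-over-volume limit can be watched converging numerically. A minimal sketch (Python, using a made-up smooth field whose analytic divergence is $x^2+y^2+z^2$, so the limit at $(1,2,3)$ should be $14$):

```python
import numpy as np

def A(x, y, z):
    """Made-up smooth vector field; its analytic divergence is x^2 + y^2 + z^2."""
    return np.array([x * y**2, y * z**2, z * x**2])

def div_via_flux(p, h, n=64):
    """Flux of A through a cube of side h centered at p, divided by the cube's volume."""
    x0, y0, z0 = p
    c = (np.arange(n) + 0.5) / n * h - h / 2   # midpoint sample offsets on each face
    dA = (h / n)**2                            # area element per sample
    flux = 0.0
    for u in c:
        for v in c:
            # opposite-face pairs, outward normals along +/- x, y, z
            flux += (A(x0 + h/2, y0 + u, z0 + v)[0] - A(x0 - h/2, y0 + u, z0 + v)[0]) * dA
            flux += (A(x0 + u, y0 + h/2, z0 + v)[1] - A(x0 + u, y0 - h/2, z0 + v)[1]) * dA
            flux += (A(x0 + u, y0 + v, z0 + h/2)[2] - A(x0 + u, y0 + v, z0 - h/2)[2]) * dA
    return flux / h**3

p = (1.0, 2.0, 3.0)
for h in (0.5, 0.1, 0.02):
    print(h, div_via_flux(p, h))  # tends to 1 + 4 + 9 = 14 as h -> 0
```

The point of the sketch is only that the ratio of flux to volume approaches the partial-derivative formula as the cube shrinks, which is the content of taking the divergence theorem as the definition.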
What you see from the above is that knowing how to integrate is fully enough to tell us how to differentiate. So we can choose to differentiate as generally as we want to integrate. Let's look at how generally we want to integrate and differentiate.
If you look at an equation like $\vec{\nabla}\cdot\vec{E}=\frac{q}{\epsilon_0}\delta^3(\vec{r}-\vec{r}_0)$, you see that the result of a derivative might be a generalized function. A generalized function is defined by what it does inside integrals, so a general derivative is defined by what its result does under integrals.
How is $\delta^3(\vec{r}-\vec{r}_0)$ defined? It is the map that takes any smooth function $f$ that goes to zero (along with all its derivatives) fast enough to avoid boundary terms from integration by parts (called a test function) and sends it to $\iiint f(\vec{r})\delta^3(\vec{r}-\vec{r}_0)\,dx\,dy\,dz=f(\vec{r}_0)$. It is just one of many such maps, called distributions, that send test functions to numbers in a nice (linear and continuous) way. So if we expect taking derivatives to give us distributions, then we should define derivatives so that they do.
First note that any nice function $g$ is a distribution in the sense that it sends a test function $f$ to the number $\iiint fg\,dx\,dy\,dz$; let's denote that by $G:f\mapsto\iiint fg\,dx\,dy\,dz=G(f)$. Then note that if the function $g$ had a partial derivative $\partial g$ that was a nice function, then $\iiint f\,\partial g\,dx\,dy\,dz=-\iiint g\,\partial f\,dx\,dy\,dz$. This leads to a generalization of partial derivatives: for any distribution $G$ that sends test functions $f$ to $G(f)$, we can define its derivative to be the distribution that sends $f$ to $-G(\partial f)$. For any nice function this definition agrees with the original definition of the derivative, in the sense that they act the same under integrals. But now any distribution has a derivative.
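The classic example is the Heaviside step, which has no classical derivative at the jump but whose distributional derivative is the delta. A minimal numerical sketch (Python; crude Riemann sums on a grid, so a sketch rather than a rigorous computation) applies the definition $G'(f):=-G(\partial f)$ with a Gaussian test function:

```python
import numpy as np

# Grid fine enough for a crude Riemann sum
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

f  = np.exp(-x**2)             # Gaussian test function: it and its derivatives decay fast
df = -2.0 * x * np.exp(-x**2)  # its classical derivative

def G(phi):
    """The Heaviside step g(x) = 1 for x > 0 as a distribution: G(phi) = integral of phi over x > 0."""
    return np.sum(phi[x > 0]) * dx

# Distributional derivative of the step applied to f: G'(f) := -G(f')
val = -G(df)
print(val)  # close to 1.0 = f(0): the step's derivative acts like delta at 0
```

The number that comes out is $f(0)$, which is exactly what $\delta$ is defined to produce, even though the step function itself has no derivative at $0$ in the classical sense.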
But even with this generalization, the divergence can be defined more generally. Using $$\int \phi \vec{\nabla}\cdot \vec{A}=\int \vec{\nabla}\cdot (\phi\vec{A})-\int\vec{A}\cdot\nabla\phi,$$ we can throw away the total divergence term (because things go to zero fast enough on the boundary), to get
$$\int \phi \vec{\nabla}\cdot \vec{A}=\int -\vec{A}\cdot\nabla\phi.$$
So for regular functions $\vec{A}$, the divergence turns $\vec{A}$ into something that acts on test functions $\phi$ by sending each to the number that $\vec{A}$ assigns to $-\nabla\phi$. So for every vector distribution $\vec{G}$ that sends vector test functions $\vec{f}$ to $\vec{G}(\vec{f})$, the divergence of $\vec{G}$ is the distribution that sends a scalar test function $\phi$ to $-\vec{G}(\nabla\phi)$. This is a very general kind of divergence, both in what it can act on and in what it can give.
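For a regular field the identity $\int \phi\, \vec{\nabla}\cdot\vec{A}=-\int \vec{A}\cdot\nabla\phi$ can be checked symbolically. A small sketch (Python with sympy; the vector field is made up, and the test function is a Gaussian so the boundary terms vanish):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

A = sp.Matrix([x, y, z])             # made-up smooth vector field; div A = 3
phi = sp.exp(-(x**2 + y**2 + z**2))  # Gaussian test function (decays fast, no boundary terms)

divA = sum(sp.diff(A[i], v) for i, v in enumerate((x, y, z)))
grad_phi = sp.Matrix([sp.diff(phi, v) for v in (x, y, z)])

dom = [(v, -sp.oo, sp.oo) for v in (x, y, z)]
lhs = sp.integrate(phi * divA, *dom)        # integral of phi * (div A)
rhs = sp.integrate(-A.dot(grad_phi), *dom)  # integral of -(A . grad phi)

print(lhs, rhs)  # both equal 3*pi**(3/2)
```

Both sides come out to $3\pi^{3/2}$, so the distributional definition of the divergence reproduces the classical one whenever the latter exists.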
It's just not fair to let your integrals do more general things than you allow your derivatives to do. But no one truly desires to define their derivatives and integrals to be anything other than equally general; there simply is no point to that.