[Physics] Phasor form of Maxwell’s Equations

Tags: complex-numbers, maxwell-equations

I'm interested in the transformation from the standard Maxwell's equations to their phasor equivalents.

From the literature, this means injecting:

\begin{equation}
E = Re(\boldsymbol{E}e^{j\omega t})
\end{equation}

into

\begin{equation}
\nabla \times E = -\frac{\partial B}{\partial t}
\end{equation}

to deduce

\begin{equation}
\nabla \times \boldsymbol{E} = -j\omega\boldsymbol{B}
\end{equation}
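A quick numerical sanity check of this substitution step (a NumPy sketch with arbitrary example values; the scalar $E$ below stands in for one Cartesian component of $\boldsymbol{E}$): differentiating the real field in time matches multiplying the phasor by $j\omega$.

```python
import numpy as np

w = 2 * np.pi * 2.0        # example angular frequency
E = 1.2 + 0.7j             # one Cartesian component of the phasor E (example value)
t = np.linspace(0, 1, 4000)

field = np.real(E * np.exp(1j * w * t))                  # E(t) = Re(E e^{j w t})
d_field = np.gradient(field, t)                          # numerical d/dt
phasor_rule = np.real(1j * w * E * np.exp(1j * w * t))   # Re(j w E e^{j w t})

# time differentiation of the real field = multiplication of the phasor by j*omega
assert np.allclose(d_field, phasor_rule, atol=1e-2 * w * abs(E))
```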

Some demonstrations out there simply jump from the real components equation to the complete equation, ignoring the imaginary components. i.e., from:

\begin{equation}
Re(\nabla \times \boldsymbol{E} e^{j\omega t}) = Re(-j\omega\boldsymbol{B}e^{j\omega t})
\end{equation}

to the expected solution. Examples of such demonstrations are available at:
https://courses.cit.cornell.edu/ece303/Lectures/lecture14.pdf or
http://faculty.cua.edu/kilic/EE%20542/Topic2_BasicEMTheory.ppt

What assumption allows them to do so? Why is it possible to ignore the imaginary component?

Thanks

Best Answer

Two reasons:

  1. Maxwell's equations are linear. Moreover, the operations ${\rm Re}$, ${\rm Im}$ and $z\mapsto z^*$ are linear in the sense that each maps a sum to the sum of the images of the addends (they are real-linear: they also commute with multiplication by real scalars, though not by complex ones).

  2. With a time-varying sinusoidal quantity, the mapping between the quantities $|a| \cos(\omega\,t-\arg(a));\,a\in\mathbb{C}, \omega\,t\in\mathbb{R}$ (or $-i\,|a| \,\sin(\omega\,t-\arg(a))$ for the imaginary part) on the one hand and $a\,e^{-i\,\omega\,t};\,a\in\mathbb{C}$ on the other is one-to-one and onto. So for every entity of the form $a\,e^{-i\,\omega\,t};\,a\in\mathbb{C}$ there is a unique $|a| \cos(\omega\,t-\arg(a))$ and conversely. Explicitly:

    $$|a| \cos(\omega\,t-\arg(a)) = {\rm Re}(a\,e^{-i\,\omega\,t})$$

    and, because the phase $\omega\,t$ sweeps through every value in $[0,2\pi)$ over a period, one can uniquely infer $\arg(a)$, $|a|$ and $\omega$ from the values of $f(t) = |a| \cos(\omega\,t-\arg(a))$ as $t$ varies. Likewise for the inversion of ${\rm Im}$.
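Both points can be illustrated numerically. In this NumPy sketch (amplitude, phase and frequency are arbitrary example values), ${\rm Re}$ commutes with a real-linear operation such as $d/dt$, and both $\omega$ and the phasor $a$ are recovered from samples of the real signal alone:

```python
import numpy as np

a = 1.5 * np.exp(1j * 0.8)                    # phasor to recover (example values)
w = 2 * np.pi * 3.0                           # angular frequency: 3 Hz
t = np.linspace(0, 1, 3000, endpoint=False)   # exactly three periods of samples
f = np.real(a * np.exp(-1j * w * t))          # the real signal Re(a e^{-i w t})

# Point 1: Re commutes with real-linear operations such as d/dt
assert np.allclose(np.gradient(f, t),
                   np.real(np.gradient(a * np.exp(-1j * w * t), t)))

# Point 2: the map a e^{-i w t} <-> Re(a e^{-i w t}) is invertible.
# Recover omega from the location of the spectral peak:
k = np.argmax(np.abs(np.fft.rfft(f)))
w_rec = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])[k]

# Recover a by projecting f(t) = Re(a)cos(wt) + Im(a)sin(wt) onto cos and sin:
a_rec = 2 * np.mean(f * np.cos(w_rec * t)) + 2j * np.mean(f * np.sin(w_rec * t))

assert np.isclose(w_rec, w)
assert np.isclose(a_rec, a)
```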

It's important to keep in mind that this seeming "trickery" works because $\omega\,t$ varies, so we see the variation of the real and imaginary parts with time. In contrast, taking the real or imaginary part of a lone complex number is of course irreversible: the imaginary (or real) part is lost, and one cannot get the original complex number back from only its real (or imaginary) part!

You can think of the above pithily as: the phasor signal is the "single sideband" (if you recall this archaic modulation scheme) version of the real-valued signal. That is, the negative-frequency component of a real-valued signal is simply the complex conjugate of its positive-frequency component. So, for a linear calculation, there is no need to "process" both components: one calculates with the positive-frequency component alone and then recovers the real signal by taking the complex conjugate of the outcome (which gives the negative-frequency component) and adding it back to the "single-sideband" positive-frequency result.
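This conjugate-symmetry picture can be checked with a discrete Fourier transform (a NumPy sketch; the signal parameters are arbitrary example values):

```python
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)
x = 3.0 * np.cos(2 * np.pi * 5 * t + 0.4)      # a real-valued signal

X = np.fft.fft(x)
# Hermitian symmetry: the negative-frequency bin is the conjugate
# of the corresponding positive-frequency bin
assert np.allclose(X[-5], np.conj(X[5]))

# "Single sideband" processing: keep only the positive-frequency half,
# then recover the real signal as twice the real part (i.e. z + z*)
Xp = X.copy()
Xp[129:] = 0            # zero the negative-frequency bins (N = 256)
Xp[0] *= 0.5            # DC is shared by both halves (zero here anyway)
z = np.fft.ifft(Xp)     # positive-frequency part of the signal
assert np.allclose(x, 2 * z.real)
```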

Phasors are simply a tool of convenience: calculations with $e^{-i\,\omega\,t}$ are easier than those with $\cos$ and $\sin$. However, in the special case of Maxwell's equations, one can interpret the complex quantities as more than simply phasors (although the technique turns out to be the same). See my answer here where I show that the complex quantities are intimately linked to the unique decomposition of the electromagnetic field into its left and right hand circularly polarised components.