No explicit complexification is needed to derive this breakdown of Maxwell's equations. This can be understood wholly through the real vector space of special relativity.
Let's start with Maxwell's equations for the EM field, in the Clifford algebra language called STA: the spacetime algebra. There they take the form
$$\nabla F = -J$$
where $\nabla F = \nabla \cdot F + \nabla \wedge F$, $F = e_0 E + B \epsilon_3$, in the $(-, +, +, +)$ sign convention.
Let $x$ be the spacetime position vector. It's generally true that, for a vector $v$ and a constant bivector $C$,
$$\nabla (C \cdot x) = -2 C, \quad \nabla (C \wedge x) = 2C \implies \nabla (Cx) = 0$$
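These identities are easy to check numerically. Here is a minimal sketch, entirely my own construction rather than anything from the original discussion, using a real $4\times 4$ matrix representation of the $(-,+,+,+)$ algebra; since $Cx$ is linear in $x$, we have $\partial_\mu(Cx) = Ce_\mu$, so each gradient collapses to an algebraic sum over the frame:

```python
import numpy as np

# Real 4x4 matrix representation of the (-,+,+,+) spacetime algebra:
# e0^2 = -1, e1^2 = e2^2 = e3^2 = +1, all pairs anticommuting.
I2  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
s3  = np.array([[1., 0.], [0., -1.]])
eps = np.array([[0., 1.], [-1., 0.]])
e  = [np.kron(eps, s1), np.kron(s1, I2), np.kron(s3, I2), np.kron(eps, eps)]
er = [-e[0], e[1], e[2], e[3]]   # reciprocal frame: e^0 = -e_0, e^i = e_i

# A random constant bivector C = sum_{mu<nu} c_{mu nu} e_mu e_nu.
rng = np.random.default_rng(0)
C = sum(rng.normal() * e[m] @ e[n] for m in range(4) for n in range(m + 1, 4))

# Using C.x = (Cx - xC)/2 and C^x = (Cx + xC)/2 for a bivector C and vector x,
# and d_mu(Cx) = C e_mu, the gradient nabla = e^mu d_mu gives:
dot   = sum(er[m] @ (C @ e[m] - e[m] @ C) for m in range(4)) / 2  # nabla (C . x)
wedge = sum(er[m] @ (C @ e[m] + e[m] @ C) for m in range(4)) / 2  # nabla (C ^ x)
full  = sum(er[m] @ C @ e[m] for m in range(4))                   # nabla (C x)

print(np.allclose(dot, -2 * C), np.allclose(wedge, 2 * C), np.allclose(full, 0 * C))
# -> True True True
```

The last check is precisely the statement $\nabla(Cx)=0$ that the argument below relies on.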
One can then evaluate the expression
$$\nabla (Fx) = (\nabla F)x + \dot \nabla (F \dot x)$$
where the overdot means that only $x$ is differentiated in the second term; using the product rule, $F$ is "held constant" and so the above formulas apply. We just argued that the second term is zero, so we get $\nabla (Fx) = (\nabla F) x$. Thus, we arrive at the following transformation of Maxwell's equations:
$$\nabla (Fx) = -Jx$$
Now, we could always write $F$ as a "complex bivector" in the sense that, using $\epsilon = e_0 \epsilon_3$, with $\epsilon^2 = -1$, we have
$$F = e_0 E - B \epsilon_3 e_0 \epsilon_3 \epsilon = e_0 (E + \epsilon B)$$
It's crucial to note that $\epsilon$ does not commute with any vector: in fact, it anticommutes with every vector.
What are the components of $Fx$? Writing $x = t e_0 + r$, we can express them as
$$Fx = e_0 (Ex + \epsilon Bx) = e_0 (E \cdot r + E \wedge r - e_0 Et + \epsilon B \cdot r - e_0 B \times r + \epsilon B t e_0)$$
This too can be written in a "complex" form:
$$Fx = (e_0 E \cdot r + Et + B \times r) + \epsilon (E \times r + e_0 B\cdot r + Bt)$$
We seem to differ on some signs, but this is recognizably the same quantity you have called $G$.
Now, to talk about how these equations break down, let's write $G = G_1 + G_3$, where $G_1 = (e_0 E \cdot r + \ldots)$ and $G_3 = \epsilon (E \times r + \ldots)$. Let's also write the source term as $R = Jx = R_0 + R_2$, split into its scalar and bivector parts.
Maxwell's equations then become
$$\nabla \cdot G_1= R_0, \quad \nabla \wedge G_1 + \nabla \cdot G_3 = R_2, \quad \nabla \wedge G_3 = 0$$
The first and third equations are the components of the Gauss dipole; the second equation is the Ampere-Faraday dipole equation.
Now, what does it all mean? The expression for $G = Fx$ includes both rotational moments of the EM field and some dot products, so it measures both how much the spacetime position lies in the same plane as the EM field and how much it lies out of that plane.
It's probably more instructive to look at the source term $-Jx$. This tells us both about the moments of the four-current and about how it points toward or away from the coordinate origin. The description of the moments is wholly in the Ampere-Faraday dipole equation. What kinds of moments would this describe? A pair of opposite point charges at rest, separated by a spatial vector $2 \hat v$ and centered on the origin, each with four-current $\pm j_0 e_0$, would give $R = Jx = j_0 e_0 \hat v - j_0 e_0 (-\hat v) = 2 j_0 e_0 \hat v$, so this would be described wholly by the A-F dipole equation.
That's at time zero, however. At later times, $R$ picks up these weird time terms. Say we're at time $\tau$. Then $R = 2 j_0 e_0 \hat v + j_0 e_0 (\tau e_0) - j_0 e_0 (\tau e_0)$. So for this case there's no problem: the extra terms just cancel. A single charge, however, would start picking up this term.
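As a quick numerical illustration of this cancellation, here is a small sketch in the same matrix representation as above (again my own check, with $\hat v = e_1$ and arbitrary values for $j_0$ and $\tau$):

```python
import numpy as np

# Same real matrix representation as in the earlier sketch.
I2  = np.eye(2)
s1  = np.array([[0., 1.], [1., 0.]])
eps = np.array([[0., 1.], [-1., 0.]])
e0, v_hat = np.kron(eps, s1), np.kron(s1, I2)   # e0 and a unit spatial vector e1

j0, tau = 1.0, 0.7
x_plus, x_minus = tau * e0 + v_hat, tau * e0 - v_hat   # the two charge positions
R = j0 * e0 @ x_plus - j0 * e0 @ x_minus               # R = Jx summed over charges
print(np.allclose(R, 2 * j0 * e0 @ v_hat))             # True: the time terms cancel
```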
In a few words, these equations are weird.
Magnetic potentials are nowhere near unique, as you have conclusively shown; for more details, look up 'gauge freedom' in your favourite EM textbook or on Wikipedia. Imposing the Coulomb gauge condition $\nabla\cdot\mathbf A=0$ reduces the gauge freedom, but you can still transform the potential to
$$\mathbf A\mapsto \mathbf A'=\mathbf A+\nabla\psi$$
for any harmonic $\psi$, i.e. any $\psi$ with $\nabla^2\psi=0$, and obtain a different potential which (i) returns the same magnetic field, and (ii) also satisfies the Coulomb gauge condition.
For regular-enough magnetic fields, you can often introduce additional requirements on the vector potential - regularity conditions, and suitable decay at infinity - which can specify it uniquely, but their feasibility depends on the niceness of the magnetic field.
To make this a bit more explicit, you've shown that $\mathbf A_1=\dfrac{\mu_0Iz}{2\pi s}\hat s$ works as a vector potential. However, it is just as easy to check that $\mathbf A_2=-\dfrac{\mu_0I}{2\pi}\ln(s)\hat z$ works equally well: it returns the same field, and it's also in the Coulomb gauge. What gives? Well, the two potentials are related by the gauge transformation
$$
\mathbf A_1
= \mathbf A_2+\nabla\psi
= \mathbf A_2+\nabla\left(\dfrac{\mu_0I}{2\pi}z\ln(s)\right),
$$
where $\psi\propto z\ln(s)$ obeys the Laplace equation. Which one is preferable? Neither, really - they are both singular at the wire ($\mathbf A_1$ more than $\mathbf A_2$), and while $\mathbf A_2$ grows at infinity, the $\sim 1/s$ decay of $\mathbf A_1$ there is probably too slow to be much help. In this situation, the magnetic field is too singular (infinitely thin wire) and contains too much energy (infinitely long wire) for regularity and boundedness conditions to be much help in specifying a unique vector potential.
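For the skeptical, all of these claims can be verified symbolically. The following sketch is my own check, not part of the original argument: working in Cartesian components with $s=\sqrt{x^2+y^2}$, it confirms that both potentials return the same field, that they differ by $\nabla\psi$, that $\psi$ is harmonic away from the wire, and that both satisfy the Coulomb gauge condition.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
mu0, I = sp.symbols('mu_0 I', positive=True)
c = mu0 * I / (2 * sp.pi)
s = sp.sqrt(x**2 + y**2)              # cylindrical radius in Cartesian form

A1 = sp.Matrix([c * z * x / s**2, c * z * y / s**2, 0])   # (mu0 I z / 2 pi s) s_hat
A2 = sp.Matrix([0, 0, -c * sp.log(s)])                    # -(mu0 I / 2 pi) ln(s) z_hat
psi = c * z * sp.log(s)

curl = lambda F: sp.Matrix([
    sp.diff(F[2], y) - sp.diff(F[1], z),
    sp.diff(F[0], z) - sp.diff(F[2], x),
    sp.diff(F[1], x) - sp.diff(F[0], y),
])
grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
div  = lambda F: sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

print(sp.simplify(curl(A1) - curl(A2)))                         # zero: same B field
print(sp.simplify(A1 - A2 - grad(psi)))                         # zero: A1 = A2 + grad(psi)
print(sp.simplify(sum(sp.diff(psi, v, 2) for v in (x, y, z))))  # zero: psi harmonic (s != 0)
print(sp.simplify(div(A1)), sp.simplify(div(A2)))               # both zero: Coulomb gauge
```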
In general, gauge freedom is something which you can often fix to a large extent, but it will always be there lurking in the background. Moreover, there are simply no universally-optimal ways to fix the gauge (for instance, the Coulomb gauge $\nabla\cdot \mathbf A=0$ is not Lorentz invariant, while the Lorenz gauge $\nabla\cdot \mathbf A+\frac{1}{c^2}\frac{\partial \varphi}{\partial t}=0$ is awkward for non-relativistic work, and so on), so you always need to think of gauge-dependent constructs as temporary, non-unique, and non-physical. The broader-picture answer is to simply let go of the uniqueness of the magnetic potential.
I don't fully understand the physics here, but I've tried to give a mathematical explanation.
Firstly I'll just quote some standard Fourier series results. The cosine Fourier series is
$$f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos\left(\frac{n\pi x}{L}\right) \quad \text{where}\quad a_n=\frac{2}{L}\int_0^L\mathrm{d}x\,\cos\left(\frac{n\pi x}{L}\right)f(x)$$
and the sine Fourier series is similarly
$$f(x)=\sum_{n=1}^{\infty}b_n\sin\left(\frac{n\pi x}{L}\right) \quad \text{where}\quad b_n=\frac{2}{L}\int_0^L\mathrm{d}x\,\sin\left(\frac{n\pi x}{L}\right)f(x).$$
Your situation is slightly different as the region is from $-L$ to $0$ but all this does is change the integration limits (which can be seen by considering $x\rightarrow -x$). Note that you can use either the sine or cosine Fourier series as we are only interested in this region.
Your notes expand $E(z,t)$ as a sine Fourier series and $H(z,t)$ as a cosine Fourier series so, assuming $H(z,t)$ has no constant $a_0$ component,
$$E(z, t) = \sum_n A_n(t) u_n(z), \\ H(z, t) = \sum_n H_n(t) v_n(z)$$
where $u_n(z)=\sin(k_n z)$ and $v_n(z)=\cos(k_n z)$ for $k_n=n\pi/L$ and, using the standard results,
$$A_n(t)=\frac{2}{L}\int_{-L}^0\mathrm{d}z\,u_n(z) E(z,t), \\ H_n(t)=\frac{2}{L}\int_{-L}^0\mathrm{d}z\,v_n(z) H(z,t). $$
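As a sanity check of these shifted-interval formulas, here is a short numerical sketch; the profile $E(z)=\cos(\pi z/2L)$ is my own test choice (it vanishes at $z=-L$ but not at $z=0$), not anything from your notes. The sine series recovers $E$ at interior points but returns $0$ at $z=0$, which is exactly the convergence failure discussed next.

```python
import numpy as np

L = 1.0
E = lambda z: np.cos(np.pi * z / (2 * L))   # test profile: E(-L) = 0, E(0) = 1
N = 500                                     # number of sine modes kept
k = np.arange(1, N + 1) * np.pi / L

# Trapezoid quadrature on [-L, 0] for the coefficient integrals.
z = np.linspace(-L, 0, 40001)
dz = z[1] - z[0]
w = np.ones_like(z); w[0] = w[-1] = 0.5
A = np.array([2 / L * np.sum(w * np.sin(kn * z) * E(z)) * dz for kn in k])

series = lambda z0: np.sum(A * np.sin(k * z0))
print(series(-0.5), E(-0.5))   # interior point: series ~ 0.7071, matches E
print(series(0.0), E(0.0))     # endpoint: series gives 0.0, but E(0) = 1
```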
There is a subtlety in that the Fourier series is only convergent with respect to the $L^2$ norm, which means it does not necessarily converge at every point $z$. For example, in your problem $E(0,t)$ seems to be non-zero (yet apparently $E(-L,t)$ is zero), but in the Fourier expansion we find $E(0,t)=0$, so the Fourier series does not converge at $z=0$. This causes a problem for $\partial E(z,t)/\partial z$ as the Fourier series is not differentiable at $z=0$. We can fix it in an extremely dodgy physics way by adding a Kronecker delta to the expansion like
$$E(z, t) = \sum_n A_n(t) u_n(z) + E(0,t)\delta_{z,0}$$
which has derivative
$$\frac{\partial E(z, t)}{\partial z} = \sum_n A_n(t) \frac{\partial u_n(z)}{\partial z} + E(0,t) \frac{\partial \delta_{z,0}}{\partial z}$$
The first term is evaluated easily to be
$$\sum_n A_n(t) \frac{\partial u_n(z)}{\partial z} = \sum_n k_n A_n(t) v_n(z)$$
and the second term can be expressed formally as a cosine Fourier series with coefficients
$$a_n=\frac{2}{L}\int_{-L}^0\mathrm{d}z\,v_n(z) E(0,t) \frac{\partial \delta_{z,0}}{\partial z}\\=\left[\frac{2}{L}v_n(z)E(0,t)\delta_{z,0}\right]_{-L}^0-\frac{2}{L}\int_{-L}^0\mathrm{d}z\,\frac{\partial v_n(z)}{\partial z} E(0,t) \delta_{z,0}=\frac{2}{L}E(0,t)$$
using integration by parts. The integral is zero because the integrand is zero except at $z=0$, where it is merely finite. Thus we have
$$E(0,t) \frac{\partial \delta_{z,0}}{\partial z}=\sum_n v_n(z) \frac{2}{L} E(0,t)$$
and so we find
$$\frac{\partial E(z, t)}{\partial z} = \sum_n v_n(z) \left(\frac{2}{L} E(0,t)+ k_n A_n(t) \right)$$
which is the first of your expressions.
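This relation is easy to test numerically: for a test profile with $E(-L,t)=0$ and $E(0,t)\neq 0$ (again my own choice of $E(z)=\cos(\pi z/2L)$, not from your notes), the cosine coefficients of $\partial E/\partial z$ computed by direct quadrature match $\frac{2}{L}E(0,t)+k_nA_n(t)$:

```python
import numpy as np

L = 1.0
E  = lambda z: np.cos(np.pi * z / (2 * L))             # test: E(-L) = 0, E(0) = 1
dE = lambda z: -np.pi / (2 * L) * np.sin(np.pi * z / (2 * L))

# Trapezoid quadrature on [-L, 0].
z = np.linspace(-L, 0, 40001)
dz = z[1] - z[0]
w = np.ones_like(z); w[0] = w[-1] = 0.5
integrate = lambda f: np.sum(w * f) * dz

for n in (1, 2, 5, 10):
    kn = n * np.pi / L
    An  = 2 / L * integrate(np.sin(kn * z) * E(z))     # A_n from the standard result
    lhs = 2 / L * integrate(np.cos(kn * z) * dE(z))    # cosine coefficient of dE/dz
    print(n, lhs, 2 / L * E(0) + kn * An)              # the two columns agree
```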
$H(z,t)$ on the other hand does not have this problem as the cosine series is not automatically zero at the boundary and differentiating gives
$$-\frac{\partial H(z, t)}{\partial z} = -\sum_n H_n(t) \frac{\partial v_n(z)}{\partial z} = \sum_n k_n H_n(t) u_n(z)$$
which is the second result.
The somewhat easier way to derive this, which is probably what your notes did, is to express the derivatives as Fourier series directly. They expand $\partial E(z,t)/\partial z$ in a cosine series as
$$ \frac{\partial{E(z, t)}}{\partial{z}} = \frac{2}{L} \sum_n v_n(z) \left( \int_{-L}^0 dz' \ v_n(z') \frac{\partial{E(z', t)}}{\partial{z'}} \right)$$
which, as with $E(z,t)$ itself, just comes from the standard results. The integral can be done by parts to give
$$\frac{2}{L}\int_{-L}^0 dz \ v_n(z) \frac{\partial{E(z, t)}}{\partial{z}} = \left[\frac{2}{L} v_n(z) E(z,t) \right]_{-L}^0-\frac{2}{L}\int_{-L}^0 dz \ \frac{\partial{v_n(z)}}{\partial{z}} E(z, t) \\ = \frac{2}{L}E(0,t)+ k_n\frac{2}{L}\int_{-L}^0 dz \ u_n(z) E(z, t).$$
However, note that the second term is just $k_n$ times $A_n(t)$ and so
$$\frac{2}{L}\int_{-L}^0 dz \ v_n(z) \frac{\partial{E(z, t)}}{\partial{z}} = \frac{2}{L}E(0,t)+ k_n A_n(t)$$
which gives
$$\frac{\partial E(z, t)}{\partial z} = \sum_n v_n(z) \left(\frac{2}{L} E(0,t)+ k_n A_n(t) \right).$$
Similarly, for $\partial H(z,t)/\partial z$, they expand it as a sine series
$$ -\frac{\partial{H(z, t)}}{\partial{z}} = -\frac{2}{L} \sum_n u_n(z) \left( \int_{-L}^0 dz' \ u_n(z') \frac{\partial{H(z', t)}}{\partial{z'}} \right),$$
the integration by parts gives
$$\frac{2}{L}\int_{-L}^0 dz \ u_n(z) \frac{\partial{H(z, t)}}{\partial{z}} = \left[\frac{2}{L} u_n(z) H(z,t) \right]_{-L}^0-\frac{2}{L}\int_{-L}^0 dz \ \frac{\partial{u_n(z)}}{\partial{z}} H(z, t)= -k_n H_n(t)$$
and we finally have
$$-\frac{\partial H(z, t)}{\partial z} = \sum_n k_n H_n(t) u_n(z).$$
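This second relation holds for any smooth $H$, since $u_n$ vanishes at both endpoints and no boundary term survives; a quick quadrature check with an arbitrary test profile (my own choice, not from your notes) confirms it:

```python
import numpy as np

L = 1.0
H  = lambda z: np.exp(z) * (z + L)    # arbitrary smooth test profile
dH = lambda z: np.exp(z) * (z + L + 1)

# Trapezoid quadrature on [-L, 0].
z = np.linspace(-L, 0, 40001)
dz = z[1] - z[0]
w = np.ones_like(z); w[0] = w[-1] = 0.5
integrate = lambda f: np.sum(w * f) * dz

# No boundary terms arise here: sin(k_n z) vanishes at both z = 0 and z = -L.
for n in (1, 3, 7):
    kn = n * np.pi / L
    lhs = -2 / L * integrate(np.sin(kn * z) * dH(z))     # sine coefficient of -dH/dz
    rhs = kn * 2 / L * integrate(np.cos(kn * z) * H(z))  # k_n H_n(t)
    print(n, lhs, rhs)                                   # the two columns agree
```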
Some of this is quite dodgy and I'm not really sure what they are trying to do, but I hope it explains where the equations come from.