The general solution to the homogeneous wave equation $u_{tt}-c^2u_{xx}=0$ is
$$u(x,t)=f(x-ct)+g(x+ct)$$
Applying the initial condition $u(x,0)=A(x)$ reveals that
$$f(x)+g(x)=A(x) \tag 1$$
Applying the initial condition $u_t(x,0)=B(x)$ reveals that
$$-cf'(x)+cg'(x)=B(x) \tag 2$$
Now, we differentiate $(1)$ to obtain
$$f'(x)+g'(x)=A'(x) \tag 3$$
Solving $(2)$ and $(3)$ simultaneously for $f'$ and $g'$ yields
$$f'(x)=\frac{cA'(x)-B(x)}{2c}$$
$$g'(x)=\frac{cA'(x)+B(x)}{2c}$$
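As a quick sanity check (a sympy sketch of my own, not part of the derivation; `fp` and `gp` are stand-in symbols for $f'(x)$ and $g'(x)$), solving $(2)$ and $(3)$ symbolically reproduces these expressions:

```python
import sympy as sp

x, c = sp.symbols('x c', positive=True)
fp, gp = sp.symbols('fp gp')        # stand-ins for f'(x) and g'(x)
A = sp.Function('A')
B = sp.Function('B')
Ap = sp.Derivative(A(x), x)         # A'(x), kept unevaluated

# (2): -c f' + c g' = B(x),   (3): f' + g' = A'(x)
sol = sp.solve([sp.Eq(-c*fp + c*gp, B(x)),
                sp.Eq(fp + gp, Ap)],
               [fp, gp])

assert sp.simplify(sol[fp] - (c*Ap - B(x))/(2*c)) == 0
assert sp.simplify(sol[gp] - (c*Ap + B(x))/(2*c)) == 0
```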
If we impose that $g=0$, so that only the right-going wave survives, then $g'(x)=0$ forces $B(x)=-cA'(x)$. Equation $(1)$ then gives
$$f(x)=A(x)$$
and
$$u(x,t)=A(x-ct)$$
where the initial conditions are $u(x,0)=A(x)$ and $u_t(x,0)=B(x) = -cA'(x)$.
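One can verify symbolically that this $u$ satisfies both the wave equation and these initial conditions (a sketch assuming sympy is available):

```python
import sympy as sp

x, t, c = sp.symbols('x t c', positive=True)
A = sp.Function('A')

u = A(x - c*t)   # candidate right-going wave

# u_tt - c^2 u_xx vanishes identically
assert sp.simplify(u.diff(t, 2) - c**2 * u.diff(x, 2)) == 0

# u(x, 0) = A(x)
assert u.subs(t, 0) == A(x)

# u_t(x, 0) = -c A'(x)
assert sp.simplify(u.diff(t).subs(t, 0).doit() + c*sp.Derivative(A(x), x)) == 0
```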
Approaching the problem using d'Alembert's formula, we have
$$u(x,t)=\frac12 (A(x-ct)+A(x+ct))+\frac{1}{2c}\int_{x-ct}^{0}B(u)du+\frac{1}{2c}\int_{0}^{x+ct}B(u)du$$
Then, for a solution with right-going waves only, we must enforce
$$\frac12 A(x+ct)+\frac{1}{2c}\int_{0}^{x+ct}B(u)du=0 \tag 4$$
Differentiating $(4)$ yields
$$\frac{c}{2} A'(x+ct)+\frac12 B(x+ct)=0 \Rightarrow B(x+ct)=-cA'(x+ct)$$
Finally,
$$u(x,t)=\frac12 A(x-ct)-\frac{1}{2}\int_{x-ct}^{0}A'(u)du=A(x-ct)$$
where we tacitly assume that $A(0)=0$.
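For a concrete check (a sympy sketch; the profile $A(x)=\sin x$ is my own choice, picked because it satisfies $A(0)=0$), plugging $B=-cA'$ into d'Alembert's formula indeed collapses to the right-going wave:

```python
import sympy as sp

x, t, c, s = sp.symbols('x t c s', positive=True)
A = sp.sin                      # sample profile with A(0) = 0
B = lambda s: -c * sp.cos(s)    # B = -c A'

# d'Alembert's formula
u = (sp.Rational(1, 2) * (A(x - c*t) + A(x + c*t))
     + sp.integrate(B(s), (s, x - c*t, x + c*t)) / (2*c))

# only the right-going wave survives
assert sp.simplify(u - A(x - c*t)) == 0
```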
Let us first try the 2D case: $u(x,y,t)$ and $u_{tt} = c^2\Delta u = c^2(u_{xx} + u_{yy})$.
Then define $v^{(x)}_t = cu_x$ and $v^{(y)}_t = cu_y$, and for the spatial derivatives of $\boldsymbol v = \begin{pmatrix} v^{(x)} \\ v^{(y)} \end{pmatrix}$ require that $v^{(x)}_x + v^{(y)}_y = \frac 1c u_t$. Then, for continuously differentiable $v^{(x)}, v^{(y)}$:
\begin{align} \frac 1c u_{tt} &= \partial_t \Big(v^{(x)}_x + v^{(y)}_y \Big) \\
&= v^{(x)}_{xt} + v^{(y)}_{yt} \\
&= v^{(x)}_{tx} + v^{(y)}_{ty} \\
&= cu_{xx} + c u_{yy}\end{align}
so the original PDE (the wave equation) is preserved.
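This substitution chain can be checked symbolically (a sympy sketch of my own; `e1`, `e2`, `e3` are names I introduce for the residuals of the three first-order equations). The identity below says exactly that $u_{tt}-c^2\Delta u$ is a combination of the three residuals, so it vanishes whenever the first-order system holds:

```python
import sympy as sp

x, y, t, c = sp.symbols('x y t c', positive=True)
u  = sp.Function('u')(x, y, t)
vx = sp.Function('vx')(x, y, t)
vy = sp.Function('vy')(x, y, t)

# residuals of the three first-order equations
e1 = vx.diff(t) - c*u.diff(x)                    # v^(x)_t = c u_x
e2 = vy.diff(t) - c*u.diff(y)                    # v^(y)_t = c u_y
e3 = u.diff(t) - c*(vx.diff(x) + vy.diff(y))     # u_t = c (v^(x)_x + v^(y)_y)

wave = u.diff(t, 2) - c**2*(u.diff(x, 2) + u.diff(y, 2))

# u_tt - c^2 (u_xx + u_yy) = d/dt e3 + c (d/dx e1 + d/dy e2)
assert sp.simplify(wave - (e3.diff(t) + c*(e1.diff(x) + e2.diff(y)))) == 0
```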
So the first order system reads
$$ \partial_t\begin{pmatrix} v^{(x)} \\v^{(y)} \\ u \end{pmatrix} + \nabla \cdot \begin{pmatrix} -cu & 0 \\ 0 & -cu \\ -cv^{(x)} & -cv^{(y)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0\end{pmatrix} $$
where the $\nabla \cdot$ acts row-wise.
This is the general form of a conservation law in multiple dimensions in divergence form:
$$ \partial_t \boldsymbol u + \nabla \cdot \boldsymbol f(\boldsymbol u) = \boldsymbol{0}.$$
I am not sure how you would write a linear system in divergence form: for a row-wise acting divergence $\nabla \cdot$ you need $A \boldsymbol u \in \mathbb R^{m\times d}$, with $m$ the number of variables $| \boldsymbol u |$ and $d$ the spatial dimension. While you can ensure that $A$ has $m$ rows, there is no way for $A$ to have $d$ columns in the product, since multiplying $A$ with the column vector $\boldsymbol u \in \mathbb R^{m \times 1}$ always yields another column vector.
The extension to 3D is then straightforward; here you have
$$ \partial_t\begin{pmatrix} v^{(x)} \\v^{(y)} \\ v^{(z)} \\ u \end{pmatrix} + \nabla \cdot \begin{pmatrix} -cu & 0 & 0\\ 0 & -cu &0 \\ 0 & 0 & -cu \\ -cv^{(x)} & -cv^{(y)} & -cv^{(z)} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0\end{pmatrix} $$
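As a sanity check on the bookkeeping (again a sympy sketch; the row-wise divergence is spelled out by hand, since there is no built-in matrix divergence of this form), applying $\partial_t$ plus the row-wise $\nabla\cdot$ to the 3D flux matrix reproduces the four scalar first-order equations:

```python
import sympy as sp

x, y, z, t, c = sp.symbols('x y z t c', positive=True)
u  = sp.Function('u')(x, y, z, t)
vx = sp.Function('vx')(x, y, z, t)
vy = sp.Function('vy')(x, y, z, t)
vz = sp.Function('vz')(x, y, z, t)

state = sp.Matrix([vx, vy, vz, u])
flux = sp.Matrix([
    [-c*u,   0,     0   ],
    [ 0,    -c*u,   0   ],
    [ 0,     0,    -c*u ],
    [-c*vx, -c*vy, -c*vz],
])

# row-wise divergence: sum_j d(flux[i, j]) / d(coords[j])
coords = (x, y, z)
div = sp.Matrix([sum(flux[i, j].diff(coords[j]) for j in range(3))
                 for i in range(4)])
system = state.diff(t) + div

# the four scalar equations we expect to recover
expected = sp.Matrix([
    vx.diff(t) - c*u.diff(x),
    vy.diff(t) - c*u.diff(y),
    vz.diff(t) - c*u.diff(z),
    u.diff(t) - c*(vx.diff(x) + vy.diff(y) + vz.diff(z)),
])
assert sp.simplify(system - expected) == sp.zeros(4, 1)
```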
Best Answer
Generally, the name of a variable of integration doesn't actually matter; it's just an identifier. It only really matters when there's more than one variable involved that you could be integrating with respect to, and in this problem there isn't. You can think of $s$ as "the single argument of $g$", which you could give any name you want, provided it isn't already bound in the surrounding scope, so not $x,F,G,g$ or $c$. (So you could technically call it $f$, even though there is an $f$ elsewhere in the problem. Please don't, though.)
Note that the name of a variable limit of integration does matter, because the definite integral depends on that limit. Thus for example here the use of $x$ as the upper limit of integration is not a completely stylistic choice; whatever letter you use has to be the same letter that is used for the arguments of $F$ and $G$ on the LHS.
The big thing going on in the background here surrounds the difference between an expression and a function. An expression can have named variables in it. A function, strictly speaking, does not have named variables in it, it just has arguments and their positions. This means that strictly speaking a function can't be differentiated with respect to a named variable, only with respect to an argument position. But no one writes like that, unless their audience is a computer.
This results in rather convoluted things happening under the hood. For example, in this context, the symbol $\frac{\partial u}{\partial t}$ is a shorthand for $(x_0,t_0) \mapsto \frac{\partial}{\partial t} \left. \left [ u(x,t) \right ] \right |_{x=x_0,t=t_0}$. Here we take $u$, a function, turn it into $u(x,t)$, an expression, turn that into $\frac{\partial}{\partial t} [u(x,t)]$, also an expression, and then finally convert that back into a function. Then we even change the name of the arguments of that function back to $(x,t)$. Similar things are going on with integration.
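Computer algebra systems make this distinction explicit. A small sympy illustration (my own example, not from the question): `sp.Function('u')` is a function in the strict sense, `u(x, t)` is an expression with named variables in it, and differentiation by argument *position* is available via `fdiff`:

```python
import sympy as sp

x, t, x0, t0 = sp.symbols('x t x0 t0')
u = sp.Function('u')      # a function: arguments have positions, not names

expr = u(x, t)            # an expression: the names x and t now appear in it

# "differentiate with respect to the named variable t" only makes sense
# for the expression
du_dt = expr.diff(t)

# differentiation by argument position (here: position 2) gives the same object
assert u(x, t).fdiff(2) == du_dt

# evaluate the derivative at (x0, t0), then rename the arguments back to (x, t)
assert du_dt.subs({x: x0, t: t0}).subs({x0: x, t0: t}) == du_dt
```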