You've got two questions about the absolute Laplace transform, which is defined as
$$\mathcal L_a[f(k)](x)=\int^\infty_{0}f(k)e^{-k|x|}dk$$ for continuous $f(k) \in o(e^{\delta k}), \forall\delta>0$.
Question 1: How to interpret the different behaviour on both sides of the equation under differentiation at $x=0$?
In $\mathbb R$, if there exists a punctured neighbourhood of $x=0$ on which $\mathcal L_a[f](x)$ and $\displaystyle{\sum^\infty_{n=0}a_n x^{2n}}$ coincide, then both expressions have the same behaviour under differentiation at $x=0$, in the sense that
$$\int^\infty_{0}f(k) \left( \frac{\partial}{\partial x} \right)_{\text{left}} e^{-k|x|}dk\bigg\vert_{x\to0^-}
=\int^\infty_{0}f(k) \left( \frac{\partial}{\partial x} \right)_{\text{right}} e^{-k|x|}dk\bigg\vert_{x\to0^+}
=\frac{d}{dx}\sum^\infty_{n=0}a_n x^{2n}\bigg\vert_{x=0}=0
$$
where the subscripts denote the one-sided derivatives.
Sketch of proof:
The third equality is trivial.
To prove the right-sided derivative is zero, we want to switch the order of integral and differentiation. Here we will utilise the 'extended Leibniz integral rule':
For $$\frac{d}{dx}\int^\infty_c f(x,t)dt=\int^\infty_c \frac{\partial}{\partial x} f(x,t)dt\qquad x\in(a,b)$$ to hold, sufficient conditions are:
1. $f(x,t)$ and $\displaystyle{\frac{\partial}{\partial x} f(x,t)}$ are continuous in the region $c\le t<\infty$, $a\le x\le b$.
2. $\displaystyle{\lim_{N\to\infty}\int^N_c \frac{\partial}{\partial x}f(x,t)dt} $ converges uniformly for $x\in(a,b)$.
3. $\displaystyle{\lim_{N\to\infty}\int^N_c f(x,t)dt} $ converges for $x\in(a,b)$.
It is straightforward to verify that the three conditions are satisfied for $0<x<r$ (where $r$ is the radius of convergence of the Taylor series). Thus
$$\begin{align}
\int^\infty_{0}f(k) \left( \frac{\partial}{\partial x} \right)_{\text{right}} e^{-k|x|}dk\bigg\vert_{x\to0^+}
&=\left( \frac{d}{dx} \right)_{\text{right}}\int^\infty_{0}f(k)e^{-kx}dk\bigg\vert_{x\to0^+} \\
&=\left( \frac{d}{dx} \right)_{\text{right}}\sum^\infty_{n=0}a_n x^{2n}\bigg\vert_{x\to0^+} \\
&=0
\end{align}
$$
Similarly, the left-sided derivative is also zero.
Note: it is slightly more involved to show that condition 2 is satisfied.
We aim to prove that for $x>0$, $$\lim_{N\to\infty}\int^N_{0}f(k) \left( \frac{\partial}{\partial x} \right)_{\text{right}} e^{-k|x|}dk=\lim_{N\to \infty}\int^N_{0}-kf(k) e^{-kx}dk\quad\text{converges uniformly.}$$
To this end, we make use of Cauchy criterion:
for sufficiently large $m>n>N$ (large enough that $|kf(k)|<e^{\delta k}$ on $[n,m]$, which is possible since $f\in o(e^{\delta k})$),
$$\begin{align}
\left|\int^m_{n}-kf(k) e^{-kx}dk\right|
&<\int^m_{n}\left|kf(k) e^{-kx}\right|dk \\
&<\int^m_{n}e^{\delta k} e^{-kx}dk \\
&<2\cdot\frac{e^{(\delta-x)n}}{x-\delta} \\
&<2\cdot\frac{e^{(\delta-x)N}}{x-\delta} \\
&<2\cdot\frac{e^{-\Delta N}}{\Delta} \quad \text{for } x>\delta+\Delta, \Delta>0\\
\end{align}
$$
Choosing $N=\frac1{\Delta}\ln\frac 2{\epsilon\Delta}$ shows uniform convergence for $x>\delta+\Delta$, and hence justifies the exchange of differentiation and integration for $x>\delta+\Delta$. Since $\delta$ and $\Delta$ can be chosen arbitrarily small, condition 2 is satisfied for all $x>0$.
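As a concrete sanity check of the claim (not part of the proof), take the test function $f(k)=\sin k$, which is continuous and in $o(e^{\delta k})$ for every $\delta>0$; its absolute Laplace transform is $1/(1+x^2)=\sum_n(-1)^nx^{2n}$, an even series. A short numerical sketch (the function names and tolerances are mine):

```python
import numpy as np
from scipy.integrate import quad

def La_sin(x):
    # Absolute Laplace transform of f(k) = sin k:
    # ∫_0^∞ sin(k) e^{-k|x|} dk, which equals 1/(1 + x²) for x ≠ 0.
    val, _ = quad(lambda k: np.sin(k) * np.exp(-k * np.abs(x)), 0, np.inf, limit=300)
    return val

# One-sided difference quotients at x = 0⁺ shrink towards 0,
# matching d/dx Σ aₙ x²ⁿ |_{x=0} = 0 (the limiting value at 0⁺ is 1).
quotients = [(La_sin(h) - 1.0) / h for h in (0.4, 0.2, 0.1)]
```

The quotients behave like $-h/(1+h^2)$, consistent with the vanishing one-sided derivative.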
Question 2: Is it possible to differentiate under the integral sign, possibly at the cost of introducing additional terms of distributional nature?
Yes.
Suppose $\mathcal L_a[f](x)$ and $\displaystyle{\sum^\infty_{n=0}a_n x^{2n}}$ coincide in a punctured neighbourhood of $x=0$.
Then, indeed, in the sense of distribution,
$$\int^\infty_0 kf(k)dk=0$$ and thus
$$\frac{d}{dx}\mathcal L_a[f(k)](x)\bigg\vert_{x=0}=-\text{sgn}(x)\int^\infty_0 kf(k)e^{-k|x|}dk\bigg\vert_{x=0}=-\text{sgn}(0)\int^\infty_0 kf(k)dk=0$$
Proof:
It is well-known that $$\int^\infty_0 \delta'(s)e^{-sk}ds=k$$
Therefore,
$$\begin{align}
\int^\infty_0 kf(k)dk
&=\int^\infty_0 \int^\infty_0 \delta'(s)e^{-sk} f(k) \, ds \, dk \\
&=\int^\infty_0 \int^\infty_0 \delta'(s)e^{-sk} f(k) \, dk \, ds \qquad (1)\\
&=\int^\infty_0 \delta'(s)\left(\int^\infty_0 f(k)e^{-sk} dk\right)ds \\
&=\int^\infty_0 \delta'(s)\sum^\infty_{n=0}a_n s^{2n} ds \qquad (2)\\
&=-\int^\infty_0 \delta(s)\left(\sum^\infty_{n=0}a_n s^{2n}\right)' ds \\
&=-\left(\sum^\infty_{n=0}a_n s^{2n}\right)'_{s=0} \\
&=0
\end{align}
$$
$(1)$: Changing the order of integration is justified by Fubini's theorem.
$(2)$: Due to the formula $\displaystyle{\int^\infty_{-\infty}\delta'(x)\varphi(x)dx=-\int^\infty_{-\infty}\delta(x)\varphi'(x)dx}$.
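The distributional identity $\int_0^\infty kf(k)\,dk=0$ can also be checked numerically via Abel regularisation, again with the test function $f(k)=\sin k$ (my choice of example): one has $\int_0^\infty k\sin k\,e^{-\epsilon k}dk = 2\epsilon/(1+\epsilon^2)^2 \to 0$ as $\epsilon\to0^+$. A sketch:

```python
import numpy as np
from scipy.integrate import quad

def abel_moment(eps):
    # Abel-regularised first moment ∫_0^∞ k sin(k) e^{-εk} dk.
    # Closed form: 2ε/(1+ε²)², which tends to 0 as ε → 0⁺.
    val, _ = quad(lambda k: k * np.sin(k) * np.exp(-eps * k), 0, np.inf, limit=400)
    return val
```

The regularised moments shrink linearly in $\epsilon$, consistent with the distributional value $0$.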
This is not a general answer, but a derivation of explicit representations in special cases.
We can symmetrise the integral by the change of variables $\theta\to\pi-\theta$ on the interval $[\pi/2,\pi]$:
\begin{align}
{\mathcal J}_{\alpha,m}(x,y)&= \int_0^\pi J_\alpha( x \sin(\theta) ) e^{\imath y\cos(\theta)} \sin^m(\theta) \,d\theta\\
&= \left( \int_0^{\pi/2}+\int_{\pi/2}^\pi\right) J_\alpha( x \sin(\theta) ) e^{\imath y\cos(\theta)} \sin^m(\theta) \,d\theta\\
&= \int_0^{\pi/2} J_\alpha( x \sin(\theta) ) e^{\imath y\cos(\theta)} \sin^m(\theta) \,d\theta+\int_0^{\pi/2}J_\alpha( x \sin(\theta) ) e^{-\imath y\cos(\theta)} \sin^m(\theta) \,d\theta\\
&=2\int_0^{\pi/2} J_\alpha( x \sin(\theta) )\cos( y\cos(\theta)) \sin^m(\theta) \,d\theta
\end{align}
Using the Bessel representation
\begin{equation}
J_{-1/2}\left( y\cos\theta \right)=\sqrt{\frac{2}{\pi}}\frac{\cos( y\cos(\theta))}{\sqrt{y\cos(\theta)}}
\end{equation}
we can express
\begin{equation}
{\mathcal J}_{\alpha,m}(x,y)=\sqrt{2\pi y}\int_0^{\pi/2} J_\alpha( x \sin(\theta) )J_{-1/2}\left( y\cos\theta \right) \sin^m(\theta) \cos^{1/2}\theta \,d\theta
\end{equation}
A similar integral is tabulated (G&R 6.683.2):
\begin{equation}
\int_0^{\pi/2} J_\nu( z_1 \sin\theta )J_{\mu}\left( z_2\cos\theta \right) \sin^{\nu+1}(\theta) \cos^{\mu+1}\theta \,d\theta=\frac{z_1^\nu z_2^\mu J_{\nu+\mu+1}\left( \sqrt{z_1^2+z_2^2} \right)}{\sqrt{\left( z_1^2+z_2^2 \right)^{\nu+\mu+1}}}
\end{equation}
when $\Re\nu>-1,\Re\mu>-1$. Choosing $\nu=\alpha$, $\mu=-1/2$, $z_1=x$, $z_2=y$, which requires $m=\alpha+1$, we obtain
\begin{equation}
{\mathcal J}_{\alpha,\alpha+1}(x,y)=\sqrt{2\pi}\frac{x^\alpha J_{\alpha+1/2}\left( \sqrt{x^2+y^2} \right)}{\left( x^2+y^2 \right)^{\alpha/2+1/4}}
\end{equation}
When $\alpha=0$, we find ${\mathcal J}_{0,1}(x,y)=2j_0\left( \sqrt{x^2+y^2} \right)$ as expected.
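The closed form can be checked numerically against the symmetrised integral (function names and sample values are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J_am(alpha, m, x, y):
    # 𝒥_{α,m}(x,y) in its real form: 2 ∫_0^{π/2} J_α(x sinθ) cos(y cosθ) sin^m θ dθ
    val, _ = quad(lambda t: jv(alpha, x*np.sin(t)) * np.cos(y*np.cos(t)) * np.sin(t)**m,
                  0, np.pi/2)
    return 2 * val

def J_closed(alpha, x, y):
    # Closed form for m = α+1: √(2π) x^α J_{α+1/2}(r) / r^{α+1/2}, with r = √(x²+y²),
    # since (x²+y²)^{α/2+1/4} = r^{α+1/2}.
    r = np.hypot(x, y)
    return np.sqrt(2*np.pi) * x**alpha * jv(alpha + 0.5, r) / r**(alpha + 0.5)
```

The $\alpha=0$ case reduces to $2j_0(r)=2\sin(r)/r$, as noted above.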
Other results may be obtained from the recurrence relations for the Bessel functions. For example, using
\begin{equation}
J_{\alpha}(z)=\frac{2(\alpha+1)}{z}J_{\alpha+1}(z)-J_{\alpha+2}(z)
\end{equation}
with $z=x\sin\theta$, we obtain
\begin{equation}
{\mathcal J}_{\alpha,\alpha+3}(x,y)=\frac{2(\alpha+1)}{x}{\mathcal J}_{\alpha+1,\alpha+2}(x,y)-{\mathcal J}_{\alpha+2,\alpha+3}(x,y)
\end{equation}
Both terms on the right-hand side have an explicit representation from the expression above.
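A quick numerical check of this recurrence-derived representation, using the closed form established above (names and sample values are mine):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J_am(alpha, m, x, y):
    # 𝒥_{α,m}(x,y) = 2 ∫_0^{π/2} J_α(x sinθ) cos(y cosθ) sin^m θ dθ
    val, _ = quad(lambda t: jv(alpha, x*np.sin(t)) * np.cos(y*np.cos(t)) * np.sin(t)**m,
                  0, np.pi/2)
    return 2 * val

def J_closed(alpha, x, y):
    # Closed form for m = α+1: √(2π) x^α J_{α+1/2}(r) / r^{α+1/2}, r = √(x²+y²)
    r = np.hypot(x, y)
    return np.sqrt(2*np.pi) * x**alpha * jv(alpha + 0.5, r) / r**(alpha + 0.5)

def J_recurrence(alpha, x, y):
    # 𝒥_{α,α+3} = 2(α+1)/x · 𝒥_{α+1,α+2} − 𝒥_{α+2,α+3}, both terms in closed form
    return 2*(alpha + 1)/x * J_closed(alpha + 1, x, y) - J_closed(alpha + 2, x, y)
```

Direct quadrature of $\mathcal J_{\alpha,\alpha+3}$ agrees with the recurrence combination.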
Best Answer
It is essentially just a change of variables.
Our target is $$ \tag{T}\label{1} J_{k - 1}\Bigl( \frac{4 \pi \sqrt{m n}}{c} \Bigr) = \frac{(4 \pi \sqrt{m n})^{k - 1}}{2^k \pi i c^{k - 1}} \int_{-\infty}^{(0+)} t^{-k} \exp\Bigl( t - \frac{4 \pi^2 m n}{c^2 t} \Bigr) \, d t. $$
I will leave it to you to keep track of the contours in the integrals as we go through the motions, but here's the idea. In the original integral $$\int_{-\infty+iy}^{\infty+iy}(cv)^{-k}e\!\left(\frac{-m}{c^2v}-nv\right)\, dv$$ let us first do the change of variables $v = t/n$ (with $d v = d t / n$) so as to make the lonely $t$ in $\eqref{1}$ appear. We get $$(c / n)^{-k} n^{-1} \int t^{-k}e\!\left(\frac{-m n}{c^2 t}-t\right)\, dt. $$
Since $\eqref{1}$ uses $\exp$ instead of $e(x) = \exp(2 \pi i x)$, take $v = 2 \pi i t$ (with $d v = 2 \pi i \, d t$) and we get $$ \begin{align*} &\frac{(c / n)^{-k} n^{-1}}{2 \pi i} \int_{}^{} (v / 2 \pi i)^{-k} \exp\Bigl( \frac{- (2 \pi i)^2 m n}{c^2 v} - v \Bigr) \, d v \\ &= \frac{(2 \pi i n)^{k - 1}}{c^k} \int_{}^{} v^{-k} \exp\Bigl( \frac{4 \pi^2 m n}{c^2 v} - v \Bigr) \, d v. \end{align*}$$ This is now very close; it just remains to switch $v = -t$ with $d v = - d t$, and we end up with $$ \frac{(- 2 \pi i n)^{k - 1}}{c^k} \int t^{-k} \exp\Bigl( t - \frac{4 \pi^2 m n}{c^2 t} \Bigr) \, d t. $$
There is obviously still a fair bit of bookkeeping to finish off Lemma 14.2 in Iwaniec-Kowalski, but this is essentially where the Bessel function comes from.
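Up to the substitutions above, \eqref{1} is Schläfli's Hankel-contour representation of $J_{k-1}$, and it can be sanity-checked numerically: for integer order the integrand $t^{-k}e^{t - z^2/(4t)}$ is single-valued, so the Hankel contour collapses to a circle around the essential singularity at $t=0$. A sketch with mpmath (function name and radius are my choices):

```python
import mpmath as mp

def hankel_bessel(nu, z, r=1.0):
    # Schläfli's representation for integer order nu:
    #   J_nu(z) = (z/2)^nu / (2πi) · ∮_{|t|=r} t^{-nu-1} exp(t - z²/(4t)) dt,
    # where the contour is a circle of radius r around t = 0.
    def f(theta):
        t = r * mp.exp(1j * theta)
        # dt = i t dθ along the circle
        return t**(-nu - 1) * mp.exp(t - z**2 / (4 * t)) * 1j * t
    return (z / 2)**nu * mp.quad(f, [0, 2 * mp.pi]) / (2j * mp.pi)
```

With $\nu = k-1$ and $z = 4\pi\sqrt{mn}/c$ this matches the prefactor in \eqref{1}, since $(z/2)^{k-1}/(2\pi i) = (4\pi\sqrt{mn})^{k-1}/(2^k\pi i\,c^{k-1})$.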