In $\mathbb{R}^n$, the biharmonic operator takes on the form
$$
\nabla^4 = \Delta^2 = \left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right)\left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right).
$$
As such, $\Delta^2 u$ is a sum of fourth-order partial derivatives: Laplacian-like pure terms $u_{{xxxx}_i}$ and mixed terms $u_{{xx}_i{xx}_j}$ with $i \neq j$. These mixed terms are the main complication in finding a solution.
\begin{align}
\Delta^2 u &= \left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right)\left(\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2}\right) \\
&= u_{{xxxx}_1} + u_{{xx}_2{xx}_1} + \cdots + u_{{xx}_n{xx}_1} + u_{{xx}_1{xx}_2} + u_{{xxxx}_2} + \cdots + u_{{xx}_n{xx}_2} + \cdots \\
&+ u_{{xx}_1{xx}_n} + u_{{xx}_2{xx}_n} + \cdots + u_{xxxx_n} \\
&= \sum_{i = 1}^n \sum_{j = 1}^n u_{{xx}_i{xx}_j}.
\end{align}
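As a sanity check on this expansion, the $n = 2$ case can be verified symbolically. This is a side computation, not part of the derivation, and it assumes sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

# Apply the Laplacian twice: (d^2/dx^2 + d^2/dy^2)^2 u
lap = sp.diff(u, x, 2) + sp.diff(u, y, 2)
bilap = sp.diff(lap, x, 2) + sp.diff(lap, y, 2)

# Double-sum form: sum over i, j of d^4 u / (dx_i^2 dx_j^2)
coords = [x, y]
double_sum = sum(sp.diff(u, xi, 2, xj, 2) for xi in coords for xj in coords)

# The two expressions agree identically
assert sp.simplify(bilap - double_sum) == 0
```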
We have then
$$
u_t + \sum_{i,j = 1}^n u_{{xx}_i{xx}_j} = 0 \ \ \ \text{ with } \ \ \ u(\mathbf{x},0) = f(\mathbf{x}), \ \ \ \mathbf{x} \in \mathbb{R}^n.
$$
Now take the Fourier transform, with the convention $\mathscr{F}[u](\mathbf{k},t) = \int_{\mathbb{R}^n} u(\mathbf{x},t)\, e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x}$. Since $n$ is finite, interchanging the integral with the summation is fine. Throughout, the upright $\mathrm{i}$ (with $\mathrm{i}^2 = -1$) denotes the imaginary unit, while the italic $i$ is an index.
\begin{align}
\mathscr{F}[u_t] + \sum_{i,j = 1}^{n} \mathscr{F}[u_{{xx}_i{xx}_j}] &= \int_{\mathbb{R}^n} u_t \, e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} + \sum_{i,j = 1}^{n} \int_{\mathbb{R}^n}u_{{xx}_i{xx}_j} e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} \\
&= \frac{\partial}{\partial t}\int_{\mathbb{R}^n} u(\mathbf{x},t) e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} + \sum_{i,j = 1}^{n} \int_{\mathbb{R}^n} u(\mathbf{x},t) \frac{\partial^4 e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}}}{\partial x_i^2 \partial x_j^2} \, \mathrm{d}\mathbf{x} \\
&= \mathscr{F}[u]_t + \sum_{i,j = 1}^{n} k_i^2 k_j^2 \ \mathscr{F}[u] \\
&= 0.
\end{align}
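Note that the double sum factorizes as a perfect square,
$$
\sum_{i,j = 1}^n k_i^2 k_j^2 = \left(\sum_{i = 1}^n k_i^2\right)^2 = |\mathbf{k}|^4,
$$
so in Fourier space the equation is simply $\mathscr{F}[u]_t + |\mathbf{k}|^4 \, \mathscr{F}[u] = 0$.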
So, for each fixed $\mathbf{k}$, we have a first-order ordinary differential equation in $t$ whose solution is immediate:
$$
\mathscr{F}[u](\mathbf{k},t) = \mathscr{F}[f](\mathbf{k}) \prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}.
$$
This form is actually equivalent to your finding, but provides a means of writing the solution in terms of the given $f$ through the convolution theorem.
$$
u(\mathbf{x},t) = \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \, \mathscr{F}^{-1}_{\mathbf{k} \to \mathbf{s}}\left[\prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}\right] \mathrm{d}\mathbf{s}.
$$
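As an aside, the Fourier-space solution translates directly into a numerical spectral scheme. The sketch below is my own illustration (the grid size, domain, and Gaussian initial condition are all choices, with a periodic grid standing in for $\mathbb{R}$); it evolves an initial condition in one dimension by multiplying its FFT by $e^{-k^4 t}$:

```python
import numpy as np

# 1D spectral solution of u_t + u_xxxx = 0 on a periodic grid
# (a stand-in for the whole line; N, L and the Gaussian f are my choices).
N, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

f = np.exp(-x**2)                            # initial condition f(x)
t = 0.5
u_hat = np.fft.fft(f) * np.exp(-k**4 * t)    # F[u](k,t) = F[f](k) e^{-k^4 t}
u = np.real(np.fft.ifft(u_hat))

# The k = 0 mode is untouched, so the spatial mean is conserved...
assert np.isclose(u.mean(), f.mean())
# ...while every other mode decays, flattening the profile.
assert u.max() < f.max()
```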
By the convolution theorem again, we know
\begin{align}
\mathscr{F}^{-1}\left[\prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}\right] &= \mathscr{F}^{-1}\left[\prod_{i = 1}^n e^{-k_i^4 t} \prod_{1 \le i < j \le n} e^{-2 k_i^2 k_j^2 t}\right] \\
&= \mathscr{F}^{-1}\left[e^{-k_1^4 t}\right] * \cdots * \mathscr{F}^{-1}\left[e^{-k_n^4 t}\right] * \mathscr{F}^{-1}\left[e^{-2k_1^2 k_2^2 t}\right] * \cdots * \mathscr{F}^{-1}\left[e^{-2k_{n-1}^2 k_n^2 t}\right],
\end{align}
where the diagonal terms $i = j$ have been collected, and each off-diagonal pair $(i,j)$, $(j,i)$ has been combined into a single factor $e^{-2k_i^2 k_j^2 t}$.
Alternatively, we can write out the inverse Fourier transform explicitly and, using the commutativity of convolution, write (the factor of $1/n$ in the exponents compensates for the index $j$ repeating each factor $e^{-\mathrm{i} s_i k_i / n}$ exactly $n$ times, so the product reproduces $e^{-\mathrm{i}\, \mathbf{s} \cdot \mathbf{k}}$)
\begin{align}
u(\mathbf{x},t) &= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} s_i k_i}{n} \right)} \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s}.
\end{align}
This seems to be about the best one can do. Whichever form we use, extracting an explicit solution from this integral is difficult in general, and other solution methods may be preferable in practice. We can still verify the solution, though, by inserting it into the differential equation and checking the initial condition.
\begin{align}
u_t + \Delta^2 u &= \left(\frac{\partial}{\partial t} + \Delta^2\right) \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} s_i k_i }{n} \right)} \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left(\frac{\partial}{\partial t} + \Delta^2\right) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left(-\sum_{i,j = 1}^n k_i^2 k_j^2 + \sum_{i,j = 1}^n k_i^2 k_j^2\right) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&\equiv 0. \\
u(\mathbf{x},0) &= \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \mathscr{F}^{-1}_{\mathbf{k} \to \mathbf{s}}[1] \, \mathrm{d}\mathbf{s} \\
&= \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \delta(\mathbf{s}) \, \mathrm{d}\mathbf{s} \\
&= f(\mathbf{x}).
\end{align}
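The mode-by-mode content of this verification can also be confirmed symbolically in one dimension (a side check, assuming sympy): each Fourier mode $e^{-k^4 t} \cos(kx)$ solves $u_t + u_{xxxx} = 0$.

```python
import sympy as sp

# A single Fourier mode in one dimension: u = e^{-k^4 t} cos(k x).
x, t, k = sp.symbols('x t k', real=True)
u = sp.exp(-k**4 * t) * sp.cos(k * x)

# u_t = -k^4 u and u_xxxx = +k^4 u, so the residual vanishes identically.
residual = sp.diff(u, t) + sp.diff(u, x, 4)
assert sp.simplify(residual) == 0
```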
If preferred, the solution can be expressed in full vector form with no indices:
\begin{align}
u(\mathbf{x},t) &= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s})\, e^{-\left(|\mathbf{k}|^4 t + \mathrm{i}\, \mathbf{s} \cdot \mathbf{k} \right)} \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} e^{-\left(|\mathbf{k}|^4 t + \mathrm{i} \left(\mathbf{x} - \mathbf{s}\right) \cdot \mathbf{k} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s}.
\end{align}
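For what it's worth, the one-dimensional kernel $\mathscr{F}^{-1}[e^{-k^4 t}]$ can be evaluated by quadrature. The sketch below (my own check, assuming numpy/scipy) compares its value at the origin against the exact value $\Gamma(5/4)/\pi$ and confirms that, unlike the Gaussian heat kernel, it changes sign:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# 1D kernel K(x,t) = (1/(2 pi)) Int_R e^{-k^4 t} e^{-i k x} dk; the
# integrand is even in k, so only the cosine part contributes.
def kernel(x, t=1.0):
    val, _ = quad(lambda k: np.exp(-k**4 * t) * np.cos(k * x), 0.0, np.inf)
    return val / np.pi

# K(0,1) = (1/pi) Int_0^inf e^{-k^4} dk = Gamma(5/4) / pi.
assert abs(kernel(0.0) - gamma(1.25) / np.pi) < 1e-8

# Unlike the Gaussian heat kernel, K takes negative values.
assert min(kernel(x) for x in np.linspace(0.0, 12.0, 121)) < 0.0
```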
The inequality is not true because the LHS and RHS have different scaling behaviour ("units").
More precisely, define a dilation operator $(δ_α f)(x) = f(α^{-1} x)$. Then by change of variables $\lVert δ_α f \rVert_{L^1} = α \lVert f \rVert_{L^1}$ and $\widehat{δ_α f} = α\, δ_{α^{-1}} \hat{f}$.
Now consider replacing $m'$ by $δ_{α^{-1}} m'$ and $f$ by $δ_α f$. For the LHS we then get
\begin{align*}
\renewcommand{\d}{\mathop{}\!d}
&\left( \int_\mathbb{R} \left| \int_{\mathbb{R}} χ_{(t, \infty)}(ξ) m'(α t) α\hat{f}(α ξ) \d t \right|^p \d ξ\right)^{1/p} \\
&\quad= \left( \int_\mathbb{R} \left| \int_{\mathbb{R}} χ_{(α t, \infty)}(α ξ) m'(α t) \hat{f}(α ξ) \d (α t) \right|^p α^{-1} \d (α ξ) \right)^{1/p} \\
& \quad = α^{-1/p} \cdot \text{(original LHS)}
\end{align*}
Meanwhile, the RHS of your inequality is unchanged by this replacement. So, letting $α \to 0$, we can make the LHS arbitrarily large while keeping the RHS fixed; hence no such $C_p$ can exist (except for $p = \infty$, I suppose).
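The two dilation identities used above are easy to confirm numerically (a sketch assuming scipy; the transform convention $\hat{f}(ξ) = \int f(x) e^{-\mathrm{i} x ξ}\, dx$ is my assumption here, but any standard convention scales the same way):

```python
import numpy as np
from scipy.integrate import quad

# Dilation (delta_alpha f)(x) = f(x / alpha), tested on f(x) = e^{-x^2}.
# Transform convention assumed: fhat(xi) = Int f(x) e^{-i x xi} dx.
alpha = 3.0
f = lambda x: np.exp(-x**2)
f_alpha = lambda x: f(x / alpha)

# ||delta_alpha f||_{L^1} = alpha ||f||_{L^1}
n_scaled, _ = quad(f_alpha, -np.inf, np.inf)
n_orig, _ = quad(f, -np.inf, np.inf)
assert abs(n_scaled - alpha * n_orig) < 1e-6

# (delta_alpha f)^hat(xi) = alpha * fhat(alpha xi); f is even and real,
# so only the cosine part of the transform survives.
def fhat(g, xi):
    val, _ = quad(lambda s: g(s) * np.cos(s * xi), -np.inf, np.inf)
    return val

xi = 0.7
assert abs(fhat(f_alpha, xi) - alpha * fhat(f, alpha * xi)) < 1e-6
```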
Best Answer
It seems that the tool being used is the Minkowski integral inequality, together with Proposition 3.6 of Duoandikoetxea's *Fourier Analysis*.
At this point, we cannot apply this directly to our situation, because $m'(t)dt$ is a signed measure (some sets may have negative measure) rather than a nonnegative measure. However, note that one of the assumptions usually imposed on $m$ is that it is of bounded variation (a consequence of the finiteness of $\|m'\|_{L^1}$).
In that case, define $h_1(t)= m'(t)1_{m'(t) \geq 0}$ and $h_2(t) = m'(t)1_{m'(t) \leq 0}$, and call the measures $dm_1(t) = h_1(t)dt$ and $dm_2(t)= -h_2(t)dt$. It is clear that $\int_A m'(t)dt = m_1(A)-m_2(A)$ and $\int_A |m'(t)|dt = m_1(A) + m_2(A)$ for any measurable set $A$. This is a Jordan decomposition of $m'(t)dt$: two nonnegative measures to which we can apply the Minkowski integral inequality.
The way to start is to relate the LHS, which involves $d\nu(t) = m'(t)dt$, to the corresponding quantities for $dm_1$ and $dm_2$ separately. For this, note that $$ \int_{A} m'(t)dt = m_1(A)-m_2(A) \implies \left|\int_{A} m'(t)dt\right| \leq |m_1(A)| + |m_2(A)| $$ and therefore $$ \left|\int_{A} m'(t)dt\right|^p \leq \left(|m_1(A)| + |m_2(A)|\right)^p \leq 2^p (|m_1(A)|^p + |m_2(A)|^p) $$
for any measurable set $A$. The last inequality follows from the two elementary bounds $$ x + y \leq 2\max(x,y) \quad ; \quad \max(x^p,y^p) \leq x^p+y^p $$ for all positive $x,y$. Once we see this, we can split any $g$ that is integrable with respect to both $dm_1$ and $dm_2$ into its positive and negative parts, approximate each part by indicators, and obtain $$ \left|\int_{\mathbb R} g(t)m'(t)dt\right|^p \leq 2^p \left(\left|\int_{\Bbb R} g(t)dm_1(t)\right|^p + \left|\int_{\Bbb R} g(t)dm_2(t)\right|^p\right) $$
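The elementary bound $(x+y)^p \leq 2^p(x^p + y^p)$ can be spot-checked numerically (random sampling only, as a sanity sketch; the inequality itself is proved above):

```python
import random

# Spot-check of (x + y)^p <= 2^p (x^p + y^p) for positive x, y.
random.seed(0)
p = 2.7
for _ in range(1000):
    x, y = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    assert (x + y) ** p <= 2 ** p * (x ** p + y ** p)
```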
We get by using $g(t) = \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)$ for any fixed $\xi$ that $$ \left|\int_{\mathbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)m'(t)dt\right|^p \leq 2^p \left(\left|\int_{\Bbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)dm_1(t)\right|^p + \left|\int_{\Bbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)dm_2(t)\right|^p\right) $$ Integrating w.r.t $d\xi$, we have $$ \int_{\mathbb R} \left|\int_{\mathbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)m'(t)dt\right|^pd \xi\\ \leq 2^p\left(\int_{\mathbb R}\left|\int_{\Bbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)dm_1(t)\right|^p d \xi + \int_{\mathbb R}\left|\int_{\Bbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)dm_2(t)\right|^p d \xi\right) \tag{1} $$
To apply the Minkowski integral inequality, write for $i=1,2$, $$ \int_{\mathbb{R}}\left|\int_\mathbb{R}\underbrace{{\mathcal{F}^{-1}}(\chi_{(t,\infty)}\hat{f})(\xi)}_{F(\xi,t)}\underbrace{dm_i(t)}_{d \nu(t)}\right|^p\underbrace{d\xi}_{d \mu(\xi)} $$ where we write $F(\xi,t)$ for the function in the inequality to avoid a clash with the $f$ already in play. Clearly $d \xi$ is a sigma-finite measure on $\mathbb R$, and $m_i(\mathbb R) \leq \|m'\|_{L^1}$, so the $m_i$ are finite measures. Finally, assuming the right-hand side below is finite, the Minkowski integral inequality yields $$ \int_{\mathbb{R}}\left|\int_\mathbb{R}{\mathcal{F}^{-1}}(\chi_{(t,\infty)}\hat{f})(\xi)\,dm_i(t)\right|^p d\xi \leq \left(\int_{\Bbb R}\left(\int_{\Bbb R}|{\mathcal{F}^{-1}}(\chi_{(t,\infty)}\hat{f})(\xi)|^p d\xi\right)^{\frac 1p} dm_i(t)\right)^{p} \tag{2} $$
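A discrete analogue of the Minkowski integral inequality makes the mechanism concrete; the sketch below (my own illustration, assuming numpy) replaces the $t$-integral against $dm_i$ by a finite sum with nonnegative weights:

```python
import numpy as np

# Discrete analogue of the Minkowski integral inequality:
#   || sum_t F(., t) nu_t ||_p <= sum_t || F(., t) ||_p nu_t
# for nonnegative weights nu_t, checked on random (possibly signed) data.
rng = np.random.default_rng(0)
p = 3.0
F = rng.standard_normal((50, 8))   # columns play the role of t
nu = rng.random(8)                 # nonnegative weights, like dm_i(t)

lhs = np.linalg.norm(F @ nu, ord=p)
rhs = np.sum(np.linalg.norm(F, ord=p, axis=0) * nu)
assert lhs <= rhs + 1e-12
```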
We now use Proposition 3.6 of the book (page 59). Recall the operator $S_{(a,b)}$ defined there: it is the operator associated with the multiplier $\chi_{(a,b)}$, so $$ \mathcal F(S_{(t,\infty)}f) (\xi) = (\chi_{(t,\infty)}\hat{f})(\xi). $$ Inverting the Fourier transform, $$ S_{(t,\infty)}f = \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f}). $$
This means that we can write the RHS of $(2)$ as $$ \left(\int_{\Bbb R}\left(\int_{\Bbb R}|{\mathcal{F}^{-1}}(\chi_{(t,\infty)}\hat{f})(\xi)|^p d\xi\right)^{\frac 1p} dm_i(t) \right)^p = \left(\int_{\mathbb R} \|S_{(t,\infty)}f\|_p \, dm_i(t)\right)^p $$
By Proposition 3.6, there exists a constant $C_p$ independent of $t$ such that $$ \|S_{(t,\infty)}f\|_p \leq C_p \|f\|_p \quad \text{for all } -\infty \leq t \leq \infty. $$
Substituting this above and pulling the constant $C_p\|f\|_p$ out of the integral gives $$ \left(\int_{\mathbb R} \|S_{(t,\infty)}f\|_p \, dm_i(t)\right)^p \leq \left(C_p \|f\|_p \int_{\mathbb R} dm_i(t)\right)^p \leq C_p^p \|f\|^p_p\|m'\|^p_1 $$
for $i=1,2$. Combining $(2)$ and $(1)$ now gives $$ \int_{\mathbb R} \left|\int_{\mathbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)m'(t)dt\right|^pd \xi \leq 2^p\left(2C_p^p \|f\|^p_p\|m'\|^p_1\right) = 2^{p+1} C_p^p \|f\|_p^p \|m'\|_1^p $$
Taking the $p$th root gives $$ \left(\int_{\mathbb R} \left|\int_{\mathbb R} \mathcal F^{-1}(\chi_{(t,\infty)}\hat{f})(\xi)m'(t)dt\right|^pd \xi \right)^{\frac 1p} \leq 2^{\frac{p+1}p}C_p \|f\|_p\|m'\|_1 $$
where the constant term depends only on $p$, as desired.