Your procedure cannot find all of the solutions that satisfy $u(x,0)=0$.
Because the PDE is in general inhomogeneous and only the single condition $u(x,0)=0$ is given, it is better to take a Laplace transform in $t$, writing the transform variable as $s^2$:
$\mathcal{L}_{t\to s^2}\{u_t\}-\mathcal{L}_{t\to s^2}\{u_{xx}\}=\mathcal{L}_{t\to s^2}\{g(x,t)\}$
$s^2U(x,s)-u(x,0)-U_{xx}(x,s)=G(x,s)$
$U_{xx}(x,s)-s^2U(x,s)=-G(x,s)$
$U(x,s)=C_1(s)e^{xs}+C_2(s)e^{-xs}-\dfrac{e^{xs}}{2s}\int_0^xG(\xi,s)e^{-\xi s}~d\xi+\dfrac{e^{-xs}}{2s}\int_0^xG(\xi,s)e^{\xi s}~d\xi$
$u(x,t)=\mathcal{L}^{-1}_{s^2\to t}\{C_1(s)e^{xs}\}+\mathcal{L}^{-1}_{s^2\to t}\{C_2(s)e^{-xs}\}-\mathcal{L}^{-1}_{s^2\to t}\left\{\dfrac{e^{xs}}{2s}\int_0^xG(\xi,s)e^{-\xi s}~d\xi\right\}+\mathcal{L}^{-1}_{s^2\to t}\left\{\dfrac{e^{-xs}}{2s}\int_0^xG(\xi,s)e^{\xi s}~d\xi\right\}$
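As a sanity check on the variation-of-parameters formula (writing the integration dummy variable as $\xi$), one can verify symbolically that the particular solution satisfies $U_{xx}-s^2U=-G$. The forcing term $G$ below is a hypothetical example chosen only for the check, not taken from the problem:

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
xi = sp.symbols('xi', positive=True)

# hypothetical sample forcing term: G(x, s) = x
G = lambda z: z

# particular solution from variation of parameters for U'' - s^2 U = -G
Up = (-sp.exp(x*s)/(2*s) * sp.integrate(G(xi) * sp.exp(-xi*s), (xi, 0, x))
      + sp.exp(-x*s)/(2*s) * sp.integrate(G(xi) * sp.exp(xi*s), (xi, 0, x)))

# the residual U'' - s^2 U + G should vanish identically
residual = sp.simplify(sp.diff(Up, x, 2) - s**2 * Up + G(x))
print(residual)  # 0
```

The same check works for any forcing for which the two definite integrals exist in closed form.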
In $\mathbb{R}^n$, the biharmonic operator takes on the form
$$
\nabla^4 = \Delta^2 = \left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right)\left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right).
$$
As such, $\Delta^2 u$ is a sum of fourth-order partial derivatives, containing a subset of repeated (Laplacian-like) derivatives and a subset of mixed terms. These mixed terms are the main complication in finding a solution.
\begin{align}
\Delta^2 u &= \left(\frac{\partial^2 }{\partial x_1^2} + \frac{\partial^2 }{\partial x_2^2} + \cdots + \frac{\partial^2 }{\partial x_n^2}\right)\left(\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2}\right) \\
&= u_{{xxxx}_1} + u_{{xx}_2{xx}_1} + \cdots + u_{{xx}_n{xx}_1} + u_{{xx}_1{xx}_2} + u_{{xxxx}_2} + \cdots + u_{{xx}_n{xx}_2} + \cdots \\
&+ u_{{xx}_1{xx}_n} + u_{{xx}_2{xx}_n} + \cdots + u_{{xxxx}_n} \\
&= \sum_{i = 1}^n \sum_{j = 1}^n u_{{xx}_i{xx}_j}.
\end{align}
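For $n=2$ this double-sum identity can be checked symbolically on a generic smooth function; a minimal sketch:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.Function('u')(x1, x2)  # a generic smooth function of two variables

# the Laplacian applied twice
lap = lambda v: sp.diff(v, x1, 2) + sp.diff(v, x2, 2)
biharmonic = sp.expand(lap(lap(u)))

# the double sum over all index pairs (i, j)
double_sum = sum(sp.diff(u, a, 2, b, 2) for a in (x1, x2) for b in (x1, x2))

print(sp.simplify(biharmonic - double_sum))  # 0
```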
We then have
$$
u_t + \sum_{i,j = 1}^n u_{{xx}_i{xx}_j} = 0 \ \ \ \text{ with } \ \ \ u(\mathbf{x},0) = f(\mathbf{x}), \ \ \ \mathbf{x} \in \mathbb{R}^n.
$$
Since $n$ is finite, interchanging the integral with the summation is fine. Throughout, $\mathrm{i}$ (with $\mathrm{i}^2 = -1$) denotes the imaginary unit, while the italic $i$ is reserved for indices.
\begin{align}
\mathscr{F}[u_t] + \sum_{i,j = 1}^{n} \mathscr{F}[u_{{xx}_i{xx}_j}] &= \int_{\mathbb{R}^n} u_t \, e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} + \sum_{i,j = 1}^{n} \int_{\mathbb{R}^n}u_{{xx}_i{xx}_j} e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} \\
&= \frac{\partial}{\partial t}\int_{\mathbb{R}^n} u(\mathbf{x},t) e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}} \, \mathrm{d}\mathbf{x} + \sum_{i,j = 1}^{n} \int_{\mathbb{R}^n} u(\mathbf{x},t) \frac{\partial^4 e^{\mathrm{i} \mathbf{k} \cdot \mathbf{x}}}{\partial x_i^2 \partial x_j^2} \, \mathrm{d}\mathbf{x} \\
&= \mathscr{F}[u]_t + \sum_{i,j = 1}^{n} k_i^2 k_j^2 \ \mathscr{F}[u] \\
&= 0.
\end{align}
So we have a first-order ordinary differential equation in $t$ whose solution is immediate:
$$
\mathscr{F}[u](\mathbf{k},t) = \mathscr{F}[f](\mathbf{k}) \prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}.
$$
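Since $\sum_{i,j=1}^n k_i^2 k_j^2 = \left(\sum_i k_i^2\right)^2 = |\mathbf{k}|^4$, the double product collapses to a single exponential. A quick numerical sanity check, using an arbitrary sample wave vector:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)   # a sample wave vector in R^3 (n = 3)
t = 0.7

# double product over all index pairs (i, j), as in the solution formula
prod_form = np.prod([np.exp(-k[i]**2 * k[j]**2 * t)
                     for i in range(3) for j in range(3)])

# collapsed form: sum_{i,j} k_i^2 k_j^2 = (k . k)^2 = |k|^4
vector_form = np.exp(-np.dot(k, k)**2 * t)

print(np.isclose(prod_form, vector_form))  # True
```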
This form is actually equivalent to your finding, but provides a means of writing the solution in terms of the given $f$ through the convolution theorem.
$$
u(\mathbf{x},t) = \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \, \mathscr{F}^{-1}_{\mathbf{k} \to \mathbf{s}}\left[\prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}\right] \mathrm{d}\mathbf{s}.
$$
By the convolution theorem again, we know
\begin{align}
\mathscr{F}^{-1}\left[\prod_{i,j = 1}^n e^{-k_i^2 k_j^2 t}\right] &= \mathscr{F}^{-1}\left[\prod_{i = 1}^n e^{-k_i^4 t} \prod_{1 \le i < j \le n} e^{-2 k_i^2 k_j^2 t}\right] \\
&= \mathscr{F}^{-1}\left[e^{-k_1^4 t}\right] * \cdots * \mathscr{F}^{-1}\left[e^{-k_n^4 t}\right] * \mathscr{F}^{-1}\left[e^{-2k_1^2 k_2^2 t}\right] * \cdots * \mathscr{F}^{-1}\left[e^{-2k_{n-1}^2 k_n^2 t}\right],
\end{align}
since each diagonal factor ($i = j$) occurs once in the double product, while each off-diagonal pair occurs twice (as $(i,j)$ and as $(j,i)$).
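The convolution theorem being invoked here can be illustrated in its discrete form: the inverse FFT of a pointwise product equals the circular convolution of the individual inverse FFTs. The spectra below are arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
A = rng.normal(size=N) + 1j * rng.normal(size=N)   # arbitrary spectra
B = rng.normal(size=N) + 1j * rng.normal(size=N)

# inverse FFT of the pointwise product ...
lhs = np.fft.ifft(A * B)

# ... equals the circular convolution of the individual inverse FFTs
a, b = np.fft.ifft(A), np.fft.ifft(B)
rhs = np.array([sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)])

print(np.allclose(lhs, rhs))  # True
```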
Alternatively, we can write out the inverse Fourier transform explicitly and, using the commutativity of convolution, say
\begin{align}
u(\mathbf{x},t) &= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} s_i k_i}{n} \right)} \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s}.
\end{align}
This result seems to be about the best one can do. Whichever form we use, extracting an explicit solution from this integral may be very difficult in general, so other solution methods may be preferable in practice. We can still check the solution, though, by inserting it into the differential equation and verifying the initial condition.
\begin{align}
u_t + \Delta^2 u &= \left(\frac{\partial}{\partial t} + \Delta^2\right) \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} s_i k_i }{n} \right)} \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left(\frac{\partial}{\partial t} + \Delta^2\right) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \left(-\sum_{i,j = 1}^n k_i^2 k_j^2 + \sum_{i,j = 1}^n k_i^2 k_j^2\right) \prod_{i,j = 1}^n e^{-\left(k_i^2 k_j^2 t + \frac{\mathrm{i} (x_i - s_i) k_i}{n} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&\equiv 0. \\
u(\mathbf{x},0) &= \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \mathscr{F}^{-1}_{\mathbf{k} \to \mathbf{s}}[1] \, \mathrm{d}\mathbf{s} \\
&= \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) \delta(\mathbf{s}) \, \mathrm{d}\mathbf{s} \\
&= f(\mathbf{x}).
\end{align}
If preferred, the solution can be expressed in full vector form with no indices:
\begin{align}
u(\mathbf{x},t) &= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} f(\mathbf{x} - \mathbf{s}) e^{-\left(|\mathbf{k}|^4 t + \mathrm{i}\, \mathbf{s} \cdot \mathbf{k} \right)} \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s} \\
&= \frac{1}{(2\pi)^{n}} \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} e^{-\left(|\mathbf{k}|^4t + \mathrm{i} \left(\mathbf{x} - \mathbf{s}\right) \cdot \mathbf{k} \right)} f(\mathbf{s}) \, \mathrm{d}\mathbf{k} \, \mathrm{d}\mathbf{s}.
\end{align}
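For $n=1$ the solution reduces to $u = \mathscr{F}^{-1}[e^{-k^4 t}\hat f\,]$, which can be checked numerically on a periodic grid. A sketch with an assumed Gaussian initial condition:

```python
import numpy as np

# spectral solution of u_t + u_xxxx = 0 on a periodic grid (n = 1 case)
N, L = 256, 20.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
f = np.exp(-x**2)                       # sample initial condition

def u(t):
    return np.real(np.fft.ifft(np.exp(-k**4 * t) * np.fft.fft(f)))

# initial condition is recovered at t = 0
print(np.allclose(u(0.0), f))           # True

# PDE residual u_t + u_xxxx, with u_t from a centered difference in t
t, dt = 0.5, 1e-5
ut = (u(t + dt) - u(t - dt)) / (2 * dt)
uxxxx = np.real(np.fft.ifft(k**4 * np.fft.fft(u(t))))
print(np.max(np.abs(ut + uxxxx)) < 1e-6)  # True
```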
Best Answer
For $t>0$ it is a classical solution. If $u_0$ is nice ($L^1$, or a compactly supported distribution, to give two examples), then $u(\cdot,t)$ is a mollified version of $u_0$. Mollification produces smooth functions, as long as $u_0$ is good enough for the convolution with the Gaussian to be well defined.
This mollification point of view shows that $u(x,t)$ is smooth in $x$ when $t>0$. It is also smooth in $t$; you can differentiate the formula directly or perhaps appeal to $\partial_tu=\Delta u$.
A general rule of thumb is that if a smooth function is a solution in a weak sense, then it is a solution in the classical or strong sense.
If you want smoothness down to $t=0$ or even for $t<0$, it's a different ball game and depends on the regularity of $u_0$.
In general, to see the regularity of a function produced by the Fourier transform, look at the decay of the Fourier transform. You found that $u(\cdot,t)$ is the inverse Fourier transform of $e^{-|k|^2t}\hat u_0(k)$. This means that the decay on the Fourier side is greatly improved as compared to $u_0$. When the Fourier transform decays faster than any power of $|k|$, the function is smooth.
The Fourier method in itself does not assume that the solution is a classical one. In principle, it is enough that $u$ solves the PDE in the sense of, say, Schwartz distributions. The way differentiation turns into multiplication by polynomials works for these distributions even though differentiation is defined weakly. Distribution-valued ODEs (like $\partial_t\hat u=-|k|^2\hat u$) require some care to set up and solve properly, but it can be done. As long as your function class turns weak derivatives into polynomial multiplication upon Fourier transform, the method is fine for weak solutions. Alternatively, if you can use test functions (in the definition of a weak solution) that resemble $e^{ik\cdot x}$, you can apply the definition directly. It may also happen that the Fourier transform method is not perfectly justified, but it still leads to a good guess for a solution, which can then be proven correct and unique by other means.
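This decay criterion is easy to observe numerically: multiplying by $e^{-|k|^2 t}$ crushes the slowly decaying spectrum of a discontinuous $u_0$ far below roundoff at high frequencies. The step-function initial data below is a hypothetical example:

```python
import numpy as np

N, L = 512, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)

u0 = np.where(np.abs(x - np.pi) < 1.0, 1.0, 0.0)  # discontinuous initial data
# heat flow at t = 0.1: multiply the spectrum by exp(-k^2 t)
u = np.real(np.fft.ifft(np.exp(-k**2 * 0.1) * np.fft.fft(u0)))

# high-frequency content: slow (1/k) decay for u0, faster than any power for u
hi = np.abs(k) > 50
print(np.abs(np.fft.fft(u0))[hi].max())   # order one
print(np.abs(np.fft.fft(u))[hi].max())    # essentially zero (roundoff level)
```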