The reason for this is that in physics we usually deal not only with the equation itself, but with an initial-boundary value problem. Some initial and boundary conditions lead to a well-posed problem, whereas others do not. When mathematicians call a variable "time", it usually means that natural initial conditions can be set in that variable. However, if you consider your example of the Laplace equation with a "time" variable, the initial conditions together with the equation do not constitute a well-posed problem.
For example, the initial value problem for the Laplace equation is not well posed, since a small perturbation of the "initial" conditions leads to an arbitrarily large change in the solution at finite $t$. You should look up Hadamard's example of an ill-posed problem. In general, the Laplace equation does not mix well with initial-value problems.
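For concreteness, one standard form of Hadamard's example on the half-plane $t > 0$ is
$$
u_{tt} + u_{xx} = 0, \qquad u(0, x) = 0, \qquad u_t(0, x) = \frac{\sin nx}{n},
$$
whose solution is $u_n(t, x) = \frac{\sin nx \,\sinh nt}{n^2}$. The initial data tend to zero uniformly as $n \to \infty$, yet for any fixed $t > 0$ the factor $\sinh nt / n^2$ grows without bound, so the solution does not depend continuously on the initial data.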
About the comment by @mkl314:
This question is right on the money and mathematically very deep. Note that the distinction between "initial" and "boundary" conditions comes from the physical interpretation of our equations. Nothing prohibits setting the initial conditions on an arbitrary manifold in space. However, for some initial conditions the problem will be well posed, for others ill posed. In general it is a very complicated question how to set the initial conditions for a given system of PDEs of order $n$ so as to guarantee existence, uniqueness, and continuous dependence on the initial conditions. The well-known examples of the heat, wave, and Laplace equations, as we treat them in undergraduate courses, come from physical intuition, not from carefully chosen mathematical conditions.
Assuming you're using the method of lines.
Let the original initial-boundary problem be
$$
u_t = u_{xx}\\
u(0, x) = f(x)\\
u(t, 0) = a(t)\\
u(t, 1) = b(t).
$$
Introduce a set of points $x_j = jh,\; j = 0,1,\dots, N,\;Nh = 1$.
Let $u_j(t) = u(t, x_j)$. Note that $u_0(t) = a(t)$ and $u_N(t) = b(t)$ are already known. The unknowns are $u_j(t),\; j = 1, 2, \dots, N-1$.
Then $u_{xx}(t, x_j)$ can be approximated as
$$
u_{xx}(t, x_j) \approx \frac{u_{j-1}(t) - 2u_j(t) + u_{j+1}(t)}{h^2}.
$$
Plugging that into the PDE gives
$$
u'_j(t) = \frac{u_{j-1}(t) - 2u_j(t) + u_{j+1}(t)}{h^2}
$$
a system of $N-1$ ODEs with initial conditions $u_j(0) = f(x_j)$. It can be solved with any Runge–Kutta method (provided the method is stable). For the explicit Euler method that would be
$$
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = \frac{u_{j-1}(t_n) - 2u_j(t_n) + u_{j+1}(t_n)}{h^2}, \; j = 1, 2, \dots, N-1\\
u_j(0) = f(x_j)\\
u_0(t) = a(t), \; u_N(t) = b(t).
$$
This well-known explicit scheme is stable when $\frac{\tau}{h^2} \leqslant \frac{1}{2}$.
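As an illustration, here is a minimal NumPy sketch of this explicit Euler scheme. The function name and default arguments are mine, not from the question; the step $\tau$ defaults to the stability limit $\tau = h^2/2$.

```python
import numpy as np

def heat_explicit_euler(f, a, b, N=50, T=0.1, tau=None):
    """Solve u_t = u_xx on [0, 1] with u(0, x) = f(x),
    u(t, 0) = a(t), u(t, 1) = b(t) by explicit Euler in time."""
    h = 1.0 / N
    if tau is None:
        tau = 0.5 * h**2              # stability requires tau / h^2 <= 1/2
    x = np.linspace(0.0, 1.0, N + 1)
    u = f(x)                          # u_j(0) = f(x_j)
    t = 0.0
    while t < T - 1e-12:
        unew = u.copy()
        # second difference over interior nodes j = 1, ..., N-1
        unew[1:-1] = u[1:-1] + tau / h**2 * (u[:-2] - 2*u[1:-1] + u[2:])
        t += tau
        unew[0], unew[-1] = a(t), b(t)   # boundary values at the new time
        u = unew
    return x, u
```

For instance, with $f(x) = \sin \pi x$ and zero boundary values, the numerical solution tracks the exact decay $e^{-\pi^2 t} \sin \pi x$.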
Edit. For those who asked how to use this approach with higher-order RK methods: let's take, for example, the RK2 method (explicit midpoint) with the following Butcher tableau:
$$
\begin{array}{c|cc}
0 & 0 & 0\\
1/2 & 1/2 & 0\\
\hline
& 0 & 1
\end{array}
$$
Applied to an ODE in the form $\mathbf u' = \mathbf F(t, \mathbf u)$, this method expands to
$$
\mathbf r = \mathbf F(t_n, \mathbf u_n)\\
\mathbf s = \mathbf F\left(t_n + \frac{\tau}{2}, \mathbf u_n + \frac{\tau}{2} \mathbf r\right)\\
\frac{\mathbf u_{n+1} - \mathbf u_n}{\tau} = \mathbf s
$$
I've used $\mathbf r$ and $\mathbf s$ instead of the common $\mathbf k_{1,2}$ to reduce the number of indices involved. Here $\mathbf r$ and $\mathbf s$ are intermediate values that depend solely on the values of $u_j$ at $t = t_n$.
For our case the right hand side of the ODE is
$$
F_j(t, u_1, u_2, \dots, u_{N-1}) = \frac{1}{h^2}\begin{cases}
u_0(t) - 2 u_1 + u_2, &j = 1\\
u_{j-1} - 2 u_j + u_{j+1}, &1 < j < N-1\\
u_{N-2} - 2 u_{N-1} + u_N(t), &j = N-1
\end{cases}
$$
Note that $u_0(t)$ and $u_N(t)$ are given explicitly as $a(t)$ and $b(t)$. This is why I have separated cases $j=1$ and $j = N-1$ in the definition of $F_j$.
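In code, the case split at $j = 1$ and $j = N-1$ disappears if the boundary values are appended before differencing. A sketch (the helper name `F` and its signature are mine):

```python
import numpy as np

def F(t, u_inner, a, b, h):
    """Semi-discrete right-hand side F_j(t, u_1, ..., u_{N-1}).

    u_inner holds the interior values u_1, ..., u_{N-1}; the boundary
    values u_0(t) = a(t) and u_N(t) = b(t) are supplied explicitly,
    which is exactly what makes j = 1 and j = N-1 special cases
    in the formula above."""
    padded = np.concatenate(([a(t)], u_inner, [b(t)]))   # u_0, ..., u_N
    return (padded[:-2] - 2*padded[1:-1] + padded[2:]) / h**2
```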
Putting this all together gives
$$
r_j = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}\\
s_j = \frac{1}{h^2}\begin{cases}
u_0\left(t_n + \frac{\tau}{2}\right) - 2 \left(u_{1}(t_n) + \frac{\tau}{2}r_{1}\right) + \left(u_2(t_n) + \frac{\tau}{2}r_2\right), &j = 1\\
\left(u_{j-1}(t_n) + \frac{\tau}{2}r_{j-1}\right) - 2 \left(u_j(t_n) + \frac{\tau}{2}r_j\right) + \left(u_{j+1}(t_n) + \frac{\tau}{2}r_{j+1}\right), &1 < j < N-1\\
\left(u_{N-2}(t_n) + \frac{\tau}{2}r_{N-2}\right) - 2 \left(u_{N-1}(t_n) + \frac{\tau}{2}r_{N-1}\right) + u_N\left(t_n + \frac{\tau}{2}\right), &j = N-1
\end{cases}\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = s_j
$$
Note that $r_j$ and $s_j$ are helper values used to step from $u_j(t_n)$ to $u_j(t_{n+1})$, and they are different for each time step. One may want to attribute the values $r_j$ and $s_j$ to some moment of time. A reasonable choice is to attribute each value $\mathbf k_i$ to the moment $t_n + \tau c_i$, where $c_i$ is the first column of the Butcher tableau:
$$
r_j(t_n) = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}\\
s_j\left(t_n + \frac{\tau}{2}\right) = \frac{1}{h^2} \times \\ \times \begin{cases}
u_0\left(t_n + \frac{\tau}{2}\right) - 2 \left(u_{1}(t_n) + \frac{\tau}{2}r_{1}(t_n)\right) + \left(u_2(t_n) + \frac{\tau}{2}r_2(t_n)\right), &j = 1\\
\left(u_{j-1}(t_n) + \frac{\tau}{2}r_{j-1}(t_n)\right) - 2 \left(u_j(t_n) + \frac{\tau}{2}r_j(t_n)\right) + \left(u_{j+1}(t_n) + \frac{\tau}{2}r_{j+1}(t_n)\right), &1 < j < N-1\\
\left(u_{N-2}(t_n) + \frac{\tau}{2}r_{N-2}(t_n)\right) - 2 \left(u_{N-1}(t_n) + \frac{\tau}{2}r_{N-1}(t_n)\right) + u_N\left(t_n + \frac{\tau}{2}\right), &j = N-1
\end{cases}\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = s_j\left(t_n + \frac{\tau}{2}\right)
$$
While this answers the question "How do I apply RK2 to this ODE?", I really don't like the final form. Instead, let's write the same method in a slightly different form:
$$
\frac{\mathbf u_{n+1/2} - \mathbf u_n}{\tau / 2} = \mathbf F(t_n, \mathbf u_n)\\
\frac{\mathbf u_{n+1} - \mathbf u_n}{\tau} = \mathbf F(t_n + \frac{\tau}{2}, \mathbf u_{n+1/2}).
$$
One can check that the two methods are the same by plugging in $\mathbf u_{n+1/2} = \mathbf u_n + \frac{\tau}{2} \mathbf r$. Just like $\mathbf r$ and $\mathbf s$, the value $\mathbf u_{n+1/2}$ is a helper used to perform a step from $\mathbf u_n$ to $\mathbf u_{n+1}$.
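Explicitly, substituting the half-step value into the second equation recovers the tableau form:
$$
\mathbf u_{n+1} = \mathbf u_n + \tau \,\mathbf F\left(t_n + \frac{\tau}{2},\; \mathbf u_n + \frac{\tau}{2}\mathbf F(t_n, \mathbf u_n)\right) = \mathbf u_n + \tau \mathbf s.
$$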
Applied to our ODE this method gives
$$
\frac{u_j(t_{n+1/2}) - u_j(t_n)}{\tau / 2} = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}, \quad j = 1, 2, \dots, N-1\\
u_0(t_{n+1/2}) = a\left(t_n + \frac{\tau}{2}\right), \quad
u_N(t_{n+1/2}) = b\left(t_n + \frac{\tau}{2}\right),\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = \frac{u_{j-1}(t_{n+1/2}) - 2 u_j(t_{n+1/2}) + u_{j+1}(t_{n+1/2})}{h^2}, \quad j = 1, 2, \dots, N-1\\
u_0(t_{n+1}) = a\left(t_n + \tau\right), \quad
u_N(t_{n+1}) = b\left(t_n + \tau\right).
$$
Though not strictly necessary, I have defined the values $u_0(t_{n+1/2})$ and $u_N(t_{n+1/2})$ to avoid treating $j=1$ and $j=N-1$ as separate cases.
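A NumPy sketch of this half-step form (the function name and defaults are mine; being explicit, the method still needs $\tau$ of order $h^2$ for stability):

```python
import numpy as np

def heat_rk2(f, a, b, N=50, T=0.1, tau=None):
    """Solve u_t = u_xx on [0, 1] by explicit midpoint (RK2)
    written as a half step followed by a full step."""
    h = 1.0 / N
    if tau is None:
        tau = 0.25 * h**2
    x = np.linspace(0.0, 1.0, N + 1)
    u = f(x)
    t = 0.0
    lap = lambda v: (v[:-2] - 2*v[1:-1] + v[2:]) / h**2   # second difference
    while t < T - 1e-12:
        # half step: u_{n+1/2}, with boundary values at t_n + tau/2
        uh = u.copy()
        uh[1:-1] = u[1:-1] + tau / 2 * lap(u)
        uh[0], uh[-1] = a(t + tau / 2), b(t + tau / 2)
        # full step from u_n using the slope evaluated at u_{n+1/2}
        unew = u.copy()
        unew[1:-1] = u[1:-1] + tau * lap(uh)
        t += tau
        unew[0], unew[-1] = a(t), b(t)
        u = unew
    return x, u
```

Defining the boundary values at $t_{n+1/2}$ before the full step is what lets the interior update treat all $j$ uniformly, mirroring the remark above.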
Yes, the same method works. First find the eigenfunctions of the corresponding homogeneous problem (let me denote them $e_i(x)$; these are the $X(x)$ in your solution, and there is a countable number of them). Then represent the non-homogeneous term (in your case $f(x,t)=1$) as a series in these eigenfunctions, $f(x,t)=\sum f_i(t)e_i(x)$, and look for the solution in the form $u(x,t)=\sum T_i(t)e_i(x)$ (just plug this expression into your equation). For the unknown functions $T_i(t)$ you'll get ordinary differential equations that take $f_i(t)$ into account.
Again, all the details are worked out with great care in the book I referenced above in my comment.
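As a concrete illustration (assuming, for the sake of example, the equation $u_t = u_{xx} + 1$ with homogeneous Dirichlet conditions on $(0,1)$, which may differ from the setup in the original question):
$$
e_i(x) = \sin i\pi x, \qquad
f_i = 2\int_0^1 1 \cdot \sin i\pi x \, dx = \frac{2\left(1 - (-1)^i\right)}{i\pi}, \qquad
T_i'(t) + (i\pi)^2 T_i(t) = f_i,
$$
so only the odd modes are forced, and each $T_i$ solves a first-order linear ODE driven by the constant $f_i$.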