Let us assume that the function $g$ is defined on an interval $(a,b)$, that $g(x)\in(a,b)$ for all $x$ in that interval, and that there is a constant $c<1$ such that for all $x,y \in (a,b)$,
\begin{equation}
\left|g(y)-g(x)\right| < c |x-y|. \tag{1}
\end{equation}
If $g$ is differentiable, condition (1) follows from $|g'(x)|\le c$ for all $x\in(a,b)$ by the mean value theorem; note the absolute value, since $g'(x)<c$ alone is not sufficient.
The fixed point iteration is defined by $x_{k+1}=g(x_k)$, where $x_0$ is an arbitrarily chosen starting point in $(a,b)$.
Let us assume that the function has a fixed point at $\hat{x}\in(a,b)$, that is $\hat{x}=g(\hat{x})$.
Now at step $k$, the absolute error of the current iterate is $e_k = |x_k-\hat{x}|$. We get
$$
e_{k+1} = |x_{k+1}-\hat{x}| = |g(x_k)-g(\hat{x})| < c|x_k - \hat{x}| = c e_k.
$$
Therefore, the sequence $(e_k)_k$ is nonnegative and bounded above by the sequence $(c^k e_0)_k$, which converges to $0$; hence $\lim_{k\to\infty}e_k=0$, i.e. the fixed point iteration converges to $\hat{x}$.
For general $g:\mathbb{R}\to\mathbb{R}$, we can make the following observations:
If (1) holds on all of $\mathbb{R}$, we can replace $(a,b)$ with $\mathbb{R}$ in the proof above. One can also see that $g$ has exactly one fixed point in that case: if $g$ is differentiable, the derivative of $g(x)-x$ is at most $c-1<0$, so $g(x)-x$ is strictly decreasing and has exactly one zero; if $g$ is not differentiable, a similar argument still holds.
If (1) does not hold on all of $\mathbb{R}$ but holds on an interval $(a,b)$ containing a fixed point $\hat{x}$, then for every $x\in(a,b)$ we have $|g(x)-\hat{x}| < c\,|x-\hat{x}|$, so each iterate is strictly closer to $\hat{x}$ than the previous one. In particular, the symmetric subinterval $(\hat{x}-\delta,\, \hat{x}+\delta)$ with $\delta = \min(\hat{x}-a,\, b-\hat{x})$ is mapped into itself, and the fixed point iteration converges to $\hat{x}$ whenever $x_0$ is chosen in this subinterval.
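As a small illustration (my own example, not from the question): $g(x)=\cos x$ is a contraction near its fixed point, since $|g'(x)|=|\sin x|<1$ there, so iterating it converges. A minimal sketch:

```python
# Hypothetical sketch: fixed point iteration x_{k+1} = g(x_k) for g = cos,
# which is a contraction near its fixed point (|g'(x)| = |sin x| < 1 there).
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k) until successive iterates differ by < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = fixed_point(math.cos, 1.0)
# root satisfies cos(root) = root (approximately 0.739085).
```

The errors $e_k$ shrink roughly by the factor $c \approx |\sin(\hat{x})| \approx 0.674$ per step, matching the bound $e_{k+1} < c\,e_k$.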
Assuming you're using the method of lines.
Let the original initial-boundary problem be
$$
u_t = u_{xx}\\
u(0, x) = f(x)\\
u(t, 0) = a(t)\\
u(t, 1) = b(t).
$$
Introduce a set of points $x_j = jh,\; j = 0,1,\dots, N,\;Nh = 1$.
Let $u_j(t) = u(t, x_j)$. Note that $u_0(t) = a(t)$ and $u_N(t) = b(t)$ are already known; the unknowns are $u_j(t),\; j = 1, 2, \dots, N-1$.
Then $u_{xx}(t, x_j)$ can be approximated as
$$
u_{xx}(t, x_j) \approx \frac{u_{j-1}(t) - 2u_j(t) + u_{j+1}(t)}{h^2}.
$$
Plugging this into the PDE gives
$$
u'_j(t) = \frac{u_{j-1}(t) - 2u_j(t) + u_{j+1}(t)}{h^2}
$$
a system of $N-1$ ODEs with initial conditions $u_j(0) = f(x_j)$. This system can be solved with any RK method (provided the method is stable for it). For the explicit Euler method the scheme is
$$
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = \frac{u_{j-1}(t_n) - 2u_j(t_n) + u_{j+1}(t_n)}{h^2}, \; j = 1, 2, \dots, N-1\\
u_j(0) = f(x_j)\\
u_0(t) = a(t), \; u_N(t) = b(t).
$$
This well-known explicit scheme is stable when $\frac{\tau}{h^2} \leqslant \frac{1}{2}$.
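A minimal sketch of this scheme in Python; the problem data `f`, `a`, `b` below (a sine initial condition with zero boundary values, so the exact solution is $e^{-\pi^2 t}\sin \pi x$) are my own choice for testing, not part of the question:

```python
# Sketch of the explicit Euler / centered-difference scheme above.
import numpy as np

def heat_explicit_euler(f, a, b, N=20, T=0.1, tau=None):
    """Solve u_t = u_xx on (0,1) with u(0,x)=f(x), u(t,0)=a(t), u(t,1)=b(t)."""
    h = 1.0 / N
    if tau is None:
        tau = 0.5 * h**2                 # stability limit tau/h^2 <= 1/2
    x = np.linspace(0.0, 1.0, N + 1)
    u = f(x)
    t = 0.0
    for _ in range(int(round(T / tau))):
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + tau / h**2 * (u[:-2] - 2*u[1:-1] + u[2:])
        t += tau
        u_new[0], u_new[-1] = a(t), b(t)  # boundary values are known exactly
        u = u_new
    return x, u

# Test problem: u(t,x) = exp(-pi^2 t) sin(pi x) is an exact solution.
x, u = heat_explicit_euler(lambda x: np.sin(np.pi * x),
                           lambda t: 0.0, lambda t: 0.0, N=20, T=0.1)
```

With $\tau = h^2/2$ the scheme sits right at the stability boundary and the numerical solution stays close to the exact one.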
Edit. For those who asked how to use this method with higher-order RK methods: let's take as an example an RK2 method (explicit midpoint) with the following Butcher tableau:
$$
\begin{array}{c|cc}
0 & 0 & 0\\
1/2 & 1/2 & 0\\
\hline
& 0 & 1
\end{array}
$$
Applied to an ODE in the form $\mathbf u' = \mathbf F(t, \mathbf u)$, this method expands to
$$
\mathbf r = \mathbf F(t_n, \mathbf u_n)\\
\mathbf s = \mathbf F\left(t_n + \frac{\tau}{2}, \mathbf u_n + \frac{\tau}{2} \mathbf r\right)\\
\frac{\mathbf u_{n+1} - \mathbf u_n}{\tau} = \mathbf s
$$
I've used $\mathbf r$ and $\mathbf s$ instead of the common $\mathbf k_{1,2}$ to reduce the number of indices involved. Here $\mathbf r$ and $\mathbf s$ are intermediate values that depend only on the values of $u_j$ at $t = t_n$.
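The generic step above is short enough to sketch directly; the test problem $u'=-u$ at the end is my own addition to check the step:

```python
# One explicit-midpoint (RK2) step for u' = F(t, u), following the
# r / s notation above; F, t_n, u_n, tau are placeholders.
def rk2_midpoint_step(F, t_n, u_n, tau):
    """Advance u' = F(t, u) from t_n to t_n + tau with explicit midpoint."""
    r = F(t_n, u_n)                          # stage 1 (k_1)
    s = F(t_n + tau / 2, u_n + tau / 2 * r)  # stage 2 (k_2)
    return u_n + tau * s

# Example on u' = -u, u(0) = 1, whose exact solution is exp(-t):
u, t, tau = 1.0, 0.0, 0.01
for _ in range(100):
    u = rk2_midpoint_step(lambda t, v: -v, t, u, tau)
    t += tau
# u now approximates exp(-1) with O(tau^2) error.
```

Since the stages are evaluated only at known data, the step is fully explicit; no linear system needs to be solved.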
For our case the right-hand side of the ODE is
$$
F_j(t, u_1, u_2, \dots, u_{N-1}) = \frac{1}{h^2}\begin{cases}
u_0(t) - 2 u_1 + u_2, &j = 1\\
u_{j-1} - 2 u_j + u_{j+1}, &1 < j < N-1\\
u_{N-2} - 2 u_{N-1} + u_N(t), &j = N-1
\end{cases}
$$
Note that $u_0(t)$ and $u_N(t)$ are given explicitly as $a(t)$ and $b(t)$. This is why I have separated cases $j=1$ and $j = N-1$ in the definition of $F_j$.
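This right-hand side vectorizes naturally; a sketch (the helper name `make_F` is my own, and the code assumes $N \ge 3$ so the three cases are distinct):

```python
# Sketch of the semi-discrete right-hand side F_j above; a and b are the
# boundary functions from the problem statement. Assumes N >= 3.
import numpy as np

def make_F(a, b, N):
    h = 1.0 / N
    def F(t, u):
        """u holds the interior values u_1..u_{N-1}; returns F_1..F_{N-1}."""
        out = np.empty_like(u)
        out[0] = a(t) - 2*u[0] + u[1]            # j = 1 uses u_0(t) = a(t)
        out[1:-1] = u[:-2] - 2*u[1:-1] + u[2:]   # interior j
        out[-1] = u[-2] - 2*u[-1] + b(t)         # j = N-1 uses u_N(t) = b(t)
        return out / h**2
    return F
```

As a quick consistency check, any function that is linear in $x$ has zero second difference, so $F$ applied to such data returns zeros.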
Putting this all together gives
$$
r_j = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}\\
s_j = \frac{1}{h^2}\begin{cases}
u_0\left(t_n + \frac{\tau}{2}\right) - 2 \left(u_{1}(t_n) + \frac{\tau}{2}r_{1}\right) + \left(u_2(t_n) + \frac{\tau}{2}r_2\right), &j = 1\\
\left(u_{j-1}(t_n) + \frac{\tau}{2}r_{j-1}\right) - 2 \left(u_j(t_n) + \frac{\tau}{2}r_j\right) + \left(u_{j+1}(t_n) + \frac{\tau}{2}r_{j+1}\right), &1 < j < N-1\\
\left(u_{N-2}(t_n) + \frac{\tau}{2}r_{N-2}\right) - 2 \left(u_{N-1}(t_n) + \frac{\tau}{2}r_{N-1}\right) + u_N\left(t_n + \frac{\tau}{2}\right), &j = N-1
\end{cases}\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = s_j
$$
Note that $r_j$ and $s_j$ are helper values used to step from $u_j(t_n)$ to $u_j(t_{n+1})$, and they are recomputed at each time step. One may want to attribute the values $r_j$ and $s_j$ to particular moments of time. A reasonable choice is to associate each stage $\mathbf k_i$ with the moment $t_n + c_i\tau$, where $c_i$ is the $i$-th entry of the first column of the Butcher tableau.
$$
r_j(t_n) = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}\\
s_j\left(t_n + \frac{\tau}{2}\right) = \frac{1}{h^2} \times \\ \times \begin{cases}
u_0\left(t_n + \frac{\tau}{2}\right) - 2 \left(u_{1}(t_n) + \frac{\tau}{2}r_{1}(t_n)\right) + \left(u_2(t_n) + \frac{\tau}{2}r_2(t_n)\right), &j = 1\\
\left(u_{j-1}(t_n) + \frac{\tau}{2}r_{j-1}(t_n)\right) - 2 \left(u_j(t_n) + \frac{\tau}{2}r_j(t_n)\right) + \left(u_{j+1}(t_n) + \frac{\tau}{2}r_{j+1}(t_n)\right), &1 < j < N-1\\
\left(u_{N-2}(t_n) + \frac{\tau}{2}r_{N-2}(t_n)\right) - 2 \left(u_{N-1}(t_n) + \frac{\tau}{2}r_{N-1}(t_n)\right) + u_N\left(t_n + \frac{\tau}{2}\right), &j = N-1
\end{cases}\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = s_j\left(t_n + \frac{\tau}{2}\right)
$$
While this is the answer to the question "how to apply RK2 to this ODE", I really don't like the final form. Instead, let's write the same method in a slightly different form:
$$
\frac{\mathbf u_{n+1/2} - \mathbf u_n}{\tau / 2} = \mathbf F(t_n, \mathbf u_n)\\
\frac{\mathbf u_{n+1} - \mathbf u_n}{\tau} = \mathbf F(t_n + \frac{\tau}{2}, \mathbf u_{n+1/2}).
$$
One can check that the two forms are the same by substituting $\mathbf u_{n+1/2} = \mathbf u_n + \frac{\tau}{2} \mathbf r$. Just like $\mathbf r$ and $\mathbf s$, $\mathbf u_{n+1/2}$ is a helper value used to perform the step from $\mathbf u_n$ to $\mathbf u_{n+1}$.
Applied to our ODE this method gives
$$
\frac{u_j(t_{n+1/2}) - u_j(t_n)}{\tau / 2} = \frac{u_{j-1}(t_n) - 2 u_j(t_n) + u_{j+1}(t_n)}{h^2}, \quad j = 1, 2, \dots, N-1\\
u_0(t_{n+1/2}) = a\left(t_n + \frac{\tau}{2}\right), \quad
u_N(t_{n+1/2}) = b\left(t_n + \frac{\tau}{2}\right),\\
\frac{u_j(t_{n+1}) - u_j(t_n)}{\tau} = \frac{u_{j-1}(t_{n+1/2}) - 2 u_j(t_{n+1/2}) + u_{j+1}(t_{n+1/2})}{h^2}, \quad j = 1, 2, \dots, N-1\\
u_0(t_{n+1}) = a\left(t_n + \tau\right), \quad
u_N(t_{n+1}) = b\left(t_n + \tau\right).
$$
Though not strictly necessary, I have defined the values $u_0(t_{n+1/2})$ and $u_N(t_{n+1/2})$ to avoid treating $j=1$ and $j=N-1$ as separate cases.
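The two-half-step form translates almost line for line into code. A sketch under the same assumed test data as before (sine initial condition, zero boundaries, exact solution $e^{-\pi^2 t}\sin \pi x$); the function name and parameters are my own:

```python
# Sketch of the half-step form of RK2 (explicit midpoint) for u_t = u_xx:
# centered differences in space, two explicit sweeps per time step.
import numpy as np

def heat_rk2(f, a, b, N=20, T=0.1, tau=None):
    h = 1.0 / N
    if tau is None:
        tau = 0.25 * h**2            # well inside the stability region
    x = np.linspace(0.0, 1.0, N + 1)
    u = f(x)
    t = 0.0
    for _ in range(int(round(T / tau))):
        # half step: advance to t_n + tau/2 with explicit Euler
        half = u.copy()
        half[1:-1] = u[1:-1] + (tau/2) / h**2 * (u[:-2] - 2*u[1:-1] + u[2:])
        half[0], half[-1] = a(t + tau/2), b(t + tau/2)
        # full step: evaluate the right-hand side at the half-step values
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + tau / h**2 * (half[:-2] - 2*half[1:-1] + half[2:])
        t += tau
        u_new[0], u_new[-1] = a(t), b(t)
        u = u_new
    return x, u

x, u = heat_rk2(lambda x: np.sin(np.pi * x), lambda t: 0.0, lambda t: 0.0)
```

Setting the boundary entries of `half` plays exactly the role of defining $u_0(t_{n+1/2})$ and $u_N(t_{n+1/2})$: the interior update can then use one uniform stencil.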
If you want to solve the equation $$g(x)=x$$ numerically by choosing some starting point and iterating $g$, then in order for a solution $x_0$ of this equation to attract nearby points, it must satisfy $$|g'(x_0)|\leq 1$$ (although if $|g'(x_0)|=1$, it still might not attract nearby points). The reason is that we require that if we choose some $y$ near $x_0$, then $g(y)$ will be even closer to $x_0$ than $y$ was. Taking the linear approximation to $g$ near $x_0$ (and noting that $g(x_0)=x_0$), we have that $g(y)$ is approximately $$x_0+g'(x_0)(y-x_0),$$ whose distance from $x_0$ is $$|g'(x_0)(y-x_0)|.$$ So the factor $g'(x_0)$ scales $y$ towards $x_0$: if its absolute value is less than one, it pulls $y$ closer; if it is more than one, it pushes $y$ further away.
It's worth noting that other behaviors (like cycles) may occur, and that this becomes a deep topic of study quickly.
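A quick numeric illustration of attracting versus repelling fixed points (my own example): $g(x)=x^2$ has fixed points $0$, where $g'(0)=0$ (attracting), and $1$, where $g'(1)=2$ (repelling).

```python
# g(x) = x**2: iterates near 0 are pulled in, iterates near 1 are pushed away.
def iterate(g, x, n):
    for _ in range(n):
        x = g(x)
    return x

g = lambda x: x * x
near_zero = iterate(g, 0.5, 10)    # collapses toward the attracting point 0
near_one = iterate(g, 1.001, 10)   # drifts away from the repelling point 1
```

Starting at $0.5$, ten iterations drive the value essentially to zero; starting just above $1$, the same ten iterations push the value well past $2$.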