Can someone represent the 4th order Runge-Kutta method graphically when iterating from $(x_k,t_k)$ to $(x_{k+1},t_{k+1})$?
[Math] Graphical analysis of the 4th order Runge-Kutta method
numerical-methods, ordinary-differential-equations, runge-kutta-methods
Related Solutions
You have an ODE $$ \frac{dx}{dt} = f(t,x) = (x+1)t $$ which separates to $$ \frac{dx}{x+1} = t\,dt\\ \log |x+1| + C = \frac{t^2}{2}\\ x = C_1 e^{t^2/2} - 1. $$ With $x(0) = 0$ this gives $C_1 = 1$ and $$ x(t) = e^{t^2/2} - 1. $$
Computing the first step we get $$\begin{aligned} k_1 &= hf(0, 0) = 0\\ k_2 &= hf\left(\frac{h}{2}, \frac{k_1}{2}\right) = h\frac{h}{2}\\ k_3 &= hf\left(\frac{h}{2}, \frac{k_2}{2}\right) = h\frac{h(h^2 + 4)}{8}\\ k_4 &= hf\left(h, k_3\right) = h\frac{h(8+4h^2+h^4)}{8}\\ x_{1} &= \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4) = \frac{1}{48} h^2 \left(h^4+6 h^2+24\right) = \\ &= \frac{h^2}{2} + \frac{h^4}{8} + O(h^6). \end{aligned} $$ And the exact solution is $$ x(h) = e^{h^2/2} - 1 = \frac{h^2}{2} + \frac{h^4}{8} + O(h^6). $$ This agrees with the method being of fourth order, so the local truncation error is $O(h^5)$.
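The step above is easy to check numerically; a minimal Python sketch (the function name is my own) reproducing the computation for $f(t,x)=(x+1)t$:

```python
import math

def rk4_step(f, t, x, h):
    """One classical RK4 step from (t, x) with step size h."""
    k1 = h * f(t, x)
    k2 = h * f(t + h / 2, x + k1 / 2)
    k3 = h * f(t + h / 2, x + k2 / 2)
    k4 = h * f(t + h, x + k3)
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, x: (x + 1) * t

h = 0.1
x1 = rk4_step(f, 0.0, 0.0, h)        # equals h^2 (h^4 + 6 h^2 + 24) / 48
exact = math.exp(h ** 2 / 2) - 1     # e^{h^2/2} - 1
print(x1, exact, abs(x1 - exact))
```

For this particular first step the two expansions even share the $h^6/48$ term, so the observed error is smaller than the generic $O(h^5)$ bound suggests.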
On the history
See Butcher: A History of Runge-Kutta Methods
In summary, at the end of the 19th century people (Nystroem, Runge, Heun, Kutta, ...) successfully experimented with generalizing the methods of numerical integration of functions in one variable, $$\int_a^bf(x)dx,$$ such as the Gauss, trapezoidal, midpoint and Simpson methods, to the solution of differential equations, which have the integral form $$y(x)=y_0+\int_{x_0}^x f(s,y(s))\,ds.$$
Carl Runge in 1895[1] came up with ("by some curious inductive process" - "auf einem eigentümlich induktiven Wege" wrote Heun 5 years later) the 4-stage 3rd order method \begin{align} k_1&=f(x,y)Δx,\\ k_2&=f(x+\tfrac12Δx,y+\tfrac12k_1)Δx\\ k_3&=f(x+Δx,y+k_1)Δx\\ k_4&=f(x+Δx,y+k_3)Δx\\ y_{+1}&=y+\tfrac16(k_1+4k_2+k_4) \end{align}
[1] "Über die numerische Auflösung von Differentialgleichungen", Math. Ann. 46, p. 167-178
Inspired by this, Karl Heun in 1900[2] explored methods of the type $$ \left.\begin{aligned}k^i_m &= f(x+ε^i_mΔx,y+ε^i_mk^{i+1}_m)Δx,~~ i=1,..,s,\\ k^{s+1}_m&=f(x,y)Δx\end{aligned}\right\},~~ m=1,..,n\\ y_{+1}=y+\sum_{m=1}^n\alpha_mf(x+ε^0_mΔx,y+ε^0_mk^1_m)Δx $$ He computed the order conditions by Taylor expansion and constructed methods of this type up to order 4; however, the only ones recognizable today as Runge-Kutta methods are the order-2 Heun trapezoidal method and the order-3 Heun method.
[2] "Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen", Z. f. Math. u. Phys. 45, p. 23-38
Wilhelm Kutta, in his publication one year later in 1901[3], considered Heun's scheme wasteful in the number of function evaluations and introduced what are today known as explicit Runge-Kutta methods, where each new function evaluation may use all previously computed values in the $y$ update. \begin{align} k_1&=f(x,y)Δx,\\ k_m&=f(x+c_mΔx, y+a_{m,1}k_1+...+a_{m,m-1}k_{m-1})Δx,&& m=2,...,s\\[0.5em] y_{+1}&=y+b_1k_1+...+b_sk_s \end{align} He computed order conditions and presented parametrized families and examples of methods up to order $5$. He especially noted the 3/8 method for its symmetry and small error term, and the "classical" RK4 method for its simplicity in always using only the last function value in the $y$ updates.
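Kutta's general explicit scheme can be driven directly by the coefficient arrays $(a_{m,j})$, $(b_m)$, $(c_m)$; a sketch in Python (the helper name is my own), using the well-known coefficients of the classical RK4 method and of Kutta's 3/8 rule on the sample ODE from above:

```python
import math

def explicit_rk_step(f, x, y, dx, A, b, c):
    """One explicit Runge-Kutta step: stage m uses only stages 1..m-1."""
    k = []
    for m in range(len(b)):
        ym = y + sum(A[m][j] * k[j] for j in range(m))
        k.append(dx * f(x + c[m] * dx, ym))
    return y + sum(bm * km for bm, km in zip(b, k))

# Classical RK4 coefficients
A_rk4 = [[], [1/2], [0, 1/2], [0, 0, 1]]
b_rk4 = [1/6, 1/3, 1/3, 1/6]
c_rk4 = [0, 1/2, 1/2, 1]

# Kutta's 3/8 rule
A_38 = [[], [1/3], [-1/3, 1], [1, -1, 1]]
b_38 = [1/8, 3/8, 3/8, 1/8]
c_38 = [0, 1/3, 2/3, 1]

f = lambda x, y: (y + 1) * x          # sample ODE from above
exact = math.exp(0.1 ** 2 / 2) - 1    # y(0.1) with y(0) = 0
for A, b, c in [(A_rk4, b_rk4, c_rk4), (A_38, b_38, c_38)]:
    y1 = explicit_rk_step(f, 0.0, 0.0, 0.1, A, b, c)
    print(abs(y1 - exact))
```

Note how the classical coefficients show the property Kutta singled out: in each row of $A$ only the last entry is nonzero, so each stage uses only the previous function value.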
[3] "Beitrag zur näherungsweisen Lösung totaler Differentialgleichungen", Z. f. Math. u. Phys. 46, p. 435-453
On the order dependence of the performance
The Euler method has global error order 1, which means that to reach an error level of $10^{-8}$ (on well-behaved example problems) you need a step size of $h=10^{-8}$. Over the interval $[0,1]$ this requires $10^8$ steps and $10^8$ function evaluations.
The classical RK4 method has error order 4. To get an error level of $10^{-8}$ you will thus need a step size of $h=0.01$. Over the interval $[0,1]$ this requires $100$ steps with $400$ function evaluations.
If you decrease the step by a factor of $10$ to $h=0.001$, the RK4 method needs $1000$ steps with $4000$ function evaluations to reach an error level of $10^{-12}$. This is still far less effort than the Euler example above, with a much better result.
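The trade-off can be illustrated with a small experiment on the sample ODE $x'=(x+1)t$ over $[0,1]$; a Python sketch (helper names are my own, and Euler is run here with the milder $h=10^{-3}$ rather than $10^{-8}$ so it finishes quickly):

```python
import math

def integrate(step, f, x0, t0, t1, n):
    """Apply a one-step method n times with fixed step h = (t1 - t0)/n."""
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x = step(f, t, x, h)
        t += h
    return x

def euler_step(f, t, x, h):          # 1 function evaluation per step
    return x + h * f(t, x)

def rk4_step(f, t, x, h):            # 4 function evaluations per step
    k1 = h * f(t, x)
    k2 = h * f(t + h / 2, x + k1 / 2)
    k3 = h * f(t + h / 2, x + k2 / 2)
    k4 = h * f(t + h, x + k3)
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, x: (x + 1) * t
exact = math.exp(0.5) - 1            # x(1) = e^{1/2} - 1

euler_err = abs(integrate(euler_step, f, 0.0, 0.0, 1.0, 1000) - exact)  # 1000 evaluations
rk4_err = abs(integrate(rk4_step, f, 0.0, 0.0, 1.0, 100) - exact)       # 400 evaluations
print(euler_err, rk4_err)
```

Even with $1000$ function evaluations Euler's error stays near $10^{-3}$, while RK4 with only $400$ evaluations is several orders of magnitude more accurate.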
Using double precision floating point numbers you will not get a much better result with any fixed-step method, since ever smaller step sizes let accumulated floating point noise dominate the error of the method.
Best Answer
You can look at the graphic here.
You can also combine that diagram with the graphics I added here.