You're just hitting machine precision. Check the slope of the log-log error plot only for error values larger than, say, $10^{-12}$: for smaller errors you will see rounding effects from your computer's double-precision arithmetic, and from the finite number of digits in the coefficients of the Butcher tableau you use (quite often, textbooks don't even give the first 16 digits).
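As a sketch of such a slope check (Python; the test problem $y' = -y$ and all names here are illustrative assumptions, not taken from your code), estimate the observed order from errors at a fixed end time:

```python
import numpy as np

# Hedged sketch: estimate the observed order of RK4 on y' = -y, y(0) = 1,
# whose exact solution at t = 1 is exp(-1).
def rk4_solve(h):
    f = lambda y: -y
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        k1 = f(y)
        k2 = f(y + h / 2 * k1)
        k3 = f(y + h / 2 * k2)
        k4 = f(y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

hs = np.array([0.1, 0.05, 0.025, 0.0125])
errs = np.array([abs(rk4_solve(h) - np.exp(-1.0)) for h in hs])
# Slope of log(err) vs log(h): close to 4 while the errors stay
# well above the ~1e-12 roundoff floor
slopes = np.diff(np.log(errs)) / np.diff(np.log(hs))
print(slopes)
```

Once the errors approach $10^{-12}$ or so, these slopes degrade, which is exactly the effect described above.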
You have a system $\dot{x} = f(x)$ with
$$
f(x) = A x = \begin{pmatrix}
0 & 1 & 0 \\
0 & 0 & 1 \\
-10 & -5 & -2
\end{pmatrix} \begin{pmatrix}
x_1 \\
x_2 \\
x_3
\end{pmatrix} = \begin{pmatrix}
x_2 \\
x_3 \\
-10 x_1 - 5 x_2 - 2 x_3
\end{pmatrix}
$$
The classical fourth-order Runge–Kutta (RK4) scheme with step size $h$ is:
$$
\begin{align}
k_1 &= f(x(t)) \\
k_2 &= f(x(t) + \frac{h}{2}k_1) \\
k_3 &= f(x(t) + \frac{h}{2}k_2) \\
k_4 &= f(x(t) + h k_3) \\
x(t + h) &= x(t) + \frac{h}{6}(k_1 + 2 k_2 + 2 k_3 + k_4)
\end{align}
$$
So all you need to do is choose a step size $h$ and an initial condition $x(0) = x_0$.
For example use $h = 0.01$ and
$$
x_0 = \begin{pmatrix}
5 \\
-2 \\
3
\end{pmatrix}
$$
Inserting these into the scheme gives:
$$
\begin{align}
k_1 &= f(x_0) = \begin{pmatrix}
-2 \\
3 \\
-46
\end{pmatrix} \\
k_2 &= f(x_0 + \frac{h}{2}k_1) = \begin{pmatrix}
-1.985 \\
2.77 \\
-45.515
\end{pmatrix} \\
k_3 &= f(x_0 + \frac{h}{2}k_2) = \begin{pmatrix}
-1.98615 \\
2.772425 \\
-45.51485
\end{pmatrix} \\
k_4 &= f(x_0 + h k_3) = \begin{pmatrix}
-1.97227575 \\
2.5448515 \\
-45.02970925
\end{pmatrix}
\end{align}
$$
And so:
$$
x(0.01) = x(0) + \frac{0.01}{6}(k_1 + 2 k_2 + 2 k_3 + k_4) = \begin{pmatrix}
4.98014237375 \\
-1.972283830833333 \\
2.544850984583333
\end{pmatrix}
$$
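The hand computation above is easy to verify numerically; a minimal sketch in Python with NumPy (the function names are illustrative, not from any particular library):

```python
import numpy as np

# System matrix and initial condition from the example above
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-10, -5, -2]], dtype=float)

def f(x):
    """Right-hand side of x' = A x."""
    return A @ x

h = 0.01
x0 = np.array([5.0, -2.0, 3.0])

# One RK4 step, stage by stage
k1 = f(x0)
k2 = f(x0 + h / 2 * k1)
k3 = f(x0 + h / 2 * k2)
k4 = f(x0 + h * k3)
x1 = x0 + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(x1)  # matches the stages and x(0.01) computed by hand above
```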
If you repeat this 500 times, you get the solution up to $t = 5$ (plot omitted here).
If you simulate even longer, you can see that this settles into an undamped oscillation, which makes sense because the three eigenvalues of $A$ are $-2, \sqrt{5}i, -\sqrt{5}i$: the real eigenvalue contributes a decaying mode, while the purely imaginary pair gives a persistent oscillation.
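The eigenvalue claim can be confirmed numerically; a quick sketch in Python with NumPy:

```python
import numpy as np

# Numerical check of the eigenvalues of the system matrix above
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-10, -5, -2]], dtype=float)
eigs = np.linalg.eigvals(A)
# Expect one real eigenvalue at -2 (a decaying mode) and a purely
# imaginary pair at +/- i*sqrt(5) (an oscillation at frequency sqrt(5))
print(np.sort_complex(eigs))
```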
Best Answer
Taking the question text from your other question at SO on this topic (which you deleted as suggested, but did not transfer here to enhance this question):
A: In $$y'(t)=f(t,y(t),u(t)),$$ each constant segment $u(t)=u_k$ on $t\in[t_k,t_{k+1})$ can be considered to generate an independent ODE system $$y'(t)=f_k(t,y(t))~\text{ where }~f_k(t,y)=f(t,y,u_k).$$ Each of these systems is smooth on its own and thus perfectly solvable with reasonable step sizes, as long as it does not diverge to infinity.
Then you can construct a composite function that is the solution of the original equation, $$y(t)=y_k(t) ~\text{ for }~ t\in[t_k,t_{k+1}]$$ where $y_k$ is a solution to $$ y_k'(t)=f_k(t,y_k(t)),~~y_k(t_k)=y_{k-1}(t_k). $$
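A minimal sketch of this segment-by-segment construction (Python; the scalar system $y' = -y + u$, the breakpoints, and the input values are made up for illustration), using a fixed-step RK4 integrator restarted at each $t_k$:

```python
# Hedged sketch: integrate y' = -y + u(t) with piecewise-constant u by
# restarting a fixed-step RK4 integrator at every breakpoint t_k, so each
# segment sees a smooth right-hand side.
def rk4_segment(f, y, t_lo, t_hi, n_steps=100):
    """Integrate y' = f(y) from t_lo to t_hi with n_steps RK4 steps."""
    h = (t_hi - t_lo) / n_steps
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + h / 2 * k1)
        k3 = f(y + h / 2 * k2)
        k4 = f(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

breakpoints = [0.0, 1.0, 2.0, 3.0]   # t_0 < t_1 < t_2 < t_3 (illustrative)
u_values = [1.0, -0.5, 2.0]          # u(t) = u_k on [t_k, t_{k+1})

y = 0.0                              # y(t_0)
for t_lo, t_hi, u_k in zip(breakpoints, breakpoints[1:], u_values):
    f_k = lambda y, u=u_k: -y + u    # frozen-input system y_k' = f_k(y_k)
    y = rk4_segment(f_k, y, t_lo, t_hi)   # continuity: y_k(t_k) = y_{k-1}(t_k)
```

Each segment starts from the previous segment's endpoint, which is exactly the matching condition $y_k(t_k)=y_{k-1}(t_k)$ above.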
A: That is not really a problem if you compute the solutions separately. On the segment $[t_k,t_{k+1}]$, the solution is equal to the solution of the $k$th equation. That the solver uses points outside this interval does not change its accuracy inside the interval; on the contrary, it allows the solver to keep using relatively large step sizes.
Changing the equation exactly at $t_{k+1}$ would introduce a discontinuity in the equation or in one of its derivatives, which throws off the step-size controller, leading to excessively small step sizes and a correspondingly large number of function evaluations to get past that point.
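This cost can be observed directly; a hedged sketch in Python with SciPy (the test equation $y' = -y + u(t)$ with $u$ jumping at $t = 1$ is made up for illustration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative piecewise-constant input with a jump at t = 1
u = lambda t: 1.0 if t < 1.0 else -1.0

# One solve straight across the jump: the step-size controller has to
# localize t = 1 by rejecting and shrinking steps.
straight = solve_ivp(lambda t, y: -y + u(t), (0.0, 2.0), [0.0],
                     rtol=1e-10, atol=1e-12)

# Two solves restarting at the jump: each segment is smooth.
first = solve_ivp(lambda t, y: -y + 1.0, (0.0, 1.0), [0.0],
                  rtol=1e-10, atol=1e-12)
second = solve_ivp(lambda t, y: -y - 1.0, (1.0, 2.0), first.y[:, -1],
                   rtol=1e-10, atol=1e-12)

# The straight solve typically needs noticeably more function evaluations
print(straight.nfev, "vs", first.nfev + second.nfev)
```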