If you write your solution now as $$\vec{x}(t)=\left(C_1\begin{bmatrix} 1 \\ 1 \end{bmatrix}+C_2\begin{bmatrix} 0 \\ 1\end{bmatrix}\right)e^{-t}+C_2\begin{bmatrix}1 \\ 1\end{bmatrix}te^{-t}$$
it may be a little easier to see what's going on. As $t\to\pm\infty$, the solution is dominated by the second term, so $$\vec{x}(t)\approx C_2\begin{bmatrix} 1 \\ 1\end{bmatrix}te^{-t}\qquad t\to\pm\infty$$
As $t\to\infty$, the whole solution decays to $0$, so it makes sense that all solution curves begin to look like they fall along this eigenvector as they decay to $0$. Applying the same logic as $t\to-\infty$, it would seem that solutions grow along this vector as well, but this is only true on a macroscopic scale, because both terms grow exponentially in that direction. The first term, no longer decaying to $0$, shifts the solution away from the eigenvector; it grows large, but not as large as the second term. This means that, zoomed in, solutions look like they are leaving the eigenvector, while zoomed out they look like they lie along it (that is, until you let time grow large enough, at which point you see them drift away slowly).
The last thing to consider is the direction of the curve. Suppose that $\vec{x}$ is in the first quadrant as $t\to\infty$. This means that $C_2$ is positive, so if $t\to-\infty$, then the sign on the second term is now negative, and the solution turns around to enter the third quadrant. This means that solutions sort of "spin" by turning $180^\circ$ around while still growing away from the origin.
The vector $\begin{bmatrix} 0 \\ 1\end{bmatrix}$ tells you which way the solution spins. If $C_2$ is positive, then at $t=0$ the solution lies above (in the $xy$-plane) the line through the origin in the direction of $\begin{bmatrix} 1 \\ 1\end{bmatrix}$, so the solution must remain on this side of that line. The dominating term is in the first quadrant as $t\to\infty$, so the solution must spin clockwise to decay this way. It turns out that if solutions spin clockwise on one side of the eigenvector, they do the same on the other side as well; the same is true if they spin counterclockwise.
This type of equilibrium is called a Degenerate Node, in case you were curious. Here is an example of what one might look like.
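The original system matrix isn't shown above, but one matrix whose general solution matches the formula at the top is $A=\begin{bmatrix}-2 & 1\\ -1 & 0\end{bmatrix}$: it has the double eigenvalue $-1$, eigenvector $(1,1)$, and generalized eigenvector $(0,1)$. The sketch below (with this assumed $A$) checks that eigenstructure numerically and verifies that the closed-form solution satisfies $\vec{x}'=A\vec{x}$.

```python
import numpy as np

# Hypothetical system matrix consistent with the general solution above:
# double eigenvalue -1, eigenvector (1, 1), generalized eigenvector (0, 1).
A = np.array([[-2.0, 1.0],
              [-1.0, 0.0]])

# Both eigenvalues are -1 (a repeated root => degenerate node).
assert np.allclose(np.linalg.eigvals(A), [-1.0, -1.0])

v = np.array([1.0, 1.0])   # eigenvector
w = np.array([0.0, 1.0])   # generalized eigenvector

# Chain conditions for the degenerate node: A v = -v and (A + I) w = v.
assert np.allclose(A @ v, -v)
assert np.allclose((A + np.eye(2)) @ w, v)

def x(t, C1=1.0, C2=1.0):
    """Closed-form general solution from the text."""
    return (C1 * v + C2 * w) * np.exp(-t) + C2 * v * t * np.exp(-t)

# Verify x'(t) = A x(t) with a central finite difference.
t, h = 0.7, 1e-6
dx = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dx, A @ x(t), atol=1e-5)
```

Any matrix similar to this one (a single $2\times 2$ Jordan block with eigenvalue $-1$) produces the same qualitative phase portrait.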
Write the general solution of the homogeneous linear ODE
$$
\tag{1}
\frac{\mathrm{d}\vec{y}}{\mathrm{d}x} = A(x) \vec{y}
$$
in the matrix form as
$$
\Phi(x, x_0) \vec{c},
$$
where $\vec{c} = \mathrm{col}(c_1, \dots, c_n)$ is a constant vector (that is, independent of $x$), and $\Phi$ is the state-transition matrix such that $\Phi(x_0, x_0) = I$ (the identity matrix).
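For a constant coefficient matrix $A$, the state-transition matrix has the explicit form $\Phi(x, x_0) = e^{A(x-x_0)}$; the sketch below (constant $A$ assumed, since the text allows $A(x)$ to vary) checks the defining properties $\Phi(x_0,x_0)=I$ and $\frac{\mathrm{d}}{\mathrm{d}x}\Phi = A\Phi$ numerically.

```python
import numpy as np
from scipy.linalg import expm

# Constant-coefficient special case: Phi(x, x0) = expm(A (x - x0)).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def Phi(x, x0):
    return expm(A * (x - x0))

x0 = 0.0
# Phi(x0, x0) = I, the identity matrix.
assert np.allclose(Phi(x0, x0), np.eye(2))

# Phi satisfies d/dx Phi = A Phi (checked with a central difference).
x, h = 1.3, 1e-6
dPhi = (Phi(x + h, x0) - Phi(x - h, x0)) / (2 * h)
assert np.allclose(dPhi, A @ Phi(x, x0), atol=1e-5)

# The general solution of (1) is then y(x) = Phi(x, x0) c.
c = np.array([2.0, -1.0])
y = Phi(1.0, x0) @ c
```

For a genuinely $x$-dependent $A(x)$ there is no such closed form in general, but the two properties checked above still characterize $\Phi$.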
Now we are looking for a general solution of the nonhomogeneous linear ODE
$$
\tag{2}
\frac{\mathrm{d}\vec{y}}{\mathrm{d}x} = A(x) \vec{y} + \vec{b}(x)
$$
in the form
$$
\Phi(x, x_0) \vec{c}(x).
$$
Incidentally, here is the source of the term "variation of constants". We look for conditions for the above function of $x$ to be a solution of $(2)$, that is, to have
$$
\frac{\mathrm{d}}{\mathrm{d}x}(\Phi(x, x_0) \vec{c}(x)) = A(x) (\Phi(x, x_0) \vec{c}(x)) + \vec{b}(x).
$$
But
$$
\frac{\mathrm{d}}{\mathrm{d}x}(\Phi(x, x_0) \vec{c}(x)) = \frac{\mathrm{d}}{\mathrm{d}x} \Phi(x, x_0) \cdot \vec{c}(x) + \Phi(x, x_0) \cdot \vec{c}'(x)
$$
and
$$
\frac{\mathrm{d}}{\mathrm{d}x} \Phi(x, x_0) = A(x) \Phi(x, x_0),
$$
so after cancellation we obtain
$$
\Phi(x, x_0) \cdot \vec{c}'(x) = \vec{b}(x),
$$
which is, for each fixed $x$, a Cramer system (because $\det{\Phi(x, x_0)} = W(x, x_0) \ne 0$) of $n$ linear equations in the $n$ unknowns $c'_1(x), \dots, c'_n(x)$. The unknowns are given by Cramer's rule. We have thus obtained the derivative of the vector function $\vec{c}(x)$, which we then integrate. That is the source of the integral.
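The whole derivation can be checked numerically. The sketch below (with an assumed constant $A$ and a made-up $\vec{b}$) solves $\Phi(x,x_0)\,\vec{c}'(x)=\vec{b}(x)$ for $\vec{c}'$, integrates it by quadrature, and compares the resulting variation-of-constants solution against a black-box ODE solver.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad, solve_ivp

# Variation of constants, constant-coefficient sketch:
# y(x) = Phi(x, x0) ( y0 + integral_{x0}^{x} Phi(s, x0)^{-1} b(s) ds )
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = 0.0
y0 = np.array([1.0, 0.0])

def b(s):
    # Arbitrary forcing term for the demonstration.
    return np.array([np.sin(s), 1.0])

def Phi(x):
    return expm(A * (x - x0))

def y_voc(x):
    # c'(s) = Phi(s)^{-1} b(s); integrate each component by quadrature.
    integrand = lambda s, i: np.linalg.solve(Phi(s), b(s))[i]
    integral = np.array([quad(integrand, x0, x, args=(i,))[0]
                         for i in range(2)])
    return Phi(x) @ (y0 + integral)

# Cross-check against a high-accuracy numerical solution of (2).
sol = solve_ivp(lambda s, y: A @ y + b(s), (x0, 2.0), y0,
                rtol=1e-10, atol=1e-12)
assert np.allclose(y_voc(2.0), sol.y[:, -1], atol=1e-6)
```

Inverting $\Phi$ via `np.linalg.solve` is exactly the linear-system step that Cramer's rule performs symbolically.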
The "Wrońskian" determinant was named after a nineteenth-century Polish philosopher and mathematician, Józef Maria Hoene-Wroński.
Your image is the result of drawing many long secants inside the convex closed solution curve. To see it happening, I ran the integration on some smaller time intervals, still using the time step 1.
If you go further to `T = 4000` as you did, the shift of the secant positions will have covered the full solution curve, resulting in an image just like the one you got. Letting the solver return the internally generated sampling nodes and values, instead of prescribing a step size of 1, results in the image
This was produced with Python's `scipy.integrate.solve_ivp`. Using Matlab will do essentially the same, but insert 3 interpolated points in each segment, so that the plot follows the curve much more closely.
Another measure is to reduce the step size to `0.1` or smaller. Adding some secants for time steps 1, 2, 4 to add some content gives