Terminology
When solving for the eigenvalues of a system, if an eigenvalue is repeated, the concern is whether there exist enough linearly independent eigenvectors.
If we have an eigenvalue with multiplicity $n$ which has less than $n$ linearly independent eigenvectors, we will not have enough solutions. With this in mind, we define the algebraic multiplicity of an eigenvalue to be the number of times it is a root of the characteristic equation. We define the geometric multiplicity of an eigenvalue to be the number of linearly independent eigenvectors for the eigenvalue.
Mathematically, we can state that the algebraic multiplicity of an eigenvalue $\lambda$ is, by definition, the largest integer $k$ such that $(x-\lambda)^k$ divides the characteristic polynomial. The geometric multiplicity of $\lambda$ is the dimension of its eigenspace, that is, it is the dimension of $\{X \in \mathbb{C}^{n\times 1} : AX=\lambda X\}$, where $n$ is the dimension of the matrix.
The null space of $A - \lambda I$ is called the eigenspace associated with the eigenvalue $\lambda$.
When the geometric multiplicity of an eigenvalue is less than the algebraic multiplicity, we say the matrix is defective. In the case of defective matrices, we must search for additional solutions using generalized eigenvectors.
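As a sanity check, both multiplicities can be computed numerically. The sketch below (assuming NumPy is available) uses the defective matrix analyzed in the next section:

```python
import numpy as np

# The (defective) matrix analyzed below; its only eigenvalue is 1.
A = np.array([[3.0, -4.0],
              [1.0, -1.0]])
lam = 1.0
n = A.shape[0]

# Algebraic multiplicity: how many eigenvalues (roots of the
# characteristic polynomial) coincide with lam, up to round-off.
alg_mult = int(np.sum(np.isclose(np.linalg.eigvals(A), lam)))

# Geometric multiplicity: nullity of (A - lam*I) = n - rank(A - lam*I).
geo_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))

print(alg_mult, geo_mult)
```

Here the algebraic multiplicity comes out as $2$ while the geometric multiplicity is $1$, exactly the defective situation described above.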
Analyze System
In this system, we have:
$$x' = \begin{pmatrix}3&-4\\1&-1\end{pmatrix}x$$
Aside: The matrix here has rank $2$. By rank-nullity, $\dim(A) = \text{rank}(A) + \text{nullity}(A)$ gives $2 = 2 + 0$, so the nullity is $0$; in other words, $A$ is an invertible matrix.
The corresponding characteristic equation for $A$ is:
$$\lambda^2-2\lambda+1 = (\lambda-1)(\lambda-1) \implies \lambda_1=\lambda_2=1$$
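This factorization is easy to check symbolically; a minimal sketch, assuming SymPy is installed:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[3, -4], [1, -1]])

# Characteristic polynomial of A, factored
p = sp.factor(A.charpoly(lam).as_expr())
print(p)  # (lambda - 1)**2
```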
The algebraic multiplicity for the eigenvalue $\lambda_1 = 1$ is $2$. Let's find the geometric multiplicity.
To find an eigenvector, we set up and solve $[A- \lambda I]v_i = 0$, so we have:
$$[A - 1 I]v_1 = \begin{pmatrix}2 & -4\\ 1 & -2\end{pmatrix}v_1 = 0.$$
The RREF of this matrix is:
$$\begin{pmatrix}1 & -2\\ 0 & 0\end{pmatrix}v_1 = 0.$$
This gives us an eigenvector $v_1 = (2, 1)$.
Observations
- The rank of the RREF matrix is $1$, and from the rank-nullity theorem, $\dim = 2 = \text{rank} + \text{nullity} = 1 + \text{nullity} \rightarrow \text{nullity} = 1$.
- Note that sometimes this is more generally called $\dim (\text{image}~ T) + \dim (\ker ~T) = \dim ~V$
- We know that the nullity is the geometric multiplicity, which is $1$. This means we can only find one independent eigenvector, so we have what is called a defective matrix.
- The eigenspace for $\lambda = 1$ is the nullspace of $[A - \lambda I] = [A - 1I]$, which is $\text{Span}\left\{\begin{pmatrix}2\\ 1\end{pmatrix}\right\}$. Notice how this agrees with the eigenvector we found above (as it should)?
- From the eigenspace terminology above, we can write $E(1)= \left\{X \in \mathbb{C}^{2\times 1} : AX = 1X\right\} = \text{Span}\left\{\begin{pmatrix}2\\ 1\end{pmatrix}\right\}$
- Note that there is a nice way of getting everything using the factorization of the characteristic polynomial, but that is for another day.
- From all of this, we still need to find a second independent (generalized) eigenvector.
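The eigenspace computation in the bullets above can be reproduced symbolically; a minimal SymPy sketch:

```python
import sympy as sp

A = sp.Matrix([[3, -4], [1, -1]])

# Basis for the nullspace of (A - 1*I), i.e. the eigenspace E(1)
basis = (A - sp.eye(2)).nullspace()
print(basis)  # one basis vector: (2, 1)^T
```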
To find a second eigenvector, we try:
$$[A - 1I]v_2 = v_1 \rightarrow \begin{pmatrix}2 & -4\\ 1 & -2\end{pmatrix}v_2 = \begin{pmatrix}2\\1\end{pmatrix}$$
Row reducing the augmented system leaves the single equation $a - 2b = 1$ for $v_2 = (a, b)$, so $a = 1 + 2b$. Letting $b = 0$ gives $a = 1$, hence $v_2 = (1, 0)$.
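It is worth verifying that $v_2$ really satisfies $(A - I)v_2 = v_1$; a quick NumPy check:

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [1.0, -1.0]])
v1 = np.array([2.0, 1.0])
v2 = np.array([1.0, 0.0])

# Generalized eigenvector condition: (A - I) v2 = v1
lhs = (A - np.eye(2)) @ v2
print(lhs)  # [2. 1.]
```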
So, we can write our general solution as:
$$x(t) = \begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} = e^t\left[ c_1 v_1 + c_2(v_1 t + v_2)\right] = e^t\left[ c_1 \begin{pmatrix}2\\1\end{pmatrix} + c_2\left(\begin{pmatrix}2\\1\end{pmatrix} t + \begin{pmatrix}1\\0\end{pmatrix}\right)\right] = e^t\left[ c_1 \begin{pmatrix}2\\1\end{pmatrix} + c_2\begin{pmatrix}2t + 1\\t\end{pmatrix}\right] $$
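One can confirm symbolically that this $x(t)$ satisfies $x' = Ax$ for arbitrary constants; a sketch with SymPy:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[3, -4], [1, -1]])
v1 = sp.Matrix([2, 1])
v2 = sp.Matrix([1, 0])

# General solution x(t) = e^t [c1 v1 + c2 (v1 t + v2)]
x = sp.exp(t) * (c1 * v1 + c2 * (v1 * t + v2))

# Residual x' - A x should vanish identically
residual = sp.simplify(sp.diff(x, t) - A * x)
print(residual)  # zero vector
```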
If we wanted to write the matrix exponential, we would have:
$$e^{At} = e^t\begin{pmatrix}2 t+1 & -4 t \\ t & 1-2 t \end{pmatrix}$$
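Since $(A - I)$ is nilpotent here, $e^{At} = e^t\left(I + (A - I)t\right)$, which reproduces the matrix above. This closed form can be cross-checked against SciPy's `expm` (a sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -4.0],
              [1.0, -1.0]])
t = 0.5  # arbitrary test time

# Closed form from the text: e^{At} = e^t [[2t+1, -4t], [t, 1-2t]]
closed = np.exp(t) * np.array([[2*t + 1, -4*t],
                               [t, 1 - 2*t]])
print(np.allclose(closed, expm(A * t)))  # True
```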
We can also draw the phase portrait for the system. We have a critical point at $(x, y) = (0,0)$. From the eigenvalues, we have a positive, repeated real root $\lambda = 1 \rightarrow$ a degenerate node. The phase portrait is as follows.
Best Answer
You start off well and then get lost a little bit towards the end. Let me try to clear up the confusion.
The first step towards a solution, as you correctly note, is to transform the system of ODEs into a matrix equation. Concretely, you'll find
$$\mathbf{x}'(t)=A \mathbf{x}(t)+\mathbf{g}(t)$$
with $\mathbf{g}(t)=(e^t,1)^T$, $\mathbf{x}(t)=(x(t),y(t))^T$ and the matrix $A$ has the form $$A=\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}.$$
The standard ansatz $\mathbf{x}(t)=\mathbf{\mu} e^{r t}$ shows you that solutions to the homogeneous equation (without the $\mathbf{g}(t)$ part) will have to satisfy the eigenvalue problem $A\mathbf{\mu}=r\mathbf{\mu}$.
Eigenvalues and eigenvectors are easily found; as you wrote above, you'll find $\begin{pmatrix}1 \\1 \end{pmatrix}$ for the eigenvalue $3$ and $\begin{pmatrix}-1 \\1 \end{pmatrix}$ for the eigenvalue $-1$. Once you've got these, you're ready to write down the general solution to the homogeneous system, which has the form
$$\mathbf{x}_c(t)=C_1 e^{3 t}\begin{pmatrix}1 \\1 \end{pmatrix} +C_2 e^{- t} \begin{pmatrix}-1 \\1 \end{pmatrix}.$$ In the same vein as you'd treat a nonhomogeneous ODE, the full solution will be the sum of the above plus a particular solution to the nonhomogeneous system, $\mathbf{x}(t)=\mathbf{x}_c(t)+\mathbf{x}_p(t)$. To that end, introduce the fundamental matrix $M$, whose columns are the two solutions to the above eigenvector problem:
$$M(t)=\begin{pmatrix} e^{3t} & -e^{-t} \\ e^{3t} & e^{-t} \end{pmatrix}.$$
For the particular solution, we make the ansatz $\mathbf{x}_p(t)=M(t)\mathbf{u}(t)$, with some yet-to-be-determined vector $\mathbf{u}(t)$. Inserting this into the very first equation yields $\dot{\mathbf{u}}(t) =M(t)^{-1} \mathbf{g}(t)$, solutions to which are obtained by integration.
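The integration step can be carried out symbolically. The sketch below (SymPy) computes $\mathbf{u}(t)$ and checks that $\mathbf{x}_p = M\mathbf{u}$ solves the nonhomogeneous system; the integration constants are set to zero, since any choice yields a valid particular solution:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [2, 1]])
g = sp.Matrix([sp.exp(t), 1])

# Fundamental matrix: columns are the two homogeneous solutions
M = sp.Matrix([[sp.exp(3*t), -sp.exp(-t)],
               [sp.exp(3*t),  sp.exp(-t)]])

# Variation of parameters: u'(t) = M(t)^{-1} g(t), integrated componentwise
u = (M.inv() * g).applyfunc(lambda e: sp.integrate(e, t))
x_p = sp.simplify(M * u)

# Check: x_p' - A x_p - g should vanish identically
residual = sp.simplify(sp.diff(x_p, t) - A * x_p - g)
print(residual)  # zero vector
```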
I'll stop at this point to give you a chance to put the final pieces together yourself, but don't hesitate if you have additional questions!