$\DeclareMathOperator{\span}{span}$
For $U+V$:
By the definition of $U+V$, we know that
$$
U+V=\span\left(\{(1,-1,1,-1),(1,3,-1,4)\} \cup \{(1,0,1,-3),(0,1,0,-2)\}\right)
$$
We do not know, however, whether $\{(1,-1,1,-1),(1,3,-1,4),(1,0,1,-3),(0,1,0,-2)\}$ forms a basis of this space, since we don't know whether these vectors are linearly independent (hint: they're not). Given a set of vectors, how do we extract a maximal linearly-independent subset?
Hint: given a set of vectors $\{v_1,\dots,v_k\} \subset \mathbb{R}^n$, you can extract a maximal linearly independent subset by row reducing the matrix
$$
\pmatrix{|&|&&|\\v_1&v_2&\cdots&v_k\\|&|&&|}
$$
and selecting the vectors corresponding to the pivots of the row-reduced matrix.
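As a concrete illustration of this hint (assuming SymPy is available), stack the four spanning vectors from the problem as columns and read off the pivot columns of the RREF:

```python
import sympy as sp

# Columns are the four spanning vectors of U + V, in the order
# (1,-1,1,-1), (1,3,-1,4), (1,0,1,-3), (0,1,0,-2).
M = sp.Matrix([
    [ 1,  1,  1,  0],
    [-1,  3,  0,  1],
    [ 1, -1,  1,  0],
    [-1,  4, -3, -2],
])

_, pivots = M.rref()                  # indices of the pivot columns
basis = [M.col(j) for j in pivots]    # a maximal linearly independent subset
```

Here `pivots` comes out as $(0,1,2)$: the first three vectors form a basis, and $\dim(U+V)=3$.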
For $U \cap V$:
Which vectors of the form $a_1\cdot (1,-1,1,-1) + a_2\cdot(1,3,-1,4)$ satisfy the equations defining $V$? Could you solve for $a_1$ and $a_2$ using some sort of matrix equation?
That is, define $u_1 = (1,-1,1,-1)^T$ and $u_2 = (1,3,-1,4)^T$. Any $u \in U$ can be expressed in the form
$$
u = a_1u_1 + a_2u_2 = \pmatrix{u_1 & u_2}\pmatrix{a_1\\a_2} =
\pmatrix{a_1+a_2 \\ -a_1 + 3a_2 \\ a_1 - a_2 \\ -a_1 + 4a_2}
$$
We want to find the values of $u$ (and hence of $a_1$ and $a_2$) that satisfy the equalities defining $V$.
As you found, $V$ is the solution set of the matrix equation
$$
\pmatrix{1&0&-1&0\\1&2&2&1} \pmatrix{x_1\\x_2\\x_3\\x_4} = 0
$$
So, in order for $a_1u_1 + a_2 u_2$ to be in $V$, it has to satisfy
$$
\pmatrix{1&0&-1&0\\1&2&2&1} (a_1 u_1 + a_2 u_2) = 0
$$
That is, we need to find the solution to the matrix equation
$$
\pmatrix{1&0&-1&0\\1&2&2&1} \pmatrix{u_1 & u_2}\pmatrix{a_1\\a_2} = 0
$$
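The product on the left is a $2\times 2$ matrix, so the solutions $(a_1,a_2)$ form its null space. A short SymPy sketch of this computation (SymPy assumed available):

```python
import sympy as sp

B = sp.Matrix([[1, 0, -1, 0],
               [1, 2,  2, 1]])   # coefficient matrix of the equations defining V
U = sp.Matrix([[ 1,  1],
               [-1,  3],
               [ 1, -1],
               [-1,  4]])        # columns u1 and u2

ns = (B * U).nullspace()         # all (a1, a2) with B (a1 u1 + a2 u2) = 0
vectors = [U * a for a in ns]    # the corresponding vectors spanning U ∩ V
```

The null space is one-dimensional with $(a_1,a_2)=(1,0)$, so $U\cap V = \operatorname{span}\{u_1\}$ (indeed, $u_1$ already satisfies both equations defining $V$).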
Let $\{v_1, \dots, v_m\}$ be a basis for $\ker(T)$.
Extend it to a basis $\{v_1, \dots, v_n\}$ of $E$.
We claim that $A=\{T(v_{m+1}), \dots, T(v_n)\}$ is a basis for $\operatorname*{Im}(T)$.
Let $v \in \operatorname*{Im}(T)$. Then $v=T(w)$ for some $w \in E$. This means that $w$ can be expressed as
$$w=\sum_{1}^{n}\alpha_iv_i$$
Then $v=T(w)=T\left(\sum_{1}^{n}\alpha_iv_i\right)=\sum_{1}^{n}\alpha_iT(v_i)=\sum_{m+1}^{n}\alpha_iT(v_i)$, since $v_1,\dots,v_m \in \ker(T)$ and so the first $m$ terms vanish.
Hence $v \in \operatorname*{span}(A)$.
We now have to show linear independence:
Suppose $\sum_{m+1}^{n}\alpha_iT(v_i)=0$. Then $T\left(\sum_{m+1}^{n}\alpha_iv_i\right)=0$, i.e. $\sum_{m+1}^{n}\alpha_iv_i \in \ker(T)$.
So $\sum_{m+1}^{n}\alpha_iv_i = \sum_{1}^{m}\beta_iv_i$ for some scalars $\beta_i$. Since $\{v_1,\dots,v_n\}$ is linearly independent, rearranging forces every $\alpha_i$ (and every $\beta_i$) to equal $0$, which proves the claim.
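The conclusion of this argument is the rank–nullity theorem: $\dim\operatorname*{Im}(T)+\dim\ker(T)=\dim E$. A quick numerical sanity check of that identity, using an arbitrary example matrix (SymPy assumed available):

```python
import sympy as sp

# Arbitrary example map T : R^4 -> R^3 (its third row = first + second).
T = sp.Matrix([[1, 2, 0, 1],
               [0, 1, 1, 0],
               [1, 3, 1, 1]])

rank = T.rank()                    # dim Im(T)
nullity = len(T.nullspace())       # dim ker(T)
assert rank + nullity == T.cols    # rank-nullity: sum equals dim of the domain
```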
Best Answer
Converting to a matrix is a very good idea: if $A$ is the matrix of $T$ with respect to $\{x_1,x_2,x_3,x_4\}$, then you can identify the image of $T$ with the column space $C(A)$ of $A$, and the kernel of $T$ with the null space $N(A)$ of $A$.
However, some computations are necessary, because you need to find either $C(A)+N(A)$ or $C(A)\cap N(A)$, in order to apply Grassmann’s formula.
The column space is spanned by $$ \left\{\, v_1=\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}, v_2=\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \,\right\} $$ The null space is spanned by $$ \left\{\, v_3=\begin{bmatrix} -1 \\ 1 \\ 0 \\ 0 \end{bmatrix}, v_4=\begin{bmatrix} -1 \\ 0 \\ 1 \\ 0 \end{bmatrix} \,\right\} $$ The vector $v_3$ is a linear combination of $v_1$ and $v_2$; on the other hand, $\{v_1,v_2,v_4\}$ is linearly independent. Therefore $$ \dim(C(A)+N(A))=3 $$ Now Grassmann’s formula says $$ \dim(C(A)\cap N(A))=\dim C(A)+\dim N(A)-\dim(C(A)+N(A))=2+2-3=1 $$
Since $C(A)\cap N(A)$ contains $v_3$ and has dimension $1$, we conclude that $C(A)\cap N(A)$ is spanned by $v_3$.
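The dimension count above can be verified mechanically (again assuming SymPy): stack the four spanning vectors as columns and read off the rank.

```python
import sympy as sp

v1 = sp.Matrix([ 1, 0, 0, 0]); v2 = sp.Matrix([0, 1, 0, 0])   # span C(A)
v3 = sp.Matrix([-1, 1, 0, 0]); v4 = sp.Matrix([-1, 0, 1, 0])  # span N(A)

dim_sum = sp.Matrix.hstack(v1, v2, v3, v4).rank()  # dim(C(A) + N(A))
dim_int = 2 + 2 - dim_sum                          # Grassmann's formula
```

This reproduces $\dim(C(A)+N(A))=3$ and $\dim(C(A)\cap N(A))=1$.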