To test whether $T$ is a linear transformation, you need to check that for all vectors $a$ and $b$ and every scalar $c$:
$$T(a + b) = T(a) +T(b)$$
$$T(ca) = cT(a)$$
$$T(0) = 0$$
(The third condition follows from the second with $c = 0$, so it is mainly a quick way to rule out maps that are not linear.)
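As a quick numerical sketch of these checks (my own illustration, not part of the original answer): the hypothetical helper below spot-checks the two defining properties on random vectors. Passing random trials is only evidence, not proof, since the properties must hold for *all* vectors.

```python
import numpy as np

# Hypothetical helper: numerically spot-check additivity and
# homogeneity for a map T on random vectors. This can only give
# evidence of linearity, never a proof -- but a single failure
# does prove T is NOT linear.
def looks_linear(T, dim, trials=100, tol=1e-9):
    rng = np.random.default_rng(0)
    for _ in range(trials):
        a, b = rng.normal(size=dim), rng.normal(size=dim)
        c = rng.normal()
        if not np.allclose(T(a + b), T(a) + T(b), atol=tol):
            return False
        if not np.allclose(T(c * a), c * T(a), atol=tol):
            return False
    return True

# A genuinely linear map: rotation of the plane by 90 degrees.
rot = lambda v: np.array([-v[1], v[0]])
print(looks_linear(rot, 2))                    # True
# A map with a constant offset fails immediately.
print(looks_linear(lambda v: v + 1.0, 2))      # False
```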
So for example,
A. $T(x_1,x_2,x_3)=(x_1,0,x_3)$
$$T(x_1+y_1,x_2+y_2,x_3+y_3)=(x_1+y_1,0,x_3+y_3)=(x_1,0,x_3)+(y_1,0,y_3)=T(x_1,x_2,x_3)+T(y_1,y_2,y_3)$$
$$T(cx_1,cx_2,cx_3)=(cx_1,0,cx_3)=c(x_1,0,x_3)=cT(x_1,x_2,x_3)$$
$$T(0,0,0)=0$$
B. $T(x_1,x_2)=(2x_1−3x_2,x_1+4,5x_2)$
$$T(x_1+y_1,x_2+y_2)=(2(x_1+y_1)-3(x_2+y_2),(x_1+y_1)+4,5(x_2+y_2))=(2x_1+2y_1-3x_2-3y_2,x_1+y_1+4,5x_2+5y_2)$$
$$T(x_1,x_2)+T(y_1,y_2)=(2x_1-3x_2,x_1+4,5x_2)+(2y_1-3y_2,y_1+4,5y_2)=(2x_1-3x_2+2y_1-3y_2,x_1+y_1+8,5x_2+5y_2)\neq T(x_1+y_1,x_2+y_2)$$
So B is not a linear transformation: the constant $+4$ in the second component breaks additivity (equivalently, $T(0,0)=(0,4,0)\neq 0$).
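The two worked examples can be checked numerically as well (my own sketch, using the maps A and B exactly as defined above):

```python
import numpy as np

# Map A from the text: projects out the middle coordinate (linear).
T_A = lambda x: np.array([x[0], 0.0, x[2]])
# Map B from the text: the constant +4 makes it affine, not linear.
T_B = lambda x: np.array([2*x[0] - 3*x[1], x[0] + 4, 5*x[1]])

def additive(T, a, b):
    """Check T(a + b) == T(a) + T(b) for one pair of vectors."""
    return bool(np.allclose(T(a + b), T(a) + T(b)))

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(additive(T_A, a, b))        # True
u, v = np.array([1.0, 2.0]), np.array([3.0, 4.0])
print(additive(T_B, u, v))        # False
print(T_B(np.zeros(2)))           # [0. 4. 0.], so T(0) != 0
```

Note how the failed zero check alone already rules out B.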
Pick any nonzero vector $v$. Then either $Tv$ is linearly independent of $v$, or $Tv=pv$ for some scalar $p$.
If $Tv$ is linearly independent of $v$, then $\{Tv,v\}$ is a basis of $\mathbb R^2$. Therefore $T^2v$ is a linear combination of $Tv$ and $v$ and $T^2v+aTv+bv=0$ for some scalars $a$ and $b$. It follows that both $(T^2+aT+bI)v$ and $(T^2+aT+bI)(Tv)=T(T^2+aT+bI)v$ are zero. That is, $T^2+aT+bI$ maps a basis of $\mathbb R^2$ to zero. Hence $T^2+aT+bI$ must be zero.
If $Tv=pv$, then $(T-pI)v=0$. Let $u$ be any vector that is linearly independent of $v$. Then $\{u,v\}$ forms a basis of $\mathbb R^2$. Hence $Tu$ is a linear combination of $u$ and $v$, say $Tu=qu+rv$ for some scalars $q$ and $r$. It follows that both $(T-pI)(T-qI)u=(T-pI)(rv)$ and $(T-pI)(T-qI)v=(T-qI)\left((T-pI)v\right)$ are zero. That is, $(T-pI)(T-qI)$ maps a basis of $\mathbb R^2$ to zero. Hence $(T-pI)(T-qI)$ must be zero. Expanding the product, we get $T^2+aT+bI=0$ for some scalars $a$ and $b$.
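A concrete sanity check of this conclusion (my own addition): for a $2\times 2$ matrix, the Cayley–Hamilton theorem identifies the scalars explicitly as $a=-\operatorname{tr}(T)$ and $b=\det(T)$, so the relation can be verified on a random matrix.

```python
import numpy as np

# For any 2x2 matrix T, taking a = -trace(T) and b = det(T)
# gives T^2 + aT + bI = 0 (Cayley-Hamilton in dimension 2),
# matching the existence result proved above.
rng = np.random.default_rng(1)
T = rng.normal(size=(2, 2))
a = -np.trace(T)
b = np.linalg.det(T)
residual = T @ T + a * T + b * np.eye(2)
print(np.allclose(residual, 0))   # True
```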
Edit. If you are comfortable with the concept of the minimal polynomial, a cleaner proof is to show that the minimal polynomial of a linear operator $T$ on an $n$-dimensional vector space has degree at most $n$. See this answer for instance.
Best Answer
The idea is that you want to show $$ (T_1 + T_2)(u + v) = (T_1 + T_2)u + (T_1 + T_2)v $$ for all $u,v \in V$, and $$ (T_1 + T_2)(cu) = c(T_1 + T_2)(u) $$ for all $u\in V$ and all scalars $c$.
So you want to $\textit{avoid}$ using example vectors, because you need to verify the properties for $\textit{every}$ vector. We can prove the first part in the following way: $$ (T_1 + T_2)(u + v) = T_1(u + v) + T_2(u + v) $$ This follows from the definition of the sum of two transformations. Then the linearity of $T_1$ and $T_2$ kicks in: $$ T_1(u + v) + T_2(u + v) = T_1(u) + T_1(v) + T_2(u) + T_2(v) $$ Then, group the transformations being applied to $u$ and $v$: $$ T_1(u) + T_1(v) + T_2(u) + T_2(v) = (T_1 + T_2)u + (T_1 + T_2)v $$ Putting it all together gives us: $$ (T_1 + T_2)(u + v) = (T_1 + T_2)u + (T_1 + T_2)v $$ Verifying the other property is similar, and the argument for $cT_1$ is analogous.