Please note that it makes no sense in general to "give" a linear transformation between linear spaces that are not the standard spaces (the spaces $F^n$; in this case, $\mathbb{R}^n$) simply by giving a matrix. Vectors in general vector spaces are not tuples.
In order to be able to interpret the matrix as a linear transformation, there has to be a specified basis for both the source and target spaces. In the case of $\mathbb{R}^n$, these bases are usually assumed to be the standard bases, but for other vector spaces there is usually no clear choice. Even for the vector spaces of polynomials, there are several bases that may be in use depending on the setting; for instance, the basis $\{1,x,x^2\}$ for $P_2$ may seem "obvious", but then so does $\{x^2,x,1\}$, and this will give you a different transformation; and if you are working with $P_2$ as an inner product space with inner product given by $$\langle p,q\rangle = \int_a^b p(t)q(t)\,dt,$$
then neither of those bases makes good sense.
So when you start your question, you need to specify the bases you are using. Otherwise, giving the matrix does not really tell you what the linear transformation "really" is.
That said, there are some things you can compute even without knowing what the bases are. The rank of the matrix will be the dimension of the image of the linear transformation; once you specify the bases you will be able to actually describe the image, not merely state its dimension. (In some cases you can identify the image anyway, e.g., if the dimension is $0$ or equals the dimension of the target space.) Likewise, the nullity of the matrix will be the dimension of the nullspace of the linear transformation, but without knowing the basis of the domain you may only be able to state that dimension, not describe the nullspace itself.
The nullspace is never empty: the nullspace is the collection of all vectors that map to $0$. Since the zero vector always maps to zero, the zero vector is always in the nullspace. In the case of your first map, since the nullity is zero, the dimension of the nullspace is $0$. The only subspace of dimension $0$ is the zero subspace, so the nullspace of your first linear transformation is $\{0\}$, regardless of what the basis is.
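As an illustration of both points, here is a small sympy sketch; the matrices are hypothetical examples, not the ones from the question:

```python
# A sketch with hypothetical matrices: the rank gives the dimension of the
# image and the nullity gives the dimension of the nullspace, whatever
# bases are eventually chosen.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],   # second row is twice the first
               [1, 0, 1]])

rank = A.rank()
nullity = A.cols - rank       # rank-nullity: rank + nullity = number of columns
print(rank, nullity)          # 2 1

B = sp.Matrix([[1, 0],
               [1, 1]])       # invertible, so its nullity is 0
print(B.nullspace())          # [] -- the nullspace is just {0}
```

The empty list for `B` is sympy's way of saying the nullspace has no basis vectors, i.e., it is the zero subspace $\{0\}$.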
In your second example, the nullspace of the matrix is spanned by the vector $(-10,3,4)^t$. That means that, if your basis consists of the polynomials $p_1(x)$, $p_2(x)$, and $p_3(x)$, then the nullspace will consist exactly of all vectors of the form
$$\alpha\Bigl( -10p_1(x) + 3p_2(x) + 4p_3(x)\Bigr),\qquad \alpha\in\mathbb{R};$$
for example, if your ordered basis is $[1,x,x^2]$, then they are all scalar multiples of $-10 + 3x + 4x^2$. If your ordered basis is $[x^2,x,1]$, then they are all scalar multiples of $-10x^2 + 3x + 4$. If your ordered basis is $[1,1+x,1+x+x^2]$, then the nullspace consists of all multiples of
$$-10(1) + 3(1+x) + 4(1+x+x^2) = -3 + 7x + 4x^2.$$
(Can you see why specifying the basis is important?)
Note that this is not the same as saying $-10x^2+3x+4=0$; that equation makes no sense in the context of these vector spaces and linear transformations.
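To make the basis-dependence concrete, here is a sympy sketch; the coordinate vector $(-10,3,4)^t$ is taken from the example above, while the code itself is only an illustration:

```python
# The same coordinate vector yields different polynomials in different
# ordered bases of P_2.
import sympy as sp

x = sp.symbols('x')
coords = sp.Matrix([-10, 3, 4])  # coordinate vector spanning the nullspace

bases = {
    "[1, x, x^2]":       [sp.Integer(1), x, x**2],
    "[x^2, x, 1]":       [x**2, x, sp.Integer(1)],
    "[1, 1+x, 1+x+x^2]": [sp.Integer(1), 1 + x, 1 + x + x**2],
}

for name, basis in bases.items():
    p = sp.expand(sum(c * b for c, b in zip(coords, basis)))
    print(f"basis {name}: nullspace spanned by {p}")
```

Running this reproduces the three polynomials in the answer: $-10+3x+4x^2$, $-10x^2+3x+4$, and $-3+7x+4x^2$.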
Suppose that $T$ has $n$ distinct eigenvalues, say $c_1, c_2,\ldots ,c_n$, and let $v_1, v_2,\ldots ,v_n$ be the corresponding eigenvectors. First, show that eigenvectors $v_1, v_2,\ldots ,v_n$ corresponding to distinct eigenvalues of $T$ are linearly independent. Then, since $\dim(V) = n$, $S = \{v_1, v_2,\ldots ,v_n\}$ is an ordered basis of $V$ consisting of eigenvectors of $T$. Hence $T$ is diagonalizable.
Hint for proving result: Eigenvectors corresponding to distinct eigenvalues of $T$ are linearly independent.
Suppose that $T$ has $n$ distinct eigenvalues say $c_1, c_2,\ldots ,c_n$. Let $v_1, v_2,\ldots ,v_n $ be the corresponding eigenvectors.
Then $T(v_i) = c_i v_i$ for $i = 1, 2, \ldots, n$. We shall prove that $S = \{v_1, v_2,\ldots ,v_n\}$ is linearly independent, by induction on the number of vectors. For a single vector, $\{v_1\}$ is linearly independent since $v_1\neq 0$. Suppose that $\{v_1, v_2,\ldots ,v_k\}$ is linearly independent, where $k<n$; we shall prove that $\{v_1, v_2,\ldots ,v_{k+1}\}$ is linearly independent. Suppose $a_1 v_1 + \cdots + a_{k+1} v_{k+1} = 0$. Applying $T$ gives $a_1 c_1 v_1 + \cdots + a_{k+1} c_{k+1} v_{k+1} = 0$; subtracting $c_{k+1}$ times the original equation yields $$a_1(c_1 - c_{k+1})v_1 + \cdots + a_k(c_k - c_{k+1})v_k = 0.$$ By the induction hypothesis, $a_i(c_i - c_{k+1}) = 0$ for $i \leq k$, and since the eigenvalues are distinct, $a_i = 0$ for $i \leq k$. The original equation then reduces to $a_{k+1} v_{k+1} = 0$, so $a_{k+1} = 0$ as well.
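The result is easy to sanity-check with sympy; the matrix below is a hypothetical example with three distinct eigenvalues, not taken from the question:

```python
# A matrix with n distinct eigenvalues is diagonalizable: sympy finds an
# eigenvector basis P with P^{-1} A P diagonal.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 3, 1],
               [0, 0, 5]])  # upper triangular: eigenvalues 2, 3, 5, all distinct

P, D = A.diagonalize()      # columns of P are eigenvectors; D is diagonal
assert P.inv() * A * P == D
print(D)
```

The assertion checks exactly the statement of the theorem: in the eigenvector basis given by the columns of $P$, the operator acts diagonally.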
As a linear operator on $P_n$, differentiation can be expressed in matrix terms, but doing so is completely unnecessary and obscures what’s really going on; you’re better off thinking in terms of the definition of the null space of a linear transformation.
The null space of an operator $T:V\to W$ is simply the set $\{v\in V:T(v)=0_W\}$ of vectors in $V$ that get sent to the $0$ vector of $W$ by $T$. For what polynomials $p(x)\in P_n$ is it true that $$\frac{d}{dx}p(x)$$ is the zero vector of $P_n$? (For starters, what is the zero vector of $P_n$?) This has a very simple answer that requires no fiddling with matrices.
Once you’ve handled the first derivative, the rest should be easy. To find the null space of $\frac{d^3}{dx^3}$, for instance, just ask yourself which $p(x)\in P_n$ have the property that $$\frac{d^3}{dx^3}p(x)$$ is the zero vector of $P_n$.
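Although the answer deliberately avoids matrices, the claim is easy to verify with one. As a sketch, the matrix below represents $\frac{d}{dx}$ on $P_3$ with respect to the (assumed) ordered basis $[1, x, x^2, x^3]$:

```python
# Matrix of d/dx on P_3 in the ordered basis [1, x, x^2, x^3]:
# column j holds the coordinates of d/dx (x^j).
import sympy as sp

D = sp.Matrix([[0, 1, 0, 0],
               [0, 0, 2, 0],
               [0, 0, 0, 3],
               [0, 0, 0, 0]])

print(D.nullspace())        # spanned by (1,0,0,0)^t: the constant polynomials
print(len((D**3).nullspace()))  # 3: d^3/dx^3 kills the polynomials of degree <= 2
```

This agrees with the definition-based argument: the nullspace of $\frac{d}{dx}$ is the constants, and the nullspace of $\frac{d^3}{dx^3}$ is the polynomials of degree at most $2$.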