The kernel of this map is the following:
$$\ker(T) = \{ a_0 + a_1 x + a_2 x^2 + a_3 x^3 \in P_3 : a_3 x^2 - a_0 = 0\}$$
If this polynomial $a_3 x^2 - a_0$ is equal to zero for all $x$ values, then we know that $a_3$ and $a_0$ must be zero. So the kernel is in fact the set of all polynomials in $P_3$ with $a_3 = a_0 = 0$, in other words, all polynomials of the form $a_1 x + a_2 x^2$.
You can partially check this result using the rank-nullity theorem. The dimension of $P_3$ is $4$, the dimension of the image is $2$, and hence the dimension of the kernel is $4-2 = 2$.
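As a numerical sanity check (assuming, as above, that $T$ sends $a_0 + a_1 x + a_2 x^2 + a_3 x^3$ to $a_3 x^2 - a_0$, written in the basis $\{1, x, x^2, x^3\}$), one can represent $T$ by a matrix and verify rank-nullity directly:

```python
import numpy as np

# Matrix of T : P_3 -> P_3 in the basis {1, x, x^2, x^3}, assuming
# T(a0 + a1 x + a2 x^2 + a3 x^3) = a3 x^2 - a0.
# Column j holds the coefficients of T applied to the j-th basis polynomial.
M = np.array([
    [-1, 0, 0, 0],   # constant coefficient: -a0
    [ 0, 0, 0, 0],   # x coefficient: 0
    [ 0, 0, 0, 1],   # x^2 coefficient: a3
    [ 0, 0, 0, 0],   # x^3 coefficient: 0
])

rank = np.linalg.matrix_rank(M)   # dim(image)
nullity = M.shape[1] - rank       # dim(kernel), by rank-nullity
print(rank, nullity)              # prints: 2 2
```

The two free columns correspond exactly to the basis $\{x, x^2\}$ of the kernel found above.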
Let $V$ and $W$ be vector spaces over a field $\mathbb{K}$. You should (hopefully!) know that a function $f\colon V\to W$ is a linear transformation if for all $u,v\in V$ and all $\lambda,\mu\in\mathbb{K}$, we have
$$f(\lambda u+\mu v)=\lambda f(u)+\mu f(v).$$
(There are more efficient equivalent definitions, but this should hopefully look familiar). For example, if $V=\mathbb{R}^2$ and $W=\mathbb{R}$, then the map $\alpha\colon\mathbb{R}^2\to\mathbb{R}$ defined by $\alpha(a,b)=a$ is a linear transformation.
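The linearity condition for $\alpha$ can be spot-checked numerically for a few sample vectors and scalars (a sanity check, of course, not a proof):

```python
def alpha(v):
    """The projection alpha(a, b) = a from R^2 to R."""
    a, b = v
    return a

# Spot-check f(lambda*u + mu*v) == lambda*f(u) + mu*f(v)
u, v = (1.0, 2.0), (3.0, -4.0)
lam, mu = 2.5, -0.5
combo = (lam * u[0] + mu * v[0], lam * u[1] + mu * v[1])
assert alpha(combo) == lam * alpha(u) + mu * alpha(v)
```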
Now for some of the other terms: both isomorphisms and linear functionals are specific types of linear map. A linear functional is a linear map whose codomain (i.e. $W$, in the notation above) is the field $\mathbb{K}$ itself (which is in particular a vector space over itself). Our example $\alpha$ from before is a linear functional, because $W=\mathbb{R}$.
A linear transformation is an isomorphism if it is invertible. The map $\alpha$ above is not an isomorphism because it is not injective. On the other hand, the map $\beta\colon\mathbb{R}^2\to\mathbb{R}^2$ defined by $\beta(a,b)=(b,a)$ is an isomorphism (it is in fact its own inverse!). Note, however, that $\beta$ is not a linear functional, because its codomain is not $\mathbb{R}$.
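One can check both claims about $\beta$ concretely: composing it with itself gives the identity, and (in the standard basis) its matrix has nonzero determinant, confirming invertibility. A quick sketch:

```python
import numpy as np

def beta(v):
    """The swap map beta(a, b) = (b, a) on R^2."""
    a, b = v
    return (b, a)

# beta composed with itself is the identity, so beta is its own inverse.
assert beta(beta((3, 7))) == (3, 7)

# In the standard basis, beta is represented by this permutation matrix;
# its determinant is -1, which is nonzero, so beta is invertible.
B = np.array([[0, 1],
              [1, 0]])
assert np.linalg.det(B) != 0
```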
A dual space is entirely different, and is not a type of linear transformation. Given a vector space $V$ over $\mathbb{K}$, the dual space $V^*$ is the set of all linear functionals with domain $V$, i.e. the set of all linear maps $V\to\mathbb{K}$. In fact this is more than a set; it is a vector space over $\mathbb{K}$, under the operations $(f+g)(v)=f(v)+g(v)$ and $(\lambda\cdot f)(v)=\lambda\cdot f(v)$.
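In finite dimensions, a functional on $\mathbb{R}^2$ can be identified with a row vector (this identification depends on a choice of basis), and the dual-space operations then act entrywise. A small sketch of the two operations just defined:

```python
import numpy as np

# Identify functionals on R^2 with row vectors: f(v) = f_row @ v.
f = np.array([1.0, 0.0])   # our alpha: (a, b) -> a
g = np.array([0.0, 1.0])   # the other projection: (a, b) -> b

v = np.array([3.0, 4.0])
lam = 2.0

# (f + g)(v) == f(v) + g(v)
assert (f + g) @ v == f @ v + g @ v
# (lam * f)(v) == lam * f(v)
assert (lam * f) @ v == lam * (f @ v)
```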
I hope this helps clarify the definitions a little.
Edit: You added a subquestion about matrices. I intentionally didn't use matrices anywhere in my answer. One advantage of this is that everything I say works even for infinite-dimensional vector spaces, where matrices don't really work (it is possible to imagine matrices of infinite size, but this isn't necessarily a good idea!). The other reason to avoid them is that to "turn a linear map $V\to W$ into a matrix" requires choosing bases for $V$ and $W$; this choice is arbitrary, and different choices result in different matrices, which can very quickly get confusing.
On the other hand, it is very useful to know how to check (for example) whether a linear map between finite dimensional vector spaces is invertible by choosing some bases to get a matrix representing it, and then doing computations with the matrix.
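For instance (taking the standard bases on both sides, and a map of my own choosing for illustration), invertibility of a linear map $\mathbb{R}^2\to\mathbb{R}^2$ can be checked by computing the rank of its matrix:

```python
import numpy as np

# A linear map R^2 -> R^2, given by its matrix in the standard bases.
# It is invertible iff the matrix has full rank (equivalently, det != 0).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2   # full rank, so invertible

# The inverse matrix represents the inverse linear map.
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))
```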
Best Answer
Hint: Define $T$ such that its image is contained in the kernel of $S$.
Take $V=\langle v\rangle$ to be one-dimensional, generated by $v$.
Take $W=\langle w_1,w_2\rangle$ to be two-dimensional, generated by $w_1,w_2$.
Take $U=\langle u_1,u_2\rangle$ to be two-dimensional, generated by $u_1,u_2$.
Define $T(v)=w_1$ and $S(w_1)=0, S(w_2)=u_1$ and extend them by linearity.
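Writing $T$ and $S$ as matrices in the given bases (a direct translation of the construction above) makes it easy to verify that $S\circ T$ is the zero map even though neither $S$ nor $T$ is zero:

```python
import numpy as np

# T : V -> W with T(v) = w1, in the bases {v} and {w1, w2}.
T = np.array([[1],
              [0]])

# S : W -> U with S(w1) = 0 and S(w2) = u1, in the bases {w1, w2} and {u1, u2}.
S = np.array([[0, 1],
              [0, 0]])

# The composition S o T is the zero map, although neither S nor T is zero.
assert np.all(S @ T == 0)
assert np.any(S != 0) and np.any(T != 0)
```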