Consider the inner product of $P_1(\mathbb R)$ given by
$$\langle p(x),q(x)\rangle = \int_0^1p(x)q(x)x^2 dx\quad\text{(note the factor $x^2$ in the integrand)}$$
Let $T\colon P_1(\mathbb R)\to P_1(\mathbb R)$ be given by $T(p(x))= p(x)+p'(x)$. Determine $T^*(a+bx)$ where $a,b\in\mathbb R$ are arbitrary scalars.
Here is what I've done. I don't think the answer is right, though, so I need help seeing what I'm doing wrong and how to think about it correctly:
The adjoint $T^*$ of $T$ is the linear operator satisfying the following property for all $p(x), q(x) \in P_1(\mathbb R)$:
$$\langle T(p(x)), q(x)\rangle = \langle p(x), T^*(q(x))\rangle$$
First, we calculate $T(p(x))$:
$$T(p(x)) = p(x) + p'(x)$$
Now we can use this to find $T^*(a+bx)$ for arbitrary scalars $a$ and $b$. We use the definition of the inner product:
\begin{align*}
\langle T(a+bx), q(x)\rangle &= \langle (a+bx) + (a+bx)', q(x)\rangle \\
&= \langle (a+bx) + b, q(x)\rangle
\end{align*}
Now we need to find $T^*(q(x))$. Since $T^*(q(x))$ lies in $P_1(\mathbb R)$, we can write $T^*(q(x)) = c + dx$ for unknown constants $c$ and $d$.
So now we have:
$$\langle T^*(q(x)), p(x)\rangle = \langle c + dx, p(x)\rangle$$
To get $T^*(q(x))$ from this, we need to compare the two sides of the equation and determine the constants $c$ and $d$.
To do so, we can use the properties of the given inner product:
\begin{align*}
\langle c + dx, p(x)\rangle &= \int_0^1 (c+dx)p(x)x^2\,dx
\end{align*}
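These integrals can be evaluated in closed form. As a sanity check, here is a small sketch in Python using the standard-library `fractions` module for exact arithmetic (the `inner` helper and its coefficient-list convention are my own):

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral_0^1 p(x) q(x) x^2 dx, for coefficient lists [a0, a1, ...]."""
    # Each monomial pair contributes integral_0^1 x^(i+j+2) dx = 1/(i+j+3).
    return sum(Fraction(p[i]) * Fraction(q[j]) / (i + j + 3)
               for i in range(len(p)) for j in range(len(q)))

# Against the basis, the weight x^2 gives
# <c + dx, 1> = c/3 + d/4  and  <c + dx, x> = c/4 + d/5.
c, d = 1, 1                       # illustrative values
print(inner([c, d], [1, 0]))      # 7/12  (= 1/3 + 1/4)
print(inner([c, d], [0, 1]))      # 9/20  (= 1/4 + 1/5)
```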
Matching the two sides of the equation above would require:
$$c + dx = (a+bx) + b$$
This gives us two equations:
$$c = a + b$$
$$d = b$$
So $T^*(q(x)) = (a+b) + bx$. Therefore:
$$T^*(a+bx) = (a+b) + bx$$
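A quick exact check (a Python sketch using the standard-library `fractions` module; the helper names and coefficient-list convention are my own) suggests this candidate already fails the defining identity $\langle T(p(x)),q(x)\rangle = \langle p(x),T^*(q(x))\rangle$ for $p=1$, $q=x$, which is why I suspect the answer is wrong:

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> = integral_0^1 p(x) q(x) x^2 dx, for coefficient lists [a0, a1]."""
    return sum(Fraction(p[i]) * Fraction(q[j]) / (i + j + 3)
               for i in range(len(p)) for j in range(len(q)))

def T(p):
    """T(a + bx) = (a + b) + bx."""
    a, b = p
    return [a + b, b]

def T_star_candidate(q):
    """The candidate T*(a + bx) = (a + b) + bx derived above."""
    a, b = q
    return [a + b, b]

p, q = [1, 0], [0, 1]                 # p(x) = 1, q(x) = x
lhs = inner(T(p), q)                  # <T(1), x> = <1, x> = 1/4
rhs = inner(p, T_star_candidate(q))   # <1, 1 + x> = 1/3 + 1/4 = 7/12
print(lhs, rhs, lhs == rhs)           # 1/4 7/12 False
```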
Best Answer
You need to use the definition of the adjoint here to your advantage. First note that in the canonical basis $\{1,x\}\to \{(1,0)^T,(0,1)^T\}$ we can represent $T$ as a matrix. Since $T(1)=1$ and $T(x)=1+x$, its columns are the coordinate vectors of these images:
$$T=\begin{pmatrix}1&1\\0&1\end{pmatrix}$$
Since the adjoint acts on the same space, it is also represented by a $2\times 2$ matrix, whose most general form is
$$T^*=\begin{pmatrix}\alpha&\gamma\\\beta&\delta\end{pmatrix}$$
We can write a matrix equation for $T^*$ by looking at the inner products with the basis and using the definition of the adjoint to calculate them in two ways. To wit,
$$\langle T^*(1),1\rangle=\alpha\langle1,1\rangle+\beta\langle x,1\rangle=\langle1,T(1)\rangle$$ $$\langle T^*(1),x\rangle=\alpha\langle1,x\rangle+\beta\langle x,x\rangle=\langle1,T(x)\rangle$$ $$\langle T^*(x),1\rangle=\gamma\langle 1,1\rangle+\delta\langle x,1\rangle=\langle x,T(1)\rangle$$ $$\langle T^*(x),x\rangle=\gamma\langle 1,x\rangle+\delta\langle x,x\rangle=\langle x,T(x)\rangle$$
These equations can be recast in matrix form
$$\begin{pmatrix}\langle 1,1\rangle&\langle x,1\rangle\\\langle 1,x\rangle&\langle x,x\rangle\end{pmatrix}\begin{pmatrix}\alpha&\gamma\\\beta&\delta\end{pmatrix}=\begin{pmatrix}\langle 1,T(1)\rangle&\langle x,T(1)\rangle\\\langle 1,T(x)\rangle&\langle x,T(x)\rangle\end{pmatrix}$$
and now one can invert the inner product (Gram) matrix on the left to obtain the values of $\alpha, \beta, \gamma, \delta$.
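Carrying this out for the present example can be done by hand, or checked with a short sketch in Python (standard-library `fractions` for exact arithmetic; the $2\times 2$ helper code and names are my own):

```python
from fractions import Fraction

# Gram matrix of {1, x}: G[i][j] = <x^i, x^j> = integral_0^1 x^(i+j+2) dx = 1/(i+j+3)
G = [[Fraction(1, i + j + 3) for j in range(2)] for i in range(2)]
# G == [[1/3, 1/4], [1/4, 1/5]]

def inner(p, q):
    """<p, q> for coefficient lists [a0, a1]."""
    return sum(Fraction(p[i]) * q[j] * G[i][j] for i in range(2) for j in range(2))

def T(p):
    a, b = p
    return [a + b, b]                 # T(a + bx) = (a + b) + bx

e = [[1, 0], [0, 1]]                  # coordinates of the basis vectors 1 and x

# Right-hand side of the matrix equation: entry (i, j) = <e_j, T(e_i)>
rhs = [[inner(e[j], T(e[i])) for j in range(2)] for i in range(2)]

# Invert the 2x2 Gram matrix and solve G * Tstar = rhs
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]
Tstar = [[sum(Ginv[i][k] * rhs[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]

def apply(M, v):
    """Apply a 2x2 matrix to a coordinate vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# Sanity check: the defining identity holds on all basis pairs.
assert all(inner(T(p), q) == inner(p, apply(Tstar, q)) for p in e for q in e)
print(Tstar)   # entries -19, -15, 80/3, 21 as Fractions
```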
For finite-dimensional linear spaces with an inner product this procedure can be easily generalized. In fact, the following matrix equation holds
$$T^{*}=G^{-1}T^{\dagger} G$$
where the dagger denotes hermitian conjugation and $G_{ij}=\langle e_i, e_j\rangle$ is the Gram matrix. (Indeed, writing $\langle u,v\rangle=u^{\dagger}Gv$ in coordinates, the defining identity $\langle T^{*}u,v\rangle=\langle u,Tv\rangle$ becomes $u^{\dagger}(T^{*})^{\dagger}Gv=u^{\dagger}GTv$ for all $u,v$, so $(T^{*})^{\dagger}G=GT$.) When the basis is orthonormal with respect to the inner product, $G$ is the identity, and hence the well-known fact that the adjoint is the hermitian conjugate follows.
Can you proceed from here?