Well, you are correct that $V=W\oplus W'$, but a much simpler approach is to use the fact that for any projection we have $T^2=T$, so that $T(T-I)=0$. Hence, in general, the minimal polynomial of $T$ is $m_T(x)=x(x-1)$ (the "trivial" special cases being $T = 0$, so that $W=0$, and $T = I$, so that $W'=0$). Since this polynomial consists only of distinct linear factors, $T$ is diagonalizable with eigenvalues $1$ and $0$.
Now we have $T(w)=w$ if and only if $w \in W$, so the eigenspace associated with $1$ is $W$. Similarly, we have $T(w')=0$ if and only if $w' \in W'$, so the eigenspace associated with $0$ is $W'$. Hence any basis $\beta=\beta_W \cup \beta_{W'}$, where $\beta_W$ and $\beta_{W'}$ are bases for $W$ and $W'$ respectively, is such that $[T]_\beta$ is diagonal.
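As a quick numerical sketch of this (the particular projection below is my own illustrative choice, not from the problem): any matrix with $P^2 = P$ must have eigenvalues drawn from $\{0, 1\}$ and be diagonalizable.

```python
import numpy as np

# A hypothetical projection on R^2: onto W = span{(1, 0)}
# along W' = span{(-1, 1)}. These subspaces are illustrative choices.
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])

# P^2 = P, so P(P - I) = 0: the minimal polynomial divides x(x - 1).
assert np.allclose(P @ P, P)

# The eigenvalues are therefore 0 and 1, and P is diagonalizable.
assert np.allclose(np.sort(np.linalg.eigvals(P)), [0.0, 1.0])
```

The eigenspace for $1$ recovers $W$ and the eigenspace for $0$ recovers $W'$, exactly as argued above.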
Start with the first part. As you said, the eigenvalues are $\lambda = \pm 1$. I'm not 100% clear on how you've gone about this, but even if it was just a guess, we will vindicate it in the end.
Let's first look for the eigenvectors corresponding to $\lambda = -1$. We wish to solve the equation
$$T(A) = A^\top = (-1)A,$$
where $A$ is a $2 \times 2$ real matrix. Matrices $A$ in $M_{2 \times 2}(\Bbb{R})$ take the form
$$A = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$
Then, we are solving
$$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} -a & -b \\ -c & -d\end{pmatrix}.$$
By equating entries,
\begin{align*}
a &= -a \\
c &= -b \\
b &= -c \\
d &= -d.
\end{align*}
This is now a system of linear equations. The first and fourth equations imply that $a = 0$ and $d = 0$, while the second and third equations say the same thing: $c = -b$. Thus, our eigenvector must take the following form:
$$A = \begin{pmatrix} 0 & -b \\ b & 0\end{pmatrix},$$
where $b \in \Bbb{R}$. Please verify that $T(A) = -A$, as required, so $A$ is definitely an eigenvector for eigenvalue $-1$, so long as $A \neq 0$. That is, so long as $b \neq 0$. As such, our eigenspace is spanned by a single vector:
$$\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$
We can do the same thing for the other eigenvalue. We're now solving
$$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$
By equating entries,
\begin{align*}
a &= a \\
c &= b \\
b &= c \\
d &= d.
\end{align*}
Now, the first and fourth equations are tautologies and can be ignored, while the second and third equations again tell us the same thing: $c = b$. Thus, our eigenvectors take the form,
\begin{align*}
A &= \begin{pmatrix} a & b \\ b & d\end{pmatrix} \\
&= a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
\end{align*}
Thus, $A$ must be a linear combination of the above three matrices. Verify that they are eigenvectors, that they're linearly independent, and hence conclude that they form a basis for the eigenspace.
So, we can form an eigenbasis
$$\left\{\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right\}.$$
Note that if we were missing any eigenvalues, we wouldn't have four linearly independent eigenvectors, so indeed, $-1$ and $+1$ are the only two eigenvalues.
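We can confirm all of this numerically by writing the transpose map as a $4 \times 4$ matrix acting on coordinates $(a, b, c, d)$ (a sketch; the row-by-row vectorization is my own convention, not from the problem):

```python
import numpy as np

# Transpose on M_2(R) in coordinates (a, b, c, d): it swaps b and c.
T = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])

# The four eigenmatrices above, flattened row by row; columns of `basis`.
basis = np.array([
    [0, -1, 1, 0],   # skew-symmetric part: eigenvalue -1
    [1,  0, 0, 0],   # symmetric: eigenvalue +1
    [0,  1, 1, 0],   # symmetric: eigenvalue +1
    [0,  0, 0, 1],   # symmetric: eigenvalue +1
]).T

# Each claimed eigenvector really is one, with the claimed eigenvalue.
for v, lam in zip(basis.T, [-1, 1, 1, 1]):
    assert np.array_equal(T @ v, lam * v)

# Four linearly independent eigenvectors: an eigenbasis of M_2(R).
assert np.linalg.matrix_rank(basis) == 4
```

Since the four eigenvectors are independent and the space is four-dimensional, there is no room for any further eigenvalue.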
That's it for part 1). For part 2), I would suggest thinking about how this generalises. Our eigenbasis from the previous part consisted of matrices with a single $1$ in the diagonal (and $0$s elsewhere), as well as a matrix with two symmetric off-diagonal $1$s, and another matrix where these $1$s had different signs. Think about how you'd generalise this to more dimensions.
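One way to check the expected general pattern numerically (a sketch, under my own vectorization convention): on $M_n(\Bbb{R})$, the $+1$-eigenspace of the transpose map should be the symmetric matrices, of dimension $n(n+1)/2$, and the $-1$-eigenspace the skew-symmetric matrices, of dimension $n(n-1)/2$.

```python
import numpy as np

def transpose_operator(n):
    """The n^2 x n^2 matrix of A -> A^T on M_n(R), row-major vectorization."""
    T = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            T[i * n + j, j * n + i] = 1.0
    return T

for n in range(2, 6):
    # The operator is a symmetric permutation matrix, so eigvalsh applies.
    eigvals = np.linalg.eigvalsh(transpose_operator(n))
    plus = int(np.sum(np.isclose(eigvals, 1.0)))
    minus = int(np.sum(np.isclose(eigvals, -1.0)))
    assert plus == n * (n + 1) // 2    # symmetric matrices
    assert minus == n * (n - 1) // 2   # skew-symmetric matrices
```

This matches the basis we built for $n = 2$: three symmetric eigenmatrices and one skew-symmetric one.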
Let's represent $T$ as a matrix acting on $P_1(\mathbb{R})$, with ordered basis $(1,x)$. Then, we can compute that $T(1) = 2x + 1$ and $T(x) = -6x -6$, so the matrix of $T$ is $$ \begin{pmatrix} 1 & -6 \\ 2 & -6 \end{pmatrix}, $$ since the first column is the vector $T(1)$ and the second column is the vector $T(x)$, both with respect to the basis $(1,x)$. Now, we can go about diagonalizing this matrix as we normally would: it has eigenvalues $\lambda_1 = -3$ and $\lambda_2 = -2$ with eigenvectors $\begin{pmatrix} 3 \\ 2 \end{pmatrix}$ and $\begin{pmatrix}2 \\ 1 \end{pmatrix}$ respectively. It follows that $$ \begin{pmatrix} -3 & 0 \\ 0 & -2 \end{pmatrix} = \begin{pmatrix} 3 & 2 \\ 2 & 1 \end{pmatrix}^{-1}\begin{pmatrix} 1 & -6 \\ 2 & -6 \end{pmatrix}\begin{pmatrix} 3 & 2 \\ 2 & 1 \end{pmatrix}. $$ That is, the above diagonal matrix is the matrix of the linear transformation $T$ with respect to the ordered basis $(3 + 2x,\ 2 + x)$: the polynomials whose coordinate vectors, with respect to $(1,x)$, are the columns of $\begin{pmatrix} 3 & 2 \\ 2 & 1 \end{pmatrix}$.
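The diagonalization above is easy to double-check numerically:

```python
import numpy as np

# Matrix of T on P_1(R) in the ordered basis (1, x), as computed above.
M = np.array([[1.0, -6.0],
              [2.0, -6.0]])

# Columns are the eigenvectors (3, 2) and (2, 1),
# i.e. the polynomials 3 + 2x and 2 + x.
Q = np.array([[3.0, 2.0],
              [2.0, 1.0]])

# Q^{-1} M Q should be diag(-3, -2).
D = np.linalg.inv(Q) @ M @ Q
assert np.allclose(D, np.diag([-3.0, -2.0]))
```

Note that conjugating by the eigenvector matrix in this order puts $\lambda_1 = -3$ first, matching the order of the columns of $Q$.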