The dual space $V^*$ of a $K$-vector space $V$ is defined as the set of all linear maps from $V$ to $K$:
$$V^*=\{\lambda \colon V \to K \mid \lambda \text{ linear}\}.$$ Since we can identify every linear map with a matrix, we denote the elements of the dual space by $1\times n$-matrices, where $n$ is the dimension of $V$. Now let $\beta^* = \{f_1,f_2,f_3\}$ be the dual basis of $\beta = \{v_1,v_2,v_3\}$. By definition, the dual basis must satisfy $f_i(v_j) = \delta_{ij}$. We can convert this into a system of linear equations:
$$\begin{pmatrix}
f_1^1 & f_1^2 & f_1^3 \\
f_2^1 & f_2^2 & f_2^3 \\
f_3^1 & f_3^2 & f_3^3
\end{pmatrix} \cdot
\begin{pmatrix}
v_1^1 & v_2^1 & v_3^1 \\
v_1^2 & v_2^2 & v_3^2 \\
v_1^3 & v_2^3 & v_3^3
\end{pmatrix} =
\begin{pmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}.$$
The first matrix on the left-hand side is obtained by writing the dual basis vectors (the $1\times n$ transformation matrices) row-wise; call it $A$. The second matrix on the left-hand side is obtained by writing the basis vectors column-wise; call it $B$. If we multiply the above equation by $B^{-1}$ from the right, we get
$$A= \mathbb{1}_\mathrm n \cdot B^{-1} = B^{-1}.$$ This means the rows of $B^{-1}$ are the dual basis vectors.
In summary: write $v_1,v_2,v_3$ column-wise into a matrix, invert it, and read the dual basis vectors off the rows.
The solution would be $$f_1=(1,-1,0),\quad f_2=(1,-1,1),\quad f_3=(-1/2,\,1,\,-1/2).$$
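As a quick check of this recipe, here is a short NumPy sketch. The basis below is my own reconstruction of one that is consistent with the stated solution; it is not necessarily the basis from the original question.

```python
import numpy as np

# Hypothetical basis, reconstructed to be consistent with the solution above.
v1, v2, v3 = [1, 0, -1], [1, 1, 1], [2, 2, 0]

# Write the basis vectors column-wise into B.
B = np.column_stack([v1, v2, v3])

# The rows of B^{-1} are the dual basis vectors f_1, f_2, f_3.
F = np.linalg.inv(B)
print(F)  # rows: (1, -1, 0), (1, -1, 1), (-1/2, 1, -1/2)

# Sanity check: F @ B is the identity, i.e. f_i(v_j) = delta_ij.
assert np.allclose(F @ B, np.eye(3))
```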
Both $P$ and $Q$ are matrices of identity transformations, so $Q=[I_V]_\beta^{\beta'}$ and
\begin{align*}P^{-1}=([I_W]_\gamma^{\gamma'})^{-1}=[I_W^{-1}]_{\gamma'}^\gamma=[I_W]_{\gamma'}^\gamma\end{align*}
since the inverse of $I_W$ is itself. Now
\begin{align*}P^{-1}[T]_{\gamma}^{\beta} Q&=[I_W]_{\gamma'}^\gamma[T]_\gamma^\beta[I_V]_\beta^{\beta'}\\
&=[I_WT]_{\gamma'}^{\beta}[I_V]_\beta^{\beta'}\\
&=[T]_{\gamma'}^\beta[I_V]_\beta^{\beta'}\\
&=[TI_V]_{\gamma'}^{\beta'}\\
&=[T]_{\gamma'}^{\beta'}\end{align*}
since matrix multiplication corresponds to composition of linear transformations.
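As a numerical sanity check, here is a minimal NumPy sketch of this identity. All names and matrices below are placeholders of my own choosing: the bases are encoded as (almost surely invertible) random matrices whose columns are the basis vectors in standard coordinates, and $[T]_\gamma^\beta = C^{-1}MB$ with $M$ the standard matrix of $T$.

```python
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv

M = rng.random((3, 2))                            # T : R^2 -> R^3 in standard coordinates
B, Bp = rng.random((2, 2)), rng.random((2, 2))    # columns: beta, beta'
C, Cp = rng.random((3, 3)), rng.random((3, 3))    # columns: gamma, gamma'

T_gb = inv(C) @ M @ B        # [T]_gamma^beta (beta-coords in, gamma-coords out)
Q = inv(B) @ Bp              # [I_V]_beta^beta'
P = inv(C) @ Cp              # [I_W]_gamma^gamma'

# The identity derived above: P^{-1} [T]_gamma^beta Q = [T]_gamma'^beta'.
lhs = inv(P) @ T_gb @ Q
rhs = inv(Cp) @ M @ Bp       # [T]_gamma'^beta' computed directly
assert np.allclose(lhs, rhs)
```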
In case you don't know why, here is a bonus on matrix multiplication. (Note that the notation I use differs from yours: for $T:V_\beta\to W_\gamma$ I write $[T]_\beta^\gamma$.)
Let $T:\mathsf{V}\to\mathsf{W}$ be a linear transformation and let $\beta=\{v_1,v_2\}$ and $\gamma=\{w_1,w_2\}$ be bases of $\mathsf{V}$ and $\mathsf{W}$, respectively. The value of interest is
$$T(v).$$
Let $v=xv_1+yv_2$, then
$$\begin{align}
T(v)&=T(xv_1+yv_2)\\
&=xT(v_1)+yT(v_2).
\end{align}$$
No matter what the value of $v$ is, only $T(v_1)$ and $T(v_2)$ are needed, so the notation can be simplified. Let
$$T(v_1)=aw_1+bw_2,\\
T(v_2)=cw_1+dw_2,$$
and represent $T(v_1), T(v_2)$ in columns:
$$
\begin{array}{ll}
T(v_1) & T(v_2)\\
aw_1 & cw_1\\
{+} & {+}\\
bw_2 & dw_2\\
\end{array}
$$
Put $w_1, w_2$ on the left side as labels and omit the plus signs:
$$
\begin{array}{lll}
& T(v_1) & T(v_2) \\
w_1 & a & c \\
w_2 & b & d \\
\end{array}
$$
Since $T(v)=xT(v_1)+yT(v_2)$, writing $T(v)=ew_1+fw_2$ gives
$$
\begin{array}{llll}
& x & y & \\
& T(v_1)\ \ \ + & T(v_2)\ \ = & T(v) \\
w_1 & a & c & e \\
w_2 & b & d & f \\
\end{array}
$$
An $\color{blue}{operation}$ can be defined such that
$$
e=\color{blue}{x}a+\color{blue}{y}c\\
f=\color{blue}{x}b+\color{blue}{y}d
$$
that is
$$
\begin{bmatrix}e\\f\end{bmatrix}
{=}
\begin{bmatrix}a & c\\b & d\end{bmatrix}
\color{blue}{oper.}
\begin{bmatrix}\color{blue}{x}\\\color{blue}{y}\end{bmatrix}.
$$
The order in which $w_1, w_2$ are listed matters in this notation, so the idea of an ordered basis is required to denote the linear transformation matrix
$$\large[T]_\beta^\gamma$$
where the subscript $\beta$ means that the matrix acts as a transformation when you multiply it with
$$\large[v]_\beta$$
the coordinate vector relative to $\beta$; you then get the output as a coordinate vector relative to $\gamma$. We like this operation, so the $\color{blue}{operation}$ is defined such that
$$\large[T(v)]_\gamma = [T]_\beta^{\gamma} \color{blue}{\Large\cdot} [v]_\beta$$
Now open your book, find the definition of $\color{blue}{matrix\ multiplication}$ again and appreciate it.
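To make the defining property concrete, here is a tiny NumPy sketch; the values of $a,b,c,d$ and $x,y$ are arbitrary placeholders.

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 4.0    # entries of [T]_beta^gamma (placeholders)
x, y = 5.0, 6.0                    # coordinates of v, i.e. [v]_beta

T_bg = np.array([[a, c],
                 [b, d]])          # columns: [T(v_1)]_gamma, [T(v_2)]_gamma
v_b = np.array([x, y])

# [T(v)]_gamma = [T]_beta^gamma . [v]_beta, i.e. e = xa + yc and f = xb + yd.
e, f = T_bg @ v_b
assert e == x * a + y * c and f == x * b + y * d
```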
--
Since a linear transformation matrix $\large[U]_\alpha^\beta$ can be decomposed, from left to right, into column vectors $[U(a_1)]_\beta,\dots,[U(a_n)]_\beta$ (each a coordinate vector), the composition of $\large[T]_{\beta}^{\gamma}$ and $\large[U]_{\alpha}^{\beta}$ is
\begin{align*}
\large[T]_\beta^\gamma[U]_\alpha^\beta
&=\large[T]_\beta^\gamma[U(a_1)]_\beta \Bigg| [T]_\beta^\gamma[U(a_2)]_\beta\Bigg|\dots\Bigg|[T]_\beta^\gamma[U(a_n)]_\beta\\
&=\large[T(U(a_1))]_\gamma\Bigg|[T(U(a_2))]_\gamma\Bigg|\dots\Bigg|[T(U(a_n))]_\gamma\\
&=\large[TU(a_1)]_\gamma\Bigg|[TU(a_2)]_\gamma\Bigg|\dots\Bigg|[TU(a_n)]_\gamma\\
&=\large[TU]_\alpha^\gamma
\end{align*}
(The vertical bars denote augmentation of column vectors.)
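Here is a quick numerical check of this column-wise picture; the shapes and entries are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
T_bg = rng.random((3, 2))          # [T]_beta^gamma
U_ab = rng.random((2, 4))          # [U]_alpha^beta

# Column j of the product is [T]_beta^gamma applied to [U(a_j)]_beta,
# i.e. [T(U(a_j))]_gamma.
cols = np.column_stack([T_bg @ U_ab[:, j] for j in range(U_ab.shape[1])])
assert np.allclose(T_bg @ U_ab, cols)
```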
Best Answer
I believe that your main issue is that you are used to thinking of bases in an abstract fashion. That is, if $\beta:=\{x_1, \ldots, x_n\}$ is a basis for a vector space $X$, then the dual basis $\beta^*=\{f_1, \ldots, f_n\}$ consists of the linear functionals such that $f_{i}(x_j)=\delta_{i,j}$. However, for this question you have some concrete vector spaces and some well-known bases for each of them.
First of all, since $\beta$ is the standard ordered basis for $P_1(\Bbb{R})$, we actually have $\beta=\{1, x\}$. Thus, the dual basis is $\beta^*=\{f_1, f_2\}$, where $f_1, f_2 : P_1(\Bbb{R}) \to \Bbb{R}$ are such that $f_1(1)=1$, $f_1(x)=0$, $f_2(1)=0$ and $f_2(x)=1$ (think of $1$ as $x_1$ and $x$ as $x_2$ in the abstract fashion above). Hopefully this answers one of your questions.
Similarly, $\gamma=\{(1,0), (0,1)\}$ is the standard basis for $\Bbb{R}^2$, and therefore the dual basis is $\gamma^*:=\{ g_1 ,g_2\}$ where $g_1, g_2: \Bbb{R}^2 \to \Bbb{R}$ are such that $g_1(1,0)=1$, $g_1(0,1)=0$, $g_2(1,0)=0$ and $g_2(0,1)=1$ (think of $(1,0)$ as $x_1$ and $(0,1)$ as $x_2$ in the abstract fashion above). Therefore, since $g_1$ is linear, $$ g_1(1,1)=g_1( (1,0)+(0,1) ) = g_1(1,0)+g_1(0,1)=1+0=1. $$ This should answer what $g_1(1,1)$ is and why it is equal to $1$.
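If it helps, these functionals can be encoded as coordinate projections; here is a minimal sketch (the function names are mine, not from the question):

```python
# Represent p = a + b*x in P_1(R) by its beta-coordinates (a, b),
# and w in R^2 by its standard coordinates (u, v).
def f1(p): return p[0]   # f_1(1) = 1, f_1(x) = 0, so f_1(a + b*x) = a
def f2(p): return p[1]   # f_2(1) = 0, f_2(x) = 1, so f_2(a + b*x) = b
def g1(w): return w[0]   # g_1(1,0) = 1, g_1(0,1) = 0
def g2(w): return w[1]   # g_2(1,0) = 0, g_2(0,1) = 1

# Linearity: g_1(1,1) = g_1(1,0) + g_1(0,1) = 1 + 0 = 1.
assert g1((1, 1)) == 1
```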
Finally, your main goal is to find the entries $a,b,c$ and $d$ of the matrix of the linear transformation $T^t$ with respect to the bases $\gamma^*$ and $\beta^*$, i.e. $[T^t]_{\gamma^*}^{\beta^*}=\begin{pmatrix} a & b\\ c & d\end{pmatrix}$, so that $T^t(g_1)=af_1+cf_2$ and $T^t(g_2)=bf_1+df_2$. To do this you have to use that there are two ways to compute $T^t(g_1)(1)$, namely
$$T^t(g_1)(1)=(af_1+cf_2)(1)=a \qquad\text{and}\qquad T^t(g_1)(1)=(g_1\circ T)(1)=g_1(T(1)).$$
This gives you the value of $a$. Analogously, there are two ways to compute $T^t(g_1)(x)$, namely
$$T^t(g_1)(x)=(af_1+cf_2)(x)=c \qquad\text{and}\qquad T^t(g_1)(x)=(g_1\circ T)(x)=g_1(T(x)).$$
This now gives the value of $c$. Similarly, computing both $T^t(g_2)(1)$ and $T^t(g_2)(x)$ the matrix way and the definition way should let you find the values of $b$ and $d$.
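To illustrate the mechanics only, here is a sketch with a hypothetical $T$ (the original $T$ is not quoted in this excerpt); say $T(a+bx)=(a,\,a+b)$, with polynomials stored as $\beta$-coordinates:

```python
import numpy as np

def T(p):                      # hypothetical T: a + b*x  |->  (a, a + b)
    a, b = p
    return (a, a + b)

def g1(w): return w[0]         # gamma^* as coordinate projections
def g2(w): return w[1]

one, x = (1, 0), (0, 1)        # beta = {1, x} in beta-coordinates

# Definition way: T^t(g) = g o T; evaluating at 1 and x gives the
# beta^*-coordinates of T^t(g), filling the matrix column by column.
a, c = g1(T(one)), g1(T(x))    # T^t(g_1) = a*f_1 + c*f_2
b, d = g2(T(one)), g2(T(x))    # T^t(g_2) = b*f_1 + d*f_2
M = np.array([[a, b],
              [c, d]])         # [T^t]_{gamma^*}^{beta^*}
print(M)                       # [[1 1], [0 1]] for this hypothetical T
```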
Do you think you can take it from here now?
I hope this is helpful.