If you have a basis $\mathcal B$ of a vector space $V$, the dual basis $\mathcal B^*$ of $V^*$ simply consists of the coordinate functions with respect to $\mathcal B$ (the parallel projections onto the vectors of $\mathcal B$).
Thus, in the example you mention, the standard basis is the set of matrices $E_{ij}=(e_{kl})$, where $e_{kl}=0$ if $(k,l)\ne (i,j)$ and $e_{ij}=1$. Its dual basis is made up of the maps
$$A=(a_{kl})\longmapsto a_{ij}.$$
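In code, these dual basis functionals just read off a single entry of the matrix. A minimal sketch (the helper name `dual_E` is my own):

```python
import numpy as np

# E_ij^*(A) = a_ij: the dual basis element for E_ij reads off entry (i, j) of A.
def dual_E(i, j):
    return lambda A: A[i, j]

A = np.array([[5, 7],
              [1, 3]])

f01 = dual_E(0, 1)   # corresponds to E_{12}^* in 1-based notation
value = f01(A)       # the (1, 2) entry of A, namely 7
```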
Second question: the standard basis is used simply because all coordinates are given in the standard basis. Note, however, that the dual basis does not transform with the same coefficients as the original basis: if $f_1=2e_1+e_2$ and $f_2=3e_1+e_2$, then solving $f_i^*(f_j)=\delta_{ij}$ gives $f_1^*=-e_1^*+3e_2^*$ and $f_2^*=e_1^*-2e_2^*$ (the coefficient vectors are the rows of the inverse of the change-of-basis matrix), not $f_1^*=2e_1^*+e_2^*$.
Last question: no, the dual basis does not induce an inner product, because the dual-basis construction works for vector spaces over arbitrary fields, while inner products are defined only for real (or complex) vector spaces. However, for real vector spaces the notions are linked through a natural pairing:
\begin{align*}
V\times V^*&\to \mathbf R,\\
(x,u)&\mapsto u(x).
\end{align*}
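In coordinates, this pairing is just the dot product of the coordinate vector of $x$ with the coefficient vector of $u$. A small numerical sketch (the function name `pairing` is my own):

```python
import numpy as np

# A functional u = a*e1* + b*e2* is stored as its coefficient vector (a, b);
# the natural pairing (x, u) -> u(x) then becomes an ordinary dot product.
def pairing(x, u):
    return float(np.dot(u, x))

x = np.array([1.0, 2.0])
u = np.array([7.0, 4.0])   # the functional (x, y) -> 7x + 4y
value = pairing(x, u)      # 7*1 + 4*2 = 15
```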
Let me first give some general context before considering the question at hand. If $V$ is a finite-dimensional real vector space, the dual space of $V$ is the space of all linear maps $f \colon V \rightarrow \mathbb{R}$. Such maps $f$ are called linear functionals: you feed them vectors in $V$ and they spit out scalars. Given a basis $\beta = (v_1, \dots, v_n)$ for $V$, one can construct a basis $\beta^{*} = (f_1, \dots, f_n)$ for the dual space $V^{*}$ that satisfies $f_i(v_j) = \delta_{ij}$ (where $\delta_{ij} = 1$ if $i = j$ and $0$ otherwise). The dual basis $\beta^{*}$ is determined uniquely by the original basis $\beta$. In particular, this shows that if $V$ is $n$-dimensional then so is the dual space $V^{*}$.
If $T \colon V \rightarrow V$ is a linear map, it induces a linear map $T^{*} \colon V^{*} \rightarrow V^{*}$ between the dual spaces by the formula $T^{*}(f)(v) = f(Tv)$ (that is, the linear functional $T^{*}(f)$ eats a vector $v \in V$, applies $T$ to it and then applies $f$ to the result).
In part $(a)$, you are asked to compute the linear functional $T^{*}(f)$ (you denote it by $T^{t}(f)$ but I think it is best to reserve this notation only for matrices in order to avoid some confusion). Let us try and do that. The expression $T^{*}(f)$ should be a linear functional on $\mathbb{R}^2$ so let us try and feed it with a vector $(x,y)$:
$$ (T^{*}(f))(x,y) = f(T(x,y)) = f(3x + 2y, x) = 2(3x + 2y) + x = 7x + 4y.$$
Thus, if we set $g(x,y) = 7x + 4y$, we see that $T^{*}(f) = g$.
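The identity $T^{*}(f) = g$ can be checked numerically by composing $f$ with $T$ and comparing against $g$ on a few vectors; a minimal sketch (function names are my own):

```python
# T(x, y) = (3x + 2y, x) and f(x, y) = 2x + y, as in the computation above.
def T(v):
    x, y = v
    return (3*x + 2*y, x)

def f(v):
    x, y = v
    return 2*x + y

# T*(f) = f composed with T; it should agree with g(x, y) = 7x + 4y everywhere.
def Tstar_f(v):
    return f(T(v))

def g(v):
    x, y = v
    return 7*x + 4*y
```

Evaluating both sides on the standard basis vectors already pins down the coefficients $7$ and $4$.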
In part $(b)$, you are asked to compute the matrix representation of the dual operator $T^{*}$ with respect to a given basis $(f_1,f_2)$. The basis $(f_1,f_2)$ is said to be the dual basis to the standard basis $(e_1,e_2)$. Let us try and write $f_1,f_2$ explicitly. A general linear functional $f$ on $\mathbb{R}^2$ has the form $f(x,y) = ax + by$ for some $a,b \in \mathbb{R}$. Writing $f_1(x,y) = ax + by$, we see that it must satisfy
$$ f_1(e_1) = f_1(1,0) = a = 1, \qquad f_1(e_2) = f_1(0,1) = b = 0 $$
and so $f_1(x,y) = x$. Similarly, $f_2(x,y) = y$ and so the dual basis acts on a vector $(x,y)$ simply by returning the coordinates of the vector. Now, in order to compute the matrix representation of $T^{*}$ with respect to the basis $(f_1,f_2)$, we must compute $T^{*}(f_1),T^{*}(f_2)$ and express the result in terms of $f_1, f_2$:
$$ T^{*}(f_1) = a f_1 + c f_2, \qquad T^{*}(f_2) = b f_1 + d f_2. $$
Having done that, we will know that
$$ [T^{*}]_{\beta^{*}} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} $$
(this has nothing to do with dual spaces, it simply follows from the definition of what it means to represent an operator as a matrix with respect to a given basis). In our case,
$$ (T^{*}(f_1))(x,y) = f_1(T(x,y)) = f_1(3x + 2y, x) = 3x + 2y = (3f_1 + 2f_2)(x,y), \\
(T^{*}(f_2))(x,y) = f_2(T(x,y)) = f_2(3x + 2y, x) = x = f_1(x,y) $$
and so $T^{*}(f_1) = 3f_1 + 2f_2, T^{*}(f_2) = f_1$ and
$$ [T^{*}]_{\beta^{*}} = \begin{pmatrix} 3 & 1 \\ 2 & 0 \end{pmatrix}. $$
Finally, for part $(c)$, we need to compute $[T]_{\beta}$ and so we need to compute $T(e_1),T(e_2)$ and express the result in terms of $e_1,e_2$:
$$ T(e_1) = T(1,0) = (3, 1) = 3e_1 + e_2, \\
T(e_2) = T(0,1) = (2, 0) = 2e_1 + 0 \cdot e_2 $$
and we get
$$ [T]_{\beta} = \begin{pmatrix} 3 & 2 \\ 1 & 0 \end{pmatrix}, \left( [T]_{\beta} \right)^{t} = \begin{pmatrix} 3 & 1 \\ 2 & 0 \end{pmatrix}. $$
You might notice that we got $[T^{*}]_{\beta^{*}} = \left( [T]_{\beta} \right)^t$ and in fact you can prove that this will always be the case.
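The relation $[T^{*}]_{\beta^{*}} = \left( [T]_{\beta} \right)^t$ computed above can be checked directly with the two matrices (variable names are my own):

```python
import numpy as np

# [T]_beta: columns are T(e1) = (3, 1) and T(e2) = (2, 0).
T_mat = np.array([[3, 2],
                  [1, 0]])

# [T*]_{beta*} as computed in part (b).
Tstar_mat = np.array([[3, 1],
                      [2, 0]])

# The matrix of the dual operator is the transpose of the matrix of T.
same = np.array_equal(Tstar_mat, T_mat.T)
```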
The definition of $\{f_1,f_2\}$ as the dual basis to a basis $\{v_1,v_2\}$, say, is that the $f_i$ are the linear maps such that $f_i(v_j) = \delta_{ij}$, extended linearly to all of $V$. In other words, if $V$ is $n$-dimensional, then $f_i(\sum_{j=1}^n \lambda_jv_j) = \lambda_i$. In the case you're given, $v_1 = (2,1)$ and $v_2 = (3,1)$, so $f_1(2,1) = 1$, $f_1(3,1) = 0$, $f_2(2,1) = 0$, and $f_2(3,1) = 1$.
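Concretely, the coefficient vectors of $f_1, f_2$ can be read off as the rows of the inverse of the matrix whose columns are $v_1, v_2$; a small numerical sketch (variable names are my own):

```python
import numpy as np

# Columns are the basis vectors v1 = (2, 1) and v2 = (3, 1).
P = np.array([[2.0, 3.0],
              [1.0, 1.0]])

# Rows of P^{-1} give the dual functionals, since (P^{-1} P)_{ij} = f_i(v_j) = delta_{ij}.
D = np.linalg.inv(P)

f1 = D[0]   # coefficients of f1, so f1(x, y) = f1[0]*x + f1[1]*y
f2 = D[1]

# Defining property of the dual basis: should be (1, 0, 0, 1) up to rounding.
checks = (f1 @ P[:, 0], f1 @ P[:, 1], f2 @ P[:, 0], f2 @ P[:, 1])
```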