Your confusion appears to be coming from an assumption that $V=\mathbb F^n$, that is, that vectors are $n$-tuples of scalars. The notation might make more sense to you if you choose some other set of objects as your vectors, such as polynomials of degree less than $n$ with real coefficients†, so that the important distinction between vectors and their coordinates is more apparent: the vector $\mathbf v$ is then a polynomial, while its coordinate tuple with respect to some ordered basis $\mathcal B$, denoted by $[\mathbf v]_{\mathcal B}$, is an $n$-tuple of real numbers. This notation highlights and maintains the difference between a vector and its coordinate tuple, even when the vectors are themselves tuples of scalars.††
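For a concrete picture of this distinction, here is a short NumPy sketch (my own illustration, not from the text): the same polynomial has different coordinate tuples relative to different ordered bases.

```python
import numpy as np

# The polynomial p(x) = 2 + 3x + x^2 is a single object; its coordinate tuple
# depends on the ordered basis chosen for the polynomials of degree at most 2.
p = np.array([2.0, 3.0, 1.0])   # coordinates of p relative to (1, x, x^2)

# Relative to the basis (1, 1 + x, 1 + x + x^2): the columns of B hold the
# monomial coefficients of those basis polynomials, so we solve B c = p.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
print(np.linalg.solve(B, p))    # [-1.  2.  1.]  -- same polynomial, new tuple
```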
The application of the linear transformation $T:V\to W$ to $\mathbf v\in V$ is denoted by $T\mathbf v$—it’s common in algebra to use simple juxtaposition and omit the brackets that you’re no doubt used to. Let’s again take $V$ and $W$ to be vector spaces of polynomials. A critical thing to note is that $T$ operates on polynomials and produces polynomials: writing $T[\mathbf v]_{\mathcal B}$ is nonsensical since that means that you’re trying to apply $T$ to an $n$-tuple of real numbers instead. On the other hand, writing $[T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$ does make sense. Here, the juxtaposition represents matrix multiplication instead of function application, which is probably another source of confusion. We left-multiply the column vector $[\mathbf v]_{\mathcal A}$ by the matrix $[T]_{\mathcal B\mathcal A}$ to obtain another column vector, which happily is equal to $[T\mathbf v]_{\mathcal B}$, i.e., the coordinate tuple of the polynomial $T\mathbf v$ with respect to $\mathcal B$.
The identity $$[T\mathbf v]_{\mathcal B} = [T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$$ basically says that we can arrive at the same result in two different ways. For the left-hand side, we take the result of applying $T$ to the polynomial $\mathbf v$ and compute its coordinates relative to $\mathcal B$, while for the right-hand side, we first compute the coordinates of the polynomial $\mathbf v$ relative to $\mathcal A$ and then multiply that by the matrix that represents $T$ relative to the two bases. To construct this matrix, we apply $T$ to each element of $\mathcal A$ and then compute the coordinates of that polynomial with respect to $\mathcal B$. Expressed in this notation, the $i$th column of $[T]_{\mathcal B\mathcal A}$ is the coordinate tuple $[T\mathbf a_i]_{\mathcal B}$, as is written in the text.
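To see the identity in action, here is a small numerical check (my own example, not from the text), taking $T$ to be differentiation from the polynomials of degree at most $3$ to those of degree at most $2$, with the monomial bases $\mathcal A=(1,x,x^2,x^3)$ and $\mathcal B=(1,x,x^2)$.

```python
import numpy as np

# Columns of [T]_{BA} are the B-coordinates of T applied to each element of A:
# T(1) = 0, T(x) = 1, T(x^2) = 2x, T(x^3) = 3x^2.
T_BA = np.array([[0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 2.0, 0.0],
                 [0.0, 0.0, 0.0, 3.0]])

v_A = np.array([5.0, 4.0, -1.0, 2.0])   # v = 5 + 4x - x^2 + 2x^3
Tv_B = np.array([4.0, -2.0, 6.0])       # Tv = 4 - 2x + 6x^2, computed by hand

# Two routes, one answer: [Tv]_B equals [T]_{BA} [v]_A.
assert np.allclose(T_BA @ v_A, Tv_B)
print(T_BA @ v_A)                        # [ 4. -2.  6.]
```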
† The points I make could also be made by taking elements of $V$ to be row vectors of reals instead of column vectors, but using polynomials makes it much more obvious that these are a different type of object from their coordinate tuples.
†† The distinction between elements of $\mathbb R^n$ and their coordinate tuples will no doubt come up in some exercises, if it hasn’t already. For instance, consider $V=\{(x,y,z)\in\mathbb R^3 \mid x+y+z=0\}$. This is a two-dimensional subspace of $\mathbb R^3$, so the coordinates of any element of $V$ relative to a basis of $V$ are elements of $\mathbb R^2$. Note, too, that there’s no obvious “standard basis” for this space as there is for $\mathbb R^3$. If $W$ is another two-dimensional subspace of $\mathbb R^3$, the matrix that represents a linear transformation from $V$ to $W$ will be $2\times2$, not $3\times3$.
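As a quick illustration of that footnote (with a basis of my own choosing, not taken from any exercise), here is how coordinates relative to a basis of such a plane land in $\mathbb R^2$:

```python
import numpy as np

# V = {(x, y, z) : x + y + z = 0}, with basis b1 = (1, -1, 0), b2 = (0, 1, -1).
B = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])              # columns are b1 and b2

v = np.array([2.0, -5.0, 3.0])           # lies in V, since 2 - 5 + 3 = 0

# Coordinates of v relative to (b1, b2): solve the (consistent) system B c = v.
coords, *_ = np.linalg.lstsq(B, v, rcond=None)
print(coords)                            # [ 2. -3.]  -- an element of R^2
assert np.allclose(B @ coords, v)        # 2*b1 - 3*b2 = v
```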
I hope you understand what they are trying to achieve: we are looking for a linear transformation $T$ that reflects points of $\Bbb R^2$ across the line $y=2x$. A linear transformation is completely determined by its action on a basis of $\Bbb R^2$. You can check independently that the vectors
$$\begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$
form a basis for $\Bbb R^2$. So we will know $T$ if we know its action on this basis. Call this basis $\beta'$. Why are we choosing this basis? Because it is easy to see $T$'s action on $\beta'$. Since $T$ reflects points across the line $y = 2x$, you can easily check that
$$T \begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix}1 \\2 \end{bmatrix}$$
This is because the point $(1,2)$ lies on the line of reflection. Similarly, using geometric arguments (the vector $(-2,1)$ is perpendicular to the line, so it is sent to its negative), you can show that $T(-2,1) = (2,-1)$. In terms of $\beta'$, this means $T(1,2) = 1\cdot(1,2) + 0\cdot(-2,1)$ and $T(-2,1) = 0\cdot(1,2) - 1\cdot(-2,1)$, so the matrix of $T$ with respect to the basis $\beta'$ is $$\begin{bmatrix}1 & 0 \\ 0 & -1 \end{bmatrix}$$
Since we are used to dealing with vectors in the usual canonical basis, we simply do a change of basis to find the corresponding matrix of $T$ with respect to the usual basis: if $P = \begin{bmatrix}1 & -2 \\ 2 & 1 \end{bmatrix}$ is the matrix whose columns are the $\beta'$ vectors, that matrix is $P\,[T]_{\beta'}\,P^{-1}$.
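Here is a quick numerical check of that change of basis (a sketch of my own, not part of the original answer):

```python
import numpy as np

P = np.array([[1.0, -2.0],
              [2.0,  1.0]])      # columns are the beta' vectors (1,2), (-2,1)
D = np.diag([1.0, -1.0])         # matrix of T with respect to beta'

A = P @ D @ np.linalg.inv(P)     # matrix of T with respect to the usual basis
print(A)                         # [[-0.6  0.8]
                                 #  [ 0.8  0.6]] = (1/5) [[-3, 4], [4, 3]]

# Sanity checks: points on y = 2x are fixed, and (-2, 1) goes to (2, -1).
assert np.allclose(A @ [1, 2], [1, 2])
assert np.allclose(A @ [-2, 1], [2, -1])
```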
Another way of finding $f_{1}$ and $f_{2}$ (without using the standard basis) would be to consider an arbitrary $(x, y) \in \mathbb{R}^{2}$ and to write $$ (x, y) = f_{1}(x, y)(2, 1) + f_{2}(x, y)(3,1). $$ This gives us two equations in $f_{1}(x, y)$ and $f_{2}(x, y)$. Solving them, we get $f_{1}(x, y) = 3y - x$ and $f_{2}(x, y) = x - 2y$.
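If you want to double-check those formulas, here is a tiny sketch (my own, just for verification):

```python
# Dual basis of ((2, 1), (3, 1)) as found above.
def f1(x, y):
    return 3 * y - x

def f2(x, y):
    return x - 2 * y

# Check (x, y) = f1(x, y) * (2, 1) + f2(x, y) * (3, 1) at a few sample points.
for x, y in [(1, 0), (0, 1), (5, -7)]:
    assert (f1(x, y) * 2 + f2(x, y) * 3, f1(x, y) * 1 + f2(x, y) * 1) == (x, y)
```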
Once we have a basis $\beta$ for $V$, the dual basis $\beta^{*}$ gives us the coordinates of any $v \in V$ with respect to $\beta$. Similarly, $\beta$ gives us the coordinates of any functional in $V^{*}$ with respect to $\beta^{*}$. To be more specific:
Let $\beta = (v_{1}, \ldots, v_{n})$ be a basis for $V$ and $\beta^{*} = (f_{1}, \ldots, f_{n})$ the corresponding dual basis. Then, for any $v \in V$, $$ v = f_{1}(v) v_{1} + \ldots + f_{n}(v) v_{n} .$$ Similarly, if $f \in V^{*}$, $$ f = f(v_{1}) f_{1} + \ldots + f(v_{n}) f_{n} .$$
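For the second identity, here is a small concrete check (my own example, using the basis $(2,1),(3,1)$ and its dual basis from above):

```python
# Dual basis of ((2, 1), (3, 1)), as computed earlier.
def f1(x, y):
    return 3 * y - x

def f2(x, y):
    return x - 2 * y

def f(x, y):
    return 4 * x + 7 * y      # an arbitrary functional on R^2

c1, c2 = f(2, 1), f(3, 1)     # f(v1) = 15, f(v2) = 19
# Check f = f(v1) f1 + f(v2) f2 at a few sample points.
for x, y in [(1, 0), (0, 1), (2, -3)]:
    assert c1 * f1(x, y) + c2 * f2(x, y) == f(x, y)
```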
It is a consequence of linearity: $$ f_{1}(x, y) = f_{1}( x(1, 0) + y(0,1) ) = xf_{1}(1, 0) + yf_{1}(0,1). $$
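For instance, with the dual basis of $\big((2,1),(3,1)\big)$ found above, $f_{1}(1,0) = -1$ and $f_{1}(0,1) = 3$, so linearity gives $f_{1}(x, y) = -x + 3y$, which matches the formula obtained by solving the system directly.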