The equation of a circle is $x^2 + y^2 = r^2$, or in vector form $(x,y) \pmatrix{x\cr y} = r^2$. An invertible linear transformation $T$ takes $\pmatrix{x\cr y}$ to $\pmatrix{X\cr Y} = T\pmatrix{x\cr y}$. Thus $\pmatrix{x\cr y} = T^{-1} \pmatrix{X\cr Y}$, and so $(x,y) = (X, Y) (T^{-1})^\top$. The equation becomes
$$(X, Y) (T^{-1})^\top T^{-1} \pmatrix{X\cr Y} = r^2 $$
Note that $(T^{-1})^\top T^{-1}$ is a real symmetric matrix, so it can be diagonalized; moreover its eigenvalues are positive, since for any nonzero $\mathbf u$ we have $\mathbf u^\top (T^{-1})^\top T^{-1} \mathbf u = \|T^{-1}\mathbf u\|^2 > 0$.
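As a quick numerical sanity check (a sketch using NumPy; the particular matrix $T$ below is just an arbitrary invertible example), we can verify that $(T^{-1})^\top T^{-1}$ is symmetric with positive eigenvalues, and that the image of a point on the circle satisfies the transformed equation:

```python
import numpy as np

# An arbitrary invertible linear transformation T (example values)
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

Tinv = np.linalg.inv(T)
M = Tinv.T @ Tinv  # the matrix (T^{-1})^T T^{-1} from the transformed equation

# M is real symmetric ...
assert np.allclose(M, M.T)

# ... and its eigenvalues are positive (M is positive definite)
eigvals = np.linalg.eigvalsh(M)
assert np.all(eigvals > 0)

# A point (x, y) on the circle x^2 + y^2 = r^2 maps to (X, Y) = T (x, y),
# which satisfies (X, Y) M (X, Y)^T = r^2.
r = 1.0
x = r * np.array([np.cos(0.7), np.sin(0.7)])  # a point on the circle
X = T @ x                                      # its image under T
assert np.isclose(X @ M @ X, r**2)
```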
Your confusion appears to be coming from an assumption that $V=\mathbb F^n$, that is, that vectors are $n$-tuples of scalars. The notation might make more sense to you if you choose some other set of objects as your vectors, such as polynomials of degree at most $n$ with real coefficients†, so that the important distinction between vectors and their coordinates is more apparent: the vector $\mathbf v$ is then a polynomial, while its coordinate tuple with respect to some ordered basis $\mathcal B$, denoted by $[\mathbf v]_{\mathcal B}$, is an $n$-tuple of real numbers. This notation highlights and maintains the difference between a vector and its coordinate tuple, even when the vectors are themselves tuples of scalars.††
The application of the linear transformation $T:V\to W$ to $\mathbf v\in V$ is denoted by $T\mathbf v$—it’s common in algebra to use simple juxtaposition and omit the brackets that you’re no doubt used to. Let’s again take $V$ and $W$ to be vector spaces of polynomials. A critical thing to note is that $T$ operates on polynomials and produces polynomials: writing $T[\mathbf v]_{\mathcal B}$ is nonsensical since that means that you’re trying to apply $T$ to an $n$-tuple of real numbers instead. On the other hand, writing $[T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$ does make sense. Here, the juxtaposition represents matrix multiplication instead of function application, which is probably another source of confusion. We left-multiply the column vector $[\mathbf v]_{\mathcal A}$ by the matrix $[T]_{\mathcal B\mathcal A}$ to obtain another column vector, which happily is equal to $[T\mathbf v]_{\mathcal B}$, i.e., the coordinate tuple of the polynomial $T\mathbf v$ with respect to $\mathcal B$.
The identity $$[T\mathbf v]_{\mathcal B} = [T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$$ basically says that we can arrive at the same result in two different ways. For the left-hand side, we take the result of applying $T$ to the polynomial $\mathbf v$ and compute its coordinates relative to $\mathcal B$, while for the right-hand side, we first compute the coordinates of the polynomial $\mathbf v$ relative to $\mathcal A$ and then multiply that by the matrix that represents $T$ relative to the two bases. To construct this matrix, we apply $T$ to each element of $\mathcal A$ and then compute the coordinates of that polynomial with respect to $\mathcal B$. Expressed in this notation, the $i$th column of $[T]_{\mathcal B\mathcal A}$ is the coordinate tuple $[T\mathbf a_i]_{\mathcal B}$, as is written in the text.
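The two routes can be checked numerically. Here is a sketch (my own illustrative choices, not from the text) with $T = d/dx$ mapping polynomials of degree at most $2$ to polynomials of degree at most $1$, using the monomial bases $\mathcal A = (1, x, x^2)$ and $\mathcal B = (1, x)$:

```python
import numpy as np

# Build [T]_{BA} column by column: the i-th column is [T a_i]_B.
# T(1) = 0   -> (0, 0)
# T(x) = 1   -> (1, 0)
# T(x^2) = 2x -> (0, 2)
T_BA = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 2.0]])

# The polynomial v = 3 + 2x + 5x^2 has coordinate tuple [v]_A = (3, 2, 5).
v_A = np.array([3.0, 2.0, 5.0])

# Right-hand side: multiply the coordinate tuple by the matrix.
rhs = T_BA @ v_A

# Left-hand side: differentiate v directly, Tv = 2 + 10x, then take
# coordinates relative to B: [Tv]_B = (2, 10).
Tv_B = np.array([2.0, 10.0])

assert np.allclose(rhs, Tv_B)  # both routes give the same coordinate tuple
```

Note that the code never applies `T_BA` to a polynomial, only to a coordinate tuple, which is exactly the type distinction the notation enforces.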
† The points I make could also be made by taking elements of $V$ to be row vectors of reals instead of column vectors, but using polynomials makes it much more obvious that these are a different type of object from their coordinate tuples.
†† The distinction between elements of $\mathbb R^n$ and their coordinate tuples will no doubt come up in some exercises, if it hasn’t already. For instance, consider $V=\{(x,y,z)\in\mathbb R^3 \mid x+y+z=0\}$. This is a two-dimensional subspace of $\mathbb R^3$, so the coordinates of any element of $V$ relative to a basis of $V$ are elements of $\mathbb R^2$. Note, too, that there’s no obvious “standard basis” for this space as there is for $\mathbb R^3$. If $W$ is another two-dimensional subspace of $\mathbb R^3$, the matrix that represents a linear transformation from $V$ to $W$ will be $2\times2$, not $3\times3$.
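To make the footnote concrete, here is a sketch (the basis below is one arbitrary choice among many, since this plane has no standard basis): an element of the plane is a $3$-tuple, but its coordinate tuple relative to a chosen basis is a $2$-tuple.

```python
import numpy as np

# V = {(x, y, z) : x + y + z = 0} is a two-dimensional subspace of R^3.
# One (arbitrary) ordered basis for V:
b1 = np.array([1.0, -1.0, 0.0])
b2 = np.array([0.0, 1.0, -1.0])
B = np.column_stack([b1, b2])  # 3x2 matrix whose columns are the basis vectors

# An element of V -- a 3-tuple of reals:
v = np.array([2.0, -3.0, 1.0])
assert np.isclose(v.sum(), 0.0)  # v really lies in the plane

# Its coordinate tuple relative to (b1, b2) is a 2-tuple: solve B c = v.
c = np.linalg.lstsq(B, v, rcond=None)[0]
assert np.allclose(B @ c, v)  # c reconstructs v, so c = [v] in R^2
```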
In an active transformation, given a basis, we start from a vector and we find a new vector in the same basis.
In a passive transformation we have a vector expressed in a basis and we express it in a new basis.
The figure illustrates the action of a matrix $A$ as an active transformation and of $A^{-1}$ as the corresponding passive transformation.
Here we have: $$ A=\begin{bmatrix} 1&2\\ -2&4 \end{bmatrix} \qquad A^{-1}=\frac{1}{8} \begin{bmatrix} 4&-2\\ 2&1 \end{bmatrix} $$
The matrix $A$ acts on a vector $\mathbf{x}$ that in the standard basis $S$ (represented in black) has components $\mathbf{x}=[3,2]_S^T$ and, as an active transformation, gives the vector $\mathbf{x'}=A\mathbf{x}=[7,2]_S^T$.
Note that in the new basis $B$ that has as basis vectors the columns of $A$ (represented in blue) this vector has components $\mathbf{x'}=[3,2]_B^T$.
The inverse matrix $A^{-1}$ represents the passive transformation that gives the components of the vector $\mathbf{x}$ in the new basis $B$:
$$ A^{-1}\mathbf{x}= \frac{1}{8} \begin{bmatrix} 4&-2\\ 2&1 \end{bmatrix} \begin{bmatrix} 3\\ 2 \end{bmatrix}= \begin{bmatrix} 1\\ 1 \end{bmatrix} $$
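All of the numbers above can be checked in a few lines (a sketch using NumPy, with the matrices and vector taken from the figure):

```python
import numpy as np

# The matrix and vector from the figure:
A = np.array([[1.0, 2.0],
              [-2.0, 4.0]])
A_inv = np.linalg.inv(A)      # equals (1/8) [[4, -2], [2, 1]]
x = np.array([3.0, 2.0])      # components of x in the standard basis S

# Active transformation: a new vector, still expressed in S.
x_prime = A @ x
assert np.allclose(x_prime, [7.0, 2.0])

# Passive transformation: the same vector x, re-expressed in the basis B
# whose basis vectors are the columns of A.
x_in_B = A_inv @ x
assert np.allclose(x_in_B, [1.0, 1.0])

# Consistency checks:
# x' has components [3, 2] in the basis B ...
assert np.allclose(A_inv @ x_prime, [3.0, 2.0])
# ... and the B-components of x reconstruct x in the standard basis.
assert np.allclose(A @ x_in_B, x)
```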