Last question first: degree is (depending on how you look at it) not a property of elements of polynomial rings; it's a property of elements of graded polynomial rings, and the easiest way to choose a grading is to choose a set of generators. Your example is a little confused: when you consider $k[x^2]$ you are implicitly considering $x^2$ to have degree $1$, but you don't need to adopt this convention; you can declare that it has degree $2$ instead. (In any case, the notation $k[x^2]$ is misleading: when you write this you are really talking about the entire inclusion map $k[x^2] \to k[x]$, so everything depends on whether you want this to be a map of rings or a map of graded rings.)
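To spell out the two conventions side by side (this is just my own notational sketch):

```latex
% Two gradings on the subring of k[x] generated by x^2:
\begin{aligned}
\deg(x^2) = 1 &:\quad k[x^2] = \bigoplus_{n \ge 0} k \cdot (x^2)^n, && (x^2)^n \text{ placed in degree } n,\\
\deg(x^2) = 2 &:\quad k[x^2] = \bigoplus_{n \ge 0} k \cdot (x^2)^n, && (x^2)^n \text{ placed in degree } 2n.
\end{aligned}
```

Only the second convention makes the inclusion $k[x^2] \to k[x]$ a map of graded rings, since a graded map must send degree-$n$ elements to degree-$n$ elements and $x^2$ has degree $2$ in $k[x]$.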
Your point 3 seems to answer your first question. I'm not sure why you find $k[\mathbb{A}^n]$ objectionable but $k[V]$ not, given that (I assume) you think $\mathbb{A}^n$ is a variety. I think this is a fine way to refer to a polynomial ring without naming its generators. Your worry about the distinction between polynomials and the functions they induce can be ignored if $k$ is algebraically closed; otherwise you should just remind yourself that in the remaining cases the functor from $k$-varieties to $\text{Set}$ isn't faithful, so the answer is not to take the set-theoretic picture too seriously in the first place, but to work directly with the opposite of the category of finitely generated reduced $k$-algebras. In this blog post I describe exactly how to recover the ring of functions geometrically in this category. (I should be more explicit about what I mean here: if you work in the right category, the polynomial ring in $n$ variables over $k$ is the space of functions on $\mathbb{A}^n$.)
I also don't understand your first two sentences; they seem to be inconsistent with each other. (Off-topic: I don't know what you look like, but because of my Gravatar you know what I look like. I'm the guy sitting in the back of class on his laptop, so if you'd like to introduce yourself that would be cool.)
Your confusion appears to be coming from an assumption that $V=\mathbb F^n$, that is, that vectors are $n$-tuples of scalars. The notation might make more sense to you if you choose some other set of objects as your vectors, such as polynomials of degree at most $n$ with real coefficients†, so that the important distinction between vectors and their coordinates is more apparent: the vector $\mathbf v$ is then a polynomial, while its coordinate tuple with respect to some ordered basis $\mathcal B$, denoted by $[\mathbf v]_{\mathcal B}$, is an $n$-tuple of real numbers. This notation highlights and maintains the difference between a vector and its coordinate tuple, even when the vectors are themselves tuples of scalars.††
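To make this concrete, here's a small NumPy sketch (the basis and the polynomial are invented for illustration; since a computer has to store a polynomial somehow, I store its monomial coefficients, but the point is that the coordinate tuple relative to $\mathcal B$ is a different object from the polynomial itself):

```python
import numpy as np

# A hypothetical ordered basis B = (1, 1+x, (1+x)^2) for polynomials of degree <= 2.
# Column i holds basis polynomial i's coefficients in the monomial order (1, x, x^2).
B = np.array([[1., 1., 1.],
              [0., 1., 2.],
              [0., 0., 1.]])

# The vector v is the polynomial p(x) = 2 + 3x - x^2.
p = np.array([2., 3., -1.])

# [v]_B is whatever solves B @ coords = p; it is not the same tuple as p's coefficients.
coords = np.linalg.solve(B, p)   # → [-2., 5., -1.]
```

Indeed $-2\cdot 1 + 5\cdot(1+x) - 1\cdot(1+x)^2 = 2 + 3x - x^2$, so $[\mathbf v]_{\mathcal B} = (-2, 5, -1)$ even though the monomial coefficients of $\mathbf v$ are $(2, 3, -1)$.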
The application of the linear transformation $T:V\to W$ to $\mathbf v\in V$ is denoted by $T\mathbf v$—it’s common in algebra to use simple juxtaposition and omit the brackets that you’re no doubt used to. Let’s again take $V$ and $W$ to be vector spaces of polynomials. A critical thing to note is that $T$ operates on polynomials and produces polynomials: writing $T[\mathbf v]_{\mathcal B}$ is nonsensical since that means that you’re trying to apply $T$ to an $n$-tuple of real numbers instead. On the other hand, writing $[T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$ does make sense. Here, the juxtaposition represents matrix multiplication instead of function application, which is probably another source of confusion. We left-multiply the column vector $[\mathbf v]_{\mathcal A}$ by the matrix $[T]_{\mathcal B\mathcal A}$ to obtain another column vector, which happily is equal to $[T\mathbf v]_{\mathcal B}$, i.e., the coordinate tuple of the polynomial $T\mathbf v$ with respect to $\mathcal B$.
The identity $$[T\mathbf v]_{\mathcal B} = [T]_{\mathcal B\mathcal A}[\mathbf v]_{\mathcal A}$$ basically says that we can arrive at the same result in two different ways. For the left-hand side, we take the result of applying $T$ to the polynomial $\mathbf v$ and compute its coordinates relative to $\mathcal B$, while for the right-hand side, we first compute the coordinates of the polynomial $\mathbf v$ relative to $\mathcal A$ and then multiply that by the matrix that represents $T$ relative to the two bases. To construct this matrix, we apply $T$ to each element of $\mathcal A$ and then compute the coordinates of that polynomial with respect to $\mathcal B$. Expressed in this notation, the $i$th column of $[T]_{\mathcal B\mathcal A}$ is the coordinate tuple $[T\mathbf a_i]_{\mathcal B}$, as is written in the text.
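Here's a quick numerical check of this identity, using differentiation as the transformation (my choice, purely for illustration):

```python
import numpy as np

# T = d/dx, from polynomials of degree <= 2 to polynomials of degree <= 1.
# A = (1, x, x^2) and B = (1, x) are the monomial bases of the domain and codomain.
# Column i of [T]_BA is [T a_i]_B: the derivatives of 1, x, x^2 are 0, 1, 2x.
T_BA = np.array([[0., 1., 0.],
                 [0., 0., 2.]])

v_A = np.array([2., 3., -1.])   # [v]_A for the polynomial v = 2 + 3x - x^2

# Right-hand side: matrix multiplication of the coordinate tuple.
rhs = T_BA @ v_A

# Left-hand side, computed directly: T v = 3 - 2x, whose B-coordinates are (3, -2).
lhs = np.array([3., -2.])

assert np.allclose(lhs, rhs)
```

Both routes land on the same coordinate tuple, which is exactly what the identity promises.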
† The points I make could also be made by taking elements of $V$ to be row vectors of reals instead of column vectors, but using polynomials makes it much more obvious that these are a different type of object from their coordinate tuples.
†† The distinction between elements of $\mathbb R^n$ and their coordinate tuples will no doubt come up in some exercises, if it hasn’t already. For instance, consider $V=\{(x,y,z)\in\mathbb R^3 \mid x+y+z=0\}$ (the equation must be homogeneous for $V$ to contain $\mathbf 0$). This is a two-dimensional subspace of $\mathbb R^3$, so the coordinates of any element of $V$ relative to a basis of $V$ are elements of $\mathbb R^2$. Note, too, that there’s no obvious “standard basis” for this space as there is for $\mathbb R^3$. If $W$ is another two-dimensional subspace of $\mathbb R^3$, the matrix that represents a linear transformation from $V$ to $W$ will be $2\times2$, not $3\times3$.
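A NumPy sketch of this situation, using the plane $\{(x,y,z) \mid x+y+z=0\}$ through the origin (so that it's genuinely a subspace) and an arbitrary choice of basis:

```python
import numpy as np

# V = {(x, y, z) : x + y + z = 0}; the columns below are one (non-canonical)
# choice of basis: (1, -1, 0) and (0, 1, -1).
basis = np.array([[1.,  0.],
                  [-1., 1.],
                  [0., -1.]])

w = np.array([2., -5., 3.])   # an element of V, since 2 - 5 + 3 = 0

# w lives in R^3, but its coordinate tuple relative to this basis lives in R^2.
coords, *_ = np.linalg.lstsq(basis, w, rcond=None)   # → [2., -3.]
```

Here $2\cdot(1,-1,0) - 3\cdot(0,1,-1) = (2,-5,3)$, so the element of $\mathbb R^3$ and its coordinate tuple $(2,-3)\in\mathbb R^2$ really are different kinds of tuple.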
Most commonly in mathematics, one says in plain English something to the effect of "The proof goes through with $\mathbb{R}^n$ replaced by $V$."
But if you are really looking for a notation, perhaps you might borrow one from lambda calculus. In defining $\beta$-reduction, there is notation for substituting free variables in expressions. I've seen a few notations for "$E$ with the variable $x$ replaced by $E'$": $$E[x := E'] \qquad E[E'/x] \qquad [E'/x]E$$
I personally find the first notation to be clearest among them.
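If you want to see the substitution operation itself rather than just notation for it, here's a naive Python sketch on a toy AST (it ignores variable capture, and all the names are made up for illustration):

```python
# E[x := E'] on a tiny lambda-calculus AST of tuples:
# ("var", name), ("app", fn, arg), ("lam", name, body).
# Capture-avoiding renaming is omitted for brevity.
def subst(expr, var, repl):
    kind = expr[0]
    if kind == "var":
        return repl if expr[1] == var else expr
    if kind == "app":
        return ("app", subst(expr[1], var, repl), subst(expr[2], var, repl))
    if kind == "lam":
        # Don't substitute under a binder for the same name: x is no longer free there.
        if expr[1] == var:
            return expr
        return ("lam", expr[1], subst(expr[2], var, repl))
    raise ValueError(f"unknown node: {kind}")

# (λy. x y)[x := λz. z]  →  λy. (λz. z) y
e = ("lam", "y", ("app", ("var", "x"), ("var", "y")))
result = subst(e, "x", ("lam", "z", ("var", "z")))
```

The same replace-the-free-variable idea is what "the proof goes through with $\mathbb{R}^n$ replaced by $V$" expresses informally.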