Q-1: I wasn't sure of the answer to your first question, so I did some searching around. Specifically, the group of $n \times n$ orthogonal matrices, denoted $O(n)$, is not a normal subgroup of the group of all invertible $n \times n$ matrices, denoted $GL_n(\mathbb{R})$, once $n \geq 2$.
What I mean by the above is that there exist orthogonal matrices $A$ and invertible matrices $S$ such that $SAS^{-1}$ is not orthogonal. Recall that the matrix of the linear transformation given by $A$ under a different basis is given by $SAS^{-1}$ for some change-of-basis matrix $S$. Thus, the answer to your first question is no.
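For a concrete instance of this failure (an example worth checking by hand), take the rotation
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.$$
Here $A$ is orthogonal and $S$ is invertible, but
$$SAS^{-1} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 1 & -1 \end{pmatrix},$$
whose first column has length $\sqrt{2}$, so $SAS^{-1}$ is not orthogonal.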
However, if we only use orthogonal change-of-basis matrices, i.e. we assume that the new basis is orthonormal, then the answer is yes. Observe that if $S$ is orthogonal, then so is $S^{-1}$, and we have
$$(SAS^{-1})^T(SAS^{-1}) = (S^{-1})^TA^TS^TSAS^{-1} = (S^{-1})^TA^TAS^{-1} = (S^{-1})^TS^{-1} = I$$
(we used $S^TS = I$ in the second step, $A^TA = I$ in the third, and finally that $(S^{-1})^TS^{-1} = I$ since $S^{-1}$ is itself orthogonal). This shows that $SAS^{-1}$ is still orthogonal, so changing to a different orthonormal basis preserves orthogonality of the linear transformation.
Q-2: We do not have a concept of orthogonality, including orthogonal transformations, unless our vector spaces are indeed inner product spaces. As you point out, there is always a way to impose an inner product on a given vector space $V$, namely by picking a basis, using that basis to construct an isomorphism to $\mathbb{R}^n$, and then taking the dot product in $\mathbb{R}^n$. This is an example of something we usually call "non-canonical," which roughly means there were choices involved in the definition of this inner product. Namely, we had to choose a basis for $V$, and there are many different ways to do this, yielding many different inner products. Therefore, we do not typically use this inner product. Rather, we would hope that $V$ comes with a more "natural" or "canonical" inner product to define orthogonal transformations between arbitrary spaces.
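To make the dependence on the chosen basis concrete, here is a small example. Take $V = \mathbb{R}^2$ with the basis $B = \{e_1, e_1 + e_2\}$, and let the induced inner product be $\langle u, v \rangle_B = [u]_B \cdot [v]_B$, where $[\,\cdot\,]_B$ denotes coordinates with respect to $B$. Since $[e_1]_B = (1, 0)$ and $[e_2]_B = (-1, 1)$, we get
$$\langle e_1, e_2 \rangle_B = (1)(-1) + (0)(1) = -1 \neq 0 = e_1 \cdot e_2,$$
so the inner product induced by $B$ disagrees with the one induced by the standard basis.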
Q-3: Again, we may only discuss orthogonality if $V$ and $W$ are inner product spaces, so let's assume they do indeed have inner product structures. Then as we found above, while a transformation may be orthogonal, its matrix with respect to a particular basis need not be. Once again, we will need to assume our bases $\mathcal{B}$ and $\mathcal{C}$ for $V$ and $W$ resp. are orthonormal with respect to the inner product structures on $V$ and $W$ resp. Under these circumstances, I believe we can conclude that the matrix for the orthogonal transformation $T$ is an orthogonal matrix (I have not proven this fact here; you should try to find a proof, or write one yourself! A hint is sketched just below in case you get stuck.)
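One possible route, assuming "orthogonal transformation" means $\langle T(u), T(v) \rangle_W = \langle u, v \rangle_V$ for all $u, v \in V$: the $j$th column of the matrix $M$ of $T$ is the coordinate vector of $T(b_j)$ w.r.t. $\mathcal{C}$, and because $\mathcal{C}$ is orthonormal, dot products of $\mathcal{C}$-coordinate vectors compute inner products in $W$. Hence the $(i,j)$ entry of $M^TM$ is
$$\langle T(b_i), T(b_j) \rangle_W = \langle b_i, b_j \rangle_V = \delta_{ij},$$
using orthonormality of $\mathcal{B}$ at the last step, so $M^TM = I$.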
Nothing about that theorem, or its proof, mentions anything about the standard basis, so yes. Given any ordered bases $B$ and $C$ of $F^n$ and $F^m$ respectively (or any other pair of finite-dimensional vector spaces over the same field that you care to name, though it doesn't matter, since they're all isomorphic to such things), for any vector $v \in F^n$, define $v_B$ to be the column vector (if you somehow like your operators on the right, substitute "row vector") whose entries are the coefficients of the unique linear combination of elements of $B$ equal to $v$, and similarly in $F^m$. Then for any linear transformation $T: F^n \to F^m$, define $M_B^C(T)$ to be the matrix whose $(i,j)$-th entry is the coefficient of the $i$th element of $C$ in the unique linear combination of elements of $C$ equal to the image under $T$ of the $j$th element of $B$; that is, the $j$th column of $M_B^C(T)$ is $(T(b_j))_C$. Then $M_B^C(T)\,v_B = (T(v))_C$.
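A quick example to check these conventions (the map and bases here are just ones picked for illustration): take $T: \mathbb{R}^2 \to \mathbb{R}^2$ given by $T(x, y) = (x + y, y)$, with $B = \{(1,0), (1,1)\}$ and $C$ the standard basis. Then $T(1,0) = (1,0)$ and $T(1,1) = (2,1)$, so
$$M_B^C(T) = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}.$$
For $v = (3, 2) = 1\cdot(1,0) + 2\cdot(1,1)$ we have $v_B = (1, 2)^T$, and indeed
$$M_B^C(T)\, v_B = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 \\ 2 \end{pmatrix} = (T(v))_C,$$
since $T(3,2) = (5,2)$.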
Best Answer
I've never seen the notation $\tilde A$ used to mean $A$ w.r.t. the standard basis, but $A$ is ALWAYS w.r.t. some basis. Think about it: matrices have components. What would those components be if the matrix were not w.r.t. some basis?
So $f$ is basis-free -- it doesn't matter which basis you choose, $f$ will always be the linear map that does a specific thing (determined by its definition).
$A$ is basis-dependent. You can only specify a matrix representation of a transformation $f$ if you've already chosen a basis. And of course, the same matrix will NOT work if you later decide to change your basis (though you can transform it with an invertible matrix $P$ like $P^{-1}AP$).
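A one-line sketch of where the $P^{-1}AP$ formula comes from: if $P$ is the matrix whose columns are the new basis vectors written in the old coordinates, then old and new coordinates of any vector are related by $x_{\text{old}} = P x_{\text{new}}$. So if $A$ represents $f$ in the old basis, the new representative $A'$ must satisfy
$$P(A' x_{\text{new}}) = A(P x_{\text{new}}) \quad \text{for all } x_{\text{new}},$$
which forces $A' = P^{-1}AP$.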
$\tilde A$ is apparently the matrix representation of $f$ w.r.t. the standard basis. This is, of course, basis-dependent.
$\vec x$ is an object just like $f$. By that I mean it is intrinsically basis-free. The coordinates of $\vec x$ are determined after a basis is chosen. But we don't usually use any special notation to specify whether $\vec x$ is a coordinate vector or an abstract vector UNLESS we're doing a change-of-basis problem.
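For instance, if $\vec x = 2e_1 + 3e_2$ in $\mathbb{R}^2$, then w.r.t. the standard basis its coordinates are $(2, 3)$, but w.r.t. the basis $\{e_1 + e_2,\; e_1 - e_2\}$ its coordinates are $(5/2, -1/2)$, since $\tfrac{5}{2}(e_1 + e_2) - \tfrac{1}{2}(e_1 - e_2) = 2e_1 + 3e_2$. Same abstract vector, different coordinate vectors.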