People keep mentioning the restriction on the size of a Schauder basis, but I think it's more important to emphasize that these are bases with respect to different notions of span.
For an ordinary vector space, only finite linear combinations are defined, and you can't hope for anything more. (Let's call these Hamel combinations.) In this context, you can talk about minimal sets whose Hamel combinations generate a vector space.
When your vector space has a good enough topology, you can define countable linear combinations (which we'll call Schauder combinations) and talk about sets whose Schauder combinations generate the vector space.
If you take a Schauder basis, you can still treat it as an ordinary linearly independent set and look at its collection of Hamel combinations, and its Schauder-span will normally be strictly larger than its Hamel-span.
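For a concrete example (a standard one, in $\ell^2$): the unit vectors $e_n$ form a Schauder basis of $\ell^2$. Their Hamel span is only the subspace of finitely supported sequences, while their Schauder span is all of $\ell^2$. For instance,
$$\sum_{n=1}^{\infty} \frac{1}{n}\, e_n = \left(1, \tfrac{1}{2}, \tfrac{1}{3}, \ldots\right) \in \ell^2$$
lies in the Schauder span but not in the Hamel span.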
This also raises the question of linear independence: with two types of span, you now have two types of linear independence conditions. Schauder-independence is the stronger one: a nontrivial finite combination is a special case of a nontrivial countable combination, so Schauder-independence implies Hamel-independence, while the converse fails in general.
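To see the converse fail (an example of my own, again in $\ell^2$): let
$$v = \sum_{n=1}^{\infty} 2^{-n} e_n.$$
The set $\{v\} \cup \{e_n : n \geq 1\}$ is Hamel-independent, since $v$ is not a finite combination of the $e_n$. But it is not Schauder-independent, because the nontrivial countable combination $v - \sum_{n=1}^{\infty} 2^{-n} e_n$ equals $0$.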
Finally, let me swing back around to the question of the cardinality of the basis.
I don't actually think (or know) that it's absolutely necessary to have infinitely many elements in a Schauder basis. If you allow finite Schauder bases, you don't actually need infinite linear combinations, and the notions of Schauder basis and Hamel basis coincide. But there is definitely a difference in the infinite-dimensional case. It is there that the modifier "Schauder" actually becomes useful, so maybe that is why some people are convinced Schauder bases must be infinite.
And now about the claim that Schauder bases can only be countable. Certainly, in any space where countable sums converge, you can take a set of whatever cardinality and still consider its Schauder span (just as you could also consider its Hamel span). I know that the separable case is especially useful and popular, and separability forces such a basis to be countable, so that is probably why people tend to think of Schauder bases as countable. But I had thought uncountable analogues were also used, for example orthonormal bases of nonseparable Hilbert spaces such as $\ell^2(\Gamma)$ for an uncountable index set $\Gamma$.
Q-1: I wasn't sure of the answer to your first question, so I did some searching around. Specifically, the group $O(n)$ of $n \times n$ orthogonal matrices is not a normal subgroup of the group $GL_n(\mathbb{R})$ of all invertible $n \times n$ matrices (for $n \geq 2$).
What I mean by the above is that there exist orthogonal matrices $A$ and invertible matrices $S$ such that $SAS^{-1}$ is not orthogonal. Recall that the matrix of the linear transformation given by $A$ under a different basis is $SAS^{-1}$, where $S$ is the change-of-basis matrix. Thus, the answer to your first question is no.
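To make this concrete, here is one counterexample (the matrices are my own choice). Take the rotation
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad S = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}, \qquad S^{-1} = \begin{pmatrix} \tfrac{1}{2} & 0 \\ 0 & 1 \end{pmatrix}.$$
Then
$$SAS^{-1} = \begin{pmatrix} 0 & -2 \\ \tfrac{1}{2} & 0 \end{pmatrix}, \qquad (SAS^{-1})^T(SAS^{-1}) = \begin{pmatrix} \tfrac{1}{4} & 0 \\ 0 & 4 \end{pmatrix} \neq I,$$
so $SAS^{-1}$ is not orthogonal even though $A$ is.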
However, if we only use orthogonal change-of-basis matrices, i.e., we assume that the new basis is orthonormal, then the answer is yes. Observe that if $S$ is orthogonal, then so is $S^{-1}$, and we have
$$(SAS^{-1})^T(SAS^{-1}) = (S^{-1})^TA^TS^TSAS^{-1} = (S^{-1})^TA^TAS^{-1} = (S^{-1})^TS^{-1} = I$$
(here we used $S^TS = I$, then $A^TA = I$, and finally $(S^{-1})^TS^{-1} = I$, which hold because $S$, $A$, and $S^{-1}$ are all orthogonal). This shows that $SAS^{-1}$ is still orthogonal, so changing to a different orthonormal basis preserves orthogonality of the linear transformation.
Q-2: We do not have a concept of orthogonality, including orthogonal transformations, unless our vector spaces are indeed inner product spaces. As you point out, there is always a way to impose an inner product on a given finite-dimensional vector space $V$: pick a basis, use that basis to construct an isomorphism to $\mathbb{R}^n$, and pull back the dot product on $\mathbb{R}^n$. This is an example of something we usually call "non-canonical," which roughly means that choices were involved in the definition of the inner product. Namely, we had to choose a basis for $V$, and there are many different ways to do this, yielding many different inner products. Therefore, we do not typically use this inner product. Rather, we would hope that $V$ comes with a more "natural" or "canonical" inner product before defining orthogonal transformations between arbitrary spaces.
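To spell out the construction (in my own notation): if $\mathcal{B} = \{v_1, \ldots, v_n\}$ is a basis of $V$ and $[u]_{\mathcal{B}} \in \mathbb{R}^n$ denotes the coordinate vector of $u$ with respect to $\mathcal{B}$, the induced inner product is
$$\langle u, w \rangle_{\mathcal{B}} := [u]_{\mathcal{B}} \cdot [w]_{\mathcal{B}},$$
and replacing $\mathcal{B}$ by a different basis generally produces a genuinely different inner product on $V$.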
Q-3: Again, orthogonality only makes sense if $V$ and $W$ are inner product spaces, so let's assume they do indeed carry inner product structures. Then, as we found above, while a transformation may be orthogonal, its matrix with respect to a particular pair of bases need not be. Once again, we will need to assume our bases $\mathcal{B}$ and $\mathcal{C}$ for $V$ and $W$, respectively, are orthonormal with respect to the inner products on $V$ and $W$. Under these circumstances, I believe we can conclude that the matrix of the orthogonal transformation $T$ is an orthogonal matrix (although I have not proven this fact here; you should try to find a proof, or try to write one yourself!)
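In case a hint helps (this is just my sketch of one possible route, not a full proof): if $\mathcal{C}$ is orthonormal, then $\langle w, w' \rangle_W = [w]_{\mathcal{C}} \cdot [w']_{\mathcal{C}}$, and the $j$-th column of the matrix $M$ of $T$ is $[T(b_j)]_{\mathcal{C}}$. Orthogonality of $T$ together with orthonormality of $\mathcal{B}$ gives
$$\langle T(b_i), T(b_j) \rangle_W = \langle b_i, b_j \rangle_V = \delta_{ij},$$
which says exactly that the columns of $M$ are orthonormal, i.e. $M^TM = I$; when $\dim V = \dim W$, this means $M$ is an orthogonal matrix.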
Best Answer
Yes. It is easily seen that a linear map is determined by specifying the images of a basis. In particular, in $\mathbb{C}^n$ an arbitrary basis $\{b_1,\ldots,b_n\}$ can be matched up with the "orthogonal" (more precisely, orthonormal) standard basis vectors.
The mapping taking the standard ordered basis $\{e_1,\ldots,e_n\}$ (considered as columns) to $\{b_1,\ldots,b_n\}$ is simply multiplication by the matrix $M = [b_1|\cdots|b_n]$ whose columns are the $b_i$. Therefore the inverse mapping $L:\mathbb{C}^n \to \mathbb{C}^n$ sought in the Question is explicitly multiplication by $M^{-1}$.
If the vectors $b_i$ are real-valued, so are $M$ and $M^{-1}$.
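As a quick sanity check (with a basis of my own choosing in $\mathbb{R}^2$): take $b_1 = (1,1)^T$ and $b_2 = (0,1)^T$, so
$$M = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad M^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & 1 \end{pmatrix},$$
and indeed $M^{-1}b_1 = e_1$ and $M^{-1}b_2 = e_2$.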