Actually, a basis change doesn't matter: consider an arbitrary orthonormal basis $\{e_i\}_{i = 1}^{n}$ for $\mathbb{R}^n$ (though this generalizes easily to any finite-dimensional inner product space). Orthonormality means $e_i \cdot e_j = \delta_{ij}$, so if $v = \sum a_i e_i$ and $w = \sum b_i e_i$, then
$$ v\cdot w = \left( \sum_i a_i e_i \right) \cdot \left( \sum_j b_j e_j \right) = \sum_i \sum_j a_i b_j (e_i \cdot e_j) = \sum_i \sum_j a_i b_j \,\delta_{ij} = \sum_i a_i b_i = \underline{v}^T \underline{w}$$
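This can be checked numerically. The sketch below (using NumPy, with a random orthonormal basis obtained from a QR decomposition) verifies that the dot product of the coordinate vectors agrees with $v \cdot w$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthonormal basis for R^3: the columns of Q from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

v = rng.normal(size=3)
w = rng.normal(size=3)

# Coordinates of v and w with respect to the basis given by the columns of Q.
a = Q.T @ v
b = Q.T @ w

# The dot product of the coordinate vectors equals the dot product of the
# original vectors, since Q @ Q.T is the identity.
assert np.isclose(a @ b, v @ w)
```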
As for your second question, the proof above works for any inner product on a finite-dimensional real vector space: pick a basis that is orthonormal with respect to that inner product, and the inner product of two vectors becomes the dot product of their coordinate vectors. So, up to isometry, there is only one inner product on such a space, namely the dot product. For complex vector spaces, the definition of an inner product changes slightly (it becomes conjugate-linear in one factor), but the result is the same: there is only one (up to isometry) Hilbert space of a given dimension (the dimension being the cardinality of any orthonormal basis).
This is a tremendously common confusion to have, and in my experience, people are notoriously bad at explaining this concept. I'm sorry that you had to deal with people who were abrasive in addition to poor expositors.
In an arbitrary vector space, you cannot talk about components. They actually don't exist. Now, you can impose them on a finite-dimensional space by providing a bijective linear transformation from the arbitrary vector space to $F^n$, but then they're just that: an imposition, because any other bijective linear transformation will choose different would-be "components".
Components exist in $F^n$ because of the actual nature of the objects involved. So you don't need a basis: you can just look at an arbitrary object $(a_1, a_2, \dots, a_n)$ and read off any of its components, because they're built into the object. This can be confusing because we also write coordinate vectors in this way, and when the basis is the standard basis, there is no difference between the components and the coordinates. However, in any other basis, there will be a difference.
(Edit: Val made an important point in the comments. I should have been more careful when I said there was "no difference". The fact is that coordinates and components are never conceptually the same, but I meant to say that in the standard basis case they will be numerically equal.)
Lacking a basis at all, you might want to say that $F^n$ still has coordinates implied by its components. But, in my opinion, this seems silly, since you cannot do the same in other spaces.
So the short answer is: Yes, there is a difference, because components are part of the objects.
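To make the component/coordinate distinction concrete, here's a small numerical sketch (the vector and the non-standard basis are chosen arbitrarily for illustration): in the standard basis the coordinates of $v$ equal its components, but in another basis they don't.

```python
import numpy as np

v = np.array([3.0, 1.0])          # the components of v are built into the object

# Coordinates in the standard basis coincide with the components.
E = np.eye(2)
coords_std = np.linalg.solve(E, v)
assert np.allclose(coords_std, v)

# In a different basis, the coordinates differ from the components.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # columns are the basis vectors
coords_B = np.linalg.solve(B, v)  # solve B @ c = v for the coordinate vector c
assert not np.allclose(coords_B, v)   # coords_B is [2, 1], not [3, 1]
```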
As for your "collection of vectors" notion, the two are basically the same. But it is easy to imagine a collection of vectors which is not a vector space: for example, the unit circle in $\mathbb{R}^2$. It is definitely a collection, and the objects in it are definitely vectors, but it is not a vector space (it isn't closed under addition or scaling).
What I assume you meant by "collection" was what we might call a "meaningfully structured collection", and the meaningful structure is precisely that of an abelian group whose elements can be scaled by elements of a field. In that sense, your notion is correct, though a bit less transparent.
Best Answer
Let's concentrate on the reals, in fact on $\Bbb R^3$. The vectors are column vectors containing three real numbers. Because $0$ and $1$ are special real numbers, it turns out to be really nice to work with vectors that are mostly zeroes. So $$ e_1 = \pmatrix{1\\0\\0}, e_2 = \pmatrix{0\\1\\0}, e_3 = \pmatrix{0\\0\\1}, $$ which turn out to form a basis for 3-space, are a really nice set. They come up a lot, so they get a name: "the standard basis". This generalizes to $\Bbb R^n$, and I'm pretty sure you understand the pattern.
As it happens, when we use the "standard inner product" on $\Bbb R^n$, these vectors turn out to all have length one, and be mutually perpendicular. Those two properties also come up a lot, so we give them a name: we say the basis is an "orthonormal" basis.
So at this point, you see that the standard basis, with respect to the standard inner product, is in fact an orthonormal basis.
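One compact way to check this numerically: stacking the standard basis vectors as the columns of a matrix $E$, the product $E^T E$ collects all pairwise dot products $e_i \cdot e_j$, and for an orthonormal basis that matrix is the identity. A sketch in NumPy:

```python
import numpy as np

E = np.eye(3)  # columns are the standard basis vectors e1, e2, e3

# Entry (i, j) of E.T @ E is the dot product e_i . e_j; for an
# orthonormal basis this is the identity matrix.
assert np.allclose(E.T @ E, np.eye(3))
```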
But not every orthonormal basis is the standard basis (even using the standard inner product). For instance, in $\Bbb R^2$, for any value you pick for $t$, the vectors $$ v_1 = \pmatrix{\cos t \\ \sin t} , v_2 = \pmatrix{-\sin t\\ \cos t} $$ are an orthonormal basis as well. (They're just the standard basis rotated counterclockwise by an angle $t$.)
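As a quick sanity check, the rotated basis $v_1 = (\cos t, \sin t)^T$, $v_2 = (-\sin t, \cos t)^T$ (the counterclockwise-rotation convention) can be verified to be orthonormal for an arbitrary $t$:

```python
import numpy as np

t = 0.7  # arbitrary angle
v1 = np.array([np.cos(t), np.sin(t)])
v2 = np.array([-np.sin(t), np.cos(t)])

# Both vectors have unit length and are perpendicular, for any t.
assert np.isclose(v1 @ v1, 1.0)
assert np.isclose(v2 @ v2, 1.0)
assert np.isclose(v1 @ v2, 0.0)
```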
The phrase "orthonormal basis" is always qualified with "with respect to ..." or "under ...", and then an inner product gets named. Well ... not always. Sometimes we're in the middle of talking about some inner product, and it's implicit. But a basis that's orthonormal with respect to one inner product may not be orthonormal with respect to another. Consider, on $\Bbb R^2$, the inner product defined by $$ \langle \pmatrix{a\\b} , \pmatrix{c\\d} \rangle = ac + 2bd. $$ Under this inner product, the vector $e_2$ has length $\sqrt{2}$ (since $\langle e_2, e_2 \rangle = 2$), so $\{e_1, e_2 \}$ is not an orthonormal basis with respect to this (peculiar) inner product.
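This weighted inner product can be written as $\langle x, y \rangle = x^T W y$ with Gram matrix $W = \operatorname{diag}(1, 2)$, which makes it easy to check numerically that $e_2$ fails to have unit length (so the standard basis is not orthonormal here), even though $e_1$ and $e_2$ remain orthogonal:

```python
import numpy as np

W = np.diag([1.0, 2.0])      # Gram matrix of <x, y> = x1*y1 + 2*x2*y2

def ip(x, y):
    return x @ W @ y

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

assert np.isclose(ip(e1, e1), 1.0)
assert np.isclose(ip(e2, e2), 2.0)   # so e2 has length sqrt(2), not 1
assert np.isclose(ip(e1, e2), 0.0)   # still orthogonal, just not orthonormal
```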