One of the ingredients of a vector space is a definition of scalar multiplication; but you need to know what field the scalars live in! A vector space $V$ over $\mathbb{F}$ has as part of its data a scalar-multiplication map $\mathbb{F}\times V\to V$. The same set $V$ could be given a different vector space structure by a multiplication map $\mathbb{K}\times V\to V$; this is most common when $\mathbb{K}\subset\mathbb{F}$, in which case the second map is simply the restriction of the first.
For example, $\mathbb{C}$ is a vector space over $\mathbb{C}$ as you mention in the question, but it is also a vector space over $\mathbb{R}$, since you can simply restrict scalar multiplication to real scalars. These two structures are quite different; for example, the single element $1$ is a basis of $\mathbb{C}$ over $\mathbb{C}$, since every complex number is of the form $z\cdot 1$ for some unique $z\in\mathbb{C}$. Over $\mathbb{R}$, one possible basis is $1,i$, since every complex number is of the form $a+bi$ for a unique pair $(a,b)\in\mathbb{R}^2$. Notice that the dimension of $\mathbb{C}$ changed depending on whether we gave it the structure of a vector space over $\mathbb{C}$ or over $\mathbb{R}$.
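The two structures can be made concrete in a short numerical sketch (the specific number $z = 3 - 4i$ is just an illustration):

```python
import numpy as np

# The same set C, two vector-space structures.  Over C, {1} is a basis:
# every z equals z * 1 for a unique complex scalar z (dimension 1).
z = 3.0 - 4.0j
assert z * 1 == z

# Over R, scalars are restricted to the reals, and {1, i} is a basis:
# z = a*1 + b*i for a unique real pair (a, b) (dimension 2).
coords = np.array([z.real, z.imag])   # coordinates of z in the basis {1, i}
reconstructed = coords[0] * 1 + coords[1] * 1j
assert reconstructed == z
assert coords.tolist() == [3.0, -4.0]
```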
Two square matrices $A$ and $B$ are said to be similar, or conjugate, if there exists an invertible square matrix $P$ such that $A = P^{-1}BP$. This is equivalent to saying that $A$ and $B$ represent the same linear transformation in different bases, with $P$ providing the change-of-basis matrix that relates them.
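A quick numerical check of this fact (the particular $B$ and random $P$ below are just illustrations): similar matrices represent the same linear map, so they share the same eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix B and an invertible change-of-basis matrix P (a random P is
# invertible with probability 1; we check the determinant anyway).
B = np.array([[2.0, 1.0], [0.0, 3.0]])
P = rng.standard_normal((2, 2))
assert abs(np.linalg.det(P)) > 1e-12

# A = P^{-1} B P is similar (conjugate) to B ...
A = np.linalg.inv(P) @ B @ P

# ... so A and B have the same eigenvalues: same linear map, new basis.
eig_A = np.sort(np.linalg.eigvals(A))
eig_B = np.sort(np.linalg.eigvals(B))
assert np.allclose(eig_A, eig_B)
```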
If one wants to solve a linear equation but is working in an inconvenient basis, it may help to change to a more convenient one. Sometimes a convenient basis can be found by inspection, but in general one changes basis to bring the matrix in question into Jordan canonical form. For solving linear equations the Jordan canonical form is ideal, since (1) it has a very simple structure (upper triangular, with only $0$s and $1$s just above the diagonal) and (2) it exists for every square matrix over an algebraically closed field such as $\mathbb{C}$.
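The Jordan form can be computed symbolically; the small matrix below is a hypothetical example with one defective eigenvalue, using SymPy's `jordan_form`:

```python
from sympy import Matrix

# A matrix that is not diagonalizable: the eigenvalue 2 is defective,
# so the Jordan form contains a 2x2 Jordan block.
A = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])

# jordan_form() returns the change-of-basis matrix P and the Jordan
# canonical form J, related by A = P J P^{-1}.
P, J = A.jordan_form()

assert P * J * P.inv() == A                # J represents A in the new basis
assert J.is_upper                          # upper triangular
assert sorted(J.diagonal()) == [2, 2, 3]   # eigenvalues on the diagonal
# just above the diagonal: a single 1 (the 2x2 block), everything else 0
assert sorted(J[i, i + 1] for i in range(2)) == [0, 1]
```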
It is important for theoretical reasons to know that one can always find the Jordan canonical form of a square matrix. It simplifies many abstract proofs to assume that the matrix in the proof is in Jordan canonical form. If you know a little abstract algebra, the Jordan canonical form is also of interest in that it completely classifies the conjugacy classes of matrices over the complex numbers (and some other fields as well), and is a special case of a more general phenomenon: the structure theorem for finitely generated modules over a principal ideal domain.
However, for practical, real-world purposes the Jordan canonical form is not ideal. The primary example of a real-world application would be solving a system of linear equations (for example, one that comes up when trying to solve a system of linear ODEs), and unfortunately the Jordan canonical form is not well-suited to this task in practice. The reason is that the Jordan canonical form is very sensitive to perturbations in the original matrix: if an entry $a_{ij}$ of the matrix $A$ is perturbed to $a_{ij}+\epsilon$, it is entirely possible for the Jordan canonical form of the new matrix to be wildly different from the original one. In other words, the Jordan canonical form is not numerically stable.
The numerical instability of the Jordan canonical form makes it unsuitable for real-world applications, where systems of linear equations arise from measured data that always carries some uncertainty. For this reason, practical computations abandon the Jordan canonical form in favor of numerically stable algorithms. One example is the Schur factorization, which also conjugates a matrix into upper triangular form, but does so using unitary matrices, and thus likewise simplifies the solution of linear systems.
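Both points can be seen in a few lines (the perturbation size $\epsilon = 10^{-8}$ is an arbitrary illustration): a tiny perturbation of a Jordan block makes the matrix diagonalizable, so its Jordan form changes discontinuously, while the Schur factorization is computed with stable unitary transformations.

```python
import numpy as np
from scipy.linalg import schur
from sympy import Matrix, Rational

# Jordan form is discontinuous in the matrix entries: perturbing one entry
# of a 2x2 Jordan block by eps = 1e-8 splits the double eigenvalue, so the
# Jordan form jumps from one 2x2 block to a diagonal matrix.
eps = Rational(1, 10**8)
J_exact = Matrix([[1, 1], [0, 1]]).jordan_form()[1]        # one 2x2 block
J_perturbed = Matrix([[1, 1], [eps, 1]]).jordan_form()[1]  # two 1x1 blocks
assert J_exact != J_perturbed  # wildly different despite the tiny eps

# The Schur factorization A = Q T Q* (Q unitary, T upper triangular) is
# computed by numerically stable orthogonal transformations.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
T, Q = schur(A)
assert np.allclose(Q @ T @ Q.conj().T, A)  # reconstructs A
assert np.allclose(np.tril(T, -1), 0)      # T is upper triangular
```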
Best Answer
The finite-dimensional case is much simpler, one can say much more about it, and it already suffices for many applications (systems of equations, first-order linear homogeneous constant-coefficient ODEs). It's a great place to start for practical, theoretical, and pedagogical reasons.
In the infinite-dimensional case a few things still work (the existence of bases (although this requires the axiom of choice), rank-nullity), but much of the familiar toolkit breaks down: determinants and traces are no longer defined for general operators, a linear operator need not have any eigenvalues at all (e.g. the shift operator on sequences), the Jordan canonical form is unavailable, a vector space is no longer isomorphic to its double dual, and probably other terrible stuff I'm forgetting.
Also, if you don't believe in the axiom of choice, vector spaces can fail to have bases or can have trivial dual. Here is an illustrative example: consider the uncountable-dimensional vector space $\mathbb{R}^{\mathbb{N}}/\mathbb{R}^{(\mathbb{N})}$, the quotient of the space of all functions $\mathbb{N} \to \mathbb{R}$ by the subspace of functions with finite support (those that are zero except at finitely many points). Can you explicitly write down a basis of this vector space? (Already it is a nontrivial and nice exercise to write down uncountably many linearly independent vectors.) Can you explicitly write down a nonzero linear functional on it?
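For the parenthetical exercise, one standard construction (a hint, sketched here under the stated setup; the notation $v_r$ is introduced just for this sketch) uses geometric sequences:

```latex
For each real $r > 0$ consider the geometric sequence
\[
  v_r = (1, r, r^2, r^3, \dots) \in \mathbb{R}^{\mathbb{N}}.
\]
If a finite linear combination of such sequences has finite support, then
\[
  \sum_{i=1}^{k} c_i r_i^{\,n} = 0
  \quad \text{for all sufficiently large } n,
  \qquad 0 < r_1 < \dots < r_k.
\]
Dividing by $r_k^{\,n}$ and letting $n \to \infty$ forces $c_k = 0$;
induction then kills every $c_i$, so the classes $[v_r]$ are uncountably
many linearly independent vectors in
$\mathbb{R}^{\mathbb{N}}/\mathbb{R}^{(\mathbb{N})}$.
```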
In order to deal with infinite-dimensional linear algebra, which historically was almost entirely motivated by the desire to solve differential equations such as the heat equation or wave equation, we need to bring in tools from analysis; this involves talking about topological vector spaces, normed vector spaces, Banach spaces, Hilbert spaces, etc. depending on the desired application. This is much more machinery than is needed in the finite-dimensional case and is covered in a course on functional analysis, not an introduction to linear algebra.