I think the following article:
Gregory H. Moore. The axiomatization of linear algebra: 1875-1940. Historia Mathematica, Volume 22, Issue 3, 1995, Pages 262–303
(Available here from Elsevier)
may shed some light on your question, although you may not have enough mathematical experience to understand the entire article. Here is my understanding from browsing the article, but I must stress that I am not a historian of mathematics, so please don't quote me!
The idea of an abstract space in which an addition is defined between elements together with a field action (rather than a particular realization such as $\mathbb{R}^n$ or $C([0,1])$) seems to be due to Peano in 1888, who called such spaces linear systems. The definition of an abstract vector space didn't catch on until the 1920s, in the work of Banach, Hahn, and Wiener, each working separately. Hahn defined linear spaces in order to unify the theory of singular integrals and Schur's theory of linear transformations of series (both involving infinite-dimensional spaces). Wiener introduced vector systems, which seem to be roughly equivalent to Banach's definition; Banach was motivated by finding a common framework for integral operators, which were defined on champs (domains). (Banach's 1922 paper "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales" is available online and is quite readable.)
I understand that the modern name vector space owes its popularity to a widely circulated 1941 textbook by Birkhoff and MacLane, A Survey of Modern Algebra, where the term is used.
As Asaf and Hans have indicated in their comments, the motivation for calling such spaces vector spaces is that, intuitively, they generalize our understanding of "vectors" (differences between points) in a finite-dimensional Euclidean space. The motivation for calling such spaces linear spaces is that the ability to add different elements together is the crucial feature which lets us apply the general theory to solve specific problems which are not obviously (to the 1920s eye) about vectors (in particular, in PDE and mathematical physics).
In your course, it is unlikely you will cover material that requires this abstraction, but it is a good habit for later mathematics to work in generality while you maintain your intuition in concrete examples.
Isomorphisms are defined in many different contexts, but they all share a common thread.
Given two objects $G$ and $H$ (which are of the same type; maybe groups, or rings, or vector spaces... etc.), an isomorphism from $G$ to $H$ is a bijection $\phi:G\rightarrow H$ which, in some sense, respects the structure of the objects. In other words, an isomorphism identifies the two objects as actually being the same object, after a renaming of the elements.
In the example that you mention (vector spaces), an isomorphism between $V$ and $W$ is a bijection $\phi:V\rightarrow W$ which respects scalar multiplication, in that $\phi(\alpha\vec{v})=\alpha\phi(\vec{v})$ for all $\vec{v}\in V$ and $\alpha\in K$, and also respects addition in that $\phi(\vec{v}+\vec{u})=\phi(\vec{v})+\phi(\vec{u})$ for all $\vec{v},\vec{u}\in V$. (Here, we've assumed that $V$ and $W$ are both vector spaces over the same base field $K$.)
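As a concrete illustration (my own example, not part of the answer above): the map $(a,b)\mapsto a+bi$ from $\mathbb{R}^2$ to $\mathbb{C}$, both viewed as vector spaces over $K=\mathbb{R}$, is a bijection that respects both operations, so it is an isomorphism of real vector spaces. A quick numerical sanity check in Python:

```python
# Illustrative check: phi(a, b) = a + b*i from R^2 to C preserves
# addition and real scalar multiplication. (R^2 is modeled as a
# pair of floats, C as Python's built-in complex type.)

def phi(v):
    """Map a pair (a, b) in R^2 to the complex number a + b*i."""
    a, b = v
    return complex(a, b)

def add(u, v):
    """Componentwise addition in R^2."""
    return (u[0] + v[0], u[1] + v[1])

def scale(alpha, v):
    """Scalar multiplication in R^2 by a real number alpha."""
    return (alpha * v[0], alpha * v[1])

u, v, alpha = (1.0, 2.0), (-3.0, 0.5), 4.0

# phi respects addition: phi(u + v) = phi(u) + phi(v)
assert phi(add(u, v)) == phi(u) + phi(v)

# phi respects scalar multiplication: phi(alpha * v) = alpha * phi(v)
assert phi(scale(alpha, v)) == alpha * phi(v)
```

The map is also a bijection, since $z\mapsto(\operatorname{Re}z,\operatorname{Im}z)$ inverts it, so both conditions of the definition are met.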
A vector space is defined as a quadruple $(\mathbf{V},\mathbb{K},\oplus,\odot)$ where $\mathbf{V}$ is a set of elements called vectors, $\mathbb{K}$ is a field $(\mathbb{K},+,\cdot)$, $\oplus$ is a binary operation (called sum) on $\mathbf{V}$ such that $(\mathbf{V},\oplus)$ is an Abelian group, and $\odot:\mathbb{K}\times\mathbf{V} \rightarrow \mathbf{V}$ is a scalar multiplication such that, $\forall a,b \in \mathbb{K}$ and $\forall \mathbf{u},\mathbf{v} \in \mathbf{V}$, we have: $$ a\odot(b\odot\mathbf{v})=(a\cdot b)\odot\mathbf{v} $$ $$ 1\odot\mathbf{v}=\mathbf{v} $$ $$ a \odot (\mathbf{u}\oplus\mathbf{v})=a \odot\mathbf{u}\oplus a\odot \mathbf{v} $$ $$ (a+b)\odot \mathbf{v}=a\odot \mathbf{v}\oplus b\odot \mathbf{v} $$ Here $1$ is the multiplicative identity of $\mathbb{K}$, and note that $(+,\cdot)$ are the operations on $\mathbb{K}$, which are different from the operations $(\oplus, \odot)$.
This ''structure'' is a generalization of the $3$-dimensional space of geometry, where the vectors are oriented segments, an ''addition'' is defined between them by the parallelogram law, and ''scalar multiplication'' is multiplication by a real number.
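To make the four scalar axioms concrete, here is a small numerical check of my own (assuming $\mathbf{V}=\mathbb{R}^2$ and $\mathbb{K}=\mathbb{R}$ with the usual componentwise operations, the standard first example of a vector space):

```python
# Illustrative check (not from the original answer): R^2 over R,
# with componentwise operations, satisfies the four scalar axioms.

def oplus(u, v):
    """Vector sum: componentwise addition on R^2."""
    return (u[0] + v[0], u[1] + v[1])

def odot(a, v):
    """Scalar multiplication of v in R^2 by a real number a."""
    return (a * v[0], a * v[1])

a, b = 2.0, -3.0
u, v = (1.0, 4.0), (5.0, -2.0)

# a ⊙ (b ⊙ v) = (a · b) ⊙ v
assert odot(a, odot(b, v)) == odot(a * b, v)

# 1 ⊙ v = v   (1 is the multiplicative identity of the field)
assert odot(1.0, v) == v

# a ⊙ (u ⊕ v) = (a ⊙ u) ⊕ (a ⊙ v)
assert odot(a, oplus(u, v)) == oplus(odot(a, u), odot(a, v))

# (a + b) ⊙ v = (a ⊙ v) ⊕ (b ⊙ v)
assert odot(a + b, v) == oplus(odot(a, v), odot(b, v))
```

Of course, a check at a few sample points is not a proof; for $\mathbb{R}^2$ the axioms follow directly from the field axioms of $\mathbb{R}$ applied in each coordinate.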