I'm studying linear algebra, and I've been getting to grips with the idea of groups and fields and vector spaces. From what I understand, to be a vector is just to be an element of a vector space, and all sorts of unusual (to the beginner) things turn out to be vector spaces. For example, the set of all real-valued functions is a vector space over $\mathbb{R}$. I take it this means that all real-valued functions (e.g. $\sin x$) are vectors over $\mathbb{R}$. But how does this abstract definition fit with the earlier account we are given of vectors as quantities with magnitude and direction or directed line segments?
The relation between the high school definition of a vector and the abstract algebra definition
Tags: linear-algebra, vector-spaces, vectors
Related Solutions
For question 1) the answer is "usually no". For example, let $n$ be a positive integer greater than $1$ and let $V$ be a one dimensional subspace of $\mathbb R^n$. The zero vector in $V$ is certainly not the scalar $0$. The reason I say "usually" no is that if you view $\mathbb R$ as a vector space over $\mathbb R$, then the zero vector happens to be equal to the zero scalar. You could cook up some other examples like that.
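For instance (a hypothetical choice of such a subspace, not given in the original answer), take the line $y=x$ in the plane:

$$V = \{\,(t,t) : t \in \mathbb R\,\} \subset \mathbb R^2, \qquad \text{zero vector of } V = (0,0) \neq 0.$$

Here the zero vector $(0,0)$ is a point of the plane, while the zero scalar $0$ is a real number, so they are different kinds of object.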
For question 2, speed is a scalar and velocity is a vector. If an object has speed $0$, then its velocity is the zero vector, but its speed is not equal to its velocity. (They could not be equal because they are not even the same type of mathematical object.)
To be more concrete, let's say that I introduce a coordinate system in my lab and measure that my speed (in meters/sec) is the number $0$. Then my velocity (in meters / sec) is $(0,0,0)$. And $0 \neq (0,0,0)$.
Here's another way to make the same point. Suppose that a particle's position at time $t$ is $f(t)$, where $f:(a,b) \to \mathbb R^3$ is a differentiable function. The particle's velocity at time $t_0$ is $f'(t_0)$, and the particle's speed at time $t_0$ is $\| f'(t_0)\|$. Suppose that the particle's speed at time $t_0$ is the number $0$. Then the particle's velocity at time $t_0$ is $f'(t_0) = (0,0,0)$. And again, $0 \neq (0,0,0)$.
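As a quick check of this setup with a concrete (invented) trajectory, take $f(t) = (t^2, 0, 0)$ and $t_0 = 0$:

$$f'(t) = (2t, 0, 0), \qquad f'(0) = (0,0,0), \qquad \|f'(0)\| = 0,$$

so at $t_0 = 0$ the speed is the scalar $0$ while the velocity is the vector $(0,0,0)$.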
A vector space is a triplet $(V,F, \psi)$ where $V$ is a commutative (Abelian) group and $F$ is a field and $\psi$ is a special kind of function from $F\times V$ to $V.$
Before going further, a few standard conventions:
$+$ is used for the group-operation of $V$ and also for the addition-operation of $F.$
$ 0$ is used for the group-identity of $V$ and also for the additive-identity of $F.$
$1$ is used for the multiplicative-identity of $F.$
$fv$ is used for $\psi(f,v).$
The special properties of $\psi$ are:
If $f_1,f_2\in F$ and $v\in V$ then $(f_1+f_2)v=(f_1v)+(f_2v)$ and $f_1(f_2v)=(f_1f_2)v.$
If $f\in F$ and $v_1,v_2\in V$ then $f(v_1+v_2)=(fv_1)+(fv_2).$
If $f\in F$ and $v\in V$ then $[\,fv=0$ iff ($f=0$ or $v=0)\,].$ (This property is in fact a consequence of the other axioms, but it is convenient to list it here.)
If $v\in V$ then $1v=v.$
This is called a vector-space over $F.$ But it is very common to refer to $V$ as the vector-space.
In some contexts $vf$ is defined to be $fv$ when $f\in F$ and $v\in V.$
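Incidentally, the property "$fv=0$ iff ($f=0$ or $v=0$)" can be deduced from the other listed properties together with the field axioms; a sketch:

$$\begin{aligned}
0v &= (0+0)v = 0v + 0v, && \text{so } 0v = 0 \text{ by cancellation in the group } V,\\
f\,0 &= f(0+0) = f\,0 + f\,0, && \text{so } f\,0 = 0,\\
v &= 1v = (f^{-1}f)v = f^{-1}(fv) = f^{-1}\,0 = 0 && \text{whenever } fv = 0 \text{ and } f \neq 0.
\end{aligned}$$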
Best Answer
During one's original study of vectors, while thinking of them as "quantities with magnitude and direction" or as "directed line segments", one slowly learns various properties of vectors involving the operations of arithmetic.
For example, one learns that two vectors in the same vector space (e.g. in $2$-dimensional space $\mathbb R^2$ or in $3$-dimensional space $\mathbb R^3$) can be added to produce another vector, using the parallelogram law of addition. Abstracting this slightly, one simply says: any two vectors $\vec V$ and $\vec W$ have a sum, denoted $\vec V + \vec W$, which is also a vector.
One also learns that it doesn't matter in what order you add two vectors; the result of the operation turns out the same. Abstractly,

$$\vec V + \vec W = \vec W + \vec V.$$
Just as with numbers, if you want to add three or more vectors, you have to do it two at a time by inserting parentheses to group terms, but it doesn't matter how you insert those parentheses:

$$(\vec U + \vec V) + \vec W = \vec U + (\vec V + \vec W).$$
One next learns that there is actually an exception to the rule that a vector is a "directed line segment" or a "quantity with magnitude and direction", because there is a very special vector $\vec O$ which has no magnitude and points nowhere. You can still add this vector to other vectors, but the outcome is special: $\vec O + \vec V = \vec V$ no matter what $\vec V$ is.
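One small consequence worth noting in the same spirit: there can be only one such vector, for if $\vec O'$ also satisfied this law, then

$$\vec O' = \vec O + \vec O' = \vec O' + \vec O = \vec O,$$

using the law for $\vec O$, then commutativity, then the law for $\vec O'$.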
Now, I could go on in this vein, continuing with a slow explanation of how scalar multiplication works, and then writing down the abstract laws of scalar multiplication. But perhaps you have seen the pattern: the abstract laws that I am writing down are building up to be the abstract definition of a vector space. The above four laws are already about half of the definition of a vector space; I'm sure you can carry out the rest of this process with scalar multiplication.
Abstraction is a central process in the human development of mathematics. We look at familiar objects (e.g. vectors), we study their properties, if we're clever enough we abstract those properties, and then we notice: Hey! There are many other settings where those properties hold! I think I'll call those settings vector spaces!
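To close the loop with the example in the question: the set of real-valued functions, with pointwise operations, obeys exactly the same four laws, which is what makes it a vector space:

$$\begin{aligned}
(f+g)(x) &= f(x) + g(x) && \text{(the sum of two functions is again a function)},\\
f+g &= g+f && \text{(inherited from commutativity in } \mathbb R\text{)},\\
(f+g)+h &= f+(g+h) && \text{(associativity)},\\
O + f &= f && \text{(the zero function } O(x)=0 \text{ plays the role of } \vec O\text{)}.
\end{aligned}$$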