Elements of a real vector space certainly have direction, but they don't really have a magnitude. Well, actually, they kind of have a magnitude. But for a proper magnitude, you need further structure, such as a norm or inner product. Let me explain.
Vector Spaces.
Suppose $V$ is a real vector space.
Definition 0. Given vectors $x,y \in V$, we say that $x$ and $y$
have the same direction iff:
- there exists $r \in \mathbb{R}_{\geq 0}$ such that $x = ry,$ and
- there exists $r \in \mathbb{R}_{\geq 0}$ such that $y = rx$.
(The $r$'s don't have to be the same.)
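Definition 0 is easy to test numerically in $\mathbb{R}^n$. Here's a minimal sketch using NumPy; the function name `same_direction` is my own, and I use the dot product purely as a computational device for recovering the candidate scalar (the definition itself needs no inner product):

```python
import numpy as np

def same_direction(x, y, tol=1e-12):
    """Definition 0: x = r*y and y = s*x for some r, s >= 0."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    # The zero vector has the same direction only as itself:
    # 0 = r*y with y nonzero forces r = 0, but then y = s*0 fails.
    if np.allclose(x, 0, atol=tol) or np.allclose(y, 0, atol=tol):
        return np.allclose(x, 0, atol=tol) and np.allclose(y, 0, atol=tol)
    # For nonzero vectors, both conditions hold iff x is a
    # strictly positive multiple of y.
    r = np.dot(x, y) / np.dot(y, y)  # best candidate scalar
    return r > 0 and np.allclose(x, r * y, atol=tol)

print(same_direction([2, 4], [1, 2]))    # True: positive multiples
print(same_direction([2, 4], [-1, -2]))  # False: opposite ray
```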
This induces an equivalence relation on $V$, so we get a partition of $V$ into cells. Each cell is an open ray, so long as we regard $\{0\}$ as an open ray. You may wish to exclude $\{0\}$ from its privileged position as a ray, in which case you should deal only with non-zero vectors; that is, work with $V \setminus \{0\}$ rather than $V$.
Irrespective of which conventions are used, we can make sense of direction using these ideas:
Definition 1. The direction of $x \in V$ is the unique open ray $R \subseteq V$ such that $x \in R$.
Notice that the equivalence relation of having the same direction is preserved under scalar multiplication; what I mean is that if $v$ and $w$ have the same direction, then $av$ and $aw$ have the same direction, for any $a \in \mathbb{R}$. Geometrically, this means that if we scale a ray, we'll end up with a subset of another ray.
As for magnitude; well, if you choose a ray $R \subseteq V$, then we can partially order $R$ as follows. Given $x,y \in R$, we define that $x \geq y$ iff $x = ry$ for some $r \in \mathbb{R}_{\geq 1}$. So some vectors along this ray are longer than others, hence magnitude.
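In coordinates, this order along a ray can be checked directly. A small sketch (the name `ray_geq` is mine; it assumes its arguments lie on a common ray of $\mathbb{R}^n$):

```python
import numpy as np

def ray_geq(x, y, tol=1e-12):
    """x >= y in the ray order: x = r*y for some r >= 1."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    if np.allclose(y, 0, atol=tol):
        # On the ray {0}, the only comparison is 0 >= 0.
        return np.allclose(x, 0, atol=tol)
    r = np.dot(x, y) / np.dot(y, y)  # candidate scalar with x = r*y
    return r >= 1 - tol and np.allclose(x, r * y, atol=tol)

print(ray_geq([3, 6], [1, 2]))  # True: r = 3 >= 1
print(ray_geq([1, 2], [3, 6]))  # False: r = 1/3 < 1
```

Note the order is only partial on all of $V$: vectors on different rays are simply incomparable under this definition.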
Inner Product Spaces.
Actually, this isn't the whole story. The problem with vector spaces is that if $x$ and $y$ don't belong to the same ray (nor to the "negatives" of each other's rays), then there's no way of comparing the magnitudes of $x$ and $y$. We can't say which is longer! Now there are mathematical situations where this limitation is desirable, but physically, you probably don't want this. A related issue is that you can't really make sense of angles in a (mere) vector space; at least, not without some further structure.
For this reason, when physicists say "vector", what they usually mean is "element of a finite-dimensional inner-product space." This is a (finite-dimensional) vector space $V$ with further structure; in particular, it comes equipped with a function
$$\langle-,-\rangle : V \times V \rightarrow \mathbb{R}$$
that is required to satisfy certain axioms resembling the dot product. Especially important for us is that these axioms include a "non-negativity" condition:
$$\langle x,x\rangle \geq 0$$
Using this, we can define the magnitude of vectors as follows.
Definition 2. Suppose $V$ is a real inner product space. Then the norm (or "magnitude") of $x \in V$, denoted $\|x\|$, is defined as follows:
$$\|x\| = \langle x,x\rangle^{1/2}$$
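With the standard dot product on $\mathbb{R}^n$ playing the role of the inner product, Definition 2 reduces to the familiar Euclidean length. A quick sanity check:

```python
import numpy as np

def norm(x):
    """Definition 2: ||x|| = <x, x>^(1/2), with <-,-> the dot product."""
    x = np.asarray(x, dtype=float)
    return np.dot(x, x) ** 0.5

print(norm([3, 4]))  # 5.0, the 3-4-5 right triangle
print(np.isclose(norm([3, 4]), np.linalg.norm([3, 4])))  # True: agrees with NumPy
```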
This allows us to compare the magnitudes of vectors that don't live in the same ray; we simply define that $x \geq y$ means $\|x\| \geq \|y\|.$ When confined to a single ray, this agrees with our earlier definition! Be careful though, because the relation $\geq$ we just defined is only a preorder.
In fact, the inner product gives us more than just magnitudes; it also gives angles!
Definition 3. Suppose $V$ is a real inner product space. Then the angle between $x,y \in V$, denoted $\mathrm{ang}(x,y)$, is defined as follows:
$$\mathrm{ang}(x,y) = \cos^{-1}\left(\frac{\langle x,y\rangle}{\|x\|\|y\|}\right)$$
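Again specializing to the dot product on $\mathbb{R}^n$, Definition 3 can be computed directly (a sketch; the clip guards against floating-point roundoff pushing the cosine slightly outside $[-1,1]$, which Cauchy-Schwarz forbids exactly):

```python
import numpy as np

def ang(x, y):
    """Definition 3: ang(x, y) = arccos(<x, y> / (||x|| ||y||))."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
    # Cauchy-Schwarz guarantees |c| <= 1; clip only absorbs rounding error.
    return np.arccos(np.clip(c, -1.0, 1.0))

print(ang([1, 0], [0, 1]))  # pi/2: perpendicular axes
print(ang([1, 2], [2, 4]))  # 0.0: same direction, as claimed below
```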
It can be shown that vectors $x$ and $y$ have the same direction (in the sense described at the beginning of my post) iff the angle between them is $0$. In fact, you can modify the above definition so that it defines the angle between any two non-zero open rays. In this case, it turns out that two rays are equal iff the angle between them is $0$.
The reason the author says this is that it's a kind of motivation for the definition of a linear mapping. Just saying a linear map is any $f : V \rightarrow W$ for vector spaces $V$ and $W$ which satisfies:
$$f(u+v) = f(u) + f(v)$$
and
$$f(\lambda u) = \lambda f(u)$$
is indeed the definition, but it is not enlightening in the slightest, so the author is attempting to give some insight into why we have this definition.
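One concrete family of examples: multiplication by any fixed matrix satisfies both conditions, so it defines a linear map. A small numerical check (the names and the random seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # any 3x2 matrix gives f : R^2 -> R^3
f = lambda v: A @ v

u = rng.standard_normal(2)
v = rng.standard_normal(2)
lam = 2.5

print(np.allclose(f(u + v), f(u) + f(v)))   # True: additivity
print(np.allclose(f(lam * u), lam * f(u)))  # True: homogeneity
```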
Generally, when vectors are added together, or multiplied by a scalar, the result is also a vector (the same type of object).
How are the above properties of a linear transformation preserving this?
Let's go over what vector properties mean. First, we have that adding vectors produces a vector, so, for vectors in boldface, we have:
$$\mathbf{u + v} = \mathbf{u} + \mathbf{v}$$
For example:
$$\begin{bmatrix} 1 + 2 \\ 4+5 \\ -1 + 15 \end{bmatrix} = \begin{bmatrix} 1 \\ 4 \\ -1 \end{bmatrix} + \begin{bmatrix}2 \\ 5 \\ 15 \end{bmatrix}$$
We also have the ability to pull out scalars:
$$\mathbf{au} = a\mathbf{u}$$
For example:
$$\begin{bmatrix} 10 \\ 40 \\ -10 \end{bmatrix} = 10\begin{bmatrix} 1 \\ 4 \\ -1 \end{bmatrix}$$
What this tells us is that the linear map is compatible with the rules of vector arithmetic. This is one of the author's intentions behind saying that it maintains structure.
Indeed, if $\mathbf{w} = \lambda \mathbf{u} + \delta \mathbf{v}$ and $f$ is a linear mapping, then $f(\mathbf{w}) = \lambda f(\mathbf{u}) + \delta f(\mathbf{v})$ (can you prove this?). So a linear map preserves the relationships between vectors. Can you find three vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$ such that $\mathbf{a} = \mathbf{b} + \mathbf{c}$, but, for $g(x) = x^2$, we have $g(\mathbf{a}) \neq g(\mathbf{b}) + g(\mathbf{c})$? Why does such a counterexample show that $g$ is not a linear map?
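To make the exercise concrete: a single failing instance already settles it. Here's one with numbers I've chosen (so this is one possible answer, not the only one):

```python
# Take a = b + c with b = 1, c = 2, so a = 3.
g = lambda x: x ** 2

a, b, c = 3, 1, 2
print(g(a))         # 9
print(g(b) + g(c))  # 1 + 4 = 5, not 9: additivity fails
# A linear map must satisfy f(b + c) = f(b) + f(c) for ALL inputs,
# so one counterexample is enough: g is not linear.
```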
This idea that maps "preserve the decomposition of a vector into a sum of scaled vectors" will be vital to defining things like a basis of a vector space, and other useful properties.
As long as the LHS and RHS of both given properties belong to the same vector space, can I not consider the vector properties preserved? Why do they have to be equal?
In this case, once again consider $g(x) = x^2$. This function runs from $\mathbb{R}$ to $\mathbb{R}$, both of which are vector spaces, but it does not satisfy our properties. Hence while the output is a vector, the way that the output relates to its component parts is not the same as the way that the input relates to its component parts.
Once again, this is related to the idea of a basis, in which we find we may write a vector as a unique linear combination of a finite set of vectors (if the vector space is finite dimensional), and linear maps preserve that combination, but I do not know if you have been exposed to a basis yet.
In modern mathematics, there's a tendency to define things in terms of what they do rather than in terms of what they are.
As an example, suppose that I claim that there are objects called "pizkwats" that obey the following laws:
$$\forall x. \forall y. \forall z.\ (x + y) + z = x + (y + z)$$
$$\forall x.\ x + 0 = x$$
$$\forall x.\ x + x = 0$$
These rules specify what pizkwats do by saying what rules they obey, but they don't say anything about what pizkwats are. We can find all sorts of things that we could call pizkwats. For example, we could imagine that pizkwats are the numbers 0 and 1, with addition being done modulo 2. They could also be bitstrings of length 137, with "addition" meaning "bitwise XOR." Or they could be sets, with "addition" meaning "symmetric difference." Each of these collections of objects obeys the rules for what pizkwats do, but none of them "are" pizkwats.
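To see that these really are "the same" structure in different clothes, note that on $\{0, 1\}$ addition modulo 2 and bitwise XOR agree entry by entry, and in both models every element is its own inverse. A quick check:

```python
# Model 1: {0, 1} with addition mod 2.  Model 2: bits with XOR.
for x in (0, 1):
    for y in (0, 1):
        print((x + y) % 2 == (x ^ y))  # True: the two "additions" coincide

# In both models, every element is its own inverse: x + x = 0.
print(all(((x + x) % 2 == 0) and (x ^ x == 0) for x in (0, 1)))  # True
```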
The advantage of this approach is that we can prove results about pizkwats knowing purely how they behave rather than what they fundamentally are. For example, as a fun exercise, see if you can use the above rules to prove that
$$\forall x. \forall y.\ x + y = y + x$$
This means that anything that "acts like a pizkwat" must support a commutative addition operator. Similarly, we could prove that
$$\forall x. \forall y.\ (x + y = 0 \implies y = x)$$
The advantage of setting things up this way is that any time we find something that "looks like a pizkwat," in the sense that it obeys the rules given above, we're guaranteed that it must have some other properties: its addition is commutative, and every element is its own unique inverse. We could develop a whole elaborate theory about how pizkwats behave purely from the rules of how they work, and since we never actually said what a pizkwat is, anything we find that looks like a pizkwat instantly falls into our theory.
In your case, you're asking about what a vector is. In a sense, there is no single thing called "a vector," because a vector is just something that obeys a bunch of rules. But any time you find something that looks like a vector, you immediately get a bunch of interesting facts about it - you can ask questions about spans, about changing basis, etc. - regardless of whether that thing is a vector in the classical sense (a list of numbers, or an arrow pointing somewhere) or a vector in a more abstract sense (say, a function acting as a vector in a "vector space" made of functions).
As a concluding remark, Grant Sanderson of 3blue1brown has an excellent video talking about what vectors are that explores this in more depth.