Elements of a real vector space certainly have direction, but they don't really have a magnitude. Well, actually, they kind of have a magnitude. But for a proper magnitude, you need further structure, such as a norm or inner product. Let me explain.
Vector Spaces.
Suppose $V$ is a real vector space.
Definition 0. Given vectors $x,y \in V$, we say that $x$ and $y$
have the same direction iff:
- there exists $r \in \mathbb{R}_{\geq 0}$ such that $x = ry,$ and
- there exists $r \in \mathbb{R}_{\geq 0}$ such that $y = rx$.
(The $r$'s don't have to be the same.)
This is an equivalence relation on $V$, so it partitions $V$ into cells. Each cell is an open ray, so long as we regard $\{0\}$ as an open ray. You may wish to exclude $\{0\}$ from its privileged position as a ray, in which case you should deal only with non-zero vectors; that is, work with $V \setminus \{0\}$ rather than $V$.
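As a concrete (and entirely optional) illustration, here is a small Python sketch of Definition 0, modelling vectors in $\mathbb{R}^n$ as tuples of floats; the function name and the tolerance are my own choices:

```python
def same_direction(x, y, tol=1e-12):
    """Decide whether x and y lie on the same open ray (Definition 0)."""
    x_zero = all(abs(c) < tol for c in x)
    y_zero = all(abs(c) < tol for c in y)
    if x_zero or y_zero:
        # {0} is its own cell: the zero vector shares a direction only with itself.
        return x_zero and y_zero
    # Pick a candidate scalar r from the first coordinate where y is nonzero.
    i = next(j for j, c in enumerate(y) if abs(c) >= tol)
    r = x[i] / y[i]
    # We need r >= 0 and x = r*y in every coordinate; since both vectors are
    # nonzero, r > 0, and then y = (1/r)*x follows automatically.
    return r > 0 and all(abs(a - r * b) < tol for a, b in zip(x, y))
```

Note that only one of the two conditions in Definition 0 needs explicit checking here, because for non-zero vectors the scalar is forced to be positive and invertible.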
Irrespective of which conventions are used, we can make sense of direction using these ideas:
Definition 1. The direction of $x \in V$ is the unique open ray $R \subseteq V$ such that $x \in R$.
Notice that the equivalence relation of having the same direction is preserved under scalar multiplication; what I mean is that if $v$ and $w$ have the same direction, then $av$ and $aw$ have the same direction, for any $a \in \mathbb{R}$. Geometrically, this means that if we scale a ray, we'll end up with a subset of another ray.
As for magnitude: if you choose a ray $R \subseteq V$, then we can partially order $R$ as follows. Given $x,y \in R$, we define $x \geq y$ iff $x = ry$ for some $r \in \mathbb{R}_{\geq 1}$. So some vectors along this ray are longer than others; hence, magnitude.
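Here is a hypothetical Python helper, in the same tuples-as-vectors model, implementing this partial order along a ray; `ray_geq` is my own name for it:

```python
def ray_geq(x, y, tol=1e-12):
    """On a common ray, x >= y iff x = r*y for some scalar r >= 1."""
    i = next((j for j, c in enumerate(y) if abs(c) >= tol), None)
    if i is None:
        # y = 0, so the ray is {0}: x >= y only if x is also 0.
        return all(abs(c) < tol for c in x)
    r = x[i] / y[i]
    # Check r >= 1 and that x really is r*y in every coordinate.
    return r >= 1 - tol and all(abs(a - r * b) < tol for a, b in zip(x, y))
```

For example, $(3,6) \geq (1,2)$ along their common ray, but not conversely.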
Inner Product Spaces.
Actually, this isn't the whole story. The problem with vector spaces is that if $x$ and $y$ don't belong to the same ray (nor to the "negatives" of each other's rays), then there's no way of comparing the magnitudes of $x$ and $y$. We can't say which is longer! Now, there are mathematical situations where this limitation is desirable, but physically, you probably don't want this. A related issue is that you can't really make sense of angles in a (mere) vector space; at least, not without some further structure.
For this reason, when physicists say "vector", what they usually mean is "element of a finite-dimensional inner-product space." This is a (finite-dimensional) vector space $V$ with further structure; in particular, it comes equipped with a function
$$\langle-,-\rangle : V \times V \rightarrow \mathbb{R}$$
that is required to satisfy certain axioms resembling the dot product. Especially important for us is that these axioms include a "non-negativity" condition:
$$\langle x,x\rangle \geq 0$$
Using this, we can define the magnitude of vectors as follows.
Definition 2. Suppose $V$ is a real inner product space. Then the norm (or "magnitude") of $x \in V$, denoted $\|x\|$, is defined as follows:
$$\|x\| = \langle x,x\rangle^{1/2}$$
This allows us to compare the magnitudes of vectors that don't live in the same ray; we simply define that $x \geq y$ means $\|x\| \geq \|y\|.$ When confined to a single ray, this agrees with our earlier definition! Be careful though, because the relation $\geq$ we just defined is only a preorder.
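In the standard model where the inner product is the dot product on $\mathbb{R}^n$, Definition 2 can be sketched in a few lines of Python (the helper names are mine):

```python
import math

def inner(x, y):
    """Standard dot product on R^n, our model inner product."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Definition 2: ||x|| = <x, x>^(1/2)."""
    return math.sqrt(inner(x, x))

# (1, 0) and (0, 1) lie on different rays but are now comparable: each has
# norm 1, so x >= y and y >= x even though x != y -- a preorder, not a
# partial order.
```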
In fact, the inner product gives us more than just magnitudes; it also gives angles!
Definition 3. Suppose $V$ is a real inner product space. Then the angle between non-zero $x,y \in V$, denoted $\mathrm{ang}(x,y)$, is defined as follows:
$$\mathrm{ang}(x,y) = \cos^{-1}\left(\frac{\langle x,y\rangle}{\|x\|\|y\|}\right)$$
It can be shown that vectors $x$ and $y$ have the same direction (in the sense described at the beginning of my post) iff the angle between them is $0$. In fact, you can modify the above definition so that it defines the angle between any two non-zero open rays. In this case, it turns out that two rays are equal iff the angle between them is $0$.
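For completeness, here is a Python sketch of Definition 3 under the same dot-product model on $\mathbb{R}^n$; the clamp is just a floating-point safeguard, not part of the mathematics:

```python
import math

def inner(x, y):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """||x|| = <x, x>^(1/2)."""
    return math.sqrt(inner(x, x))

def ang(x, y):
    """Definition 3, for nonzero x and y."""
    c = inner(x, y) / (norm(x) * norm(y))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, c)))
```

As claimed, vectors on the same ray get angle $0$, while orthogonal vectors get angle $\pi/2$.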
There are several misconceptions in the OP about both mathematicians' and physicists' use of the word "vector", and even about what scalars and tensors are. To keep this a concise overview I'll be linking to fuller explanations.
Firstly, anything you've heard about magnitude and direction was just an attempt to help schoolchildren avoid certain fallacies without having to explain the entire concept of a vector space to them. The aim is to make sure they understand that, for example, a particle's momentum points a certain way but its amount of energy doesn't.
In general, vectors are not tuples. Admittedly, some sets of tuples satisfy the axioms of a vector space if you define arithmetic the usual way, but vectors are far more general than that case, as the examples discussed above show. What is true in general is that, if a vector space $V$ has a basis of the form $\left\{e_i \mid i\in I \right\}$, then each vector in $V$ is expressible as a linear combination of the $e_i$. Depending on the details, this "linear combination" might be a sum or an integral. Armed with this, the coefficients used can provide a tuple representation of a vector (although in some cases you need infinitely many numbers), but the vector itself is an independent object. The map is not the territory. In fact, making a terrain look different by drawing a new map that's rotated relative to an old one is a special case of what you'll sometimes hear called a basis change. Since you're familiar with $\mathbb{R}^n$, I'll give a simple example. The vectors $\left(\begin{array}{c}1\\0\end{array}\right),\,\left(\begin{array}{c}0\\1\end{array}\right)$ comprise a basis of $\mathbb{R}^2$, but I can rotate a 2D map by an angle $\theta$ because $\left(\begin{array}{c}\cos\theta\\\sin\theta\end{array}\right),\,\left(\begin{array}{c}-\sin\theta\\\cos\theta\end{array}\right)$ comprise a basis too.
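Such a basis change can be sketched numerically. Assuming the orthonormal basis obtained by rotating the standard basis of $\mathbb{R}^2$ through $\theta$, namely $(\cos\theta,\sin\theta)$ and $(-\sin\theta,\cos\theta)$, the new coordinates of a vector are just its dot products with those basis vectors (the function name here is my own):

```python
import math

def coords_in_rotated_basis(v, theta):
    """Coordinates of v in the basis (cos t, sin t), (-sin t, cos t).

    These basis vectors are orthonormal, so each new coordinate is simply
    the dot product of v with the corresponding basis vector.
    """
    e1 = (math.cos(theta), math.sin(theta))
    e2 = (-math.sin(theta), math.cos(theta))
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    return (dot(v, e1), dot(v, e2))
```

The vector is unchanged; only its tuple of coefficients changes with the basis.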
I should also point out in passing that, while in some contexts the word "basis" simply means a choice of $\left\{e_i \mid i\in I \right\}$ for which this can be done, the proper definition requires that each such linear combination use only finitely many of the $e_i$. Many vector spaces of interest that do not have finite dimension nonetheless meet additional technical conditions that make the less strict meaning of "basis" useful. However, the famous statement that any two bases of a vector space have the same cardinality refers to the finite-combinations-only definition.
So that's what mathematicians mean by vector spaces. A vector space is always "over" a field of scalars. Just as a vector is defined as an element of a vector space which in turn has a long definition, a scalar is defined as an element of a field which in turn has a long definition.
Or is it? Let's talk about what physicists really mean when they discuss vectors. On the one hand, they know about all the mathematics I mentioned above. On the other hand, they also want to describe nature in terms of quantities that transform in certain convenient ways when we switch coordinate systems, so as to exemplify "symmetries". This leads them to define "vector" in a stricter way. For example, one thing schoolchildren aren't told is that, although angular momentum has a magnitude and direction, it's not a vector, because of the way it transforms under reflections. The distinction in $\mathbb{R}^3$ between vectors and axial vectors takes some explaining, so the confusion is understandable. Position and momentum are "in" $\mathbb{R}^3$ and are vectors; angular momentum is "in" $\mathbb{R}^3$ but is an axial vector. The reason is simply that none of these things are really "in" a famous set of tuples, because they're not tuples at all; they're quantities that admit a tuple representation. That's one similarity axial vectors have with "true" vectors.
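To see the reflection issue concretely, here is a toy Python sketch of my own showing that a cross product, the prototype of an axial vector, fails to flip sign under the parity map $x \mapsto -x$:

```python
def cross(a, b):
    """Cross product in R^3 (the model for angular momentum L = r x p)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def parity(v):
    """Full spatial reflection x -> -x."""
    return tuple(-c for c in v)

r = (1.0, 2.0, 3.0)   # a position-like true vector
p = (4.0, 5.0, 6.0)   # a momentum-like true vector

# True vectors flip under parity, but their cross product does not:
# cross(parity(r), parity(p)) equals cross(r, p), not parity(cross(r, p)).
```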
With the development of differential geometry, we realised there is a more elegant way to talk about all this. Instead of distinguishing between true vectors and axial vectors, we can distinguish between contravariant and covariant vectors, provided our "both types count" definition of vector means "rank one tensor". A quantity $T^{\alpha_1\cdots\alpha_p}_{\beta_1\cdots\beta_q}$ with $p,\,q$ non-negative integers is called a tensor of rank $p+q$ and order (or type) $\left(p,\,q\right)$ iff, under a coordinate transformation of spacetime from $x^\mu$ to $x^{'\nu}$, it obeys $$T^{'\alpha_1\cdots\alpha_p}_{\beta_1\cdots\beta_q}=\sum_{\gamma_1\cdots\gamma_p \delta_1\cdots\delta_q}\frac{\partial x^{'\alpha_1}}{\partial x^{\gamma_1}}\cdots\frac{\partial x^{'\alpha_p}}{\partial x^{\gamma_p}}\frac{\partial x^{\delta_1}}{\partial x^{'\beta_1}}\cdots\frac{\partial x^{\delta_q}}{\partial x^{'\beta_q}}T^{\gamma_1\cdots\gamma_p}_{\delta_1\cdots\delta_q}.$$ (We never actually write the summation sign; we take for granted that any index that appears twice, once as a subscript and once as a superscript, is summed over all possible values. In relativity, there is one such value for each spacetime dimension.) A tensor of rank $0$ is a scalar, and is unchanged under coordinate transformations. A tensor of positive rank is called covariant if $p=0$, contravariant if $q=0$, and mixed otherwise. Mixed tensors have $p\geq 1$ and $q\geq 1$, so have rank $\geq 2$.
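As an illustration of the transformation law, restricted to the special case of a linear coordinate change $x' = Ax$ (so the Jacobians are constant matrices), rank-one transformations can be sketched in Python; all the names here are my own:

```python
def transform_contravariant(A, T):
    """T'^a = (dx'^a / dx^g) T^g; for a linear change x' = A x, the
    Jacobian dx'^a/dx^g is just the constant matrix entry A[a][g]."""
    return [sum(A[a][g] * T[g] for g in range(len(T))) for a in range(len(A))]

def transform_covariant(A_inv, T):
    """T'_b = (dx^d / dx'^b) T_d; for x' = A x, that Jacobian is (A^-1)[d][b]."""
    return [sum(A_inv[d][b] * T[d] for d in range(len(T)))
            for b in range(len(A_inv))]

# A concrete 2D check that the contraction v_b w^b is a scalar (invariant):
A = [[2.0, 1.0], [1.0, 1.0]]        # coordinate change x' = A x
A_inv = [[1.0, -1.0], [-1.0, 2.0]]  # its inverse
w = [3.0, 5.0]                      # contravariant components w^a
v = [7.0, 2.0]                      # covariant components v_b
```

Transforming `w` with `A` and `v` with `A_inv` and contracting gives the same number as before the change of coordinates, exactly as the rank-$0$ case of the law demands.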
Something that looks like a tensor by virtue of its indices may not transform the right way to actually be a tensor. (Of course, if something has no indices at all, it merely "looks like a scalar", and might not be one.)
Best Answer
0-tensors are constant functions, which we identify with scalars.
1-tensors are linear functions, which we identify with vectors. This identification amounts to selecting an inner product: we identify the vector $x$ with the function $y \mapsto \langle x,y \rangle$.
2-tensors are bilinear functions, which we identify with matrices. This identification also amounts to selecting an inner product: we identify the matrix $A$ with the function $(x,y) \mapsto \langle x,Ay \rangle$.
Things become a bit foreign when we go to $k$-tensors with $k>2$. One way to think about it is that a $k$-tensor takes a vector and gives back a $(k-1)$-tensor. Thus for instance a $3$-tensor takes a vector and gives back a matrix.
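That last remark can be sketched in Python; `apply_3_tensor` is a hypothetical helper of mine, representing a 3-tensor as a nested list of components:

```python
def apply_3_tensor(T, v):
    """Feed a vector into the first slot of a 3-tensor T[i][j][k];
    what remains is a 2-tensor, i.e. a matrix indexed by (j, k)."""
    n = len(v)
    return [[sum(T[i][j][k] * v[i] for i in range(n)) for k in range(n)]
            for j in range(n)]

# A small 2x2x2 example: contracting with a standard basis vector
# just slices out the corresponding matrix T[i].
T = [[[i + 2 * j + 4 * k for k in range(2)] for j in range(2)]
     for i in range(2)]
```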