I am quite sure "vector spaces" and "abstract vector spaces" mean the same thing, and as Micah suggests, "abstract vector spaces" may simply make it more explicit that the spaces of concern are not necessarily $\mathbb C^n$ or $\mathbb R^n$. In any case, most courses and texts on linear algebra already teach vector spaces as spaces which need not be $\mathbb R^n$ or $\mathbb C^n$.
For example, from Wikipedia, you can read:
> Vectors in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned examples: vectors are best thought of as abstract mathematical objects with particular properties ...

> ... Historically, the first ideas leading to vector spaces can be traced back as far as 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. The modern, more abstract treatment, first formulated by Giuseppe Peano in 1888, encompasses more general objects than Euclidean space...
A vector by definition is an element of some vector space. Unless specified otherwise, this is the definition you should have in mind. Now let me try to clear up some of your specific questions.
- Here by VECTOR we do not mean the vector quantity which we have defined in vector algebra as a directed line segment.
Without context, it's impossible for me to say exactly what is meant here. Most likely, they previously introduced a specific vector space, such as $\mathbf{R}^n$, and now they want to discuss a different vector space where direction may not have a clear definition (for instance, a space of polynomials or of functions, where the picture of a directed line segment no longer applies).
- Matrices having a single row or column are referred to as vectors.
This is a bit more advanced than what you are probably studying. It basically comes down to how matrices actually arise. Once you fix bases for the domain and the codomain, there is a bijective correspondence between linear transformations and matrices, and every matrix arises this way. The proof involves taking a basis for the domain; the columns of the matrix are then the images of these basis vectors under the map. Each image is an element of the codomain, i.e. an element of a vector space, so we can call it a vector.

This way, we can see that each column of such a matrix is a vector of the codomain, as the sketch below illustrates. For rows, just switch the roles of the domain and the codomain (equivalently, work with the transpose).
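To make this concrete, here is a minimal NumPy sketch (my own illustration, with a map $T:\mathbf{R}^2\to\mathbf{R}^3$ chosen arbitrarily, not taken from any text you are reading): the matrix of a linear map is built column by column from the images of the basis vectors.

```python
import numpy as np

# A hypothetical linear map T: R^2 -> R^3, chosen just for illustration.
def T(v):
    x, y = v
    return np.array([x + y, 2 * x, 3 * y])

# The columns of the matrix of T (in the standard bases) are the images
# of the standard basis vectors e_1 = (1, 0) and e_2 = (0, 1).
e1, e2 = np.array([1, 0]), np.array([0, 1])
A = np.column_stack([T(e1), T(e2)])   # shape (3, 2)

# Sanity check: multiplying by A agrees with applying T directly.
v = np.array([5, -2])
assert np.array_equal(A @ v, T(v))
print(A)
```

Each column of `A` is literally `T` applied to a basis vector, i.e. an element of the codomain, which is the sense in which a single column "is a vector".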
- I also watched a video in which, at approximately 3:55, he says that a point in two-dimensional real coordinate space is written in matrix form in LINEAR ALGEBRA.
This goes back to point 1, where we are once again working in what appears to be $\mathbf{R}^2$. He says it is more common to write the vector $(5,0) \in \mathbf{R}^2$ as a column matrix rather than in point notation. This is merely a notational convention, a left vs. right choice ($xA = b$ vs. $Ax = b$, if you will), and has nothing to do with whether it's a vector.
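To see that nothing deep is happening, here is a minimal NumPy sketch (my own example with an arbitrary matrix $A$, not taken from the video): the row and column conventions carry exactly the same numbers, related by a transpose.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])           # an arbitrary example matrix
x_col = np.array([[5], [0]])    # (5, 0) written as a 2x1 column matrix
x_row = x_col.T                 # the same vector as a 1x2 row matrix

# Column convention: A x = b; row convention: x A^T = b^T.
b_col = A @ x_col
b_row = x_row @ A.T
assert np.array_equal(b_col.T, b_row)
print(b_col.ravel())            # [ 5 15] -- same numbers either way
```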
Best Answer
Recall that if a vector space has a finite basis it is said to be finite dimensional, and its dimension is defined to be the number of vectors that make up this basis. A basis is a (possibly infinite) set of vectors that spans the vector space and is linearly independent. One can prove that every vector in the space can be written in one and only one way as a linear combination of the basis vectors. Say $V$ is a vector space over a field $F$ with basis $B=\{v_1,\ldots,v_n\}$. Then if we have $$v=\alpha_1v_1+\cdots+\alpha_nv_n$$
we write $(v)_B=(\alpha_1,\ldots,\alpha_n)$ and say $v$ has coordinates $(\alpha_1,\ldots,\alpha_n)$ in the basis $B$. This immediately gives a mapping $V\to F^n$ given by $$v\mapsto (v)_B$$
This is the same as mapping each basis vector $v_i$ to $$(0,0,\ldots,\underbrace{1}_i,\ldots,0)$$ which entirely determines the transformation.
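To make the coordinate map concrete, here is a minimal NumPy sketch (with an assumed basis of $\mathbf{R}^2$ chosen purely for illustration, not part of the original answer): computing $(v)_B$ amounts to solving the linear system whose coefficient matrix has the basis vectors as columns.

```python
import numpy as np

# An assumed basis B = {v1, v2} of R^2, chosen for illustration.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
M = np.column_stack([v1, v2])   # basis vectors as columns

v = np.array([3.0, 1.0])
# The coordinates (v)_B = (a1, a2) satisfy a1*v1 + a2*v2 = v, i.e. M @ a = v.
coords = np.linalg.solve(M, v)
print(coords)                   # [2. 1.], since 2*v1 + 1*v2 = (3, 1)

# Sanity check: reconstruct v from its coordinates.
assert np.allclose(M @ coords, v)
```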
Note that $0\mapsto (0,0,\ldots,0)$, that $(v+w)_B=(v)_B+(w)_B$, and that $(\lambda v)_B=\lambda (v)_B$, so this is a linear transformation; since every vector has exactly one coordinate tuple and every tuple is the coordinate tuple of exactly one vector, the map is also a bijection, hence an isomorphism between $V$ and $F^n$. This means $V$ and $F^n$ are essentially the same as vector spaces, that is, "there is only one vector space of dimension $n$ over a field $F$ up to isomorphism."
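Continuing the same sketch (same assumed basis; my own numerical check, not from the answer), the linearity properties above can be verified directly:

```python
import numpy as np

# Same assumed basis as in the previous sketch.
M = np.column_stack([np.array([1.0, 1.0]), np.array([1.0, -1.0])])
coords = lambda v: np.linalg.solve(M, v)   # the coordinate map v -> (v)_B

v, w, lam = np.array([3.0, 1.0]), np.array([0.0, 2.0]), 4.0
assert np.allclose(coords(v + w), coords(v) + coords(w))   # additivity
assert np.allclose(coords(lam * v), lam * coords(v))       # homogeneity
assert np.allclose(coords(np.zeros(2)), np.zeros(2))       # 0 -> (0, 0)
```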