Your difficulty with coplanarity comes from not distinguishing "points" from "vectors," and 2-dimensional linear subspaces from planes in general. Every three points determine a plane, as you say, but in general this plane doesn't pass through the origin. In linear algebra we single out the planes that pass through the origin, since they're subspaces of $\mathbb{R}^3$. Then we have a different definition of the plane determined by just two vectors, which is their linear span $\{au+bv: a, b \in \mathbb{R}\}$. This necessarily includes the origin. You can also think of the span of $u$ and $v$ in terms of points, as the plane determined by $u$, $v$, and the origin.
As you've said, if $w$ is in the span of $u$ and $v$, i.e. is a linear combination of them, then $u,v,w$ are linearly dependent. What this means is precisely that $u,v,w$ are coplanar, in the sense that $w$ is in the plane determined by $u,v,$ and the origin.
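This can be checked numerically: stacking $u$, $v$, $w$ as the rows of a matrix, the rank is at most $2$ exactly when the three vectors are coplanar with the origin. A small sketch with NumPy (the specific vectors are mine, chosen for illustration):

```python
import numpy as np

# Two vectors spanning a plane through the origin in R^3.
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, -1.0])

# A vector in their span (a linear combination), hence coplanar with u, v, 0.
w = 3.0 * u - 2.0 * v

# Rank 2 means the three vectors span only a plane, i.e. they are
# linearly dependent (coplanar with the origin).
rank = np.linalg.matrix_rank(np.vstack([u, v, w]))
print(rank)  # 2
```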
Here are answers to your questions:
- Firstly, when you say scalars $a_1, a_2, \cdots, a_n$, they are real numbers and hence can also be $0$. Keeping this in mind, suppose there is a set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$. Then, quite obviously, the vector $\textbf{0} \in V$ can be written as
$$0 \cdot v_1 + 0 \cdot v_2 + \cdots + 0 \cdot v_n = \textbf{0}$$
This is what we call the "trivial linear combination".
In fact, the confusion you seem to have is the idea that when a vector is a linear combination of other vectors, at least one of the scalars must be nonzero; the definition allows all the scalars to be $0$.
- When you talk about a "set", elements cannot be repeated, so there is no point in asking whether the elements of the set are distinct.
Lastly, I do not know what book you are following, but I feel that a better version of the definitions of linear dependence and independence is the following:
Linear Independence
A finite set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$ is said to be linearly independent iff
$$\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$$
implies that $\alpha_1 = \alpha_2 = \cdots = \alpha_n = 0$. This means that the only way to obtain the zero vector $\textbf{0}$ from a linearly "independent" set is by setting all the scalars (coefficients) to $0$, which we call the "trivial" combination.
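This definition can be tested numerically: with the $v_i$ as the columns of a matrix $A$, the system $A\alpha = \textbf{0}$ has only the trivial solution exactly when $A$ has full column rank. A sketch with NumPy (the example vectors are mine):

```python
import numpy as np

# Columns are the vectors v_1, v_2, v_3 in R^3.
A = np.column_stack([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0],
    [0.0, 0.0, 1.0],
])

# Full column rank <=> A @ alpha = 0 forces alpha = 0 <=> linear independence.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True
```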
In the case of an infinite set $S \subseteq V$, it is said to be linearly independent iff every finite subset of $S$ is linearly independent. This reduces the infinite case to the definition for finite sets, which we already have.
Linear Dependence
A finite set $S = \left\lbrace v_1, v_2, \cdots, v_n \right\rbrace \subseteq V$ is said to be linearly "dependent" iff it is not linearly independent. Thus, we need to negate the statement for linear independence, which gives:
"$\exists \alpha_1, \alpha_2, \cdots, \alpha_n \in \mathbb{R}$ and $i \in \left\lbrace 1, 2, \cdots, n \right\rbrace$ such that $\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$ and $\alpha_i \neq 0$"
This statement means that the vector $v_i \in S$ can be actually written as a linear combination of the other vectors. In particular,
$$v_i = \left( - \dfrac{\alpha_1}{\alpha_i} \right) \cdot v_1 + \left( - \dfrac{\alpha_2}{\alpha_i} \right) \cdot v_2 + \cdots + \left( - \dfrac{\alpha_{i - 1}}{\alpha_i} \right) \cdot v_{i - 1} + \left( - \dfrac{\alpha_{i + 1}}{\alpha_i} \right) \cdot v_{i + 1} + \cdots + \left( - \dfrac{\alpha_n}{\alpha_i} \right) \cdot v_n$$
and therefore the vector $v_i \in S$ is "dependent" on the other vectors.
In fact, a linear combination $\alpha_1 \cdot v_1 + \alpha_2 \cdot v_2 + \cdots + \alpha_n \cdot v_n = \textbf{0}$ in which at least one $\alpha_i \neq 0$ is called a "non-trivial" linear combination.
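The rearrangement above can be carried out numerically once a non-trivial relation is known. A sketch with NumPy (the relation and vectors are mine, chosen so the relation holds):

```python
import numpy as np

# A non-trivial relation: 2*v1 - 1*v2 + 1*v3 = 0, with alpha_3 = 1 != 0.
v1 = np.array([1.0, 1.0])
v2 = np.array([3.0, 1.0])
v3 = -2.0 * v1 + v2                  # chosen so the relation holds

alpha = np.array([2.0, -1.0, 1.0])   # coefficients of the relation
i = 2                                # an index with alpha_i != 0

# v_i = sum over j != i of (-alpha_j / alpha_i) * v_j
vs = [v1, v2, v3]
v_i = sum((-alpha[j] / alpha[i]) * v for j, v in enumerate(vs) if j != i)
print(np.allclose(v_i, v3))  # True
```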
For an infinite set $S \subseteq V$, it is said to be linearly dependent iff it is not linearly independent. Again, we need to negate the statement of linear independence for infinite sets, which gives:
"There exists a finite subset $A \subseteq S$ such that $A$ is not linearly independent." And now we can use the definition of linear dependence (not linear independence) for finite sets.
I hope this clears up your confusion about distinct elements. If you are still confused, try forming linearly dependent and linearly independent sets in $\mathbb{R}^2$ and $\mathbb{R}^3$, which you can easily visualize. Also read some material on the span of a set and how linear combinations and span connect with linear dependence and independence.
Best Answer
The criteria you mention are only special cases. The set $\left\{\pmatrix{1 \\ 0 \\ 0}, \pmatrix{0 \\ 1 \\ 1}, \pmatrix{1 \\ 1 \\ 1}\right\}$ is linearly dependent but doesn't satisfy any of your criteria.
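The dependence of this set can be verified directly, since the third vector is the sum of the first two. A quick NumPy check (my sketch, not part of the original answer):

```python
import numpy as np

v1 = np.array([1, 0, 0])
v2 = np.array([0, 1, 1])
v3 = np.array([1, 1, 1])

# v3 = v1 + v2, so 1*v1 + 1*v2 - 1*v3 = 0: a non-trivial relation,
# hence the set is linearly dependent (rank 2, not 3).
print(np.array_equal(v1 + v2, v3))                           # True
print(np.linalg.matrix_rank(np.column_stack([v1, v2, v3])))  # 2
```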
A necessary and sufficient criterion for linear independence is the following: none of the vectors is a linear combination of any (finite) subset of the others. The set I give above fails this test because the last vector is a linear combination of the first two.