We've said in linear algebra lectures that when some vectors are linearly independent then the set containing those vectors is as well, but not the other way around. Why is that so? What's the difference between those vectors being linearly independent and the set containing them being linearly independent?
Difference between a linearly independent set and l.i. vectors
linear-algebra, vector-spaces, vectors
Related Solutions
Let $ S = \{(1, 0, 0), (0, 1, 0)\} $. Then $ S $ is linearly independent, as is easily seen. On the other hand, $ (0, 0, 1) \notin \textrm{span}\, S $, so $ S $ does not span $ \mathbb{R}^3 $.
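One way to verify such a claim concretely is by comparing ranks: $w \in \textrm{span}\, S$ exactly when appending $w$ to $S$ leaves the rank unchanged. A minimal sketch in Python (the `rank` helper is my own, using exact rational arithmetic to avoid floating-point issues):

```python
from fractions import Fraction

def rank(vectors):
    """Row-reduce a list of vectors over the rationals and count pivots."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    pivots, col = 0, 0
    ncols = len(rows[0]) if rows else 0
    while pivots < len(rows) and col < ncols:
        # find a row with a nonzero entry in the current column
        pivot_row = next((r for r in range(pivots, len(rows)) if rows[r][col] != 0), None)
        if pivot_row is None:
            col += 1
            continue
        rows[pivots], rows[pivot_row] = rows[pivot_row], rows[pivots]
        for r in range(len(rows)):
            if r != pivots and rows[r][col] != 0:
                factor = rows[r][col] / rows[pivots][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[pivots])]
        pivots += 1
        col += 1
    return pivots

S = [(1, 0, 0), (0, 1, 0)]
w = (0, 0, 1)

print(rank(S))        # 2: span(S) is a plane, not all of R^3
print(rank(S + [w]))  # 3: appending w increases the rank, so w is not in span(S)
```

Since $S$ has rank $2$ and $S \cup \{w\}$ has rank $3$, the vector $w = (0,0,1)$ lies outside $\textrm{span}\, S$, confirming the claim above.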
On the other hand, your intuition is partly correct due to the following result:
Theorem. Let $ V $ be a vector space, $ L $ a linearly independent subset and $ S $ a subset that spans $ V $. Then, $ |L| \leq |S| $.
Proof. Let $ S = S_0 = \{ s_i : 1 \leq i \leq n \} $ and let $ L = \{ b_i : 1 \leq i \leq m \} $. We construct a sequence of spanning sets. Given $ S_k $ (for $ 0 \leq k < m $), construct the set $ S_{k+1} $ as follows: $ S_k $ is a spanning set, therefore we may write $ b_{k+1} = \sum_i c_i t_i $, where $ t_i \in S_k $ and the $ c_i $ are nonzero members of the field of scalars. Since $ L $ is linearly independent and $ S_k $ already contains $ b_1, \dots, b_k $, some vector $ t_j $ on the right-hand side cannot be an element of $ L $ (otherwise $ b_{k+1} $ would be a linear combination of other elements of $ L $). Removing $ t_j $ from $ S_k $ and replacing it with $ b_{k+1} $ gives a spanning set with the same number of elements as $ S_k $, since $ t_j $ can in turn be expressed as a linear combination of the elements of this new set. Define this set to be $ S_{k+1} $.
With this construction, a new element of $ L $ enters the spanning set at each step, while the cardinality of the sets remains unchanged. The construction halts at $ S_m $, which contains all elements of $ L $; therefore $ L \subseteq S_m $ and $ |L| \leq |S_m| = |S| $, which establishes the result.
Corollary. Let $ V $ have dimension $ n $ over its field of scalars and let $ L $ be a linearly independent subset of $ V $ which has $ n $ elements. Then, $ L $ is a basis of $ V $.
Proof. Let $ B $ be a basis for $ V $; then $ |B| = n $. Consider the set $ L' = L \cup \{v\} $ for any $ v \in V $ with $ v \notin L $. This set has $ n+1 $ elements. However, any linearly independent subset of $ V $ can have at most $ n $ elements by the above theorem, as $ B $ is a spanning subset. Therefore, $ L' $ is linearly dependent, and in particular $ v $ can be expressed as a linear combination of the elements of $ L $ (otherwise $ L $ would be linearly dependent), which establishes that $\textrm{span}\, L = V $. By definition of a basis, $ L $ is a basis of $ V $.
Therefore, if your linearly independent subset has as many elements as the dimension of your vector space, then it has to span your space.
To augment Lord Shark's answer, I just wanted to talk a little about the intuition behind it.
Intuitively, a set of vectors is linearly dependent if there are more vectors than necessary to generate their span, i.e. the smallest subspace containing them.
On the other hand, a set of vectors is affinely dependent if there are more vectors than necessary to generate their affine hull, i.e. the smallest flat (translate of a linear space) containing them.
A single vector $v$ in a vector space has affine hull $\lbrace v \rbrace$, which is just the trivial subspace $\lbrace 0 \rbrace$ translated by $v$. But if $v \neq 0$, the span is the entire line through $0$ and $v$, as $0$ must be part of any subspace. To generate that line as an affine hull, you could look at the list $v, 0$.
So, $v, 0$ are linearly dependent (e.g. $0 = 0 \cdot v + 5 \cdot 0$) as $0$ is not necessary to generate the span (just $v$ would have done fine), but both are necessary to generate the line as the affine hull, so they are affinely independent. To prove this, suppose $\lambda_1 + \lambda_2 = 0$ and,
$$\lambda_1 \cdot v + \lambda_2 \cdot 0 = 0.$$
Then $\lambda_1 \cdot v = 0$, which implies $\lambda_1 = 0$, since $v \neq 0$. Since $\lambda_1 + \lambda_2 = 0$, we therefore also have $\lambda_2 = 0$. This proves affine independence.
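A standard computational framing of this (my own addition, not part of the answer above) is to lift each point $p$ to $(p, 1)$: points are affinely independent exactly when the lifted vectors are linearly independent, since the extra coordinate enforces the constraint that the coefficients sum to $0$. A short sketch with $v = (1, 2)$ chosen for illustration:

```python
def linearly_independent_pair(u, v):
    """Two vectors are dependent iff every 2x2 minor of the 2xn matrix vanishes."""
    n = len(u)
    return any(u[i] * v[j] - u[j] * v[i] != 0 for i in range(n) for j in range(i + 1, n))

v = (1, 2)      # any nonzero vector works here
zero = (0, 0)

# v and 0 are linearly dependent...
print(linearly_independent_pair(v, zero))                # False
# ...but affinely independent: lift each point p to (p, 1) and test linearly
print(linearly_independent_pair(v + (1,), zero + (1,)))  # True
```

This matches the proof above: the only coefficients with $\lambda_1 + \lambda_2 = 0$ that kill the combination are $\lambda_1 = \lambda_2 = 0$.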
Best Answer
The important distinction is between the linear independence of a set of vectors and the linear independence of a list of vectors. For an easy example let's say we're working in $3$-dimensional space with basis vectors $\mathbf i$, $\mathbf j$ and $\mathbf k$. Then the list of vectors $(\mathbf j,\mathbf k,\mathbf i, \mathbf k)$ isn't linearly independent, simply because it contains $\mathbf k$ twice*.
But by definition a set can't contain duplicate elements. So the set $\{\mathbf j,\mathbf k,\mathbf i, \mathbf k\}$ is linearly independent because it is equal to the set $\{\mathbf i,\mathbf j,\mathbf k\}$.
*A list $(\mathbf v_i)$ is linearly *dependent* if there is a list of coefficients $(a_i)$ of the same length, not all $0$, such that $\sum_ia_i\mathbf v_i=0$; it is linearly independent otherwise. In this case we can take our coefficients to be $(0,1,0,-1)$, so $(\mathbf j,\mathbf k,\mathbf i, \mathbf k)$ isn't linearly independent because $0\mathbf j+1\mathbf k+0\mathbf i+(-1) \mathbf k=0$.
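The set/list distinction here mirrors Python's own data types: a `set` silently discards the duplicate, while a `list` keeps it, and the duplicate is exactly what produces the nontrivial vanishing combination. A small sketch (vector names follow the answer):

```python
i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

vector_list = [j, k, i, k]     # a list: order and duplicates are preserved
vector_set = set(vector_list)  # a set: the duplicate k collapses away

print(len(vector_list), len(vector_set))  # 4 3

# the dependence relation from the footnote: coefficients (0, 1, 0, -1)
coeffs = (0, 1, 0, -1)
combo = tuple(sum(a * v[d] for a, v in zip(coeffs, vector_list)) for d in range(3))
print(combo)  # (0, 0, 0): a nontrivial combination vanishes, so the list is dependent
```

The set has only three elements, so no such relation exists for it: as the answer says, $\{\mathbf j,\mathbf k,\mathbf i, \mathbf k\} = \{\mathbf i,\mathbf j,\mathbf k\}$ is linearly independent.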