[Math] Linearly independent set can be completed to a basis

linear algebra

Suppose I have a linearly independent set in a finite-dimensional vector space $V$. How can I prove rigorously that it can be completed to a basis? More importantly, is the completion unique? I do not think the answer is yes; the following is my counterexample:

$((1,1), e_2)$ is a basis for $\mathbb{R}^2$, as is $(e_1, e_2)$, and both contain $e_2$. However, does the single vector $(e_2)$ count as a linearly independent set?

Best Answer

The completion is certainly not unique. Multiplying any of the new vectors by a nonzero constant will not affect the span or the linear independence, but will change the basis.
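As a quick numerical illustration of this non-uniqueness (a sketch using NumPy, not part of the original answer): both $(e_2, e_1)$ and $(e_2, 2e_1)$ complete the independent set $\{e_2\}$ to a basis of $\mathbb{R}^2$, since scaling the new vector by $2$ changes the basis but not invertibility.

```python
import numpy as np

# Two different completions of the independent set {e2} in R^2.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

basis_a = np.column_stack([e2, e1])      # complete with e1
basis_b = np.column_stack([e2, 2 * e1])  # complete with 2*e1 instead

# n vectors in R^n form a basis iff the matrix they span has full rank.
print(np.linalg.matrix_rank(basis_a))  # 2 -> a basis
print(np.linalg.matrix_rank(basis_b))  # 2 -> also a basis, yet different
```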

To prove that you can extend any linearly independent set $S$ to a basis, proceed by an iterative argument. If $S$ spans $V$, you are done. Otherwise, there is some vector $v$ not in the span of $S$. Claim: $S \cup \{v\}$ is linearly independent. Write down the condition for linear independence and observe that if it fails, some nontrivial linear relation must involve $v$ with a nonzero coefficient; solving that relation for $v$ expresses it in terms of elements of $S$, contradicting the choice of $v$ outside the span. Now ask whether $S \cup \{v\}$ spans; if so, you are done. If not, take some vector $w$ not in its span and consider $S \cup \{v, w\}$. Iterate this argument.

To prove in general that this iteration eventually terminates in a spanning set, you actually need Zorn's Lemma. However, if you have a finite spanning set $B$, then you can pick the elements $v, w, \dots$ from $B$, and in the worst case the iteration exhausts all the elements of $B$, so it terminates after finitely many steps.
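For the finite-dimensional case, the iteration above can be sketched in code (my own illustration with NumPy; the function name `extend_to_basis` and the rank-based span test are assumptions, not from the original answer). Membership of $v$ in the span of the current set is tested by checking whether adding $v$ raises the rank.

```python
import numpy as np

def extend_to_basis(S, B):
    """Extend the linearly independent columns of S to a basis,
    drawing candidate vectors from the columns of a spanning set B.
    Sketch of the iterative argument: repeatedly adjoin any vector
    of B that is not already in the span of the current set."""
    current = list(S.T)  # work with vectors as rows for convenience
    for v in B.T:
        candidate = np.array(current + [v])
        # v lies outside span(current) iff adjoining it raises the rank
        if np.linalg.matrix_rank(candidate) > len(current):
            current.append(v)
    return np.array(current).T

S = np.array([[0.0], [1.0]])         # the independent set {e2}
B = np.eye(2)                        # a finite spanning set {e1, e2}
basis = extend_to_basis(S, B)
print(np.linalg.matrix_rank(basis))  # 2: S has been completed to a basis
```

Note that the loop visits each column of $B$ at most once, which is exactly why a finite spanning set guarantees termination.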