Why is the empty set linearly independent?

linear-algebra, logic, vector-spaces

For context, I am reading the section on linear dependence in P. R. Halmos's Finite-Dimensional Vector Spaces. Around the definition of linear dependence, the book spends a fair amount of effort explaining why the empty set is linearly independent.

Here's the definition provided in the text:

Definition. A finite set $\{x_i\}$ of vectors is linearly dependent if there exists a corresponding set $\{a_i\}$ of scalars, not all zero, such that $$\sum_i a_i x_i = 0.$$ If, on the other hand, $\sum_i a_i x_i = 0$ implies that $a_i = 0$ for each $i$, the set $\{x_i\}$ is linearly independent.

And the explanation for why the empty set is linearly independent, as I've understood it, is as follows: since there are no indices $i$ at all for an empty set, you cannot assign a non-zero scalar to any of them, so the empty set is not linearly dependent.

But what confuses me is that the negation of "some scalars are non-zero" is "all scalars are zero". Then I can use the same argument to say that, since there are no indices $i$ at all for an empty set, you cannot assign a zero scalar to all the vectors, so the empty set is not linearly independent either.

Especially when the text, for the sake of intuition, rephrases the definition of linear independence as "If $\sum_i a_i x_i = 0$, then there is no index $i$ for which $a_i \neq 0$". Here, equivalently, we can say "If $\sum_i a_i x_i = 0$, then for all indices $i$, $a_i = 0$". I feel like this is just playing with words and does not address the problem.

Best Answer

Let's phrase things differently:

Let $V$ be a vector space over the field $F$, and $S$ a subset of $V$.

$S$ is linearly independent if, for every choice of finitely many distinct vectors $\{v_i\}_{i=1}^n \subseteq S$ and corresponding scalars $\{a_i\}_{i=1}^n \subseteq F$, we have

$$\sum_i a_i v_i = 0 \implies a_i = 0 \; \forall i$$

Consequently, the negation: $S$ is linearly dependent if $\exists \{a_i\}_{i=1}^n \subseteq F$ and $\exists \{v_i\}_{i=1}^n \subseteq S$ (each distinct) such that

$$\sum_i a_i v_i = 0 \text{ and } \exists i \text{ such that } a_i \ne 0$$
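Spelled out, this negation follows the standard quantifier rules $\neg \forall x \, P(x) \iff \exists x \, \neg P(x)$ and $\neg(P \implies Q) \iff P \wedge \neg Q$. In particular, negating independence flips the outer quantifiers as well; it is not just the inner clause "$a_i = 0$ for all $i$" that gets negated:

$$\neg \left[ \forall \{a_i\}, \{v_i\} : \left( \sum_i a_i v_i = 0 \implies \forall i \; a_i = 0 \right) \right] \iff \exists \{a_i\}, \{v_i\} : \left( \sum_i a_i v_i = 0 \;\wedge\; \exists i \; a_i \neq 0 \right)$$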

Notice what's going on here: to have linear dependence, we need to be able to find

  • a specific vector, or set thereof
  • corresponding scalar(s)

such that $\sum a_i v_i = 0$ and the $a_i$ are not all zero.

But if $S$ is the empty set, there is a problem: you can't find any vectors in there!

So the existential statement is false, and you can never conclude linear dependence. Since independence is the negation of dependence, the empty set is linearly independent.
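One can also see independence directly, rather than as the mere failure of dependence. For $S = \emptyset$, the only family of vectors drawn from $S$ is the empty family ($n = 0$). The empty sum is the zero vector by convention, so the hypothesis $\sum_i a_i v_i = 0$ is satisfied, and the conclusion

$$\forall i \in \emptyset : \; a_i = 0$$

is vacuously true: a universal statement over an empty domain is always true, because it has no counterexample. An existential statement over an empty domain, by contrast, is always false. This asymmetry is exactly what resolves your question: "you cannot assign a zero scalar to all the vectors" does not make the universal statement false; it makes it vacuously true.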
