[Math] Subset of a finite set is finite

elementary-set-theory

We define $A$ to be a finite set if there is a bijection between $A$ and a set of the form $\{0,\ldots,n-1\}$ for some $n\in\mathbb N$.

How can we prove that a subset of a finite set is finite? It is of course sufficient to show that for a subset of $\{0,\ldots,n-1\}$. But how do I do that?

Best Answer

The proof is essentially the pigeonhole principle, and it is proved by induction.

Let us denote $[n]=\{0,\ldots,n-1\}$. What we want to prove is in two parts:

  1. If $A\subseteq[n]$ then either $A=[n]$ or there is a bijection between $A$ and $[m]$ for some $m<n$.

  2. If $m<n$ then there is no bijection from $[n]$ onto $[m]$. Equivalently, we want to prove that if $f\colon[n]\to[n]$ is injective then it is surjective. We may also phrase this as: if $m<n$ and $f\colon[n]\to[m]$ then $f$ is not injective; and if $m<n$ and $f\colon[m]\to[n]$ then $f$ is not surjective.
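The second statement can be checked by brute force for small values; a minimal sketch (the helper name `has_injection` is made up for this illustration):

```python
from itertools import product

def has_injection(n, m):
    """Search over all functions [n] -> [m], represented as value tuples,
    for an injective one."""
    return any(
        len(set(f)) == n  # the tuple f is injective iff all its values are distinct
        for f in product(range(m), repeat=n)
    )

# For m < n there is no injection [n] -> [m] (pigeonhole).
for n in range(5):
    for m in range(n):
        assert not has_injection(n, m)

# For m >= n an injection does exist.
assert has_injection(3, 3) and has_injection(2, 4)
```

Of course this only verifies finitely many cases; the induction below is what proves it for every $n$.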

The first part is not very difficult. Assuming $A$ is nonempty, we define by recursion $f(0)=\min A$ and $f(k+1)=\min\bigl(A\setminus\{f(0),\ldots,f(k)\}\bigr)$, as long as this set is nonempty. Either $f$ is a bijection between $[n]$ and $A$, or the recursion had to stop earlier and $f$ is a bijection between $[m]$ and $A$ for some $m<n$. Note that this is not begging the question, because $A$ is a set of natural numbers and a subset of $[n]$, so it cannot contain more elements than $[n]$ (more in the literal sense, not in the sense of cardinality).
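The recursion above is effectively "enumerate $A$ in increasing order"; a computational sketch (the function name `enumerate_increasing` is invented for this illustration):

```python
def enumerate_increasing(A):
    """Build f by the recursion f(0) = min A, f(k+1) = min(A minus {f(0),...,f(k)}).

    Returns the list [f(0), f(1), ..., f(m-1)], so that f is an
    order-preserving bijection [m] -> A where m = len(A).
    """
    remaining = set(A)
    f = []
    while remaining:
        f.append(min(remaining))       # next value of the recursion
        remaining.discard(f[-1])       # remove it before the next step
    return f

# A subset of [6] = {0,...,5}:
A = {1, 3, 4}
f = enumerate_increasing(A)
assert f == [1, 3, 4]          # f : [3] -> A is a bijection
assert len(f) == len(A) <= 6   # the recursion stops at some m <= n
```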

The second part is tricky because it seems so obvious. This is where the pigeonhole principle is actually proved.

We prove this part by induction. For $n=0$ this is obvious because $[0]=\varnothing$.

Assume that for $[n]$ it is true that if $f\colon[n]\to[n]$ is injective then it is surjective. We want to show this is also true for $[n+1]$.

Let $f\colon[n+1]\to[n+1]$ be an injective function. There are two cases:

  1. If $f(k)\neq n$ for all $k<n$, then the restriction of $f$ to $[n]$ is an injective function from $[n]$ into $[n]$. By the induction hypothesis this restriction is surjective, therefore $f(n)\notin[n]$ (otherwise $f$ would not be injective), so $f(n)=n$ and $f$ is surjective as well.

  2. If $f(k)=n$ for some $k<n$, then by injectivity this $k$ is unique. We define $g$ as follows: $$g(x)=\begin{cases} f(n) & x=k\\ n & x=n\\ f(x) & x\neq k,n\end{cases}$$ Then $g$ is also injective, and for all $j<n$ we have $g(j)\neq n$ (because $g(n)=n$ and $g$ is injective). Therefore by the previous case $g$ is surjective. Since $g$ has the same range as $f$, it follows that $f$ is surjective as well, as wanted.
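The swap in the second case can be illustrated concretely; here functions $[n+1]\to[n+1]$ are represented as value lists, and the name `swap_to_fix_top` is made up for this sketch:

```python
def swap_to_fix_top(f, n):
    """Given injective f : [n+1] -> [n+1] (as a list of length n+1) with
    f(k) = n for some k < n, return g as in the proof:
    g agrees with f except g(k) = f(n) and g(n) = n."""
    k = f.index(n)        # the unique preimage of n, by injectivity
    g = list(f)
    g[k], g[n] = f[n], n  # swap the two values
    return g

# Example with n = 3, so f : [4] -> [4]; here f(1) = 3.
f = [2, 3, 0, 1]
g = swap_to_fix_top(f, 3)
assert g == [2, 1, 0, 3]
assert g[3] == 3 and len(set(g)) == 4  # g fixes n and is still injective
assert set(g) == set(f)                # f and g have the same range
```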


A word on the axiom of choice and finiteness. The above proof shows that finite sets are Dedekind-finite. There are other ways of defining finiteness, all of which are true for finite sets, but which may also hold for some infinite sets. For example, "every surjection from the set onto itself is a bijection" might fail for infinite Dedekind-finite sets; so might "every linearly ordered partition is finite"; and so on.

Assuming the axiom of choice, or in fact just the axiom of countable choice, all Dedekind-finite sets are finite, and therefore the above forms of finiteness are equivalent once more. (The reverse implication is false, by the way: it is consistent that the axiom of countable choice fails while every Dedekind-finite set is finite.)
