A linear projection $P$ onto a subspace $\mathcal{M}$ has two defining properties: (a) the range of $P$ is $\mathcal{M}$, and (b) projecting twice is the same as projecting once: $P^{2}=P$.
Orthogonal projection is peculiar to an inner product space, and for a subspace it coincides with closest-point projection. There can be many projections onto a subspace, but only one orthogonal projection. You likely saw the first examples of this in calculus, where you were asked to find the closest-point projection of a point $p$ onto a line or plane by finding the point $q$ on the line or plane such that $p-q$ is orthogonal to the given line or plane.
The orthogonal projection of a point $p$ onto a closed subspace $\mathcal{M}$ of a Hilbert space is the unique point $m\in \mathcal{M}$ such that $(p-m) \perp\mathcal{M}$. That is $(x-P_{\mathcal{M}}x) \perp \mathcal{M}$ uniquely determines $P_{\mathcal{M}}$, and this function is automatically linear. Orthogonal projection onto a subspace $\mathcal{M}$ is a closest point projection; that is,
$$
\|x-m\| \ge \|x-P_{\mathcal{M}}x\|,\;\;\; m \in \mathcal{M},
$$
with equality iff $m=P_{\mathcal{M}}x$.
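As a quick numerical illustration of the closest-point property, here is a minimal NumPy sketch; the plane in $\mathbb{R}^3$ and the point $x$ are arbitrary choices made up for this example, not taken from the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)

# M is the xy-plane in R^3, spanned by two orthonormal vectors
# (a hypothetical example chosen for illustration).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
x = np.array([2.0, -1.0, 3.0])

# Orthogonal projection of x onto M.
Px = np.dot(x, e1) * e1 + np.dot(x, e2) * e2

# No point of M is closer to x than Px is.
for _ in range(1000):
    a, b = rng.standard_normal(2)
    m = a * e1 + b * e2
    assert np.linalg.norm(x - m) >= np.linalg.norm(x - Px)
```

Here $Px=(2,-1,0)$ and the minimal distance $\|x-Px\|$ is simply the height $3$ above the plane, as the geometric picture suggests.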
For your case, the orthogonal projection $Px$ of $x$ onto the subspace spanned by $\{ e_{n}\}_{n=1}^{N}$ is the unique $y=\sum_{n}\alpha_{n}e_{n}$ such that $(x-\sum_{n}\alpha_{n}e_{n})\perp \mathcal{M}$. Equivalently,
$$
(x-\sum_{n}\alpha_{n}e_{n}, e_{m})=0,\;\;\; m=1,2,3,\cdots,N,
$$
or, using the orthonormality of $\{ e_{n} \}$,
$$
(x,e_{m}) = \sum_{n}\alpha_{n}(e_{n},e_{m})=\alpha_{m}.
$$
So the orthogonal projection $P$ onto the subspace $\mathcal{M}$ spanned by $\{ e_{n}\}_{n=1}^{N}$ is
$$
Px = \sum_{n=1}^{N}(x,e_{n})e_{n}.
$$
By design, one has $(x-Px)\perp\mathcal{M}$. In particular, $(x-Px)\perp Px$ because $Px\in\mathcal{M}$, which gives the orthogonal decomposition $x=Px+(I-P)x$, and
$$
\|x\|^{2}=\|Px\|^{2}+\|(I-P)x\|^{2}.
$$
So both $P$ and $I-P$ are continuous, since the identity above gives $\|Px\|\le\|x\|$ and $\|(I-P)x\|\le\|x\|$. But $\mathcal{N}(I-P)=\mathcal{M}$ because $Px=x$ iff $x\in\mathcal{M}$, which guarantees that $\mathcal{M}=(I-P)^{-1}\{0\}$ is closed, being the preimage of the closed set $\{0\}$ under a continuous map.
Best Answer
@AdamHughes's answer works just fine; here I give a complete proof of the following proposition: if $(X, \|\cdot\|)$ is a normed space, then any finite-dimensional subspace is closed.
Proof. Let $M \subset X$ have dimension $k \in \mathbb{N}$. Then there exist linearly independent vectors $\{ e_1, \dots, e_k \}$ such that $$ M= \operatorname{span}\{ e_1, \dots, e_k \}. $$ Take any sequence $\{ y_n \}_n \subset M$ such that $y_n \to x \in X$. For each $n$ there are scalars $\{\lambda_1(n), \dots, \lambda_k(n) \} \subset \mathbb{C}$ such that $$ y_n = \lambda_1(n)e_1+ \cdots + \lambda_k(n) e_k. $$ Therefore, if $m<n$, $$ y_n-y_m = (\lambda_1(n)-\lambda_1(m) )e_1+ \cdots + (\lambda_k(n) - \lambda_k(m) )e_k. $$ Considering $\mathbb{C}^k$ as a normed space with the norm $\|(z_1, \dots, z_k)\|_1= \sum_{j=1}^{k}|z_j|$, a standard result (usually used to prove that all norms on a finite-dimensional space are equivalent) tells us that there exists a constant $C>0$ such that $$ \|(\lambda_1(n)-\lambda_1(m) , \dots , \lambda_k(n) - \lambda_k(m) )\|_1 \leq \frac{1}{C} \|y_n-y_m\|. $$ Since $\{ y_n \}_n$ converges in $X$, it is Cauchy, so the sequence $\{ (\lambda_1(n) , \dots , \lambda_k(n) )\}_n\subset \mathbb{C}^k$ is also Cauchy, and thus converges to some $(\lambda_1 , \dots , \lambda_k) \in \mathbb{C}^k$, i.e. $\lambda_j(n) \to \lambda_j \in \mathbb{C}$ as $n \to \infty$. Hence, if $y =\lambda_1e_1+ \cdots + \lambda_k e_k$, then clearly $y \in M$; moreover, \begin{align*} \lim_{n \to \infty} \|y - y_n\| & = \lim_{n \to \infty} \| (\lambda_1(n)-\lambda_1 )e_1+ \cdots + (\lambda_k(n) - \lambda_k )e_k\| \\ & \leq \lim_{n \to \infty} | \lambda_1(n)-\lambda_1 |\|e_1\|+ \cdots + \lim_{n \to \infty} | \lambda_k(n)-\lambda_k |\|e_k\| \\ & = 0 + \cdots + 0 = 0, \end{align*} which gives $y_n \to y$. By uniqueness of limits, $x=y \in M$, and that is why $M$ must be closed. $\blacksquare$
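The standard result invoked in the proof, that $\|\lambda\|_1 \leq \frac{1}{C}\|\sum_j \lambda_j e_j\|$ for some $C>0$, amounts to the fact that the continuous map $\lambda \mapsto \|\sum_j \lambda_j e_j\|$ attains a strictly positive minimum $C$ on the compact set $\{\|\lambda\|_1 = 1\}$ (positive by linear independence). A rough numerical sketch, where the non-orthogonal basis of a plane in $\mathbb{R}^3$ is a hypothetical example and the minimum is only estimated by random sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

# A non-orthogonal basis of a 2-dimensional subspace of R^3
# (a hypothetical example; linear independence is all that matters).
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0]])

# Estimate C = min { ||l1*e1 + l2*e2|| : |l1| + |l2| = 1 }
# by sampling the unit sphere of the 1-norm.
C = np.inf
for _ in range(10000):
    lam = rng.standard_normal(2)
    lam /= np.abs(lam).sum()            # normalize so ||lam||_1 = 1
    C = min(C, np.linalg.norm(lam @ e))

print(C)  # strictly positive; bounded away from 0 for independent vectors
```

For these two vectors a short computation shows the true minimum is $1/\sqrt{5}$, so the sampled estimate sits just above $0.447$; the sampling can only overshoot the minimum, never undershoot it.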