A linear projection $P$ onto a subspace $\mathcal{M}$ has the properties that (a) the range of $P$ is $\mathcal{M}$, and (b) projecting twice is the same as projecting once: $P^{2}=P$.
Orthogonal projection is peculiar to inner product spaces, and for a subspace it coincides with closest-point projection. There can be many projections onto a subspace, but only one orthogonal projection. You saw the first examples of this in calculus, where you were asked to find the closest-point projection of a point $p$ onto a line or plane by finding the point $q$ on the line or plane such that $p-q$ is orthogonal to it.
The orthogonal projection of a point $p$ onto a closed subspace $\mathcal{M}$ of a Hilbert space is the unique point $m\in \mathcal{M}$ such that $(p-m) \perp\mathcal{M}$. That is, the condition $(x-P_{\mathcal{M}}x) \perp \mathcal{M}$ uniquely determines $P_{\mathcal{M}}x$, and the resulting map is automatically linear. Orthogonal projection onto a subspace $\mathcal{M}$ is a closest-point projection; that is,
$$
\|x-m\| \ge \|x-P_{\mathcal{M}}x\|,\;\;\; m \in \mathcal{M},
$$
with equality iff $m=P_{\mathcal{M}}x$.
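As a quick numerical sketch of this closest-point property in $\mathbb{R}^3$ (the plane and the point below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: project p = (3, -2, 5) onto the xy-plane M,
# spanned by the orthonormal vectors u1, u2.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
p = np.array([3.0, -2.0, 5.0])

q = (p @ u1) * u1 + (p @ u2) * u2   # orthogonal projection of p onto M

# p - q is orthogonal to M:
assert abs((p - q) @ u1) < 1e-12 and abs((p - q) @ u2) < 1e-12

# q is at least as close to p as any other point m of M:
for _ in range(100):
    m = rng.normal(size=2) @ np.stack([u1, u2])   # random point of M
    assert np.linalg.norm(p - m) >= np.linalg.norm(p - q)
```

Here $q=(3,-2,0)$ and the residual $p-q=(0,0,5)$ points straight out of the plane, exactly as in the calculus exercise.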
For your case, the orthogonal projection $Px$ of $x$ onto the subspace spanned by $\{ e_{n}\}_{n=1}^{N}$ is the unique $y=\sum_{n}\alpha_{n}e_{n}$ such that $(x-\sum_{n}\alpha_{n}e_{n})\perp \mathcal{M}$. Equivalently,
$$
(x-\sum_{n}\alpha_{n}e_{n}, e_{m})=0,\;\;\; m=1,2,3,\cdots,N,
$$
or, using the orthonormality of $\{ e_{n} \}$,
$$
(x,e_{m}) = \sum_{n}\alpha_{n}(e_{n},e_{m})=\alpha_{m}.
$$
So the orthogonal projection $P$ onto the subspace $\mathcal{M}$ spanned by $\{ e_{n}\}_{n=1}^{N}$ is
$$
Px = \sum_{n=1}^{N}(x,e_{n})e_{n}.
$$
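A short NumPy sanity check of this formula (the dimensions and vectors are made up; the orthonormal $e_n$ are obtained from a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch in R^8: e_1,...,e_N are the orthonormal columns of Q,
# and Px = sum_n (x, e_n) e_n.
d, N = 8, 3
Q, _ = np.linalg.qr(rng.normal(size=(d, N)))   # Q has orthonormal columns
x = rng.normal(size=d)

Px = sum((x @ Q[:, n]) * Q[:, n] for n in range(N))

# The residual x - Px is orthogonal to every e_m:
assert np.allclose(Q.T @ (x - Px), 0.0)

# Projecting twice is the same as projecting once: P(Px) = Px.
PPx = sum((Px @ Q[:, n]) * Q[:, n] for n in range(N))
assert np.allclose(PPx, Px)
```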
By design, one has $(x-Px)\perp\mathcal{M}$. In particular, $(x-Px)\perp Px$ because $Px\in\mathcal{M}$, which gives the orthogonal decomposition $x=Px+(I-P)x$, and
$$
\|x\|^{2}=\|Px\|^{2}+\|(I-P)x\|^{2}.
$$
So both $P$ and $I-P$ are continuous: the identity gives $\|Px\|\le\|x\|$ and $\|(I-P)x\|\le\|x\|$. Moreover $\mathcal{N}(I-P)=\mathcal{M}$, because $Px=x$ iff $x\in\mathcal{M}$, which guarantees that $\mathcal{M}=(I-P)^{-1}\{0\}$ is closed.
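The Pythagorean identity and the resulting norm bounds can be checked numerically; this is a sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Check ||x||^2 = ||Px||^2 + ||(I-P)x||^2 for P the orthogonal projection
# onto the column span of Q (orthonormal columns from a QR factorization).
d, N = 10, 4
Q, _ = np.linalg.qr(rng.normal(size=(d, N)))   # orthonormal e_1,...,e_N
x = rng.normal(size=d)

Px = Q @ (Q.T @ x)        # Px = sum_n (x, e_n) e_n
r = x - Px                # (I - P)x

assert np.isclose(x @ x, Px @ Px + r @ r)       # Pythagorean identity
assert np.linalg.norm(Px) <= np.linalg.norm(x)  # so ||P|| <= 1: P is continuous
assert np.linalg.norm(r) <= np.linalg.norm(x)   # likewise for I - P
```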
Here is an approach that does not use an explicit characterisation of the dual space of $\ell^p$. Let $\varphi$ be a continuous linear functional on $\ell^p$ and let $\left(a_n\right)_{n\geqslant 1}$ be an element of $\ell^p$. Let $c_n=a_n\operatorname{sgn}(a_n)\operatorname{sgn}\left(\varphi\left(e_n\right)\right)$, where $\operatorname{sgn}(x)=1$ if $x\gt0$, $-1$ if $x\lt 0$ and $0$ for $x=0$. Let $x_N:=\sum_{n=1}^Nc_ne_n$. Then the sequence $\left(x_N\right)_{N\geqslant 1}$ converges in $\ell^p$ to some $x$ (indeed, $\left\lVert x_{M+N}-x_N\right\rVert_p^p=\sum_{n=N+1}^{N+M}\lvert a_n\rvert^p$).
As a consequence, the sequence $\left(\varphi\left(x_N\right)\right)_{N\geqslant 1}$ converges to $\varphi(x)$, hence is bounded, and it follows that $\sum_{n\geqslant 1}\left\lvert a_n\varphi\left(e_n\right)\right\rvert$ is convergent (indeed, $\varphi(x_N)=\sum_{n=1}^Nc_n\varphi(e_n)=\sum_{n=1}^N\lvert a_n \varphi(e_n)\rvert$, a bounded nondecreasing sequence). In other words, we proved
$$\tag{*}
\left(a_n\right)_{n\geqslant 1}\in\ell^p\Rightarrow \sum_{n\geqslant 1}\left\lvert a_n\varphi\left(e_n\right)\right\rvert<+\infty.
$$
This implies that the sequence $\left(\varphi\left(e_n\right)\right)_{n\geqslant 1}$ converges to $0$. Indeed, if not, there are a $\delta\gt 0$ and a sequence $(n_k)$ of integers growing to infinity such that $\left\lvert \varphi\left(e_{n_k}\right)\right\rvert\gt \delta$ for all $k$. Let $\left(b_k\right)\in \ell^p\setminus \ell^1$ (for instance $b_k=1/k$ when $p\gt 1$) and define $a_n$ by $a_{n_k}=b_k$ and $a_n=0$ if $n$ is not of the form $n_k$ for some $k$; then $\sum_{n\geqslant 1}\lvert a_n\varphi(e_n)\rvert\geqslant\delta\sum_{k\geqslant 1}\lvert b_k\rvert=+\infty$, contradicting $(*)$.
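A hedged numerical illustration of this contradiction, with all choices made up for the sketch: take $p=2$, $\delta=1$, the subsequence $n_k=k$, and $b_k=1/k\in\ell^2\setminus\ell^1$.

```python
import math

# Illustration with p = 2: suppose |phi(e_{n_k})| >= delta = 1 with n_k = k,
# and take b_k = 1/k, which lies in l^2 but not in l^1.
def lower_bound(K):
    # sum_{k<=K} |b_k| is a lower bound for sum_n |a_n phi(e_n)| when delta = 1
    return sum(1.0 / k for k in range(1, K + 1))

# (a_n) really is in l^2: the squared partial sums stay below pi^2/6.
assert sum(1.0 / k**2 for k in range(1, 10**5)) < math.pi**2 / 6

# ...but the lower bound for sum_n |a_n phi(e_n)| grows like log K:
assert lower_bound(10**4) - lower_bound(10**3) > 2.0
```

So the series in $(*)$ would have unbounded partial sums, which is the contradiction.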
Partial Answer: Here is an attempt that I started writing.
Your statement is exactly right: the definition of weak convergence is such that $x_N$ "converges weakly to zero" when $\langle x_N, y \rangle \to 0$ for all $y \in H$.
Now, $\{e_1,e_2,\dots\}$ is an orthonormal basis, which means that we can write $y = \sum_{n=1}^\infty y_ne_n$. Since $x_N = \frac{1}{\sqrt{N}}\sum_{n=1}^N e_n$, in terms of these coefficients $y_n$ we have $$ \langle x_N,y \rangle = \frac{1}{\sqrt{N}}\sum_{n=1}^N y_n. $$ We want to show that this sequence of sums converges to zero.
Note that $\left| \frac{1}{\sqrt{N}}\sum_{n=1}^N y_n\right| \leq \frac{1}{\sqrt{N}}\sum_{n=1}^N |y_n|$, and $\|y\|^2 = \langle y,y \rangle = \sum_{n=1}^\infty |y_n|^2 < \infty$. With all that, we see that it suffices to show the following: if $a_n \geq 0$ for all $n$ and $\sum_{n=1}^\infty a_n^2 < \infty$, then $\frac{1}{\sqrt{N}}\sum_{n=1}^N a_n \to 0$.
Proof: Suppose for the purpose of contradiction that the limit is non-zero. By the definition of a limit, there exist an $\epsilon > 0$ and infinitely many integers $N_1<N_2<\dots$ for which $$ \frac1{\sqrt{N_k}}\sum_{n=1}^{N_k} a_n \geq \epsilon, \quad\text{i.e.}\quad S_k :=\sum_{n=1}^{N_k} a_n \geq \epsilon \sqrt{N_k}. $$ It follows that for $k = 1,2,\dots$, we have $$ S_{k+1} - S_k = \sum_{n=N_k+1}^{N_{k+1}} a_n \geq \epsilon (\sqrt{N_{k+1}} - \sqrt{N_k}). $$ Now, we note that $\sum_{n=1}^N a_n^2 \geq \frac 1N \left(\sum_{n=1}^N a_n\right)^2$ (as can be seen by Cauchy-Schwarz). Thus, we have $$ \begin{align} \sum_{n=N_k+1}^{N_{k+1}} a_n^2 &\geq \frac 1{N_{k+1} - N_k}\left(\sum_{n=N_k+1}^{N_{k+1}} a_n\right)^2 \\ & \geq \epsilon^2 \frac{(\sqrt{N_{k+1}} - \sqrt{N_k})^2}{N_{k+1} - N_k} = \epsilon^2 \left(\frac{2 \sqrt{N_{k+1}}}{\sqrt{N_{k+1}}+ \sqrt{N_k}} - 1\right) \end{align} $$
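Independently of the proof attempt, the claim itself can be sanity-checked numerically with the made-up $\ell^2$ sequence $a_n = 1/n$, for which $\frac{1}{\sqrt{N}}\sum_{n\le N} a_n \approx \log(N)/\sqrt{N} \to 0$:

```python
import math

# Sanity check of the claim with the hypothetical l^2 sequence a_n = 1/n:
# (1/sqrt(N)) * sum_{n<=N} a_n behaves like log(N)/sqrt(N) -> 0.
def normalized_sum(N):
    return sum(1.0 / n for n in range(1, N + 1)) / math.sqrt(N)

vals = [normalized_sum(10**j) for j in range(1, 7)]

assert all(b < a for a, b in zip(vals, vals[1:]))  # decreasing along powers of 10
assert vals[-1] < 0.02                             # already small at N = 10^6
```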
Another idea: write $\beta_N := \langle x_N,y \rangle$. Note that $x_{N+1} = \frac{1}{\sqrt{N+1}}(\sqrt{N}x_N + e_{N+1})$. It follows that $$ \beta_{N+1} = \frac 1{\sqrt{N+1}}(\sqrt{N}\beta_N + y_{N+1}) = \sqrt{\frac{N}{N+1}} \beta_N + \frac 1{\sqrt{N+1}}y_{N+1} $$
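This recursion is easy to verify against the direct formula $\beta_N = \frac{1}{\sqrt{N}}\sum_{n=1}^N y_n$; here is a sketch with made-up coefficients $y_n$:

```python
import math
import random

random.seed(4)

# Check the recursion beta_{N+1} = sqrt(N/(N+1)) * beta_N + y_{N+1}/sqrt(N+1)
# against the direct formula beta_N = (1/sqrt(N)) * sum_{n<=N} y_n.
y = [random.gauss(0.0, 1.0) for _ in range(200)]

beta = y[0]                                  # beta_1 = y_1 / sqrt(1)
for N in range(1, len(y)):                   # build beta_{N+1} from beta_N
    beta = math.sqrt(N / (N + 1)) * beta + y[N] / math.sqrt(N + 1)

direct = sum(y) / math.sqrt(len(y))
assert abs(beta - direct) < 1e-9
```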