A linear projection $P$ onto a subspace $\mathcal{M}$ has two defining properties: (a) the range of $P$ is $\mathcal{M}$, and (b) projecting twice is the same as projecting once: $P^{2}=P$.
Orthogonal projection is peculiar to inner product spaces, and for a subspace it coincides with closest-point projection. There can be many projections onto a subspace, but only one orthogonal projection. You likely saw the first examples of this in calculus, where you were asked to find the closest-point projection of a point $p$ onto a line or plane by finding the point $q$ on the line or plane such that $p-q$ is orthogonal to it.
The orthogonal projection of a point $p$ onto a closed subspace $\mathcal{M}$ of a Hilbert space is the unique point $m\in \mathcal{M}$ such that $(p-m) \perp\mathcal{M}$. That is $(x-P_{\mathcal{M}}x) \perp \mathcal{M}$ uniquely determines $P_{\mathcal{M}}$, and this function is automatically linear. Orthogonal projection onto a subspace $\mathcal{M}$ is a closest point projection; that is,
$$
\|x-m\| \ge \|x-P_{\mathcal{M}}x\|,\;\;\; m \in \mathcal{M},
$$
with equality iff $m=P_{\mathcal{M}}x$.
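These facts are easy to check numerically in a finite-dimensional example. The following sketch (illustrative vectors and names, numpy assumed available) projects a point of $\mathbb{R}^{3}$ onto a coordinate plane and verifies both the orthogonality condition and the closest-point property:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis for a 2-dimensional subspace M of R^3 (a plane through 0).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
x = np.array([1.0, 2.0, 3.0])

# Orthogonal projection of x onto M.
Px = (x @ e1) * e1 + (x @ e2) * e2

# x - Px is orthogonal to M ...
assert abs((x - Px) @ e1) < 1e-12 and abs((x - Px) @ e2) < 1e-12

# ... and Px is the closest point of M to x: random points of M are farther away.
for _ in range(1000):
    m = rng.normal(size=2) @ np.vstack([e1, e2])  # random point of M
    assert np.linalg.norm(x - m) >= np.linalg.norm(x - Px) - 1e-12
```

Any orthonormal pair would do here; the coordinate plane just keeps the arithmetic transparent.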
For your case, the orthogonal projection $Px$ of $x$ onto the subspace spanned by $\{ e_{n}\}_{n=1}^{N}$ is the unique $y=\sum_{n}\alpha_{n}e_{n}$ such that $(x-\sum_{n}\alpha_{n}e_{n})\perp \mathcal{M}$. Equivalently,
$$
(x-\sum_{n}\alpha_{n}e_{n}, e_{m})=0,\;\;\; m=1,2,\dots,N,
$$
or, using the orthonormality of $\{ e_{n} \}$,
$$
(x,e_{m}) = \sum_{n}\alpha_{n}(e_{n},e_{m})=\alpha_{m}.
$$
So the orthogonal projection $P$ onto the subspace $\mathcal{M}$ spanned by $\{ e_{n}\}_{n=1}^{N}$ is
$$
Px = \sum_{n=1}^{N}(x,e_{n})e_{n}.
$$
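As a numerical sanity check (a sketch with numpy; dimensions are illustrative): storing the $e_{n}$ as the columns of a matrix $E$ with orthonormal columns, the formula above reads $Px=EE^{\mathsf T}x$ with coefficients $\alpha_{n}=(x,e_{n})$, and idempotence and the orthogonality of the residual can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal vectors e_1, ..., e_N in R^d from a QR factorization.
d, N = 6, 3
E = np.linalg.qr(rng.normal(size=(d, d)))[0][:, :N]   # columns are the e_n

# Matrix of Px = sum_n (x, e_n) e_n  is  P = E E^T.
P = E @ E.T

x = rng.normal(size=d)
Px = E @ (E.T @ x)                 # coefficients alpha_n = (x, e_n)

assert np.allclose(P @ x, Px)
assert np.allclose(P @ P, P)       # P^2 = P: projecting twice = projecting once
assert np.allclose(E.T @ (x - Px), 0.0)   # (x - Px) is orthogonal to M
```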
By design, one has $(x-Px)\perp\mathcal{M}$. In particular, $(x-Px)\perp Px$ because $Px\in\mathcal{M}$, which gives the orthogonal decomposition $x=Px+(I-P)x$, and
$$
\|x\|^{2}=\|Px\|^{2}+\|(I-P)x\|^{2}.
$$
So both $P$ and $I-P$ are continuous (each has norm at most $1$). Moreover, $\mathcal{N}(I-P)=\mathcal{M}$ because $Px=x$ iff $x\in\mathcal{M}$, which guarantees that $\mathcal{M}=(I-P)^{-1}\{0\}$ is closed, being the preimage of the closed set $\{0\}$ under a continuous map.
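The Pythagorean identity is also easy to confirm numerically (a sketch in $\mathbb{R}^{5}$; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

d, N = 5, 2
E = np.linalg.qr(rng.normal(size=(d, d)))[0][:, :N]  # orthonormal columns
P = E @ E.T                                          # orthogonal projection onto M

x = rng.normal(size=d)
lhs = np.linalg.norm(x) ** 2
rhs = np.linalg.norm(P @ x) ** 2 + np.linalg.norm(x - P @ x) ** 2
assert abs(lhs - rhs) < 1e-9     # ||x||^2 = ||Px||^2 + ||(I-P)x||^2

# In particular ||Px|| <= ||x||: P (and likewise I - P) is a contraction.
assert np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12
```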
Define the operator $S : H\to H$ by $Sx = \sum_k\langle x,e_k\rangle x_k$, where $(e_k)_k$ is an orthonormal basis of $H$ and $(x_k)_k\subset H$ satisfies $\sum_k\|x_k-e_k\|^2 < 1$. From
\begin{align*}
\left\|\sum\langle x,e_k\rangle x_k\right\|
&\le\left\|\sum\langle x,e_k\rangle (x_k-e_k)\right\| + \left\|\sum\langle x,e_k\rangle e_k\right\|\\
&\le\sum |\langle x,e_k\rangle| \|x_k-e_k\| + \left\|\sum\langle x,e_k\rangle e_k\right\|\\
&\le \left(\sum |\langle x,e_k\rangle|^2\right)^{1/2}\left(\sum \|x_k-e_k\|^2\right)^{1/2} + \left(\sum |\langle x,e_k\rangle|^2\right)^{1/2}\\
&\le 2\left(\sum |\langle x,e_k\rangle|^2\right)^{1/2},
\end{align*}
we see that the partial sums $\sum_{k=1}^n\langle x,e_k\rangle x_k$ form a Cauchy sequence. So $S$ is well defined. Moreover, if we set $\delta := \left(\sum_k\|x_k-e_k\|^2\right)^{1/2} < 1$, then
$$
\|(S-I)x\| = \left\|\sum_k\langle x,e_k\rangle (x_k-e_k)\right\|\le\delta\|x\|.
$$
So $\|S-I\| \le \delta < 1$, which implies that $S$ is invertible (by the Neumann series). Now it should be easy for you to show that the linear span of $(x_k)_k$ is dense in $H$. For this, note that $Se_k = x_k$, so the invertible operator $S$ maps the dense span of $(e_k)_k$ onto the span of $(x_k)_k$.
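In finite dimensions the whole argument can be replayed concretely. In this sketch (illustrative sizes; numpy assumed) the $e_k$ are the standard basis of $\mathbb{R}^d$, so $S$ is simply the matrix whose $k$-th column is $x_k$, and $\|S-I\|\le\delta$ holds because the operator norm is dominated by the Frobenius norm:

```python
import numpy as np

rng = np.random.default_rng(3)

d = 8
# Standard ONB e_k of R^d, perturbed to x_k = e_k + small noise.
noise = 0.02 * rng.normal(size=(d, d))
X = np.eye(d) + noise                 # column k is x_k, so S e_k = x_k and S = X

delta = np.sqrt((noise ** 2).sum())   # delta = (sum_k ||x_k - e_k||^2)^{1/2}
assert delta < 1

# ||S - I|| <= delta < 1, so S = I + (S - I) is invertible (Neumann series).
assert np.linalg.norm(X - np.eye(d), ord=2) <= delta + 1e-12
assert abs(np.linalg.det(X)) > 0      # invertible in this finite-dim sketch

# Hence the x_k span R^d: every y = S(S^{-1}y) is a combination of the columns.
```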
For any $h\in H$, using $\|P_n\|\le 1$ and the strong convergence $P_nh\to h$,
\begin{align}
\|P_nTP_nh-Th\|&=\|P_nTP_nh-P_nTh+P_nTh-Th\|\\
&\le \|P_nT(P_nh-h)\|+\|P_nTh-Th\|\\
&\le \|P_n\|\,\|T\|\,\|P_nh-h\|+\|P_nTh-Th\|\\
&\le \|T\|\,\|P_nh-h\|+\|P_nTh-Th\|\to0.
\end{align}
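To see the estimate in action, one can take $P_n$ to be truncation onto the first $n$ coordinates of $\mathbb{R}^{d}$ (a finite-dimensional stand-in for a Hilbert space; sizes are illustrative). The computed errors respect the bound on the last line and vanish once $P_n=I$:

```python
import numpy as np

rng = np.random.default_rng(4)

d = 50
T = rng.normal(size=(d, d))
h = rng.normal(size=d)

def Pn(n, v):
    """Projection onto the first n coordinates (truncation)."""
    out = v.copy()
    out[n:] = 0.0
    return out

# Error ||P_n T P_n h - T h|| for n = 1, ..., d.
errs = [np.linalg.norm(Pn(n, T @ Pn(n, h)) - T @ h) for n in range(1, d + 1)]
assert errs[-1] < 1e-9            # P_d = I, so the error vanishes at n = d

# Each error is dominated by ||T|| ||P_n h - h|| + ||P_n T h - T h||.
normT = np.linalg.norm(T, ord=2)
for n in range(1, d + 1):
    bound = (normT * np.linalg.norm(Pn(n, h) - h)
             + np.linalg.norm(Pn(n, T @ h) - T @ h))
    assert errs[n - 1] <= bound + 1e-9
```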