If $A,B$ are positive, commuting operators, then $AB$ is positive. This is because the unique positive $\sqrt{A}$ is a limit of polynomials in $A$, so it must also commute with $B$ and, hence,
$$
\langle ABx,x\rangle = \langle \sqrt{A}Bx,\sqrt{A}x\rangle =
\langle B\sqrt{A}x,\sqrt{A}x\rangle \ge 0.
$$
This result is useful in what follows.
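As a finite-dimensional sanity check, here is a small numerical sketch (the matrices and their eigenvalues are my own illustration, not from the text): two positive matrices built on a common eigenbasis, so that they commute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build two commuting positive matrices by giving them a common
# orthonormal eigenbasis Q (sizes and eigenvalues are arbitrary choices).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, 3.0, 4.0]) @ Q.T   # positive: eigenvalues > 0
B = Q @ np.diag([5.0, 1.0, 2.0, 3.0]) @ Q.T   # positive, commutes with A

assert np.allclose(A @ B, B @ A)              # A and B commute

# The unique positive square root of A shares the eigenbasis of A,
# so it commutes with B as well.
sqrtA = Q @ np.diag(np.sqrt([1.0, 2.0, 3.0, 4.0])) @ Q.T
assert np.allclose(sqrtA @ sqrtA, A)
assert np.allclose(sqrtA @ B, B @ sqrtA)

# <ABx, x> = <B sqrtA x, sqrtA x> >= 0 for every x
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ (A @ B) @ x >= -1e-10
```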
Suppose $A$ is selfadjoint. Let $P=\frac{1}{2}(|A|+A)$ and $N=\frac{1}{2}(|A|-A)$, where $|A|$ is the unique positive square root of $A^2$. Then $PN=NP=0$ and $A=P-N$. This is the desired decomposition of $A$, and the trick is to show that $P,N$ are positive operators.
Let $E$ be the orthogonal projection onto $\mathcal{N}(|A|+A)$. Then $(|A|+A)E=0$ gives $E(|A|+A)=0$ by taking adjoints. And $(|A|+A)(|A|-A)=|A|^2-A^2=0$ (using that $A$ commutes with $|A|$) shows that the range of $|A|-A$ lies in $\mathcal{N}(|A|+A)$, so $E(|A|-A)=|A|-A$. Hence,
$$
2EA=E(|A|+A)-E(|A|-A) = A-|A| \\
|A| = (I-2E)A \\
2E|A| = E(|A|+A)+E(|A|-A)=|A|-A \\
A = (I-2E)|A|.
$$
These two equations are consistent because $(I-2E)^2=I-4E+4E^2=I-4E+4E=I$ (using $E^2=E$), which establishes $I-2E$ as its own inverse. Taking adjoints of the above equations shows that $E$ commutes with $A$ and with $|A|$, which is useful in what follows. Now the operators $P$ and $N$ may be written as
$$
P=\frac{1}{2}(|A|+A)=\frac{1}{2}(|A|+(I-2E)|A|)=(I-E)|A|, \\
N=\frac{1}{2}(|A|-A)=\frac{1}{2}(|A|-(I-2E)|A|)=E|A|
$$
Because $E$ commutes with $A$, then $E$ must also commute with $A^2$ and, hence, also with $|A|=(A^2)^{1/2}$. By the result of the first paragraph, $P=(I-E)|A|$ and $N=E|A|$ are positive.
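In finite dimensions the whole decomposition can be checked numerically; the random symmetric matrix below is just an illustrative stand-in for a selfadjoint $A$.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random symmetric matrix as a finite-dimensional selfadjoint A.
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2

# |A| = (A^2)^{1/2}: take absolute values of the eigenvalues of A.
w, V = np.linalg.eigh(A)
absA = V @ np.diag(np.abs(w)) @ V.T

P = (absA + A) / 2
N = (absA - A) / 2

assert np.allclose(A, P - N)                    # A = P - N
assert np.allclose(P @ N, np.zeros_like(A))     # PN = 0
assert np.allclose(N @ P, np.zeros_like(A))     # NP = 0
assert np.linalg.eigvalsh(P).min() >= -1e-10    # P is positive
assert np.linalg.eigvalsh(N).min() >= -1e-10    # N is positive
```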
Your approach seems perfectly fine and probably what the authors intended.
Note that there is no need for us to find the "best" (i.e. largest) $\alpha_i$ such that $I - \sum_i E_i$ is positive semidefinite. Note that $\lambda$ is an eigenvalue of $M$ if and only if $1 - \lambda$ is an eigenvalue of $I - M$. Thus, for a positive semidefinite matrix $M$, $I - M$ will be positive semidefinite if and only if all eigenvalues of $M$ are at most $1$. Moreover, if $M$ is positive semidefinite, then the largest eigenvalue of $M$ is equal to $\|M\|$, the "spectral norm" of $M$. Now, note that
$$
\left\|\sum_i P_i \right\| \leq \sum_{i} \|P_i\| = \sum_i 1 = m.
$$
Thus, if we take $\alpha_i = \frac 1m$ for all $i$, we find that
$$
\left\|\sum_i E_i \right\| = \left\|\frac 1m \sum_i P_i \right\| = \frac 1m \left\|\sum_i P_i \right\| \leq 1,
$$
so that $I - \sum_i E_i$ is positive semidefinite, as desired.
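A quick numerical sketch of this bound, with random unit vectors $|\psi_i\rangle$ of my own choosing (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# m rank-one projections P_i = |psi_i><psi_i| onto random unit vectors.
m, d = 5, 3
Ps = []
for _ in range(m):
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    Ps.append(np.outer(v, v))                  # each ||P_i|| = 1

S = sum(Ps)
assert np.linalg.norm(S, 2) <= m + 1e-10       # triangle inequality: ||sum P_i|| <= m

# With alpha_i = 1/m, every eigenvalue of sum_i E_i is at most 1,
# so I - sum_i E_i is positive semidefinite.
residual = np.eye(d) - S / m
assert np.linalg.eigvalsh(residual).min() >= -1e-10
```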
For another perspective, note that for a Hermitian matrix $M$, $I - M$ is positive semidefinite iff for all $|\phi\rangle$ we have
$$
\langle \phi|I - M|\phi \rangle \geq 0 \iff\\
\langle \phi|I|\phi \rangle - \langle \phi|M|\phi \rangle \geq 0 \iff\\
\langle \phi |M|\phi \rangle \leq \langle \phi |\phi \rangle.
$$
On the other hand, since each $P_i$ is an orthogonal projection, $\langle \phi |P_i |\phi\rangle \leq \langle \phi |\phi \rangle$, so that
$$
\left\langle \phi \left| \sum_i P_i \right| \phi \right\rangle =
\sum_i \langle \phi |P_i|\phi \rangle \leq m \cdot \langle \phi|\phi \rangle.
$$
By a similar argument to the previous approach, we can conclude that taking $\alpha_i = \frac 1m$ ensures that $I - \sum_i E_i$ is positive semidefinite.
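The quadratic-form version can be checked the same way (again with illustrative random data of my own):

```python
import numpy as np

rng = np.random.default_rng(3)

# Projections P_i onto random unit vectors, and a random test vector phi.
m, d = 4, 3
vs = [v / np.linalg.norm(v) for v in rng.standard_normal((m, d))]
Ps = [np.outer(v, v) for v in vs]
phi = rng.standard_normal(d)

# <phi| sum_i P_i |phi> = sum_i <phi|P_i|phi>, and each term is
# at most <phi|phi> since P_i is an orthogonal projection.
lhs = phi @ sum(Ps) @ phi
assert abs(lhs - sum(phi @ Pi @ phi for Pi in Ps)) < 1e-10
assert all(phi @ Pi @ phi <= phi @ phi + 1e-10 for Pi in Ps)
assert lhs <= m * (phi @ phi) + 1e-10
```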
If you were interested in finding the maximal values of $\alpha_i$ (say, in the sense of maximizing $\sum_i \alpha_i$) such that $E_{m+1}$ is still positive semidefinite, you could do so via a dual semidefinite program. In particular, if $\alpha$ and $b$ are the column-vectors $\alpha = (\alpha_1,\alpha_2,\dots,\alpha_m)$ and $b = (1,1,\dots,1)$, then we aim to solve the optimization problem
$$
\max_{\alpha \in \Bbb R^m} b^\top \alpha \quad \text{subject to } \quad
\sum_{i=1}^m \alpha_i P_i \preceq I.
$$
Suffice it to say, such problems are highly non-trivial and are typically solved with computer assistance.
This is of course much simpler in the case where the $|\psi_i\rangle$ are mutually orthogonal, since then the $P_i$ are mutually orthogonal projections and the constraint is equivalent to $\alpha_i \leq 1$ for all $i$.
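A small sketch of the orthogonal case, taking the $|\psi_i\rangle$ to be standard basis vectors (my choice for illustration):

```python
import numpy as np

# With mutually orthogonal |psi_i>, take the standard basis vectors e_i:
# the P_i = e_i e_i^T are mutually orthogonal projections.
d = 4
alphas = np.array([1.0, 0.7, 1.0, 0.3])        # any values with alpha_i <= 1
I = np.eye(d)
S = sum(a * np.outer(I[i], I[i]) for i, a in enumerate(alphas))

# sum_i alpha_i P_i is just diag(alphas), so the constraint
# sum_i alpha_i P_i <= I reduces to alpha_i <= 1 for each i.
assert np.allclose(S, np.diag(alphas))
assert np.linalg.eigvalsh(np.eye(d) - S).min() >= -1e-12
```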
Best Answer
Not all Hermitian operators are positive (just take the simple Hermitian $1\times 1$ matrix $H= (-1)$ as a counterexample). Completeness also has nothing to do with positivity.
However, an operator of the form $$E= M^\dagger M$$ is trivially positive. We can show that for any vector $\bf v$, we have that $$\langle \mathbf v , E \mathbf v\rangle = \langle \mathbf v , M^\dagger M \mathbf v\rangle = \langle M \mathbf v , M \mathbf v\rangle = \langle \mathbf w, \mathbf w\rangle \geq 0 $$ where $\mathbf w = M \mathbf v$.
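A quick numerical check of this (the random complex rectangular $M$ is my own example):

```python
import numpy as np

rng = np.random.default_rng(4)

# Any E = M^dagger M is positive, even for a rectangular non-Hermitian M.
M = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
E = M.conj().T @ M

assert np.allclose(E, E.conj().T)              # E is Hermitian
for _ in range(100):
    v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    w = M @ v                                  # <v, E v> = <w, w> = ||w||^2
    quad = v.conj() @ E @ v
    assert abs(quad.imag) < 1e-9               # the quadratic form is real
    assert quad.real >= -1e-10                 # ... and nonnegative
```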