Yes, a predictable process stopped at a stopping time is again predictable. This can be shown by a monotone class argument together with the fact that the predictable $\sigma$-field is generated by sets of the form
$$
A\times \{0\}, \;A\in\mathcal{F}_0\quad\text{and}\quad A\times (s,t],\;A\in\mathcal{F}_s,\;0\leq s<t.
$$
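As a side remark (mine, not part of the original argument): these generators are themselves left-continuous adapted processes, which is one way to see that they belong to the predictable $\sigma$-field as it is often defined. For instance, for $A\in\mathcal{F}_s$ the process
$$
(\omega,u)\mapsto 1_{A\times (s,t]}(\omega,u)=1_A(\omega)\,1_{(s,t]}(u)
$$
is adapted (it vanishes for $u\leq s$, and $1_A$ is $\mathcal{F}_s$-measurable) and left-continuous in $u$.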
Also I'm pretty sure that your definition of $M_n$ should be $\int_0^n h_s\,\mathrm{d} X_s$.
To show the above result, we let
$$
\mathcal{H}=\{(X_t)_{t\geq 0}\ \text{bounded}\mid (X_t^{\tau})_{t\geq 0} \text{ is predictable for every stopping time } \tau \},
$$
and
$$
\mathcal{K}=\{(X_t)_{t\geq 0}\mid X_t(\omega)=1_{A\times \{0\}}(\omega,t),\; A\in\mathcal{F}_0\text{ or } X_t(\omega)=1_{A\times (s,r]}(\omega,t),\;0\leq s<r,\; A\in\mathcal{F}_s\},
$$
i.e. $\mathcal{K}$ consists of the elementary predictable processes. Then $\mathcal{H}$ is a vector space containing all constant processes, and it is closed under bounded monotone convergence (i.e. if $(f_n)\subseteq \mathcal{H}$ is a uniformly bounded, nondecreasing sequence of nonnegative processes, then $f=\sup_n f_n\in\mathcal{H}$); moreover, $\mathcal{K}$ is stable under multiplication. Hence we are within the scope of the monotone class theorem.
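For reference (my paraphrase; the exact formulation varies between texts), the functional monotone class theorem being invoked says: if $\mathcal{H}$ is a vector space of bounded functions containing the constants and closed under bounded monotone convergence, and $\mathcal{K}\subseteq\mathcal{H}$ is stable under multiplication, then
$$
\{f \mid f \text{ bounded and } \sigma(\mathcal{K})\text{-measurable}\}\subseteq\mathcal{H}.
$$
In our case $\sigma(\mathcal{K})$ is precisely the predictable $\sigma$-field, since $\mathcal{K}$ consists of the indicators of the generating sets above.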
Now if we can show that $\mathcal{K}\subseteq \mathcal{H}$, then every bounded predictable process is in $\mathcal{H}$. For a general predictable process $X=(X_t)_{t\geq 0}$, we let $X^n=(X\wedge n)\vee (-n)$, so each $X^n$ is a bounded predictable process. In particular $X^n\in\mathcal{H}$ for each $n$, and therefore $(X^n)^{\tau}$ is predictable for every $n$, where $\tau$ is any stopping time. Now we get that $X^{\tau}$ is predictable because
$$
X^{\tau}_t(\omega)=\lim_{n\to\infty} (X^n)^{\tau}_t(\omega),\quad (\omega,t)\in \Omega\times [0,\infty).
$$
and pointwise limits of predictable processes are again predictable (predictability is just measurability with respect to the predictable $\sigma$-field, and measurability is preserved under pointwise limits). What remains is to show that $\mathcal{K}\subseteq\mathcal{H}$.
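For completeness, here is a sketch of that last verification (my own computation, using only the generating sets above). For $A\in\mathcal{F}_0$, since $t\wedge\tau(\omega)=0$ exactly when $t=0$ or $\tau(\omega)=0$,
$$
\big(1_{A\times\{0\}}\big)^{\tau}=1_{A\times\{0\}}+1_{(A\cap\{\tau=0\})\times(0,\infty)},
$$
which is predictable because $A\cap\{\tau=0\}\in\mathcal{F}_0$ and $(A\cap\{\tau=0\})\times(0,\infty)=\bigcup_{n}(A\cap\{\tau=0\})\times(0,n]$. Similarly, for $A\in\mathcal{F}_s$ and $0\leq s<r$, using that $t\wedge\tau>u$ iff $t>u$ and $\tau>u$,
$$
\big\{\big(1_{A\times(s,r]}\big)^{\tau}=1\big\}=\big((A\cap\{\tau>s\})\times(s,\infty)\big)\setminus\big(\{\tau>r\}\times(r,\infty)\big),
$$
and both sets on the right are predictable, since $\{\tau>u\}\in\mathcal{F}_u$ and $B\times(u,\infty)=\bigcup_n B\times(u,u+n]$ for $B\in\mathcal{F}_u$. Hence $\mathcal{K}\subseteq\mathcal{H}$.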
Best Answer
$\newcommand{\E}{\mathbb E}$In the proof of Proposition 1.3 on page 52 (the proposition before the one you've shown) the book defines $H \cdot X$ to be the process $Y$ where
$$
Y_0 = X_0, \qquad Y_n = Y_{n-1} + H_n (X_n - X_{n-1}).
$$
This means we actually have
$$
(H \cdot X)_n = X_0 + \sum_{k=1}^n H_k (X_k - X_{k-1}),
$$
from which we get $\E[(H \cdot X)_n] = \E[X_0]$. I agree that this is not the standard convention, though, and even this book changes convention when it formally defines the stochastic integral. In fact, on page 138 it stresses that $K \cdot M$ vanishes at $0$.
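To spell out the expectation computation (a sketch on my part, assuming, as in the setting of that proposition, that $X$ is a martingale and $H$ is a bounded predictable sequence, i.e. each $H_k$ is $\mathcal{F}_{k-1}$-measurable): for $k\geq 1$,
$$
\E\big[H_k (X_k - X_{k-1})\big]
= \E\big[\E[H_k (X_k - X_{k-1}) \mid \mathcal{F}_{k-1}]\big]
= \E\big[H_k \, \E[X_k - X_{k-1} \mid \mathcal{F}_{k-1}]\big]
= 0,
$$
by the tower property, taking out the $\mathcal{F}_{k-1}$-measurable factor $H_k$, and the martingale property of $X$. Summing over $k = 1, \dots, n$ then gives $\E[(H \cdot X)_n] = \E[X_0]$.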