I will present a more detailed argument that $\prod_{s \leq t} (1+\Delta X_s)\exp(-\Delta X_s)$ is cadlag and adapted.
For adaptedness, fix a time $T$ and consider the product at $T$. Note that for each $s \leq T$, the random variable $(1+\Delta X_s)\exp(-\Delta X_s)$ is $\mathcal F_T$-measurable, because $X_s$ and $X_{s^-}$ are both $\mathcal F_T$-measurable and continuous functions of measurable random variables are measurable.
Now, we want to show that the infinite product is adapted. For this, we use an exercise from chapter $1$ of the same book, which states that for any $\epsilon > 0$, a cadlag function on a compact interval has only finitely many jumps of size greater than $\epsilon$.
With this in mind, we define for $\epsilon > 0$ the process $X^{T}_{\epsilon}=\prod_{s \leq T : |\Delta X_s| > \epsilon} (1 + \Delta X_s)\exp(-\Delta X_s)$. Writing $X^T = \prod_{s \leq T}(1+\Delta X_s)\exp(-\Delta X_s)$ for the full product, we have $X^T = \lim_{\epsilon \to 0} X^T_{\epsilon}$.
Why is $X_{\epsilon}^T$ measurable? Let me go into detail here: we define random variables $X^T_{\epsilon , n}$ as follows. First, throw out the null set on which the cadlag property is violated. For each $\omega$ in the remaining good set, the set of times $s \leq T$ at which $|\Delta X_s(\omega)| > \epsilon$ is finite, say $\{k_{1,\omega},\dots,k_{m_{\omega},\omega}\}$, sorted in ascending order. Define $X^T_{\epsilon,n}(\omega)$ as the product of $(1+\Delta X_s)\exp(-\Delta X_s)$ over the first $\min(n, m_{\omega})$ of these jump times.
Now $X^T_{\epsilon,n}$ is $\mathcal F_T$-measurable, and since pointwise limits of measurable functions are measurable, letting $n \to \infty$ shows that $X^T_{\epsilon}$ is measurable, and letting $\epsilon \to 0$ finishes the argument.
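As a numerical sanity check of this truncation scheme (a toy sketch: the jump sizes below are hypothetical and not derived from any particular semimartingale), one can verify that the $\epsilon$-truncated product converges to the full product as $\epsilon \to 0$:

```python
import math

# Hypothetical jump sizes on [0, T]; any square-summable collection would do.
jumps = [0.5, -0.3] + [(-1) ** k / (k * k) for k in range(2, 50)]

def truncated_product(jumps, eps):
    """Product of (1 + dx) * exp(-dx) over jumps with |dx| > eps,
    mirroring the definition of X^T_eps."""
    p = 1.0
    for dx in jumps:
        if abs(dx) > eps:
            p *= (1.0 + dx) * math.exp(-dx)
    return p

full = truncated_product(jumps, 0.0)   # every listed jump included
# The truncation error shrinks as eps -> 0:
errors = [abs(truncated_product(jumps, eps) - full) for eps in (0.1, 0.01, 0.001)]
```

The factors for small jumps are close to $1$ (of size $1 - O(\Delta X_s^2)$), which is why dropping them changes the product very little.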
The cadlag property was easier, and I had initially gotten it wrong. We have the process $M_t = \prod_{s \leq t} (1+\Delta X_s)\exp(- \Delta X_s)$, which we want to show is cadlag. I don't know why I was trying to show each term was cadlag as a process in $s$, because it most certainly is not.
In fact, we just argue from the definition. Let $t_n \downarrow t$. Then $\frac{M_{t_n}}{M_t} = \prod_{t < s \leq t_n} (1+\Delta X_s)\exp(- \Delta X_s)$. First, consider for each $\epsilon > 0$ the product $Y^{\epsilon}_n = \prod_{t < s \leq t_n : |\Delta X_s| > \epsilon} (1+\Delta X_s)\exp(- \Delta X_s)$. This product has only finitely many factors for each $\omega$, so as $n \to \infty$ it goes to $1$ for each $\omega$; that is, $\lim_{n \to \infty} Y^{\epsilon}_n = 1$ a.s., and hence $\lim_{\epsilon \to 0} \lim_{n \to \infty} Y^{\epsilon}_n = 1$ a.s. On the other hand, $\lim_{n \to \infty} \frac{M_{t_n}}{M_t} = \lim_{n \to \infty}\lim_{\epsilon \to 0} Y^{\epsilon}_n$.
Note that $(1+x)e^{-x} \leq 1$ for every $x$, and $(1+x)e^{-x} \in (0,1]$ when $|x| < 1$. Since only finitely many jumps exceed size $1$, for $n$ large enough every factor indexed by $s \in (t, t_n]$ lies in $(0,1]$, so adding factors to the product decreases it. This gives monotonicity of $Y^{\epsilon}_n$: as $\epsilon \to 0$ the number of factors increases, so $Y^{\epsilon}_n$ decreases; as $n$ increases the interval $(t, t_n]$ shrinks, the number of factors decreases, and the product increases. Finally, by the usual limit-exchange theorem for monotone sequences, both iterated limits exist and are equal.
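The inequality $(1+x)e^{-x} \leq 1$ follows from $\log(1+x) - x \leq 0$ for $x > -1$, while for $x \leq -1$ the factor is $\leq 0$; a small numerical sanity check (purely illustrative):

```python
import math

def factor(x):
    """The factor (1 + x) * exp(-x) appearing in the product."""
    return (1.0 + x) * math.exp(-x)

# (1+x)e^{-x} <= 1 for every real x, with equality exactly at x = 0.
grid = [k / 100.0 for k in range(-300, 301)]
assert all(factor(x) <= 1.0 for x in grid)
assert max(grid, key=factor) == 0.0
```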
I want you to see if you can argue left limit existence similarly.
We are trying to show that $Z$ is a semimartingale. Since the product of two semimartingales is a semimartingale, we just need to show that the two processes $\exp(X_t - \frac 12[X,X]_t^c)$ and the infinite product discussed above are both semimartingales.
The fact that the former is a semimartingale follows from the fact that $X_t - \frac 12[X,X]_t^c$ is a semimartingale, and $\exp$ is a $C^2$ function so the Ito formula applies.
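To spell this out, the Ito formula for a semimartingale $Y$ and $f \in C^2$, in the form with jumps (I am quoting it from memory, so check it against the statement in the book), reads
$$
f(Y_t) - f(Y_0) = \int_{0+}^{t} f'(Y_{s^-})\,dY_s + \frac 12 \int_{0+}^{t} f''(Y_{s^-})\,d[Y,Y]^c_s + \sum_{0 < s \leq t}\Big( f(Y_s) - f(Y_{s^-}) - f'(Y_{s^-})\Delta Y_s \Big),
$$
and each term on the right is a semimartingale (a stochastic integral or a finite variation process), so $f(Y)$ is one too. Applying this with $Y_t = X_t - \frac 12 [X,X]^c_t$ and $f = \exp$ gives the claim.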
That the latter is a semimartingale, and furthermore a quadratic pure jump one, is proved using Theorem $26$, for which we only need to show that the infinite product is of finite variation on compacts.
To show that the infinite product is of finite variation on compacts, the argument breaks the infinite product into two parts, one of which consists a.s. of only finitely many factors, while the other is $V_t$.
The finite part is of finite variation because it is a monotone bounded process: as $s$ increases the number of factors increases and each factor is $\leq 1$, and boundedness follows from the finiteness of the number of factors. That $V_t$ is of finite variation is explained in the text. Since the product of two processes of finite variation is again of finite variation, the claim follows.
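For completeness, the fact that a product of two bounded functions of finite variation is of finite variation follows pathwise from the telescoping identity $f(t)g(t) - f(s)g(s) = f(t)\big(g(t)-g(s)\big) + g(s)\big(f(t)-f(s)\big)$, which upon summing over any partition of $[0,T]$ gives
$$
\operatorname{TV}_{[0,T]}(fg) \;\leq\; \sup_{[0,T]}|f| \cdot \operatorname{TV}_{[0,T]}(g) \;+\; \sup_{[0,T]}|g| \cdot \operatorname{TV}_{[0,T]}(f).
$$
(Cadlag functions of finite variation are bounded on compacts, so the right side is finite.)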
A possible point of confusion: it may look as if $V_t$ being of finite variation directly implies that $Z_t$ is. This is not true. Rather, once $V_t$ is of finite variation, its product with the finite part (these two together make up the entire jump part) is of finite variation, hence a semimartingale by Theorem $26$, and the rest follows from the fact that a product of semimartingales is a semimartingale. That is why we do this entire maneuver.
Then for the Ito formula we write $Z_t = f(K_t,S_t) = e^{K_t}S_t$, where $K$ and $S$ are the two parts specified above. As mentioned earlier, Theorem $26$ tells us that $S$ is a quadratic pure jump process, i.e. $[S,S]^c = 0$.
Now, look at the proof of theorem $28$, which tells you that if $[X,X]^c = 0$ for a semimartingale $X$, then $[X,Y]^c = 0$ for any semimartingale $Y$. Thus, since $[S,S]^c = 0$ we get $[K,S]^c = 0$.
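If you prefer a direct argument instead of inspecting the proof of Theorem $28$: since $[X+\lambda Y, X+\lambda Y]^c$ is non-decreasing for every real $\lambda$, expanding by bilinearity and looking at the discriminant in $\lambda$ yields a Cauchy-Schwarz (Kunita-Watanabe type) bound on increments,
$$
\big|[X,Y]^c_t - [X,Y]^c_s\big| \;\leq\; \sqrt{[X,X]^c_t - [X,X]^c_s}\;\sqrt{[Y,Y]^c_t - [Y,Y]^c_s},
$$
so $[X,X]^c = 0$ forces $[X,Y]^c = 0$.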
Next, we look at the integration steps. Note that we want $Z_0 = 1$ so that's why we subtract $1$.
The first equality, $Z_t - 1 = \dots$, follows from the multidimensional Ito rule (Theorem $33$) with $f$, $K_t$ and $S_t$ as given. The key point is that the multidimensional rule contains the cross-variation terms $d[X^i,X^j]^c_s$, and since $[K,S]^c = [S,S]^c = 0$ (this is the only point in the entire computation where $[K,S]^c = 0$ is used), those particular terms vanish. That is why only one integral with a $d[K,K]^c_s$ term remains. The remaining terms follow directly from the formula and the usual rules of partial differentiation.
For the second equality, nothing changes except expressing $dK_s$ and $d[K,K]^c_s$ in terms of $dX_s$ and $d[X,X]^c_s$, each of which is a straightforward application of the Ito formula, along with the facts that $d[X,[X,X]^c] = 0$ and $d[[X,X]^c,[X,X]^c] = 0$ (which are proved in the edit below).
Now, the fact that $Z_{t} = Z_{t^-}(1+\Delta X_t)$: note that $Z_{t^-} = e^{K_{t^-}} S_{t^-}$, and therefore $\frac{Z_t}{Z_{t^-}} = e^{K_t - K_{t^-}}\frac{S_t}{S_{t^-}} = e^{\Delta K_t}\frac{S_t}{S_{t^-}}$ (the ratios can be taken, as the definitions indicate a.s. positivity of the processes involved).
Now, $\Delta K_t = \Delta X_t - \frac 12 \Delta [X,X]^c_t = \Delta X_t$, since the second term is zero by the continuity of $[X,X]^c$. Note that $S_{t^-} = \prod_{0 < s < t} (1+ \Delta X_s)\exp(- \Delta X_s)$ (strict inequality in the index), so the ratio $\frac{S_t}{S_{t^-}}$ is just the factor for $s = t$, namely $(1+ \Delta X_t)\exp(- \Delta X_t)$.
So the answer is just $e^{\Delta X_t} (1+\Delta X_t)\exp(-\Delta X_t) = (1 + \Delta X_t)$, as desired.
In a similar way, it is clear that $Z_{s^-}\Delta X_s = Z_{s^-} \Delta K_s$ (from the definition of $K$). The rest follows by simplification.
If there are still doubts, kindly ask for clarification. It seems that most of your confusions were resolved by referring back to previous theorems; perhaps the author should have done this more often.
Furthermore, to gain more intuition for the Doleans-Dade exponential, I suggest looking at the case where $X$ is a continuous semimartingale: there the jump product $S_t$ is identically $1$, so $Z_t$ is just $e^{K_t}$, which will help you understand the continuous situation better. The exponential martingale that comes out of the Doleans-Dade exponential is key to the Girsanov theorem.
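Concretely, in the continuous case the whole computation collapses to a two-line Ito argument: with $K_t = X_t - \frac 12 [X,X]_t$ and $Z_t = e^{K_t}$ we have $[K,K] = [X,X]$, so
$$
dZ_t = Z_t\,dK_t + \tfrac 12 Z_t\,d[K,K]_t = Z_t\,dX_t - \tfrac 12 Z_t\,d[X,X]_t + \tfrac 12 Z_t\,d[X,X]_t = Z_t\,dX_t,
$$
which is exactly the exponential equation $dZ_t = Z_{t^-}\,dX_t$ without any jump terms.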
EDIT: The claim that $[X,[X,X]^c] = 0$ follows from the fact that $[X,X]^c$ is continuous, adapted and of finite variation on compacts (being non-decreasing), hence a quadratic pure jump semimartingale. It then follows from Theorem $28$ that:
$$
[X,[X,X]^c]_t = \sum_{0 < s \leq t} \Delta X_s \, \Delta [X,X]^c_s = 0
$$
because the process $[X,X]^c$ is continuous and hence doesn't jump, so each $\Delta [X,X]^c_s$ term is zero. (Note: this feels like a contradiction: how can a process be "pure jump" and continuous? It is confusing but true; see George Lowther's blog post here.) The same argument gives $[[X,X]^c,[X,X]^c] = 0$.
Best Answer
The quadratic variation of a semimartingale is non-decreasing and right-continuous (or at least there exists such a version of it). A function is of bounded variation if and only if it is the difference of two non-decreasing functions, so $[X,X]_t$ is of bounded variation and right-continuous. Therefore its quadratic variation must be the sum of its squared jumps over the interval on which the quadratic variation is computed.
If you impose further that the semimartingale is continuous, then its quadratic variation is continuous as well. In that case the argument above gives a quadratic variation of $0$.
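To see why a continuous function $A$ of bounded variation has zero quadratic variation, note that along any partition $0 = t_0 < \dots < t_k = t$,
$$
\sum_i \big(A_{t_{i+1}} - A_{t_i}\big)^2 \;\leq\; \Big(\max_i \big|A_{t_{i+1}} - A_{t_i}\big|\Big)\,\operatorname{TV}_{[0,t]}(A),
$$
and the maximum tends to $0$ by uniform continuity as the mesh shrinks.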