Both (1) and (1b) are correct. The OP is right that (in this model) there might be a changepoint at $t+1$, and that $x_{t+1}$ depends on whether there is one. This does not cause any problem for (1), because the possible values of $r_{t+1}$ are fully "covered" by $P(x_{t+1} \mid r_t, x_{1:t})$: this notation means the conditional distribution of $x_{t+1}$ given $(r_t, x_{1:t})$, and that conditional distribution averages over "everything else", including $r_{t+1}$, conditional on $(r_t, x_{1:t})$. Just as one could write, say, $P(x_{t+1000} \mid x_t)$, which would take into account all possible configurations of changepoints, as well as all values of the $x_i$'s, occurring between $t$ and $t+1000$.
In the remainder, I first derive (1) and then (1b) based on (1).
Derivation of (1)
For any random variables $A,B,C$, we have
\begin{equation}
P(A \mid B) = \sum_c P(A \mid B, C=c)\,P(C=c \mid B),
\end{equation}
as long as $C$ is discrete (otherwise the sum needs to be replaced by an integral). Applying this to $x_{t+1},x_{1:t},r_t$:
\begin{equation}
P(x_{t+1} \mid x_{1:t}) = \sum_{r_t} P(x_{t+1} \mid r_t, x_{1:t})\,P(r_t \mid x_{1:t}),
\end{equation}
which holds no matter what the dependencies between $r_t$, $x_{1:t}$, $x_{t+1}$ are, that is, no model assumptions have yet been used. In the present model, $x_{t+1}$ given $r_t,x^{(r)}_t$ is assumed* to be conditionally independent of the values of $x$ from the runs before $x^{(r)}_t$. This implies $P(x_{t+1} \mid r_t, x_{1:t}) = P(x_{t+1} \mid r_t, x^{(r)}_t)$. Substituting this into the previous equation, we get
\begin{equation}
P(x_{t+1} \mid x_{1:t}) = \sum_{r_t} P(x_{t+1} \mid r_t, x^{(r)}_t)\,P(r_t \mid x_{1:t}), \qquad \qquad \qquad (1)
\end{equation}
which is (1) in OP.
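The marginalization identity used above can be checked numerically. The sketch below (my own illustration, not from the paper) builds an arbitrary joint distribution over three discrete variables $(A, B, C)$ and verifies that $P(A \mid B=b) = \sum_c P(A \mid B=b, C=c)\,P(C=c \mid B=b)$:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical joint distribution over (A, B, C) with 3, 2, and 4 values;
# any nonnegative table normalized to sum to 1 works.
joint = rng.random((3, 2, 4))
joint /= joint.sum()

b = 0  # condition on a particular value of B

# Left-hand side: P(A | B=b), from the (A, B) marginal.
p_AB = joint.sum(axis=2)                      # P(A, B)
lhs = p_AB[:, b] / p_AB[:, b].sum()

# Right-hand side: sum_c P(A | B=b, C=c) P(C=c | B=b).
p_BC = joint.sum(axis=0)                      # P(B, C)
rhs = np.zeros(3)
for c in range(4):
    p_A_given_bc = joint[:, b, c] / joint[:, b, c].sum()
    p_c_given_b = p_BC[b, c] / p_BC[b, :].sum()
    rhs += p_A_given_bc * p_c_given_b

print(np.allclose(lhs, rhs))
```

No model assumptions enter here, matching the claim in the text that the decomposition holds for any dependence structure.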
Derivation of (1b)
Let us consider the decomposition of $P(x_{t+1} \mid r_t, x^{(r)}_t)$ over possible values of $r_{t+1}$:
\begin{equation}
P(x_{t+1} \mid r_t, x^{(r)}_t) = \sum_{r_{t+1}} P(x_{t+1} \mid r_{t+1}, r_t, x^{(r)}_t)P(r_{t+1} \mid r_t, x^{(r)}_t).
\end{equation}
Since it is assumed* that whether a changepoint occurs at $t+1$ (between $x_t$ and $x_{t+1}$) does not depend on the history of $x$, we have $P(r_{t+1} \mid r_t, x^{(r)}_t) = P(r_{t+1} \mid r_t)$. Furthermore, since $r_{t+1}$ determines whether $x_{t+1}$ belongs to the same run as $x_t$, we have $P(x_{t+1} \mid r_{t+1}, r_t, x^{(r)}_t)=P(x_{t+1} \mid r_{t+1}, x^{(r)}_t)$. Substituting these two simplifications into the factorization above, we get
\begin{equation}
P(x_{t+1} \mid r_t, x^{(r)}_t) = \sum_{r_{t+1}} P(x_{t+1} \mid r_{t+1}, x^{(r)}_t)P(r_{t+1} \mid r_t).
\end{equation}
Substituting this into (1), we get
\begin{equation}
P(x_{t+1} \mid x_{1:t}) = \sum_{r_t} \left(\sum_{r_{t+1}} P(x_{t+1} \mid r_{t+1}, x^{(r)}_t)P(r_{t+1} \mid r_t)\right)\,P(r_t \mid x_{1:t}), \qquad (1b)
\end{equation}
which is OP's (1b).
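To make (1b) concrete, here is a toy one-step predictive computed exactly as in that equation. Everything model-specific is a made-up placeholder: a constant hazard $H = P(r_{t+1}=0 \mid r_t)$, a uniform run-length posterior $P(r_t \mid x_{1:t})$, and Gaussian run predictives with invented parameters. Given $r_t$, the only possible values of $r_{t+1}$ are $0$ (changepoint) and $r_t + 1$ (run continues), so the inner sum has two terms:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

H = 0.1                                   # hypothetical constant hazard
R = 5                                     # truncate run lengths for the sketch
post_rt = np.full(R + 1, 1.0 / (R + 1))   # placeholder P(r_t | x_{1:t})

def pred_density(x, r_next):
    """Hypothetical P(x_{t+1} | r_{t+1}, x^(r)_t): a fresh run (r_{t+1} = 0)
    uses the prior predictive; a continued run uses made-up run parameters."""
    if r_next == 0:
        return normal_pdf(x, 0.0, 3.0)
    return normal_pdf(x, 1.0 + 0.1 * r_next, 1.0)

def predictive(x):
    """P(x_{t+1} | x_{1:t}) via (1b): outer sum over r_t, inner over r_{t+1}."""
    total = 0.0
    for r_t in range(R + 1):
        inner = H * pred_density(x, 0) + (1 - H) * pred_density(x, r_t + 1)
        total += inner * post_rt[r_t]
    return total

print(predictive(0.8))
```

Since each inner sum is a convex combination of densities and the outer weights sum to one, the result is itself a normalized density in $x_{t+1}$, as it should be.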
* Remark on the model's conditional independence assumptions
Based on a quick browse of the paper, I would personally prefer the conditional independence assumptions to be stated more explicitly somewhere, but I take the intention to be that $r$ is Markovian and that the $x$'s associated with different runs are independent (given the runs).
Best Answer
This is based on the assumption that $x$ is conditionally independent of $D$, given $\theta$. This is a reasonable assumption in many cases, because all it says is that the training and testing data ($D$ and $x$, respectively) are independently generated from the same set of unknown parameters $\theta$. Given this independence assumption, $p(x \mid \theta, D) = p(x \mid \theta)$, so the more general form you expected, $p(x \mid D) = \int p(x \mid \theta, D)\,p(\theta \mid D)\,d\theta$, reduces to $p(x \mid D) = \int p(x \mid \theta)\,p(\theta \mid D)\,d\theta$, and the $D$ drops out of the first factor.
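A standard conjugate example (my own toy illustration, not from the question) makes this concrete. With Bernoulli data and a Beta prior on $\theta$, the posterior predictive $p(x=1 \mid D)$ averages $p(x=1 \mid \theta) = \theta$ over the Beta posterior, giving the closed form $a/(a+b)$; the sketch below also computes the same average numerically:

```python
import numpy as np

a0, b0 = 1.0, 1.0                 # Beta(1, 1) prior on theta
D = [1, 1, 0, 1, 0, 1, 1]         # made-up training data
a = a0 + sum(D)                   # posterior is Beta(a, b)
b = b0 + len(D) - sum(D)

# Closed form: P(x = 1 | D) = E[theta | D] = a / (a + b).
closed = a / (a + b)

# Same quantity by numerically averaging p(x = 1 | theta) = theta
# over the Beta(a, b) posterior density.
thetas = np.linspace(0.0, 1.0, 100001)
d = thetas[1] - thetas[0]
beta_pdf = thetas ** (a - 1) * (1 - thetas) ** (b - 1)
beta_pdf /= beta_pdf.sum() * d    # normalize numerically
numeric = (thetas * beta_pdf).sum() * d

print(closed, numeric)
```

The point is that once $\theta$ is fixed, $D$ carries no further information about $x$, so the predictive is entirely determined by the posterior over $\theta$.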
In your second example, a similar independence assumption seems to be applied, but now (explicitly) across time. Such assumptions may be stated explicitly elsewhere in the text, or they may be implicitly clear to anyone sufficiently familiar with the context of the problem (although that does not necessarily mean that in your particular examples, which I am not familiar with, the authors were right to assume this familiarity).