I have posted a related (but broader) question and answer here which may shed some more light on this matter, giving the full context of the model setup for a Bayesian IID model.
You can find a good primer on the Bayesian interpretation of these types of models in Bernardo and Smith (1994), and you can find a more detailed discussion of these particular interpretive issues in O'Neill (2009). A starting point for the operational meaning of the parameter $\theta$ is obtained from the strong law of large numbers, which in this context says that:
$$\mathbb{P} \Bigg( \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n X_i = \theta \Bigg) = 1.$$
This gets us part-way to a full interpretation of the parameter, since it shows almost sure equivalence with the Cesàro limit of the observable sequence. Unfortunately, the Cesàro limit in this probability statement does not always exist (though it exists almost surely within the IID model). Consequently, using the approach set out in O'Neill (2009), you can consider $\theta$ to be the Banach limit of the sequence $X_1,X_2,X_3,...$, which always exists and is equivalent to the Cesàro limit when the latter exists. So, we have the following useful parameter interpretation as an operationally defined function of the observable sequence.
Definition: The parameter $\theta$ is the Banach limit of the sequence $\mathbf{X} = (X_1,X_2,X_3,...)$.
(Alternative definitions that define the parameter by reference to an underlying sigma-field can also be used; these are essentially just different ways to do the same thing.) This interpretation means that the parameter is a function of the observable sequence, so once that sequence is given the parameter is fixed. Consequently, it is not accurate to say that $\theta$ is "unrealised" --- if the sequence is well-defined then $\theta$ must have a value, albeit one that is unobserved (unless we observe the whole sequence). The sampling probability of interest is then given by the representation theorem of de Finetti.
Representation theorem (adaptation of de Finetti): If $\mathbf{X}$ is an exchangeable sequence of binary values (and with $\theta$ defined as above), it follows that the elements of $\mathbf{X}|\theta$ are independent with sampling distribution $X_i|\theta \sim \text{IID Bern}(\theta)$ so that for all $k \in \mathbb{N}$ we have:
$$\mathbb{P}(\mathbf{X}_k=\mathbf{x}_k | \theta = c) = \prod_{i=1}^k c^{x_i} (1-c)^{1-x_i}.$$
This particular version of the theorem is adapted from O'Neill (2009), which is itself a minor re-framing of de Finetti's famous representation theorem.
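The product form in the theorem is simple to evaluate directly. Here is a minimal sketch in Python (the function name `bernoulli_likelihood` is my own) that computes $\prod_{i=1}^k c^{x_i} (1-c)^{1-x_i}$ for a given binary vector and candidate value $c$:

```python
def bernoulli_likelihood(x, c):
    """Sampling probability P(X_k = x_k | theta = c) for a binary
    vector x, using the product form from the representation theorem."""
    p = 1.0
    for xi in x:
        p *= c**xi * (1 - c)**(1 - xi)
    return p

# Three positives and one negative, evaluated at c = 0.5:
print(bernoulli_likelihood([1, 1, 1, 0], 0.5))  # 0.5^4 = 0.0625
```

Note that each factor reduces to $c$ when $x_i = 1$ and to $1-c$ when $x_i = 0$, so the product depends on the data only through the number of positive indicators, which is the sufficiency property underlying the binomial likelihood.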
Now, within this IID model, the specific probability $\mathbb{P}(X_i=1|\theta=c) = c$ is just the sampling probability of a positive outcome for the value $X_i$. This represents the probability of a single positive indicator conditional on the Banach limit of the sequence of indicator random variables being equal to $c$.
Since this is an area of interest to you, I strongly recommend you read O'Neill (2009) to see the broader approach used here and how it is contrasted with the frequentist approach. That paper asks some similar questions to what you are asking here, so I think it might assist you in understanding how these things can be framed in an operational manner within the Bayesian paradigm.
> How do we justify blending two interpretations of probability in Bayes theorem as if they are equivalent?
I presume here that you are referring to the fact that there are certain limiting correspondences analogous to the "frequentist interpretation" of probability at play in this situation. Bayesians generally take an epistemic interpretation of the meaning of probability (what Bernardo and Smith call the "subjective interpretation"). Consequently, all probability statements are interpreted as beliefs about uncertainty on the part of the analyst. Nevertheless, Bayesians also accept that the law of large numbers (LLN) is valid and applies to their models under appropriate conditions, so it may be the case that the epistemic probability of an event is equivalent to the limiting frequency of a sequence.
In the present case, the definition of the parameter $\theta$ is the Banach limit of the sequence of observable values, so it necessarily corresponds to a limiting frequency. Probability statements about $\theta$ are therefore also probability statements about a limiting frequency for the observable sequence of values. There is no contradiction in this.
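This limiting-frequency correspondence is easy to check by simulation. The sketch below (with an arbitrarily chosen illustrative value $\theta = 0.3$) generates a long Bernoulli sequence and confirms that its running average, the finite-$n$ version of the Cesàro limit above, settles near $\theta$:

```python
import random

random.seed(42)

theta = 0.3    # illustrative value of the limiting frequency
n = 100_000

# Finite-n Cesaro average (1/n) * sum of X_i for an IID Bern(theta)
# sequence; the strong law of large numbers says this converges
# to theta almost surely as n grows.
total = sum(1 if random.random() < theta else 0 for _ in range(n))
sample_mean = total / n
print(sample_mean)
```

With $n = 100{,}000$ draws the standard deviation of the sample mean is about $0.0014$, so the printed value should lie very close to $0.3$.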
Best Answer
You can find a good primer on the Bayesian interpretation of these types of models in Bernardo and Smith (1994). In that work they take an "operational" approach where model parameters are interpreted as limiting quantities that are functions of the observable sequence. You can also find a more detailed discussion of these particular interpretive issues in O'Neill (2009), which extends the operational interpretation to ensure that the parameter exists and corresponds to a limiting quantity under all possible sequence values.
Before getting to the interpretational side, it is important to note where the IID model comes from in Bayesian analysis. Given an infinite sequence $\mathbf{x}$ we can define the limiting empirical distribution $F_\mathbf{x}: \mathbb{R} \rightarrow [0,1]$ as the Banach limit that extends the following Cesàro limit:
$$F_\mathbf{x}(x) \equiv \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{i=1}^n \mathbb{I}(x_i \leqslant x) \quad \quad \quad \quad \quad \text{for all } x \in \mathbb{R}.$$
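For a finite sample, the analogous quantity is the empirical distribution function, and in the IID case it converges to the true distribution function (this is the Glivenko-Cantelli theorem). A brief sketch of the finite-$n$ object (the function name `empirical_cdf` is my own):

```python
import random

random.seed(0)
sample = [random.gauss(0, 1) for _ in range(50_000)]

def empirical_cdf(xs, x):
    """Finite-n version of F_x: the fraction of observations <= x."""
    return sum(1 for xi in xs if xi <= x) / len(xs)

# For standard normal draws the limiting distribution function is
# the normal CDF Phi, and Phi(0) = 0.5.
print(empirical_cdf(sample, 0.0))
```

The limiting empirical distribution $F_\mathbf{x}$ is what this step function tends to as $n \rightarrow \infty$ (extended by the Banach limit where the ordinary limit fails).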
Now, an important result connecting the probability of the observable values to the underlying model parameters is the celebrated "representation theorem" from de Finetti (later extended by Hewitt and Savage). The version here is adapted from O'Neill (2009) (p. 242, Theorem 1), and it shows the decomposition of the marginal distribution of the sample vector $\mathbf{x}_n = (x_1,...,x_n)$. As with all versions of the theorem, the exchangeability of the underlying sequence leads to the IID model and the parameter-observation connection.
This theorem essentially says that if the observable sequence $\mathbf{x}$ is exchangeable, then we have the following IID model:
$$\begin{align} x_1,...,x_n | F_\mathbf{x} &\sim \text{IID } F_\mathbf{x}, \\[6pt] F_\mathbf{x} &\sim \pi. \\[6pt] \end{align}$$
Now, in many applications, we will make the additional assumption that the observations obey some other invariance constraints that lead us to a particular parametric family of distributions. In this case, it may be possible to index the empirical distribution $F_\mathbf{x}$ by a parameter vector $\theta \in \Theta$ (i.e., we have a mapping $F_\mathbf{x} \mapsto \theta$ that defines the index and the model is restricted to empirical distributions corresponding to a value of $\theta$). In this case, we would write the IID model as:$^\dagger$
$$\begin{align} x_1,...,x_n | \theta &\sim \text{IID } f_\theta, \\[6pt] \theta &\sim \pi. \\[6pt] \end{align}$$
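As a concrete instance of this parametric setup, take the Bernoulli family for $f_\theta$ with, as a purely illustrative choice of $\pi$, a Beta$(a, b)$ prior on $\theta$; conjugacy then gives the posterior Beta$(a + \sum x_i,\ b + n - \sum x_i)$ in closed form. A minimal sketch (the helper name is my own):

```python
def beta_bernoulli_posterior(x, a=1.0, b=1.0):
    """Posterior Beta parameters for theta given binary data x,
    assuming an illustrative Beta(a, b) prior and IID Bern(theta)
    sampling (the conjugate update)."""
    s = sum(x)
    return a + s, b + len(x) - s

a_post, b_post = beta_bernoulli_posterior([1, 0, 1, 1, 0, 1])
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, posterior_mean)  # 5.0 3.0 0.625
```

Here the posterior depends on the data only through the count of positive indicators, mirroring the sufficiency noted in the product-form likelihood.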
So, as you can see, the setup for the Bayesian IID model occurs when we have an exchangeable sequence of observable values, and we then see that the model "parameter" is an index to the empirical distribution for the observable sequence (which can be defined through the Banach limit extending the above Cesàro limit). This "index" is a function of the empirical distribution, which is in turn a function of the observable sequence, so there exist mappings $\mathbf{x} \mapsto F_\mathbf{x} \mapsto \theta$.
Interpretation of the parameters: In the above setup, there exists a mapping $\mathbf{x} \mapsto \theta$, and so it is natural to take this as the "definition" of the parameter $\theta$. Under this approach, the parameter $\theta$ has an "operational" meaning as a quantity that is fully determined by the observable sequence (i.e., it is a limiting quantity on the observed sample as $n \rightarrow \infty$). Note that this interpretation relates closely to the strong law of large numbers.
$^\dagger$ I am using a slight abuse of notation here by taking $\pi$ as a generic reference to a prior distribution for whatever parameter is in use. Note that the prior for $\theta$ would be a simple mapping of the prior for $F_\mathbf{x}$.