Let $(X_n)_{n\in\mathbb N_0}$ be a stationary time-homogeneous Markov chain with $X_0\sim f\mu$ for some measure space $(E,\mathcal E,\mu)$ and an $\mathcal E$-measurable $f:E\to[0,\infty)$ with $$\int f\:{\rm d}\mu=1.$$ We know that the evolution of $(X_n)_{n\in\mathbb N_0}$ is uniquely determined by its transition kernel, i.e. a Markov kernel $\kappa$ on $(E,\mathcal E)$ with $$\operatorname P\left[X_1\in B\mid X_0\right]=\kappa(X_0,B)\;\;\;\text{almost surely for all }B\in\mathcal E.\tag1$$ Since we know that $X_0,X_1\sim f\mu$, can we determine $\kappa$ explicitly?
Determine the transition kernel of a stationary time-homogeneous Markov chain
markov-chains, markov-process, probability-theory
Related Solutions
Since $X_{\tau} = Y_{\tau}$ on $\{\tau<\infty\}$, it holds that
$$\bar{Y}_n = 1_{\{\tau \geq n\}} Y_n + 1_{\{\tau \leq n-1\}} X_n.$$
Using that $\{\tau \leq n-1\} \in \mathcal{F}_{n-1}^Z$, we find from the pull-out property of conditional expectation that
$$\mathbb{E}(f(\bar{Y}_n) \mid \mathcal{F}_{n-1}^Z) = 1_{\{\tau \geq n\}} \mathbb{E}(f(Y_n) \mid \mathcal{F}_{n-1}^Z) + 1_{\{\tau \leq n-1\}} \mathbb{E}(f(X_n) \mid \mathcal{F}_{n-1}^Z)$$
for any bounded measurable function $f$. Since $X$ and $Y$ are, by assumption, independent it follows (see the lemma below) that
$$\mathbb{E}(f(\bar{Y}_n) \mid \mathcal{F}_{n-1}^Z) = 1_{\{\tau \geq n\}} \mathbb{E}(f(Y_n) \mid \mathcal{F}_{n-1}^Y) + 1_{\{\tau \leq n-1\}} \mathbb{E}(f(X_n) \mid \mathcal{F}_{n-1}^X).$$
By assumption, $X$ and $Y$ are both Markov chains with transition kernel $\kappa$, and so
$$\begin{align*} \mathbb{E}(f(\bar{Y}_n) \mid \mathcal{F}_{n-1}^Z) &= 1_{\{\tau \geq n\}} \int f(y) \, \kappa(Y_{n-1},dy) + 1_{\{\tau \leq n-1\}} \int f(y) \, \kappa(X_{n-1},dy) \\ &= \int f(y) \, \kappa(\bar{Y}_{n-1},dy). \end{align*}$$
Since $n \in \mathbb{N}$ is arbitrary, this shows that $(\bar{Y}_n)_{n \in \mathbb{N}}$ is a Markov chain with transition kernel $\kappa$.
Lemma Let $Z \in L^1(\mathbb{P})$ be a random variable. If $\mathcal{G},\mathcal{H}$ are $\sigma$-algebras such that $\mathcal{H}$ is independent of $\sigma(\sigma(Z),\mathcal{G})$, then $$\mathbb{E}(Z \mid \sigma(\mathcal{G},\mathcal{H})) = \mathbb{E}(Z \mid \mathcal{G}).$$
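The conclusion above can be checked numerically. Below is a minimal Monte Carlo sketch, not part of the original argument: the state space, the kernel entries, and the chain length are all assumptions chosen for illustration. Two independent chains $X$ and $Y$ with the same kernel are run, $\bar{Y}$ follows $Y$ until the meeting time $\tau$ and $X$ afterwards, and the empirical transition frequencies of $\bar{Y}$ are compared against the kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Common transition kernel on the two-point state space {0, 1}
# (the entries are illustrative assumptions, not from the original text).
kappa = np.array([[0.7, 0.3],
                  [0.4, 0.6]])

def step(state):
    """One step of a chain with kernel kappa."""
    return rng.choice(2, p=kappa[state])

# Independent chains X (started at 0) and Y (started at 1);
# Ybar follows Y up to the meeting time tau and X strictly after it.
N = 200_000
x, y = 0, 1
coupled = False  # becomes True once the chains have met
ybar = np.empty(N, dtype=int)
for n in range(N):
    ybar[n] = x if coupled else y
    if x == y:
        coupled = True
    x, y = step(x), step(y)

# Empirical transition frequencies of Ybar; these should be close
# to kappa if the glued chain has the same kernel.
counts = np.zeros((2, 2))
for a, b in zip(ybar[:-1], ybar[1:]):
    counts[a, b] += 1
est = counts / counts.sum(axis=1, keepdims=True)
print(np.round(est, 2))
```

With $2\times 10^5$ steps the empirical frequencies agree with $\kappa$ to within a few multiples of the Monte Carlo standard error.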
If I understand correctly, then outside the support of the distribution of $X_n$, we cannot say anything about $P_{n+1}$, and so uniqueness may not hold there.
We can see this with a discrete example. Suppose $E = \{e_1, e_2\}$ and $(X_n)_{n\ge 0}$ is a time-homogeneous Markov chain in which $e_1$ always transitions to $e_2$, and $e_2$ always stays put. Then $X_n = e_2$ for all $n>0$. So for $n>1$ we can change $P_n$ to any kernel in which $e_2$ still always transitions to $e_2$, while the transitions out of $e_1$ are arbitrary, and this does not affect the distribution of $X_n$ in any way.
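The two-state example can be verified in a few lines. This is a minimal numerical sketch: the state indexing (0 for $e_1$, 1 for $e_2$) and the particular modified kernel `Q` are assumptions chosen for illustration.

```python
import numpy as np

# States: index 0 = e_1, index 1 = e_2.
# Kernel P: e_1 always moves to e_2, e_2 always stays put.
P = np.array([[0.0, 1.0],
              [0.0, 1.0]])
# Kernel Q: agrees with P on e_2 but has an arbitrary row for e_1.
Q = np.array([[0.5, 0.5],
              [0.0, 1.0]])

mu0 = np.array([1.0, 0.0])   # X_0 = e_1 with probability 1
mu1 = mu0 @ P                # distribution of X_1: concentrated on e_2

# From time 1 onward the chain only ever sees e_2, so applying
# P or Q to mu1 gives the same distribution at every later time.
print(mu1 @ P)   # [0. 1.]
print(mu1 @ Q)   # [0. 1.]
```

Since `mu1` puts no mass on $e_1$, the row of the kernel at $e_1$ is invisible to all marginal distributions from time 1 on.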
Best Answer
No, you cannot determine a kernel just from knowledge of the distributions of $X_0$ and $X_1$. You must know the joint distribution of $(X_0,X_1)$ to determine the kernel.
For example, consider $X_0$ and $X_1$ uniformly distributed on $\{1,\ldots,n\}$ and let $\sigma$ be any deterministic permutation of $\{1,\ldots,n\}$. Then $\sigma$ induces a kernel sending $i$ to the Dirac measure at $\sigma(i)$, and this kernel sends $X_0$ to $X_1$.
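The permutation example can be made concrete. The following sketch (with $n=4$ and a cyclic shift as the permutation, both assumptions for illustration) shows two distinct kernels that both map the uniform law to itself, so the marginals of $X_0$ and $X_1$ alone cannot distinguish them.

```python
import numpy as np

# Uniform distribution on {1, ..., n}, here with n = 4.
n = 4
mu = np.full(n, 1.0 / n)

# Kernel K1: identity (each i stays at i with probability 1).
K1 = np.eye(n)
# Kernel K2: Dirac kernel of the cyclic permutation sigma(i) = i + 1 mod n,
# i.e. K2[i, (i+1) % n] = 1.
K2 = np.roll(np.eye(n), 1, axis=1)

# Both kernels preserve the uniform distribution, yet they differ.
print(mu @ K1)   # [0.25 0.25 0.25 0.25]
print(mu @ K2)   # [0.25 0.25 0.25 0.25]
```

Any doubly stochastic matrix preserves the uniform law, so there are in fact many more kernels consistent with these two marginals.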
If you want a continuous version of this example, simply consider any permuton (i.e. a continuous analog of a random permutation).