If $X_1,X_2,X_3$ are mutually independent, may we deduce the distribution of $(X_1+X_2,X_2+X_3)$ from that of $(X_1,X_2)$ and $(X_2,X_3)$?

independence, probability-theory, stochastic-processes

Consider the following situation: $X_1,X_2,X_3$ are mutually independent, and we know the distributions of $(X_1,X_2)$ and $(X_2,X_3)$, and thus of course also the distributions of the individual random variables $X_1$, $X_2$ and $X_3$.

May we derive the distribution of $(X_1+X_2,X_2+X_3)$ from this?

It is known that, since $X_1$ and $X_2$ are independent, the distribution of $X_1+X_2$ is just the convolution of the distributions of the two summands.
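Explicitly, writing $\mu_i$ for the distribution of $X_i$,
$$\mathbb{P}[X_1+X_2\in A]=(\mu_1*\mu_2)(A)=\int_{\mathbb{R}}\mu_1(A-y)\,\mu_2(\mathrm{d}y),\qquad A\in\mathcal{B}(\mathbb{R}).$$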
My first thought was then the following: the distribution of a vector-valued random variable is the product of the distributions of its coordinates if the coordinates are mutually independent. Thus we would know the distribution of $(X_1+X_2,X_2+X_3)$ if it were true that $X_1+X_2$ and $X_2+X_3$ are independent.
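(That is, for independent $Y$ and $Z$ one has $\mathbb{P}[(Y,Z)\in A\times B]=\mathbb{P}[Y\in A]\,\mathbb{P}[Z\in B]$, i.e. the joint distribution is the product measure of the marginals.)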
But this is not true in general; see, e.g., here.

Is it true that $(X_1,X_2)$ and $(X_2,X_3)$ are independent, so that the distribution of their (vector) sum is the convolution of their distributions? Consider then $P[(X_1,X_2)\in A,\,(X_2,X_3)\in B]$ for $A,B\in\mathcal{B}(\mathbb{R}^2)$. Does it generally "factor"? I doubt this is true…
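One quick way to confirm the doubt: take $A=\mathbb{R}\times C$ and $B=C\times\mathbb{R}$ for some $C\in\mathcal{B}(\mathbb{R})$. Then $P[(X_1,X_2)\in A,\,(X_2,X_3)\in B]=P[X_2\in C]$, while the factored form would be $P[X_2\in C]^2$, so factoring fails whenever $0<P[X_2\in C]<1$, i.e. unless $X_2$ is essentially degenerate.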

The context is that I have a stochastic process $\{X_t\}$ with independent increments, and I have the distributions of $X_0$ and of any vector of increments $(X_{t_1}-X_0,\dots,X_{t_n}-X_{t_{n-1}})$ for any $t_1<\dots<t_n$. I would like to get the distribution of $(X_{t_1},\dots,X_{t_n})$, and since $(X_{t_1},\dots,X_{t_n})=(X_{t_1}-X_0,\dots,X_{t_n}-X_{t_{n-1}})+(X_0,\dots,X_{t_{n-1}}-X_{t_{n-2}})$ [NOT CORRECT, SEE EDIT], I get the situation described above.

Thanks in advance!

EDIT: Of course it's not correct that $(X_{t_j}-X_{t_{j-1}})+(X_{t_{j-1}}-X_{t_{j-2}})=X_{t_j}$. Instead we add to $(X_{t_j}-X_{t_{j-1}})$ all the increments to its left: $X_{t_j}=(X_{t_j}-X_{t_{j-1}})+\sum_{k=1}^{j-1}(X_{t_k}-X_{t_{k-1}})$, if we assume $X_0=0$ (with the convention $t_0=0$). But the situation remains the same.
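A compact way to write the corrected relation (still assuming $X_0=0$ and $t_0=0$): the vector of values is a fixed linear image of the vector of increments,
$$\begin{pmatrix}X_{t_1}\\ X_{t_2}\\ \vdots\\ X_{t_n}\end{pmatrix}=\begin{pmatrix}1&0&\cdots&0\\ 1&1&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 1&1&\cdots&1\end{pmatrix}\begin{pmatrix}X_{t_1}-X_{t_0}\\ X_{t_2}-X_{t_1}\\ \vdots\\ X_{t_n}-X_{t_{n-1}}\end{pmatrix},$$
so the joint distribution of $(X_{t_1},\dots,X_{t_n})$ is the push-forward of the (product) law of the independent increments under this lower-triangular matrix.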

Best Answer

By simply marginalising, you can derive the distribution $\mu_j$ of each $X_j$ separately from your input. By independence, this lets you recreate the distribution of $(X_1,X_2,X_3)$: it is simply $\nu:=\mu_1\otimes\mu_2\otimes\mu_3$. Once you have this distribution, you can certainly recreate the distribution of $(X_1+X_2,X_2+X_3)$ by applying the map $(f,g)$, where $f(x_1,x_2,x_3)=x_1+x_2$ and $g(x_1,x_2,x_3)=x_2+x_3$.
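To make the construction concrete, here is a minimal Monte Carlo sketch (the three marginals below are arbitrary choices for illustration, and NumPy is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6

# Draw X1, X2, X3 independently; the marginals mu_1, mu_2, mu_3
# are arbitrary illustrative choices.
x1 = rng.exponential(1.0, n)
x2 = rng.normal(0.0, 1.0, n)
x3 = rng.uniform(-1.0, 1.0, n)

# Push nu = mu_1 (x) mu_2 (x) mu_3 forward under the map (f, g).
u = x1 + x2  # f(x1, x2, x3) = x1 + x2
v = x2 + x3  # g(x1, x2, x3) = x2 + x3

# The samples (u, v) approximate the target law (f, g)(nu).
# The shared summand X2 makes the coordinates dependent:
# Cov(U, V) = Var(X2) = 1 here, so this law is NOT a product measure.
print(np.cov(u, v)[0, 1])  # approximately 1.0
```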

Now, what the measure $(f,g)(\nu)$ (the push-forward of $\nu$ under this map) looks like in general will vary quite a bit, depending on the exact nature of your increments. In the classical setup of Brownian motion, you need to appeal to the stability of Gaussian distributions under linear maps to say something intelligent.
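For instance (purely as an illustration, assuming independent $X_i\sim\mathcal{N}(0,\sigma_i^2)$), the push-forward is again Gaussian:
$$(X_1+X_2,\;X_2+X_3)\sim\mathcal{N}\!\left(0,\begin{pmatrix}\sigma_1^2+\sigma_2^2&\sigma_2^2\\ \sigma_2^2&\sigma_2^2+\sigma_3^2\end{pmatrix}\right),$$
with the off-diagonal entry $\sigma_2^2=\operatorname{Var}(X_2)$ coming from the shared summand.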
