Show existence of a probability distribution

probability, probability distributions, probability theory, random variables

Let us fix any three numbers in $[0,1]$ summing to $1$; I denote them by $p_1, p_2, p_3$.

Could you help me show that, for every vector of reals $U\equiv (U_0, U_1, U_2)\in \mathbb{R}^3$, there exists a random vector $\epsilon\equiv (\epsilon_0, \epsilon_1, \epsilon_2)$, continuously distributed on $\mathbb{R}^3$, such that the following equalities hold:
$$
\begin{cases}
p_1=Pr(\epsilon_1-\epsilon_0\geq U_0-U_1, \epsilon_1-\epsilon_2\geq U_2-U_1)\\
p_2=Pr(\epsilon_2-\epsilon_0\geq U_0-U_2, \epsilon_1-\epsilon_2\leq U_2-U_1)\\
p_3=Pr(\epsilon_1-\epsilon_0\leq U_0-U_1, \epsilon_2-\epsilon_0\leq U_0-U_2)
\end{cases}
$$

(Note that the distribution of $\epsilon$ may vary with the value of $U$.)


This question is related to a problem of identification in econometrics.

I have tried to prove it by construction as follows (I also highlight where I'm stuck).

Step 1: Consider any $U$.

Step 2: Suppose I'm able to show that there always exists a random vector
$$
\begin{bmatrix}
\eta_A\\
\eta_B\\
\eta_C\\
\end{bmatrix}\sim \mathcal{N}(\mu_U, \Sigma_U)
$$

such that
$$
\begin{cases}
p_1=Pr(\eta_A\geq U_0-U_1, \eta_B\geq U_2-U_1; \mu_U, \Sigma_U)\\
p_2=Pr(\eta_C\geq U_0-U_2, \eta_B\geq U_1-U_2; \mu_U, \Sigma_U)\\
p_3=Pr(\eta_A\geq U_1-U_0, \eta_C\geq U_2-U_0; \mu_U, \Sigma_U)
\end{cases}
$$

(I'm actually not able to show that).
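
(To make the target of Step 2 concrete, here is a minimal numerical sketch of how one could evaluate the right-hand sides of this system for a *candidate* pair $(\mu_U, \Sigma_U)$, using `scipy.stats.multivariate_normal`; the specific values of $U$, $\mu_U$ and $\Sigma_U$ below are hypothetical placeholders. Whether some choice makes the three numbers equal $p_1, p_2, p_3$ is exactly what I cannot show.)

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical inputs; U0, U1, U2 and (mu_U, Sigma_U) are placeholders.
U0, U1, U2 = 0.0, 0.7, -0.3
mu_U = np.array([0.5, -0.2, 0.1])                 # means of (eta_A, eta_B, eta_C)
Sigma_U = np.array([[1.0, 0.3, 0.2],
                    [0.3, 1.0, 0.1],
                    [0.2, 0.1, 1.0]])

def upper_orthant_prob(idx, lower, mu, Sigma):
    """P(X[idx] >= lower componentwise) for X ~ N(mu, Sigma), via P(-X <= -lower)."""
    mu_sub = np.asarray(mu)[idx]
    Sigma_sub = np.asarray(Sigma)[np.ix_(idx, idx)]
    return multivariate_normal(mean=-mu_sub, cov=Sigma_sub).cdf(-np.asarray(lower))

# Right-hand sides of the Step 2 system (indices: 0 = eta_A, 1 = eta_B, 2 = eta_C).
rhs_1 = upper_orthant_prob([0, 1], [U0 - U1, U2 - U1], mu_U, Sigma_U)
rhs_2 = upper_orthant_prob([2, 1], [U0 - U2, U1 - U2], mu_U, Sigma_U)
rhs_3 = upper_orthant_prob([0, 2], [U1 - U0, U2 - U0], mu_U, Sigma_U)
print(rhs_1, rhs_2, rhs_3)   # can these be made equal to p1, p2, p3?
```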

Step 3: This is the hardest step. How do I show that there exists $\epsilon\equiv (\epsilon_0, \epsilon_1, \epsilon_2)$ such that
$$
\begin{bmatrix}
\epsilon_1-\epsilon_0\\
\epsilon_1-\epsilon_2\\
\epsilon_2-\epsilon_0\\
\end{bmatrix}\sim \mathcal{N}(\mu_U, \Sigma_U)
$$

?
The issue here is that I'm considering differences of random variables.
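In particular, the three coordinates in Step 3 are linearly dependent,
$$
\epsilon_1-\epsilon_2 = (\epsilon_1-\epsilon_0) - (\epsilon_2-\epsilon_0),
$$
so any $\Sigma_U$ compatible with Step 3 would have to be singular.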

Best Answer

First off, note that we only ever look at the differences between the various $\epsilon_i$'s. That means that we can safely set $\epsilon_0 = 0$ without changing anything. The same goes for $U_0$.
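
Concretely, writing $\tilde\epsilon_i = \epsilon_i - \epsilon_0$ and $\tilde U_i = U_i - U_0$, the first event, for example, becomes
$$
\{\epsilon_1-\epsilon_0 \geq U_0-U_1,\ \epsilon_1-\epsilon_2 \geq U_2-U_1\}
= \{\tilde\epsilon_1 \geq -\tilde U_1,\ \tilde\epsilon_1-\tilde\epsilon_2 \geq \tilde U_2-\tilde U_1\},
$$
and similarly for the other two events, so nothing is lost by the normalization.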

Writing the three probabilities as $p_1 = Pr(Y=1)$, $p_2 = Pr(Y=2)$ and $p_3 = Pr(Y=0)$, as in a discrete-choice model, we get:

$$
\begin{cases}
Pr(Y=1) = Pr(\epsilon_1 \geq -U_1,\ \epsilon_1 - \epsilon_2 \geq U_2 - U_1)\\
Pr(Y=2) = Pr(\epsilon_2 \geq -U_2,\ \epsilon_2 - \epsilon_1 \geq U_1 - U_2)\\
Pr(Y=0) = Pr(-\epsilon_1 \geq U_1,\ -\epsilon_2 \geq U_2)
\end{cases}
$$

Let's inspect the last two of these a bit more closely. If $\epsilon_2 \geq -U_2$ and $-\epsilon_1 \geq U_1$, then $\epsilon_2 - \epsilon_1 \geq U_1 - U_2$, so the conditions for $Y=2$ hold. We'll just have to make sure that the conditions for $Y=0$ cannot hold at the same time!

Well, we're in luck. We just need to make sure that $-\epsilon_2 < U_2$, i.e. $\epsilon_2 > -U_2$. Since we already have $\epsilon_2 \geq -U_2$, that just means $\epsilon_2 \neq -U_2$, which happens with probability $1$ because $\epsilon$ is continuously distributed.

We can do the same for the first and third cases combined. We find that if $\epsilon_1 \geq -U_1$ and $-\epsilon_2 \geq U_2$, then the conditions for $Y=1$ hold, and we should not also have the conditions for $Y=0$ satisfied. We get that $\epsilon_1 \neq -U_1$.

Combining the first two cases is also interesting. We get that we should not have $\epsilon_1 - \epsilon_2 = U_2 - U_1$.

We have to look at 3 cases. If both $\epsilon_1 > -U_1$ and $\epsilon_2 > -U_2$, their difference will decide which case we're in. If both $\epsilon_1 < -U_1$ and $\epsilon_2 < -U_2$, we're in the case $Y=0$. There is a region where we have no valid case (unless $U_1 = U_2$) but that's easy enough to compensate for.

Have a look at the interactive graph at https://www.desmos.com/calculator/xtmn9xowwi, where you can play around with the values. Just take a subset of the 2-dimensional plane that contains part of each of the 3 regions, and assign the appropriate probability to each.
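
To see that this recipe really does produce the required probabilities, here is a minimal simulation sketch (with $\epsilon_0 = U_0 = 0$; the values of $U_1, U_2$ and $p_1, p_2, p_3$ are arbitrary placeholders): put mass $p_1, p_2, p_3$ on small disks sitting strictly inside the three regions and estimate the three probabilities by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs (eps0 = U0 = 0 as above).
U1, U2 = 0.7, -0.3
p1, p2, p3 = 0.2, 0.5, 0.3

# One point strictly inside each region; a disk of radius 0.5 around each
# point stays inside its region, whatever the values of U1 and U2.
centers = {
    1: np.array([-U1 + 1.0, -U2 - 1.0]),   # region for Y = 1
    2: np.array([-U1 - 1.0, -U2 + 1.0]),   # region for Y = 2
    0: np.array([-U1 - 1.0, -U2 - 1.0]),   # region for Y = 0
}
radius = 0.5

def sample_eps(n):
    """Sample (eps1, eps2): with probability p_i, uniform on the disk for region i."""
    labels = rng.choice([1, 2, 0], size=n, p=[p1, p2, p3])
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    r = radius * np.sqrt(rng.uniform(size=n))        # uniform over a disk
    offsets = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    return np.array([centers[k] for k in labels]) + offsets

eps1, eps2 = sample_eps(200_000).T

# The three events from the display above (eps0 = U0 = 0).
in1 = (eps1 >= -U1) & (eps1 - eps2 >= U2 - U1)
in2 = (eps2 >= -U2) & (eps2 - eps1 >= U1 - U2)
in0 = (-eps1 >= U1) & (-eps2 >= U2)

print("estimated:", in1.mean(), in2.mean(), in0.mean())   # ~ (0.2, 0.5, 0.3)
print("targets:  ", p1, p2, p3)
```

Since only differences enter the three events, adding an independent, absolutely continuous $\epsilon_0$ to all three coordinates turns this into a distribution with a density on $\mathbb{R}^3$ while leaving the three probabilities unchanged.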
