Joint density of independent gamma distributions

density function, gamma distribution, probability, probability distributions

Let $X,Y$ be independent random variables such that $X \sim \Gamma(\alpha, \lambda)$ and $Y \sim \Gamma(\beta, \lambda)$. What is the joint density of $(X,X+Y)$? And what is the conditional density $g_{X\mid X+Y}(x\mid z)$ of $X$ given $X+Y = z$?

I know that if $X,Y$ are independent Gamma random variables with the same rate $\lambda$ then $X+Y \sim \Gamma(\alpha + \beta, \lambda)$, but how should I compute the joint density? Of course $X$ and $X+Y$ are not independent here. Thanks.
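(For a quick numerical confirmation of that fact, here is a minimal Monte Carlo sketch with illustrative parameter values chosen only for the example; $\lambda$ is treated as a rate, matching the densities written below, so NumPy's scale is $1/\lambda$.)

```python
import numpy as np
from scipy import stats

# Illustrative parameters (not from the question); lam is the rate, so scale = 1/lam.
rng = np.random.default_rng(0)
alpha, beta_, lam = 2.0, 3.0, 1.5
n = 100_000

x = rng.gamma(shape=alpha, scale=1.0 / lam, size=n)
y = rng.gamma(shape=beta_, scale=1.0 / lam, size=n)

# Kolmogorov-Smirnov test of X + Y against Gamma(alpha + beta, rate = lam)
stat, pval = stats.kstest(x + y, stats.gamma(a=alpha + beta_, scale=1.0 / lam).cdf)
print(stat, pval)  # a small statistic / non-negligible p-value is consistent with the claim
```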

EDIT:

Based on your answer I went back to the theory, and this is what I have so far:

Basically we have $\phi : \mathbb{R}^2 \rightarrow \mathbb{R}^2$ which maps $(X,Y)^T \mapsto (X,X+Y)^T$. It is easy to see that this is the linear transformation associated with the matrix
$$A =\begin{bmatrix}1 & 0\\1 & 1\end{bmatrix}.$$
Now we find the inverse transformation, as you stated:
$$
\begin{cases}
x = u\\
y= z-u\\
\end{cases}
$$

finding that $\phi^{-1}(u,z) = (u,z-u)$, and so the joint density should be $f(\phi^{-1}(u,z))\,|\det A^{-1}|$ (with the last term equal to $1$, which is fine).
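(To make that last step explicit:
$$A^{-1} = \begin{bmatrix}1 & 0\\-1 & 1\end{bmatrix}, \qquad |\det A^{-1}| = |1\cdot 1 - 0\cdot(-1)| = 1.)$$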

The only doubt I have now is: which random vector does the density $f$ refer to, and how does it get substituted at this point? Thanks; my main interest is always in understanding the whole process, not so much the final result.

OK, I got it.

If we write the joint density of the initial vector $(X,Y)$: since $X,Y$ are independent, the joint density is the product of the marginals,

$$f(x,y)= \frac{\lambda^{\alpha + \beta}}{\Gamma(\alpha)\Gamma(\beta)}\,x^{\alpha-1} e^{-\lambda x}\, y^{\beta-1}e^{-\lambda y}$$

and then we get

$$f(u,z-u)= g(u,z) = \frac{\lambda^{\alpha+ \beta}}{\Gamma(\alpha)\Gamma(\beta)}\,u^{\alpha-1}(z-u)^{\beta-1}e^{-\lambda z}$$

This holds, of course, only on the support $0 < u < z$ (and the density is $0$ elsewhere). Correct?

Best Answer

Setting

$$ \begin{cases} z=x+y \\ u=x \end{cases}$$

that is

$$ \begin{cases} x=u \\ y=z-u \end{cases}$$

The Jacobian is evidently $1$; thus the joint density is

$$f_{UZ}(u,z)=\frac{\lambda^{\alpha+\beta}}{\Gamma(\alpha)\Gamma(\beta)}u^{\alpha-1}(z-u)^{\beta-1}e^{-\lambda z}\cdot\mathbb{1}_{(0;\infty)}(u)\cdot\mathbb{1}_{(u;\infty)}(z)$$
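As a quick numerical sanity check (a sketch only, with the same illustrative parameters as above), one can verify that this joint density integrates to $1$ over the triangular support:

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma as G

# Illustrative parameters (not from the post); lam is the rate.
alpha, beta_, lam = 2.0, 3.0, 1.5

def f_uz(u, z):
    """Joint density of (U, Z) = (X, X + Y) on the triangle 0 < u < z."""
    return (lam ** (alpha + beta_) / (G(alpha) * G(beta_))
            * u ** (alpha - 1) * (z - u) ** (beta_ - 1) * np.exp(-lam * z))

# Integrate over {0 < u < z < infinity}: outer variable u in (0, inf), inner variable z in (u, inf).
total, abserr = integrate.dblquad(lambda z, u: f_uz(u, z), 0, np.inf, lambda u: u, np.inf)
print(total)  # should be very close to 1
```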

The bivariate support of $(U,Z)$ is an infinite triangle (above or below the line $U=Z$, depending on how you set the axes), because

$0<Y<\infty$, i.e.

$0<Z-U<\infty$,

which means $Z>U$.

Now I think you can proceed by yourself.
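(A sketch of the remaining step, not part of the original answer: divide the joint density by the marginal of $Z = X+Y \sim \Gamma(\alpha+\beta,\lambda)$, whose density is $f_Z(z)=\frac{\lambda^{\alpha+\beta}}{\Gamma(\alpha+\beta)}z^{\alpha+\beta-1}e^{-\lambda z}$ for $z>0$. This gives

$$g_{X\mid X+Y}(u\mid z)=\frac{f_{UZ}(u,z)}{f_Z(z)}=\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\,\frac{u^{\alpha-1}(z-u)^{\beta-1}}{z^{\alpha+\beta-1}},\qquad 0<u<z,$$

which is the density of $z\cdot B$ with $B\sim\mathrm{Beta}(\alpha,\beta)$; in particular, it does not depend on $\lambda$.)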