The product of two gamma densities in the same variable is proportional to another gamma density ... as long as the sum of the two shape parameters exceeds 1. When that condition fails, the product is not proportional to any density.
Consider:
$$g(x) = f(x;\alpha_1,\beta_1)\cdot f(x;\alpha_2,\beta_2)$$
where $f(x;\alpha,\beta) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{- \beta x }\,,\quad x>0,\,\alpha,\beta>0$
$$g(x) = \frac{\beta_1^{\alpha_1}}{\Gamma(\alpha_1)} x^{\alpha_1-1} e^{- \beta_1 x }\frac{\beta_2^{\alpha_2}}{\Gamma(\alpha_2)} x^{\alpha_2-1} e^{- \beta_2 x }$$
$$\quad\quad = \frac{\beta_1^{\alpha_1}}{\Gamma(\alpha_1)}\frac{\beta_2^{\alpha_2}}{\Gamma(\alpha_2)} x^{\alpha_1-1}x^{\alpha_2-1} e^{- \beta_1 x }e^{- \beta_2 x }$$
$$\quad\quad = \frac{\beta_1^{\alpha_1}}{\Gamma(\alpha_1)}\frac{\beta_2^{\alpha_2}}{\Gamma(\alpha_2)} x^{(\alpha_1+\alpha_2-1)-1} e^{- (\beta_1+ \beta_2) x }$$
Clearly, for that to be proportional to a $\text{gamma}(\alpha_1+\alpha_2-1,\beta_1+ \beta_2)$ density, the parameters must satisfy the usual constraints (i.e. $\alpha_1+\alpha_2-1>0$, that is, $\alpha_1+\alpha_2>1$).
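The proportionality claim is easy to check numerically. The sketch below (parameter values chosen arbitrarily, subject to $\alpha_1+\alpha_2>1$) verifies that the ratio of the product $g(x)$ to the $\text{gamma}(\alpha_1+\alpha_2-1,\beta_1+\beta_2)$ density is the same constant at every $x$:

```python
import math

def gamma_pdf(x, alpha, beta):
    # gamma density with shape alpha and rate beta, as defined above
    return beta**alpha / math.gamma(alpha) * x**(alpha - 1) * math.exp(-beta * x)

# arbitrary illustrative parameters with alpha1 + alpha2 > 1
a1, b1 = 0.7, 2.0
a2, b2 = 0.9, 1.5

# g(x) / gamma_pdf(x; a1+a2-1, b1+b2) should be the same constant for every x
ratios = [
    gamma_pdf(x, a1, b1) * gamma_pdf(x, a2, b2) / gamma_pdf(x, a1 + a2 - 1, b1 + b2)
    for x in (0.3, 0.8, 1.7, 4.2)
]
print(ratios)  # all four values agree (up to floating-point error)
```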
What happens when it's not the case that $\alpha_1+\alpha_2-1>0$?
To simplify things consider $\beta_1+ \beta_2=1$, and let $\delta=\alpha_1+\alpha_2-1$. The gamma pdf is:
$$\frac{1}{\Gamma(\delta)} x^{\delta-1} e^{- x }\,,\quad x>0\,.$$
Now in the case where $\delta\leq 0$, the integral that defines $\Gamma(\delta)$ doesn't converge (though the function is generally extended to negative values via analytic continuation, that's no longer the value of the integral).
Which is to say, we no longer have something proportional to a valid pdf.
When $\alpha_1+\alpha_2>1$, the multiplicative constant in front of the gamma density is
$$\frac{\beta_1^{\alpha_1}\beta_2^{\alpha_2}}{(\beta_1+\beta_2)^{\alpha_1+\alpha_2-1}}\frac{\Gamma(\alpha_1+\alpha_2-1)}{\Gamma(\alpha_1)\Gamma(\alpha_2)}\,,$$
which can be simplified a little, but I won't pursue that further here as I don't think it's what you're after.
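That constant can be checked against a direct numerical integration of $g$ over $(0,\infty)$. A sketch, with arbitrarily chosen parameters (and the integral truncated at an upper limit where the tail is negligible):

```python
import math

# arbitrary illustrative parameters with a1 + a2 > 1
a1, b1 = 1.3, 2.0
a2, b2 = 0.9, 1.5

# the claimed normalizing constant
claimed = (b1**a1 * b2**a2 / (b1 + b2)**(a1 + a2 - 1)
           * math.gamma(a1 + a2 - 1) / (math.gamma(a1) * math.gamma(a2)))

def g(x):
    # product of the two gamma densities
    return (b1**a1 / math.gamma(a1) * x**(a1 - 1) * math.exp(-b1 * x)
            * b2**a2 / math.gamma(a2) * x**(a2 - 1) * math.exp(-b2 * x))

# midpoint rule for int_0^30 g(x) dx; the tail beyond 30 is negligible here
n, upper = 200000, 30.0
h = upper / n
numeric = sum(g((i + 0.5) * h) * h for i in range(n))
print(claimed, numeric)  # agree to several decimal places
```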
As shown in this article (pp. 5-6), instead of focusing immediately on $\text{N}(0,1)$ the desired result can be obtained as a corollary to the following theorem:
Theorem: If $Y\sim\text{N}(\eta,1)$ with any real $\eta$, then for any real $c>0,\ \ \mathbb{V}(Y\mid -c<Y<c) < 1.$
Then
Corollary: If $X\sim\text{N}(\mu,1)$ with any real $\mu$, then for any real $a<b,\ \ \mathbb{V}(X\mid a<X<b) < 1.$
Proof of corollary:
$$\begin{align}\mathbb{V}(X\mid a<X<b)&=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ a<X<b\right)\\
&=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ a-{a+b\over 2}<X-{a+b\over 2}<b-{a+b\over 2}\right)\\
&=\mathbb{V}\left(X-{a+b\over 2}\ \ {\LARGE\mid}\ \ -{b-a\over 2}<X-{a+b\over 2}<{b-a\over 2}\right)\\
&< 1
\end{align}$$
where the last line follows from the theorem, with $Y=X-{a+b\over 2},\ \ \eta=\mu-{a+b\over 2}$ and $c={b-a\over 2}.$
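The corollary can also be sanity-checked numerically using the standard closed form for the variance of a truncated normal (the specific $(\mu,a,b)$ triples below are arbitrary):

```python
import math

def phi(z):
    # standard normal density
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    # standard normal cdf via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_var(mu, a, b):
    # variance of X ~ N(mu, 1) conditional on a < X < b (standard closed form)
    al, be = a - mu, b - mu
    Z = Phi(be) - Phi(al)
    return 1 + (al * phi(al) - be * phi(be)) / Z - ((phi(al) - phi(be)) / Z) ** 2

# a few arbitrary (mu, a, b) combinations -- the conditional variance is always < 1
for mu, a, b in [(0.0, -1.0, 1.0), (2.5, -1.0, 1.0), (-3.0, 0.5, 4.0), (0.7, -10.0, 0.0)]:
    v = truncated_var(mu, a, b)
    print(mu, a, b, v)
    assert v < 1
```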
Proof of theorem: (See p. 6 of the linked article.) Sketch: Because the normal distribution is a member of the exponential family of distributions, it is straightforward to show that
$$\begin{align}\mathbb{V}(Y\mid -c<Y<c)&={d\over d\eta}\mathbb{E}(Y\mid -c<Y<c)\\
&=1-{d\over d\eta}h(\eta)\\
\end{align}$$
where
$$h(\eta)={\phi(\eta-c)-\phi(\eta+c)\over \Phi(\eta+c)-\Phi(\eta-c)}.
$$
The result follows upon observing that $h$ is a continuous odd function, increasing in $\eta$.
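These two properties of $h$ are easy to confirm numerically; the sketch below checks oddness and monotonicity on a grid (the half-width $c=1.3$ is an arbitrary illustrative choice):

```python
import math

c = 1.3  # arbitrary half-width of the truncation interval

def phi(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def h(eta):
    return (phi(eta - c) - phi(eta + c)) / (Phi(eta + c) - Phi(eta - c))

# h is odd ...
for eta in (0.4, 1.1, 2.7):
    assert abs(h(-eta) + h(eta)) < 1e-12

# ... and increasing on a grid, so h'(eta) > 0 and V = 1 - h'(eta) < 1
etas = [-2 + 0.1 * i for i in range(41)]
vals = [h(e) for e in etas]
assert all(v2 > v1 for v1, v2 in zip(vals, vals[1:]))
```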
Aside: The above-linked article conjectures that $\mathbb{V}(X\mid\alpha<X<\beta)<\mathbb{V}(X)$ holds for any distribution if $0<\mathbb{P}(\alpha<X<\beta)<1$. I show, however, that the conjecture can fail spectacularly for Lognormal distributions.
Best Answer
I think you've misunderstood the definition of stochastically greater. Exercise 1.49 states that if $F_X(t) \le F_Y(t)$ for every $t$, then $X$ is stochastically greater than $Y$, because this suggests that realizations of $X$ tend to be greater than realizations of $Y$. This is what confused the parametrization in the previous version of my response.
With this in mind, the goal is to show that, for any fixed $x_0, \alpha > 0$, the function $$G(\beta) = \int_{x = 0}^{x_0} \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^\alpha \Gamma(\alpha)} \, dx$$ is decreasing in $\beta$.
To this end, we perform a scaling transformation: $$x = \beta y, \quad dx = \beta \, dy$$ to obtain $$G(\beta) = \int_{y=0}^{x_0/\beta} \frac{(y \beta)^{\alpha-1} e^{-y}}{\beta^\alpha \Gamma(\alpha)} \, \beta \, dy = \int_{y=0}^{x_0/\beta} \frac{y^{\alpha-1} e^{-y}}{\Gamma(\alpha)} \, dy.$$ And now, we observe that the integrand is no longer a function of $\beta$, and that the value of $G(\beta)$ depends only on the upper limit of integration. And since $x_0 > 0$, it immediately follows that $G(\beta_1) < G(\beta_2)$ if and only if $\beta_1 > \beta_2$ for all $\beta_1, \beta_2 > 0$.
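The monotonicity can be illustrated numerically: after the change of variables, $G(\beta)$ is just the integral of a fixed positive integrand up to $x_0/\beta$, so raising $\beta$ lowers $G$. A sketch (the values of $\alpha$, $x_0$, and the $\beta$ grid are arbitrary):

```python
import math

def G(beta, alpha=2.3, x0=1.7, n=200000):
    # G(beta) = int_0^{x0/beta} y**(alpha-1) * exp(-y) dy / Gamma(alpha),
    # evaluated by the midpoint rule
    upper = x0 / beta
    h = upper / n
    s = sum(((i + 0.5) * h) ** (alpha - 1) * math.exp(-(i + 0.5) * h) * h
            for i in range(n))
    return s / math.gamma(alpha)

betas = [0.5, 1.0, 2.0, 4.0]
vals = [G(b) for b in betas]
print(vals)
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))  # G strictly decreasing in beta
```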