Minimal Sufficient Statistic for $f(x) = e^{-(x-\theta)}, \; \theta < x < \infty, \; x \in \mathbb{R}$

Tags: statistical-inference, statistics, sufficient-statistics

My question comes from Exercise 6.9(b) of Statistical Inference by Casella and Berger:

6.9: Find a minimal sufficient statistic for $\theta$
(b) $f(x|\theta) = e^{-(x-\theta)}, \quad \theta < x < \infty, \quad -\infty < \theta < \infty$.

This exercise appears to be a straightforward application of the following theorem:

Theorem 6.2.13. Let $f(\mathbf{x}|\theta)$ be the pmf or pdf of a sample $\mathbf{X}$. Suppose there exists a function $T(\mathbf{x})$ such that, for every two sample points $\mathbf{x}$ and $\mathbf{y}$, the ratio $f(\mathbf{x}|\theta)/f(\mathbf{y}|\theta)$ is constant as a function of $\theta$ if and only if $T(\mathbf{x}) = T(\mathbf{y})$. Then $T(\mathbf{X})$ is a minimal sufficient statistic for $\theta$.

A little algebra shows that
$$\frac{f(\mathbf{x}|\theta)}{f(\mathbf{y}|\theta)} = \exp\left(\sum_{i=1}^{n} (y_i - x_i) \right) \frac{I_{(\theta,\infty)}(\mathbf{x}_{(1)})}{ I_{(\theta,\infty)}(\mathbf{y}_{(1)})}$$

for all $\mathbf{x},\mathbf{y} \in (\theta,\infty)^n$. Now it's clear that the desired implication "$f(\mathbf{x}|\theta)/f(\mathbf{y}|\theta)$ is constant as a function of $\theta$ if and only if $T(\mathbf{x}) = T(\mathbf{y})$" depends solely upon the above ratio of indicator functions. If $\mathbf{x}_{(1)} = \mathbf{y}_{(1)}$, then $\frac{I_{(\theta,\infty)}(\mathbf{x}_{(1)})}{ I_{(\theta,\infty)}(\mathbf{y}_{(1)})} = 1$ for all $\theta \in (-\infty, \mathbf{y}_{(1)})$ (and undefined on $[\mathbf{y}_{(1)},\infty)$), so the ratio is constant in $\theta$ for all $\theta$ where it is defined. And if $\mathbf{x}_{(1)} < \mathbf{y}_{(1)}$, then
\begin{align*}
\frac{I_{(\theta,\infty)}(\mathbf{x}_{(1)})}{ I_{(\theta,\infty)}(\mathbf{y}_{(1)})} &=
\begin{cases}
1 & \text{ if } \theta \in (-\infty,\mathbf{x}_{(1)}) \\[2pt]
0 & \text{ if } \theta \in [\mathbf{x}_{(1)},\mathbf{y}_{(1)})
\end{cases}
\end{align*}

(and is undefined for $\theta \geq \mathbf{y}_{(1)}$), so in this case the ratio clearly depends on $\theta$. But if $\mathbf{y}_{(1)} < \mathbf{x}_{(1)}$, then $\frac{I_{(\theta,\infty)}(\mathbf{x}_{(1)})}{ I_{(\theta,\infty)}(\mathbf{y}_{(1)})} = 1$ for all $\theta < \mathbf{y}_{(1)}$ (and is undefined everywhere else). If the ratio were constant as a function of $\theta$ if and only if $\mathbf{x}_{(1)} = \mathbf{y}_{(1)}$, then we could straightforwardly conclude that $T(\mathbf{X}) = \mathbf{X}_{(1)}$ is a minimal sufficient statistic. But in the case where $\mathbf{y}_{(1)} < \mathbf{x}_{(1)}$, is the ratio of indicator functions considered to be constant as a function of $\theta$? Why or why not? Any feedback would be appreciated.
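As a sanity check, the three cases above can be verified numerically. The sketch below (function names are my own, for illustration) evaluates the likelihood ratio of two samples of size $n = 2$ over a small grid of $\theta$ values, returning `nan` wherever the ratio is undefined:

```python
import math

def likelihood(x, theta):
    # Joint pdf of an i.i.d. sample from f(x|theta) = exp(-(x - theta)), x > theta:
    # exp(n*theta - sum(x)) * I(x_(1) > theta)
    if min(x) <= theta:
        return 0.0
    return math.exp(sum(theta - xi for xi in x))

def ratio_profile(x, y, thetas):
    # f(x|theta)/f(y|theta) at each theta; nan wherever the denominator is 0
    return [likelihood(x, t) / likelihood(y, t) if likelihood(y, t) > 0
            else math.nan
            for t in thetas]

thetas = [-1.0, 0.25, 0.75]

# Case x_(1) = y_(1): constant (= e here) wherever defined, nan once theta passes the minima
print(ratio_profile([0.5, 2.0], [0.5, 3.0], thetas))

# Case x_(1) < y_(1): takes two distinct values (e^0.5, then 0), so it depends on theta
print(ratio_profile([0.5, 2.0], [1.0, 2.0], thetas))

# Case y_(1) < x_(1): constant (= e^-0.5) wherever defined, undefined everywhere else
print(ratio_profile([1.0, 2.0], [0.5, 2.0], thetas))
```

The third printout is exactly the puzzling case: the ratio never takes two different values, yet its domain of definition shrinks with $\theta$.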

Edit 2/23/22: As @Henry mentioned in the comments, the situation is clarified if we change the part of the theorem that says

the ratio $f(\mathbf{x}|\theta)/f(\mathbf{y}|\theta)$ is constant as a function of $\theta$

to the new statement

there exists some $k(\mathbf{x},\mathbf{y}) > 0$ (a strictly positive function which does not depend on $\theta$) such that $f(\mathbf{x}|\theta)=k(\mathbf{x},\mathbf{y})f(\mathbf{y}|\theta)$ for all $\mathbf{x}, \mathbf{y}$ in the sample space and all $\theta \in \Theta$.

I would just like some "official" confirmation of this in the form of a theorem in a textbook. Any relevant references would be greatly appreciated.

Best Answer

As your edit shows, the amended statement makes the theorem in Casella and Berger slightly more general. That formulation can be found as Theorem 2.3(iii) in Mathematical Statistics by Shao, stated below:

Theorem 2.3(iii) (Shao) Let $\mathcal{P}$ be a family of distributions on $\mathbb{R}^k$. Suppose that $\mathcal{P}$ contains p.d.f.s $f_P$ with respect to a $\sigma$-finite measure and that there exists a sufficient statistic $T(X)$ such that, for any possible values $x$ and $y$ of $X$, $f_P(x) = f_P(y)\phi(x,y)$ for all $P$ implies $T(x)=T(y)$, where $\phi$ is a measurable function. Then $T(X)$ is minimal sufficient for $P\in\mathcal{P}$.
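To connect Shao's statement with your problem (this is my own working, not from the book): for the shifted exponential sample, the condition $f_P(x) = f_P(y)\phi(x,y)$ for all $P$ reads

$$e^{n\theta - \sum_{i=1}^{n} x_i}\, I_{(\theta,\infty)}(\mathbf{x}_{(1)}) = \phi(\mathbf{x},\mathbf{y})\, e^{n\theta - \sum_{i=1}^{n} y_i}\, I_{(\theta,\infty)}(\mathbf{y}_{(1)}) \quad \text{for all } \theta \in \mathbb{R}.$$

If $\mathbf{x}_{(1)} \neq \mathbf{y}_{(1)}$, choosing $\theta$ strictly between the two minima makes exactly one of the indicators vanish, which is incompatible with the identity holding at every $\theta$ (for $\theta$ below both minima it forces $\phi(\mathbf{x},\mathbf{y}) > 0$, while between the minima it forces one side to be zero and the other positive). Hence the identity implies $\mathbf{x}_{(1)} = \mathbf{y}_{(1)}$, and $T(\mathbf{X}) = \mathbf{X}_{(1)}$ is minimal sufficient. Since no division is involved, the troublesome $0/0$ case in the Casella and Berger formulation never arises.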


Shao, Jun. Mathematical Statistics. Springer Science & Business Media, 2003.