Solved – Confidence interval for sum of parameters

confidence-interval, econometrics, inference

Consider a sequence of independent and identically distributed random variables $\{Y_i, W_i, X_i, U_i\}_{i=1}^n$ and suppose we have two statistical models
$$
Y_i=h_1(X_i, U_i; \beta)
$$
$$
W_i=h_2(X_i, U_i; \gamma)
$$
where:

  • $\beta, \gamma$ are scalar parameters

  • $h_1$ is a function known by the researcher up to $\beta$

  • $h_2$ is a function known by the researcher up to $\gamma$

  • $\beta, \gamma, \{U_i\}_{i=1}^n$ are unknown by the researcher

Suppose that under some assumptions (not relevant for my question) we are able to construct a $95\%$ confidence interval for $\beta$ and a $95\%$ confidence interval for $\gamma$, respectively denoted by $C_{n,\beta,95}= [a,b]$ and $C_{n,\gamma,95}= [c,d]$ with $a,b,c,d$ being some real numbers.

Now suppose that the researcher is also interested in the parameter $\theta\equiv \beta+\gamma$. Which interval do we get by computing
$$
[a+c, b+d]
$$
? Can we say that $C_{n,\theta,95}\subseteq [a+c, b+d]$?

Best Answer

No.

Think about it this way: if the sampling distributions of both of your parameter estimates were $N(\mu=0,\sigma^2=1)$ (97.5th percentile at about 1.96) and if they were independent, then the sampling distribution of their sum would be $N(\mu=0,\sigma^2=2)$. This has its 97.5th percentile at about $\sqrt{2} \times 1.96 \approx 2.77$, i.e. not at $1.96+1.96 = 3.92$.
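Here is a quick simulation sketch of that point, under the purely illustrative assumption that $\hat{\beta}$ and $\hat{\gamma}$ are independent and exactly $N(\cdot,1)$; the endpoint-summed interval over-covers (roughly 99.4%), while the interval built from the correct standard error $\sqrt{2}$ covers about 95%:

```python
import numpy as np
from scipy import stats

# Illustrative assumption: beta_hat ~ N(beta, 1), gamma_hat ~ N(gamma, 1), independent.
rng = np.random.default_rng(0)
beta, gamma = 2.0, -1.0
n_sim = 100_000

beta_hat = rng.normal(beta, 1.0, n_sim)
gamma_hat = rng.normal(gamma, 1.0, n_sim)
z = stats.norm.ppf(0.975)            # about 1.96

theta = beta + gamma
theta_hat = beta_hat + gamma_hat

# Interval obtained by summing the endpoints of the two 95% intervals: theta_hat +/- 2z
covered_endpoint_sum = np.abs(theta_hat - theta) <= 2 * z

# Interval using the correct standard error of the sum, sqrt(1 + 1): theta_hat +/- z*sqrt(2)
covered_correct = np.abs(theta_hat - theta) <= z * np.sqrt(2)

print(covered_endpoint_sum.mean())   # roughly 0.994 -- wider than a 95% interval needs to be
print(covered_correct.mean())        # roughly 0.95
```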

If the sampling distributions of the parameter estimates are independent and well approximated by normal distributions, then the standard error of their sum is $\sqrt{\text{SE}(\hat{\beta})^2 + \text{SE}(\hat{\gamma})^2}$. If they are (non-independent) jointly normally distributed, then you need to add a covariance term (see Wikipedia). A lot of software will estimate the covariance matrix of multiple parameters for you if you fit the two models together (and will even give you a confidence interval for the sum, e.g. via the delta method).
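A minimal sketch of that calculation, with made-up numbers standing in for the estimates and the estimated covariance matrix (which in practice would come from a joint fit of the two models); for $\theta = \beta + \gamma$ the delta-method gradient is just $(1,1)$, so this reduces to $\text{Var}(\hat\beta)+\text{Var}(\hat\gamma)+2\,\text{Cov}(\hat\beta,\hat\gamma)$:

```python
import numpy as np
from scipy import stats

# Hypothetical estimates and estimated covariance matrix of (beta_hat, gamma_hat).
beta_hat, gamma_hat = 0.8, 1.5
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

theta_hat = beta_hat + gamma_hat
grad = np.array([1.0, 1.0])                 # gradient of theta = beta + gamma
se_theta = np.sqrt(grad @ cov @ grad)       # sqrt(Var(b) + Var(g) + 2 Cov(b, g))

z = stats.norm.ppf(0.975)
ci_theta = (theta_hat - z * se_theta, theta_hat + z * se_theta)
print(ci_theta)
```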
