Solved – Identification assumptions and causal relationships

Tags: assumptions, causality, identifiability

I'm new to econometrics and I'm having a hard time answering if the following statement is true or false:

"In regression studies, making adequate identification assumptions is sufficient for identifying causal relationships between the variables of interest"

After some reading, I've come to this answer:

- The structural conditional expectation allows us to draw a causal inference.

- If we cannot collect data on some variables, we can use identification assumptions to recover the structural conditional expectation.

- So, if we make adequate identification assumptions, we can draw a causal inference, and the statement is true.

Could someone please shed some light on this?

Best Answer

"Making adequate identification assumptions is sufficient for identifying causal relationships" is either tautologically true or obviously wrong. It is true if by "adequate identification assumptions" you mean "assumptions that identify a causal effect".

If you mean "adequate" in the sense of "substantively adequate", then of course making such identification assumptions does not always guarantee that you can identify a causal effect. This is what most discussions in the social sciences are about: People question the validity of identification assumptions, state that they should be weakened, and then argue that for this reason, the effect is not identified.

To give a short definition of causal identification: a causal parameter is identified when it can be written in terms of the probability distribution of the observable variables.

E.g., when you think the causal effect of $X$ on $Y$ is a constant $\beta$, one may identify it as $\operatorname{cov}(X, Y)/\operatorname{var}(X)$, the coefficient of a linear regression of $Y$ on $X$, under the assumption $E[\epsilon \mid X]=0$, where $\epsilon$ is the structural error representing all causes of $Y$ other than $X$.
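A quick simulation illustrates this. Below is a minimal sketch (the structural model, coefficients, and sample size are all invented for illustration): when the data really are generated by $Y = \beta X + \epsilon$ with $E[\epsilon \mid X] = 0$, the sample analogue of $\operatorname{cov}(X, Y)/\operatorname{var}(X)$ recovers $\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative structural model: Y = beta * X + eps, with E[eps | X] = 0.
beta = 2.0
X = rng.normal(size=n)
eps = rng.normal(size=n)  # drawn independently of X, so E[eps | X] = 0 holds
Y = beta * X + eps

# Under that assumption, the causal slope is identified by
# cov(X, Y) / var(X), i.e. the OLS coefficient of Y on X.
beta_hat = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)
```

If instead `eps` were correlated with `X` (an omitted confounder), `beta_hat` would converge to something other than `beta`: the formula still computes a regression coefficient, but it no longer identifies the causal effect.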

Or, when you define the causal parameter in terms of potential outcomes, for example as the ATE $E[Y^{1} - Y^{0}]$, you can sometimes identify it as $E[Y \mid X = 1] - E[Y \mid X = 0]$, under the assumption $E[Y^{x} \mid X] = E[Y^{x}]$ for $x \in \{0,1\}$. The latter assumption generalizes the error-term assumption above: $X$ should be mean-independent of all variables that affect $Y$ other than $X$ itself.
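The same point can be sketched in a simulation (the potential outcomes, the true ATE of 1.5, and the sample size are all assumed for illustration): under random assignment, $E[Y^{x} \mid X] = E[Y^{x}]$ holds by design, and the difference of conditional means recovers the ATE.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Illustrative potential outcomes with true ATE = E[Y^1 - Y^0] = 1.5.
Y0 = rng.normal(size=n)
Y1 = Y0 + 1.5

# Random assignment makes X mean-independent of (Y^0, Y^1),
# so the identification assumption E[Y^x | X] = E[Y^x] holds by design.
X = rng.integers(0, 2, size=n)
Y = np.where(X == 1, Y1, Y0)  # only one potential outcome is observed per unit

# The ATE is then identified by a difference of observed conditional means.
ate_hat = Y[X == 1].mean() - Y[X == 0].mean()
```

If assignment depended on the potential outcomes (say, units with higher `Y1` were more likely to get `X = 1`), the same difference of means would still be computable from the data, but it would no longer equal the ATE.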
