Propensity Scores – What Is the Problem with Propensity Score Matching?

econometrics, matching, propensity-scores, treatment-effect

In the estimation of treatment effects, a commonly used method is matching. There are of course several matching techniques, but one of the most popular is propensity-score matching.

However, I sometimes come across claims that the use of propensity scores for matching is controversial and that critics have argued that other procedures might be preferable. I was wondering whether anyone is familiar with this criticism and could explain it or provide references.

So in short, my question is: why is it problematic to use propensity scores for matching?

Best Answer

It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general, can be a problematic method because it discards units, can change the target estimand, and is nonsmooth, making inference challenging. Using propensity scores to match adds additional problems.
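To make the object of these critiques concrete, here is a minimal sketch of plain 1:1 nearest-neighbor PS matching on toy data, assuming Python with NumPy and scikit-learn. A real analysis would use a dedicated package (e.g., MatchIt in R), but the logic is the same: fit a propensity score model, pair each treated unit with the nearest available control on the (logit) propensity score, and compare outcomes in the matched sample.

```python
# Minimal sketch of 1:1 nearest-neighbor propensity score matching on toy data.
# Illustrative only; not a substitute for a dedicated matching package.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                                   # covariates
ps_true = 1 / (1 + np.exp(-(X @ [0.5, -0.3, 0.2])))
t = rng.binomial(1, ps_true)                                  # treatment assignment
y = X @ [1.0, 0.5, -0.5] + 2 * t + rng.normal(size=n)         # outcome, true effect = 2

# 1. Estimate propensity scores with a logistic regression of treatment on covariates
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1 - ps))                              # matching on the logit is common

# 2. Greedy 1:1 nearest-neighbor matching without replacement
treated = np.where(t == 1)[0]
available = set(np.where(t == 0)[0])
pairs = []
for i in treated:
    if not available:
        break
    j = min(available, key=lambda c: abs(logit_ps[i] - logit_ps[c]))
    pairs.append((i, j))
    available.remove(j)

# 3. Estimate the treatment effect as the mean outcome difference within matched pairs
att = np.mean([y[i] - y[j] for i, j in pairs])
print(f"Matched pairs: {len(pairs)}, effect estimate: {att:.2f}")
```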

The most famous critique of propensity score matching comes from King and Nielsen (2019). They make three primary arguments: 1) propensity score matching seeks to imitate a completely randomized experiment rather than a block-randomized experiment, the latter of which yields far better precision and protection against confounding; 2) propensity score matching induces the "propensity score paradox," in which further trimming of units increases imbalance after a point (a problem not shared by some other matching methods); and 3) effect estimation is more sensitive to model specification after propensity score matching than after other matching methods. I'll discuss these arguments briefly.

Argument (1) is undeniable, but it's possible to improve PS matching by first exact matching on some variables (or coarsened versions of them) and doing PS matching within strata of those variables, or by using the PS just to create a caliper and using a different form of matching (e.g., Mahalanobis distance matching [MDM]) to actually pair units. Though these should be standard practice, researchers typically apply PS matching without these beneficial extra steps. This increases reliance on correct specification of the propensity score model to control confounding, since balance is then achieved only on average, not exactly and not necessarily within particular combinations of the covariates.
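As a rough illustration of that hybrid strategy, here is a hedged sketch on assumed toy data (again using NumPy and scikit-learn): exact matching on a discrete covariate, a caliper on the logit propensity score, and Mahalanobis distance on the covariates to choose the actual pairs.

```python
# Sketch: exact matching on a binary covariate, a PS caliper, and Mahalanobis
# distance (rather than the PS itself) to pick the pairs. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
g = rng.binomial(1, 0.5, size=n)                       # discrete covariate for exact matching
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.5, -0.3, 0.2] + 0.4 * g))))

Xg = np.column_stack([X, g])
ps = LogisticRegression().fit(Xg, t).predict_proba(Xg)[:, 1]
logit_ps = np.log(ps / (1 - ps))
caliper = 0.2 * logit_ps.std()                         # a commonly cited caliper width
V_inv = np.linalg.inv(np.cov(X, rowvar=False))         # for the Mahalanobis distance

def mahalanobis(i, j):
    d = X[i] - X[j]
    return np.sqrt(d @ V_inv @ d)

pairs = []
for stratum in (0, 1):                                 # exact matching: only pair within g
    treated = np.where((t == 1) & (g == stratum))[0]
    available = set(np.where((t == 0) & (g == stratum))[0])
    for i in treated:
        # candidates within the PS caliper; pair by Mahalanobis distance
        cands = [j for j in available if abs(logit_ps[i] - logit_ps[j]) <= caliper]
        if cands:
            j = min(cands, key=lambda c: mahalanobis(i, c))
            pairs.append((i, j))
            available.remove(j)

print(f"{len(pairs)} pairs formed out of {int(t.sum())} treated units")
```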

Argument (2) is only somewhat tenable. It's true that the PS paradox can occur when the caliper is successively narrowed, excluding more units, but researchers can easily assess whether this is happening in their data and adjust accordingly: if imbalance increases after tightening a caliper, the caliper can simply be relaxed again. In addition, Ripollone et al. (2018) found that while the PS paradox does occur, it does not always occur at the caliper widths typically recommended and most often used by researchers, indicating that the paradox is less problematic for the actual use of PS matching than it might otherwise seem.
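A simple way to check for the paradox in your own data is to track covariate balance as the caliper narrows. The following sketch (assumed toy data, mean absolute standardized mean difference as the balance measure) illustrates the idea: stop tightening the caliper once imbalance starts to increase.

```python
# Sketch of the diagnostic: match at successively tighter calipers and report
# how many pairs remain and how balanced the matched sample is.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1 - ps))

def match(caliper):
    """Greedy 1:1 matching on the logit PS within the given caliper."""
    treated, available = np.where(t == 1)[0], set(np.where(t == 0)[0])
    pairs = []
    for i in treated:
        cands = [j for j in available if abs(logit_ps[i] - logit_ps[j]) <= caliper]
        if cands:
            j = min(cands, key=lambda c: abs(logit_ps[i] - logit_ps[c]))
            pairs.append((i, j))
            available.remove(j)
    return pairs

def mean_abs_smd(pairs):
    """Mean absolute standardized mean difference across covariates in the matched sample."""
    ti = np.array([i for i, _ in pairs])
    ci = np.array([j for _, j in pairs])
    pooled_sd = np.sqrt((X[t == 1].var(axis=0) + X[t == 0].var(axis=0)) / 2)
    return np.mean(np.abs(X[ti].mean(axis=0) - X[ci].mean(axis=0)) / pooled_sd)

for w in (1.0, 0.5, 0.25, 0.1, 0.05):   # caliper widths in SDs of the logit PS
    pairs = match(w * logit_ps.std())
    print(f"caliper = {w:4.2f} SD: {len(pairs):4d} pairs, mean |SMD| = {mean_abs_smd(pairs):.3f}")
```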

Argument (3) is also only somewhat tenable. King and Nielsen demonstrated that if, after PS matching, you were to use many different models to estimate the treatment effect, the range of possible effect estimates would be much larger than if you were to use a different form of matching (in particular, MDM). The implication is that PS matching doesn't protect against model dependence, which is often touted as its primary benefit. The effect estimate still depends on the outcome model used. The problem with this argument is that researchers typically don't try hundreds of different outcome models after matching; the two most common are no model (i.e., a t-test) or a model involving only main effects for the covariates used in matching. Any other model would be viewed as suspicious, so norms against unusual models already protect against model dependence.
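The sketch below illustrates, on assumed toy data, the two post-matching analyses just mentioned: a simple difference in means in the matched sample (what a t-test would estimate) and a regression of the outcome on treatment plus main effects of the matching covariates.

```python
# Sketch: the two common outcome analyses after matching, on a toy matched sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.5, -0.3, 0.2]))))
y = X @ [1.0, 0.5, -0.5] + 2 * t + rng.normal(size=n)         # true effect = 2

# Greedy 1:1 matching on the logit propensity score (as in the first sketch)
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
logit_ps = np.log(ps / (1 - ps))
treated, available, pairs = np.where(t == 1)[0], set(np.where(t == 0)[0]), []
for i in treated:
    if not available:
        break
    j = min(available, key=lambda c: abs(logit_ps[i] - logit_ps[c]))
    pairs.append((i, j))
    available.remove(j)
idx = np.array([k for pair in pairs for k in pair])           # indices of the matched sample

# (i) Simple difference in means in the matched sample (what a t-test would estimate)
diff_means = y[idx][t[idx] == 1].mean() - y[idx][t[idx] == 0].mean()

# (ii) OLS of the outcome on treatment plus main effects of the covariates, matched sample only
D = np.column_stack([np.ones(len(idx)), t[idx], X[idx]])
beta = np.linalg.lstsq(D, y[idx], rcond=None)[0]

print(f"difference in means: {diff_means:.2f}, regression-adjusted: {beta[1]:.2f}")
```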

I attempted to replicate King and Nielsen's findings by recreating their data scenario to settle an argument with a colleague (unrelated to the points above; it was about whether it matters whether the covariates included were confounders or mediators). You can see that replication attempt here. Using the same data-generating process, I was able to replicate some of their findings but not all of them. (In the demonstration you can ignore the graphs on the right.)

Other critiques of PS matching are more about its statistical performance. Abadie and Imbens (2016) demonstrate that PS matching is not very precise. De los Angeles Resa and Zubizarreta (2016) find in simulations that PS matching can vastly underperform compared to cardinality matching, which doesn't involve a propensity score. This is because PS matching relies on the theoretical properties of the PS to balance the covariates, while cardinality matching uses constraints to require balance, thereby ensuring balance is met in the sample. In almost all scenarios considered, PS matching did worse than cardinality matching. That said, as with many simulation studies, the paper likely wouldn't have been published if PS matching did better, so there may be a selection effect here. Still, it's hard to deny that PS matching is suboptimal.
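To show what "constraints to require balance" means in practice, here is a deliberately simplified sketch of the idea behind cardinality matching, posed as a small integer program using SciPy's MILP solver on toy data: select the largest equal-sized treated and control subsets subject to explicit mean-balance constraints on each covariate. Real implementations (e.g., the designmatch package in R) handle pairing, fine balance, and much larger problems.

```python
# Simplified cardinality-matching sketch: maximize the number of selected units
# subject to |mean difference| <= delta for each covariate, with equal group sizes.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
n_t, n_c, k = 60, 120, 2
Xt = rng.normal(0.3, 1, size=(n_t, k))                     # treated covariates (shifted)
Xc = rng.normal(0.0, 1, size=(n_c, k))                     # control covariates

delta = 0.05 * np.sqrt((Xt.var(axis=0) + Xc.var(axis=0)) / 2)   # balance tolerance per covariate

# Decision variables: one 0/1 indicator per treated unit, then one per control unit.
n_var = n_t + n_c
c = -np.ones(n_var)                                        # maximize the number of selected units

rows, lbs, ubs = [], [], []
# Equal numbers selected from each group: sum(treated) - sum(controls) = 0
rows.append(np.concatenate([np.ones(n_t), -np.ones(n_c)])); lbs.append(0); ubs.append(0)
# Balance: |sum_t x_j - sum_c x_j| <= delta_j * (number selected per group), written as two
# one-sided linear constraints so the right-hand side stays linear in the decision variables.
for j in range(k):
    rows.append(np.concatenate([Xt[:, j] - delta[j], -Xc[:, j]])); lbs.append(-np.inf); ubs.append(0)
    rows.append(np.concatenate([-Xt[:, j] - delta[j], Xc[:, j]])); lbs.append(-np.inf); ubs.append(0)

res = milp(c,
           constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=np.ones(n_var),
           bounds=Bounds(0, 1))

sel = res.x.round().astype(bool)
print(f"selected {sel[:n_t].sum()} treated and {sel[n_t:].sum()} controls with balance enforced")
```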

What should you do? It depends. Matching typically involves a tradeoff among balance, generalizability, and sample size, which correspond to internal validity, external validity, and precision. PS matching optimizes none of them, but it can be modified to sacrifice some to boost another (e.g., using a caliper decreases sample size and hampers generalizability [see my post here for details on that], but often improves balance). If generalizability is less important to you, which is implicitly the case if you use a caliper, then cardinality matching is a good way of maintaining balance and precision. Even better would be overlap weighting (Li et al., 2018), which guarantees exact mean balance and the most precise PS-weighted estimate possible, but it uses weighting rather than matching and so is more dependent on correct model specification. In many cases, though, PS matching does just fine, and you can assess whether it is working well in your dataset before committing to it. If it does not leave you with good balance (measured broadly), or requires too tight a caliper to do so, consider a different method.
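For comparison, here is a minimal sketch of overlap weighting on assumed toy data: treated units are weighted by 1 − e(x) and controls by e(x), where e(x) is the estimated propensity score; the weighted difference in means targets the average treatment effect in the overlap population (ATO); and when e(x) comes from an (unpenalized) logistic regression with the covariates as main effects, the weighted covariate means balance exactly.

```python
# Minimal overlap-weighting sketch (Li et al., 2018) on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
t = rng.binomial(1, 1 / (1 + np.exp(-(X @ [0.8, -0.5, 0.3]))))
y = X @ [1.0, 0.5, -0.5] + 2 * t + rng.normal(size=n)        # true effect = 2

# Large C so the fit is close to the unpenalized MLE (sklearn penalizes by default)
ps = LogisticRegression(C=1e6).fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, 1 - ps, ps)                             # overlap weights

# Weighted difference in means estimates the effect in the overlap population (ATO)
ato = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])

# Balance check: weighted covariate means agree up to the optimizer's tolerance
bal_t = np.average(X[t == 1], axis=0, weights=w[t == 1])
bal_c = np.average(X[t == 0], axis=0, weights=w[t == 0])
print(f"ATO estimate: {ato:.2f}")
print("max |weighted mean difference| across covariates:", np.abs(bal_t - bal_c).max())
```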

References
Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293

de los Angeles Resa, M., & Zubizarreta, J. R. (2016). Evaluation of subset matching methods and forms of covariate balance. Statistics in Medicine, 35(27), 4961–4979. https://doi.org/10.1002/sim.7036

King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 27(4), 435–454. https://doi.org/10.1017/pan.2019.11

Li, F., Morgan, K. L., & Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. https://doi.org/10.1080/01621459.2016.1260466

Ripollone, J. E., Huybrechts, K. F., Rothman, K. J., Ferguson, R. E., & Franklin, J. M. (2018). Implications of the Propensity Score Matching Paradox in Pharmacoepidemiology. American Journal of Epidemiology, 187(9), 1951–1961. https://doi.org/10.1093/aje/kwy078
