It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general, can be a problematic method because it discards units, can change the target estimand, and is nonsmooth, making inference challenging. Using propensity scores to match adds additional problems.
The most famous critique of propensity score matching comes from King and Nielsen (2019). They have three primary arguments: 1) propensity score matching seeks to imitate a randomized experiment instead of a block randomized experiment, the latter of which yields far better precision and control against confounding, 2) propensity score matching induces the "propensity score paradox", where further trimming of the units increases imbalance after a point (not shared by some other matching methods), and 3) effect estimation is more sensitive to model specification after using propensity score matching than other matching methods. I'll discuss these arguments briefly.
Argument (1) is undeniable, but it's possible to improve PS matching by first exact matching on some variables (or coarsened versions of them) and doing PS matching within the resulting strata, or by using the PS only to define a caliper and pairing units with a different distance (e.g., Mahalanobis distance matching [MDM]). Though these should be standard practice, researchers typically apply PS matching without these beneficial steps. Skipping them increases reliance on correct specification of the propensity score model to control confounding, since balance is then achieved only on average, not exactly or necessarily on combinations of variables.
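To make the combination concrete, here is a minimal sketch of that second remedy: the PS defines a caliper, but Mahalanobis distance decides the actual pairing. The data, the caliper rule, and all variable names are illustrative (a common rule of thumb uses 0.2 SD of the logit of the PS; the raw PS scale is used here for simplicity).

```python
# Sketch: Mahalanobis-distance matching within a propensity score caliper.
# Simulated data; all names and tuning choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                      # two covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.5 * X[:, 1])))
t = rng.binomial(1, p_treat)                     # treatment assignment

# Estimate propensity scores with a logistic regression
ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# Inverse covariance matrix for the Mahalanobis distance
VI = np.linalg.inv(np.cov(X, rowvar=False))

caliper = 0.2 * ps.std()                         # illustrative caliper width
treated = np.where(t == 1)[0]
available = set(np.where(t == 0)[0])
pairs = []
for i in treated:
    # Only controls inside the PS caliper are candidates
    cand = [j for j in available if abs(ps[i] - ps[j]) <= caliper]
    if not cand:
        continue                                 # treated unit goes unmatched
    d = X[i] - X[cand]
    # Mahalanobis distance decides the pairing, not the PS itself
    md = np.einsum('ij,jk,ik->i', d, VI, d)
    j = cand[int(np.argmin(md))]
    pairs.append((i, j))
    available.remove(j)                          # match without replacement

print(f"{len(pairs)} matched pairs out of {len(treated)} treated units")
```

The caliper supplies the confounding control the PS is good at, while the Mahalanobis step pairs units that are close on the covariates themselves.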
Argument (2) is only somewhat tenable. It's true that the PS paradox can occur when the caliper is successively narrowed, excluding more units, but researchers can easily assess whether this is happening with their data and adjust accordingly: if imbalance increases after tightening a caliper, the caliper can simply be relaxed again. In addition, Ripollone et al. (2018) found that while the PS paradox does occur, it often does not occur at the caliper widths researchers typically use, indicating that the paradox is less of a problem for the practical use of PS matching than King and Nielsen's results would suggest.
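Checking for the paradox in your own data is mechanical: re-match at a grid of caliper widths and watch a balance statistic such as the standardized mean difference (SMD). A minimal sketch, with simulated data and an illustrative caliper grid:

```python
# Sketch: empirically checking whether tightening a caliper worsens balance
# (the "PS paradox"). Data, caliper grid, and matching rule are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)                            # one confounder
ps = 1 / (1 + np.exp(-x))                         # assume the PS model is correct
t = rng.binomial(1, ps)

def smd_after_matching(caliper):
    """1:1 PS matching without replacement, then SMD of x in the matched sample."""
    avail = list(np.where(t == 0)[0])
    xt, xc = [], []
    for i in np.where(t == 1)[0]:
        if not avail:
            break
        dists = np.abs(ps[i] - ps[avail])
        k = int(np.argmin(dists))
        if dists[k] <= caliper:
            xt.append(x[i])
            xc.append(x[avail.pop(k)])
    if len(xt) < 2:
        return np.nan
    sd = np.sqrt((np.var(xt, ddof=1) + np.var(xc, ddof=1)) / 2)
    return abs(np.mean(xt) - np.mean(xc)) / sd

# Successively narrow the caliper and watch the imbalance; if it starts
# rising, relax the caliper again rather than abandoning PS matching.
for c in [0.2, 0.1, 0.05, 0.02, 0.01]:
    print(f"caliper {c:>5}: SMD = {smd_after_matching(c):.3f}")
```

If the printed SMDs start climbing as the caliper shrinks, you have found the paradox in your data and can back off to the wider caliper.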
Argument (3) is also only somewhat tenable. King and Nielsen demonstrated that if, after PS matching, you were to use many different models to estimate the treatment effect, the range of possible effect estimates would be much larger than if you were to use a different form of matching (in particular, MDM). The implication is that PS matching doesn't protect against model dependence, which is often touted as its primary benefit. The effect estimate still depends on the outcome model used. The problem with this argument is that researchers typically don't try hundreds of different outcome models after matching; the two most common are no model (i.e., a t-test) or a model involving only main effects for the covariates used in matching. Any other model would be viewed as suspicious, so norms against unusual models already protect against model dependence.
I attempted to replicate King and Nielsen's findings by recreating their data scenario to settle an argument with a colleague (unrelated to the points above; it was about whether it matters whether the covariates included were confounders or mediators). You can see that replication attempt here. Using the same data-generating process, I was able to replicate some of their findings but not all of them. (In the demonstration you can ignore the graphs on the right.)
Other critiques of PS matching concern its statistical performance. Abadie and Imbens (2016) demonstrate that PS matching is not very precise. De los Angeles Resa and Zubizarreta (2016) find in simulations that PS matching can vastly underperform cardinality matching, which doesn't involve a propensity score: PS matching relies on the theoretical properties of the PS to balance the covariates, while cardinality matching imposes balance directly as constraints, ensuring it is met in the sample. In almost all scenarios considered, PS matching did worse than cardinality matching. That said, as with many simulation studies, the paper likely wouldn't have been published if PS matching had done better, so there may be a selection effect here. Still, it's hard to deny that PS matching is suboptimal.
What should you do? It depends. Matching typically involves a tradeoff among balance, generalizability, and sample size, which correspond to internal validity, external validity, and precision. PS matching optimizes none of them, but it can be modified to sacrifice some to boost another (e.g., using a caliper decreases sample size and hampers generalizability [see my post here for details on that], but often improves balance). If generalizability is less important to you, which is implicitly the case whenever you use a caliper, then cardinality matching is a good way of maintaining balance and precision. Even better would be overlap weighting (Li et al., 2018), which guarantees exact mean balance and the most precise PS-weighted estimate possible, but it uses weighting rather than matching and so depends more heavily on correct model specification. In many cases, though, PS matching does just fine, and you can assess whether it is working well in your dataset before you commit to it. If it's not leaving you with good balance (measured broadly), or requires too tight a caliper to do so, consider a different method.
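Overlap weighting is simple enough to sketch in a few lines: treated units get weight 1 − e(x), controls get e(x), and with an (unpenalized) maximum likelihood logistic PS the weighted covariate means balance exactly. The data and effect size below are simulated for illustration (a very large C in scikit-learn approximates unpenalized ML):

```python
# Sketch: overlap weights (Li, Morgan, & Zaslavsky, 2018) on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1000
X = rng.normal(size=(n, 2))
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = X[:, 0] + 0.5 * t + rng.normal(size=n)        # true effect = 0.5

# Large C ~ unpenalized maximum likelihood, which is what yields
# exact mean balance of the covariates under overlap weighting
e = LogisticRegression(C=1e6).fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, 1 - e, e)                    # overlap weights

# Weighted difference in means estimates the effect in the overlap population
ato = (np.average(y[t == 1], weights=w[t == 1])
       - np.average(y[t == 0], weights=w[t == 0]))
print(f"estimated effect (ATO): {ato:.3f}")

# The balance property: weighted covariate means agree across groups
bal = (np.average(X[t == 1], axis=0, weights=w[t == 1])
       - np.average(X[t == 0], axis=0, weights=w[t == 0]))
print("weighted mean differences:", np.round(bal, 6))
```

Note that no units are discarded and no weights blow up (all weights lie in (0, 1)), which is where the precision advantage over IPW and matching comes from.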
Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293
de los Angeles Resa, M., & Zubizarreta, J. R. (2016). Evaluation of subset matching methods and forms of covariate balance. Statistics in Medicine, 35(27), 4961–4979. https://doi.org/10.1002/sim.7036
King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 27(4), 435–454. https://doi.org/10.1017/pan.2019.11
Li, F., Morgan, K. L., & Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. https://doi.org/10.1080/01621459.2016.1260466
Ripollone, J. E., Huybrechts, K. F., Rothman, K. J., Ferguson, R. E., & Franklin, J. M. (2018). Implications of the Propensity Score Matching Paradox in Pharmacoepidemiology. American Journal of Epidemiology, 187(9), 1951–1961. https://doi.org/10.1093/aje/kwy078
I'm posting this as an answer, as it is too long for a comment.
Econometricians have written extensively about using IVs as a control for endogeneity while, at the same time, acknowledging the difficulty of finding an appropriate instrument and the many weaknesses of the approach. My view is that introducing IVs into a model creates as many problems as it solves.
So, if you choose to drop the use of IVs, that leaves you with a choice among PSM, Heckman, and 2SLS approaches (though note that 2SLS is, strictly speaking, itself an instrument-based estimator). 2SLS is given extensive theoretical treatment in Wooldridge's classic book Econometric Analysis of Cross Section and Panel Data. It also happens to be a method I don't have a lot of experience with and, therefore, don't have much to say about. Definitely check it out.
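For orientation, the mechanics of 2SLS fit in a few lines: regress the endogenous variable on the instrument(s), then regress the outcome on the fitted values. The instrument z, the endogenous regressor x, and all coefficients below are simulated purely for illustration:

```python
# Sketch of two-stage least squares (2SLS) on simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=n)                 # instrument
u = rng.normal(size=n)                 # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)   # endogenous regressor (correlated with u)
y = 1.5 * x + u + rng.normal(size=n)   # true coefficient on x is 1.5

def ols(X, y):
    """Least-squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Naive OLS is biased because x is correlated with the error (via u)
b_ols = ols(np.column_stack([ones, x]), y)[1]

# Stage 1: project x onto the instrument; Stage 2: regress y on the projection
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
b_2sls = ols(np.column_stack([ones, x_hat]), y)[1]

print(f"OLS: {b_ols:.3f}   2SLS: {b_2sls:.3f}")
```

The projection in stage 1 strips out the part of x that is correlated with the error, which is exactly why a valid instrument is still required.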
There are many criticisms of PSM as a tool for matching. One of the most cogent is by Gary King, Harvard Distinguished Professor, titled Why Propensity Scores Should Not Be Used for Matching here ... https://gking.harvard.edu/files/gking/files/psnot.pdf. King proposes using Mahalanobis distance instead of PSM, convincingly demonstrating its superiority over PSM. If you, in fact, want to match your data, then King's recommendation is to be preferred.
That still leaves Heckman selection bias to discuss. In the absence of additional information about the challenges you face and the issues you are trying to solve, my obvious preference is Heckman's method over the other approaches. Here's a link to his original (1979) paper ... https://faculty.smu.edu/millimet/classes/eco7321/papers/heckman02.pdf. Essentially, Heckman's method constructs an additional variable (the inverse Mills ratio, estimated from a first-stage selection model) that is added as a regressor to downstream models, correcting the estimates for nonrandom selection into the observed sample.
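A minimal sketch of the two-step version on simulated data may make this concrete. The selection rule, coefficients, and error correlation are all illustrative; the point is the probit first stage and the inverse Mills ratio entering the outcome regression:

```python
# Sketch of Heckman's (1979) two-step correction on simulated data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5000
w = rng.normal(size=n)                       # covariate in the selection equation
x = rng.normal(size=n)                       # covariate in the outcome equation
# correlated errors -> selection bias in the observed sample
e_sel, e_out = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
s = (0.5 + 1.0 * w + e_sel > 0).astype(float)    # selection indicator
y = 1.0 + 2.0 * x + e_out                        # outcome, observed only when s == 1

# Step 1: probit of selection on w, fit by maximum likelihood
Z = np.column_stack([np.ones(n), w])
def negll(b):
    p = np.clip(norm.cdf(Z @ b), 1e-10, 1 - 1e-10)
    return -(s * np.log(p) + (1 - s) * np.log(1 - p)).sum()
g = minimize(negll, np.zeros(2), method="BFGS").x

# Step 2: inverse Mills ratio for selected units, added as a regressor
imr = norm.pdf(Z @ g) / norm.cdf(Z @ g)
sel = s == 1
Xo = np.column_stack([np.ones(sel.sum()), x[sel], imr[sel]])
beta = np.linalg.lstsq(Xo, y[sel], rcond=None)[0]

# Naive OLS on the selected sample, for comparison
naive = np.linalg.lstsq(np.column_stack([np.ones(sel.sum()), x[sel]]),
                        y[sel], rcond=None)[0]
print(f"corrected intercept/slope: {beta[0]:.2f} / {beta[1]:.2f}")
print(f"naive     intercept/slope: {naive[0]:.2f} / {naive[1]:.2f}")
```

In this setup the selection bias shows up in the intercept; the inverse Mills ratio term absorbs it, which is the "additional variable" the method adds.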
Note that his method is not hard to code or program in any software language. Here's a link to one of the clearest programming solutions from SAS Support ... https://support.sas.com/resources/papers/proceedings14/SAS207-2014.pdf. This SAS solution can be translated into the software of your choice.
Sparse or rare data introduces a different issue from the ones discussed so far. The problem is that maximum likelihood estimation of standard models, such as logistic regression, performs poorly when events are rare: for sparsely observed variables (both dependent and independent), sparsity can trigger warnings of 'quasi-' or 'complete separation of points' and outright convergence failure. For an excellent review of these convergence and estimation problems, see Allison's paper Convergence Failures in Logistic Regression ... https://pdfs.semanticscholar.org/4f17/1322108dff719da6aa0d354d5f73c9c474de.pdf. As Allison notes, one of the most common solutions is to drop the offending feature. Collapsing features to create a new, combined, more robust feature is a second, less satisfactory solution.
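Separation is easy to screen for before fitting anything: check whether the outcome is constant within a level of a binary predictor. A minimal sketch with simulated data (the rare feature and the diagnostic function are illustrative):

```python
# Sketch: a quick diagnostic for the quasi-/complete separation Allison describes.
import numpy as np

rng = np.random.default_rng(5)
n = 200
x1 = rng.binomial(1, 0.5, size=n)            # common binary feature
x2 = (np.arange(n) < 6).astype(int)          # sparse binary feature: 6 of 200 units
y = rng.binomial(1, 0.3 + 0.2 * x1)
y[x2 == 1] = 1                               # the rare feature perfectly predicts y = 1

def separation_flags(X, y):
    """Flag (column, level) cells of binary features where the outcome is constant."""
    flags = []
    for j in range(X.shape[1]):
        for level in (0, 1):
            yy = y[X[:, j] == level]
            if len(yy) > 0 and yy.min() == yy.max():
                flags.append((j, level))     # only one outcome class in this cell
    return flags

X = np.column_stack([x1, x2])
print("separated (column, level) cells:", separation_flags(X, y))
```

Any flagged cell means the MLE for that coefficient is off to infinity, which is exactly when you would consider dropping or collapsing the feature as described above.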
However, when it comes to sparsely observed target or dependent variables, dropping or collapsing information is not an option. Fortunately, there are workarounds here as well. Here's a link to one of the best solutions, proposed (again) by Gary King: https://gking.harvard.edu/category/research-interests/methods/rare-events. And here's a PhD dissertation that reviews the literature with respect to rare-event data, A Comparison of Different Methods for Modelling Rare Events Data ... https://lib.ugent.be/fulltxt/RUG01/002/163/708/RUG01-002163708_2014_0001_AC.pdf.
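One piece of that rare-events work is simple enough to sketch: if you oversample events (e.g., a balanced case-control style sample) and know the population event rate, the logistic intercept can be shifted back by a "prior correction" term. The population rate tau and the simulated data below are illustrative, and this is only the intercept-correction part of the approach, not the full small-sample bias correction:

```python
# Sketch: intercept "prior correction" for case-control style sampling of
# rare events, in the spirit of King's rare-events work. Illustrative data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)

tau = 0.01                 # assumed known population event rate
ybar = 0.5                 # event share in the artificially balanced sample

n = 2000
X = rng.normal(size=(n, 1))
y = np.repeat([1, 0], n // 2)          # balanced case-control style sample
X[y == 1] += 1.0                       # events come from a shifted distribution

fit = LogisticRegression(C=1e6).fit(X, y)   # large C ~ plain maximum likelihood
b0 = fit.intercept_[0]

# Prior correction: shift the intercept back toward the population event rate
b0_corrected = b0 - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))

print(f"raw intercept {b0:.2f} -> corrected {b0_corrected:.2f}")
```

The slope coefficients are left alone; only the intercept absorbs the difference between the sample and population event rates.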
Let me know, first, if you have any questions and, second, if you have any trouble accessing any of these references. If you do, we'll figure out another way for you to obtain them.