[Math] How to assess research “impact” for tenure/promotion committees

advice, career

Over the last several years, the college-level promotion & tenure committee at my university has increasingly been seeking to apply "objective" criteria for assessing the impact of candidates' research. Journal impact factors have been a favored metric, and we have tried to argue that these are not a reliable measure, for all the usual well-documented reasons. (The AMS statements on "The Culture of Research and Scholarship in Mathematics" have been helpful, if not entirely convincing.) Now they want to use more individual metrics, such as h-indices and g-indices. We would like to discourage this, but it's hard to argue convincingly to non-mathematicians about the flaws of such measures, and at the end of the day, they're demanding SOME numerical measure that they can use to compare candidates, both internally and to faculty at peer institutions.

So, I'd like to hear how other math departments have dealt with such pressures. In particular, how can you articulate standards in a way that maintains high expectations while minimizing the damage to candidates who are doing well-respected research, but whose work, for whatever reason, results in relatively few papers or papers with relatively few citations? And if we do end up having to use something like an h-index, is there any way to collect data from comparable math departments so we can at least say what a "good" value of a particular index looks like for mathematicians?
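For committee members unfamiliar with how these indices are actually computed, here is a minimal sketch using the standard definitions (this is purely illustrative, not the output of any official bibliometric tool; citation counts themselves vary by database):

```python
def h_index(citations):
    """h-index: the largest h such that the author has at least
    h papers with h or more citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """g-index: the largest g such that the author's top g papers
    have, together, at least g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Example: a hypothetical record with citation counts per paper.
papers = [10, 8, 5, 4, 3]
print(h_index(papers))  # 4 (four papers with >= 4 citations each)
print(g_index(papers))  # 5 (top 5 papers total 30 >= 25 citations)
```

Note how sensitive both numbers are to field-specific citation habits: the same definitions applied to a field with longer reference lists produce systematically larger values, which is exactly why cross-field comparison is problematic.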

Best Answer

We have produced a list of the top 10 journals for each area of mathematics represented in the department, plus a list of the top 10 general-subject journals, so our candidates for tenure/promotion need to have publications in one of these journals. However, I know for a fact that this has not stopped the administration from using impact factors, h-indices, etc. Additionally, tenure decisions seem to be more and more conditioned on having outside funding (NSF, NSA, etc.).