EDITORIAL

The measure of research merit

Science  05 Dec 2014:
Vol. 346, Issue 6214, pp. 1155
DOI: 10.1126/science.aaa3796

Each year, $1.4 trillion is invested in research by governments, foundations, and corporations. Hundreds if not thousands of high-profile prizes and medals are awarded to the best researchers, boosting their careers. Therefore, establishing a reliable predictor of future performance is a trillion-dollar matter. Last month, the Alexander von Humboldt Foundation convened an international assembly of leaders in academia, research management, and policy to discuss “Beyond Bibliometrics: Identifying the Best.” Current assessment is largely based on counting publications, counting citations, taking note of the impact factor of the journals where researchers publish, and derivatives of these such as the h-index. These approaches were severely criticized for numerous reasons, with shortcomings particularly apparent when assessing young scientists for prestigious, interdisciplinary awards. It is time to develop more appropriate measures and to use the scientific method itself to help in this endeavor.
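The h-index mentioned above folds publication and citation counts into a single number: a researcher has index h if h of their papers have each received at least h citations. As a minimal illustrative sketch (not part of the original editorial), assuming only a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three papers have at least 3 citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

A metric this coarse illustrates the editorial's concern: it rewards volume and accumulated citations, which necessarily disadvantages researchers early in their careers.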

The difficulty with assessing young scientists is well known. Their short career to date yields a brief publication record, making differences in the numbers of publications between candidates statistically questionable. Faced with the challenge of gauging the worth of limited publications, evaluators might turn to journal impact factors. Using the impact factor as a proxy for the importance of a paper is just plain wrong: there is no assurance that a paper published in a lower-impact journal is less important than one published in a higher-impact journal.

Citations are a better proxy for how much impact a paper is having, but for young scientists and interdisciplinary awards, this metric also has several limitations. For example, recent publications from young scientists have not yet accumulated citations. Altmetrics have been proposed as a possible solution: measuring downloads, page views, tweets, and other social media attention to published research. Analyses conducted by HighWire Press, the publisher of Science and many other academic journals, suggest that downloads of online papers poorly track eventual citations. This could indicate that some papers were found unworthy of being cited, or that some papers were influential but simply not cited because the authors did not feel that the concept required a citation. Adding more context in referencing could reduce some ambiguity and encourage more appropriate referencing, but such proposals have not gained traction. Counting citations is also quantitatively inconsistent. If an author publishes a better method or an improved estimate for a physical parameter, other researchers who use those improvements are obligated to cite that paper. On the other hand, if a researcher publishes a novel idea, it can rapidly move from unknown to common knowledge such that its citation lifetime is exceptionally brief. Furthermore, citation counts scale with the number of publications in a field. The lowering of quality barriers by some open-access publishers has generated a citation explosion in some fields, boosting citation counts with papers that otherwise might not have been published.

Consider a rather outrageous proposal. Perhaps there has been too much emphasis on bibliometric measures that either distort the process or minimally distinguish between qualified candidates. What if, instead, we assessed young scientists according to their willingness to take risks, ability to work as part of a diverse team, creativity in complex problem-solving, and work ethic? There may be other attributes like these that separate the superstars from the merely successful. It could be quite insightful to commission a retrospective analysis of former awardees who now have a career track record since receiving their awards, to improve our understanding of what constitutes good selection criteria. One could then ascertain whether those qualities were apparent in their backgrounds when they were candidates for their awards.

It is time to remedy a flawed bibliometric-based assessment for young scientists. After all, the future performance of a trillion-dollar enterprise is at stake.
