Researchers forced to retract a scientific study can expect a reduction in their citation rate, but the drop-off is most severe in the case of high-profile authors who suffer retractions that result from misconduct.
That’s according to research from MIT Sloan Associate Professors Pierre Azoulay and Alessandro Bonatti and PhD student Joshua Krieger.
“We are trying to document the diffusion of information and the dynamics of reputation,” Bonatti says. “At a big-picture level, we’re asking, ‘How well does our own profession work? Can you trust the scientific community to self-regulate, to spontaneously identify the good ideas and the bad ideas? Does our profession at large recognize mistakes, assign fault, and discount the work of those who make mistakes or cheat?’”
That appears to be the case. In their working paper [PDF], “The Career Effects of Scandal: Evidence from Scientific Retractions,” Azoulay, Bonatti, and Krieger find that an author’s prior work suffers up to a 20 percent “citation penalty,” compared to the work of non-retracted peers, in clear-cut cases of scientific misconduct.
These high-profile retraction events typically involve fraud, plagiarism, or—in the case of a recently retracted study in the journal Science examining how face-to-face conversations can affect a person’s views on same-sex marriage—an author’s inability to produce original data.
“Given conclusive evidence, there’s a punishment, and it is fairly drastic,” Bonatti says.
It’s especially drastic for high-status authors who have produced dozens if not hundreds of articles prior to the retraction. A retraction due to misconduct “is a more informative signal of an author’s bad quality” and “has a large[r] impact on an author’s reputation, independently of its initial level,” according to the paper. “The resulting loss of reputation is therefore largest for high-status authors.”
If work is retracted as a result of an honest mistake—such as contaminated samples or a flawed interpretation of results—the citation penalty for work published prior to the retraction event is a less-pronounced 10 percent. Here, the signal tends not to be as informative, so the reputation loss is tied to the initial level of uncertainty; according to Azoulay and Bonatti, this hits authors with an intermediate reputation harder than those with either a high or low profile.
To study the effect of a retraction on citation rates, Azoulay and Bonatti examined a set of 1,129 articles indexed in PubMed, published from 1977 to 2007 and subsequently retracted prior to 2009. They matched the authors of these articles to the faculty roster of the Association of American Medical Colleges from 1975 to 2006. Citation information came from Thomson Reuters’ Web of Science, as PubMed doesn’t provide citation data. The classification of each retraction as misconduct or honest mistake came from previously defined misconduct codes.
Whether retractions stem from misconduct or mistakes, the subsequent drop in citations of pre-retraction work is steady. There’s no snowball effect, Bonatti says, but there’s a penalty: “As an author, you’re less reliable than you used to be, so we will give you less credit.”
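The “citation penalty” the researchers describe can be illustrated with a toy calculation. The numbers below are hypothetical, and the paper’s actual estimates come from a matched comparison of retracted and non-retracted authors, not this simple ratio:

```python
# Toy illustration of a "citation penalty" (hypothetical numbers; the
# paper's estimates come from matched comparisons, not this raw ratio).

def citation_penalty(retracted_rate: float, control_rate: float) -> float:
    """Relative citation shortfall of a retracted author's earlier work,
    measured against comparable work by non-retracted peers."""
    return 1 - retracted_rate / control_rate

# Hypothetical average citations per year, post-retraction, for articles
# the author published *before* the retraction event:
retracted = 8.0   # retracted author's earlier papers
control = 10.0    # matched non-retracted peers' papers

print(f"{citation_penalty(retracted, control):.0%}")  # prints "20%"
```

A 20 percent penalty, in this framing, simply means the pre-retraction papers are cited a fifth less often than comparable work by peers who never suffered a retraction.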
Retraction rate rising, but still low
The rate of retraction has risen steadily in the last 20 years. Previous scholarship suggests the increase may be the result of less rigorous research amid competition for limited funding, or of the wider availability of software that can detect plagiarism.
Nonetheless, a retraction remains a rare event, affecting only one in 10,000 articles. And while retractions occur less often than other reputation-damaging events, such as the loss of a job or funding, a retraction constitutes an “information shock whose effect can be quantified in terms of citations of previous work,” Bonatti says.
Bonatti cautions that the citation penalty following a retraction is only one way to measure the cost of misconduct. It also doesn’t speak to the extent to which scientists may be getting away with misconduct undetected. “But we can say that their papers receive significantly less credit,” he says.