Elisabeth Bik did not start out her career as a vigilante. In fact, for many years, she was a microbiologist, studying human microbiomes. But then, one evening in the early 2010s, she was reading some papers and noticed something odd about the images in them. “Somebody had used the same photo twice to represent two different experiments,” she remembers.
This one small discovery kicked off what would eventually become Bik’s new career: spotting manipulated images in scientific papers. Her analysis is sometimes subjective, relying on her eyes to spot manipulations and her own criteria for what counts. But she has provided key insights in a variety of cases, including a flawed, high-profile Alzheimer’s study whose problematic images her analysis helped bring to light, prompting a formal investigation.
Bik is just one of many data sleuths who examine scientific papers, looking for patterns and problems. No one knows exactly how much scientific misconduct is out there in the published literature, but Retraction Watch, a blog that tracks papers pulled for a variety of issues, has documented an uptick in retracted papers over the last two decades.
Some of the sleuths who look for these flaws do their work anonymously, on forums like PubPeer, potentially to avoid legal or professional repercussions. And while some journals employ in-house statistics screeners, many researchers do this work outside the typical science publishing ecosystem. They don’t work for academic journals, and their evaluations are not part of the peer review process.
Bik’s story illustrates not only the personal risks that data sleuths face, but also the scope of the problem they’re trying to shine a light on. In 2016, Bik published a systematic review that analyzed images in more than 20,000 papers and found manipulated photographic evidence in 4 percent of them. That figure cuts both ways: it suggests that data manipulation is relatively rare, but also that it is pervasive enough to call many research findings into question.
