
Scientific Integrity In The Age of Photoshop

Photoshop and the Internet have become invaluable tools for preparing research publications—as well as potential instruments of research misconduct

January 2011 - When John Dahlberg was a graduate student in microbiology in the 1960s, publishing a photograph in a scientific journal article was costly. So before his mentor would submit the image, he closely scrutinized it and all of Dahlberg’s accompanying raw data. “He didn’t want to send the image unless he saw everything,” says Dahlberg.

Today, thanks to digital photography and software such as Photoshop, scientists themselves can print, reproduce and edit their own photographs. These technologies are cheap and easy to use, but also—for the panicky or unscrupulous—tempting to abuse, says Dahlberg, who now directs the Division of Investigative Oversight in the Office of Research Integrity (ORI) at the U.S. Department of Health and Human Services.

“Now we find image manipulation in up to 70 percent of our cases,” says Dahlberg, whose office must be notified when a university launches an investigation into a scientific misconduct case involving federally funded research.

Those who investigate allegations of scientific misconduct—fabricating or falsifying data, or plagiarism—say there’s no evidence that such activity has increased. However, Dahlberg says, “we’ve seen a changing landscape in those cases,” thanks to technology.

In response, federal agencies, as well as several scientific journals, are fighting technology with technology, using forensic software that can spot image alterations or plagiarism, while universities work to train students in proper research practices in a wired world.


Shades of gray?

At Johns Hopkins, the Division of Research Integrity in the Office of Policy Coordination began conducting workshops on research misconduct in 2005 for any department or program requesting such training. The office is expanding its programs this month, when new NIH guidelines take effect requiring that all faculty, students and postdocs at grant-receiving institutions complete formal training in the responsible conduct of research.

Many scientific misconduct cases occur when the scientist has a murky understanding of the issue, says division director Sheila Garrity.

For instance, some scientists may not be clear on which uses of Photoshop are legitimate and which stray into misconduct. In general, says Garrity, it’s okay to make limited changes to improve the overall clarity of an image, such as adjusting the contrast to accentuate the bands of different proteins in a gel. In some cases it may be permissible to crop a photo. But it’s not okay to alter particular parts of an image, such as copying a protein band and pasting it somewhere else.
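
To make the distinction concrete, here is a minimal Python sketch, using the Pillow imaging library, of the kind of uniform, whole-image adjustment Garrity describes as acceptable; the file names and contrast factor are illustrative assumptions, not part of any official guidance.

```python
# A minimal sketch of a *global* adjustment of the kind described above:
# one contrast factor applied uniformly to every pixel of a gel image,
# preserving the relative intensities of the bands. File names are
# hypothetical placeholders.
from PIL import Image, ImageEnhance

gel = Image.open("gel_blot.tif").convert("L")       # load as grayscale
adjusted = ImageEnhance.Contrast(gel).enhance(1.4)  # uniform contrast change
adjusted.save("gel_blot_adjusted.tif")

# By contrast, a selective edit such as copying one band and pasting it
# elsewhere changes the data itself rather than its presentation:
# band = gel.crop((40, 100, 120, 130)); gel.paste(band, (40, 200))  # not okay
```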

Another issue is plagiarism. The ease with which researchers can cut and paste a passage from another person’s work has lowered the barriers to this form of misconduct. “You’d think it would be black and white,” says Garrity. “But people have many questions.” For instance, following a plagiarism incident involving graduate students in one department, some of the students said they had thought it was okay to copy a published passage without citation as long as they changed a few words here and there.

To correct such misperceptions, Garrity’s team prepared and delivered a daylong seminar on plagiarism, complete with case studies and a quiz.


Detection tools

While technology has given would-be perpetrators new modes for committing misconduct, it’s also part of the solution. At Johns Hopkins, the Division of Research Integrity will use information technology specialists (either internal employees or outside consultants) to investigate allegations of image manipulation. If plagiarism is alleged, the division will screen the manuscript or publication using plagiarism-detection software.

Elsewhere, the NIH is beginning to scan all grant proposals with plagiarism-detection software, says Chi Dang, vice dean for research at the School of Medicine. If a scan reveals that a proposal contains passages similar to those in an earlier grant, even one by the same scientist, then the grant is automatically rejected.
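
Neither the article nor the NIH describes the software’s internals, but one simple way such overlap screening can work is to compare overlapping word n-grams, or “shingles,” between documents. The Python sketch below illustrates the idea; the five-word shingles and 20 percent threshold are illustrative assumptions.

```python
# A minimal sketch of text-overlap screening: split each document into
# overlapping five-word "shingles" and flag pairs whose Jaccard similarity
# exceeds a threshold. Real plagiarism-detection software is far more
# sophisticated; the parameters here are illustrative assumptions.
import re

def shingles(text, n=5):
    """Return the set of overlapping n-word sequences in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Fraction of shingles shared between two sets (0.0 to 1.0)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def flag_overlap(new_text, prior_text, threshold=0.20):
    """True if the two documents share enough shingles to warrant review."""
    return jaccard(shingles(new_text), shingles(prior_text)) >= threshold
```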

At the Office of Research Integrity, Dahlberg and his colleagues have access to plagiarism-detection software, as well as a suite of other forensic tools for spotting image manipulation.

One set of software tools, known as forensic “droplets,” can detect various types of image alteration, such as erasure marks. One droplet can be used to search for similarities and differences in two black-and-white images, a technique that is useful, for instance, if one image is suspected to be a copy of another. The program color-codes the pixels in each image, and overlays one on the other. Features common to both images will appear in red. Unique features appear in black or white.
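
ORI has not published the droplets’ code, but the overlay idea can be sketched in a few lines of Python with NumPy and Pillow, under the simplifying assumptions that “features” are dark pixels and that the two images have the same dimensions:

```python
# A rough sketch of the overlay comparison described above: threshold two
# grayscale images into feature masks, paint pixels dark in BOTH images red,
# pixels dark in only one image black, and everything else white. This is an
# illustration of the idea, not ORI's actual droplet; it assumes the images
# are the same size and that features are simply dark pixels.
import numpy as np
from PIL import Image

def overlay_compare(path_a, path_b, threshold=128):
    a = np.asarray(Image.open(path_a).convert("L")) < threshold  # feature mask A
    b = np.asarray(Image.open(path_b).convert("L")) < threshold  # feature mask B
    out = np.full(a.shape + (3,), 255, dtype=np.uint8)  # white background
    out[a ^ b] = [0, 0, 0]    # features unique to one image: black
    out[a & b] = [255, 0, 0]  # features common to both images: red
    return Image.fromarray(out)

# Hypothetical usage:
# overlay_compare("band_original.png", "band_suspect.png").save("overlay.png")
```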

Another forensic approach is to apply techniques called contrast enhancement or histogram equalization; these Photoshop filters can reveal, for example, a faint frame around an object that is the signature of a cut-and-paste job. Still other forensic features show where portions of an image have been erased.
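
The same check can be approximated outside Photoshop. The short sketch below uses Pillow’s built-in histogram equalizer, with hypothetical file names, to stretch an image’s tonal range so that a faint seam around a pasted region becomes visible:

```python
# A minimal sketch of the histogram-equalization check described above:
# redistributing an image's gray levels can make a faint rectangular seam
# around a pasted object visible to the eye. File names are hypothetical.
from PIL import Image, ImageOps

suspect = Image.open("figure_panel.png").convert("L")  # load as grayscale
revealed = ImageOps.equalize(suspect)                  # equalize the histogram
revealed.save("figure_panel_equalized.png")
```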

Some journals now also closely scrutinize images for manipulation. At the Journal of Cell Biology, staff use the adjustment features in Photoshop to examine all images in accepted manuscripts for signs of manipulation, according to executive editor Liz Williams.

Forensics can’t catch every case of image manipulation, acknowledges Dahlberg. Whenever a forensics expert develops a new tool, someone will devise a way to outsmart it. “There are probably really clever people out there that we don’t catch,” says Dahlberg. But one needs to start somewhere.

That’s another reason Garrity believes so strongly in prevention. Few scientists set out to commit research fraud, she says, and even fewer cannot be reformed once they truly understand the issue and its ethical implications. So investing in education can make a difference. At Hopkins, allegations of scientific misconduct have declined from a high of 14 in 2007 to just five in 2010. Although there is no way to know what accounts for the decline, says Garrity, “the trends are encouraging.”

–Melissa Hendricks
