July 18, 2017, 12:50 p.m.
Audience & Social

I recently, and shamefully, fell for a photo plastered all over my timeline last week: Vladimir Putin sitting in a chair at the G-20 as other world leaders, including Donald Trump, leaned in for what appeared to be an intense, whispered discussion. The photo was, as Gizmodo gently put it, totally fake.

Fake headlines of the Pope-endorsing-Trump variety are just one part of the ecosystem of fakery online. There’s faked audio to worry about. Faked video. And of course, faked images.

It turns out people aren’t very good at identifying manipulated images, according to new research published Tuesday in the journal Cognitive Research: Principles and Implications by Sophie J. Nightingale, Kimberley A. Wade, and Derrick G. Watson of the University of Warwick.

Participants were only slightly better than chance at telling untouched photos from manipulated ones, classifying 62 percent of the images in the study correctly. They also weren’t great at picking out exactly where a photo had been changed, even when they correctly identified it as manipulated: on average, they located just 45 percent of the manipulations presented.
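For a rough sense of what “slightly better than chance” means, here’s a quick back-of-the-envelope sketch — it is not from the paper itself, and the number of judgments below is hypothetical, picked only for illustration — comparing a 62 percent hit rate to the 50 percent you’d expect from pure guessing:

# Minimal sketch: how far is 62 percent correct from 50-50 guessing?
# The sample size here is hypothetical and for illustration only.
from scipy.stats import binomtest

n_judgments = 1000                     # hypothetical number of real-vs-fake judgments
n_correct = int(0.62 * n_judgments)    # 62 percent classified correctly

result = binomtest(n_correct, n_judgments, p=0.5, alternative="greater")
print(f"{n_correct}/{n_judgments} correct; p-value against chance = {result.pvalue:.2g}")

With enough judgments the gap from chance is statistically reliable, but 62 percent still means roughly four in ten photos are misjudged.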

The study first tested whether participants could identify a manipulated image, showing them images of people in real-world scenes taken from a Google search along with manipulated versions of those images. A second experiment tested whether participants could pinpoint the region of the photo that had been changed.

People don’t necessarily appear to be better at spotting “implausible” manipulations (such as a shadow in the wrong place) than “plausible” ones (such as something being removed from or added to the photo), the researchers found:

Recall that we looked at two categories of manipulations — implausible and plausible — and we predicted that people would perform better on implausible manipulations because these scenes provide additional evidence that people can use to determine if a photo has been manipulated. Yet the story was not so simple. In Experiment 1, subjects correctly detected more of the implausible photo manipulations than the plausible photo manipulations, but in Experiment 2, the opposite was true. Further, even when subjects correctly identified the implausible photo manipulations, they did not necessarily go on to accurately locate the manipulation. It is clear that people find it difficult to detect and locate manipulations in real-world photos, regardless of whether those manipulations lead to physically plausible or implausible scenes.

They concluded:

Future research might also investigate potential ways to improve people’s ability to spot manipulated photos. However, our findings suggest that this is not going to be a straightforward task. We did not find any strong evidence to suggest there are individual factors that improve people’s ability to detect or locate manipulations. That said, our findings do highlight various possibilities that warrant further consideration, such as training people to make better use of the physical laws of the world, varying how long people have to judge the veracity of a photo, and encouraging a more careful and considered approach to detecting manipulations. What our findings have shown is that a more careful search of a scene, at the very least, may encourage people to be skeptical about the veracity of photos. Of course, increased skepticism is not perfect because it comes with an associated cost: a loss of faith in authentic photos. Yet, until we know more about how to improve people’s ability to distinguish between real and fake photos, a skeptical approach might be wise, especially in contexts such as law, scientific publication, and photojournalism where even a small manipulation can have ethically significant consequences.

In its own writeup of the study, The Washington Post made a fun quiz based on the images used in Nightingale’s experiment, which, if you’re curious about your own abilities, you can take here.
