A new algorithm can spot fake photos by looking for inconsistent shadows that are not always obvious to the naked eye.
The technique, which will be published in the journal ACM Transactions on Graphics in September, is the latest tool in the increasingly sophisticated arms race between digital forensics experts and those who manipulate photos or create fake tableaus for deceptive purposes.
National security agencies, the media, scientific journals and others use digital forensic techniques to differentiate between authentic images and computerized forgeries.
James O'Brien, a computer scientist at the University of California, Berkeley, along with Hany Farid and Eric Kee of Dartmouth College, developed an algorithm that analyzes the shadows in an image to determine whether they are physically consistent with a single light source.
In the real world, O'Brien explained, if you drew a line from a shadow to the object that cast the shadow and kept extending the line, it would eventually hit the light source. Sometimes, however, it isn't possible to pair each portion of a shadow to its exact match on an object.
"So instead we draw a wedge from the shadow where the wedge includes the whole object. We know that the line would have to be in that wedge somewhere. We then keep drawing wedges, extending them beyond the edges of the image," said O'Brien.
If the photo is authentic, then all of the wedges will have a common intersection region where the light source is. If they don't intersect, "the image is a phony," O'Brien said.
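That consistency check boils down to a geometric feasibility question: does any single point lie inside every wedge? A minimal sketch of that idea is below, framed as a small linear program. This is an illustration of the general approach rather than the researchers' published code, and it makes simplifying assumptions: each wedge is narrower than 180 degrees and described by an apex plus two boundary directions in the image plane, and the helper names (`wedge_halfplanes`, `wedges_consistent`) are invented for this example.

```python
# Sketch of a wedge-consistency test: each shadow/object pairing constrains
# the projected light position to a wedge; the image is flagged when no
# point satisfies every wedge. Simplified, illustrative code only.
import numpy as np
from scipy.optimize import linprog

def wedge_halfplanes(apex, dir1, dir2):
    """Return (A, b) so that A @ p <= b describes the wedge at `apex`
    bounded by rays in directions dir1 and dir2 (angle < 180 degrees)."""
    apex = np.asarray(apex, dtype=float)
    rows, rhs = [], []
    for d_edge, d_other in ((dir1, dir2), (dir2, dir1)):
        n = np.array([-d_edge[1], d_edge[0]], dtype=float)   # normal to the edge
        if np.dot(n, d_other) < 0:                           # flip it to point
            n = -n                                           # into the wedge
        rows.append(-n)                    # n.(p - apex) >= 0  ->  -n.p <= -n.apex
        rhs.append(-np.dot(n, apex))
    return np.array(rows), np.array(rhs)

def wedges_consistent(wedges):
    """wedges: list of (apex, dir1, dir2). True if some point lies in all of them."""
    parts = [wedge_halfplanes(*w) for w in wedges]
    A = np.vstack([a for a, _ in parts])
    b = np.hstack([r for _, r in parts])
    # Feasibility LP with a constant objective: we only care whether any
    # point satisfies all half-plane constraints at once.
    res = linprog(c=[0, 0], A_ub=A, b_ub=b,
                  bounds=[(None, None)] * 2, method="highs")
    return res.status == 0                 # 0 = a feasible point was found

# Two wedges aimed at a common region pass; adding one aimed elsewhere fails.
print(wedges_consistent([((0, 0), (1, 0.2), (1, 1)),
                         ((5, 0), (-1, 0.2), (-1, 1))]))          # True
print(wedges_consistent([((0, 0), (1, 0.2), (1, 1)),
                         ((5, 0), (-1, 0.2), (-1, 1)),
                         ((2, 5), (0.2, 1), (1, 1))]))            # False
```

In this framing, a "phony" verdict corresponds to an infeasible set of constraints: there is no single point that every shadow wedge could agree on as the light source.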
A Growing Toolbox
The new technique does have limits, though. For instance, it was designed for use with images in which there is a single dominant light source, not situations with lots of little lights or a wide, diffuse light.
One could also imagine a clever forger anticipating the shadow-detection software and crafting shadows that would pass the test. The researchers describe their method as just one technique in a growing toolbox being developed to catch forgers.
O'Brien says one motivation for developing the algorithm is to reduce reliance on subjective evaluation by human experts, who can easily mistake forged photos for authentic ones and authentic photos for forgeries.
Take, for example, the iconic 1969 photo of NASA astronaut Buzz Aldrin posing on the surface of the moon.
"The shadows go in all kinds of different directions and the lighting's very strange ... but if you do the analysis [with our software], it all checks out," O'Brien said.
Our Trouble With Shadows
It's unclear why humans are so bad at detecting inconsistent shadows, especially since our visual systems are so attuned to other cues, such as color, size and shape, said UC Berkeley vision researcher Marty Banks.
One idea, Banks said, is that shadows are a relatively unimportant visual cue when it comes to helping organisms survive.
"It's important to get the color right because that might be a sign that the fruit or meat you're going to eat is spoiled, and it's important to get size and position right so you can interact with things," said Banks, who did not participate in the research. "And then there are things where it just doesn't really matter. One of them is shadows, we believe."
After all, before the advent of photography, people were unlikely to encounter scenes in which shadows pointed in the wrong direction.
Analyzing shadows could also just be a more mentally taxing task, said Shree Nayar, a computer vision researcher at Columbia University in New York, who was also not involved in the research.
"This is a more complex second order effect," Nayer said, "and it's something we have a much harder time perceiving."
Man-Machine Collaboration
For now, at least, the team's method still requires some human assistance in matching shadows to the objects that cast them.
"This is something that in many images is unambiguous and people are pretty good at it," O'Brien explained.
Once that is done, the software takes over and figures out if the shadows could have been created by a common light source.
In this way, the scientists say, their method lets humans do what computers can't (interpret the high-level content in images) and lets computers do what humans are poor at (test for inconsistencies).
"I think for the foreseeable future, the best approaches are going to be this hybrid of humans and machines working together," O'Brien said.
Columbia's Nayar said he could envision a day when computers, armed with increasingly sophisticated models and machine-learning algorithms, won't need human assistance to perform such tasks.
Because their software requires relatively simple human assistance, O'Brien and his team say it could one day be useful not only to experts but to the general public as well.
"So you could imagine a plug-in for Photoshop or an interactive app in your web browser where you can do that, and it would flag any inconsistencies," O'Brien said.
Image: Jo Christian Oterhals/Flickr
- Carrots & Eye Health: How the Myth Began
- As Furs Fade in the West, Popularity Grows in the East (Op-Ed)
- Starry Sky Over Sequoia National Park: Stargazer's Serene Scene (Photos)
- Your Jetpack Is Ready Almost
This article was originally published at LiveScience.