Using Photoshop and other image manipulation software to modify faces in photos has become common practice, but it is not always obvious when it has been done. Researchers from Berkeley and Adobe have created a tool that can not only tell when a face has been Photoshopped, but can also suggest how to undo the edit.
It should be noted right away that this project covers only Photoshop manipulations, specifically those made with the "Face Aware Liquify" feature, which allows both subtle and drastic modifications of many facial features. A universal detection tool is still a long way off, but this is a start.
The researchers (among them Alexei Efros, who recently appeared at our AI + Robotics event) began from the assumption that many image manipulations are performed with popular tools such as Adobe's, so a good place to start would be to look specifically for the manipulations those tools make possible.
They created a script to take portrait photos and manipulate them in a variety of ways: shifting the eyes slightly, emphasizing the smile, narrowing the cheeks and nose, and so on. They then fed the originals and the warped versions to a machine learning model in bulk, in the hope that it would learn to tell them apart.
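The dataset construction described above can be sketched in a few lines. This is a minimal illustration, not the team's actual pipeline: the real script drove Photoshop's Face Aware Liquify, whereas here a crude row-band shift stands in for the warp, and the images are random arrays standing in for portraits.

```python
import numpy as np

def synthetic_warp(img, shift=2):
    """Crudely 'warp' an image by shifting a horizontal band sideways.

    A stand-in for Face Aware Liquify-style edits; the real pipeline
    scripted Photoshop itself.
    """
    warped = img.copy()
    h = img.shape[0]
    band = slice(h // 3, 2 * h // 3)  # middle band, roughly where facial features sit
    warped[band] = np.roll(warped[band], shift, axis=1)
    return warped

def build_pairs(originals):
    """Pair each original (label 0) with its warped counterpart (label 1)."""
    X, y = [], []
    for img in originals:
        X.append(img)
        y.append(0)
        X.append(synthetic_warp(img))
        y.append(1)
    return np.stack(X), np.array(y)

rng = np.random.default_rng(0)
faces = rng.random((10, 32, 32))  # placeholder "portraits"
X, y = build_pairs(faces)
print(X.shape, y.shape)  # (20, 32, 32) (20,)
```

A classifier trained on such labeled pairs then only has to learn whatever statistical traces the warping leaves behind.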
And learn it did, quite well. When people were shown the images and asked which had been manipulated, they did only slightly better than chance. The trained neural network, however, identified the manipulated images 99 percent of the time.
What does it see? Probably subtle patterns in the optical flow of the image that people essentially cannot perceive. Those same patterns also indicate exactly which manipulations were made, which lets the tool suggest how to undo them, even though it has never seen the original.
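The "undo" idea can be sketched as inverting a per-pixel flow field. The assumptions here are mine: the flow convention (each warped pixel records where it came from), the nearest-neighbour sampling, and the uniform-shift demo are all simplifications; in the research, a model predicts the flow field rather than it being given.

```python
import numpy as np

def unwarp(img, flow):
    """Undo a warp given a per-pixel flow field.

    Convention (an assumption for this sketch): flow[y, x] = (dy, dx)
    means warped[y, x] was taken from original[y + dy, x + dx].
    Inverted here with nearest-neighbour sampling, clipped at borders.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # To recover the original at (y, x), read the warped image at the
    # position that pixel's content was moved to: (y, x) - flow.
    src_y = np.clip(ys - np.round(flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(xs - np.round(flow[..., 1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

# Demo: a uniform 3-pixel rightward shift and its known flow field.
rng = np.random.default_rng(1)
orig = rng.random((16, 16))
warped = np.roll(orig, 3, axis=1)  # warped[y, x] = orig[y, x - 3]
flow = np.zeros((16, 16, 2))
flow[..., 1] = -3                  # each warped pixel came from x - 3
recovered = unwarp(warped, flow)
```

Away from the image border (where clipping loses information), `recovered` matches `orig` exactly, which is why predicting the flow field amounts to predicting how to reverse the edit.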
Because it is limited to faces edited with this one Photoshop tool, don't expect this research to pose a significant barrier to the forces of evil tweaking faces left and right. But it is one of many small beginnings in the growing field of digital forensics.
"We live in a world where it's increasingly difficult to trust the digital information we consume," said Adobe's Richard Zhang, who worked on the project, "and I look forward to further exploring this area of research."
You can read the paper describing the project and check out the team's code on the project page.