To Fight Deepfakes, Researchers Built a Smarter Camera

WIRED | 5/28/2019 | Lily Hay Newman

One of the most difficult things about detecting manipulated photos, or "deepfakes," is that digital photo files aren't coded to be tamper-evident. But researchers from New York University's Tandon School of Engineering are starting to develop strategies that make it easier to tell if a photo has been altered, opening up a potential new front in the war on fakery.

Forensic analysts have been able to identify some digital characteristics they can use to detect meddling, but these indicators don't always paint a reliable picture of whatever digital manipulations a photo has undergone. And many common types of "post-processing," like file compression for uploading and sharing photos online, strip away these clues anyway.
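A quick way to see the problem is a toy illustration (this is not the researchers' technique): the Python sketch below, which assumes NumPy and Pillow are available, hides a short tag in the least-significant bits of a photo's pixels and then recompresses the file as JPEG, the way a sharing site might. The tag reads back cleanly before compression and is almost certainly destroyed by it, which is roughly the fate of many fragile pixel-level clues. All function and variable names here are hypothetical.

```python
import io

import numpy as np
from PIL import Image


def embed_lsb_tag(pixels: np.ndarray, tag_bits: np.ndarray) -> np.ndarray:
    """Write tag_bits into the least-significant bits of the first pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, so the original is untouched
    flat[: tag_bits.size] = (flat[: tag_bits.size] & 0xFE) | tag_bits
    return flat.reshape(pixels.shape)


def read_lsb_tag(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits least-significant bits back out of the pixels."""
    return pixels.flatten()[:n_bits] & 1


rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in photo
tag = rng.integers(0, 2, size=128, dtype=np.uint8)              # fragile forensic tag

marked = embed_lsb_tag(photo, tag)
print("tag intact before sharing:", np.array_equal(read_lsb_tag(marked, tag.size), tag))

# Simulate the lossy recompression a photo typically goes through when uploaded.
buf = io.BytesIO()
Image.fromarray(marked).save(buf, format="JPEG", quality=85)
buf.seek(0)
shared = np.asarray(Image.open(buf))
# Very likely False: lossy compression rewrites the low-order bits.
print("tag intact after sharing: ", np.array_equal(read_lsb_tag(shared, tag.size), tag))
```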

But what if that kind of tamper-evident seal originated in the camera itself? The NYU team demonstrates that the signal processors inside a camera, whether a fancy DSLR or a regular smartphone, could be adapted to essentially place watermarks in each photo's code. The researchers propose training a neural network to power the photo development process that happens inside cameras: as the sensor's readings of the light coming through the lens are turned into a high-quality image, the neural network also marks the file with indelible indicators that forensic analysts can check later, if needed.
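As a rough sketch of that idea (not the NYU team's actual architecture), the Python/PyTorch code below trains a toy "neural development" network jointly with a decoder: the network turns raw sensor data into an image while the decoder learns to recover the embedded indicator bits, so image quality and mark recoverability are optimized together. Layer sizes, losses, and all names are hypothetical placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralISP(nn.Module):
    """Toy neural development stage: raw sensor data in, RGB image with an embedded tag out."""

    def __init__(self, tag_bits: int = 32):
        super().__init__()
        self.tag_proj = nn.Linear(tag_bits, 16 * 16)  # spread the tag over a small spatial plane
        self.develop = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, raw, tag):
        plane = self.tag_proj(tag).view(-1, 1, 16, 16)
        plane = F.interpolate(plane, size=raw.shape[-2:])  # stretch the tag plane to image size
        return self.develop(torch.cat([raw, plane], dim=1))


class TagDecoder(nn.Module):
    """Forensic-side network that tries to read the tag back out of the developed image."""

    def __init__(self, tag_bits: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 4 * 4, tag_bits),
        )

    def forward(self, image):
        return self.net(image)


isp, decoder = NeuralISP(), TagDecoder()
opt = torch.optim.Adam(list(isp.parameters()) + list(decoder.parameters()), lr=1e-3)

for _ in range(10):  # tiny demo loop on random stand-in data
    raw = torch.rand(4, 1, 64, 64)              # stand-in for raw sensor readings
    target = raw.repeat(1, 3, 1, 1)             # stand-in for the "ideal" developed photo
    tag = torch.randint(0, 2, (4, 32)).float()  # per-photo indicator bits to embed
    image = isp(raw, tag)
    # Joint objective: the image should look right AND the tag should remain decodable.
    loss = F.mse_loss(image, target) + F.binary_cross_entropy_with_logits(decoder(image), tag)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The excerpt does not detail how the NYU indicators are made to survive later processing; in a setup like the sketch above, one common way to encourage that robustness would be to insert simulated compression and resizing between the developed image and the decoder during training.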

"People are still not thinking about security—you have to go close to the source where the image is captured," says Nasir Memon, one of the project researchers from NYU Tandon who specializes in multimedia security and forensics. "So what we’re doing in this work is we are creating an image which is forensics-friendly, which will allow better forensic analysis than a typical image. It's a proactive approach rather than just creating images for their visual quality and then hoping that...
(Excerpt) Read more at: WIRED