Robots Weekly 🤖: Fake Newsreel 🎥
You won’t believe your eyes! No, seriously. You won’t believe what you see, at least not at first. Or maybe there will always just be a little nagging voice in the back of your head preventing you from full confidence.
Consider this a PSA. If you thought the fake news thing was big, that was just the beginning. Because now it’s about more than fake Facebook pages and “psychographic” targeting snake oil. Now it’s about adding doubt to the last medium we thought we could trust: video.
I am typically optimistic about AI and try to avoid fear mongering and dwelling on potential negative outcomes (everything has those), but this use case has me genuinely worried. Welcome to DeepFakes. (A word of warning before you Google that: be careful which links you click if your filter settings are a little loose.)
DeepFakes, put simply, are fake videos that appear to be real. Meaning they don’t look blatantly modified, tampered with, or faked. At least to the naked eye (there are some telltale signs you can look for). A group at Stanford recently unveiled a technique called Deep Video Portraits that is as impressive as it is terrifying.
It’s like Face/Off, but scary. And it doesn’t involve lots of weird face touching. Deep Video Portraits is “the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor”. The method is built on a Generative Adversarial Network, or GAN, a popular class of AI algorithms for media creation like this. I’ll cover GANs in a future post.
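As a quick teaser before that future post: the adversarial idea itself is simple. A generator tries to produce samples that fool a discriminator, while the discriminator learns to tell real samples from generated ones; each one’s improvement forces the other to improve. Here is a minimal toy sketch in plain NumPy, a 1-D GAN with hand-derived gradients. Everything here (the data distribution, the affine generator, the logistic discriminator, the learning rate) is illustrative on my part and has nothing to do with the actual Deep Video Portraits system, which operates on video frames with far larger networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Real data: samples from N(4, 0.5).
# Generator: g(z) = a*z + b with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), a logistic regression.
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01

for step in range(2000):
    real = rng.normal(4.0, 0.5, size=32)
    z = rng.normal(size=32)
    fake = a * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    dr = sigmoid(w * real + c)
    df = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator: gradient ascent on log D(fake) (non-saturating loss),
    # i.e. nudge its output toward regions the discriminator calls real.
    df = sigmoid(w * fake + c)
    dg = (1 - df) * w          # d/dg of log D(g)
    a += lr * np.mean(dg * z)
    b += lr * np.mean(dg)

# The generator's samples should have drifted toward the real mean (~4).
samples = a * rng.normal(size=1000) + b
print(float(np.mean(samples)))
```

The same tug-of-war, scaled up to deep convolutional networks and video frames, is what makes systems like this able to synthesize footage that looks real.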
Say what? You sit down in front of your webcam and do whatever, set Obama as your target actor, and, bam!, the former president is saying you’re the coolest cat around. See how this might cause some problems?
That eye blinking mention is actually a really big deal, because previous methods didn’t include blinks and so could be pretty easily detected. The researchers behind Deep Video Portraits do lay out some potential negative consequences of this system, as well as some ways videos created with the method can be detected. Video is officially getting its Photoshop moment.
There is a Twitter bet amongst AI experts about when this tech will wreak havoc on an election.
Need to contact my bookie, but from an odds POV:
* before Nov 2018 – sporting
* end of 2018 – fair bet
* end of 2020 – easy pickings
My criteria would be a verified use of DL to fabricate a video pertaining to a political candidate that receives X views before being debunked
— Tim Hwang (@timhwang) June 7, 2018
I don’t think it will matter if the video does get debunked though. Fake news spreads faster and farther than a retraction ever will. So don’t believe everything you see and heed Obama’s advice.
The silver lining of that video? The audio is Jordan Peele doing his (quite good) Obama impression. These systems still require voice talent to sound plausible, though that could change soon. But that’s a post for another time.