‘Deep fake’ videos could upend an election — but Silicon Valley may have a way to combat them
WASHINGTON — Election officials and social media firms already flummoxed by hackers, trolls and bots are bracing for a potentially more potent weapon of disinformation as the 2020 election approaches — doctored videos, known as “deep fakes,” that can be nearly impossible to detect as inauthentic.
In tech company board rooms, university labs and Pentagon briefings, technologists on the front lines of cybersecurity have sounded alarms over the threat, which they say has increased markedly as the technology to make convincing fakes has become increasingly available.
On Tuesday, leaders in artificial intelligence plan to unveil a tool to push back — it includes scanning software that UC Berkeley has been developing in partnership with the U.S. military, which the industry will start providing to journalists and political operatives. The goal is to give the media and campaigns a chance to screen possible fake videos before they could throw an election into chaos.
The software is among the first significant efforts to arm reporters and campaigns with tools to combat deep fakes. It faces formidable hurdles — both technical and political — and the developers say there’s no time to waste.
“We have to get serious about this,” said Hany Farid, a computer science professor at UC Berkeley working with a San Francisco nonprofit called the AI Foundation to confront the threat of deep fakes.
“Given what we have already seen with interference, it does not take a stretch of imagination to see how easy it would be,” he added. “There is real power in video imagery.”
The worry that has gripped artificial intelligence innovators is of a fake video surfacing days before a major election that could throw a race into turmoil. Perhaps it would be grainy footage purporting to show President Trump plotting to enrich himself off the presidency or Joe Biden hatching a deal with industry lobbyists or Sen. Elizabeth Warren mocking Native Americans.
The concern goes far beyond the small community of scientists.
“Not even six months ago this was something available only to people with some level of sophistication,” said Lindsay Gorman, a fellow at the Alliance for Securing Democracy, a bipartisan think tank. Now the software to make convincing fakes is “available to almost everyone,” she said.
“The deep-fakes problem is expanding. There is no reason to think they won’t be used in this election.”
Facebook has launched its own initiative to speed up development of technology to spot doctored videos, and it is grappling with whether to remove or label deep-fake propaganda when it emerges. Google has also been working with academics to generate troves of audio and video — real and fake — that can be used in the fight.
A new California law, AB 730, which takes effect in January, will make it illegal to distribute manipulated audio or video of a candidate that is maliciously deceptive and “would falsely appear to a reasonable person to be authentic.” There is a bipartisan effort in Congress to pass similar legislation.
Such bans, though, are legally precarious and could prove difficult to enforce in part because the line between a malicious fake and a satirical video protected under the 1st Amendment is a difficult one to draw.
The urgency around the videos comes as artificial intelligence developers unveil demos of deep fakes that appear stunningly authentic.
The best-known is a convincing video of former President Obama reciting an innocuous passage he never said. The technology records another person saying the words, then grafts the lip movements and sound onto an image of the target, using algorithms and huge databases of real footage to seamlessly pass off the words as authentic.
The resulting videos pose a major problem for disinformation experts, who have found many potential solutions fall short. A company like Facebook, for example, might not be able to distinguish between a deep fake and a run-of-the-mill political video with real footage that has been legitimately and obviously altered for effect — maybe to highlight the candidate, or to make a satirical point.
“The technology to detect deep fakes is lagging behind,” said Robert Chesney, a University of Texas law professor who researches the deceptive videos. “A huge amount of money has been put forward to try to crack this nut.”
The potential to weaponize the tools of artificial intelligence against American elections is unnerving to the AI Foundation, the nonprofit arm of a firm that develops and markets artificial intelligence applications. Among the company’s works-in-progress are online clones of current-day business and spiritual leaders that could live on forever.
A recent demo for reporters featured a video chat led by an artificial recreation of Deepak Chopra, the mindfulness luminary. The avatar exchanged some pleasantries and then responded to questions about coping with work stress by guiding the group in a short meditation.
“With these big commercial opportunities come significant risks,” said Lars Buttler, CEO of the AI Foundation. “We are focusing half of our energy on prevention/detection, in anticipation of what could go wrong.”
And a lot can go wrong.
A recently altered video of House Speaker Nancy Pelosi, slowed to leave the false appearance that she was inebriated and disoriented, spread across the internet like a virus before fact-checkers could set the record straight.
The video was more “cheap fake” than deep fake, using crude editing technology that could easily be detected. But it foreshadowed how unprepared voters are to process altered videos.
“There is a real danger that video manipulation tools are getting so good that normal people on the street won’t be able to tell anymore what happened,” Buttler said. “We face the risk that at some point we will no longer be able to agree what objective reality is.”
The foundation, which this year enlisted Twitter co-founder Biz Stone as a co-director, is hoping the detection tools it is developing will help avert that. Media and political professionals given access to its “Reality Defender 2020” portal will be invited to run video they wish to check through two algorithms developed by scientists on the front lines of artificial intelligence.
The UC Berkeley algorithm compares the subtle mannerisms of whatever politician is featured in the video in question with their actual mannerisms, mined from an extensive trove of authentic video. The software can then assess whether the two are in sync.
“Every person has a correlation between what they say and how they act otherwise,” Buttler said. “It is almost as unique as a fingerprint. If they are out of sync, it is a telltale sign. You can determine a mathematical correlation.” Those mismatches are typically too subtle for a viewer to notice, he said.
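To make that idea concrete, here is a minimal sketch of how such a behavioral-fingerprint check could be structured. It illustrates the general technique rather than the Berkeley team’s actual system: the feature extractor is a hypothetical placeholder for a face and head-pose tracker, and the one-class classifier is an assumed modeling choice.

```python
# Illustrative sketch only; not the UC Berkeley system's code.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import OneClassSVM

def extract_frame_features(video_path: str) -> np.ndarray:
    """Hypothetical placeholder: return an (n_frames, n_signals) array of
    per-frame behavioral signals (head rotation, mouth opening, brow raise...)."""
    raise NotImplementedError("plug in a face/pose tracker of your choice")

def correlation_fingerprint(signals: np.ndarray) -> np.ndarray:
    """Flatten the pairwise Pearson correlations between behavioral signals
    into a single 'fingerprint' vector for one clip."""
    n = signals.shape[1]
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            r, _ = pearsonr(signals[:, i], signals[:, j])
            feats.append(0.0 if np.isnan(r) else r)
    return np.array(feats)

def fit_reference_model(authentic_clips: list[np.ndarray]) -> OneClassSVM:
    """Learn what the politician's genuine correlation fingerprints look like."""
    X = np.vstack([correlation_fingerprint(c) for c in authentic_clips])
    return OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X)

def is_suspect(model: OneClassSVM, clip_signals: np.ndarray) -> bool:
    """Flag a clip whose fingerprint falls outside the authentic pattern."""
    fp = correlation_fingerprint(clip_signals).reshape(1, -1)
    return model.predict(fp)[0] == -1  # -1 means outlier for OneClassSVM
```

One appeal of this kind of design is that the reference model is trained only on authentic footage of the person in question, so it does not need examples of every possible forgery technique.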
The videos are also run through a separate algorithm, developed in partnership with the FaceForensics project at the Technical University of Munich in Germany, which takes them apart pixel by pixel to look for signs they were altered.
Google has been working with the Munich project to create thousands of deep-fake videos that are used to strengthen such algorithms, enabling them to learn to detect the telltale patterns that alteration leaves in a video’s underlying data, even though those traces are not visible to the viewer.
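As an illustration of how such real-and-fake training data is typically used, the sketch below trains a toy frame-level classifier in PyTorch. The architecture, layer sizes and dummy tensors are assumptions made for brevity; forensics detectors built on data like FaceForensics rely on much larger networks and real labeled face crops.

```python
# Toy illustration of a frame-level real-vs-fake classifier; not production code.
import torch
import torch.nn as nn

class FrameForensicsNet(nn.Module):
    """Tiny CNN that maps a face crop to a single real-vs-fake score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # one logit: higher leans "fake"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def train_step(model, optimizer, frames, labels):
    """One gradient step on a batch of face crops labeled real (0) / fake (1)."""
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(frames).squeeze(1), labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage sketch with random tensors standing in for real training frames.
model = FrameForensicsNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.randn(8, 3, 128, 128)   # batch of face crops
labels = torch.randint(0, 2, (8,))     # 0 = authentic, 1 = deep fake
print(train_step(model, optimizer, frames, labels))
```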
Whether the detection technology will be effective — and durable in the face of a threat that continues to evolve — remains to be seen. Those involved in combating deep fakes foresee a perpetual cat-and-mouse game, in which architects of misinformation use detection technology to build ever more evasive methods.
The plan with Reality Defender 2020 is to allow access only to legitimate media outlets and political campaigns. But that blueprint is fraught, as the technology risks being branded partisan if access is overly restricted and being compromised if made available to outlets and operatives that have murky affiliations.
And even if the detection technology turns out to be flawless, the reluctance of Facebook and other social media giants to take down even demonstrably false and misleading content threatens to limit its effectiveness.
That’s a major concern of Farid, the UC Berkeley scientist.
“I can do as much hard work as I can to detect deep fakes, but if at the end of day Facebook says, ‘We are OK with these,’ then we all have a problem,” he said. He is skeptical of the social media giant, even as it funds his lab’s detection work.
“I told them it is not enough just to work with academics to develop this technology and put out press releases and blog posts about it. They have to do something with it.”