Deepfakes Extend the Concept of “Fake News,” and They’re Here to Stay



Hany Farid, an expert on digital forensics, speaks on deepfakes at the Palo Alto Networks Experience, an event surrounding day three of RSAC 2020.

Fake news is not new, nor are its deadly consequences. What is new, thanks to the internet and social media, is its reach and frequency. Today, misinformation propagates around the world at the speed of light. From small- and large-scale fraud to sowing civil unrest, interfering with democratic elections and inciting violence, misinformation campaigns today are leading to dangerous and deadly outcomes.

Add to this phenomenon the ability to create increasingly compelling and sophisticated fake videos of anybody saying and doing anything, and the threat only grows. This is the landscape that awaits us in 2020 and beyond.

Advances in artificial intelligence have led to computer systems that are able to synthesize images of people who don’t exist, videos of people doing things they never did, and audio recordings of them saying things they never said.

These so-called “deepfakes” are a dangerous addition to an already volatile online world in which rumors, conspiracies and misinformation spread often and quickly. By providing millions of images of people to a machine-learning system, the system can learn to synthesize realistic images of people who don’t exist.

It is likely that we have already seen the first nefarious use of this technology to create a fraudulent identity. Similar technologies can, in live-stream videos, convert an adult face into a child’s face, raising concerns that this technology will be used by child predators. With just a few hundred images of someone, a machine-learning system can learn to insert them into any video.

This face-swap deepfake can be highly entertaining, as in its use to insert Nicolas Cage into movies in which he never appeared. The same technology, however, can also be used to create non-consensual pornography or to impersonate a world leader.

Similar technologies can also be used to alter a video to make a person’s mouth consistent with a new audio recording of them saying something that they never said. When paired with highly realistic voice synthesis technologies, these lip-sync deepfakes can make a CEO announce that their profits are down, leading to global stock manipulation; a world leader announce military action, leading to global conflict; or a presidential candidate confess complicity in a crime, leading to the disruption of an election.

What is perhaps most alarming about these deepfake technologies is that they are not only in the hands of sophisticated Hollywood studios. Software to generate fake content is widely and freely available online, putting in the hands of many the ability to create increasingly compelling and sophisticated fakes.

Coupled with the speed and reach of social media, convincing fake content can instantaneously reach millions. How do we manage a digital landscape when it becomes increasingly difficult to believe not just what we read, but also what we see and hear with our own eyes and ears? How do we manage a digital landscape where, if anything can be fake, everyone has plausible deniability to claim that any digital evidence is fake?

To begin, the major social media platforms must more aggressively and proactively deploy technologies to combat misinformation campaigns, and more aggressively and consistently enforce their policies. For example, Facebook’s terms of service state that users may not use its products to share anything that is “unlawful, misleading, discriminatory or fraudulent.” This is a sensible policy – Facebook now needs to enforce it.

Second, researchers who are developing technologies that we now know can be weaponized should give more thought to how they can put proper safeguards in place so that their technologies are not misused.

Third, researchers need to continue to develop and deploy technologies to detect deepfakes. This includes technologies to detect fakes at the point of upload as well as control-capture technologies that can authenticate content at the point of recording.

Fourth, following House Intelligence Committee hearings, Congress should continue to ensure that it understands the quickly advancing deepfake technology and its potential threat to our society, democracy and national security.

And lastly, we each have to become better digital citizens. We have to move from the Information Age to the Knowledge Age. This will require us all to learn how and where to consume more trustworthy information, how to distinguish the real from the fake and how to interact more respectfully with each other online, even with those with whom we disagree.

Hany Farid, Ph.D., is a professor at the University of California at Berkeley, jointly appointed to the School of Information and the Department of Electrical Engineering and Computer Sciences. His research focuses on digital forensics, image analysis and human perception. Farid spoke on deepfakes and other security issues during RSA 2020 at The Palo Alto Networks Experience, a three-night takeover of The Virgin Hotel that brought together approximately 150 international security leaders for curated events and networking. This essay was originally published by Fox News.