You Won’t Believe Your Eyes

Author: Brett Beasley

Facebook’s Mark Zuckerberg — or is it?

Dr. Ian Goodfellow was true to his name; he didn’t set out to harm anyone. The young Stanford graduate was completing his Ph.D. at the University of Montreal when one night he ventured out to a pub called Les Trois Brasseurs with friends. After some heated debate and a few pints — he thinks he was drinking the amber — Goodfellow came up with one of the most groundbreaking ideas in the history of artificial intelligence. He went home and immediately began coding. His idea appeared in print in 2014, and Goodfellow went on to co-write a popular textbook on machine learning. He now thrives in the rarefied air of Silicon Valley, working on AI initiatives for the likes of Google and Apple.

To any average reader, Goodfellow’s research appears innocuous, nothing more than a bundle of jargon bristling with equations, charts and graphs. But today some experts say Goodfellow’s beer-fueled breakthrough at Les Trois Brasseurs poses a threat to the future of democracy as we know it.

Goodfellow christened his creation Generative Adversarial Networks, or GANs for short. A GAN works by pitting two neural networks against each other. One acts as a forger, learning from existing files to generate false digital images, audio or video. The other acts as a detective trying to spot the fakes. The two go round and round in an algorithmic game of cops and robbers until the process generates a piece of digital fiction that is indistinguishable from fact. Named for its genesis in “deep learning,” this high-tech type of forgery is called a “deepfake.”
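For readers curious what the forger-versus-detective game looks like in code, here is a minimal sketch of the adversarial training loop. It is a deliberately tiny, one-dimensional toy — the forger learns to mimic a bell curve of numbers rather than faces — and the network shapes, learning rate and data distribution are all illustrative assumptions, not Goodfellow’s original setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the forger must learn to imitate: samples from N(4, 1).
# (A toy stand-in for a folder of genuine images.)
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Forger (generator): an affine map g(z) = a*z + b applied to noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Detective (discriminator): logistic classifier d(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr = 0.02
for step in range(3000):
    x = real_batch(64)                 # genuine samples
    z = rng.normal(0.0, 1.0, 64)
    fakes = a * z + b                  # forged samples

    # Detective's turn: ascend the gradient of log d(x) + log(1 - d(fake)),
    # i.e. get better at telling real from forged.
    dx = sigmoid(w * x + c)
    df = sigmoid(w * fakes + c)
    w += lr * (np.mean((1 - dx) * x) - np.mean(df * fakes))
    c += lr * (np.mean(1 - dx) - np.mean(df))

    # Forger's turn: ascend the gradient of log d(fake),
    # i.e. get better at fooling the detective.
    df = sigmoid(w * fakes + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

# If the round-and-round game worked, the forger's output should have drifted
# from its starting mean of 0 toward the real data's mean of 4.
samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"mean of forged samples: {samples.mean():.2f}")
```

Each pass through the loop is one round of the cops-and-robbers game: the detective sharpens its test, then the forger adjusts to slip past it. Real deepfake systems play the same game with millions of parameters instead of four.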

Deepfakes featuring Mark Zuckerberg, Donald Trump and Barack Obama have already begun to circulate online. We can easily imagine how deepfakes might be deployed to undermine democratic processes. Before an election, for example, a politician could fabricate a video of an opponent designed to inflict the most reputational damage possible. With instant distribution via the internet, a deepfake of this nature could spread across the globe long before anyone realized it was cooked up.

To stop the spread of deepfake-driven disinformation, many experts are working on mechanisms for early detection. One is Patrick Flynn, Notre Dame’s Duda Family Professor of Engineering. Flynn, chair of the computer science and engineering department, has partnered with the Defense Advanced Research Projects Agency (DARPA) to build a platform to identify deepfakes automatically.

The platform is known as Media Forensics, or MediFor. DARPA hopes that, by quickly spotting digital forgeries, it will “level the digital imagery playing field, which currently favors the manipulator.”

At last month’s kickoff conference for Notre Dame’s new Technology Ethics Center (ND-TEC), Flynn weighed in on the role professional ethics has played in the rise of deepfakes. He pointed out that the digital tools that fashion deepfakes were birthed in the labs of computer scientists and engineers, people with positive aims and general standards of appropriate and inappropriate use. But now the tools have left the lab. They are being wielded by people without the same rulebook. That shift has led Flynn and some of his colleagues to question whether it is always best to publish their research. They are now more guarded, wary of the technologies they invent and the tools they develop falling into the wrong hands.

The damage done by deepfakes is not limited to the people they dupe. In fact, many technologists worry more that people will simply stop trusting anything they see. They will opt out of the search for reliable information altogether as part of what Aviv Ovadya, founder of the Thoughtful Technology Project, calls “reality apathy.” When reality apathy sets in, Ovadya warns, “People stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.”

Other experts warn that the real challenge is not simply to cure the deepfake disease but to root out the underlying chronic conditions that enable it. Jessica Silbey, a legal scholar who also holds a Ph.D. in comparative literature, says we can diagnose these conditions by thinking of deepfakes as a kind of storytelling. She points out that deepfakes fall into a particular genre; they tend to be consumeristic and addictive, and they exploit our craving for instantaneity and our base impulse to degrade or shame others. That is what drives us to compulsively share, send, like, click or watch them, turning us into vectors of disinformation. For Silbey, we need more than technical solutions; we need “a new culture of the internet.”

It’s a paradox: Once hailed as the “information superhighway,” the internet has become more like a disinformation dystopia. It has made us smarter in some ways, but it has also tended to make us easier to fool. While experts look at the problem from many angles, most agree they can’t solve the problem in isolation. Mark McKenna, Notre Dame’s John P. Murphy Foundation Professor of Law and acting director of ND-TEC, says the first step forward is to build connective tissue between ethics, law, politics, media, computer science and the tech industry. That’s why ND-TEC plans to focus mainly on “multi- and interdisciplinary research.”

If you ask Ian Goodfellow about the deepfake disease, he looks not forward to further innovation but backward to our history. He points out, “It’s been a little bit of a fluke, historically, that we’re able to rely on videos as evidence that something really happened.”

We once relied on a complex set of institutions, publications and professional standards to separate fact from fake in print media. It stands to reason that we will eventually develop similar antibodies to fight new digital strains of the disinformation disease. But in the meantime, it’s no surprise if we’re left feeling a little faked out.

Brett Beasley is the associate director of the Notre Dame Deloitte Center for Ethical Leadership.