Former fellows Pakinam Amer, Magnus Bjerg, and Jeff DelViscio were part of a team that used artificial intelligence to conjure a tragic alternate history of the Apollo 11 moon mission.
Fake news. Since the phrase first found its way into the popular lexicon years ago, it has taken on a life of its own. Whether deployed as a genuine rebuke of misinformation or as a blanket attack on the news media, it stands as a reminder that we can’t believe everything we read. And as the more recent rise of deepfakes has taught us, neither can we believe everything we see and hear.
Deepfake videos make use of artificial intelligence and digital trickery to create a kind of alternate reality, in which someone can appear to say and do things they never said or did. Last year, the technology caught the attention of Knight Science Journalism Fellows Pakinam Amer, Magnus Bjerg, and Jeff DelViscio, who teamed up with researchers from MIT and Harvard to deepfake one of the most iconic moments in American history: the Apollo 11 moon landing.
As part of a team that included Fran Panetta and Halsey Burgund, the fellows imagined an alternate history in which the moon mission ended in disaster, prompting President Richard Nixon to give a televised speech mourning the lives of the astronauts lost. The resulting video, “In Event of Moon Disaster,” produced by the MIT Center for Advanced Virtuality, recreates the somber speech with exquisite detail — even though Nixon never actually gave it.
Last fall the video went viral. It reached the front page of Reddit’s videos subreddit, where it was upvoted more than 35,000 times. And it won the special jury award for creative technology in digital storytelling at the 2019 International Documentary Film Festival Amsterdam. We reached out to Amer, Bjerg, and DelViscio to learn more about how the project came to be — and the lessons it can teach about misinformation in the media. (The conversation has been edited for clarity and concision.)
KSJ@MIT: How did you become involved with “In Event of Moon Disaster”?
DelViscio: It all started with a Bose audio AR hackathon. MIT (like plenty of other universities and colleges) uses hackathons to cruise for young engineering talent — valuable scouting paid for in free merch, caffeine, and cold pizza. Magnus and I immediately gravitated toward the other journalism fellows who showed up. Fran Panetta, a Nieman Fellow at the time, knew about the event. She was there with Halsey Burgund, an Open Doc Lab Fellow. We immediately decided to blow off the requirements of the hackathon in favor of pitching Bose executives directly on an augmented audio experience using their Bose “Frames.” That night kicked off months of collaboration that ultimately produced nothing for Bose, but it led to near-weekly scheming and brainstorming among what would become the four original members of Team “In Event of Moon Disaster.”
Bjerg: It is a great example of the many opportunities you get as a Knight Fellow in a vibrant environment like Cambridge.
Amer: My experience is a little different since I joined the project at a later stage, as an affiliate of the MIT Center for Advanced Virtuality, which produced “In Event of Moon Disaster.” It’s sort of the center’s flagship work this year. When Fran Panetta approached me about it, I was immediately intrigued. The work reflects a broad concern over AI-assisted misinformation, but uses computing technologies to combat that misinformation, raise awareness, and empower the public in an imaginative and artful way.
KSJ: What was your contribution to the project?
Bjerg: I was mainly part of the early ideation phase where we came up with the concept of having Nixon do “the best speech that was never heard,” by using new AI tools to synthesize both his face movements and speech.
DelViscio: Magnus and I were the ones who pushed the group toward a science-focused project, and specifically one on the moon landing. We originally imagined two versions of the project. One was an audio AR experience in which we would stream the Apollo 11 mission control and astronaut audio over the eight days, three hours, 18 minutes, and 35 seconds the mission lasted. We never got around to making that one. The other project is the one we’re talking about now. I mentioned the existence of the “In Event of Moon Disaster” speech, I believe, and it was Magnus who came up with the idea of deepfaking it. Fran Panetta and Halsey Burgund have done an amazing job turning it into reality.
Amer: My work was less about the technical aspects of the project and more about the research questions that partly shaped how the project was translated into reality. I spent a lot of time talking to deepfake creators, technologists, detection companies, and thought leaders in the field of disinformation about some of the biggest questions around the technology, including how it works, how to protect against malicious use, how deepfakes can change the testimonial aspect of visual media, and, finally, the gate-keeping role of journalists and fact-checkers.
KSJ: Were you surprised by the way the project was received by the public?
DelViscio: I wasn’t surprised. I think we picked the right project for this particular moment in digital time. A clip of the speech first took off on Reddit; we knew it was viral fuel for that community. The follow-on media coverage of the installation of the project at the International Documentary Film Festival Amsterdam was really nice to see, too. I think there’s a lot of curiosity about the machine learning methods that undergird deepfakes — and a lot of fear and ignorance. This is a great moment to report deeply on the subject. Also, the 2020 election is nearly upon us, and who knows, deepfakes could play a part in it. I’m not predicting it, but experts on the subject are definitely talking about the possibility as a real and present danger.
Bjerg: Several people who were shown the Nixon part of the film thought a contingency speech had actually been filmed back then and that we had somehow dug it out of the archives. It amazes me how good this technology has actually become. But it also scares me how easily we can be tricked into accepting something as genuine.
Amer: Like Magnus, I was surprised by how many people believed the speech to be real, and scared by what the presence of such a technology means for journalists.
KSJ: Has your work on this project changed the way you think about misinformation in the media?
Amer: Definitely. The research that I did for this project and the experts I talked to opened my eyes to how far back disinformation and misinformation go. It contextualized the rise of deepfakes within a bigger trend that extends far beyond the technology and into how misinformation spreads in general, how it’s amplified and by whom, and why some people are more susceptible to it than others. I shared some of that perspective in two pieces that I penned for The Boston Globe last month.
DelViscio: For me, this project has presented an amazing way to bring a small, organic collaboration, forged in the MIT and Harvard Fellowship incubators, to the larger world. It’s helped educate me about the AI methods involved. I’m now really invested in the subject, in general, as one of my beats because I think its influence and uses will continue to grow in the coming months and years. So, that’s a yes.
KSJ: Do you think there will come a day when the public can truly be duped by a deepfake?
Amer: I think that day has already come. But like many of my peers, I’m still more wary of “shallow fakes”: manipulations in which a video is shared on social media out of context, or perhaps slowed down, sped up, or edited in a way that relays false information or misleads. Compared with AI-generated videos, they’re the cheaper form of fake news, and they’re certainly more pervasive and easier to make (and, so far, to believe).
Bjerg: We have lived with Photoshop for decades and will surely find ways of navigating a world with this kind of technology as well. But we need to share knowledge of it without causing panic and a complete unwillingness to believe anything. That is also a danger: that the mere existence of deepfakes will erode trust.
DelViscio: Yes. The public is already routinely duped by much less sophisticated methods of misinformation. Knowing about the existence of deepfakes should force us all to be more circumspect about what we believe online. I suspect the growing sophistication of deepfakes will lead people who want to believe in misinformation to be even more convinced that the misinformation is true. Or we might all stop believing anything we see on the web. Maybe we should ask Buzz Aldrin what he thinks of all of this…