It took less than a month for Gritty—the furry, wide-eyed mascot of the Philadelphia Flyers hockey team—to go from lovable monster to cautionary tale of the post-truth era.
Days after Gritty’s debut last September, members of the Antifa movement plastered his image on an anti-Trump banner, and leftist groups across the country began to cast the mascot in politically charged Twitter memes. It wasn’t long before alt-right groups attempted to co-opt the mascot for their own ends, disseminating images depicting Gritty in a Nazi uniform. Benjamin Decker, who spoke to Knight Science Journalism fellows in February, said the tug-of-war over Gritty shows how even the most “innocent and nascent things can be steamrolled into contentious political symbols.”
As a research fellow at Harvard’s Shorenstein Center on Media, Politics, and Public Policy, Decker studies how misleading information spreads on the internet. But don’t call it “fake news.” Decker tries to avoid that phrase, which he says is so general that it can easily be turned around to gaslight credible conversations. Instead, he categorizes media manipulation into three specific types of information disorder: misinformation (content shared by people who don’t realize it’s deceptive); disinformation (content purposely pushed online in order to deceive); and malinformation (private information that’s leaked with malicious intent).
Decker is especially interested in disinformation—where it originates, how it spreads, and how it can be corralled. He and his colleagues spent the 2018 election cycle finding and cataloging trending pieces of content from the right, left, and center of the political spectrum. Disinformation, spread largely by far-right ideologues like men’s rights activists and white nationalists, proliferated in every realm of the internet—from open, anonymized, and secure networks to the dark web. (Unless you’re at DARPA, Decker said, the tools simply don’t exist to robustly monitor disinformation on the dark web.)
Identifying disinformation is one thing; defeating it is quite another. In recent years, mainstream websites have taken steps to remove hateful and false information from their platforms. Last year, for example, Apple, Facebook, and YouTube suspended the accounts of Alex Jones and Infowars. Twitter soon followed suit.
But it’s nearly impossible to completely eliminate specific ideas or content from a platform. Many groups use media manipulation to bypass a website’s algorithmic filters, Decker says. Sometimes a group will A/B test a meme to see which version can slip past a platform’s terms-of-service enforcement. And attempts to “de-platform” bad actors can have unintended consequences. The outcast user will often go underground, sharing ideas on alternative platforms. Some migrate to anonymous imageboards like 4chan or to private messaging services like Discord.
Other times, extreme ideas flow in the opposite direction: from obscurity into the mainstream. Take QAnon, a conspiracy theory that holds that members of an alleged deep state plotted to sabotage Donald Trump during his presidency. The theory started small, sprouting from an anonymous 4chan user claiming to have top-secret government information. But when followers began wearing QAnon t-shirts to Trump rallies, media outlets began reporting on the conspiracy.
Ironically, those media outlets breathed life into the theory. Repeated exposure to an idea, even one that seems outlandish at first, reinforces the idea and pushes people toward radicalization. In fact, it might be better to refrain from reporting on certain memes and conspiracies in the first place, Decker says. His advice: “Don’t give it oxygen.”