Instagram is poised to be one of the worst breeding grounds for political fake news ahead of the 2020 election, and hardly anyone is paying attention.
The Facebook-owned platform has largely escaped the leery eye cast upon its parent company and Twitter, which have taken most of the blame for housing foreign disinformation campaigns in 2016. But Instagram was “perhaps the most effective platform” that the Internet Research Agency — Russia’s Kremlin-linked troll farm — used to target voters in the last election, according to a report commissioned by the Senate Select Committee on Intelligence. And as one of the fastest-growing social media platforms, Instagram has around 1 billion active users (more than three times as many as Twitter) and is a go-to source for political news among Gen Zers — most of whom will be old enough to vote next year.
By following just one conspiracy-minded account on Instagram, users can wind up viewing dozens more — all promoted by a platform that is designed to keep them on the app even if it means distributing propaganda to do so.
“Instagram has skated by,” said Paul Barrett, an adjunct law professor at New York University and author of the report “Disinformation and the 2020 Election.” “People don’t realize the potential for sliding into the swamp of disinformation on Instagram.”
Rewarding Fake News
I wanted to see firsthand how political disinformation performs on Instagram, so I created a new account and searched for profiles promoting QAnon — the viral, far-right conspiracy theory movement that baselessly claims there’s a deep-state cabal of Satanic, liberal pedophiles trying to take down Trump. It wasn’t hard to find them. The first QAnon profile I saw had nearly 60,000 followers, and after I clicked on it, Instagram took the reins and recommended dozens more for me to follow.
So, I followed them. As I did, more and more Instagram-recommended QAnon accounts appeared in their place, like a digital conveyor belt of hyperpartisan fringe content. Here’s a quick glimpse of what I saw next on my “Explore” page — the algorithmically curated collection of public posts that Instagram customizes for each user.
That’s in addition to my main feed of posts from the accounts I’d followed at Instagram’s suggestion, which featured images such as these:
The point here is not that Instagram has a QAnon problem. (Although, as The Atlantic first observed, it does.) The point is that Instagram, like other major platforms, rewards sensational clickbait with algorithmic promotion — thereby incentivizing people to produce more of it. It’s a mutually beneficial relationship: Greater user engagement ultimately means more ad revenue for Instagram and a larger audience for fake news creators. And by recommending such content, Instagram also legitimizes it, especially as more and more people turn to social media sites including Instagram for their news.
“As long as the algorithm tends to favor sensationalistic or outrageous or negative material, then — perhaps completely inadvertently — the platform is, in a sense, encouraging purveyors of disinformation to take advantage of it,” Barrett said.
Instagram has been secretive about how, exactly, its algorithm works, so it’s difficult to know much about it with any degree of certainty. But it’s easy to envision how the algorithm could take an undecided voter seeking information about a certain conspiracy theory, for example, and pull them into an echo chamber of political disinformation.
An Ideal Environment For Fake News
Russia’s Internet Research Agency garnered 187 million user engagements on Instagram, far exceeding its 77 million engagements on Facebook and 73 million on Twitter. Instagram’s ability to outperform its parent company in this way “may indicate its strength as a tool in image-centric memetic (meme) warfare,” the Senate report noted, warning that Instagram “is likely to be a key battleground” going forward.
Memes are an increasingly popular vehicle for disinformation: They’re punchy, easy to make, instantly impactful and appealing across generations. They’re also flourishing on Instagram, which was designed specifically to share images. The site is crawling with hoax memes pushing misleading or blatantly false messages about political leaders and their supporters, as well as democratic institutions — and they’re not going anywhere.
Social media companies “are motivated to spread all types of memes, good or bad,” meme expert Jennifer Grygiel told HuffPost’s “Between The Lines.” They’re reluctant to act as gatekeepers — even when it comes to malicious content — because they “are motivated by the bottom line, which is to draw your time and attention,” Grygiel said.
Instagram told HuffPost in a statement that “fighting misinformation is critical” and that it had “learned many lessons from 2016.” But rather than banning posts that promote fake news altogether, the company has started demoting them on a case-by-case basis. It recently expanded its reporting tool to allow users to flag misinformation that they encounter on the platform. Those with the latest version of the app will see “false information” at the bottom of a list of reportable items, including nudity, pornography, self-injury and other options. If Instagram determines that a post has been correctly flagged as misinformation, it won’t delete it. Instead, it will just limit the post’s reach by hiding it from the public “Explore” feed.
That’s why the following posts — each containing provably false information — were still online at the time of this article’s publication, more than a week after I reported them.
Thus far, Instagram has actually been more aggressive in its crackdown on vaguely “sexually suggestive” content than it has on flagrant political disinformation. Its refusal to remove verified falsehoods is perhaps best illustrated by its approach to doctored videos including “deepfakes” and “shallowfakes,” which present another growing challenge in the age of online disinformation. Not long after Facebook declined to take down a video of House Speaker Nancy Pelosi that was edited to make her seem drunk, a pair of deepfakes of Facebook CEO Mark Zuckerberg popped up on Instagram, in which Zuckerberg appeared to claim that he owns people and controls their future. Instagram left them up, along with deepfakes of several other public figures.
Experts have cautioned that the decision by major social media companies to allow maliciously distorted videos and other forms of fake news on their platforms could be disastrous for democracy. What if a deepfake falsely showing a presidential candidate doing something scandalous were to go viral on the eve of the election?
A Lack Of Transparency
Generally speaking, social media algorithms are designed to amplify content that will keep users engaged for as long as possible. We use Instagram, YouTube, Twitter and other sites for free in exchange for our attention, which is the commodity those platforms sell to advertisers. This could mean that while scrolling on Instagram you’ll see lots of cute animal photos, a bunch of workout videos or, depending on your presumed interests, a barrage of propagandist memes — whatever the algorithm predicts will keep you around longest.
Content promoting fake news often becomes popular on its own due to its sensational nature, but algorithms can easily boost it into virality. Tech companies are capable of preventing this from happening, although very few do so proactively.
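To make that dynamic concrete, here is a deliberately simplified sketch, in Python, of how a purely engagement-driven ranker behaves. The scoring weights and field names are invented for illustration and bear no relation to Instagram’s actual, undisclosed system; the point is only that when ranking optimizes for engagement alone, the post that provokes the strongest reactions rises to the top regardless of its accuracy.

```python
# Toy sketch of an engagement-only feed ranker (NOT Instagram's real
# algorithm). Each post is scored by how much interaction it generates,
# then the feed is sorted by that score, highest first.

def rank_feed(posts):
    """Sort posts by a naive engagement score, highest first."""
    def engagement_score(post):
        # Hypothetical weights: comments and shares keep users on the
        # app longer than likes, so they count for more.
        return (post["likes"]
                + 2 * post["comments"]
                + 3 * post["shares"])
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "cute_dog",     "likes": 900, "comments": 40,  "shares": 10},
    {"id": "outrage_meme", "likes": 500, "comments": 300, "shares": 250},
]

# The outrage meme has fewer likes but far more comments and shares,
# so an engagement-only ranker surfaces it first.
print([p["id"] for p in rank_feed(posts)])
```

Nothing in this toy ranker checks whether a post is true; accuracy simply isn’t a variable the objective function sees, which is the incentive problem the researchers describe.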
Take YouTube, for instance. Toward the end of 2018 through early 2019, a string of news articles and reports detailed how the Google-owned site was profiting by driving people down rabbit holes of increasingly extreme disinformation, which was having a radicalizing effect on some viewers. The media hype caught the public’s attention, which sent YouTube into damage control mode. In late January it announced that it would adjust its algorithm to stop recommending videos promoting harmful misinformation; since then, views for such videos have been cut in half. This chain reaction provided a clear case study of how media coverage of a shadowy issue could spark public outrage, in turn threatening to spook big-spending advertisers and, ultimately, compelling a powerful tech giant to change its policies.
Instagram has managed to escape such furor because there’s been very limited reporting on its role as a disinformation network — partly because it abruptly shut down its public API (Application Programming Interface) last year amid the fallout of Facebook’s data privacy scandal. Simply put, this has made it nearly impossible for researchers to gather meaningful data about the spread of fake news and extremist content on the platform. Without such data, there’s less for journalists to report on, and consequently, less pressure on Instagram to make changes.
Without media coverage, public pressure and fleeing advertisers, there’s really nothing that can hold online intermediaries accountable for the content they choose to host or amplify. Talk of tech regulation in the U.S. has been aggressively countered with concerns about censorship. And as it stands, Section 230 of the Communications Decency Act, a law passed in 1996, shields platforms from liability for things their users post. As a result, platforms are free to decide what content they will and won’t allow, and to implement policies that can be as vague and as inconsistently enforced as they wish.
For now, Barrett stressed, public awareness of the abundance of fake news on Instagram — and the company’s role in spreading it — is crucial.
“It’s really important for people to know what’s happening,” he said, “so that they’re skeptical of what they see on Instagram in the same way that they’re gradually starting to question some of what they see on Facebook.”