‘Fake news’ regains its megaphone after Las Vegas shooting

Google and Facebook blame widely read misinformation on ‘algorithm errors’

Reporters wait for official statements on the shooting in Las Vegas: the incident gave fake news writers and trolls the opportunity to spread misinformation anew by exploiting how Facebook and Google handle threads and posts. Photograph: Isaac Brekken/The New York Times

When they woke up and glanced at their phones on Monday morning, Americans may have been shocked to learn that the man behind the mass shooting in Las Vegas late on Sunday was an anti-Trump liberal who liked Rachel Maddow (recently described by the New Yorker as "Trump's TV nemesis") and MoveOn.org, that the FBI had already linked him to Islamic State, and that mainstream news organisations were suppressing the fact that he had recently converted to Islam.

They were shocking, gruesome revelations. They were also entirely false – and widely spread by Google and Facebook. In Google's case, trolls from 4Chan, a notoriously toxic online message board with a vocal far-right contingent, had spent the night scheming about how to pin the shooting on liberals. One of their discussion threads, in which they wrongly identified the gunman, was picked up by Google's "top stories" module, and spent hours at the top of the site's search results for that man's name.

In Facebook’s case, an official “safety check” page for the Las Vegas shooting prominently displayed a post from a site called Alt-Right News. The post incorrectly identified the gunman and described him as a Trump-hating liberal. In addition, some users saw a story on a “trending topic” page on Facebook for the shooting that was published by Sputnik, a news agency controlled by the Russian government. The story’s headline claimed, incorrectly, that the FBI had linked the shooter with the “Daesh terror group”.

Google and Facebook blamed these failures on algorithm errors. A Google spokesman said, “This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.” A Facebook spokesman said, “We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.”


But this was no one-off incident. Over the past few years, extremists, conspiracy theorists and government-backed propagandists have made a habit of swarming major news events, using search-optimised "keyword bombs" and algorithm-friendly headlines. These actors are skilled at reverse-engineering the ways that tech platforms parse information, and they benefit from a vast real-time amplification network that includes 4Chan and Reddit as well as Facebook, Twitter and Google.

Misleading information

Even when these campaigns are thwarted, they often last hours or days – long enough to spread misleading information to millions of people. The latest fake news flare-up came at an inconvenient time for companies like Facebook, Google and Twitter, which are already defending themselves from accusations that they have let malicious actors run rampant on their platforms. On Monday, Facebook handed congressional investigators 3,000 ads that had been purchased by Russian government affiliates during the 2016 campaign season, and it vowed to hire 1,000 more human moderators to review ads for improper content. (The company would not say how many moderators currently screen its ads.) Twitter faces tough questions about harassment and violent threats on its platform, and is still struggling to live down a reputation as a haven for neo-Nazis and other poisonous groups. And Google also faces questions about its role in the misinformation economy.

Part of the problem is that these companies have largely abdicated the responsibility of moderating the content that appears on their platforms, instead relying on rule-based algorithms to determine who sees what. Facebook, for instance, previously had a team of trained news editors who chose which stories appeared in its trending topics section, a huge driver of traffic to news stories. But it disbanded the group and instituted an automated process last year, after reports surfaced that the editors were suppressing conservative news sites. The change seems to have made the problem worse – this year, Facebook redesigned the trending topics section again, after complaints that hoaxes and fake news stories were showing up in users’ feeds.

There is also a labelling issue. A Facebook user looking for news about the Las Vegas shooting on Monday morning, or a Google user searching for information about the wrongfully accused shooter, would have found posts from 4Chan and Sputnik alongside articles by established news organisations like CNN and NBC News, with no obvious cues to indicate which ones came from reliable sources.

More thoughtful design could help solve this problem, and Facebook has begun to label some disputed stories with the help of professional fact checkers. But fixes that require identifying "reputable" news organisations are inherently risky because they open companies up to accusations of favouritism. (After Facebook announced its fact-checking effort, which included working with the Associated Press and Snopes, several right-wing activists complained of left-wing censorship.)

Editorial judgment

The automation of editorial judgment, combined with tech companies’ reluctance to appear partisan, has created a lopsided battle between those who want to spread misinformation and those tasked with policing it. Posting a malicious rumour on Facebook, or writing a false news story that is indexed by Google, is a nearly instantaneous process; removing such posts often requires human intervention. This imbalance gives an advantage to rule-breakers, and makes it impossible for even an army of well-trained referees to keep up.

But just because the war against misinformation may be unwinnable doesn’t mean it should be avoided. Roughly two-thirds of American adults get news from social media, which makes the methods these platforms use to vet and present information a matter of public importance.

Facebook, Twitter and Google are some of the world's richest and most ambitious companies, but they still have not shown that they're willing to bear the costs – or the political risks – of fixing the way misinformation spreads on their platforms. (Some executives appear resolute in avoiding the discussion. In a recent Facebook post, Mark Zuckerberg reasserted the platform's neutrality, saying that being accused of partisan bias by both sides is "what running a platform for all ideas looks like.")

The investigations into Russia’s exploitation of social media during the 2016 presidential election will almost certainly continue for months. But dozens of less splashy online misinformation campaigns are happening every day, and they deserve attention, too. Tech companies should act decisively to prevent hoaxes and misinformation from spreading on their platforms, even if it means hiring thousands more moderators or angering some partisan organisations.

Facebook and Google have spent billions of dollars developing virtual reality systems. They can spare a billion or two to protect actual reality.

– (New York Times service)