The moments after Donald Trump was shot by a would-be assassin in Pennsylvania on July 13 were confusing for those on the ground. For those following events as they unfolded on X (formerly known as Twitter), things were no clearer, as misinformation flooded the platform.
Despite claims from X owner Elon Musk and CEO Linda Yaccarino that X would be a nimbler, more reliable source of information during breaking news events, it was neither. The platform’s AI-powered summaries of trending posts, a feature introduced after Musk laid off staff who handled misinformation, were a mess. Among the AI-generated summaries that appeared in people’s X feeds: speculation that the person shot wasn’t Trump but Vice President Kamala Harris, and that an actor from the movie Home Alone 2 was shot in the melee. (Both summaries may have been grounded in real-life events: Trump appeared in Home Alone 2, and Joe Biden recently misspoke by calling Harris “Vice President Trump.”)
Handling a fast-moving situation like the one that unfolded over the weekend is a difficult task at the best of times. But on a social media platform that saw the bulk of its staff cut loose after Elon Musk took over in 2022, things have gotten even harder. “A situation like [the assassination attempt] is a nightmare for content moderation on social media,” says Melissa Ingle, a former senior data scientist at the company who was fired after Musk took over. “In many ways it’s the perfect storm—high visibility, politically charged, and with potentially dire real-world consequences.”
Making matters worse, Musk’s decision last year to end free access to X’s API, which let third-party developers gather data and was a critical tool for researchers studying misinformation online, means it’s harder than ever for academic experts to get a high-level look at how and why bunk stories travel online. “Mainstream platforms have rolled back trust and safety teams and made it harder for researchers to access data,” says Joe Bodnar, an independent disinformation analyst. “At the same time, politics have become increasingly hostile. It’s a bad mix.”
While X came in for the most criticism for its handling of the raft of speculation that followed the shooting, with users falling for trolls impersonating the shooter and sharing AI-altered images that showed a Secret Service agent smiling next to a bloodied Trump, every platform had its own failings, says Bodnar. “It’s obviously not easy for platforms to surface credible information when there isn’t any,” he says. Still, some platforms fared better than others. Meta-owned Facebook seemed largely to highlight links to traditional news websites. Meta’s other social network, Threads, has a policy of limiting the spread of political content, though Ingle says the Trump assassination attempt was “the first time I saw political information break containment [on Threads] with misinformation and speculation rampant.”
But X still stood out from its rival mainstream networks, Bodnar says, as posts there seemingly “acted as megaphones for conspiracies.” And the misinformation started from the top: Musk himself seemed to hint, without evidence, that there may have been a “deliberate” failure by the Secret Service to stop the shooter.
According to a NewsGuard analysis, there were 308,000 mentions of the word “staged” on X between Saturday and Sunday, which marks a 3,924% increase over the previous two-day period. In addition, there were 83,000 mentions on X of the phrase “inside job”—a 3,228% increase over that same time span.
By late Sunday, X’s algorithm was still pushing conspiracies into people’s feeds, including a likely false claim, originating on the online message board 4chan, that a police sniper was not granted permission to shoot the would-be assassin, Thomas Matthew Crooks, until after he had fired at Trump.
That one post highlights the larger challenge that social media platforms face when confronted with major news events like this: the drive to automate humans out of the process comes with costs. “No content moderation system could fully contain the spread of misinformation when an event like this occurs, but X appeared completely overwhelmed,” says Ingle. “This showcases the weakness of their minimal strategy to contain misinfo, which amounts to mostly AI and a small amount of human moderation.”
Update, July 15, 2024: This article has been updated with information from a NewsGuard analysis.