When authorities identify shooters in incidents of national interest, some people scour social media for those shooters’ digital footprints, conjuring theories about their ideologies and motives. That happened when former President Donald Trump was targeted Sept. 15 in a second assassination attempt, following the first in July.
The suspect, Ryan Wesley Routh, allegedly pointed an AK-style rifle with a scope toward Trump, who was golfing at his course in West Palm Beach, Florida. Members of Trump’s U.S. Secret Service detail fired at Routh, who fled in an SUV and was arrested in a neighboring county, according to The Associated Press.
The FBI said the incident “appears to be an attempted assassination” of Trump, the second known attempt on his life. In a statement, Trump said he was safe and well.
Accounts that appeared to belong to Routh were removed from Meta’s platforms and X, prompting speculation about the motives of Meta CEO Mark Zuckerberg and X owner Elon Musk.
“Mark Zuckerberg’s Facebook just interfered in the 2024 election by wiping Ryan Routh’s account,” a Sept. 15 X post said. “Users can no longer access the page to see all his anti-Trump and pro-Kamala-Biden posts. Did the FBI order them to?”
Another Sept. 15 post says: “(Elon Musk), king of free speech, suspended Ryan Routh’s account but has yet to suspend the account of the NH Libertarians who called for VP Harris and other politicians to be shot multiple times today.”
Other social media posts shared similar claims.
But this is not an unusual practice for social media companies such as Meta and X. These companies remove not only accounts associated with people suspected of carrying out mass violence or terrorist attacks, but also content that glorifies such acts.
PolitiFact contacted X and Meta for comment but received no response.
Routh has been charged with two federal gun crimes: possession of a firearm by a convicted felon and possession of a firearm with an obliterated serial number. A bond hearing is scheduled for Sept. 23, and a probable cause hearing or arraignment for Sept. 30.
What do we know about Routh’s social media accounts?
Routh’s X account was created in January 2020, but his earliest posts, which included posts about the war in Ukraine, were from April 2022. X suspended the account Sept. 15.
His Facebook account appeared to have been inactive since 2017 and was suspended immediately after his name was publicly released, Shayan Sardarizadeh, a senior journalist at BBC Verify, wrote on X.
Sardarizadeh also wrote on X that, “When it comes to domestic US politics, Routh’s views seem to be incoherent. He said he initially supported Trump in 2016 and then turned against him.”
Sardarizadeh added that Routh claimed to have, at various points, backed Democratic and Republican presidential candidates Bernie Sanders, Tulsi Gabbard, Andrew Yang, Nikki Haley, Vivek Ramaswamy, Joe Biden and Kamala Harris.
CNN reported that authorities are working to obtain search warrants on social media accounts that appear to be connected to Routh.
Social media platforms enforce policies for taking down dangerous individuals’ accounts
Meta, which owns Facebook and Instagram, states in its Violence and Incitement Policy:
We remove content, disable accounts and work with law enforcement when we believe there is a genuine risk of physical harm or direct threats to public safety. We also try to consider the language and context in order to distinguish casual or awareness-raising statements from content that constitutes a credible threat to public or personal safety. In determining whether a threat is credible, we may also consider additional information such as a person’s public visibility and the risks to their physical safety.
A Meta spokesperson referred us to its Dangerous Organizations and Individuals policy, which the company follows after mass shootings and other major acts of violence. Under the policy, Meta removes any accounts identified as belonging to the perpetrator.
This is the same protocol Meta followed after the first assassination attempt against Trump in July.
“In an effort to prevent and disrupt real-world harm, we do not allow organizations or individuals that proclaim a violent mission or are engaged in violence to have a presence on our platforms,” the policy reads. “We assess these entities based on their behavior both online and offline, most significantly, their ties to violence.”
The company also prohibits content that glorifies, supports and represents dangerous organizations and individuals.
X has similar rules under its safety and cybercrime policies. Its section on Perpetrators of Violent Attacks states:
We will remove any accounts maintained by individual perpetrators of terrorist, violent extremist, or mass violent attacks, as well as any accounts glorifying the perpetrator(s), or dedicated to sharing manifestos and/or third party links where related content is hosted. We may also remove Posts disseminating manifestos or other content produced by perpetrators.
Meta and X have enforced these policies in previous violent incidents. In 2019, a shooter livestreamed on Facebook his attack on a Christchurch, New Zealand, mosque that killed dozens of people. Facebook took down the shooter’s Facebook and Instagram accounts, as well as the video. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware,” a Facebook spokesperson said at the time.
X, then known as Twitter, said at the time that it had suspended a suspect’s account.
After a Highland Park, Illinois, shooting in 2022, social media platforms removed the shooter’s accounts. Twitter told Bloomberg that it proactively removed content violating its rules, “including posts glorifying violence.”
In 2022, links to footage of a mass shooting at a Buffalo, New York, supermarket circulated on Meta’s platforms and Twitter. Guy Rosen, Meta’s chief information security officer, said during a May 17, 2022, press call that the company designated the event as a “(violent) terrorist attack.”
Rosen said that triggered an internal process to identify and remove accounts and content that violates the company’s policies, including copies of the video, the attacker’s manifesto, and “any content that praises or supports or represents the event or the shooter.”
On May 21, 2024, Reuters reported that Meta suspended the Facebook account of the suspected shooter in the attack on Slovak Prime Minister Robert Fico.
In another incident, after NBC News questioned X’s policies, the social media company suspended an anti-LGBTQ+ account that belonged to a man who on Aug. 23, 2023, shot and killed a California business owner over a Pride flag. (Police shot and killed the shooter that day.) CNN also reported in 2015 that Facebook and Twitter suspended the accounts of Bryce Williams, who shot two Virginia TV journalists.
In the past, the social media giants have faced criticism for how they moderate content following violent incidents. Victims’ relatives in the Uvalde, Texas, school shooting sued Meta in part for “knowingly promoting dangerous weapons to millions of vulnerable young people,” The Washington Post reported.
PolitiFact researcher Caryn Baird contributed to this report.
This article was originally published by PolitiFact, which is part of the Poynter Institute.