A Texas law under review by the Supreme Court could make it harder for tech companies to remove many kinds of violent, hate-filled content from their sites — including a racist manifesto linked to the suspect in last weekend’s mass shooting in Buffalo, N.Y.
The statute, which makes it illegal for large social media platforms to “censor” users or their posts based on “viewpoint,” is among a growing set of barriers facing companies such as Facebook, Google and YouTube as they try to police problematic messages. The Supreme Court may rule this week on a tech industry request to block the Texas law from taking effect.
Meanwhile, Twitter’s own anti-hate efforts face an uncertain future under the company’s planned purchase by Elon Musk, who has called himself a “free-speech absolutist” and said he opposes “censorship that goes far beyond the law.”
Musk has said he would still take down content that’s illegal or incites violence, and the Texas law includes exceptions for “unlawful expression” and “specific threats of violence” against people based on factors like race, religion or national origin. But companies including Facebook, Google and Twitter have used their hate policies to take down content that doesn’t clearly violate any U.S. laws, such as insults aimed at Black Americans, immigrants, Muslims, Jews or transgender people — and now, those efforts could become legally perilous.
Facebook, Twitter and the Amazon-owned streaming platform Twitch may have even violated the Texas law when they took down the white supremacist manifesto that the Buffalo shooting suspect is believed to have posted online, tech industry lawyer Chris Marchese said in an interview. He said the manifesto is “absolutely” covered under the law, known as HB 20.
“The manifesto is written speech and even though it is vile, extremist and disgusting speech it is nevertheless a viewpoint that HB 20 now protects,” said Marchese, the counsel at the industry group NetChoice. The group, which represents companies like Facebook, Google and Twitter, filed an emergency appeal Friday to Supreme Court Justice Samuel Alito seeking to block the Texas law, with a ruling expected as early as this week.
A federal appeals court last week allowed the Texas law to take effect immediately, even before judges finish weighing the merits of the statute.
Civil rights groups say the online companies need to do much more to scrub hate from their platforms — citing Buffalo as an example of the consequences of failure.
Minority communities in particular would suffer if online companies water down their content moderation policies or readmit people they have banned, NAACP President Derrick Johnson said in an interview.
“We cannot as a society allow for social media platforms — or broadcast or cable news — to be used as tools to further tribalism, diminishing democracy,” he said. “That is what happened leading up to World War II and Nazi Germany. We have too many lessons in the past we can look to to determine it is not healthy for communities, it is not safe for individuals.”
The office of Republican Texas Attorney General Ken Paxton didn’t respond to requests for comment about how tech companies’ removal of the Buffalo suspect’s manifesto — along with a livestream of the shooting — would be litigated under HB 20. Attempts to contact Musk were also unsuccessful, even as he began to take flak for failing to comment publicly about the Buffalo shooting or social media’s role in the attack.
At the very least, the Texas law means that users will be able to sue platforms that try to block the spread of what the companies consider harmful messages — leaving it for a judge to decide whose interpretation of the statute is correct.
“It kind of doesn’t matter what any of us think of what counts as viewpoint or doesn’t,” said Daphne Keller, director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center. “It only matters what a whole bunch of different local judges in Texas think.”
Under the law, social media platforms with 50 million or more active monthly users could face fines of $25,000 for each day they impede certain viewpoints protected by the law.
“You’re suddenly increasing the risk of lawsuits dramatically, and that’s the real problem with the law,” said Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy who has written two books about online speech. Fear of lawsuits, he said, means that platforms would err on the side of leaving up content even if it might violate their own policies against hate speech or terrorism.
“So if you’re a rational platform trying to avoid defending an action, you’re not going to take [a post] down, or you’re going to be much more hesitant to take it down,” he said.
Before passing HB 20, Texas lawmakers voted down a Democratic amendment that would have allowed removal of material that “directly or indirectly promotes or supports” international or domestic terrorism, which could have applied to the Buffalo manifesto and livestream.
Texas Democratic state Rep. Jon Rosenthal, who introduced the amendment, said Wednesday that the Buffalo shooting shows the need for such a provision, while faulting Republicans for blocking the measure. “It’s very alarming what folks are willing to do to line up with their party instead of what’s right and just,” he told reporters on a press call. “And right now we’re seeing the effects of that. … Exactly what we talked about is exactly what we’re seeing right now.”
The mass shooting “is a tragic reason why tech companies need robust moderation policies — to ensure that content like this gets as little dissemination as possible,” said Matthew Schruers, president of the Computer and Communications Industry Association, which joined NetChoice’s appeal.
The Texas law — and a similar Florida law, SB 7072, championed by Republican Gov. Ron DeSantis that has been blocked by a federal judge — “ties hands of digital services and puts Americans at greater risk,” Schruers said. (Other Republican-controlled state legislatures have also introduced bills to prohibit alleged viewpoint censorship, including Michigan and Georgia.)
Paxton and other supporters of the Texas law argue it’s intended to protect individuals’ ability to express their political viewpoints — particularly for conservatives who allege that large tech companies have censored them. Those include former President Donald Trump, who was banned by the major social media platforms after a throng of his supporters attacked the Capitol on Jan. 6, 2021.
Social media companies have spent years adjusting their approaches to hate speech and violence after past violent mass shootings, including a pair of attacks at two mosques in Christchurch, New Zealand, that left 51 people dead in 2019. The gunman in both attacks — who identified with white supremacist ideologies — livestreamed one shooting on Facebook and posted his manifesto online.
The major platforms signed onto the “Christchurch Call” after the attacks, pledging to “eliminate terrorist and violent extremist content online.” The pledge is implemented by the Global Internet Forum to Counter Terrorism, which is funded by its founding members Facebook, Microsoft, Twitter and YouTube to fight online extremism.
Even with that pact and the companies’ content moderation policies in place, extremist videos still slip through, including a link to the Buffalo shooting suspect’s livestream shared on Facebook and clips of the video that surfaced on Twitter. Both platforms removed the content after POLITICO notified them.
Jonathan Greenblatt, the CEO of the Anti-Defamation League, said social media platforms have a responsibility to quickly remove racist, white supremacist and antisemitic speech that starts on their sites and can lead to off-line violence.
“It starts with crazy conspiracy theories about the ‘great replacement’ and it leads to 11 people being massacred in the synagogue in Pittsburgh,” Greenblatt said. “There’s a straight line from Pittsburgh to Buffalo. These things are not unrelated. They’re all actually very related.”