Two pivotal cases before the U.S. Supreme Court this week could decide whether social media companies can be held liable for promoting incendiary content, including material tied to terrorist activity, that has been allowed to circulate widely on their platforms.

The forthcoming rulings could sharply narrow Section 230 of the 1996 Communications Decency Act and throw out longstanding federal protections that keep big tech companies from being sued over content published by independent users.

In both cases, the court's nine justices must decide whether the federal statute still applies when the tech companies' algorithms target specific users with questionable content while also spreading terrorist influence to a massive digital audience.

The case being argued Tuesday, Gonzalez vs. Google, arose out of a lawsuit filed by the family of 23-year-old Nohemi Gonzalez — an American student who was among 130 people killed in a 2015 Islamic State attack in Paris.

The lawsuit, filed under the Antiterrorism Act, accuses YouTube owner Google of allowing barbaric videos to be posted to the popular platform, where they went viral as algorithms recommended the content to users. For many years, terror groups have recognized the power of social media as a recruiting tool, experts say.

A divided 9th Circuit Court of Appeals previously sided with Google, holding that Section 230 protects big tech even in cases where a platform has recommended inflammatory content, so long as the algorithm treats that content the same way as all other content.

In its ruling, however, the lower court acknowledged that Section 230 “shelters more activity than Congress envisioned it would” and suggested that U.S. lawmakers, not the courts, move to clarify the scope of the law.

After the decision, the Gonzalez family appealed to the U.S. Supreme Court, which agreed to hear the liability case last year.

Last month, Google filed a brief with the Supreme Court warning against "gutting" the statute, arguing that stripping the law's protections would lead to both more censorship and more hate speech on the Internet.

Efforts to remove or limit the law have been the subject of intense debate in Washington for the past several years.

In April, Rep. Marjorie Taylor Greene, R-Ga., introduced a bill to replace the law that protects online platforms from liability for user-posted content.

The bill, titled the 21st Century Free Speech Act, would abolish the law that shields online platforms from liability for content posted by third parties, known as Section 230. In its place, the bill would establish a "liability protection framework" requiring "reasonable, non-discriminatory access to online platforms" under a "common carrier" model comparable to telephone, transportation and electric services.

For decades, social media companies have been immune from most civil actions in such cases, though Section 230 presumes that companies have protocols in place to remove objectionable material. Still, the law's reach is much different today than in the early days of social media, when Internet business models were largely driven by subscriptions.

“Now most of the money is made by advertisements, and social media companies make more money the longer you are online,” said Eric Schnapper, an attorney for the victims' families in the Gonzalez case and in Twitter vs. Taamneh, which the high court is scheduled to hear Wednesday. The latter case could determine whether Twitter, Facebook and Google can be held liable for aiding and abetting international terror groups that have used the platforms to radicalize a new generation of young, impressionable militants.

The Twitter case stems from a federal lawsuit filed by the Taamneh family — relatives of Nawras Alassaf, a Jordanian national who was among 39 killed in a 2017 terrorist attack in Istanbul.

The family accuses Twitter and the other tech giants of inadequately responding to extremist content despite being aware that the platforms were being deliberately used to spread disinformation campaigns that glorify bloodshed while inflaming ethnic and religious tensions worldwide.

The cases before the high court this week are similar to a class-action lawsuit filed in Kenya in late 2022 that seeks more than $2 billion from Facebook over accusations the social media giant is profiting from content that promotes ethnic and political violence throughout Africa.

