- On Tuesday, the Supreme Court heard oral arguments in Gonzalez v. Google, a case filed against Google by the family of an American student killed in a 2015 ISIS attack, which alleges that YouTube “aided and abetted” the attack by allowing its algorithm to recommend ISIS videos.
- Arguments considered whether Section 230 of the Communications Decency Act of 1996, a law that shields “interactive computer services” from being held liable for the speech third parties put on their platforms, gives Google immunity for its algorithm’s content recommendations.
- Multiple justices pointed to the need for Congress to address gaps in the law that have come to light as the internet has developed, with Justice Kavanaugh saying the court is “not equipped” to deal with the potential implications of narrowing protections.
The Supreme Court heard oral arguments Tuesday in Gonzalez v. Google, which addresses the question of whether an “interactive computer service” like Google can be held liable for content recommended by its algorithms under Section 230 of the Communications Decency Act of 1996.
Section 230 protects internet companies from being held liable as the “publisher or speaker” of information provided by a third party, while simultaneously protecting their ability to restrict material that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The case, brought by the family of Nohemi Gonzalez, a 23-year-old American student killed in a 2015 ISIS terrorist attack at a Paris bistro, alleges that YouTube “aided and abetted” the attack by allowing targeted recommendations of ISIS videos designed to recruit and radicalize members.
The lawyer representing Gonzalez, Eric Schnapper, drew a distinction between recommending content and publishing it. Section 230, he argued, only grants immunity when companies fail to remove objectionable content. By recommending content, the company is taking action to promote it, which the petitioners believe falls outside the bounds of Section 230.
Justices pressed Schnapper on the scope of this interpretation, questioning how it would work when algorithms are so central to billions of searches made every day.
“You can’t present content without making choices,” Justice Kagan said. Chief Justice Roberts and Justices Alito and Kavanaugh raised similar concerns about the potential for endless lawsuits over algorithms prioritizing certain content, whether defamatory or otherwise objectionable.
This concern was at the heart of Google’s defense. The company’s lawyer, Lisa S. Blatt, said that “all publishing implies organization.”
Blatt noted that the internet would have never flourished in its early stages if websites had been faced with the threat of constant lawsuits based on how they arrange content.
Previously, the Ninth Circuit applied to Google what lower courts have deemed the “neutral tools” test, which grants immunity when algorithms serve content using neutral rules based on user input.
Justices sought to clarify this test, seeking to find the point where a company would not be immune from liability for the content it recommends.
Schnapper argued that even a “neutral” algorithm could result in liability. He said it matters what the defendant does with the algorithm—in this instance, allowing it to recommend ISIS videos—not how the algorithm works.
Malcolm L. Stewart, Deputy Solicitor General of the Department of Justice, who represented the position of the United States, said that to be held liable, a company must recommend content that “violates applicable law” in the state where a lawsuit is brought. If an algorithm is inherently discriminatory and uses “illicit criteria”—as in the hypothetical of a job website such as Indeed showing higher-paying jobs to white applicants—it also would not be protected.
“When a platform prioritizes content, it is their own conduct [and] subject to liability,” Stewart said.
Blatt, on the other hand, said an algorithm that promotes objectionable content is still protected, because the speech belongs to the original poster, not the website.
Justice Barrett asked whether an algorithm that promotes exclusively pro-ISIS content would be protected, and Blatt affirmed that it would.
The test, Blatt said, is to see where the “harm” is generated. As long as it is not the website’s own conduct creating harm—e.g., creating a dating algorithm that refuses to match white people with black people—the website is protected.
The court appeared hesitant to make any significant determination on Section 230, instead angling its questions toward understanding whether it would be more beneficial for Congress to address gaps in the law that have come to light as the internet has developed.
“We’re not the nine greatest experts on the internet,” Justice Kagan said, yielding a laugh from observers.
Justice Kavanaugh similarly raised concerns about the implications a court ruling could have on the economy and on the functioning of the internet, citing amicus briefs filed in favor of Google. “We’re not equipped to account for this,” he said.
Congress has not yet explored certain new issues related to Section 230, like artificial intelligence-generated content, Justice Gorsuch noted.
Justice Barrett questioned whether it was even necessary to address the Section 230 question if the petitioners lose on the charge of “aiding and abetting” terrorism in tomorrow’s case, Twitter v. Taamneh, which weighs whether tech companies can be held responsible under the Antiterrorism Act for hosting ISIS content on their platforms.
The Twitter v. Taamneh case was filed by the family of Jordanian citizen Nawras Alassaf, who was killed in a January 2017 ISIS attack in Istanbul, against Twitter, Facebook, and Google.