
The Bipartisan Attack on the Internet

By Ella Gonzalez

New York City, New York

Facebook CEO Mark Zuckerberg before the House Financial Services Committee, 2019. (ALM)

In October, a case arrived at the Supreme Court that threatens to strip away Section 230’s legal shield and completely upend the way internet and social media platforms have functioned for the past few decades.


In the lawsuit, Gonzalez v. Google LLC, Reynaldo Gonzalez alleged that the video platform YouTube (owned by Google) was liable for his daughter’s killing in the 2015 Islamic State attack in Paris, since the site’s algorithm had recommended videos that contributed to the radicalization of terrorists and aided ISIS. The lawsuit cuts to a core issue: to what extent is YouTube legally responsible for its decisions to publish, promote, or remove videos made by its users? This question concerns one of the central tenets upholding the social media landscape of the modern internet.


Amid the swirling debate over political censorship and misinformation on the internet, an unlikely scapegoat has emerged for all of social media’s ills: Section 230, a legal provision of the Communications Decency Act of 1996 (CDA) that was, and remains, crucial to the development of the internet as we know it.


Section 230 came about during the drafting of the CDA, which was intended to regulate “indecent” and “obscene” material on the internet. Some legislators raised concerns that the Act could stifle innovation on the newly formed internet. The result of their efforts to offset this burden was Section 230, which freed websites from liability in order to foster innovation. Much of the Communications Decency Act was struck down by the Supreme Court in 1997, but Section 230 remains, serving as a cornerstone of internet speech and moderation.


The essence of Section 230 is expressed in its first clause, which states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that any internet service that hosts third-party content, whether a social media site or the comment section of a blog, is not legally responsible for the content its users publish. Without this legal shield, maintaining a social platform with a large user base would quickly become untenable, as the site could be implicated in the publication of illegal or harmful content by any one of its users.


The second provision of Section 230 clarifies a site’s right to moderate third-party content without assuming liability for that content or for its moderation decisions. By eliminating liability for moderation, Section 230 empowers platforms to remove content that may be “objectionable” to their users without fear that doing so will expose them to legal backlash over illegal content they fail to identify and remove.


These protections have been crucial to the flourishing of online discourse over the past few decades, and to the accessibility of information and content. They ensure that platforms can create a speech-friendly environment if they so choose, or a heavily moderated one, granting platforms the freedom to design their user experience as they see fit.


Section 230 is frequently invoked by Republicans seeking to correct the political bias and censorship they believe exists in social media moderation. A Texas law enacted in 2021, H.B. 20, prohibits sites with more than 50 million monthly active users from censoring users based on their viewpoints. The law was blocked by the Supreme Court in May, but was upheld by the Fifth Circuit in September. Fifth Circuit Judge Andrew Oldham justified the ruling as protecting the free speech rights of individuals, writing, “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”


An appeals court upheld the controversial Texas social media law. (Getty Images)

In reality, the law stifles discourse, compelling social media companies to conform to the government’s view of how they should moderate content on their platforms. This, in turn, means that the government seeks a say in which viewpoints are expressed online and how: a chilling overextension of authority into territory protected by the First Amendment. The case now proceeds to the Supreme Court, where it will put platforms’ right to moderate content to the test.


Last year, Justice Clarence Thomas commented on Section 230 and moderation rights, writing about the dangers of giving social platforms such as Twitter “enormous control over speech.” This raises another question: do social media corporations have a First Amendment right to make moderation or censorship decisions at their own discretion, or should they be treated as “common carriers,” providers of a service so essential that the government may mandate their impartiality?


On the other side of the political aisle, Democrats have floated the possibility of repealing Section 230 for a variety of reasons. Previously, Democratic politicians targeted Section 230 by claiming, falsely, that its removal was needed to address illegal actions by social media sites and their users. Now, they cite the proliferation of misinformation as a reason to remove the liability shield and grant the government greater power over online speech.


In September, President Biden commented on supposed misinformation in Facebook political ads, saying that “I, for one, think we should be considering taking away the exception that they [social media sites] cannot be sued for knowingly engaged on [sic], in promoting something that's not true.” This “exception,” of course, is the core of Section 230, and its removal would deal a devastating blow to free speech on the internet. To entrust the government with the authority to set rules combating misinformation is to cede to it vast discretionary power over speech, given how subjective “misinformation” is in a climate as polarized as that of the United States today.


Indeed, this subject is especially pertinent in light of a report published on October 31 by The Intercept. The report details efforts by the Department of Homeland Security to work with social media companies on strategies for addressing misinformation and disinformation on their platforms. These strategies were tailored, rather worryingly, to the needs and desires of the DHS; Facebook, for example, created a portal for the DHS to report disinformation and submit takedown requests. According to the report, the DHS plans to target “inaccurate information” surrounding “the origins of the COVID-19 pandemic and the efficacy of COVID-19 vaccines, racial justice, U.S. withdrawal from Afghanistan, and the nature of U.S. support to Ukraine.”


If the U.S. government is seeking to control how content on social media shapes public perception, the protections of Section 230 are needed more than ever. Whatever the flaws of individual social media platforms, it is crucial that the American public retain the means for free and open national discourse, in which misinformation can be countered directly in a way that fosters more speech and the development of ideas rather than shutting speech down.
