By Ella Gonzalez
New York City, New York
When Elon Musk acquired Twitter on October 28, he immediately brought politics to the forefront of the discussion. Alleging a liberal bias in the site’s management, Musk promised a revamped system of content moderation and the reinstatement of banned accounts. His lofty goals of free speech, open discourse, and an end to censorship, however, contrasted sharply with the reality of Twitter’s first few weeks under new leadership. Erratic and impulsive, Musk fired top executives, removed crucial functionality, and implemented questionable new policies that caused brief, often comedic, bouts of dysfunction.
While Twitter has fallen short of Musk’s promised haven for free speech, the takeover gave him access to internal documents that reveal the inner workings of one of the world’s most popular social media platforms. In December, these documents were entrusted to a number of journalists, who published their takeaways over the course of the month in a series of Twitter threads known as the ‘Twitter Files’.
The first installment of the Twitter Files, published by Matt Taibbi, deals with the steps taken by Twitter’s executives and content safety teams to purge the platform of links to the infamous New York Post story that revealed information recovered from Hunter Biden’s laptop. Internal correspondence shows that staff attempted to justify the removal of the story under the site’s ‘hacked materials’ policy. Indeed, much of the thread shows little more than incompetence and the contradictory application of moderation guidelines to suppress reporting unfavorable to Twitter’s majority-Democratic staff. The second installment, published on December 8 by journalist Bari Weiss, details how Twitter developed systems to limit the visibility of accounts without removal or suspension, by excluding them from search results or algorithmic amplification. Techniques like these fall under the colloquial term ‘shadowbanning’: the suppression of a user’s content without informing the user of any official action.
If we set party politics aside, however, we see that the primary issue illuminated by the Twitter Files is neither general mismanagement nor a private company’s political bias. It is the willingness of companies like Twitter to cast aside their duty of transparency and use broad moderation powers to shape political discourse. More important still are the actions taken by both the Biden and Trump administrations to pressure the social media site into suppressing and removing speech that ran counter to their interests.
Taibbi’s first Twitter thread, published December 2, revealed early instances of state involvement in moderation. One tweet reads: “By 2020, requests from connected actors to delete tweets were routine. One executive would write to another: ‘More to review from the Biden team.’ The reply would come back: ‘Handled.’” Requests from the Democratic National Committee to remove certain accounts were also received and frequently honored, as were requests from the Trump White House; the latter had less impact, given Twitter’s political orientation, though they merit condemnation all the same.
On December 9, Taibbi published another thread, this one on Twitter’s decision to remove former president Donald Trump. Among the back-and-forth between Twitter executives is another significant revelation: Twitter took much of its moderation advice from the FBI, the Department of Homeland Security (DHS), and the Office of the Director of National Intelligence. Executives also met with officials at these agencies, evidently on a weekly basis; one screenshot of a message from Yoel Roth, Twitter’s former head of trust and safety, reads “Weekly sync with FBI/DHS/DNI re: election security.” This partnership extended to takedown requests for individual tweets that the FBI flagged as containing misinformation.
Further reporting on Twitter’s internal documents explores how Twitter and the state worked together to shape public sentiment in the early days of COVID-19, with little regard for fact or for the nuance such a complex subject demands. Tweets containing accurate information intended to ease fears of the novel coronavirus, or to point out legitimate reasons for vaccine caution, were removed or affixed with a ‘misleading’ label. Much of this is attributable to flaws in the site’s content-moderation bots and to the poorly designed systems used by contractors tasked with reviewing possible misinformation. Employees often engaged in lengthy debates over what to do about individual tweets, and while moderators would sometimes override incorrect reports, they often applied their own judgment to remove or flag legitimate information anyway.
Yet, this wasn’t enough for the Trump and Biden administrations. In the early days of the COVID-19 pandemic, the Trump White House met with numerous social media and tech companies, including Google, Facebook, and Microsoft, in addition to Twitter. One issue they sought to combat was “misinformation” about panic buying, despite the reality that panic buying was in fact occurring. In the summer of 2021, the Biden administration began to take issue with social media sites’ management of information. President Biden accused social media companies of “killing people” with vaccine misinformation. An email at Twitter summarizing a meeting with Biden officials states that “the Biden team was not satisfied with Twitter’s enforcement approach…they were very angry in nature.” Thus began a new phase of the federal government’s crusade against online speech questioning its official position on a variety of topics.
Why, one might ask, is this a problem? Shouldn’t the state protect the public, by serving as an ally to tech companies in the fight against misinformation? The novel partnership between the state and social media revealed by the Twitter Files presents two issues.
First, overly aggressive policing of misinformation leads to stagnation and forced consensus. Ideas once dismissed as unthinkable, such as the COVID-19 lab-leak theory, have gained credibility as they have been subjected to genuine scientific scrutiny. The state is an especially biased arbiter of truth, given its significant interest in shielding its own policies from criticism. And the mass removal of dissident content drives its proponents to other platforms, where they often form more isolated echo chambers that breed extremism and conspiracy theories.
Second, to approve of the government’s role in monitoring discourse on Twitter sets a dangerous precedent. A recent report by the Cato Institute details the rise of a technique it calls “jawboning”: “informal pressure” used by government officials, which can include “bullying, threatening, and cajoling,” to “sway the decisions of private platforms and limit the publication of disfavored speech.” This involvement is anything but benign, and to condone it is to legitimize the state’s role in policing online speech. We may then opt to widen the scope of government involvement and eventually to enshrine it in law, hastening the demise of First Amendment protections in the United States.
State involvement in online speech should prompt a crisis of purpose for the social platforms themselves. If we allow the state a role in online moderation of speech, we move further from the core tenets of independence and decentralization that created the modern internet and ushered in the Information Age. The rise of the internet has allowed citizens’ voices to be heard like never before. If we let ourselves be silenced in the name of our own safety, it may well all have been for naught.