Two Years of X Under Elon Musk

It has now been two years since Elon Musk’s controversial acquisition of X (formerly Twitter), an event that sent ripples across the digital landscape. On the night he completed the purchase, Musk declared, “The bird is freed,” signaling his intent to transform the platform into a haven for absolute free speech. While X still technically has terms of service that ban hate speech and misinformation, enforcement has become inconsistent at best and nonexistent at worst. Musk’s sweeping layoffs gutted the Trust and Safety team, significantly reducing the resources available to moderate harmful content, and he reinstated accounts that had previously been banned for promoting hate or spreading harmful ideologies. In the weeks that followed, hate speech dominated news and media coverage of X, with reports indicating a dramatic increase.

According to the Brookings Institution, these troubling trends have been linked to coordinated trolling efforts originating from far-right platforms such as 4chan and the pro-Trump forum “The Donald.” This uptick starkly contradicts Musk's assurance that he would not let Twitter degenerate into a "free-for-all hellscape."

This troubling evolution raises a critical question: without safeguards, how does the platform address hate speech and misinformation?

Reporting Mechanisms 

One illuminating conversation I had with a peer offered a different perspective on this question. They shared that their own X feed wasn't filled with such content, so they personally didn't have a problem with it. As for others who might encounter harmful speech, they argued that as long as reporting mechanisms were in place, content moderation didn't necessarily require centralized oversight. On its surface, this seems plausible—users can report hateful content, and the system responds.

However, this argument overlooks the fundamental limitations of relying on individuals to counteract systemic problems. Reporting hate speech isn't enough when extremist ideologies are allowed to proliferate unchecked. If platforms enable hate speech among and between extremist groups, they foster and empower communities built on division and harm.

Take, for example, the case of Imane Khelif, the gold medal-winning boxer who became the target of a cyberbullying campaign on X. Public figures—including politicians and cultural icons—made libelous claims about her gender identity. For an average user, reporting each hateful comment would be tedious but manageable. For someone of Khelif’s level of visibility, it's practically impossible. Without robust moderation, the sheer volume of hate speech threatens the viability of having an online presence at all.

This illustrates a critical flaw in X’s approach: it places the burden of combating hate speech on individuals, rather than on the platform itself. For many users, this makes maintaining an online presence unsafe or unsustainable.  

Two Years of Musk's X

How has this conversation evolved over the past two years? Has Musk’s leadership on X fueled the far-right and worsened the problem of hate speech?

The consensus? X has done little to address these issues and, in many ways, has exacerbated them. Over the past two years, there have been widespread reports of hate speech and misinformation running rampant. Researchers and watchdog organizations have noted alarming increases in slurs, harassment, and extremist content. As one observer succinctly put it: “It’s extremely toxic.”  

One of the most striking shifts is how toxicity has become a viable livelihood on the platform. Engagement metrics—retweets, likes, and views—are now the currency of success, and controversy is the most efficient way to cash in. As one observer put it, people post patently false or inflammatory content purely to generate hate-clicks, turning outrage into profit.  

In fairness, Musk has indeed touted measures aimed at improving transparency and trust since the takeover:

  • X released its first global transparency report, which included a “post violation rate” metric indicating users are less likely to encounter content that violates platform rules.

  • X suspended nearly 464 million accounts for violating rules against platform manipulation and spam.

While these measures represent progress, critics argue they are insufficient. Eirliani Abdul Rahman, co-founder of YAKIN and a former member of Twitter's Trust and Safety Council, described the report as “laudatory but insufficient.” Notably, Brazil temporarily banned the platform outright nationwide, and users have begun migrating to alternatives like Threads and Bluesky.

The concern is clear: looser enforcement of Twitter's terms could transform the platform into a breeding ground for online abuse and intolerance against minorities and other vulnerable groups. That concern is all the more urgent given the current political environment and the spread of misinformation.

Section 230

Who then can be held responsible and accountable for hateful conduct online?

This question remains unsatisfyingly unanswered under current legislation. Under Section 230 of the 1996 Communications Decency Act, tech companies are free to set their own terms of use and service, giving them broad authority over how they respond to issues like hate speech. The exceptions under federal law are limited: companies can be held liable when they directly create illegal content, fail to warn users about illegal activity, or breach a contract. This legal backdrop, coupled with courts' historical hesitancy to encroach on free speech rights, even in the face of blatant hate speech, grants companies an outsized role in addressing online abuse. Elon Musk, who openly identifies as a free-speech absolutist, appears to have little intention of curbing hate speech on Twitter. Following his acquisition, he dismantled key teams within the organization: the curation team, which fought disinformation; the human rights team, which safeguarded journalists and activists; and the ethical AI team, which combated algorithmic bias.

A platform like Twitter, left unchecked, becomes fertile ground for hostility and intolerance to thrive. Stripping away its protective frameworks and guidelines doesn’t make Elon Musk a champion of free speech; it turns him into an unwitting enabler of harm against vulnerable communities. This harm isn’t limited to the confines of the digital space. Hate speech online often spills into the real world, fueling violence and discrimination against marginalized groups and further deepening societal divides.  

John Perry Barlow’s Declaration of Independence of Cyberspace imagines the internet as a realm free from government control, self-regulated and untethered. But the reality is far messier. The digital world and government are deeply intertwined, blurring lines of authority and accountability. The question is no longer whether the law can step in, but how it must, to ensure that the digital space fosters justice, not harm.

In this context, the debate over Musk’s X transcends free speech—it centers on the responsibility of platforms to create environments where all users feel safe. As participants in this digital ecosystem, we must decide whether to accept the status quo or demand a more inclusive, accountable future.  
