The social media site X, owned by transphobic billionaire Elon Musk, just released its first transparency report since Musk's takeover. The report shows that only 2,361 X accounts were suspended for hateful conduct this year, a 99.7% decrease from the number suspended in 2021, before Musk acquired the site in October 2022.
The lack of moderation reflects Musk’s willingness to let hate speech flourish on the site, even as advertisers flee.
The report, which looked at policy enforcement from January 2024 to June 2024, showed over 224 million reports of user policy violations. The reports resulted in over 5 million accounts being suspended.
Direct comparisons with the company's previous transparency reports are difficult. The last report published under the Twitter name was a 50-page document covering the final six months of 2021. During that period, Twitter said it received 11.6 million reports of accounts violating the site's user policies, took action against 4.3 million of those accounts, and suspended 1.3 million of them.
The current X report is far shorter, at only 15 pages. Most illuminating is its data on the actions X took against hateful conduct.
X’s policies on hateful conduct state, “You may not directly attack other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.” Such attacks include referencing historic acts of violence to intimidate; spreading stereotypes, slurs, and dehumanizing language; inciting harassment or discrimination; and displaying hateful images.
This year, X received nearly 67 million reports of hateful conduct. Yet X labeled or removed only 4.9 million of the reported posts (about 7.3% of the total) and suspended only 2,361 accounts (roughly 0.0035% of the total reported).
In other words, only 2,361 accounts were found to have violated the company's hateful conduct policies severely enough to warrant suspension from the platform, a 99.7% decrease from the number of accounts suspended in 2021.
This data reflects how the site has tolerated more hate speech and misinformation since Musk’s takeover.
In April 2023, the site quietly removed a line in its “Hateful Conduct” policy that gave trans people additional protections against digital abuse. Additionally, in October 2023, the site rolled back its moderation policies against misgendering, deadnaming, and slur usage in favor of general categories like “dehumanization,” “violent speech,” and “abuse and harassment.”
The report also suggests the vast majority of enforcement actions under X's hateful conduct and harassment policies were carried out by human moderators.
X also has a profit incentive for allowing hate speech to thrive on its platform. Although Musk has been steadily losing advertiser support on the site (an exodus he has publicly encouraged), he nonetheless makes money directly from neo-Nazi hate speech and far-right X accounts like Libs of TikTok. Libs of TikTok is known for publicly shaming LGBTQ+ individuals and directing massive hate campaigns against them, which often result in bomb threats and mass public harassment of children, educators, health professionals, politicians, and others.
Musk also fired much of the site's trust and safety team and made its API (the interface that allows software programs to communicate and exchange data with each other) harder for researchers to access, erecting barriers to data collection. This prevents researchers from reporting numbers that would illustrate the site's dwindling user count as people flee X for other platforms.
The report also reveals that X has complied with more government takedown requests internationally than it did when it was known as Twitter. It received about 72,000 such requests across all countries, with compliance rates ranging from 68% in Turkey to 80% in the European Union. The average compliance rate was 70%, much higher than the roughly 50% rate in 2021.
Theodora Skeadas, a former member of Twitter’s public policy team, told WIRED, “They might have enforced a certain amount of content. But … the numbers might be understating the severity of impact because of reduced capacity for manual review.”
Skeadas, who also helped build Twitter’s Moderation Research Consortium, says that with fewer staff, “automated systems might not be audited as regularly as they should.” This is important, she says, for “human rights defenders, journalists, women, protected demographics, race, ethnic, religious, minority groups. Those are the cases that generally receive special attention by public policy teams and teams that were ensuring safe spaces on the platform.”