A year ago, Elon Musk announced that he wanted to buy Twitter to clear it of bots and turn the de facto public town square into a place for unfettered free speech. Social media experts worried that would mean the platform would stop moderating what users post, and warned that the consequence of Musk's stated absolutism would be that the platform would be overrun with violent and hateful content. It turns out they were right.
After he took over the platform, Musk insisted that Twitter's "strong commitment to content moderation remains absolutely unchanged." But around the same time, Twitter fired most of its trust and safety staff, the team responsible for keeping content that violates the company's policies off the platform.
The result, perhaps unsurprisingly, was that hate speech on Twitter surged dramatically in the weeks following the takeover, according to a new study from the University of Southern California's Information Sciences Institute, Oregon State University, UCLA, and UC Merced, which also found that there had been no decrease in the number of bots on the platform. It is yet another data point in a series of changes that have taken Twitter from being a global public square to a platform where racists, bigots, and propagandists are more empowered than ever.
"A few months ago it was the first place you looked for insight," says Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), a nonprofit that tracks disinformation. "It was always about finding communities of mutual interest and seeing what the most interesting people around the world were saying about things and what the news was. And that is just destroyed."
Twitter did not respond to a request for comment about its moderation practices since Musk's takeover or what systems it has in place.
Researchers found that the increase in hateful content began almost immediately after Musk's takeover, as users began to test the boundaries of what would get past Twitter's new moderation regime.
"The day that [Musk] officially took over the platform, a lot of right-wing figures had started tweeting anti-LGBTQ rhetoric, specifically the term 'groomer,'" says Kayla Gogarty, research director at Media Matters for America, a media watchdog group, referring to the conspiracy theory that LGBTQ people prey on younger people by grooming them. "[These accounts] were basically saying that they were testing the waters of Twitter's content moderation," she says.
Twitter's policies do not allow "slurs and tropes" that "intend to degrade or reinforce negative or harmful stereotypes about a protected category."
"There seems to have been a clear indication that people anticipated that Musk would reduce moderation," says Keith Burghardt, a computer scientist at USC's Information Sciences Institute and one of the coauthors of the paper. "But it's clear that hate speech didn't decline immediately after Elon Musk bought Twitter, suggesting that whatever moderation he did was not enough."
Even before it reduced the size of its moderation teams, Twitter wasn't particularly quick to remove hateful content, according to Tal-Or Cohen Montemayor, founder and executive director of CyberWell, a nonprofit that tracks anti-Semitism online in both English and Arabic.
Data collected by CyberWell found that though only 2 percent of anti-Semitic content on social media platforms in 2022 was violent, 90 percent of that violent content came from Twitter. And Cohen Montemayor notes that even the company's standard moderation systems would likely have struggled under the strain of so much hateful content. "If you're experiencing surges [of online hate speech] and you have changed nothing in the infrastructure of content moderation, that means you're leaving more hate speech on the platform," she says.
Civil society organizations that used to have a direct line to Twitter's moderation and policy teams have struggled to raise their concerns, says Isedua Oribhabor, business and human rights lead at Access Now. "We've seen failure in those respects of the platform to actually moderate properly and to provide the services in the way that it used to for its users," she says.
Daniel Hickey, a visiting scholar at USC's Information Sciences Institute and coauthor of the paper, says that Twitter's lack of transparency makes it hard to assess whether there was simply more hate speech on the platform, or whether the company made substantive changes to its policies after Musk's takeover. "It is quite difficult to disentangle, often because Twitter is not going to be fully transparent about these types of things," he says.
That lack of transparency is likely to get worse. Twitter announced in February that it would no longer allow free access to its API, the tool that allows academics and researchers to download and interact with the platform's data. "For researchers who want to get a more extended view of how hate speech is changing, as Elon Musk is leading the company for longer and longer, that is certainly much more difficult now," says Hickey.
In the months since Musk took over Twitter, major public news outlets like National Public Radio, the Canadian Broadcasting Corporation, and other public media outlets have left the platform after being labeled as "state-sponsored," a designation that was formerly used only for Russian, Chinese, and Iranian state media. Yesterday, Musk reportedly threatened to reassign NPR's Twitter handle.
Meanwhile, actual state-sponsored media appears to be thriving on Twitter. An April report from the Atlantic Council's Digital Forensic Research Lab found that, after Twitter stopped suppressing these accounts, they gained tens of thousands of new followers.
In December, accounts that had been previously banned were allowed back on the platform, including right-wing academic Jordan Peterson and prominent misogynist Andrew Tate, who was later arrested in Romania for human trafficking. Liz Crokin, a proponent of the QAnon and Pizzagate conspiracy theories, was also reinstated under Musk's leadership. On March 16, Crokin falsely alleged in a tweet that talk show host Jimmy Kimmel had featured a pedophile symbol in a skit on his show.
Recent changes to Twitter's verification system, Twitter Blue, which lets users pay to get blue check marks and more prominence on the platform, have also contributed to the chaos. In November, a tweet from a fake account pretending to be pharmaceutical giant Eli Lilly announced that insulin was free. The tweet caused the company's stock to dip almost 5 percent. But Ahmed says the implications of pay-to-play verification are much starker.
"Our analysis showed that Twitter Blue was being weaponized, particularly being taken up by people who were spreading disinformation," says CCDH's Ahmed. "Scientists, journalists, they're finding themselves in an incredibly hostile environment in which their information is not achieving the reach that is enjoyed by bad actors spreading disinformation and hate."
Despite Twitter's protestations, says Ahmed, the study validates what many civil society organizations have been saying for months. "Twitter's strategy in response to all this massive data from different organizations showing that things were getting worse was to gaslight us and say, 'No, we've got data that shows the opposite.'"
Twitter Really Is Worse Than Ever - WIRED