What a year it has been. And what a week. Or two, almost. I’ll start with the latest, because it was a telling cap-off to 2022:
On December 8, three of us – Eirliani Abdul Rahman, a survivor of and activist against child sexual abuse; Lesley Podesta, representing the Young & Resilient Research Center at Western Sydney University; and I – resigned from Twitter’s Trust and Safety Council, a voluntary group of some 100 nonprofit organizations around the world that Twitter had formed in 2016 to help the platform keep its users as safe as possible.
Why did we resign? Because of the many signs that safety on Twitter has tanked since Elon Musk took over the platform, including:
- First and foremost, the first independent data on the rise in hate speech since the takeover was published, by the Center for Countering Digital Hate and the Anti-Defamation League. They found that hate speech against Black Americans and gay men had jumped 202% and 58%, respectively, and that antisemitic tweets were up 61% in the two weeks following Elon Musk’s takeover – even as Twitter claimed that safety remained a “top priority” (see also this to advertisers). But also…
- The layoffs and resignations of thousands of Twitter employees, including those who worked on moderating harmful content – among them half to two-thirds of the moderators who worked on mitigating child sexual abuse material (CSAM), according to Channel News Asia – as well as the reportedly inhumane way the layoffs were handled, none of which bodes well for the safety of the humans who use the platform.
- Twitter’s mass reinstatement of accounts that had violated its Rules against harmful content, including “Abuse/Harassment,” “Hateful Conduct” and threats of “Violence.”
- Twitter’s lack of communication with its Trust and Safety advisers, from the Musk takeover to last Monday, when Twitter disbanded the Council. The silence was unprecedented. Patricia Cartes, the Twitter employee who was responsible for designing the Council (she resigned in 2018), told me with sadness that dismissing the Council signaled “the end of checks and balances for Twitter safety.”
Almost immediately after the three of us resigned, our Twitter feeds were swamped with vitriol, veiled threats and mis(or dis)information, including the claim or belief that we were employees of, not voluntary advisers to, Twitter – and were therefore somehow responsible for harm to children on the platform. Just to give you a feel for what this looked like….
Hate but also love
One person with 1 million followers tweeted that we should be jailed, naming (or “tagging”) us in their tweet. Musk retweeted to his 122 million followers the tweet that targeted us. Another Twitter user, one with 426,000+ followers, tagged us and Musk in a tweet saying, among other things, that we “should not be able to walk away,” and Musk responded with “Indeed. Shame on them!” His tweet got 77,000 likes and nearly 8,000 retweets. Within 48 hours we were getting threats in email and on other platforms as well as on Twitter (the ADL explains here how this digital-age-style “stochastic harassment” works). Now we could add direct personal experience of Twitter hate to the bulleted list above. But many, many people have experienced much worse, so here’s why I’m telling you this: It needs to be clear that…
Here was the owner and CEO of a major social media platform not only doing nothing to correct the misinformation and vitriol on his platform but reinforcing and spreading it, responding to misinformation about us and tagging us in the process. Intentionally or unintentionally, Musk was weaponizing his followers – even as his company was claiming that Twitter was making safety a “top priority.” San Francisco-based startup adviser Jonathan Howard tweeted the “reasoning” behind all this, with a “touch” of sarcasm, way back on November 7: “people don’t like the hate speech right? so next we lock like 90% of the content moderation team out. really flood the place with n-words, while saying nothing’s changed about the policy (cuz we didn’t change *the policy*, see where we’re going? ehhh?).”
Within a few days, Twitter informed the Council that it was moving its meetings with members up from December 14 and 15 (two meetings, to cover all the members’ time zones) to Monday the 12th. Then, within an hour of the meeting’s scheduled start, Twitter abruptly dissolved the whole Council with a three-paragraph email. No meeting at all. The silence on Twitter’s side was now permanent. Sixteen members of the ex-Council issued a joint statement about Twitter’s action on the 13th.
But here’s the thing: I loved Twitter. I was lucky, I see now. I hadn’t experienced the “cesspool” I’d heard so many people call it. It just became that – in my feed, at least – almost overnight. I’d joined way back in 2008 with the help of a tech educator, following everybody she followed. That kernel grew into a respectful professional community I loved: nearly 10,000 researchers, online safety advocates, educators and journalists. I learned a great deal from and with them and never felt unsafe – until I resigned from Twitter’s own safety advisory.
And friends and strangers whom I can’t name, for their own protection, reached out to us with love, on and off Twitter. One very kind person I can name is A.H. (@a_h_reaume), because of her public thread on the platform, which provided support by correcting all the mis/disinformation. It’s a very visual representation of our experience and a perfect example of the giant overlap between media literacy and safety. A.H.’s timely support was such a relief in the middle of a hate speech storm. Please check it out.
And so much more happened in 2022
Twitter is quite the outlier, with social media platforms now trending toward greater safety. Examples of the trend include greater regulation: Europe’s Digital Services Act and Digital Markets Act have entered into force, the UK’s Online Safety Bill appears close to passage, and California’s Age-Appropriate Design Code Act was signed by Gov. Gavin Newsom in September. The US federal legislation called the Kids Online Safety Act has gained momentum lately but hasn’t yet passed, with dozens of civil society groups warning that the bill could actually reduce kids’ and teens’ safety by “encouraging more data collection on minors and preventing access to topics such as LGBTQ issues,” the digital rights organization Fight for the Future reports. Here is what the Family Online Safety Institute has to say about KOSA.
Also on the regulatory front, national online protection regulators in three countries have even started something historically unprecedented: a transnational regulators’ network, which I wrote about last month.
Another top-of-mind topic in child safety and regulatory circles this year has been age verification and authentication for child online safety, though not without controversy. Rightly, I feel, civil liberties advocates worry about children’s data privacy if companies acquire and store minors’ identity data. Fortunately, in support of data minimization, at least one young company, Yoti, which has partnered with a number of Internet companies, has developed authentication technology that deletes a person’s data immediately after estimating their age.
Then there was the sadness of tens of thousands of tech company layoffs so close to the end of the year, which I wrote about last month – certainly not just Twitter’s, but also Meta’s, Stripe’s, Intel’s, Robinhood’s, Lyft’s, Snap’s and Shopify’s, with hiring freezes at Apple and Amazon.
What’s ahead
We’ll want to watch what happens, not just with Twitter, but also with the Twitter alternatives more and more people are migrating to, certainly people in my professional network:
Post and Spill (the latter started by former Twitter employees) represent the familiar centralized model (top-down, centrally controlled), while Mastodon is a more seasoned, decentralized alternative. Mastodon went live in 2016 as part of “the fediverse,” a blend of “federated” and “universe,” which means there is no central owner: it’s a network of interconnected servers (TechCrunch does a great job of explaining it). I think it’s a fascinating experiment and, well, I’ve joined Mastodon (here). It works a lot like Twitter and is infinitely more civil than Twitter has been for me of late. The people I’ve encountered there have been very gracious in helping this newcomer.
One of the interesting subjects that appeared in a few cryptic tweets tagging us Council resignees the week before last was a light-touch argument between Musk and former Twitter CEO Jack Dorsey about the future of content moderation. Musk’s vision appears to be reach, not speech – reducing the reach of negative tweets rather than deleting them, in other words de-boosting and de-monetizing them. Nothing inherently wrong with that, but it’s an old model.
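To make “reach, not speech” a little more concrete, here’s a minimal sketch of how reach-based moderation might work – purely my own hypothetical illustration in Python, not Twitter’s actual system; the `Post` fields, the `deboost_factor` and the example posts are all invented for the purpose:

```python
# Hypothetical sketch of "reach, not speech": a flagged post stays on the
# platform, but its ranking weight is cut and no ads are served against it,
# instead of the post being deleted outright.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    flagged: bool      # e.g. output of a hate-speech classifier (not shown)
    base_score: float  # normal engagement-based ranking score

def effective_score(post: Post, deboost_factor: float = 0.1) -> float:
    """Down-rank flagged posts rather than removing them (de-boosting)."""
    return post.base_score * (deboost_factor if post.flagged else 1.0)

def monetizable(post: Post) -> bool:
    """De-monetize flagged posts: serve no ads against them."""
    return not post.flagged

# The timeline keeps every post but pushes flagged ones far down.
timeline = [Post("friendly post", False, 0.8), Post("hateful post", True, 0.9)]
timeline.sort(key=effective_score, reverse=True)
for post in timeline:
    print(f"score={effective_score(post):.2f} ads={monetizable(post)} {post.text}")
```

Note that the speech stays up; only its distribution and earning power change – and the decision about what gets de-boosted is still made centrally, which is part of what makes this an old model.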
Dorsey’s is, to me, more plausible going forward: I’d simply call it decentralized content moderation. In the immediate future, we’ll have more and more moderation by communities themselves, or a hybrid of central moderation and server-based moderation such as on Discord and Twitch. More exciting is what’s likely at least five years down the line: individual users moderating for themselves. Not all by themselves – games, platforms, apps, communities will still need to support them in this – but individuals will be given their own machine-learning algorithm for content moderation, one that they can “teach,” choosing the data they want to feed it themselves, based on their own values. Parents will help children do this, and valuable family conversations will happen around this important activity.
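What might “teaching” your own moderation algorithm actually look like? Here’s a minimal, purely hypothetical sketch in Python using scikit-learn – my own illustration, not any platform’s actual system. The example posts and the “show”/“hide” labels are invented, and a real system would need far more data, care and safeguards:

```python
# A user labels a handful of posts according to their own values, a small
# model learns those preferences, and new posts are filtered locally.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts this user has "taught" their filter with ("hide" = unwanted).
labeled_posts = [
    ("You people are all idiots and should log off forever", "hide"),
    ("Total garbage take, unfollowing this clown", "hide"),
    ("New research out today on teen media habits - thread below", "show"),
    ("Congrats on the launch! Happy to help spread the word", "show"),
]
texts, labels = zip(*labeled_posts)

# The personal filter: plain TF-IDF text features + logistic regression.
personal_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
personal_filter.fit(texts, labels)

# Applied client-side to an incoming timeline.
incoming = [
    "You are all idiots, log off",
    "New thread below on today's teen media research",
]
for post in incoming:
    verdict = personal_filter.predict([post])[0]
    print(f"[{verdict}] {post}")
```

The point is the control flow rather than the model: the labeled examples never have to leave the user’s device, which is exactly what would distinguish this from today’s platform-side moderation – and what a parent and child could sit down and do together.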
Yeah, I know this sounds pretty blue-sky (Dorsey actually founded a startup of that name). But I believe it’s the direction in which content moderation is headed. Nonprofit organizations – and, I hope, eventually for-profit ones as well – will share with each other the code they need to provide their stakeholders (Internet users) with these customizable algorithms. That, plus the build-out of the largely missing layer of user care – the middle layer, between companies in the cloud and help on the ground, that Europe calls Internet helplines and that the United States mostly lacks – is what lies ahead for a better, safer Internet. Because platform content moderation will never be enough to keep users safe. For more on that, see the last two bulleted items on this page of our site, SocialMediaHelpline.com.
Sending love and wishing you and yours the happiest of holidays and a great 2023!
Related links
- Academic research on the rise of hate on Twitter since it changed hands: “Musk Monitor: Under Musk, Hate Speech Is Rising” from The Fletcher School at Tufts University (added after this post was published)
- My new Mastodon account
- And the drama on Twitter continues, as the Washington Post reports, with Musk saying he’ll abide by a Twitter poll he ran on the platform for 12 hours. The question was whether he should step down as head of Twitter, and 57.5% said “yes.” If he does step down, he’ll still be Twitter’s owner, of course, and it’ll be interesting to see who would want to be the CEO cleaning up the chaos of the past 7+ weeks.
- My interview for MediaCentar Sarajevo (readable with the help of Google Translate) and a sampler of news coverage of our resignation in five other countries: NPR, Grid, Business Insider, i24 in Germany, Channel News Asia in Singapore and Ars Technica – I haven’t figured out how to link you to my interviews with the BBC’s World Service and Radio 4.
- Statement from the Center for Democracy and Technology, our fellow former member of the Trust and Safety Council, condemning Twitter’s dissolution of its Council
- “A Political Theory of King Elon Musk,” by New York Times columnist Ross Douthat (picking up on his final sentence, though, it’s beginning to look like it’s not so “good to be king”)
- The New York Times’s coverage of the independent data on Twitter hate speech, the article that sparked this whole experience for Eirliani, Lesley and me early this month
- Had to add this (much later, in March 2023): Asked by Embedded this week if his Twitter experience has changed since Elon Musk took over (Oct. 27, 2022), millennial writer and artist John Paul Brammer said, “It’s definitely worse. It feels like Twitter is a mangy animal with rabies shambling around a public park, foaming at the mouth, having periodic spasms and waiting to die. But it hasn’t yet. I will never leave Twitter. My account will be buried with it.”
Henry Frencis says
I really enjoyed reading your blog post on where Twitter went. It was entertaining and had a lot of good points. I also found it very informative. It was interesting to find out that the tweet that was the cause of your resignation was a comment from Elon Musk. I have always been fascinated with Elon Musk and his views on the future of humanity, so this was an interesting tidbit to learn about.
Adrienne Katz says
We are so sorry to hear of the hounding of the council and the way it was amplified. Your guidance and expertise are much valued and we will be a willing audience wherever you post.
Anne says
Thank you for your kind words, Adrienne, as well as for your beautiful work.
R&D Collier says
Oh my!
What you have shared is indeed alarming and also confirms your great expertise. Thank you for all your collaborative work globally. We look forward to learning more about Mastodon. And we share a phrase from a poem, now a hymn, by the late Peter Henniker Heaton that provides comfort in these challenging times: “God sets the pace.”
Love
Your Michigan Fans
Anne says
Thank you so much, Michigan fans!