…and what that has to do with content moderation
Fizz just might come to be known as the kinder social media app. I know, you’re probably thinking, “Yeah, sure.” And I do understand. But, from some great reporting by TechCrunch, I picked up on two design features that, together, could give this new app an edge where user safety’s concerned. The app has…
- The local factor. When someone joins, their experience of Fizz is just about their own university campus. They’re joining with other students in their school community, because they have to have their school’s .edu email address to join. Presumably that includes staff and faculty, but that’s not clear yet.
- Peer moderation. This is the kicker. Other apps focused on location certainly didn’t have any edge on civility, but Fizz isn’t just moderated by people at the app’s back end or offices. It pays students on the user’s own campus to moderate the content posted by peers at their school.
Now, the key to making sure peer moderation keeps Fizz harmless and anxiety-free is who Fizz hires to do that local moderation work. Will the app screen for emotional intelligence and communication skills in hiring? Let’s hope!
The importance of peers
But why is peer moderation so important? If you’re on Twitter, you may’ve seen me tweet about how crucial offline context is for content moderation. The safety edge is all about offline context.
See, the problem is that the content moderators behind social media platforms, the people who make decisions about what posts and comments are harmful – or not – have no “real world” context for the reported content they’re seeing on their screens.
For example, if someone posts a picture of a plain-old pig, moderators far away who’ve probably never been to that school (or maybe even that country) have no way of knowing whether the poster is suggesting something about the campus police, implying something mean about a roommate or spreading the word about Saturday evening’s fraternity pig roast.
Silly example, but you get the idea. Content moderators looking at a gazillion abuse reports a day have no way of getting that offline context – especially not in the seconds they have to make a take-down decision.
Why content moderation is so hard
We know that, where teens and young adults are concerned, cyberbullying and hate speech are very specific to their social life at school, so it’s almost impossible for app moderators – much less machine-learning algorithms – to get it right.
The vast majority of abuse reports that apps get are what the industry calls “false positives” – not actionable by a platform for any number of reasons. For example, maybe the user…
- is abusing the system to bully someone (reporting them to get them banned);
- is reporting something that may feel cruel to them but doesn’t violate Terms of Service or community rules and so can’t be deleted;
- is reporting the content inaccurately (so the report doesn’t trigger human review); or
- is just reporting an unflattering photo or something else they just don’t like (which also doesn’t break the rules and can’t be actioned).
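For readers who like to see the logic spelled out, here’s a rough, purely hypothetical sketch (in Python) of how a report-triage step might sort reports into actionable and non-actionable piles. The categories, field names and outcomes are my own illustrative assumptions, not Fizz’s or any other platform’s actual rules.

```python
# Hypothetical sketch of abuse-report triage, illustrating why most reports
# end up non-actionable ("false positives"). Categories and thresholds here
# are invented for illustration, not any platform's real policy.
from dataclasses import dataclass


@dataclass
class AbuseReport:
    reported_text: str              # the content being reported
    reason: str                     # reporter-selected category, e.g. "harassment"
    breaks_stated_rules: bool       # does it actually violate Terms/community rules?
    reporter_is_retaliating: bool   # reporting someone just to get them banned
    category_matches_content: bool  # was it reported under the right category?


def triage(report: AbuseReport) -> str:
    """Return a rough moderation outcome for one report."""
    if report.reporter_is_retaliating:
        return "dismiss: report is itself abusive (retaliation)"
    if not report.category_matches_content:
        return "dismiss: miscategorized, never reaches human review"
    if not report.breaks_stated_rules:
        # May still feel cruel to the reporter, but policy gives no grounds to act.
        return "dismiss: hurtful or unflattering, but no rule broken"
    return "escalate: send to a human moderator for a take-down decision"


if __name__ == "__main__":
    reports = [
        AbuseReport("unflattering photo of me at the party", "nudity", False, False, False),
        AbuseReport("nobody sit with her at lunch", "harassment", True, False, True),
        AbuseReport("ordinary post by someone I dislike", "hate speech", False, True, True),
    ]
    for r in reports:
        print(triage(r))
```

Run it and only one of the three reports even reaches a human – which is roughly the point: most of the reporting volume never turns into a take-down, yet every report still has to be looked at somehow.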
Content moderation, whether by humans or algorithms, is ridiculously complicated and nuanced. That’s because social content is real-world contextual, individual (often unique to the people involved) and situational (unique to what’s happening at a particular moment in time). Plus social norms and communication are constantly changing. Young users in particular are always innovating, creating their own responses to popular culture in speech, art, interaction and workarounds.
Youth challenges algorithms (!)
Machines aren’t great at nuance. Algorithmic moderation – the kind that catches violating content before it’s seen and definitely doesn’t wait for users to report the problem – is new, too. Platforms used to be entirely reactive, only (sometimes) responding to abuse reports. But where algorithmic moderation is concerned, it’s pretty impossible to teach an algorithm to “learn” from, i.e., find patterns in, data that is constantly changing. Anything close to a pattern keeps changing, which equals no pattern. And for the big platforms, we’re talking about data in just about every language on the planet. They may localize the algorithm to an individual country, but every country still has its own sub-cultures in terms of user ages, ethnicity, dialect, etc.
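To make that “no stable pattern” problem concrete, here’s a tiny, purely illustrative Python sketch of a word-pattern filter that “learns” flag terms from last semester’s reports and then quietly misses the same cruelty once the slang changes. Every term, post and function name in it is invented; real moderation systems are far more sophisticated, but they chase the same moving target.

```python
# Illustrative-only sketch of why pattern-based moderation decays as language
# shifts. The "slang" terms and posts are invented examples.
from collections import Counter


def learn_flag_terms(labeled_posts: list[tuple[str, bool]], min_count: int = 2) -> set[str]:
    """Collect words that appear repeatedly in posts humans flagged as harmful."""
    counts = Counter()
    for text, was_flagged in labeled_posts:
        if was_flagged:
            counts.update(text.lower().split())
    return {word for word, n in counts.items() if n >= min_count}


def is_flagged(post: str, flag_terms: set[str]) -> bool:
    """Flag a post if it contains any previously learned term."""
    return any(word in flag_terms for word in post.lower().split())


# "Training" data: last semester's harmful posts leaned on one coded insult.
last_semester = [
    ("she is such an npc honestly", True),
    ("total npc energy from that guy", True),
    ("pig roast saturday at the house", False),
]
flag_terms = learn_flag_terms(last_semester)

# This semester the same cruelty shows up under brand-new slang the filter
# has never seen, so the learned pattern silently stops working.
this_semester = ["she is so mid, ratio her", "he took the biggest L today"]
for post in this_semester:
    print(post, "->", "flagged" if is_flagged(post, flag_terms) else "missed")
```

Both of this semester’s posts sail right past the filter – not because anyone got kinder, but because the vocabulary moved.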
The same goes for human moderators, except that their mental health also needs to be cared for. They deal with the same lack of real-world context, as well as the same nuance and complexity. They still have no way to get at intention, no way to know whether a reported post is a joke among friends, a mean joke, outright cruel, contextually appropriate or just no big deal.
That’s why it’s refreshing to hear about an app that’s building knowledge of local context into its content moderation. If a student reports what appears to be rule-breaking content and the moderator is conscientious, the moderator can find out the offline (campus) context for that post, so they’re likely to make an accurate, or at least informed, decision about whether that content needs to come down. This is why I’m an advocate for internet helplines such as the network of them in Europe and Netsafe in New Zealand. We need one in the United States. But as long as we don’t have one, we’re dependent on the apps themselves to do the best they can. Which is usually not great.
What else would really help
If Fizz the startup can figure out how to monetize and grow, it will become one model for how to design for online safety. [For a bit of history: You may (or may not) remember that a college campus is where Facebook got its start, and look what happened there! Back then, user safety was barely an afterthought. Facebook was certainly thinking about it when I joined its Safety Advisory Board in 2009, the year the advisory was formed. Now the company says that half of its some 80,000 employees worldwide are devoted to the safety of its apps’ users.]
With all the regulatory scrutiny, from California to DC to London to Sydney to Wellington, a startup now has to be thinking about safety. Sadly (at least if Fizz’s peer moderation succeeds), there isn’t a Fizz for people in middle and high school. And so far it’s only on a handful of campuses in the US (see the TechCrunch piece for details). So teens are going to have to wait for civility-focused social apps that lower parental anxiety levels. What would help is if the US had a social media helpline – like the ones in Australia, Brazil, Europe and New Zealand – that they could call if they ran into trouble online. But that’s another story. For now, I’m rooting for Fizz.
Related links
- “Why Online Speech Gets Moderated”: The Washington Post provides a great primer in Q&A format, courtesy of Bloomberg. It’s focused on Twitter, for reasons of newsworthiness, but the info goes for all social media platforms. It’s also mostly about the US context but touches on what’s happening in China, Europe, India and Russia as well.
- About TikTok’s earliest days: Musical.ly, pre-TikTok in the US
- From Snapchat’s early days, how it was more a break from the self-presentation and performance fatigue that social media had come to be for teens, here and here; what set Snapchat apart from other apps back then; and more generally about anonymity vs. self-presentation fatigue here
- News barely noticed: About two social apps that went away: Secret’s demise in 2015 from TechCrunch and a post on Reddit that Google Play app store removed Secret last spring; and, from their early days, a serious safety issue at Secret that surfaced and what set Whisper apart in its early days – with plenty of other coverage in those posts’ “Related links”