It’s a sign that we’re in a whole new era of online safety: young users increasingly taking things into their own hands. You might call it “DIY Internet safety.” It’s not all good news, but it’s also not all bad (and DIY isn’t all there is to the new era – keep reading…).
Both the positive and negative aspects come through in a piece of in-depth reporting on the subject at Buzzfeed News. What Buzzfeed describes seems mostly upside, with the headline: “TikTok Has a Predator Problem. A Network of Young Women Is Fighting Back.” I mean, it’s exciting to hear that users are helping younger peers by engaging in online community policing, right?
The good stuff
It does seem really positive at first glance. These online safety mentors understand the limits of what social media content moderation can do “at scale” (e.g., Facebook has more than 1.5 billion daily active users; YouTube gets more than 500 hours of video uploaded by users every minute). These DIY protectors:
- are not waiting around for platforms to take action – they’re even pressuring newer platforms like TikTok to up their safety game
- are modeling self and community care for younger users (many of them under 13 on TikTok)
- are teaching young users what inappropriate behavior looks like
- are gathering evidence (screenshots) of inappropriate comments and messages from what appear to be predatory older users
- are making lists and “outing” those users on other platforms, such as Instagram and YouTube.
So, given all that, what could the downside be? The fact that self-appointed community police – or digital vigilantes – don’t always accurately identify abusers of rules and of people. And when they make false accusations in social media, they often make them very publicly. Even if they do that with the best of intentions, people get hurt.
When it’s not so good
Agency plus education is good; agency all by itself can go either way. As Buzzfeed reporter Ryan Broderick put it, it can be a cross-platform “free-for-all where young users weaponize dubious screenshots, defamatory callout videos, and whisper campaigns via group chats to deliver vigilante justice at dizzying speeds.” So if someone’s innocent, reputations can get destroyed at dizzying speeds too.
To lock in their effectiveness, these well-intentioned groups of DIY protectors need a set of standards or a code of ethics for taking action (for example, see comedian Franchesca Ramsey’s 6 rules for calling people out online here). Developing best practices, or a code of ethics, for DIY community care would make a great lesson in new media literacy, right?
[The only part of the Buzzfeed piece I struggle with is Broderick’s throwaway phrase, “In an era when the failure of social media giants to police their platforms….” Yes, it’s failure in the eyes of many, but we can’t forget that, in the eyes of many others, the platforms police too much or with too much bias; and in the eyes of still others, it should not be up to corporations to be cops, censors, or arbiters of free speech. All of which complicates content moderation. This is not an excuse; it’s a reality.]
Other signs we’re seeing
In any case, do you too see that we’re in a new era for online safety? Because…
- Media’s complex and shape-shifting. Checking assumptions about human behavior is hard enough when the behavior’s happening in physical spaces; it’s harder in media environments. We’re also seeing that no one person, expert group or organization can create safety for everyone – not even giant organizations with huge revenue like Facebook and YouTube (the bigger the platform, the harder the problem is to solve). Why? For one thing, because so far it’s really hard for machine learning algorithms to keep up with the fast-changing, highly innovative behavior of young human beings; algorithms need a whole lot of examples to “learn,” and by the time they’ve been fed examples of new speech and behaviors, youth speech and behaviors have already changed.
- We’re seeing the solution’s complex too. It needs all perspectives in the room – those of the people who write the algorithms, of the many different user groups (mentors, students, beneficiaries both resilient and vulnerable), of corporate executives, law enforcement, policymakers and caregivers (from parents to mental healthcare practitioners). Each perspective is crucial to problem solving, algorithm writing, safety feature design and policy making. More and more safety advocates are calling for collaborative rather than adversarial approaches to problem-solving – a defining characteristic of the “digital citizenship” on more and more minds around the world.
- Media peer-mentoring is very new – even newer than adults mentoring kids. The online safety field hasn’t traditionally been about teaching young people how to stay safe and keep each other safe online, and it has only just begun encouraging adults to mentor rather than control and surveil children’s media use. Having teens and young adults modeling and teaching safe social norms in media will reinforce older adults’ efforts to work and play with their children in digital environments. [Here are some great resources for adults’ media mentoring from the American Library Association.]
So we’re starting Phase 2 of this multi-phase, global social experiment around optimizing social media for all its users. Besides DIY safety, other signs include last year’s big data wakeup call; the growing public discussion about “recommendation engines” and rabbit holes (see sidebar); and new forms of user care, such as Facebook’s development of a content moderation “appeals” board. What are some other examples? Feel free to put them in a comment or tweet. It’s still the earliest of early days in our new media era. So there’s a whole lot of work to do, here in the Petri dish.
SIDEBAR: Speaking of ‘rabbit holes’
So about “recommendation engines” and rabbit holes. “Like all social media platforms, TikTok is optimized for engagement,” Buzzfeed reporter Ryan Broderick writes, using algorithms that “learn” what you like and show you more and more of it. “It also reacts in real time, delivering an endless stream of similar videos, even if you aren’t logged in,” he adds.
We’re not just talking about TikTok here. Those learning algorithms that deliver more and more content like what you just watched, liked or shared are also called “recommendation engines” or “recommendation systems,” and they’re just part of social media. If you keep clicking on what they turn up for you, you’re going down that rabbit hole. For young YouTube user Caleb Cain, who was recently profiled by New York Times writer Kevin Roose, it was “an alt-right rabbit hole,” as Caleb put it.
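For readers who like to see that mechanic spelled out, here’s a tiny sketch in Python – entirely my own toy illustration, not TikTok’s or YouTube’s actual code – of how an engagement-driven feedback loop can keep serving more of whatever a user just clicked on (each pretend “video” is just a topic value between 0 and 1):

```python
# Toy illustration of a recommendation feedback loop -- NOT any platform's real algorithm.
import random

random.seed(1)
pool = [round(random.random(), 3) for _ in range(200)]  # 200 pretend "videos"

def recommend(last_click, k=5):
    """Return the k videos in the pool most similar to the last thing clicked."""
    return sorted(pool, key=lambda video: abs(video - last_click))[:k]

last_click = 0.5  # the user starts somewhere in the middle of the topic space
for step in range(5):
    picks = recommend(last_click)
    # Assume the user clicks whichever pick grabs the most attention -- modeled
    # crudely here as the highest-value pick -- and the loop "learns" from that.
    last_click = max(picks)
    print(f"step {step}: recommended {picks} -> clicked {last_click}")
```

Run it and you can watch each click shape the next batch of recommendations – a crude stand-in for the “endless stream of similar videos” Broderick describes.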
To his credit, Caleb climbed back up. I’m not being political here; what I’m saying is that he was getting fed ever more extreme and conspiratorial content, and he had the intelligence to look for alternative views. This 26-year-old man honestly wanted to learn, not be indoctrinated or have his biases confirmed (Caleb tells his own story about this, as a new YouTube creator, here).
The story is a powerful example of three things about our media environment that are good to keep in mind: 1) how we don’t necessarily just go down the rabbit holes, though irresponsible reporting (not Roose’s) would suggest otherwise; 2) how long it can take to come back up and that it takes inquiring minds and courage to do so, to credit Caleb; and 3) how we need critical thinking not just about what we see and hear in today’s media; we also need to think critically about the algorithms behind what we see and hear. (The story also describes how really smart creators figure out how to out-game the people gaming the recommendation system!)
On that 3rd point, as an adviser to several platforms, I can tell you that they’re certainly applying critical thinking to this issue. We need to too – and to help our children do so (both the Roose and Broderick pieces are great teaching tools). Applying critical thinking is for their safety and wellbeing, not just their media literacy training. This too is a sign of the new era of safety – safety from the inside out (critical thinking, empathy and resilience), not just the outside in.
Related links
- Caleb Cain courageously telling his story about going down “the alt-right rabbit hole,” coming back up and using that experience to help peers avoid what he went through over the course of about 5 years. His channel, Faraday Speaks, now has more than 20,000 subscribers.
- About safety from the inside out
- My last post about TikTok and how, if policymakers want to regulate social media, they really need to think geopolitically