A vital piece of the global online safety puzzle has just fallen into place: Australia’s eSafety Commissioner’s Office this week unveiled its Safety by Design tools for Internet companies everywhere. They’re the outgrowth of extensive international research and consultation with people in industry, government, academia and advocacy, including youth and parents – a process eSafety started in 2018.
Think of the puzzle as a sort of Rubik’s Cube. It has to be 3D because of the technical “stack” of businesses that make the Internet possible, with all the moving parts on each layer – human, algorithmic, organizational – that help keep users safe. We users know the top layer best, but the layer we use can’t be the only one thinking about safety. On every level, the prevention-intervention spectrum needs to be considered. Prevention typically means legislation and education – of everybody on the receiving and providing end of digital media. Intervention includes everything from law enforcement to the algorithmic and human content moderation behind platforms to Internet hotlines, user helplines and regulatory action.
The interesting thing about the eSafety Office is that it not only fills both prevention and intervention roles (regulation, user education and intervening in cases of online abuse) but is also collaborative – across borders, sectors and fields of expertise – in developing solutions. The people there know that no single actor, whether a national government or even an industry coalition, can solve this puzzle alone. They also work the puzzle with industry, for industry, an approach that is unique for a regulator but, I believe, essential for our user-driven media environment. [See this about the principles behind the tools and the sidebar below this post to learn more about eSafety directly from Commissioner Julie Inman Grant.]
Though regulators in many countries have long told corporations what they must do, regulatory work has been short on the how-to. The missing piece, until now, was corporate education that helps companies assess, against a clear set of safety standards, the policies, products and systems they either have in place or need to develop. The cross-sector consultation eSafety conducted makes the tools applicable globally, not just for a particular country’s regulator. They come in two sets of modules: one for startups (<50 employees), with a primer on all aspects of user safety for development from the ground up, and one for mid-size to large companies (50+ employees) that works as an audit tool. The tool for established companies is truly comprehensive; it covers “structure and leadership,” “internal policies and procedures,” “moderation, escalation and enforcement” (including fulfilling legal obligations), “user empowerment” and “transparency and accountability.” The list alone sets a kind of standard. The eSafety Office says the aim in developing both tools was for them to be “realistic, actionable and achievable,” and both come with case studies and examples from internationally known companies.
There are so many other things to love about these tools: that they…
- Give early stage companies time to think through unintended consequences (not a feature of social media’s early days, right?!)
- Include a typology of online harms based on human rights principles as well as the lived experience of eSafety user help services
- Are designed to help both corporate leadership and product designers and managers
- Come with a commitment to ongoing iteration, based on collaboration with university business schools, computer engineering departments and programs in international and human rights law.
So yes, safety for users all over the world at scale is a bit of a Rubik’s Cube, but it’s actually getting less challenging, because, thanks to research and tools like these, we know better than ever what we’re dealing with, how best to get things done, who can help and with what skill sets. Plus, we have some fine models in place (see below) and great tools coming online. Clarity is spreading.
SIDEBAR: Updated prescription
While we’re on the subject of safety assessment tools for industry, let’s assess where we are as a field. What we’re really talking about is an ecosystem of Internet user care that spans the planet. I got prescriptive in an article for Medium a year ago, so building on the steps I laid out there, here’s an update:
If you’ve read this far, you’ve read the latest on the prevention side of a holistic approach that also includes intervention: the eSafety Commissioner’s Office. On the intervention end of the spectrum for the stack’s whole top layer (social apps and services, games, websites, etc.): in addition to the longstanding provisions of law enforcement, platform abuse reporting systems (of varying degrees of helpfulness) and content moderation, we now have user appeals in the form of an Oversight Board. It, too, was a missing piece – intervention way after the fact in the form of appeals against content moderation decisions. So far, it’s for Facebook users only. It needs to be cross-platform. A couple more prescriptions (I called them predictions) for that Board are here.
That type of intervention is important but, as I mentioned, way after the abuse or Terms of Service violation occurs. Arguably needed even more around the world is reasonably immediate help that goes beyond mere content deletion (the latter being the only help platforms can provide). This kind of help, which can obtain offline context for the issue seen online, is what eSafety, Internet helplines throughout Europe, NetSafe in New Zealand and more and more “traditional” child helplines provide. The world’s vulnerable Internet users need contextualized help, someone who can understand what in the victim’s life gave rise to the harmful online content. Only sometimes is getting the content taken down all they need. If so, perfect. Maybe platforms’ systems can help with that. But the giant platforms get so many non-actionable abuse reports (what they call false positives) that, probably more often than not, they won’t even get to that content deletion. Sometimes they will if a “trusted flagger” like a helpline provides the context (or verification) the platform’s moderation team needs to act on the report. So what I’m saying is, Internet users need a trusted flagger organization such as a helpline – one that the platforms have agreed to work with – in every region of the world, ideally every country (we don’t have one in the US). I call this the “middle layer” of help between the help vulnerable people or groups have on the ground (e.g., hotlines for suicide or violence prevention) and the platforms in the cloud – see this post for a bit more about that and this site with lessons learned from piloting a helpline in the US.
We have the first inklings of support for the people who care for the users: content moderators. Their new professional association, the Trust & Safety Professional Association, provides peer support, education and research.
We also have a brilliant model in the form of a working international coalition of companies addressing child sexual abuse material: the Tech Coalition. That cross-industry model needs to be applied to other forms of online harm faced by users of all ages, including hate speech, harassment and cyberbullying.
We now have the best possible framework for child and youth online safety: General Comment 25 on their digital rights – of participation and provision, as well as protection – adopted this past February by the UN Committee on the Rights of the Child.
The US needs a new model for Internet regulation. Though we have some quasi-governmental organizations (quangos) such as the National Endowment for the Arts and the National Labor Relations Board, we don’t have a regulatory one like the UK’s Ofcom. And – though it would be fascinating to explore – something like Australia’s centralized and collaborative eSafety Office may not work for this larger federal system. “Our existing regulatory tools…are not up to the task. They are not fast enough, smart enough, or accountable enough to achieve what we want them to achieve,” wrote University of Toronto law Prof. Gillian Hadfield in Quartz. She proposes “super-regulation” – “super” because it “elevates governments out of the business of legislating the ground-level details and into the business of making sure that a new competitive market of [licensed] private regulators operates in the public interest.” Other interesting proposals, in particular by author Tarleton Gillespie of Microsoft Research New England, are discussed under ‘Super-Regulation’ here.
Related links
- About the launch of the Trust & Safety Professional Association, an important development, wherein I quote a great tweet by legal scholar Evelyn Douek: “Americans want platforms to be places of open expression, but also misinfo removed, but also tech companies have too much power & can’t be trusted to make content decisions, but also the gvt would be worse.” Right?
- The prescriptive post I mentioned that’s as iterative as Safety by Design
- Some early predictions for the Oversight Board
- Of recent Internet-related dilemmas and developments: deplatforming heads of state and how meme culture gamifies reality (and longstanding institutions)
- Lessons learned from piloting SocialMediaHelpline.com for US schools