Part 1 of this series was 2018 highlights. Now it’s time to shine a light on some interesting ideas and developments that people have surfaced for a better Internet in 2019 and beyond. We’ll look at a new middle layer of user care that’s being discussed by some – and gradually being built out, little by little, worldwide. Called for by the conditions of our new media environment, it’s a new layer of support that will increasingly enhance both moderation and regulation. [Note: Added later, a sidebar with perspective from Australia’s eSafety Commissioner, Julie Inman Grant. Please scroll down for that.]
So what is this middle layer thing? It’s a way of thinking about Internet safety which people have actually been discussing and building in various ways for most of this decade. We just haven’t thought of it as a whole. It’s like the proverbial elephant we know is there, but we’re all blindfolded and busy dealing with only our part of it – usually in the areas of prevention education, intervention (including law enforcement’s), content moderation or regulation – so we’re not seeing it as a whole. I suggest that, because the Internet encompasses our whole planet, we talk about the whole animal — think more strategically.
How is it a “middle layer”? We’ve been working the problems on only two levels: in the cloud and on the ground. The “cloud,” obviously, is platform-based solutions: content moderation, machine-learning algorithms for proactive abuse detection, transparency reports, and other self-regulatory tools. The “ground” is a whole array of traditional solutions to consumer harm, such as regulation, 911 & law enforcement, litigation, school discipline, hotlines providing care for specific risks and vulnerable classes (e.g., domestic violence, sexual assault, depression, suicidal crisis) and of course parenting.
All of that is needed (maybe – see this), but it’s not enough, because our new, fast-changing, global but also very personal media environment calls for new approaches to regulation and “consumer” care. We now need to be working consciously on three levels, and it’s the middle level where some really interesting thinking has been done. To keep things manageable here, I’m just going to look at regulation and moderation in this post, starting with the regulation-related part.
The ‘middle layer’ & regulation
“What we don’t hear nearly enough,” wrote University of Toronto law professor Gillian Hadfield in Quartz last summer, “is the call to invent the future of regulation. And that’s a problem.” Interestingly, even the platforms are on board with that, Wired reports. Facebook CEO Mark Zuckerberg has even announced an independent court of appeals for content decisions, according to Quartz. Whatever shape that takes, it’s the independent part that defines the middle layer – not part of what platforms do and not part of what government does – though it certainly works with both.
“Our existing regulatory tools…are not up to the task. They are not fast enough, smart enough, or accountable enough to achieve what we want them to achieve,” Dr. Hadfield added.
A lot could go wrong with regulation as we know it if it doesn’t meet some crucial new criteria: 1) it folds in technological expertise that keeps up with the pace of technological change, 2) it allows for adaptation and periodic review, or even obsolescence, if outpaced by that change, and 3) it draws on multiple perspectives – not just those of the companies being regulated but also those of the age and demographic groups regulation aims to protect, and those of researchers!
‘Super-regulation’
For that first criterion, Dr. Hadfield calls for “super-regulation” – “’super’ because it elevates governments out of the business of legislating the ground-level details and into the business of making sure that a new competitive market of [licensed] private regulators operates in the public interest.”
These private regulators fit the description of “middle layer” because they’d have to keep both governments and “regulatory clients” happy “in order to keep their license to regulate.” Keeping their clients happy means developing “easier, less costly and more flexible ways of implementing regulatory controls.”
This layer of competitive independent regulators has actually been developing for some time, Hadfield says. She gives examples such as Microsoft “leading efforts to build global standards for privacy and cybersecurity and Google submitting to AI safety principles.” Other, slightly different, parts of the new regulatory layer have been in development too. Researcher Tijana Milosevic’s new book, Protecting Children Online?, which I mentioned in Part 1, has examples such as Europe’s Safer Social Networking Principles, the CEO Coalition, and the ICT Coalition for Children Online, though these can’t be considered competitive independent regulators. Another entity that might be considered a new kind of “regulator,” though neither independent of government nor competitive, is Australia’s eSafety Commissioner’s Office. That last example fits into both the regulation and moderation categories of the middle layer, even though it’s part of government (more on the moderation part in a minute).
Some ideas offered by researcher and author Tarleton Gillespie – a “public ombudsman” or “social media council” – could fall into either the regulation or moderation category, or both. “Each platform,” he writes in Wired, “could be required to have a public ombudsman who both responds to public concerns and translates those concerns to policy managers internally; or a single ‘social media council’ could field public complaints and demand accountability from the platforms,” he adds, citing a concept fielded by the international NGO Article19. Other ideas he proposed included “an expert advisory panel…of regulators, experts, academics, and activists” given access to platforms “to oversee content moderation, without revealing platforms’ inner workings to the public” and “advisory oversight from regulators” in the form of a government agency that would review content moderation procedures of platforms. “By focusing on procedures, such oversight could avoid the appearance of imposing a political viewpoint.” That would be imperative because, to be effective, the middle layer has to be credible to all whom it serves. Independence from the platforms and, in some countries, government, is key.
The ‘middle layer’ & moderation
Think of content moderation as user care. It both protects users and defines “the boundaries of appropriate public speech,” writes Dr. Gillespie in his 2018 book Custodians of the Internet. The thing is, most of that protection and definition is internal to the platforms – to the cloud. It’s being done by private companies, not by governments and traditional care providers such as crisis hotlines or even 911 (on the ground).
There are several problems with that. The platforms have neither the context nor the expertise to provide real care. All they can do is delete content, which can help a lot in some cases, but – as Gillespie spells out in detail in his book – a lot of content doesn’t get deleted. Not necessarily intentionally on the platforms’ part and not only because of sheer volume, but because deletion decisions are sometimes really complicated. One person’s free speech is another person’s harm. And images that are common in one country can be extremely incendiary and dangerous in another. Potential impacts on the ground often can’t be imagined by platform moderators who’ve never been to the place where the incendiary content was posted.
Another problem is that what we’re talking about here is not mainly technology – even though so many of us (especially those not born with the Internet) think it is. It’s actually about our humanity. What happens on the Internet is rooted in people’s everyday lives and relationships, so moderating content is often, though not always, more like taking a painkiller than really getting at the source of the pain. Which is why Internet help needs to work closely with on-the-ground helpers such as parents, school administrators, risk prevention experts and mental healthcare specialists. They’re the ones qualified to get at the real issue and help alleviate the pain.
Filling the context gap
Because what’s happening on the ground, in offline life, is the real context of what we see online. In his hearing on Capitol Hill last spring, Facebook CEO Mark Zuckerberg suggested algorithms were getting better and better and would eventually solve the context problem. Yes, maybe for some kinds of content, but not cyberbullying, one of the most prevalent online risks for kids. Nothing is more contextual or constantly changing – within a single peer group at a single school, let alone among hundreds of millions of youth in every country on the planet. Even Facebook says in its latest Transparency Report that harassment is content of a “personal nature” that’s hard to detect and proactively remove without context. I agree, and suspect school administrators do too. It’s hard to understand what hurts whom – and what is or isn’t intended to hurt – without talking with the kids involved in even a single peer group, much less a whole school.
So a middle layer of moderation has been developing in the form of “Internet helplines” throughout Europe, in Brazil, and in Australia and New Zealand. Some have folded Internet content deletion into longstanding mental healthcare helplines serving children. Others became part of other longstanding charities such as Save the Children Denmark and Child Focus in Belgium. Some, like SaferNet in Brazil, were nonprofit startups created just for the Internet, and still others, such as Australia’s eSafety Commissioner’s Office and New Zealand’s Netsafe, are part of national governments. But the government-based ones are not regulators, and so far they seem to meet that crucial trust criterion of keeping apart from national politics.
Help in 2 directions
These helplines provide help in two directions: up to the platforms and down to users. To the platforms, the greatest service is context (because how can algorithms, or the people who write them, tell social cruelty from an inside joke that only looks cruel to an outsider?). Context makes abuse reports actionable so that harmful content can come down. The great majority of abuse reports the platforms get are what they call “false positives”: not actionable. There are all kinds of reasons for that, from users not knowing how to report abuse properly, to users reporting content that doesn’t (without context) seem to violate Terms of Service, to users abusing the reporting system. And then there’s the content that hurts but doesn’t violate the Terms. There is so much the platforms can’t possibly know, which is why they need help – and why they need to acknowledge that they do.
To users, Internet helplines offer help with things neither the platforms nor services on the ground can: understanding the app or platform where the harm has been done, where to go for the best on-the-ground help in their country, how to gather the evidence of online harm that the app or platform needs in order to take proper action, and when content should be sent to the platform for suggested deletion. I say “suggested” because obviously only the platforms can decide what violates their own Terms of Service, but that independent, trusted third party can cut through a lot of guesswork. A helpline has two kinds of information – the context provided by the user and an understanding of the platforms and their Terms – and it also has the platforms’ trust.
Trust is essential
I’m not saying these services are perfect; they’re only a very important start. There’s much work to do, including developing uniform best practices for helplines worldwide, and I hope that work stops being ad hoc and piecemeal. But the work has begun. It’s independent of the platforms and in many cases even of government (in some countries, having the middle layer supported by government makes sense; just not in our country, I’d argue). Trust is essential. To be effective, the operations in this new layer need the trust of users, platforms and governments.
As it’s built out, the middle layer will provide more and more support for people, platforms and policymakers, enabling each to serve the others better. Closing the circle of care, you might say. Now it just needs to be built out more proactively and strategically – not in reaction to tragedies and laws (sometimes badly written ones) – and drawing on the expertise of all stakeholders, including our children.
Here’s Part 1 of this series. Next: Part 3 on children’s rights online and off. I welcome your thoughts on this installment (in Comments below). If you disagree, please spell out why. More discussion on this is needed, I feel.
SIDEBAR: The view from Australia’s eSafety Commissioner
Australia’s eSafety Commissioner Julie Inman Grant kindly read this article and emailed me about it, providing some clarification on how her office works as both a “government agency” and an “independent statutory office” (an office set up by Australian federal law). We are living in a time when that is not necessarily a contradiction – when definitions are changing and, most importantly, new models are emerging in this age of few precedents. Because, as I wrote in Part 1, new social institutions – which the platforms are – call for new forms of regulation. Ms. Inman Grant’s Office is just that: a new model, as is New Zealand’s Netsafe, though in a different way. We (especially the United States, which I’d argue is behind the curve in considering what solutions would work best for us) can be thankful to have these innovative models to factor into our discussion. So here’s Inman Grant on how “independent” and “online safety regulation” work in Australian terms:
“I really liked your characterisation of us as a new kind of regulator. As you have heard me say before, we are currently the only government agency in the world dedicated to the online safety of its citizens. We are set up as an independent statutory office by the Enhancing Online Safety Act of 2015 with a range of regulatory functions and powers to enhance the online safety of Australians. One particularly innovative feature of our work is that we administer three distinct complaints and reporting schemes…cyberbullying targeted at Australian children, image-based abuse (the non-consensual sharing of intimate images), and illegal and offensive online content.
“I noticed a couple of references in the article which could potentially confuse readers about the nature and role of the eSafety office. I’m thinking, here, of the references to us being ‘neither independent of government nor competitive’ and the statement that ‘the government-based ones are not regulators’. While NetSafe is not a government regulatory body (they are what is called an ‘approved agency’), we are an independent statutory body with significant regulatory powers….
The Commissioner continued: “My primary focus is harms minimisation for the victims, and the best way to get content taken down rapidly is through a cooperative, co-regulatory approach.” I agree, though I’d also point out that the definition of “co-regulation” is still quite fluid, as author Tijana Milosevic illustrates in her book Protecting Children Online?, and Australia’s version certainly has more “teeth” because of its statutory origin and relation to government.
“Thus far,” Inman Grant wrote me, “we have had a 100% compliance rate with the platforms in our tier scheme for our cyberbullying take down requests and we have been reticent to use our end user notices or injunctions because most of the recipients have been minors and we have been able to resolve the conflict in other ways.”
To her Office’s credit, because this care and thoughtfulness should be a best practice in every country: “One of the reasons we haven’t issued any end user notices is because the youth-based cyberbullying tends to be peer-to-peer and we would indeed be serving a young person – in our view, any response needs to be proportional and appropriate, meaning that it won’t inflict damage on the young perpetrator either (and we often learn that they are vulnerable in other ways) [emphasis mine].
“But,” she continued, “we are with you too, all the way, on prevention in the first instance, followed by these early intervention services to de-escalate the trauma, followed by remediation.” That’s the dream: that this approach be a best practice of the middle layer worldwide.
Related links
- About moving beyond mere criticism: This past November, Alex Stamos, who left his position as chief security officer at Facebook last year, told Slate: “Everybody believes that the best answer is just to turn up the volume of criticism on the companies.” Obviously, I agree with him that criticism alone isn’t the answer; we’re losing time just when things are changing fast. Stamos continued: “One of the only ways out of this is that companies are going to have to, A) be much more transparent about these decisions and, B) probably move to a model where the decisions are being made outside of the companies themself…. When you make all these decisions in a vacuum, in a black box, then nobody has any confidence that the fairness that has been shown to the other side will ever be applied to them.”
- The latest statistics from the European Commission on Europe’s network of Internet helplines, called INSAFE
- The latest independent report I’m aware of on Europe’s Internet helplines: “INSAFE Helplines: Operations, effectiveness and emerging issues for internet safety helplines”
- About the challenges of enforcing Germany’s online hate speech law, the Network Enforcement Act, which went into effect last year: in DigitalJournal.com last week and earlier in The Atlantic. Arguably, Competence Call Center, a company in Germany, is part of the middle layer too, but like content moderation companies in other parts of the world, it’s a contractor to Facebook and other platforms, so it’s also part of the cloud layer – not independent.
- About one of the “deletion centers”: Motherboard with an inside look at the Center in Essen, Germany
- The UK’s Professionals Online Safety Helpline was the model for the U.S.’s SocialMediaHelpline.com for schools. Now the UK also has ReportHarmfulContent.online, provided by the Safer Internet Centre in that country.
- Brazil’s SaferNet
- Australia’s Office of the eSafety Commissioner
- New Zealand’s Netsafe
- CNET on the U.S.’s just-introduced federal privacy bill, reporting that “the proposal has industry support.”
- In his new Yale University Press book Custodians of the Internet, researcher Tarleton Gillespie goes in-depth on content moderation.
- In her new MIT Press book, Protecting Children Online?, researcher Tijana Milosevic takes a deep dive into “the strengths and limitations” of regulation and industry self-regulation where cyberbullying’s concerned.
[Disclosure: My nonprofit organization, the Net Safety Collaborative, is piloting a U.S. version of an Internet helpline, a social media helpline for K-12 schools. This post represents some insight gained from that work but is about growing the conversation about new forms of user care, not promoting a U.S. project.]