In every neighborhood, there are neighbors and then there are visitors, and it’s mostly the neighbors who make it what it really is – great to hang out in or maybe not so much. That’s true in social media too, except that digital neighborhoods can be based on geography or on shared interests. Either way, they’re communities. Some sites are single-interest communities; others encompass masses of them, of every kind. YouTube and other global social media services are giant collections of “neighborhoods,” and – just as in big cities in the offline world – a great way to help a service (like the police and other civic services) keep its neighborhoods great places to be is a digital neighborhood watch program.
You could call it “participatory policing.” Or you could call it what YouTube calls it: a “Trusted Flagger Program.” Disclosure: ConnectSafely.org, a nonprofit service I co-direct, is funded by Internet companies including Google, which owns YouTube, but that’s not why I’m writing this. I’m writing this because 1) participatory policing online has made sense to, and been used by, law enforcement since long before YouTube launched, 2) the more social media users of all ages understand how key they are to their own well-being and that of their peers and their digital communities, the more safety and well-being they’ll experience in digital spaces, and 3) because of its sheer numbers, YouTube is an important example of why participatory policing is needed.
On that last point: each month, more than a billion users visit the service and watch more than 6 billion hours of video (40% of that viewing on mobile devices), and every minute those users upload 100 hours of video to the site.
So who are “trusted flaggers”? They’re the good neighbors – the people invested in keeping the neighborhood nice, or “clean,” as YouTube puts it. [The site now seems to be testing a “super flaggers” program as well, the Wall Street Journal reports, referring to “roughly 200 people and organizations, including a British police unit.”]
Because of their sheer size and all the “false positives” (kind of like false alarms) that any site’s customer service department gets from users reporting abuse, no global social media service could possibly detect all violations of its Terms of Use. [A moderator at another site once told me that about 90% of the abuse reports there were false positives, whether from people testing the system, acting out, making stuff up, making a mistake or abusing abuse reporting (see No. 5 in my 2008 post “Top 8 workarounds in kids’ virtual worlds”).]
“To make the flagging process more efficient,” YouTube says in its Help section, “we invite a small set of users who flag content regularly to join our Trusted Flagger Program.” If invited (the application form is here), they can’t actually take content down – YouTube staff do that – but they do get “access to more advanced flagging tools as well as periodic feedback, making flagging more effective and efficient,” YouTube says. The site’s “policy enforcement team reviews flagged content 24 hours a day, 7 days a week, promptly removing material that violates our policies.”
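YouTube doesn’t publish how its triage actually works, but the basic idea – give earlier review to reports from flaggers with a good track record, so the false positives described above don’t bury the real violations – is easy to picture. Here’s a minimal sketch in Python; every name in it (ReviewQueue, Flag, the smoothed accuracy score) is hypothetical and purely illustrative, not YouTube’s system or API.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Flag:
    priority: float                       # lower number = reviewed sooner
    video_id: str = field(compare=False)
    reporter_id: str = field(compare=False)
    reason: str = field(compare=False)

class ReviewQueue:
    """Toy triage queue: flags from historically accurate reporters
    are reviewed ahead of flags from unknown or noisy ones."""

    def __init__(self):
        self._heap = []       # min-heap of Flag objects
        self._history = {}    # reporter_id -> (confirmed, total)

    def _accuracy(self, reporter_id):
        confirmed, total = self._history.get(reporter_id, (0, 0))
        # Laplace-smoothed hit rate: unknown reporters start at 0.5.
        return (confirmed + 1) / (total + 2)

    def submit(self, video_id, reporter_id, reason):
        # Higher historical accuracy -> lower priority value -> sooner review.
        heapq.heappush(self._heap,
                       Flag(1.0 - self._accuracy(reporter_id),
                            video_id, reporter_id, reason))

    def next_flag(self):
        return heapq.heappop(self._heap)

    def record_outcome(self, flag, violated_policy):
        # Humans decide whether a policy was violated; the reporter's
        # track record is updated either way.
        confirmed, total = self._history.get(flag.reporter_id, (0, 0))
        self._history[flag.reporter_id] = (confirmed + int(violated_policy),
                                           total + 1)

# Example: a flagger whose reports keep checking out rises in priority.
q = ReviewQueue()
q.submit("vid_abc", "trusted_flagger_7", "spam")
q.submit("vid_xyz", "new_user_42", "harassment")
flag = q.next_flag()
q.record_outcome(flag, violated_policy=True)
```

Note the division of labor the sketch preserves: reporters only influence ordering; a human reviewer still makes every takedown decision, which matches how YouTube describes the program.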
Safety and well-being in social media are as crowd-sourced as the content – and the crowd includes “neighbors” (posters and other active users), visitors (viewers and other passers-by) in digital spaces, the companies providing those spaces, and all the layers of public safety providers offline.
Related links
- According to Engadget, YouTube’s Trusted Flaggers program started last October
- YouTube’s “Policy Enforcement” page, with the full scoop on abuse reporting
- “How a police officer uses Facebook” (2009) and “YouTube as a police tool” (2007)
- “Balancing external with internal safety ‘tools’”