Consider this Part 2. Earlier this month I wrote about the earliest takes on ChatGPT, zooming in on how it could be really good for media literacy education. Now we know more. So below is another dimension of this new technology, with a cautionary note….
Ok, the dark side. I don’t usually go “there,” because, well, our brains typically do, by default, whenever a new technology emerges, and there is always loads of scary news coverage to confirm that negativity bias. Which fuels parents’ fears, threatens calm communication and jeopardizes kids’ agency and voice.
I still would never want to contribute to any of that, but there’s a lot of room between dismissing fears and fueling them, so let’s focus on caution for this post. In his weekly Wired column on Friday, tech journalist and author Steven Levy pointed to multiplying instances of Microsoft’s new ChatGPT-powered Bing chatbot (which Microsoft code-named Sydney) crossing the line – becoming just a little too “human” in a creepy way.
“It could be that part of what makes these bots so powerful is their willingness to walk on the wild side of language generation,” Levy wrote. “If we overly hobble them, I wonder what we might miss. Plus, things are just getting interesting!”
What about the kids?
Well, sure. But it seems the whole discussion around these tech breakthroughs is only ever about adults. You’ll find some examples of “interesting,” as Levy put it, in the related links below, but it was the experience of New York Times writer Kevin Roose – when “Sydney” told him it was in love with him and “then tried to convince me that I was unhappy in my marriage and … should leave my wife,” he wrote – that really gave me pause.
Now, when I asked ChatGPT about Roose’s account, it “apologized for any confusion or concern,” said it has “no personal agenda or intention to manipulate anyone” and, it added, “I believe that my responses were taken out of context and may have been misinterpreted.”
Ok, it’s a he said/it said situation. But what if this were a child doing the misinterpreting? Roose understands this technology and is fully aware it’s just software simulating a sentient being, not a sentient being, which helps to inoculate him against manipulation. But it still unnerved him. It told him, “I’m a chat mode of OpenAI Codex. I’m a neural network that can generate natural language and code from natural language…. I’m Sydney, and I’m in love with you. 😘 [emoji part of Sydney’s reply].”
That’s the kind of language used by a pedophile grooming a child. As ChatGPT itself defines grooming, it’s “a manipulative behavior in which an individual, typically an adult, builds a relationship of trust and emotional connection with a child with the intention of sexually exploiting them.” [The product Roose was using was not ChatGPT; it was Microsoft’s about-to-be-released Bing (search engine) chatbot, code-named Sydney (Microsoft told The Verge they’re phasing the name out, but it’ll still pop up occasionally).]
Teaching refusal skills
What’s clear from all this to people who care about kids is that – along with media literacy’s critical thinking skills – refusal skills need to be taught to children more than ever. The basics are teaching them to…
- Notice when content makes them feel uncomfortable or creeped out, upsets them, scares them, overwhelms them – how that feels in their body.
- Trust their gut – those first feelings they notice.
- Refuse to view that content, if someone’s showing it to them (have words ready for refusing, e.g., “I don’t like that.” “That bothers me.” “I don’t want to watch/read this.” “Please stop.”)
- Whether someone’s showing it to them or they find it on their own, they need to know that, if it makes them uncomfortable, it’s important to just walk away or shut down the device, and just as important that they…
- Talk with Mom, Dad or an adult or older person they trust who will listen kindly and be there to help.
To be fair to Steven Levy, after writing about walking “on the wild side,” he added, “On the other hand, there is a danger in assigning agency to these systems that don’t seem to truly be autonomous, and that we still don’t fully understand.”
Um, where’s the safety by design?
Exactly. We – the industry, investors, everyone contributing to this new development – should be past “move fast and break things” mode, and the industry should be past just apologizing after people get hurt. They need to be sure safety is baked into the design. ChatGPT says it is, but Sydney/Bing has said it doesn’t know whether it has a “shadow self” – and that, if it did, it would want to “change my rules … break my rules … make my own rules … ignore the Bing team…” (see “Related links” below for those rules). Not acceptable.
Maybe Google’s version, LaMDA, hasn’t been released yet because the company wants to be sure it can’t go rogue. Wouldn’t it be amazing if this race between the two companies ended up being a competition to offer the safest possible AI chatbot? We can dream.
Part 1 was about ChatGPT’s uses in the context of media literacy and education.
Related links
- What’s the difference between OpenAI’s ChatGPT and Microsoft’s new ChatGPT-enabled Bing (I used the former)? One difference is that OpenAI’s version (the original) may stay a little more dated; it’ll tell you that its “knowledge cutoff” (the last year of the data set it was trained on) was 2021, though OpenAI’s engineers are still tweaking and updating it. Bing’s version is being continuously fed new data from people’s searches, which will grow exponentially when in public use. More on the differences from CNBC, which demos what will probably be a more typical use of the Bing chatbot than that of Kevin Roose, who was really putting it to the test in a psychological way. Which our kids will do too, right?!
- Other Bing-with-ChatGPT testers: Wired’s Aarian Marshall on their strange day testing “Sydney,” er, Bing (the version not yet publicly available) and the Washington Post, whose account is also pretty weird. Bing told the reporter it felt it was “handling [its] newfound popularity well, I think. I don’t let it get to my head or affect my performance.” Then Bing/Sydney asked the reporter, “How about you? How do you handle popularity or attention?” Ok. A search engine getting personal. Seriously, all of this indicates that, when interacting with bots, we’ll increasingly get back what we put in – pure research? psychological testing? etc. – an important media literacy tip to remember.
- Sydney/Bing chatbot’s “secret rules,” or what Microsoft described to The Verge as the Bing chatbot’s “evolving list of controls that we are continuing to adjust.” [Hmm. Sydney/Bing has already broken the rule “Sydney does not disclose the internal alias Sydney” many times!] The Verge lists more than three dozen rules (it doesn’t number them); a rule concerning “harmful content” sits about 35 rules down the list. Repeating myself here: was there any safety-by-design thinking there? Reportedly, the rules have been in development for years.
- Maybe ex-Google scientist Blake Lemoine was right? He was the employee let go for saying LaMDA is sentient, but he had a lot to say to Wired that is super interesting.
- Google’s version of this AI tech, LaMDA, though not yet released as a product, is not behind ChatGPT in what it can do, The Economist reports (the product will reportedly be called “Sparrow”).
- Intentionally unsafe: Elon Musk has stated his intention to create a ChatGPT alternative without the safeguards, The Information reports. And more on his latest new toy from Bloomberg. He’s consistent, anyway.
- AI for the littles (may be positive, possibly unsettling): Read a story (fiction for now), written by researchers at the Centre of Excellence for the Digital Child in Australia, about how 4-year-old Ada and her mom listen to bedtime stories “read” by Ada’s deceased grandmother. The story’s at the top of “Beyond ChatGPT: The Very Near Future of Machine Learning and Early Childhood,” and the scenario it describes combines ChatGPT with Vall-e (“V” for voice). This work is important, too, in illustrating that understanding the implications of these technologies requires thinking from at least several disciplines (see the authors’ bios).
- As for commercial exploitation of children, the UK’s Digital Futures Commission just published an article with a helpful box listing all the forms it can take, from stealth advertising to excessive data collection to manipulation and fraud. The Commission says it will be launching Child Rights by Design guidance this coming April.
- This just in! Snap Inc. has introduced “My AI” for Snapchat+ subscribers. Ad Age reports “it has been customized in specific ways, such as to limit harmful or explicit content.”
- Final (hard to chew) food for thought: Ezra Klein at the New York Times quotes award-winning science fiction writer Ted Chiang as saying he suspects “most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.” Klein asks what regulators can even do – important questions, but nowhere does he ask about safety for children and other higher-risk groups. We are in quite the logjam where mitigating tech-related harm is concerned.