You may’ve heard about ChatGPT. It reached 100 million active users last month, “making it the fastest-growing consumer product in history,” according to The Economist, which may have something to do with all the headlines about it. So here’s my thinking on this quite remarkable chatbot, which, basically, is artificial intelligence you can chat with, like Siri or Alexa – except that Siri and Alexa can’t write a poem, an essay, lyrics or software code. That sort of creative work we humans do is what ChatGPT can do, among other things.
Wall Street Journal reporter Joanna Stern had it write an AP Literature essay. Creative staff at advertising agencies are “turning to ChatGPT … to generate ideas for brands, write rapid-fire briefs for clients, play around with ad copy, and come up with TikTok sketches,” Advertising Age reports. It can also design a logo, write code, compose music and create 3D animations, according to DigitalTrends.com. So it might have a presence in some of our children’s future careers. And then there’s all the news about “the new plagiarism,” which is a bit different from stealing other people’s work. Wired reports that “students and professors can’t decide whether [it’s] a research tool—or a cheating engine.”
For media literacy learning
Even ChatGPT’s own parent, OpenAI, says the chatbot “sometimes writes plausible-sounding but incorrect or nonsensical answers” and “fixing this issue is challenging.”
Therein lies the rub – the “sometimes” part – which is actually good news for the ethically minded. The user can’t know for sure whether what they’re getting from this AI – part search engine, part creative engine – is accurate or just really good at appearing accurate. Which brings us to why it can be a great media literacy instruction tool.
NPR tells the story of a professor at the University of Pennsylvania’s Wharton School, Ethan Mollick, who actually encourages his students to use ChatGPT. But he has two requirements: that they 1) disclose they’re using it, acknowledging when and how, and 2) are “responsible for any errors or omissions provided by the tool.”
The first requirement makes its use ethical. The second makes it a media literacy learning tool. Students have to fact-check the bot’s work and, if Professor Mollick requires it (hopefully he does), provide a list of references. That’s great practice for students in many academic subjects at nearly all grade levels, not just those in college.
The plagiarism question
Having ChatGPT do a writing assignment isn’t technically plagiarism – yet – or at least it isn’t traditional plagiarism, because you’re not stealing someone else’s work. It introduces a new shade of gray, a sort of indirect plagiarism, because you’re using a bot that does the stealing by creating a composite of other people’s work. Some colleges – such as Bryn Mawr – do consider it plagiarism, though, according to Wired. If anything, it cheats the student out of learning, especially if they aren’t required to do the fact-checking work. It’ll be interesting to see what the consensus will be among academic ethics committees as the practice grows. Will they call it something else – or allow it and require what Dr. Mollick does of his students?
It might be useful to apply some critical thinking to how we’ve reacted to this technology so far. Of course it’s unsettling, because we fear what we don’t understand, and obviously we don’t understand a technology when it first appears. So we tend to go into a kind of panic every time a new one emerges.
The part that’s not new
Scholars call this a “moral panic,” and it goes all the way back through history – e.g., when Socrates worried about what the innovation of writing things down would do to people’s memories. A bit more recently, Louisa May Alcott’s educator cousin William Alcott argued for the benefits of giving students a newfangled learning tool called a “slate” (kind of a 19th-century tablet?) at a time when people thought slates spelled nothing but distraction (and think of the classroom management issues!). In the 1870s and ’80s the telephone was seen as a threat that could “break up home life,” make conversation more superficial, bring strangers into the home, speed up life, make us lazy and never let us leave each other alone (well, maybe that last one is true). A 1941 study of 6- to 16-year-olds found that over half were “severely addicted to radio and movie crime dramas,” according to researcher Amy Orben, writing in an academic journal. In the 1950s, congressional hearings were held on whether comic books were corrupting the nation’s youth.
I could go on, but you get the idea. Our negativity bias can keep us stuck on the negative side of each new technology that comes online, right when we need to move quickly to minimize harm and optimize its usefulness. ChatGPT is no exception. It might be worth looking at both the positive and negative uses and implications of this technology, in case our kids do need some proficiency in it – and in the media literacy and digital literacy that will keep it useful to them.
Hope on the dark side too
But if you do choose to stay on the downside of ChatGPT, there’s great news for you: NPR reports that a computer science student at Princeton University, Edward Tian, has already created a tool to help teachers detect whether ChatGPT wrote a student’s assignment. I love that Tian “is now building a community of educators and students who want to figure out what to do with AI in the classroom,” which could be as powerful as the chatbot itself.
Even OpenAI has come up with a tool that detects AI-generated text, i.e., an essay by ChatGPT, the Wall Street Journal reports. But for people who want to stick with the downside, there’s this: “OpenAI said its so-called AI classifier itself fails to detect bot-written text nearly three-quarters of the time.” Oh good, I mean bad. Hmmm.
Next up on generative AI: The flip side
Related links
- 3 important items added after this post:
- “7 ways teachers can use AI in class” (June 12): Though written by a business school professor, Ethan Mollick, it offers loads of insight and how-to for high school teachers as well. In both the article linked to here and an academic paper he co-wrote and links to in the article, he provides prompts you can use to guide ChatGPT to act as a tutor, coach, mentor, teammate, student, simulator or tool (the seven ways).
- About the introduction of Bard (Feb. 6), Google’s own chatbot of this sort, built on its Language Model for Dialogue Applications (LaMDA). The more robotic-sounding “ChatGPT” stands for “Chat Generative Pre-trained Transformer,” and it’s built on OpenAI’s GPT-3.5 language model. As Mike Masnick of Techdirt posted on Mastodon, “The conversational AI wars commence.”
- Professor Mollick on lessons he and his students have learned from using generative AI (Feb. 17). One thing he found, after Microsoft gave him access to the AI-powered Bing, was that the system “is so much more capable than ChatGPT that many of the issues with the old AI are no longer relevant.” And there are a lot of startups releasing AI products too. Things are moving fast.
- OpenAI’s page on ChatGPT
- An AI Symposium: A video panel discussion at the start of which the moderator looks at the insanely rapid uptake of ChatGPT and how people around the world are reacting (e.g., the governments of Russia and China have banned it for all citizens, while in the US some school systems have banned it)
- Scholars on AI (not just ChatGPT) in bullying prevention
- “Dumber than humans”?! I love the lede in this Wired story: “In late December of his sophomore year, Rutgers University student Kai Cobbs came to a conclusion he never thought possible: Artificial intelligence might just be dumber than humans.”
- Tech writer Ben Parr looks beyond just ChatGPT to what the ethical repercussions of AI in general are “as it becomes mainstream.” He includes potential elements of an industry code of ethics, proposed by Robert Reich, associate director of Stanford University’s Institute for Human-Centered Artificial Intelligence, and discussed over dinner by a group of AI experts in California. One of the elements is allowing people – artists, writers, academics – to opt out of having their work fed into the algorithm. So important, but it does raise the question of how that would bias the algorithm.
- In an opinion piece, the New York Times’s Farhad Manjoo leads with the question, “Will ChatGPT kill Google?” Well, first of all, Google has invested $400 million in ChatGPT rival Anthropic, an AI startup, Bloomberg reports, so stay tuned. But Manjoo writes that it’s early days for ChatGPT to be a threat to Google and that, anyway, he already uses Reddit and YouTube more than Google to search for information – which requires plenty of media literacy too.
- How to try it, courtesy of PCMag, in their very readable, very comprehensive, slightly snarky, somewhat negative roundup of just about anything you’d want to know about ChatGPT. The bot’s honest, anyway. When PCMag asked it if it had read PCMag, it said it can’t read or subscribe to magazines, but it was “fed” PCMag articles. However, “my knowledge of…the articles is limited to the data set that was used to train me, and my knowledge cut-off date is 2021.” That’s the definition of old news – so not much competition for journalists (comforting for them) and a great tip-off to anyone using it who wants their “work” to seem current.
- “The Sisyphean Cycle of Technology Panics,” by researcher Amy Orben
- “ChatGPT Is Making Universities Rethink Plagiarism” in Wired
- In 2011, Prof. David Finkelhor, director of the University of New Hampshire’s Crimes Against Children Research Center, coined the term “juvenoia” and gave a talk on the subject that I wrote about here and here. He later wrote about online safety’s “three alarmist assumptions.”
- Back in 2009, I attempted a list of “Why technopanics are bad”
- Other posts on gen AI in this blog: on whether kids should learn how to use it (Oct.); a July freeze frame, because so much was going on then; at the end of February, a look at the dark side where kids are concerned (and what we need to teach little ones); and, early in Feb., thoughts on ChatGPT for media literacy training