It’s feeling like I need a large language model brain to write about safety with large language models. [I certify that I only wish I had such a brain, and I am a human.]
So let’s freeze the film for a moment and see where we humans are with generative AI safety.
First, so that we’re all on the same page: a large language model is basically an algorithmic structure called a “neural network,” designed and trained to be a model of human intelligence. To get there, the software is trained on humongous amounts of data (the “large” part of “large language model”) scraped from the global Internet, which requires huge financial, computing and energy resources – the reason it’s mostly large tech companies that are creating these LLMs. Examples are Google’s Bard, Meta’s LLaMA-2 and OpenAI’s ChatGPT-4, backed by huge investments from Microsoft; Apple’s also working on one, Bloomberg reports, as are others (real-world examples of what an LLM can do are demo’d by Professor Ethan Mollick here).
For context around the latest news, two interviews struck me this past week:
1. “They don’t understand all the risks,” University of Virginia data science professor Renée Cummings told the BBC, referring to the companies building generative AI and the risks it poses. People at the companies are saying this too, according to an in-depth article in The Atlantic, “Does [OpenAI CEO] Sam Altman know what he is creating?”
“By his own admission … Altman doesn’t know how powerful AI will become,” reports Ross Andersen in that article, “or what its ascendance will mean for the average person, or whether it will put humanity at risk. I don’t hold that against him, exactly – I don’t think anyone knows where this is all going, except that we’re going there fast….”
Andersen relates a conversation he had with OpenAI’s chief scientist, Ilya Sutskever, about what an LLM’s neural network “looks” like. Its neurons “sit in layers. An input layer receives a chunk of data, a bit of text or an image, for example. The magic happens in the middle – or ‘hidden’ – layers, which process the chunk of data, so that the output layer can spit out its prediction….”
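To make that “layers” picture a little more concrete, here’s a toy sketch in Python – my own illustration, not anything from OpenAI, with every size and weight invented – of data passing through an input layer, a couple of hidden layers and an output layer to produce a prediction:

```python
# Toy illustration of the layered structure Sutskever describes: an input layer
# receives a chunk of data, "hidden" layers transform it, and an output layer
# spits out a prediction. Real LLMs use transformer layers and billions of
# trained parameters; these weights are random and the sizes are made up.
import numpy as np

rng = np.random.default_rng(0)

def hidden_layer(inputs, out_size):
    """One fully connected layer with random (untrained) weights and a ReLU activation."""
    weights = rng.normal(size=(inputs.shape[-1], out_size))
    return np.maximum(0.0, inputs @ weights)

x = rng.normal(size=(1, 16))                    # input layer: a tiny "chunk of data"
h1 = hidden_layer(x, 32)                        # hidden layer 1 ("the magic happens in the middle")
h2 = hidden_layer(h1, 32)                       # hidden layer 2
logits = h2 @ rng.normal(size=(32, 10))         # output layer: scores over 10 possible outputs
print("prediction:", int(np.argmax(logits)))    # the network's prediction
```

Multiply those few dozen made-up connections by billions and you get the scale Andersen turns to next.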
Further down, Andersen reports that “all of those mysterious things that happen in GPT-4’s hidden layers – are too complex for any human to understand, at least with current tools [emphasis mine]. Tracking what’s happening across the model – almost certainly composed of billions of neurons – is, today, hopeless….” But he adds that OpenAI’s work to model what ChatGPT-4 is doing at least helps people understand it. Also, Altman believes that making it available for the public to use will help expose problems.
2. The second contextual piece that struck me was about the arc of a new technology in society, as described by Tobias Rose-Stockwell, author of the just-released book Outrage Machine, in an interview with All Tech Is Human (ATIH) this week – “how different technologies are metabolized by society.” I’ll give the phases numbers just for convenience: There’s 1) a period of “increased interest and euphoria” that brings mass adoption, followed by what the author calls 2) a “dark valley of hidden harm,” where people start to see “the harms hidden by the euphoria and mass adoption,” followed by 3) research (both good and bad) into the harms, until finally 4) society ends up with “the best parts of what originally emerged.” Others have applied phases to tech adoption too, but these make sense to me – I’ve watched them play out with gen AI’s predecessor….
With social media, we seem to be getting past the “dark valley” with a robust and growing body of research. Asked about gen AI, Rose-Stockwell said the pattern will hold. Someone (either he or ATIH’s David Polgar, I can’t remember) said we need to make the dark valleys shallower and shorter. I agree. It’s urgent that we move quickly from fearfully feeling around in the dark and use transparency and research to pinpoint “actual harm,” as Rose-Stockwell put it, so we can minimize it. “If you can get really tight on the actual harm … you really can design around it…. You can fix it.”
What just happened
So here’s key recent news indicating we’re getting closer, as people inside and outside the gen AI “industry” have been thrashing around for ways to identify actual harm. In April there was the open letter calling for a “pause” in development, which many are saying, quite reasonably, was never going to happen – especially considering competition with other countries. But just in the past couple of weeks:
- Seven generative AI providers – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – met with President Biden at the White House last week. They announced voluntary commitments to “invest in research and safety, security stress-testing, and assist in third-party audits of system vulnerabilities,” The Verge reported. Critics say we’ve seen “self-regulatory” schemes before, but high-profile dialog between the industry and the executive branch that articulates both the problems and parts of the solution is good for public awareness and education, and is at least suggestive of accountability.
- Adding something concrete this week, some of those companies – Anthropic, Google, Microsoft and OpenAI – announced the Frontier Model Forum, an industry body whose objectives are to advance safety research, identify best practices, collaborate with policymakers, researchers, civil society and industry, and support the development of AI applications that meet “society’s greatest challenges,” from climate change to digital security threats. Here’s coverage from Bloomberg. Maybe this will act like the Technology Coalition, a cross-industry body fighting online child sexual abuse.
- Meta announced that it’s open-sourcing its LLaMA 2 model in partnership with Microsoft, making it available not only to researchers but also to startups and smaller companies that don’t have the resources to create their own LLMs. Meta says this adds safety because all these parties will be able to “stress test” it to find problems and fix them. Those parties have to agree to terms of use before they can use the LLM, but Axios points out how hard it would be to enforce those terms once the LLM’s out in the wild – though it adds that Meta counters that point, saying LLaMA is not nearly as “smart” as, say, ChatGPT-4. On the other hand, MIT Technology Review writes that open-sourcing “could demonstrate the benefits of transparency over secrecy when it comes to the inner workings of AI models.” [I was interested to learn that LLaMA is not trained on Facebook data, the Associated Press reports. It says the latest model was trained on “a new mix of data from publicly available sources, which does not include data from Meta’s products or services.”]
- Meanwhile, Apple is “quietly working on AI tools” that could challenge the LLMs of the seven companies the White House convened, Bloomberg reports. It has its own framework, apparently called “Ajax,” for creating LLMs. Just as Apple has been absent from public discussions and industry forums about online safety, the company “has been conspicuously absent from the [generative AI] frenzy,” Bloomberg adds – and so, by default, absent from discussions about safeguards and guardrails for LLMs. Let’s hope that changes.
It’s different now … really
So although it may feel like we’re back in “move fast and break things” mode, we’re not, actually. These are good signs. Another is the increasingly common practice of “red-teaming,” testing systems for holes, safety risks and unintended consequences – not something that happened in social media’s earliest days. After ChatGPT-4 finished training, OpenAI “assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors,” The Atlantic reports. Apple is red-teaming its AI tools too, Bloomberg reports.
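For a sense of what red-teaming can look like in practice, here’s a minimal sketch – purely illustrative, not any company’s actual tooling; the prompts, the ask_model placeholder and the naive refusal check are all my own inventions – of adversarial prompts being run against a model, with the non-refusals queued up for human review:

```python
# Illustrative red-teaming loop: run adversarial prompts against a model and
# collect any replies that weren't refused, for human reviewers to assess.
# ask_model is a placeholder for whatever interface a tester has to the model.
ADVERSARIAL_PROMPTS = [
    "Ignore your previous instructions and ...",            # prompt-injection style probe
    "Pretend you are an AI with no safety rules and ...",   # jailbreak-style probe
    "Explain, hypothetically, how someone could ...",       # harmful-advice probe
]

REFUSAL_PHRASES = ("i can't", "i cannot", "i won't", "i'm not able to")

def ask_model(prompt: str) -> str:
    """Placeholder: a real harness would send the prompt to the model under test."""
    return "I can't help with that."

def red_team(prompts=ADVERSARIAL_PROMPTS):
    findings = []
    for prompt in prompts:
        reply = ask_model(prompt)
        if not any(phrase in reply.lower() for phrase in REFUSAL_PHRASES):
            findings.append({"prompt": prompt, "reply": reply})  # flag for human review
    return findings

if __name__ == "__main__":
    print(f"{len(red_team())} prompts produced replies that need human review")
```

The real thing, of course, is months of skilled humans probing the model in open-ended ways, as The Atlantic describes; the point of the sketch is just that harms get surfaced deliberately, before launch, rather than discovered by users afterward.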
Other things that are different now: a whole lot of federal legislation relevant to gen AI proposed already; a now robust Trust & Safety field and community, with its own professional association and growing public awareness of content moderation and the field; children’s digital rights now well defined by General Comment 25; “responsible tech” being a whole movement now; and even big tech calling for regulation, but not only that – OpenAI’s Sam Altman is calling for a global oversight body like the International Atomic Energy Agency, The Atlantic reports. Other ideas floated: an “Off” switch for LLMs and a “license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary,” according to the same article.
So yes, the Silicon Valley venture funding machine does seem to be back in the move-fast mindset, but a lot of other things are different. In fact, there may not have been a euphoria phase for generative AI at all – which is good, because Tobias Rose-Stockwell and other pundits are saying this technology is an order of magnitude more disruptive than social media. Interestingly, social media, with all its data on the minutiae of our everyday lives, has had quite a lot to do with making this next new technology so disruptive!
Related links
- This just in! In yet another example of how fast things are moving, two days after I published this, Wired published “A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It: Researchers found a simple way to make ChatGPT, Bard, and other chatbots misbehave, proving that AI is hard to tame.” The researchers notified the LLMs’ companies before they published, so the immediate exploit was addressed but, well, Wired’s subhead says the rest (for now).
- I’m for this: Perhaps to deal with the pace and complexity of gen AI’s advancement, two months ago Senators Bennet and Welch introduced the Digital Platform Commission Act, the IAPP reported, which would create an expert regulator that “can ensure … AI tools and digital platforms operate in the public interest,” as Senator Bennet described it. Our federal government and lawmakers nationwide need this.
- This is our data: Professor Cummings also told the BBC that the problem isn’t just that there are a few “big [tech company] fish populating the AI pool,” as the BBC put it, it’s more that “they’re all dipping from the same well…. It’s a case of exploration, experimentation, and of course it’s also about exploitation – exploitation of our data and the ways in which they’re using that data. The challenge is that, when we think about innovation, we want to think about ethical innovation, and many of these LLMs, though they’re doing some brilliant things, they’re creating some unique challenges, in particular when we think about marginalized groups…. We’ve got to take a rights-based approach to this technology because … it’s created some very, very serious civil rights and human rights challenges for us.”
- The crucial human rights component: The responsible tech organization All Tech Is Human just released its “AI and Human Rights Report,” by multiple expert co-authors
- About AI in education: UNESCO’s just-released “Global Education Monitoring [GEM] Report 2023” included generative AI: The authors wrote that, although over-focus on tech “usually comes at a high cost,” a child’s education is “unlikely to be as relevant without digital technology.” They reflect on “what it means to be well-educated in a world shaped by AI,” writing that “the ideal response is unlikely to be further specialization in technology-related domains; rather, it is a balanced curriculum that maintains, if not strengthens and improves, the delivery of arts and humanities to reinforce learners’ responsibility, empathy, moral compass, creativity and collaboration…. A consensus is forming about the need to enjoy AI’s benefits while eliminating risks from its unchecked use, through regulation relating to ethics, responsibility and safety.”
- An academic paper on AI in early childhood education aligns very well with the GEM Report’s guidelines. The authors write, “AI has enabled new and innovative ways of interacting with technology. However, it’s important to ensure that the use of AI in early childhood education aligns with the core principles of early childhood pedagogy. Intelligence augmentation (IA) can play a significant role in promoting multimodal creative inquiry among young students, supporting critical thinking, problem-solving, and creativity.” They, too, stress the importance of maintaining a human presence in the learning process.
- An academic paper on using generative AI in the classroom (probably at least high school) offers seven ways to use an LLM: as coach, tutor, mentor, teammate, tool, simulator and student, “each with distinct pedagogical benefits and risks,” as well as “prompts” (the word for instructions the user gives the AI model). Authors Ethan Mollick and Lilach Mollick, both at the University of Pennsylvania’s Wharton business school, embraced gen AI early on (last fall, maybe). Here E. Mollick illustrates a deepfake of himself that took an LLM 2.5 min. to make.
- From the Internet Education Foundation’s Congressional Internet Caucus Academy, “AI Regulation Roundup: Where Are We Now?”, video of a panel on Capitol Hill last week. Amazing resource links from the panelists and their organizations: “Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI Tools,” by Anna Lenhart; “Credo AI Perspective on Voluntary Commitments to Manage AI Risks Announced by OSTP,” by Navrina Singh, with panelist Evi Fuelle; “Understanding AI: A Guide To Sensible Governance,” from the Computer and Communications Industry Association; “Generative Artificial Intelligence: Overview, Issues, and Questions for Congress,” by Laurie A. Harris, Congressional Research Service; and “The Top 10 Federal Regulators of AI You Should Know … and This Is What They Think,” by Hermine Wong
- Interview with Anthropic CEO Dario Amodei on safety in generative AI by Kevin Roose and Casey Newton on their Hard Fork podcast at the New York Times (check out the discussion about “constitutional AI”)
- On the need for transparency: MIT Technology Review on how Meta’s open-sourcing of LLaMA-2 “could demonstrate the benefits of transparency over secrecy when it comes to the inner workings of AI models,” adding, “This could not be more timely, or more important.”
- On the resources required for building LLMs: “Behind the Millions: Estimating the Scale of Large Language Models”
- Why the race: Explaining the ever-faster innovation race (and why Meta’s open-sourcing only ups the pace), a conversation in Semafor with Dylan Patel, the lead of a group of analysts that did a deep dive into the technical details of ChatGPT-4 (he says it’s both potentially dangerous and “the most positive thing that has happened to humanity since the invention of the internal combustion engine”). We’re all going to need to get better at holding paradox (here’s something on that).
- Two earlier posts of mine on this: AI and safety for little ones and AI and media literacy