Helen Toner’s OpenAI exit only made her a more powerful force for responsible AI
On November 17, 2023, Helen Toner went from being a little-known member of OpenAI’s board of directors to one of the biggest influencers in the debate over responsible AI. Toner, an AI safety and policy expert, famously voted with the majority of OpenAI’s board members, including Ilya Sutskever, to oust CEO Sam Altman over concerns about his honesty with the board, his commitment to safe AI, and his alleged abusive treatment of employees. But OpenAI’s big-money investors and vested employees revolted against Altman’s firing, and the once-and-future CEO was reinstated on November 22. That meant Toner’s own time with the company had come to an end, and that Sutskever’s days were numbered.
“As the stakes have gone up, there’s been a clear shift in what we’ve heard from industry,” Toner tells Fast Company. “Where they used to tout the importance of building trustworthy AI and supporting government regulation, they’re now racing to push out immature products and spending hundreds of millions on lobbying to undercut any attempt at serious lawmaking.”
Indeed, OpenAI was founded back in 2015 with the idea of conducting advanced AI research in full view of the wider AI research community and the public. But as OpenAI and other AI research labs attracted more investment money, they also became more secretive about their research. They also appear to hope that state and federal governments will leave them alone to regulate their own technology’s development, an agenda Toner believes is self-serving and ultimately a risk for society.
Toner argues that tech companies can’t be relied upon to self-regulate when those efforts so easily come into conflict with the pursuit of immense profits. She has advocated for a stronger government role, including enforceable regulations around transparency and safety standards. “I’m sure there will continue to be some companies that try to do the right thing, but we have to take the incentives facing industry seriously,” Toner says. “And those incentives are to race ahead and impress investors, not to look out for the interests of the public.”
Nowadays, Toner is the Director of Strategy and Foundational Research Grants at Georgetown University’s Center for Security and Emerging Technology (CSET), where she advises policymakers and grantmakers on AI policy and strategy. And her departure from the OpenAI board has only bolstered her reputation as a trusted voice on responsible AI. As people and organizations hurry to apply AI in meaningful ways, policymakers around the world have become increasingly interested in questions of when and how to regulate large AI systems. Many of them have sought out Toner’s expertise and appreciate that she can speak the language of both the technical and policy worlds. No wonder she’s advised regulators around the globe on AI policy and strategy, and has testified before the U.S.-China Economic and Security Review Commission on the national security implications of AI.
“I think what we’ve seen over the two years since ChatGPT was released will turn out to have been just a taste of what’s to come,” Toner says, “if AI systems keep becoming more capable and more powerful.”
This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.