If AGI arrives during Trump’s next term, ‘none of the other stuff matters’
More than 33,000 people—including a hall of fame of AI experts—signed a March 2023 open letter calling on the tech industry to pause development of AI models more powerful than OpenAI’s GPT-4 for six months rather than continue the rush toward Artificial General Intelligence, or AGI. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” stated the letter, which was spearheaded by an organization called Future of Life Institute.
Spoiler: The industry didn’t heed the letter’s call. But it did generate tremendous publicity for the case that AGI could spiral out of human control unless safeguards were in place before they were actually needed. And it was only one of many initiatives from the decade-old institute designed to cultivate a conversation around AI’s risks and the best ways to steer the technology in a responsible direction.
[Photo: courtesy of Web Summit]
At the Web Summit conference in Lisbon, I caught up with Future of Life cofounder and president Max Tegmark, an MIT professor whose day job involves researching “AI for physics and physics for AI.” We spoke about topics such as the open letter’s impact, why he thinks AI regulation is essential (and not that difficult to figure out), and the possibility that AGI will become real during Donald Trump’s upcoming presidential term—and how Elon Musk, who is an FLI advisor, might help the administration deal with that possibility in a constructive manner.
This interview has been edited for clarity and length.
Is the work you’re doing accomplishing what you hope it will, even if the AI industry doesn’t pause for six months?
My goal with spearheading that pause letter was not that I thought that there was going to be a pause. Of course, I’m not that naive. The goal was to mainstream the conversation and make it socially safe for people to voice their concerns. I actually feel that’s been a massive success.
It certainly got a lot of attention.
A lot of stakeholders who didn’t say much about this have now started speaking about it. I felt there was just a lot of pent-up anxiety people had all across society. Many felt afraid of looking stupid by talking about this, or afraid of being branded as some kind of Luddite scaremongers. And then when they saw that here are all these leaders in the field also expressing concerns, it became socially validated for anyone else who also wanted to talk about it. And it brought into the open the possibility of having that other extinction letter, and that, in turn, I think, very directly led to there being hearings in the Senate and the international AI summits, the AI safety institutes starting to exist, and stuff like that, which are all very welcome.
I’ve been feeling kind of like Lone Wolf McQuade for the past 10 years, where there was really no uptake among policymakers. And that’s completely changed.
I was impressed by the fact that people within large organizations were willing to be attached to your letter even if their bosses were not.
But the extinction letter in May of 2023 was actually signed by all the bosses. I viewed it a little bit as a cry for help from some of those leaders, because it’s impossible for any company to unilaterally pause. They’re just going to get crushed by the competition. The only good outcome is that there are safety standards put in place that level the playing field for everyone.
Like if there were no FDA, no pharma company could just unilaterally pause releasing drugs. But if there’s an FDA and nobody can release the new drugs until they’ve gone through FDA approval, there’s an automatic pause on all unsafe drugs. And it really changes the incentive structure in a big way within the industry, because now they’ll start investing a lot of money in clinical trials and research and they’ll get much safer products as a result.
We have that model in basically every industry in America. We have it not just for drugs, we have it for cars, we have it for airplanes, we even have it for sandwich shops. You have to get the municipal health inspector to check that you don’t have too many rats in the kitchen. And AI is completely anomalous as an industry. There are absolutely no meaningful safety standards. Sam Altman, if his own prediction comes true and he gets AGI soon, is legally allowed to just release it to see what happens.
How clear an understanding do you have of what the regulations should be, especially given that AI is moving faster than sandwich shop safety or even drugs?
I think that’s actually pretty easy. The harder thing is just having the institutional framework to enforce whatever standards there are. You bring together all the key stakeholders and you start with a low bar, and then gradually you can raise it a bit.
Like with cars, for example, we didn’t start by requiring airbags or anti-lock brakes. We started with seat belts, and then some people started putting up some traffic lights and some . . .