ChatGPT’s new AI thinks like us, and critics warn it comes with some dangers
Artificial intelligence (AI) is increasingly part of our daily lives. It can analyze data, help make medical diagnoses, create and edit photos, and it underpins much of social media. But while AI’s many functions are undeniably impressive, it’s not without flaws. Chatbots can produce essays and come up with human-like responses to questions in seconds, yet they struggle with math and science problems that have only one correct answer.
That’s largely because AI doesn’t understand the concepts required to arrive at the correct answer unless it has been explicitly trained to. However, a new model, released Thursday as a preview by OpenAI, does. The chatbot maker says it can “think” more like a human than previous models. And it’s about to go mainstream.
The latest model, OpenAI o1, is not as quick as other chatbots. This one takes its time. “We’ve developed a new series of AI models designed to spend more time thinking before they respond,” OpenAI said on its website. “They can reason through complex tasks and solve harder problems than previous models in science, coding, and math.” The statement continued, explaining that the new models can “refine their thinking process, try different strategies and recognize their mistakes.” They learn over time through trial and error, much in the same way that humans do.
A chatbot that keeps learning
OpenAI o1, which can build on previous knowledge and keep learning, is primarily useful for complex math and science problems — the kind of “thinking” that could be useful for writing advanced code, helping solve the world’s climate crisis, or even curing cancer. But machines that can reason, think, or learn in more human-like ways instantly conjure up worries about just how far AI can go, and at what price.
The rapid growth of AI over the past several years has already sparked debates over whether AI has the right to use information and creative work that belongs to someone else, along with a growing number of privacy and copyright lawsuits. It’s also a massive energy suck. AI is driving up tech companies’ energy consumption and carbon emissions, and experts have been sounding the alarm. OpenAI CEO Sam Altman has routinely expressed concerns about the energy the technology requires and attended a White House discussion for leaders on the issue this week.
However, when it comes to machines that learn from the world around them, the most important social question seems to be: how much do we want them to know?
With enhanced thinking power comes significant safety concerns
In its current form, AI has already developed a number of biases that make it worrisome when it comes to social issues. The technology has drawn criticism for reproducing race- and gender-based stereotypes about men’s versus women’s professions, and its influence on hiring decisions has led to lawsuits.
Ashwini K.P., the UN Special Rapporteur on racism and intolerance, raised concerns about AI in a report released earlier this year. “Generative artificial intelligence is changing the world and has the potential to drive increasingly seismic societal shifts in the future,” Ashwini wrote. “I am deeply concerned about the rapid spread of the application of artificial intelligence across various fields. This is not because artificial intelligence is without potential benefits. It presents possible opportunities for innovation and inclusion.”
OpenAI addressed safety concerns about the new model in the announcement: “As part of developing these new models, we have come up with a new safety-training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines. By being able to reason about our safety rules in context, it can apply them more effectively.”
We’ll be able to see how those safety rules work in practice soon enough. OpenAI says the latest chatbot model will soon be available to most ChatGPT users.