Can ChatGPT Really Think?
OpenAI’s newest model, o1, is claimed to be ‘reasoning’ and even ‘thinking’, but many are not buying it. Apart from well-known sceptics like Gary Marcus, this time Clem Delangue, CEO of Hugging Face, was also clearly unimpressed with the ‘thinking’ claim.
“Once again, an AI system is not ‘thinking’, it’s ‘processing’, ‘running predictions’,… just like Google or computers do,” said Delangue, arguing that OpenAI is painting a false picture of what its newest model can achieve. “Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is,” he added.
On the other hand, isn’t that exactly how thinking works? “Once again, human minds aren’t ‘thinking’ they are just executing a complex series of bio-chemical / bio-electrical computing operations at massive scale,” replied Phillip Rhodes.
Thinking, really?
Sam Altman, the CEO of OpenAI, calls the launch “the beginning of a new paradigm: AI that can do general-purpose complex reasoning.” The new model quite literally takes some time to think before responding, unlike earlier OpenAI models, which start generating text the moment you give them a prompt.
The model does this by producing a long internal chain of thought before responding to the user. That also explains why the team has suggested not asking the model generic questions: its reasoning capabilities are better suited to complex, PhD-level problems, where it can deliver answers with PhD-level accuracy.
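In rough pseudocode, the pattern looks something like this (a minimal sketch; `generate` is a hypothetical stand-in for any text-generation call, not OpenAI’s actual implementation):

```python
# Illustrative sketch of the "think, then answer" pattern.
# `generate` is a hypothetical helper standing in for any
# text-generation call; this is not OpenAI's implementation.

def answer_with_hidden_reasoning(generate, question: str) -> str:
    # Step 1: let the model produce a long private chain of thought.
    scratchpad = generate(
        "Think step by step about the problem. Do not address the user.\n"
        f"Problem: {question}"
    )
    # Step 2: condition the final reply on that hidden reasoning,
    # but return only the answer to the user.
    final_answer = generate(
        "Using the reasoning below, write a concise answer for the user.\n"
        f"Reasoning (hidden from the user):\n{scratchpad}\n"
        f"Problem: {question}"
    )
    return final_answer  # the scratchpad itself is never shown
```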
Beyond coding and maths, this reasoning capability is the special highlight of the release. ‘Reasoning’ and ‘thinking’ are capabilities Altman has long touted as the next frontier in his speeches, and the bet finally seems to be paying off.
According to OpenAI’s ‘Learning to Reason with LLMs’ blog post, the company’s reinforcement learning algorithm helps the model think more efficiently by refining its thought process through a data-efficient training method.
Over time, o1’s performance improves as more training compute and more ‘thinking’ time are added. This differs from traditional LLM pretraining, which focuses on expanding the size of the model rather than on squeezing more reasoning out of a smaller one.
Through reinforcement learning, o1 improves its reasoning skills by breaking down complex problems, correcting mistakes, and trying new approaches when needed. This greatly enhances its ability to handle complicated prompts that require more than just predicting the next word—it can backtrack and “think” through the task.
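A toy version of that generate-check-retry loop might look like this, with `generate` and `verify` as hypothetical stand-ins for a sampler and a checker (unit tests, a maths checker, a judge model):

```python
# Toy sketch of the backtrack-and-retry behaviour described above.
# `generate` and `verify` are hypothetical stand-ins: any sampler
# and any checker for the problem domain.

def solve_with_retries(generate, verify, problem: str, max_attempts: int = 4):
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate(f"{problem}\n{feedback}")
        ok, error = verify(problem, candidate)
        if ok:
            return candidate
        # Feed the failure back in so the next attempt can correct it.
        feedback = f"Previous attempt failed: {error}. Try a different approach."
    return None  # no verified solution within the compute budget
```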
However, a key challenge is that the model’s reasoning process remains hidden from users, even though they are billed for it in the form of ‘reasoning tokens’.
OpenAI has explained that hiding the reasoning steps is necessary for two main reasons. First, for safety and policy compliance, as the model needs freedom to process without exposing sensitive intermediary steps. Second, to maintain a competitive advantage by preventing other models from using their reasoning work. This hidden process allows OpenAI to monitor the model’s thought patterns without interfering with its internal reasoning.
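With the official `openai` Python SDK, the hidden reasoning shows up only in the usage accounting; the exact field names below reflect the SDK around o1’s launch and may change:

```python
# Querying an o1-style model and inspecting the reasoning-token bill.
# Requires the `openai` Python SDK and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)

print(response.choices[0].message.content)  # the visible answer

# The hidden chain of thought is billed as output tokens even though
# it is never returned to the caller.
usage = response.usage
print("reasoning tokens billed:",
      usage.completion_tokens_details.reasoning_tokens)
```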
Not for every-o1
As Jim Fan explained, this ‘Project Strawberry’, or o1, marks a significant shift towards inference-time scaling in production, a concept that focuses on improving reasoning through search rather than just learning.
Reasoning doesn’t require large models. Many parameters in current models are dedicated to memorising facts for trivia-like benchmarks. Instead, reasoning can be handled by a smaller “reasoning core” that interacts with external tools, like browsers or code verifiers.
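A minimal sketch of that idea, with a hypothetical `reason` function standing in for the small model and a single code-execution tool as the external verifier:

```python
# Minimal sketch of a small "reasoning core" delegating to external
# tools. The tool set and the `reason` function are hypothetical;
# real systems (o1 included) are far more involved.

def run_python(code: str) -> str:
    # A stand-in code verifier: execute the snippet and report the result.
    scope: dict = {}
    try:
        exec(code, scope)
        return str(scope.get("result", "ok"))
    except Exception as e:
        return f"error: {e}"

TOOLS = {"python": run_python}

def reasoning_core(reason, task: str, max_steps: int = 5) -> str:
    transcript = task
    for _ in range(max_steps):
        # `reason` is the small model: it either calls a tool or answers,
        # e.g. ("python", "result = 2**10") or ("answer", "1024").
        action, payload = reason(transcript)
        if action == "answer":
            return payload
        observation = TOOLS[action](payload)
        transcript += f"\n[{action}] -> {observation}"
    return "no answer within step budget"
```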
This approach reduces the need for massive pre-training compute. A significant portion of compute is now dedicated to inference, rather than pre- or post-training. LLMs simulate various strategies, similar to how AlphaGo uses Monte Carlo Tree Search (MCTS). Over time, this leads to better solutions as the model converges on the best strategy.
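The simplest miniature of such search is best-of-N sampling against a value function, a drastic simplification of MCTS (again with hypothetical `generate` and `score` helpers); spending more inference compute just means a larger N:

```python
# Drastically simplified inference-time search: sample N candidate
# strategies and keep the one a value function prefers. `generate`
# and `score` are hypothetical stand-ins; AlphaGo-style MCTS expands
# and prunes a whole tree of moves instead.

def best_of_n(generate, score, problem: str, n: int = 16) -> str:
    candidates = [generate(problem) for _ in range(n)]
    return max(candidates, key=lambda c: score(problem, c))
```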
This was also explained by Subbarao Kambhampati in his post.
OpenAI likely discovered the benefits of inference scaling early on, while academic research has only recently caught up.
While effective in benchmarks, deploying o1 for real-world reasoning tasks presents challenges. Determining when to stop searching, defining reward functions, and managing compute costs for processes like code interpretation are complex issues that need to be solved for broader deployment.
o1 can act as a data flywheel, where correct answers generate training data, complete with both positive and negative rewards. This process improves the reasoning core over time, similar to how AlphaGo’s value network refined itself through MCTS-generated data, and the data it produces becomes more valuable with each turn of the loop.
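In outline, the flywheel is just a loop that turns verified attempts into reward-labelled training examples (a toy sketch reusing the hypothetical `generate` and `verify` helpers from above):

```python
# Sketch of the data-flywheel idea: verified attempts become new
# training examples with positive/negative rewards. `generate` and
# `verify` are the same hypothetical stand-ins used earlier.

def collect_flywheel_data(generate, verify, problems, samples_per_problem=8):
    dataset = []
    for problem in problems:
        for _ in range(samples_per_problem):
            trace = generate(problem)
            ok, _ = verify(problem, trace)
            # Both successes and failures are useful: reward +1 / -1.
            dataset.append({"problem": problem, "trace": trace,
                            "reward": 1.0 if ok else -1.0})
    return dataset  # fed back into reinforcement learning of the core
```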
So perhaps we can say that ChatGPT is now thinking; that is why it gets better the longer it spends on a problem, and why OpenAI doesn’t care much about speed.
Enjoy the full story here.
How Generative AI Tools are Helping Police Solve Missing Children Cases in India
Recently, Dainik Bhaskar and the Rajasthan Police collaborated to use AI to solve cases of missing children. Joining this effort is Sahid NK, a young graphic designer who brings a creative touch to the project.
In an exclusive interview with AIM, Sahid shared, “I get old, worn-out photos where the faces are so faded that it’s hard to even recognise them. We have to be imaginative and use the little details we have to recreate a face from the past.”
Sahid mentioned that he relies on AI-powered tools to bring these faces back to life. “I use models trained to understand intricate facial features. It’s a blend of creativity, skill, and technology,” he said. Among his go-to tools are Freepik’s Pikaso and the sophisticated Illusion Diffusion, a generative AI tool.
Read more here.
AI Bytes