November proved to be an eventful month for OpenAI. Despite the biggest and most attention-grabbing story, CEO Sam Altman’s abrupt exit and the plot twist of his return to the company, work inside OpenAI continued. Besides ChatGPT Voice, there is the development of OpenAI’s Project Q*, an advanced AI system reported to showcase novel capabilities, signalling swift progress in AI while raising urgent questions about how super-intelligent systems should be managed.
OpenAI’s Breakthrough: Project Q*
Information about OpenAI’s Project Q* (pronounced “Q Star”) is limited, but reports suggest it can solve unfamiliar math problems and has made unexpectedly rapid progress, surprising OpenAI’s safety teams. The model’s development intensified in recent months under CEO Sam Altman, who hinted at a significant breakthrough at the Asia-Pacific Economic Cooperation (APEC) summit on November 16th, the day before he was abruptly removed from his position. Altman was reinstated a few days later following an apparent investor revolt, with the threatened mass exodus of 750 employees believed to be one of the major reasons. The episode sheds light on OpenAI’s work on advanced AI systems with potentially immense power.
OpenAI’s Project Q* Can Solve Grade-School-Level Math
OpenAI’s Project Q* is reportedly able to solve grade-school-level math problems. Some OpenAI staff believe this achievement could mark a significant milestone in the company’s pursuit of Artificial General Intelligence (AGI), a highly anticipated concept denoting an AI system that surpasses human intelligence.
For years, researchers have attempted to enable AI models to solve mathematical problems. Although language models like ChatGPT and GPT-4 can perform some mathematical tasks, their reliability is limited. According to Wenda Li, an AI lecturer at the University of Edinburgh, existing algorithms and architectures are inadequate for consistently solving math problems using AI. Li emphasises that while deep learning and transformers, utilised by language models, excel at pattern recognition, this capability alone may not suffice.
According to Li, mathematics serves as a benchmark for reasoning. An AI system capable of reasoning about mathematics might, theoretically, extend its capabilities to learn other tasks rooted in existing information, such as coding or drawing inferences from a news article. The difficulty with math lies in the demand for AI models to possess the capacity for reasoning and a genuine understanding of the subject matter.
To proficiently solve math problems, a generative AI system would need a thorough understanding of concrete definitions, especially for abstract concepts. Addressing many math problems also involves planning across multiple steps, according to Katie Collins, a PhD researcher at the University of Cambridge specialising in math and AI. Yann LeCun, chief AI scientist at Meta, suggested on X and LinkedIn that OpenAI’s Project Q* is likely OpenAI’s attempt at incorporating planning into AI systems.
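To make “planning across multiple steps” concrete, here is a toy Python sketch (the word problem and its decomposition are invented for illustration and say nothing about how Project Q* actually works): an explicit plan breaks the problem into intermediate quantities, the kind of structured, stepwise reasoning that one-shot answer prediction handles unreliably.

```python
# Problem (invented): "A bag holds 3 red and 5 blue marbles. Tom takes
# away half of the blue marbles (rounding down) and adds 2 red ones.
# How many marbles remain?"

def solve():
    red, blue = 3, 5
    # Step 1: remove half of the blue marbles (integer division).
    blue -= blue // 2
    # Step 2: add 2 red marbles.
    red += 2
    # Step 3: total the remaining marbles.
    return red + blue

print(solve())  # 5 red + 3 blue = 8
```

The point of the sketch is the shape of the solution, not the arithmetic: each step produces a named intermediate result that later steps depend on, which is what researchers mean when they say math demands planning rather than pattern matching alone.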
Machine learning research has long concentrated on elementary-school math problems, yet cutting-edge AI systems have still not fully solved them. Some AI models struggle with basic math problems yet excel at far more complex ones, according to Collins. OpenAI has created specialised tools capable of tackling challenging problems from high-school math competitions, occasionally outperforming humans.
While an AI system proficient at solving math equations would be a noteworthy advancement in itself, a deeper understanding of mathematics holds potential applications in scientific research and engineering. The ability to generate mathematical responses could enhance personalised tutoring, help mathematicians reach algebraic solutions faster, and tackle more intricate problems.
Not the First Sparking the AGI Hype
This isn’t the first time a new model has fuelled AGI excitement. Last year, similar sentiments circulated in the tech community over Google DeepMind’s Gato, a “generalist” AI model adept at playing Atari video games, captioning images, holding conversations, and manipulating real-world objects with a robotic arm. At the time, some AI researchers asserted that DeepMind was nearing AGI because of Gato’s proficiency across varied tasks. Such hype recurs across different AI labs.
OpenAI’s Project Q* Raises Safety Concerns
The rapid pace of progress alarmed some of OpenAI’s safety researchers, who reportedly warned the board of directors that OpenAI’s Project Q* posed risks if development continued unchecked. Safety has been a major concern around large language models like GPT-3, which can sometimes generate toxic or biased text, and more advanced systems may have even fewer guardrails and less oversight in place. OpenAI has yet to comment officially on Project Q*, but Altman’s return likely signals full steam ahead, despite researchers’ objections.
A Contradiction Towards the Company’s Mission?
OpenAI were established as a non-profit with the goal of developing “safe and beneficial AGI for the benefit of humanity”. Yet the potential threats emerging from Project Q* seem to contradict that mission statement. Many experts worry that organisations like OpenAI are moving too rapidly toward AGI, a system capable of performing diverse tasks at or beyond human intelligence levels and posing the theoretical risk of evading human control.
What Lies Beyond OpenAI’s Project Q* in the Realm of AI Advancements?
Altman has previously expressed concerns about the risks associated with advanced AI. Upon his reinstatement as CEO, he assured OpenAI staff that recent breakthroughs were grounded in principles of safety and oversight. However, whether OpenAI’s Project Q* adheres to these standards remains an open question.
The path forward for OpenAI raises important considerations. If Project Q* signifies another significant advancement in AI capabilities, the pressure to commercialise this technology may prioritise competition over safety. In the fiercely competitive AI landscape against rivals like Google and Microsoft, navigating this balance is crucial. Ethical concerns raised by OpenAI’s researchers regarding the implications of systems like Project Q* underscore the need for careful consideration, though the proverbial Pandora’s box may already be open.
Some experts argue that advanced AI should be approached with the same caution as nuclear research — a potentially transformative technology that, if misused, could pose a threat to humanity’s future. As AI evolves from narrow chatbots to sophisticated code generators and mathematical reasoning engines, the call for oversight and responsible development becomes increasingly urgent. OpenAI now find themselves in a central role, determining whether AI will contribute to the betterment of humanity or pose risks to its existence.
A Final Say
OpenAI’s Project Q* evokes reflections reminiscent of AI-themed movies portraying scenarios where AI robots dominate humanity. It prompts contemplation of the potential future implications of such advancements, instilling a sense of caution and urging AI developers to address emerging ethical concerns. The development of advanced AI introduces a delicate balance, like a scale with humans on one side and AI on the other. Ongoing discussions emphasise that developers must carefully manage this balance, ensuring AI does not surpass acceptable boundaries and remains under control.
While the progression of more advanced AI, exemplified by OpenAI’s Project Q*, may shift the scale toward AI, there is an inherent trust in AI developers to navigate this delicate equilibrium, developing AI for the betterment of humanity while upholding ethical principles.