The end of 2024 brought a reckoning for artificial intelligence, as industry insiders feared that progress toward even more intelligent AI was slowing down. But OpenAI’s o3 model, announced just last week, has sparked a fresh wave of excitement and debate, and suggests that big improvements are still to come in 2025 and beyond.

This model, announced for safety testing among researchers but not yet released publicly, achieved an impressive score on the important ARC benchmark. The benchmark was created by François Chollet, a renowned AI researcher and creator of the Keras deep learning framework, and is specifically designed to measure a model’s ability to handle novel, intelligent tasks. As such, it provides a meaningful gauge of progress toward truly intelligent AI systems.

Notably, o3 scored 75.7% on the ARC benchmark under standard compute conditions and 87.5% using high compute, significantly surpassing previous state-of-the-art results, such as the 53% scored by Claude 3.5.

This achievement by o3 represents a surprising advance, according to Chollet, who had been a critic of the ability of large language models (LLMs) to achieve this sort of intelligence. It highlights innovations that could accelerate progress toward superior intelligence, whether we call it artificial general intelligence (AGI) or not.

AGI is a hyped and ill-defined term, but it signals a goal: intelligence capable of adapting to novel challenges or questions in ways that surpass human abilities.

OpenAI’s o3 tackles specific hurdles in reasoning and adaptability that have long stymied large language models. At the same time, it exposes challenges, including the high costs and efficiency bottlenecks inherent in pushing these systems to their limits. This article explores five key innovations behind the o3 model, many of which are underpinned by advancements in reinforcement learning (RL). It draws on insights from industry leaders, OpenAI’s claims, and above all Chollet’s important analysis to unpack what this breakthrough means for the future of AI as we move into 2025.

OpenAI’s o3 model introduces a new capability called “program synthesis,” which enables it to dynamically combine components it learned during pre-training (specific patterns, algorithms, or methods) into new configurations. These components might include mathematical operations, code snippets, or logical procedures that the model encountered and generalized during its extensive training on diverse datasets. Most significantly, program synthesis allows o3 to address tasks it has never directly seen in training, such as solving advanced coding challenges or tackling novel logic puzzles that require reasoning beyond rote application of learned information. François Chollet describes program synthesis as a system’s ability to recombine known tools in innovative ways.
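To make the idea concrete, here is a minimal, purely illustrative sketch in Python of program synthesis as a search over compositions of known primitives on a tiny ARC-style grid task. The primitives (rotate, mirror, invert), the toy examples, and the brute-force search are assumptions made for this sketch; it is not a description of how o3 actually works, which OpenAI has not disclosed in detail.

```python
from itertools import product

# Illustrative "known tools" a system might have generalized during pre-training.
# These primitives and the brute-force search are assumptions for this sketch,
# not OpenAI's actual mechanism.
def rotate(grid):
    """Rotate a 2D grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def mirror(grid):
    """Flip each row left to right."""
    return [row[::-1] for row in grid]

def invert(grid):
    """Swap 0s and 1s in a binary grid."""
    return [[1 - cell for cell in row] for row in grid]

PRIMITIVES = {"rotate": rotate, "mirror": mirror, "invert": invert}

def synthesize(examples, max_depth=3):
    """Return the first composition of primitives that maps every input to its output."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def program(grid, names=names):
                for name in names:
                    grid = PRIMITIVES[name](grid)
                return grid
            if all(program(x) == y for x, y in examples):
                return names  # a recombination of known tools that fits the examples
    return None

# A toy ARC-style task: the hidden rule is "mirror, then invert".
examples = [
    ([[0, 1], [0, 0]], [[0, 1], [1, 1]]),
    ([[1, 1], [0, 1]], [[0, 0], [0, 1]]),
]
print(synthesize(examples))  # ('mirror', 'invert')
```

The point of the toy example is the recombination step: no single primitive solves the task, but a composition of them does, which is the essence of what Chollet means by recombining known tools.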