
2024 AI Recap: Three Trends in AI This Year

Moving Away from Big Models, Embracing the Military, and Fighting Regulation

By David Robusto and Iskandar Haykel

The world of artificial intelligence looks different than it did at the beginning of the year. Sure, it’s bigger, better, farther-reaching, and more powerful than ever. That was all predictable, and it’s not what we’re here to talk about. We’re here to discuss the changing trajectory of AI, shifts in the AI community, and how the path forward for AI has changed since the year started.

Three major trends stand out: a shift away from the release of big new models, an increasing partnership between AI companies and the military-industrial complex, and the retreat of AI leaders from earlier stances on regulation. These trends mark a pivot in the AI landscape, a change in course that deserves attention as we think about the future of AI and AI policy heading into another year.

Ever since the debut of ChatGPT, the race to release the next important AI model has been the hallmark of the industry. Leading developers like OpenAI, Meta, and Anthropic have been in a constant cycle of releasing ever larger and more powerful models, with the aim of pushing the boundaries of what AI can do. Anthropic released multiple generations of the Claude family of models over the course of a single year. At the same time, Meta launched Llama 1, 2, and 3 in just 15 months. These models all saw massively improved general performance by following the “scaling laws,” using more and higher-quality data, and training with more compute.
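For readers unfamiliar with the term, the “scaling laws” refer to an empirical observation (popularized by Kaplan et al. in 2020) that a model’s loss falls off roughly as a power law in its scale. One commonly cited form, stated here as a rough sketch rather than a precise claim about any particular lab’s models, is:

```latex
% Empirical scaling law for loss as a function of parameter count
% (form from Kaplan et al., 2020; N_c and \alpha_N are fitted constants)
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```

Here $L$ is the model’s loss, $N$ is its number of parameters, and $N_c$ and $\alpha_N$ are empirically fitted constants. Analogous power laws hold for dataset size and training compute, which is why “more data, more compute, bigger model” reliably improved performance for several years.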

But as we head toward the end of 2024, it’s been over half a year since any of these labs released a new, larger version of their flagship models. Instead, we’ve seen a number of model “updates,” including Claude 3.5 Sonnet (and the newer version, confusingly also called Claude 3.5 Sonnet), Llama 3.1, 3.2, and 3.3, and unspecified improvements to GPT-4o. This shift toward partial releases coincides with a debate about whether the type of progress we’ve grown to expect is slowing at labs like OpenAI, Google, and Anthropic.

OpenAI’s release of its o1 model marked a key turning point – a shift away from general improvements driven by more training data and compute. These new “reasoning” models produce better results by using reinforcement learning (RL) to employ structured chains of thought and by spending more time and computing power on each query. Other AI companies are beginning to follow suit, with Google and Chinese tech giant Alibaba releasing their own reasoning models. Not to be outdone, at the end of its seasonal “shipmas,” OpenAI announced its next-generation o3 models, promising massive gains on hard benchmarks like ARC-AGI and positioning this approach as potentially the next major frontier for AI development.

One possible reason for this shift is the rumored diminishing returns of releasing massive new models. As these models grow in size, gains in performance have allegedly become increasingly marginal while the costs of developing them continue to climb rapidly. Some AI leaders argue that the era of hyper-scaling models may be coming to a close, although the major labs certainly won’t admit it if so. One thing is for sure: the era of multiple new model releases a year, each exponentially larger and more expensive than the last, is likely dead and gone.

Another significant trend in 2024 is the growing ties between the AI sector and the military-industrial complex. In the past, many AI companies prided themselves on staying at arm’s length from military applications, often citing ethical concerns about the use of AI in warfare. This year, things changed, and the boundaries between AI and the military have become increasingly blurred.

One of the most prominent examples is OpenAI. At the start of the year, the organization explicitly prohibited the use of its models for “weapons development” or “military and warfare.” But that changed quickly. On January 10, OpenAI softened its prohibition on using models for military purposes, then partway through the year announced it would work with the Pentagon on cybersecurity software. Finally, this month, the company announced a partnership with military defense contractor Anduril, which will put OpenAI’s technology on the battlefield. A full 180.

But OpenAI isn’t alone. Meta’s announcement this November that it will allow U.S. national security agencies and defense contractors to use Llama AI for military ends is also a major shift, illustrating the growing alignment between AI developers and military interests. Anthropic also followed suit, partnering with Amazon Web Services and Palantir to provide U.S. defense and intelligence agencies with access to its Claude models.

One possible reason for the AI industry’s changing stance is money. The defense industry is a source of funding, and AI companies are increasingly searching for profitability and resources as they continue to scale up models. Some have also pointed out that the AI industry may be more open to working with the defense sector in order to deepen relationships with the incoming national-security-focused administration.

Perhaps the most striking shift in the AI landscape this year is the growing resistance to regulation. In May 2023, OpenAI’s CEO, Sam Altman, urged the U.S. Senate to regulate AI, citing the need for frameworks to ensure the technology’s safe development. 

He wasn’t alone. At a Senate forum six months later, half a dozen AI CEOs raised their hands in endorsement of the statement that “the government should have a role in the oversight of artificial intelligence.”

But as 2024 has unfolded, AI leaders have backed away from a pro-regulatory position. OpenAI, once a vocal proponent of regulation, joined Meta and Google in staunchly opposing California’s SB 1047, the bill that nearly became the first serious frontier AI regulation in the U.S. These AI giants have begun broadly lobbying against regulations that they say would stifle innovation or impose too many restrictions, without supporting any bills as alternatives.

Perhaps the one exception to the rule is Anthropic. In a blog post published this October, Anthropic continued to make the case for targeted regulation of the AI industry, writing that “Governments should urgently take action on AI policy in the next eighteen months.”

More than anything, the fact that Anthropic now stands alone as an industry voice in favor of regulation signals a marked shift since the beginning of the year.

As AI continues to shape our future, the trends of 2024 reveal a fluid landscape. AI companies aren’t just reinventing tech; they’re reinventing themselves, changing their policies and fundamentally changing their missions.

These trends reflect the evolving priorities and pressures within the AI field, a field that will undoubtedly continue to shift as we head into another new year.
