Latest Updates
Meta has officially delayed the release of its ambitious ‘Behemoth’ AI model until at least fall 2025. This decision, as reported by The Wall Street Journal, stems from concerns that the model has not shown significant improvements over earlier versions.
The ramifications for businesses are likely to be less severe than anticipated, given their existing access to earlier models like Llama 4 and a plethora of open-source alternatives.
Broader industry trends indicate that AI breakthroughs may be plateauing, hinting at the challenges of scaling existing models.
Meta is reportedly postponing the release of its latest AI model, termed ‘Behemoth,’ shifting the timeline from the anticipated summer launch to fall 2025 or later.
The multimodal model has reportedly failed to show “significant” improvements, prompting the delay. Initially slated for an April release during Meta’s first developer conference, LlamaCon, the postponement marks an early hiccup in Meta’s aggressive rollout strategy for its flagship Llama models.
Designed as a powerful open-source solution, Llama has, until now, been a beacon of accessibility for developers across small businesses, nonprofits, and academic institutions. It serves as a compelling counterweight to the proprietary models offered by tech giants like OpenAI and Google.
The current effect on businesses appears to be relatively muted; many enterprises are already integrated with cloud providers that predominantly use proprietary AI models. While smaller firms can certainly customize Llama’s earlier iterations, implementation challenges persist, particularly since Meta does not provide deployment support. For now, Llama primarily serves to enhance Meta’s own social media ecosystem and to underpin CEO Mark Zuckerberg’s broader AI ambitions for the company.
Need for Progress
In the fast-paced tech industry, developers and users may rapidly dismiss new releases that fall short of expectations.
During LlamaCon, Meta showcased two smaller variants of Llama 4 that still boast impressive capabilities:
- Maverick: 400 billion total parameters and a context window of 1 million tokens (equivalent to about 750,000 words, a significant leap over GPT-4o’s 128,000 tokens).
- Scout: 109 billion total parameters and a context window of 10 million tokens (approximately 7.5 million words).
Initially, Behemoth was set to launch simultaneously with these models and was projected to encompass a staggering 2 trillion parameters.
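The context-window comparisons above rest on a rough rule of thumb of about 0.75 English words per token. A minimal sketch of that conversion (the 0.75 ratio is a common heuristic, not a figure published by Meta):

```python
# Rough token-to-word conversion behind the figures cited above.
# Assumption: ~0.75 English words per token, a common rule of thumb.
WORDS_PER_TOKEN = 0.75

def tokens_to_words(tokens: int) -> int:
    """Approximate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(tokens_to_words(1_000_000))   # Maverick's window: 750000 words
print(tokens_to_words(10_000_000))  # Scout's window: 7500000 words
print(tokens_to_words(128_000))     # GPT-4o's window: 96000 words
```

Actual word counts vary with the tokenizer and the text itself; the ratio only gives an order-of-magnitude sense of each window.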
Given that Meta has already invested heavily in AI development (budgeting up to $72 billion this year) to propel Zuckerberg’s long-term vision, patience may soon wear thin among Meta’s executive team.
Growing Anxieties
While Meta has yet to specify a public release date for Behemoth, concerns are mounting among insiders that its current performance might not align with earlier expectations set by the company.
Reportedly, frustration is growing among Meta’s leadership regarding the slow progress of the Llama 4 team. As a result, discussions around key leadership changes within the AI division are now on the table.
Although marketed as superior to offerings from OpenAI, Google and Anthropic, Behemoth has been significantly hindered by internal training hurdles, according to sources familiar with its development.
PYMNTS has reached out to Meta for comments, but has yet to receive a response.
It’s worth noting that OpenAI faces similar delays: its next major model, GPT-5, is already behind schedule relative to an original mid-2024 target.
Contributing Factors to Delays
Advancements in AI model development are stalling for various reasons, including:
Data Limitations
AI models require substantial amounts of data for effective training, often scraped from across the internet. However, publicly accessible data may be running short, and the legal ramifications of using copyrighted content pose significant hurdles for the industry.
Leading companies like OpenAI, Google, and Microsoft are actively lobbying for legislation that would protect their ability to use copyrighted material for training purposes. “The federal government can secure Americans’ freedom to learn from AI while preserving our lead in the AI sector,” argues OpenAI.
Algorithmic Constraints
It was once widely believed that simply scaling up models would yield substantial advancements. However, the industry has started to witness diminishing returns, prompting some experts to suggest that scaling laws may be hitting their limits.
For more on the current landscape and predictions for AI models, see Bloomberg’s coverage of the topic.