
Open-Source AI Models Could Cost Businesses More, Study Warns

Editorial
  • Published August 15, 2025

As businesses increasingly integrate artificial intelligence into their operations, a recent study suggests that open-source AI models may not be the cost-effective solution they appear to be. A report published by Nous Research on March 14, 2024, indicates that while open-source models may have lower upfront costs, total expenses can rise sharply because of their heavy demand for computing resources.

The research team evaluated a range of AI models, including proprietary systems developed by Google and OpenAI, alongside open-source alternatives from DeepSeek and Magistral. They focused on how much computing each model consumed while completing identical tasks, such as answering simple knowledge questions, solving mathematical problems, and tackling logic puzzles. The researchers measured this efficiency by counting the number of tokens each model used, which serves as a proxy for resource consumption.
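The comparison described above can be sketched in a few lines. This is an illustrative harness, not the authors' code: the model names and token counts below are hypothetical placeholders, and in practice the counts would come from each model's API usage metadata.

```python
# Illustrative sketch of the study's setup (hypothetical data, not the
# authors' code): run identical prompts through several models, record
# output token counts, and compare averages against a baseline model.

from statistics import mean

# Hypothetical measured token counts per task, keyed by model name.
results = {
    "closed-model-a": [120, 95, 210],
    "open-model-b":   [480, 310, 700],
}

baseline = mean(results["closed-model-a"])
for model, tokens in results.items():
    ratio = mean(tokens) / baseline
    print(f"{model}: avg {mean(tokens):.0f} tokens ({ratio:.1f}x baseline)")
```

With these made-up numbers, the open model averages about 3.5 times the baseline's token usage, which is the kind of ratio the study reports for simple knowledge questions.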

Token Efficiency Revealed

In the context of AI, a token represents a unit of text, typically a word or a fragment of a word. The more tokens a model uses, the greater the computational power required. According to the study, open-source models consumed between 1.5 and 4 times more tokens than their closed-source counterparts, and in some cases up to 10 times more when answering simple knowledge questions. This discrepancy raises significant concerns for organizations considering the long-term costs associated with AI deployment.

“While hosting open-weight models may be cheaper initially, this cost advantage could be easily offset if they require more tokens to reason about a given problem,” the authors noted. They also emphasized that higher token usage means longer processing times, which can translate into slower responses.
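The authors' point can be made concrete with back-of-the-envelope arithmetic. All prices and token counts below are hypothetical, chosen only to show how a cheaper-per-token open model can still cost more per query once its extra reasoning tokens are counted.

```python
# Illustrative cost comparison (all figures hypothetical, not from the
# study): per-query cost = price per million tokens x tokens used.

def cost_per_query(price_per_million_tokens: float, tokens_used: int) -> float:
    """Total inference cost for one query, in dollars."""
    return price_per_million_tokens * tokens_used / 1_000_000

# Closed model: pricier tokens, but fewer of them per query.
closed_price, closed_tokens = 10.0, 500
# Open model: cheaper tokens, but 4x as many per query.
open_price, open_tokens = 4.0, 2000

print(f"closed: ${cost_per_query(closed_price, closed_tokens):.4f} per query")
print(f"open:   ${cost_per_query(open_price, open_tokens):.4f} per query")

# Break-even multiplier: the open model only stays cheaper if it uses
# fewer than (closed_price / open_price) times the tokens.
print(f"break-even at {closed_price / open_price:.1f}x token usage")
```

Under these assumed prices, the open model works out to $0.0080 per query against the closed model's $0.0050, despite its 60 percent lower per-token rate, because it crosses the 2.5x break-even multiplier.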

Closed Models Outperform Open Options

The findings of the study indicate that closed models, such as those from OpenAI and xAI's Grok-4, are optimized to use fewer tokens, thereby reducing overall costs. For tasks involving simple knowledge questions, open models such as DeepSeek and Qwen used significantly more tokens, sometimes by a factor of three. The gap narrowed for more complex tasks, but closed models still maintained an edge.

Among the open models assessed, llama-3.3-nemotron-super-49b-v1 emerged as the most efficient, while models from Magistral were found to be the least efficient. Notably, OpenAI’s o4-mini and the new open-weight gpt-oss models exhibited impressive token efficiency, particularly when addressing mathematical queries. The researchers highlighted that OpenAI’s gpt-oss models, with their concise reasoning processes, could serve as a standard for enhancing token efficiency in other open models.

The implications of this study are significant for businesses weighing their options in AI technology. The choice between open-source and closed-source models is not merely a matter of upfront costs but involves careful consideration of the ongoing operational expenses associated with computational demands. As AI continues to evolve, understanding the nuances of model efficiency will be critical for organizations aiming to harness its full potential.
