Ideas Made to Matter
Artificial Intelligence
Closed AI models are proprietary systems that keep their code confidential; open models make one or more model details public. A new paper compares the cost and performance of the two approaches.
When grocery shoppers find a generic product that’s 90% as good as the brand name version but costs 87% less, they usually put it in their carts.
But when it comes to large language models, most artificial intelligence users pick the more expensive option.
A new paper co-authored by Frank Nagle, a research scientist at the MIT Initiative on the Digital Economy, found that users largely opt for closed, proprietary AI inference models, namely those from OpenAI, Anthropic, and Google. Those models account for nearly 80% of all AI tokens that are processed on OpenRouter, the leading AI inference platform. In comparison, less-expensive open models from the likes of Meta, DeepSeek, and Mistral account for only 20% of AI tokens processed. (A token is a unit of input or output to an AI model, roughly equivalent to one word in a prompt to an AI chatbot.)
Open models achieve about 90% of the performance of closed models when they are released, but they can quickly close that gap — and the price of running inference is 87% less on open models. Nagle and co-author Daniel Yue at the Georgia Institute of Technology found that optimal reallocation of demand from closed to open models could cut average overall spending by more than 70%, saving the global AI economy about $25 billion annually.
“The difference between benchmarks is small enough that most organizations don’t need to be paying six times as much just to get that little bit of performance improvement,” Nagle said. “They need to think about how to use the right tool for the right job instead of defaulting to what’s popular.”
Much time and effort goes into training frontier AI models, the models on which inference services are built. There are architectures to design, trillion-token datasets to curate, and processing units powerful enough to run continuously for months to acquire.
To defray those costs, some companies keep their inference models proprietary, or closed. That means users must pay for the model and the underlying computing resources to access AI inference services. With a limited number of players in the market, high markups are all but inevitable.
Open models make public one or more model details. Common examples include model weights, source code, training data, or architecture. That openness means that users can host and run models locally — even on a laptop, provided it’s powerful enough.
“This openness enables inference at only the cost of compute power,” Nagle and Yue write, “essentially making the software free and creating competitive pressure akin to commodity markets.”
Nagle likened the market of third parties building open AI inference models to the emergence of companies such as Red Hat, which offers software, training, and customer support atop the open-source Linux operating system. Users are able to reap the advantages of open source while vendors, with the advantage of economies of scale, assume the risk for issues like uptime and security.
The market for open models in AI inference isn’t quite as mature.
From May to September 2025, Nagle and Yue observed daily token usage on the OpenRouter site, which captures approximately 1% of all global spending on AI model inference. The researchers noted that the site is popular because users can engage with multiple providers of model inference through a single interface.
“OpenRouter attracts the type of user who’s more likely to be willing to use open models, as the point is to be able to swap between models,” Nagle said. Even so, closed models still accounted for close to 80% of AI token usage over the five-month study period, as well as nearly 96% of the revenue that passed through OpenRouter. And closed models cost far more to run: $1.86 per million tokens, on average, compared with 23 cents per million tokens for open models. Open models, in other words, cost 87% less.
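The price gap reported above is easy to check with back-of-the-envelope arithmetic. The sketch below uses the article's average per-token prices; the 500-million-token monthly workload is a hypothetical figure chosen purely for illustration.

```python
# Per-million-token prices from the study, as reported in the article.
closed_cost = 1.86   # USD per million tokens, average closed model
open_cost = 0.23     # USD per million tokens, average open model

# Open models' discount relative to closed models (~87.6%)
discount = 1 - open_cost / closed_cost
print(f"Open models cost {discount:.1%} less per token")

# Monthly bill for a hypothetical workload of 500 million tokens
tokens_millions = 500
closed_bill = closed_cost * tokens_millions
open_bill = open_cost * tokens_millions
print(f"Closed: ${closed_bill:,.2f} / month")
print(f"Open:   ${open_bill:,.2f} / month")
```

The same ratio also shows why the article describes closed models as roughly eight times as expensive: $1.86 / $0.23 ≈ 8.1.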
Next, Nagle and Yue combined OpenRouter usage data with model performance benchmark data from Artificial Analysis and LM Arena. The former compares models against multiple performance metrics; the latter crowdsources user preferences. Their analysis showed that open models averaged 89.6% of closed-model performance but were usually able to close the gap within 13 weeks of a closed model’s initial release. That figure had dropped from 27 weeks just one year prior.
Using that information, the researchers calculated that, by switching to open alternatives superior to the closed models they are currently using, OpenRouter users could reduce costs more than 70% while improving benchmark performance by more than 14%. They extrapolated the impact for all AI inference models, using a market size estimate of more than $35 billion from Menlo Ventures, determining that “optimal substitution to open models could save the AI industry approximately $25 billion annually.”
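The headline savings figure follows directly from the two numbers cited above. A minimal sketch, using the article's figures (the $35 billion market size is Menlo Ventures' estimate, and 70% is treated as the reported cost reduction):

```python
# Extrapolating the study's savings estimate from the cited figures.
market_size_b = 35.0    # global AI inference spending, $B per year (Menlo Ventures estimate)
cost_reduction = 0.70   # reported achievable cost reduction from switching to open models

savings_b = market_size_b * cost_reduction
print(f"Estimated annual savings: ${savings_b:.1f}B")
```

That yields about $24.5 billion, consistent with the paper's "approximately $25 billion annually"; since the reported reduction is *more than* 70%, the estimate is a floor.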
If users could spend less on AI inference models and see better performance, why aren't they switching? Nagle said companies often have two sets of concerns: one valid, the other based on misconceptions.
Among the valid concerns is that there may be significant costs associated with switching to a new model. These costs are not accounted for in the estimated $25 billion in annual savings and differ greatly among users.
“People build a whole ecosystem around an existing closed model, refine it, and build more on top of it. It’s not going to be trivial to switch to an open model,” Nagle said. “In the long run it may be cheaper, but in the short run it will be costly.”
Further, there may be reliability, regulatory, or security concerns that are easier to assuage with closed models than open models. For example, DeepSeek (an organization that produces open models) is based in China, and as a research organization, it’s not subject to Chinese AI regulations for commercial products.
The misconceptions are based on perceptions of inferior performance, which the researchers demonstrate are inaccurate, and a fear that using open models means private data suddenly becomes public. “That’s not correct,” Nagle said. “Open models can be built and run within your own infrastructure. Your data’s never leaving your servers.”
Nagle encouraged companies to periodically review their use of AI inference models the same way they reevaluate software and infrastructure investments. If closed models are in place, it’s worth considering whether there’s a more cost-effective way to meet the company’s needs.
Open AI models matter for global economics and politics, not just corporate balance sheets, Nagle added. Nations without much to spend will turn to open frontier models for inference. If those models come from China alone, then America's market influence may wane, as it has in Africa and Asia amid China's infrastructure investments.
“The United States is investing in data centers but not data models,” Nagle said. “It may benefit the U.S. to invest in creating frontier models to compete with China.”
Read the paper: “The Latent Role of Open Models in the AI Economy”
Frank Nagle is a research scientist at the MIT Initiative on the Digital Economy and the chief economist at the Linux Foundation. He studies how competitors can collaborate on the creation of core technologies, while still competing on products and services built on top of them — especially in the context of artificial intelligence. His work frequently explores the domains of open-source software, crowdsourcing, free digital goods, cybersecurity, and the generation of strategic predictions from unstructured big data.
Daniel Yue is an assistant professor at the Georgia Institute of Technology. His research explores why firms openly share innovative knowledge without directly profiting from it, a strategy called “open disclosure.”
The mission of the MIT Sloan School of Management is to develop principled, innovative leaders who improve the world and to generate ideas that advance management practice.