There’s been great interest in what Mira Murati’s Thinking Machines Lab is building with its $2 billion in seed funding and the all-star team of former OpenAI researchers who have joined the lab. In a blog post published on Wednesday, Murati’s research lab gave the world its first look into one of its projects: creating AI models with reproducible responses.
The research blog post, titled “Defeating Nondeterminism in LLM Inference,” tries to unpack the root cause of randomness in AI model responses. For example, ask ChatGPT the same question a few times over, and you’re likely to get a wide range of answers. This has largely been accepted in the AI community as a fact — today’s AI models are considered to be non-deterministic systems — but Thinking Machines Lab sees this as a solvable problem.
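One familiar source of that variation is sampling itself: when a model decodes with a temperature above zero, it draws each next token at random from a probability distribution. A minimal Python sketch of that effect, using an invented toy vocabulary and probabilities purely for illustration:

```python
import random

# Toy next-token distribution for a single prompt. The vocabulary and
# probabilities are invented for illustration; real models choose among
# tens of thousands of tokens.
vocab = ["Paris", "France", "Europe"]
probs = [0.6, 0.3, 0.1]

def sample_next_token():
    # Temperature > 0 decoding: each run can draw a different token,
    # so identical prompts can yield different responses.
    return random.choices(vocab, weights=probs)[0]

responses = [sample_next_token() for _ in range(5)]
```

Sampling alone does not explain everything, though; the post's focus is on randomness that persists below this level, in how inference is actually computed.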
“Today Thinking Machines Lab is launching our research blog, Connectionism. Our first blog post is ‘Defeating Nondeterminism in LLM Inference.’ We believe that science is better when shared. Connectionism will cover topics as varied as our research is: from kernel numerics to…” the company posted on X.
The post, authored by Thinking Machines Lab researcher Horace He, argues that the root cause of AI models’ randomness is the way GPU kernels — the small programs that run inside of Nvidia’s computer chips — are stitched together in inference processing (everything that happens after you press enter in ChatGPT). He suggests that by carefully controlling this layer of orchestration, it’s possible to make AI models more deterministic.
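The numerical root of the kernel-level problem can be shown in a few lines: floating-point addition is not associative, so if a GPU kernel splits a sum into chunks differently from one run to the next (for instance, depending on batch size), the reduced result can differ bit-for-bit. A minimal sketch:

```python
# Floating-point addition is not associative, so the order in which a
# kernel reduces partial sums can change the result bit-for-bit.
vals = [0.1, 1e16, -1e16]

left_to_right = (vals[0] + vals[1]) + vals[2]  # 0.1 is absorbed by 1e16
right_to_left = vals[0] + (vals[1] + vals[2])  # 1e16 cancels first; 0.1 survives

differ = left_to_right != right_to_left  # True: same numbers, different sums
```

Tiny discrepancies like this compound across billions of operations and can eventually flip which token a model picks, even at temperature 0.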
Beyond creating more reliable responses for enterprises and scientists, He notes that getting AI models to generate reproducible responses could also improve reinforcement learning (RL) training. RL is the process of rewarding AI models for correct answers, but if the answers are all slightly different, then the data gets a bit noisy. Creating more consistent AI model responses could make the whole RL process “smoother,” according to He. Thinking Machines Lab has told investors that it plans to use RL to customize AI models for businesses, The Information previously reported.
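The RL point can be illustrated with a toy model (all numbers here are invented): if the same prompt produces varying responses, the reward the trainer observes carries extra variance on top of the answer's true quality.

```python
import random

random.seed(0)  # fixed seed so this sketch is itself reproducible

def observed_reward(true_quality, deterministic):
    # Toy reward model: the response's "true" quality, plus noise
    # when nondeterministic inference makes the response itself vary.
    noise = 0.0 if deterministic else random.gauss(0, 0.1)
    return true_quality + noise

clean = [observed_reward(0.8, deterministic=True) for _ in range(100)]
noisy = [observed_reward(0.8, deterministic=False) for _ in range(100)]

spread = max(noisy) - min(noisy)  # nonzero spread means a noisier signal
```

A deterministic model collapses that spread to zero, which is the “smoother” training signal He describes.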
Murati, OpenAI’s former chief technology officer, said in July that Thinking Machines Lab’s first product will be unveiled in the coming months, and that it will be “useful for researchers and startups developing custom models.” It’s still unclear what that product is, or whether it will use techniques from this research to generate more reproducible responses.
Thinking Machines Lab has also said that it plans to frequently publish blog posts, code, and other information about its research in an effort to “benefit the public, but also improve our own research culture.” This post, the first in the company’s new blog series called “Connectionism,” seems to be part of that effort. OpenAI also made a commitment to open research when it was founded, but the company has become more closed off as it’s become larger. We’ll see if Murati’s research lab stays true to this claim.
The research blog offers a rare glimpse inside one of Silicon Valley’s most secretive AI startups. While it doesn’t exactly reveal where the technology is going, it indicates that Thinking Machines Lab is tackling some of the largest questions on the frontier of AI research. The real test is whether Thinking Machines Lab can solve these problems and build products around its research that justify its $12 billion valuation.
© 2025 TechCrunch Media LLC.