AI coding tools are getting better fast. If you don’t work in code, it can be hard to notice how much things are changing, but GPT-5 and Gemini 2.5 have made it possible to automate a whole new set of developer tasks, and last week Sonnet 4.5 did it again.
At the same time, other skills are progressing more slowly. If you are using AI to write emails, you’re probably getting the same value out of it you did a year ago. Even when the model gets better, the product doesn’t always benefit — particularly when the product is a chatbot that’s doing a dozen different jobs at the same time. AI is still making progress, but it’s not as evenly distributed as it used to be.
The reason for the difference in progress is simpler than it seems. Coding apps are benefiting from billions of easily measurable tests, which can train them to produce workable code. This is reinforcement learning (RL), arguably the biggest driver of AI progress over the past six months, and it is getting more intricate all the time. You can do reinforcement learning with human graders, but it works best when there’s a clear pass-fail metric, so the process can be repeated billions of times without stopping for human input.
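To make the pass-fail idea concrete, here’s a minimal sketch of what such an automated grader might look like in Python. The `solve` function name, the toy task, and the binary reward are illustrative assumptions for this sketch, not any lab’s actual training code.

```python
def reward(candidate_code: str, tests: list) -> float:
    """Hypothetical pass-fail grader: execute the model's candidate
    solution and return 1.0 only if every test case passes."""
    namespace = {}
    try:
        exec(candidate_code, namespace)  # define the candidate's solve()
        for inputs, expected in tests:
            if namespace["solve"](*inputs) != expected:
                return 0.0
    except Exception:
        return 0.0  # crashes and malformed code count as failures
    return 1.0

# Toy task: add two numbers. A trainer could score billions of
# candidates like this with no human in the loop.
tests = [((1, 2), 3), ((0, 0), 0), ((-5, 5), 0)]
print(reward("def solve(a, b):\n    return a + b", tests))  # 1.0
print(reward("def solve(a, b):\n    return a - b", tests))  # 0.0
```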
As the industry relies increasingly on reinforcement learning to improve products, we’re seeing a real difference between capabilities that can be automatically graded and the ones that can’t. RL-friendly skills like bug-fixing and competitive math are getting better fast, while skills like writing make only incremental progress.
In short, there’s a reinforcement gap — and it’s becoming one of the most important factors for what AI systems can and can’t do.
In some ways, software development is the perfect subject for reinforcement learning. Even before AI, there was a whole sub-discipline devoted to testing how software would hold up under pressure — largely because developers needed to make sure their code wouldn’t break before they deployed it. So even the most elegant code still needs to pass through unit testing, integration testing, security testing, and so on. Human developers use these tests routinely to validate their code and, as Google’s senior director for dev tools recently told me, they’re just as useful for validating AI-generated code. Even more than that, they’re useful for reinforcement learning, since they’re already systematized and repeatable at a massive scale.
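Those existing test suites map onto reinforcement learning almost directly. As a hedged illustration, the sketch below shells out to pytest and turns its exit code into a binary reward; the project layout and timeout are assumptions made for the example, not anything Google described.

```python
import subprocess

def grade_with_pytest(project_dir: str) -> float:
    """Run a project's existing test suite and map the outcome to a
    binary reward: pytest exits with code 0 only if all tests pass."""
    try:
        result = subprocess.run(
            ["pytest", "-q"],
            cwd=project_dir,
            capture_output=True,
            timeout=300,  # a hung candidate shouldn't stall training
        )
    except subprocess.TimeoutExpired:
        return 0.0
    return 1.0 if result.returncode == 0 else 0.0
```

The same command a human developer runs before deploying becomes, unchanged, a grader that can score model output at massive scale.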
There’s no easy way to validate a well-written email or a good chatbot response; these skills are inherently subjective and harder to measure at scale. But not every task falls neatly into “easy to test” or “hard to test” categories. We don’t have an out-of-the-box testing kit for quarterly financial reports or actuarial science, but a well-capitalized accounting startup could probably build one from scratch. Some testing kits will work better than others, of course, and some companies will be smarter about how to approach the problem. But the testability of the underlying process is going to be the deciding factor in whether it becomes a functional product or just an exciting demo.
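It’s easy to imagine what the first version of such a kit might check. The toy grader below, with invented field names, simply verifies that a generated income statement’s arithmetic is internally consistent; a real kit would obviously go much further.

```python
def grade_income_statement(report: dict) -> float:
    """Toy consistency grader for a generated quarterly report:
    stated totals must match the arithmetic of their parts."""
    checks = [
        report["gross_profit"] == report["revenue"] - report["cost_of_sales"],
        report["operating_income"]
        == report["gross_profit"] - report["operating_expenses"],
        report["net_income"] == report["operating_income"] - report["taxes"],
    ]
    return sum(checks) / len(checks)  # partial credit per passing check

report = {
    "revenue": 1000, "cost_of_sales": 400, "gross_profit": 600,
    "operating_expenses": 350, "operating_income": 250,
    "taxes": 50, "net_income": 200,
}
print(grade_income_statement(report))  # 1.0
```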
Some processes turn out to be more testable than you might think. If you’d asked me last week, I would have put AI-generated video in the “hard to test” category, but the immense progress made by OpenAI’s new Sora 2 model shows it may not be as hard as it looks. In Sora 2, objects no longer appear and disappear out of nowhere. Faces hold their shape, looking like a specific person rather than just a collection of features. Sora 2 footage respects the laws of physics in both obvious and subtle ways. I suspect that, if you peeked behind the curtain, you’d find a robust reinforcement learning system for each of these qualities. Put together, they make the difference between photorealism and an entertaining hallucination.
To be clear, this isn’t a hard and fast rule of artificial intelligence. It’s a result of the central role reinforcement learning is playing in AI development, which could easily change as models develop. But as long as RL is the primary tool for bringing AI products to market, the reinforcement gap will only grow bigger — with serious implications for both startups and the economy at large. If a process ends up on the right side of the reinforcement gap, startups will probably succeed in automating it — and anyone doing that work now may end up looking for a new career. The question of which healthcare services are RL-trainable, for instance, has enormous implications for the shape of the economy over the next 20 years. And if surprises like Sora 2 are any indication, we may not have to wait long for an answer.