Quantum Machines and Nvidia use machine learning to get closer to an error-corrected quantum computer – TechCrunch

About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would bring together Nvidia’s DGX Quantum computing platform and Quantum Machines’ advanced quantum control hardware. We didn’t hear much about the results of this partnership for a while, but it’s now starting to bear fruit, bringing the industry one step closer to the holy grail of an error-corrected quantum computer.
In a presentation earlier this year, the two companies showed that they are able to use an off-the-shelf reinforcement learning model running on Nvidia’s DGX platform to better control the qubits in a Rigetti quantum chip by keeping the system calibrated.
Yonatan Cohen, the co-founder and CTO of Quantum Machines, noted how his company has long sought to use general classical compute engines to control quantum processors. Those compute engines were small and limited, but that’s not a problem with Nvidia’s extremely powerful DGX platform. The holy grail, he said, is to run quantum error correction. We’re not there yet. Instead, this collaboration focused on calibration, and specifically calibrating the “π pulses” that control the rotation of a qubit inside a quantum processor.
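The effect of a miscalibrated π pulse can be pictured with a little trigonometry. The snippet below is purely illustrative (it is not from the DGX Quantum stack): a resonant drive rotates the qubit by an angle proportional to the pulse amplitude, a perfect π pulse flips |0⟩ to |1⟩, and an amplitude error of a few percent leaves the qubit slightly under- or over-rotated.

```python
import math

# Illustrative only: an ideal pi pulse rotates the qubit by exactly pi radians,
# flipping |0> to |1> with probability sin^2(theta/2) = 1. If the drive
# amplitude is off by a fraction eps, the rotation becomes pi*(1 + eps)
# and the flip probability dips below 1.
def flip_probability(eps):
    theta = math.pi * (1 + eps)
    return math.sin(theta / 2) ** 2

for eps in (0.0, 0.01, 0.05):
    print(f"amplitude error {eps:+.0%}: flip probability {flip_probability(eps):.6f}")
```

The fidelity loss grows quadratically with the amplitude error, which is why even small drifts matter once gates are chained by the thousands.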
At first glance, calibration may seem like a one-shot problem: You calibrate the processor before you start running the algorithm on it. But it’s not that simple. “If you look at the performance of quantum computers today, you get some high fidelity,” Cohen said. “But then, the users, when they use the computer, it’s typically not at the best fidelity. It drifts all the time. If we can frequently recalibrate it using these kinds of techniques and underlying hardware, then we can improve the performance and keep the fidelity [high] over a long time, which is what’s going to be needed in quantum error correction.”
Constantly adjusting those pulses in near real time is an extremely compute-intensive task, but since a quantum system is always slightly different, it is also a control problem that lends itself to being solved with the help of reinforcement learning.
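A minimal, self-contained sketch of such a recalibration loop is below. Everything here is a made-up stand-in: the drift model, the Gaussian fidelity curve, and a crude finite-difference gradient step where the real system would use a reinforcement-learning agent on DGX hardware. Only the shape of the feedback loop, measure, adjust, track the drifting optimum, reflects what the article describes.

```python
import math
import random

# Toy stand-in for a drifting qubit: pi-pulse fidelity peaks when the drive
# amplitude matches a hidden optimum that wanders over time (the "drift"
# Cohen describes).
class DriftingQubit:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.optimal_amp = 0.5

    def step_drift(self):
        # Slow random walk of the true calibration point.
        self.optimal_amp += self.rng.gauss(0, 0.002)

    def fidelity(self, amp):
        # Fidelity falls off as the applied amplitude misses the optimum.
        return math.exp(-((amp - self.optimal_amp) / 0.1) ** 2)

def recalibrate(qubit, amp, lr=0.005, probe=0.01):
    # Probe both sides of the current amplitude and step toward higher
    # fidelity -- a crude gradient-ascent substitute for an RL policy.
    up = qubit.fidelity(amp + probe)
    down = qubit.fidelity(amp - probe)
    return amp + lr * (up - down) / (2 * probe)

qubit = DriftingQubit()
amp = 0.45  # start slightly miscalibrated
for _ in range(200):
    qubit.step_drift()
    amp = recalibrate(qubit, amp)
print(f"final fidelity: {qubit.fidelity(amp):.3f}")
```

Without the recalibration calls, the fidelity would decay as the optimum drifts away; with them, the controller tracks the drift and keeps fidelity near its peak, which is the whole point of running the loop continuously rather than calibrating once.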
“As quantum computers are scaling up and improving, there are all these problems that become bottlenecks, that become really compute-intensive,” said Sam Stanwyck, Nvidia’s group product manager for quantum computing. “Quantum error correction is really a huge one. This is necessary to unlock fault-tolerant quantum computing, but also how to apply exactly the right control pulses to get the most out of the qubits.”
Stanwyck also stressed that there was no system before DGX Quantum that would enable the kind of minimal latency necessary to perform these calculations.
As it turns out, even a small improvement in calibration can lead to massive improvements in error correction. “The return on investment in calibration in the context of quantum error correction is exponential,” explained Quantum Machines product manager Ramon Szmuk. “If you calibrate 10% better, that gives you an exponentially better logical error [performance] in the logical qubit that is composed of many physical qubits. So there’s a lot of motivation here to calibrate very well and fast.”
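Szmuk’s “exponential return” matches the textbook scaling of logical error rates in codes like the surface code, where the logical error behaves roughly as (p/p_th)^((d+1)/2) for physical error rate p, threshold p_th, and code distance d. The numbers below are hypothetical (the article gives none); only the shape of the scaling is the point.

```python
# Hypothetical numbers to illustrate the exponential payoff: a modest
# reduction in the physical error rate p is raised to a large power in
# the logical error rate.
p_th = 0.01               # assumed error-correction threshold
d = 21                    # assumed code distance
exponent = (d + 1) // 2   # standard surface-code scaling exponent

def logical_error(p):
    return (p / p_th) ** exponent

p = 0.005                 # baseline physical error rate (assumed)
p_better = p * 0.9        # "calibrate 10% better"
improvement = logical_error(p) / logical_error(p_better)
print(f"{improvement:.0f}x lower logical error")  # (1/0.9)**11, about 3.2x
```

So under these assumptions a 10% calibration gain buys roughly a 3x reduction in logical error, and the multiplier grows rapidly with code distance.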
It’s worth stressing that this is just the start of this optimization process and collaboration. What the team actually did here was simply take a handful of off-the-shelf algorithms and look at which one worked best (TD3, in this case). All in all, the actual code for running the experiment was only about 150 lines long. Of course, this relies on all of the work the two teams also did to integrate the various systems and build out the software stack. For developers, though, all of that complexity can be hidden away, and the two companies expect to create more and more open source libraries over time to take advantage of this larger platform.
Szmuk stressed that for this project, the team only worked with a very basic quantum circuit but that it can be generalized to deep circuits as well. “If you can do this with one gate and one qubit, you can also do it with a hundred qubits and 1,000 gates,” he said.
“I’d say the individual result is a small step, but it’s a small step towards solving the most important problems,” Stanwyck added. “Useful quantum computing is going to require the tight integration of accelerated supercomputing — and that may be the most difficult engineering challenge. So being able to do this for real on a quantum computer and tune up a pulse in a way that is not just optimized for a small quantum computer but is a scalable, modular platform, we think we’re really on the way to solving some of the most important problems in quantum computing with this.”
Stanwyck also said that the two companies plan to continue this collaboration and get these tools into the hands of more researchers. With Nvidia’s Blackwell chips becoming available next year, they’ll have an even more powerful computing platform for this project.