Compute-in-memory chip shows promise for enhanced efficiency and privacy in federated learning systems

June 24, 2025 feature
by Ingrid Fadelli, Phys.org contributing writer
edited by Lisa Lock, reviewed by Robert Egan
In recent decades, computer scientists have developed increasingly advanced machine learning techniques that can learn to predict specific patterns or complete tasks effectively by analyzing large amounts of data. Yet several studies have highlighted the vulnerabilities of AI-based tools, demonstrating that the sensitive information they are fed could potentially be accessed by malicious third parties.
A machine learning approach that could provide greater data privacy is federated learning, in which multiple users or parties collaboratively train a shared neural network without exchanging any raw data with each other. This technique could be particularly advantageous in sectors that can benefit from AI but are known to store highly sensitive user data, such as health care and finance.
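In its simplest form, this amounts to each participant training on its own data and a central server averaging the resulting parameters. The sketch below illustrates that general idea of federated averaging in Python, using a toy least-squares model and synthetic data; it is only an illustration, not the specific protocol implemented on the new chip.

```python
# Minimal federated-averaging sketch (illustrative only; not the paper's protocol).
# Each client trains on its own data and shares only model parameters, never raw data.
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One gradient-descent step of least-squares regression on a client's private data."""
    grad = features.T @ (features @ weights - labels) / len(labels)
    return weights - lr * grad

rng = np.random.default_rng(0)
n_clients, n_features = 4, 8
global_weights = np.zeros(n_features)

# Synthetic private datasets, one per client (stand-ins for e.g. hospital records).
clients = [(rng.normal(size=(32, n_features)), rng.normal(size=32)) for _ in range(n_clients)]

for _ in range(20):
    # Each client computes an update locally ...
    local_weights = [local_update(global_weights, X, y) for X, y in clients]
    # ... and the server only aggregates the resulting parameters.
    global_weights = np.mean(local_weights, axis=0)

print(global_weights)
```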
Researchers at Tsinghua University, the China Mobile Research Institute and Hebei University recently developed a new compute-in-memory chip for federated learning. The chip is based on memristors, non-volatile electronic components that can both perform computations and store information by adapting their resistance to the electrical current that has flowed through them in the past. The proposed chip, outlined in a paper published in Nature Electronics, was found to boost both the efficiency and security of federated learning approaches.
“Federated learning provides a framework for multiple participants to collectively train a neural network while maintaining data privacy, and is commonly achieved through homomorphic encryption,” wrote Xueqi Li, Bin Gao and their colleagues in their paper. “However, implementation of this approach at a local edge requires key generation, error polynomial generation and extensive computation, resulting in substantial time and energy consumption.
“We report a memristor compute-in-memory chip architecture with an in situ physical unclonable function for key generation and an in situ true random number generator for error polynomial generation.”
As it can both perform computations and store information, the new memristor-based architecture proposed by the researchers could reduce the movement of data and thus limit the energy required for different parties to collectively train an artificial neural network (ANN) via federated learning.
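The reason compute-in-memory saves data movement is that a memristor crossbar stores a weight matrix as conductances, and applying the input activations as voltages produces output currents that already equal the matrix-vector product. The short simulation below mimics that behavior in software; the differential weight mapping and read-noise level are illustrative assumptions, not device parameters from the paper.

```python
# Conceptual simulation of an analog matrix-vector multiply in a memristor crossbar.
# The differential weight mapping and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=(4, 8))          # a small layer's weight matrix
inputs = rng.normal(size=8)                # input activations, applied as voltages

# Signed weights are commonly stored as a pair of conductances (positive/negative part).
g_pos = np.maximum(weights, 0.0)
g_neg = np.maximum(-weights, 0.0)

# Ohm's and Kirchhoff's laws sum the per-cell currents in place, so the whole
# multiply happens inside the array without moving the stored weights.
currents = g_pos @ inputs - g_neg @ inputs
currents += rng.normal(scale=0.01, size=4)  # small read noise (assumed)

print(currents)
print(weights @ inputs)                     # digital reference result
```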
The team’s chip also includes a physical unclonable function, a hardware-based technique to generate secure keys during encrypted communication, as well as a true random number generator, a method to produce unpredictable numbers for encryption.
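To see where such a key and such random numbers enter an encryption scheme, the toy example below encrypts a single bit with a textbook learning-with-errors construction: the secret key stands in for a PUF-derived key, and the freshly drawn random and error values stand in for TRNG output. This is a generic illustration, not the homomorphic encryption scheme implemented on the chip.

```python
# Toy LWE-style encryption of a single bit, showing where a secret key (here standing
# in for a PUF-derived key) and random/error values (standing in for TRNG output) are
# used. Textbook illustration only; parameters are chosen for readability, not security.
import numpy as np

rng = np.random.default_rng(2)
q, n = 4096, 16                               # modulus and key length (illustrative)

secret_key = rng.integers(0, q, size=n)       # would come from the PUF in the chip's setting

def encrypt(bit):
    a = rng.integers(0, q, size=n)            # public randomness (TRNG in hardware)
    e = rng.integers(-4, 5)                   # small error term (also TRNG-sourced)
    b = (a @ secret_key + e + bit * (q // 2)) % q
    return a, b

def decrypt(a, b):
    centered = (b - a @ secret_key) % q
    return int(min(centered, q - centered) > q // 4)

a, b = encrypt(1)
print(decrypt(a, b))                          # -> 1
```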
“Our architecture—which includes a competing-forming array operation method, a compute-in-memory–based entropy extraction circuit design and a redundant residue number system-based encoding scheme—allows low error-rate computation, the physical unclonable function and the true random number generator to be implemented within the same memristor array and peripheral circuits,” wrote the researchers.
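A redundant residue number system represents a value by its remainders modulo several pairwise-coprime moduli, with extra moduli added so that a corrupted residue can be detected. The minimal sketch below uses small moduli chosen purely for illustration; the actual encoding parameters used on the chip are not described here.

```python
# Minimal redundant residue number system (RRNS) sketch: a value is stored as its
# remainders modulo pairwise-coprime moduli, and extra ("redundant") moduli let a
# faulty residue be detected. Moduli are chosen for illustration only.
from math import prod

moduli = [7, 9, 11, 13]        # information moduli
redundant = [17, 19]           # redundant moduli for error checking

def encode(x):
    return [x % m for m in moduli + redundant]

def crt(residues, mods):
    """Chinese remainder theorem reconstruction."""
    M = prod(mods)
    x = 0
    for r, m in zip(residues, mods):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def check(residues):
    """Recover the value from the information residues, then verify the redundant ones."""
    x = crt(residues[:len(moduli)], moduli)
    ok = all(x % m == r for m, r in zip(redundant, residues[len(moduli):]))
    return x, ok

code = encode(4821)
print(check(code))             # (4821, True)
code[1] ^= 1                   # flip a bit in one residue to mimic a device error
print(check(code))             # reconstruction no longer consistent -> error detected
```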
“To illustrate the functionality of this memristor-based federated learning, we conducted a case study in which four participants co-train a two-layered long short-term memory network with 482 weights for sepsis prediction.”
To assess the potential of their compute-in-memory chip, the researchers used it to enable four participants to collectively train a long short-term memory (LSTM) network, a deep learning model often used to make predictions from sequential data such as texts or medical records. The four participants co-trained this network to predict sepsis, a serious and potentially fatal condition arising from severe infections, based on patients' health data.
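For orientation, the snippet below sketches what a small two-layer LSTM classifier of this kind looks like in PyTorch; the feature count, hidden size and input data are assumptions for illustration and do not reproduce the 482-weight network trained in the study.

```python
# Sketch of a small two-layer LSTM classifier for a binary prediction task such as
# sepsis risk from a sequence of vital-sign readings. Layer sizes and data are
# illustrative assumptions, not the paper's 482-weight model.
import torch
import torch.nn as nn

class SepsisLSTM(nn.Module):
    def __init__(self, n_features=4, hidden_size=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                          # x: (batch, time, features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))  # risk score from the last time step

model = SepsisLSTM()
vitals = torch.randn(2, 24, 4)                     # 2 patients, 24 hourly readings, 4 vitals
print(model(vitals).shape)                         # torch.Size([2, 1])
```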
“The test accuracy on the 128-kb memristor array is only 0.12% lower than that achieved with software centralized learning,” wrote the authors. “Our approach also exhibits reduced energy and time consumption compared with conventional digital federated learning.”
Overall, the results of this recent study highlight the potential of memristor-based compute-in-memory architectures for enhancing the efficiency and privacy of federated learning implementations. In the future, the chip developed by Li, Gao and their colleagues could be improved further and used to co-train other deep learning algorithms on a variety of real-world tasks.
More information: Xueqi Li et al, Federated learning using a memristor compute-in-memory chip with in situ physical unclonable function and true random number generator, Nature Electronics (2025). DOI: 10.1038/s41928-025-01390-6
© 2025 Science X Network