OpenAI Launches ‘Trusted Contact’ Feature to Address Potential Self-Harm Risks – CXO Digital Pulse

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.

OpenAI has introduced a new safety feature called “Trusted Contact” for ChatGPT users, aimed at providing additional support in situations involving possible self-harm or suicidal thoughts. The company said the feature is part of its broader effort to strengthen user safety and improve crisis intervention tools within its AI platform.
The optional feature allows adult users to select a trusted person — such as a family member, friend, or caregiver — who can be notified if OpenAI’s systems detect conversations that may indicate a serious self-harm risk. According to OpenAI, the trusted contact must first accept the invitation before the feature becomes active.
OpenAI explained that ChatGPT already uses automated systems and human reviewers to monitor potentially harmful conversations related to suicide or self-harm. If the company determines that a conversation presents a serious safety concern, ChatGPT may encourage the user to reach out to their trusted contact. In certain situations, OpenAI can also send a brief alert through email, text message, or in-app notification asking the contact to check on the user’s well-being.
The company stated that privacy protections are built into the feature. Notifications sent to trusted contacts do not include detailed chat content or specifics about the conversation. Instead, the alerts are designed to remain brief while encouraging personal outreach and support.
The rollout follows increasing scrutiny of AI chatbot safety and mental health concerns. OpenAI has faced growing legal and public pressure after several lawsuits alleged that chatbot interactions contributed to emotional harm or, in some cases, suicide. The company has since expanded its safety systems, including crisis resource prompts, parental oversight tools, and stronger moderation policies for high-risk conversations.
OpenAI said the Trusted Contact feature will begin rolling out globally for eligible adult users starting May 7, 2026. The company added that users and their designated contacts can remove or modify the connection at any time through account settings.
© 2026 CXO Digital Pulse. All Rights Reserved.