OpenAI kills “short-lived experiment” where ChatGPT chats could be found on Google

A little-known ChatGPT “feature” is now gone. It could be a good thing.
On X, OpenAI Chief Information Security Officer Dane Stuckey announced that OpenAI “removed a feature from ChatGPT that allowed users to make their conversations discoverable by search engines, such as Google.” Stuckey called the whole thing a “short-lived experiment to help people discover useful conversations.”
The feature was entirely opt-in, meaning users had to make certain selections to participate, including “picking a chat to share, then by clicking a checkbox for it to be shared with search engines.”
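For context, whether a public page like a shared chat can turn up on Google generally comes down to standard web indexing signals: crawlers honor a noindex robots directive, delivered either as a <meta name="robots"> tag in the page or as an X-Robots-Tag response header. Here is a minimal sketch, in Python, of how anyone can check a public URL for that directive (the commented-out share URL at the end is a placeholder, not a real link):

```python
import urllib.request
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags on a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.directives.append(attrs.get("content") or "")

def is_indexable(url: str) -> bool:
    """True if neither an X-Robots-Tag header nor a robots meta tag
    asks search engines to stay away from the page."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        directives = [resp.headers.get("X-Robots-Tag") or ""]
        parser = RobotsMetaParser()
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    directives += parser.directives
    return not any("noindex" in d.lower() for d in directives)

# Placeholder URL -- substitute a share link you actually created:
# print(is_indexable("https://chatgpt.com/share/<your-share-id>"))
```

Note that a search engine has to recrawl a page before a newly added noindex takes effect, which is consistent with OpenAI’s warning below that already-indexed results may linger for a while.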
Stuckey explained why the company rolled back the experiment:
“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to, so we’re removing the option. We’re also working to remove indexed content from the relevant search engines. This change is rolling out to all users through tomorrow morning.
Security and privacy are paramount for us, and we’ll keep working to maximally reflect that in our products and features.”
I was unable to find out when the option was officially introduced; there was no big announcement, which, I suspect, is part of the reason for the uproar that followed.
Such an announcement might have helped users make informed decisions. The absence of that guidance, or of any firm information about the feature during its short life, also says something about how Artificial Intelligence (AI) companies view their users. As one commenter put it:
“The friction for sharing potential private information should be greater than a checkbox or not exist at all.”
Many users are conditioned to tick checkboxes before they can use something new, and they don’t read EULAs or other warnings. They simply tick every box they think they need to tick to get to the result they have in mind as fast as possible.
Even though the experiment may have been well intentioned, it reminds us of other leaks of private conversations, whether caused by a bug or working as designed. Either way, it does not help efforts to get the general public to trust AI chatbots.
Many people confide deeply personal secrets to chatbots and seek support for issues that would typically require hours of professional counseling.
OpenAI has removed the option that allowed conversations with ChatGPT to be indexed, so newly shared chats will no longer appear in search results. Still, the company warns that some already-indexed conversations may remain visible for a while because of search engine caching, even as it works to have that content removed.
Besides the obvious (but often ignored) advice of reading any warnings and privacy policies before using these apps, there are some additional precautions and habits that can help keep your personal conversations private.
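One such habit, for example, is keeping a note of every share link you create and periodically confirming that links you have since deleted really are gone. A minimal sketch, assuming you have saved your share links in a text file (the file name my_share_links.txt and the one-URL-per-line format are purely illustrative):

```python
import urllib.request
import urllib.error

def still_live(url: str) -> bool:
    """True if the shared link still serves a page (HTTP 200)."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False  # e.g. a 404 once the share has been deleted

# "my_share_links.txt" is a hypothetical file: one saved share URL per line.
with open("my_share_links.txt") as f:
    for link in filter(None, (line.strip() for line in f)):
        print(("STILL LIVE: " if still_live(link) else "gone: ") + link)
```

Keep in mind that a dead link can still appear in cached search results for a while, as noted above, so confirming the link no longer loads is necessary but not sufficient.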
In short, trust an AI chatbot with your private info the same way you would trust a “blabbermouth”—not a whole lot.