# Chatbots

Meta Faces Scrutiny Over AI Prompt Disclosure – PYMNTS.com

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Meta is facing criticism over its privacy practices across two of its platforms.

Meta’s artificial intelligence assistant may publicly share user prompts, and its apps may have exploited a technical loophole to track Android users without their knowledge, CPO Magazine reported.
Meta’s AI app introduced a pop-up warning that content entered by users, including personal or sensitive information, may be publicly shared, per a June 20 report. These prompts can surface in the app’s “Discover” feed. The feature, which launched earlier this year, showcases AI-generated content and occasionally displays user-submitted prompts, some of which have reportedly included private data such as legal documents, personal identifiers and even audio of minors.
The sharing setting is enabled by default, so users who want to opt out must manually disable it, the report said. Privacy advocates argue that no other major chatbot service offers a comparable mechanism that proactively republishes private inputs.
Consumers already have privacy concerns around generative AI. The PYMNTS Intelligence report “Generation AI: Why Gen Z Bets Big and Boomers Hold Back” found that 36% of generative AI users are nervous about these platforms sharing or misusing their personal information, and that the same hesitations keep 33% of non-users from adopting the technology.
Separately, Meta may have taken advantage of an Android system vulnerability known as “Local Mess” to harvest web browsing data, per a June 17 CPO Magazine report. The loophole involved the mobile operating system’s localhost address: native apps could listen on local ports for data sent by tracking scripts embedded in websites, potentially allowing Meta and Russian tech company Yandex to correlate users’ behavior across apps and websites. The companies may have been able to do this even when users were browsing in incognito mode or using other privacy protections, and the collected data could be linked to a user’s Meta account or Android Advertising ID.
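To make the localhost technique concrete, here is a minimal conceptual sketch, not Meta’s or Yandex’s actual implementation: a native app process binds a listener to the device’s loopback address, and a script running in the browser on the same device posts an identifier to it. The port number and the “web_id” field are hypothetical, chosen only for illustration.

```python
# Conceptual sketch of a loopback "bridge": a native app listens on 127.0.0.1
# and receives a browser-side identifier posted by a web page script running
# on the same device. Port 12387 and the "web_id" field are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class LocalhostBridge(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # At this point a real app could join the browser identifier with its
        # own logged-in account ID or the device advertising ID.
        print("received browser identifier:", payload.get("web_id"))
        self.send_response(204)
        self.end_headers()


if __name__ == "__main__":
    # Bound only to the loopback interface: the data never leaves the device,
    # which is why this channel is invisible to network-level protections.
    HTTPServer(("127.0.0.1", 12387), LocalhostBridge).serve_forever()
```

A web page script could reach such a listener with an ordinary request to http://127.0.0.1:12387/, which is why incognito mode, VPNs and similar protections that operate on outbound network traffic do not interrupt this kind of on-device handoff.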
Meta has since stopped sending data to localhost, characterizing the issue as a miscommunication over Google’s policies. Privacy watchdogs and experts say both cases could trigger regulatory action in the European Union and other jurisdictions.
Meta is already facing legal action over its privacy practices in an $8 billion lawsuit concerning alleged data misuse.
Google, for its part, is scheduled to appear in court later this month for allegedly violating the privacy of both Android and non-Android mobile phone service users.

source