# Proton’s New AI Chatbot Lumo Ensures Privacy, But What About Data Scraping?

MediaNama’s Take: While Proton’s new AI assistant, Lumo, offers a number of commendable features meant to protect user privacy, AI development itself carries inherent privacy risks. Training an AI model requires datasets scraped from across the internet, which can contain other people’s personal data. Lumo may protect its own users’ privacy, but the technology it is built on arguably depends on infringing the privacy of others.
What’s The News: Encrypted email service Proton Mail recently announced the launch of a privacy-focused AI assistant called Lumo that does not store or share user data or use it to train AI models. Proton has positioned Lumo as an answer to conventional AI chatbots from Big Tech firms, which often use customers’ chat history to train AI models.
In contrast, Lumo contains a host of features centered around privacy and data protection. The app claims to protect all user conversations with end-to-end encryption, meaning that they can only be accessed from the user’s device. Not even Proton can read these chats. In addition, Proton keeps no records of the conversations on its servers.
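
For readers unfamiliar with the property being claimed here, the sketch below illustrates the general idea of client-side (end-to-end) encryption in Python, using the widely available `cryptography` library. This is a minimal illustration with assumed names, not Proton’s actual implementation, which the article does not describe; the point is only that a service holding ciphertext but not the key cannot read the underlying conversation.

```python
# Minimal sketch of client-side ("zero-access") encryption, assuming a
# symmetric key that exists only on the user's device. This is NOT
# Proton's Lumo implementation, just an illustration of the property.
from cryptography.fernet import Fernet

# Key generated and kept on the user's device; the server never sees it.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

# The chat message is encrypted before it ever leaves the device...
ciphertext = cipher.encrypt(b"A sensitive question about my health")

# ...so a server holding only the ciphertext learns nothing about it.
# Only the device holding the key can recover the plaintext.
assert cipher.decrypt(ciphertext) == b"A sensitive question about my health"
```

Real end-to-end encryption schemes negotiate keys between devices using asymmetric cryptography rather than a single locally generated key, but the resulting guarantee is the same: the operator stores only data it cannot decrypt.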
As a result, it is impossible for the company to share user conversations with government agencies, third-party vendors, or advertisers. Lumo also does not use data from users’ interactions with the chatbot to train other AI models. It runs on open-source language models hosted in European data centres, where data is protected by the General Data Protection Regulation (GDPR). Lumo additionally offers a Ghost Mode, which deletes a conversation as soon as it ends.
The proliferation of AI chatbots and assistants has brought accompanying privacy risks. For example, OpenAI, the creator of the popular AI application ChatGPT, does use the conversations people have with ChatGPT to train future AI models. Since many users end up asking ChatGPT sensitive personal questions, such as those concerning their physical or mental health or their finances, models trained on this data could end up reproducing that information in responses to other users.
Another cause for concern is the growing closeness between OpenAI and other AI companies and the US military complex. OpenAI recently won a $200 million contract from the US Department of Defense to develop generative AI capabilities for “warfighting.” Around the same time, top executives from AI firms like OpenAI, Meta AI, and Palantir joined the US Army as reservists to apply AI to military use.
In such a scenario, the fact that OpenAI stores user conversations, which might include sensitive personal information, raises questions about what access the US government has to that data. Just last year, the Biden administration signed into law an act that compels US-based enterprises to share “communication data” with American agencies, including data that comes from foreign citizens.
While Lumo offers a number of privacy protections that other chatbots don’t, more serious questions arise over the privacy risks of AI development in general.
Most AI models are trained on publicly available datasets scraped from the internet, which can include personal data. India’s Digital Personal Data Protection Act (DPDPA), for instance, does not protect publicly available personal information, leaving it free to be used for AI training.
The basic privacy principle of data minimisation is itself at odds with AI development, which requires enormous quantities of data to create functioning models.
The risks of personal information such as photos showing up in AI datasets were starkly illustrated by the case of an American man who used an AI model to generate child sexual abuse material. The model in question was trained on a dataset that contained numerous images of children, alongside pornographic and violent imagery.
Despite its stated commitment to transparency, Proton Mail does not reveal the “open-source foundational models” it used to build Lumo or the datasets that were part of the model’s training or fine-tuning. While Lumo may be beneficial for the privacy of its users, it is ultimately based on a technology that carries inherent privacy risks.