Canadian probe finds ChatGPT maker OpenAI violated privacy laws | Daily Sabah

Canadian privacy authorities concluded that OpenAI violated federal and provincial privacy laws during the development of ChatGPT, adding fresh pressure on AI companies over the mass collection and use of personal data.
The investigation, conducted by the Privacy Commissioner of Canada, alongside provincial counterparts in Quebec, British Columbia and Alberta, examined how the U.S.-based AI company collected, used and disclosed personal information while developing ChatGPT.

Regulators found that OpenAI over-collected personal information, lacked valid consent and transparency, and produced factual inaccuracies involving individuals, according to the regulators' statement.
The findings also revealed that Canadians faced obstacles in accessing, correcting and deleting their personal information, and that the company demonstrated insufficient accountability for the data it controlled.

In response to the findings, OpenAI has since significantly narrowed the personal and sensitive information used to train new ChatGPT models and committed to better informing Canadians of the implications of using the service.
Privacy Commissioner Philippe Dufresne's office declared that the complaint is well-founded and conditionally resolved, and said it will monitor the company's compliance going forward.
The case comes as OpenAI faces criticism over its handling of the deadly Tumbler Ridge school shooting, after CEO Sam Altman issued an apology to victims' families. The families said OpenAI was aware that the shooter had exchanged disturbing, violence-laden messages with its ChatGPT chatbot months before the attack but chose not to alert law enforcement.
“Appropriate safeguards are the cornerstone of responsible innovation,” said Dufresne, adding that he expects the investigation's findings to shape how other AI-powered technologies are designed with privacy in mind.

He also urged parliament to modernize privacy laws, arguing that updated legislation would better support the safe deployment of new technologies.
“This milestone investigation highlights the importance of prioritizing privacy in the development, deployment, and ongoing evolution of artificial intelligence so that Canadians are able to safely use and leverage the benefits of these technologies,” he said.
