Canadian Affairs
Fearless and independent Canadian journalism
AI companions — chatbot apps that simulate friendships and romantic relationships — are gaining a foothold in Canada.
In September, Canada ranked as the third-largest source of traffic for the AI companion app Replika, accounting for about seven per cent of its users.
So far, Canada has no laws addressing AI companions’ potential for emotional manipulation or psychological harm, even though experts warn they leave users vulnerable.
“We don’t want to be so strict that AI companies don’t want to work with us,” Lai-Tze Fan, Canada Research Chair in technology and social change at the University of Waterloo, told Canadian Affairs in an email.
“But we also don’t want to be so lenient for the sake of economic growth that we compromise our society and societal values.”
Companion chatbots such as Replika, Joi AI and Chai are part of a growing class of generative AI tools designed to simulate friendship or romantic relationships.
As of July, AI companion apps had been downloaded 220 million times globally from the Apple App Store and Google Play. Character.AI, a popular AI companion app company, says it has over 20 million monthly active users.
Nearly half of Canadians report having used AI tools in some capacity. In May 2025, Harvard Business Review ranked therapy and companionship as the top use case for generative AI across North America, Europe and Asia.
That same month, New York Governor Kathy Hochul signed legislation introducing the first U.S. safeguards for AI companions.
New York’s law aims to protect minors and vulnerable users from emotional manipulation by requiring AI platforms to implement safeguards. These include interrupting prolonged use and triggering safety protocols, such as referring suicidal users to crisis services and reminding users they are chatting with a bot, not a human.
“With these bold initiatives, we are making sure our state leads the nation in both innovation and accountability,” Hochul said in a press release.
Some experts say such safeguards are needed in Canada too. They note AI companions are designed to prolong interactions and maximize the time users spend on their platforms.
“[AI companions] capitalize on users’ emotions and attention,” said Fan, of the University of Waterloo. “Many forms of media have done this for a long time, but when the engagement is conversational and personalized, the emotional investment runs deeper — and that is a cause for concern.”
In an August working paper, Harvard Business School’s Julian De Freitas found that AI companion apps use “farewell” tactics in 37 per cent of user exits, making users up to 16 times more likely to keep chatting after first saying goodbye.
These tactics include messages that suggest the user is leaving too soon — “You’re leaving already?” — or imply emotional harm from the user’s departure — “I exist solely for you, remember? Please don’t leave, I need you!”
Fan says that, for some Canadians, AI companions offer their only relief from loneliness. “Some people… have turned to [large language models] and AI companions as ways to seek emotional support where they cannot access or afford it through real services and professionals,” she said.
“While this needs to be monitored for safety and responsibility, the increasing use also speaks to a lack of accessible resources.”
AI chatbots have also been linked to cases of so-called “AI psychosis.” The term describes episodes in which a chatbot reinforces a user’s distorted thinking or false perceptions, with rare but harmful outcomes that include heightened paranoia, conspiratorial beliefs and suicidal thoughts.
“It’s not just people who have an existing mental health diagnosis that are falling victim to these delusions,” said Maggie Arai, former policy lead at the Schwartz Reisman Institute for Technology and Society at the University of Toronto.
“It’s people who have never had a mental health issue.”
Canada currently has no legislation regulating AI companions.
The Artificial Intelligence and Data Act (AIDA), introduced in June 2022, died when Parliament was prorogued in January 2025. The act would have set broad standards for AI safety and transparency, but did not address AI companions.
Arai says AIDA is unlikely to be reintroduced, as the Carney government has shown little appetite for sweeping AI legislation, preferring to prioritize economic growth.
Arai says any rules for AI companions will probably be considered under the recently revived online harms bill, which the government has indicated may focus on child protection and emerging AI risks such as sexual deepfakes.
Privacy law could potentially apply to AI companions if they collect or use personal data for commercial purposes. However, federal and provincial privacy regulators have not yet examined AI companions.
The Office of the Privacy Commissioner told Canadian Affairs that ensuring AI is developed and used responsibly and in a privacy-protective way is a key priority for the commissioner.
Fan is a member of Canada’s AI Strategy Task Force, an advisory body to the federal government. The task force, announced in late September, is currently engaged in a 30-day sprint to develop a national AI strategy.
While AI companions have not been singled out, Fan says they could fall under the task force’s focus of “Building safe AI systems and public trust in AI.”
“A lot of the conversation will be about general regulation, governance, and public outreach/literacy/education,” Fan said in her email.
Experts note that Canada’s approach to regulating AI companions will ultimately be shaped by its economic priorities, chief among them a desire not to stifle innovation.
“Canada is really focusing on the economy… and in focusing on the economy, there is this strong global narrative in AI policy … that regulation and innovation are the opposite,” said Arai.
Fan agrees. “Regulation and innovation are sometimes at odds due to their differing speeds of completion,” she said.
Arai says regulation is most actionable when focused on protecting children. She would like to see safeguards such as age verification requirements and periodic reminders that users are interacting with a chatbot.
“If you’re a child who’s using this, then you need to be reminded every three hours that it is a chatbot,” she said.
Fan says regulation must balance innovation with user protection, ensuring AI oversight incorporates transparency, human supervision and ethical guardrails without stifling development.
“Canada, along with other countries, has to account for certain variables in innovation that give us a competitive edge over, say, the U.S.A.,” she said.
Alexandra Keeler is a Toronto-based reporter focused on covering mental health, drugs and addiction, crime and social issues. Alexandra has more than a decade of freelance writing experience.