Miller and Solomon at Gen(Z)AI by Michael Geist CC BY 2.5 CA
The frenzy to ban kids from social media continues to grow with Culture Minister Marc Miller telling a House of Commons committee that the government has no choice but to act. Miller’s comments are consistent with the federal Liberal policy convention vote backing a minimum age of 16 and Manitoba Premier Wab Kinew announcing that his government will be the first in Canada to ban kids from both social media and AI chatbots. The problem, as I documented in detail last week, is that good intentions do not make for good policy. In this case, a social media ban is bad policy because it does not address the underlying problems with the platforms, evidence to date suggests it doesn’t work, and it creates its own harms. But the bad policy does not end there, as the possibility of extending that same framework to AI chatbots is now squarely on the table. This post examines the implications of a ban on kids’ use of AI chatbots, arguing that such an approach is even worse than a social media ban. To be clear, regulation of AI chatbots is needed, but a ban leaves the genuine concerns associated with AI chatbots largely untouched.
Concerns about AI chatbots are not imagined. As the services have become increasingly popular, so too have the risks, with tragic incidents such as the death of Adam Raine and the possibility that OpenAI may have identified the risk of harm in advance of the Tumbler Ridge shooting tragedy. While research into chatbot effects is still in its early stages, there is a growing conversation about the need for regulation. That discussion at times conflates two issues. The first is whether and how to regulate AI chatbots at all, which is a genuinely complex policy problem that affects every user. The second is whether AI chatbots warrant a kids-specific access ban on top of whatever general regulation might apply. This post unpacks three issues: what makes AI chatbots different from social media, why an AI chatbot ban for kids is a bad idea, and what a more effective regulatory model would look like.
What makes AI chatbots different from social media
Several considerations should shape any AI chatbot regulatory framework, and none of them point toward an age-based ban as the right answer. The first is the definitional problem. “AI chatbot” is not an established regulatory category in the way that “social media platform” was when the Online Harms Act was drafted. A narrow definition limited to consumer-facing products such as ChatGPT or Claude captures the products of immediate concern but leaves the same underlying models accessible through APIs, third-party wrappers, embedded uses, and the AI features that have rapidly become standard infrastructure in everyday digital tools. In short, AI is everywhere: Google search now responds to queries with AI Overviews, Microsoft has integrated Copilot across its Office products, and Apple’s operating systems include AI features at the system level. Identifying what would be covered in a regulatory model is much more complex than it is with social media.
The second is the input-output distinction, which I discussed in detail in the wake of the Tumbler Ridge shootings and the question of whether OpenAI should have reported the shooter’s account activity to police. What users tell chatbots and what chatbots produce as outputs are not the same regulatory problem. Chatbot prompts are far closer in character to search queries and private messages than to public social media posts. Treating the input side (i.e., prompts) as something platforms should be obliged to monitor and potentially report leads quickly to a system of widespread corporate surveillance over what people may reasonably expect to be private interactions. The output side is different. Concerns about the accuracy of information returned, the safety of responses on topics like self-harm, and the design choices that draw users into emotionally intense interactions all involve what the system itself generates. These are conventional questions of product design, algorithmic accountability, and corporate safeguards that are better addressed through a legislated duty to act responsibly than through a surveillance regime.
Other jurisdictions have begun to legislate on AI chatbots, with a clear pattern of choosing regulation over prohibition. For example, California’s Senate Bill 243, signed by Governor Gavin Newsom and in effect since January, regulates “companion chatbots” through a targeted set of requirements that include clear disclosure that users are interacting with AI, mandatory crisis-response protocols for content involving suicide or self-harm, restrictions on sexually explicit content for users known to be minors, periodic reminders for minors that the chatbot is not human, and a private right of action for people injured by violations. What California specifically chose not to do was equally significant. The same week he signed SB 243, Newsom vetoed Assembly Bill 1064, which would have prohibited AI companions for minors unless the products were “not foreseeably capable” of harm. His veto message warned that the prohibition was so broad it could effectively amount to a total ban on AI use by minors. New York’s S-3008C, enacted earlier in 2025, took a similar disclosure-and-safety-protocol approach without an access ban.
Why a kids-specific AI chatbot ban would make things worse
While there are real benefits to properly targeted AI chatbot regulation, a ban for those 16 and under would be a mistake. The reasons echo my earlier post on a potential social media ban, but are even more pronounced in the context of AI.
First, the age verification problem is considerably worse. Whereas a social media ban requires age verification for a defined set of platforms, an AI chatbot regime would extend that surveillance infrastructure across an open-ended and growing set of services into which AI is being integrated, effectively making Canadians’ online activity contingent on submitting ID to third-party verification services. Law professor Eric Goldman has labelled this regulatory model “segregate-and-suppress”, capturing how age authentication compels verification of every user in order to suppress some users’ access.
Second, the costs of cutting young Canadians off from AI are concrete and substantial in ways that the costs of a social media ban are not. AI tools have demonstrated educational, productivity, and accessibility benefits. A kids’ ban sacrifices those benefits in exchange for an enforcement regime whose effectiveness is at best unknown.
Third, the substitution problem is worse. A teen blocked from using Instagram migrates to less-moderated social platforms, but a teen unable to access ChatGPT or Claude is likely to migrate to open-source models running locally on a laptop or offshore services with no safety teams at all. The major commercial AI companies have their problems, but they are the ones with dedicated trust and safety operations, suicide-prevention routing, and the public reputational stakes that drive ongoing investment in safety research. A regulatory framework that pushes minors away from those products and toward whatever they can find through a free VPN increases the risk to kids.
Fourth, the Charter analysis is at least as serious as on the social media side and likely more so. Section 2(b) protects expression, and Supreme Court of Canada jurisprudence has long recognized that the guarantee covers both the conveying of ideas and the receiving of them. A teenager researching a medical condition, learning to code, exploring identity questions, doing homework with AI assistance, or asking factual questions about the world is engaged in receiving expression at the core of what section 2(b) protects, not at its periphery. Children are increasingly recognized as rights-bearers under the Charter and under international instruments such as the United Nations Committee on the Rights of the Child’s General Comment 25 on children’s rights in relation to the digital environment. A wholesale denial of their access to a major source of information and expression is incompatible with that recognition.
Fifth, there is no good test case yet for whether an AI chatbot ban actually works. Australia’s under-16 social media ban has produced three months of compliance data showing that roughly 70 per cent of previously active under-16 users still have access to at least one regulated platform. In other words, social media bans have not yet been shown to work. The verification failures the eSafety Commissioner has documented for social media will be more severe for AI services because the same models can be reached through more interfaces. Manitoba and the federal government would be moving onto policy ground that has not been tested anywhere, tackling a more difficult version of a problem that the only existing test case has not solved.
Toward a more effective regulatory model
My post arguing against the kids’ social media ban garnered considerable attention, but some asked what alternatives are available to address the problem. Properly scoped regulation could address most of these concerns. I would point to three measures, none of which is an age-based access ban.
The first is an AI Transparency Act of the kind I have argued for in committee testimony and elsewhere. The Tumbler Ridge debate demonstrated that few had a clear picture of what OpenAI’s safety policies were or how they were enforced. A transparency framework would require disclosure of corporate safety policies, protocols for handling content involving suicide and self-harm, practices involving law enforcement reporting, and the age-related restrictions companies themselves apply. Some of what governments are now considering is already the operating policy of the major commercial chatbots. Anthropic’s terms of service for Claude require users to be 18 or older. OpenAI requires users to be at least 13, with parental consent up to 18, and has tightened its controls in the wake of the Raine litigation. Mandatory disclosure of those policies and how they operate in practice would let policymakers and the public see what is already happening before legislating around it.
The second is modernized privacy legislation that addresses both ends of the chatbot interaction. The first piece is conventional and well-developed in Canadian privacy law: rules governing the collection, use, retention, and security of the personal information that users provide as inputs to AI systems. The second piece is newer and likely the more important issue going forward, as I argued in a recent Globe and Mail op-ed on the limits of de-identification in an AI environment. The concern is not what personal data goes into AI systems but rather what personal information comes out. Modern AI systems can access publicly available data from multiple sources, combine fragments that are individually harmless, and draw inferences that re-identify individuals from information that was never intended to be personally identifiable. A privacy framework that handles inputs well but ignores outputs will not do the work that needs to be done on AI.
The third is an enforceable duty to act responsibly tailored to the AI chatbot context. The tailoring matters because chatbots are genuinely different from social media, and enforceability matters because voluntary commitments are insufficient, as Anthropic’s recent walk-back of the central pledge of its Responsible Scaling Policy made clear. For example, the architectural reality of chatbots, with output generated in response to user prompts rather than pushed by an algorithmic feed, makes age-tiered design genuinely feasible in ways it is not for social media. A duty that mandates and audits developmentally appropriate design across different ages is the version of age-related restriction that fits the technology. In other words, regulation shouldn’t treat 10-year-olds and 16-year-olds the same when it comes to AI.
If governments really have no choice but to act, they should know that AI transparency, privacy protection, and an enforceable duty to act responsibly would address many of the concerns associated with AI chatbots. Meanwhile, an age-based ban would leave most of these issues untouched in favour of a politically appealing but largely ineffective approach. I ultimately believe that governments have a choice. They should reject the age-gating impulse and get on with the harder and more useful work of building an effective Canadian model for AI regulation.