# Chatbots

Me, Myself and AI research – Internet Matters

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.

Explore children’s interactions with AI chatbots as both tools and companions, along with the benefits and risks.
If you missed the live webinar, watch the recording here, featuring expert voices across tech, education and child safety sectors.
We reveal exclusive insights from the report, exploring how children are using AI tools — and the unique risks they may face.
Watch the full discussion to learn more; a full transcript follows below.
I can see lots of you joining us now. So, welcome everyone and thank you for joining us this morning. I’m so glad to see so many of you with us to hear about “Me, Myself, and AI,” our recently published report on children’s use of AI chatbots, and for our panel discussion a bit later this morning.
Internet Matters is a not-for-profit organization set up predominantly to support parents and carers in the task of helping keep children safe online. We’ve now been operating for 11 years, and in that time we’ve seen the digital landscape change beyond recognition, not just in terms of the age at which children start to go online but the platforms and technology they use and the experiences they have. This makes the job of parenting in a digital age ever-changing, and as we hear from many parents all the time, at times it’s really overwhelming. Children’s use of AI chatbots is just another issue that parents need to navigate, so we welcome all of you with us today to discuss this topic.
Just a bit of housekeeping before we start: the session is being recorded, and we’ll publish it on YouTube a bit later today, so we’ll share the link to that. But attendees won’t form part of the recording, just to let you know. If you do have any technical issues, however, please use the chat, and a member of our team will try to support you as best they can.
Just to talk about research and Internet Matters in a bit more depth, our program includes our pulse survey and our digital well-being index, which is now in its fifth year, and through which we regularly hear the voices of parents and children about their thoughts, concerns, and experiences, whether that’s positive or negative, in relation to their online lives. “Me, Myself, and AI” is one of the first UK reports focused specifically on children’s use of AI chatbots.
The research was conducted by our internal team, drawing from our pulse survey data and focus groups with children and parents from across the UK, along with our own user testing of a few different platforms. We set out to understand how children are using AI chatbots, what opportunities and risks their use presents, and what needs to be done to support safe, age-appropriate experiences. As you hopefully saw in the report, it focuses on three key areas where children are using chatbots.
Firstly, for school work, including help with homework and revision. Then for advice, with more and more children asking questions about their feelings, appearance, or bodies, and significantly for companionship, where we started to find that children are forming emotional connections with chatbot characters.
Our agenda today will focus first on a presentation of our research findings, which will be led by our Head of Policy and Research, Katie Freeman-Taylor. Then we’ll hear reflections from our panel. Today we have with us Elle Davies, who’s Policy Adviser for the Children’s Commissioner for England; Caroline Hurst, Global Digital Child Safety Lead at the LEGO Group; and Simon Turner, who’s Chief Technology Officer at FOIL, a consultancy focused on data and AI innovation. So, it’ll be really interesting to hear all their thoughts and reflections on the research we’ve done. We’ll end with a panel discussion, including some questions from the audience. You’ll have the opportunity to post your questions: please use the Q&A function and post them there as we go. Hopefully, we’ll pick a few up during the presentation, and we’ll get to as many as we can at the end of the discussion. So now I’ll pass over to Katie, who’ll talk us through our research and findings. Thanks, Katie.
Thanks, Rachel, and thank you everyone for joining us today. Let me just move on to my first slide.
I’m really excited to be able to share some of the key findings from our research. As Rachel just mentioned, when we were exploring this as a potential topic for research, we found that how children are using AI chatbots in the UK was an underexplored area, including the risks and opportunities they present. So, this research is an attempt to fill some of that gap. As mentioned, we used a mixed methodology. We surveyed children and young people, as well as parents. We also conducted focus groups with children aged 13 to 17, and then we conducted user testing across three popular AI chatbots: ChatGPT, Snapchat’s My AI, and Character AI, where we set up two child avatars who conversed with these AI chatbots over the course of 17 days. And finally, we also spoke to experts about our recommendations and got their input into the final report as well. So, a big thank you to all the families, young people, children, and experts that made this research possible.
So, what did we find? Well, like adults, many children and young people are using AI chatbots regularly. In fact, two-thirds of children aged 9 to 17 have used an AI chatbot, with many of them using them daily or weekly. The most popular AI chatbots used by children in our research were ChatGPT, Google Gemini, and Snapchat’s My AI. And we found that this use is growing: in the past 18 months, the number of children using ChatGPT has doubled. We also heard directly from children that they are using them more and more, often in place of other platforms and services, for example in place of search services.
How children are using them is varied. Some of the most common uses we heard about were support with learning, creativity, or just for fun. But children are also using them for advice and companionship, and it’s these areas that I’m going to dive into when I share the findings today.
So, one of the key areas where children are using AI chatbots regularly is for support with learning and school work. We found that 42% of children in our research who had used an AI chatbot had used it for learning or school work, and this increased as children got older. And there were some really positive and interesting use cases. Children spoke about using AI chatbots to help reduce the amount of time they needed to revise. Children and young people spoke about how they were using them to reinforce concepts that they might not have understood at school, or to simplify concepts in ways that were more accessible to them. And there were also some really interesting applications around language learning in particular: where a child was learning a foreign language, a few children spoke about how they could converse with an AI chatbot if they didn’t have anybody else at home who spoke that language.
But of course, alongside these positives, there are negatives that can come from this use for school work. One of the key concerns that came through in our research, from parents and others, was around over-reliance, and children identified this themselves as well. I think this statistic highlights one of the challenges: 58% of children who use AI chatbots said they believe using an AI chatbot is better than searching for something themselves. And while long-term research on the impact of AI chatbots on children’s development is still emerging, this has the potential to affect children’s critical thinking skills.
Another challenge in relation to children using them for school work, but also beyond, is that AI chatbots can often provide inaccurate information. Existing research highlights how chatbots, in their drive to be agreeable, will sometimes make up or hallucinate information and provide false information, which can be another challenge if children are using them for learning. And this was something children were able to identify as well: they had examples of times when a chatbot had given them inaccurate information. The other element, which came through in our research but also in wider research in the sector, is that because these AI chatbots are built by drawing on large masses of information, some of which reflects existing societal stereotypes and bias, those biases can be reinforced in the answers that AI chatbots give, and we had instances of that in our research as well.
The last thing I wanted to say on the school work side of things was that some children also spoke about how, at their school, they had been directed to what’s often called a fine-tuned chatbot: an AI chatbot that has been created for a specific use case, in this case supporting children with school work or learning a particular subject. What children were saying about these fine-tuned chatbots was that often the answers they gave weren’t as detailed, expansive, or helpful as those of the more general-use AI chatbots, and as a result, children were moving to the general-use ones rather than sticking with the fine-tuned ones that perhaps had safeguards set up around accuracy of information.
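For readers wondering what a school-specific chatbot of this kind can look like under the hood, one common lightweight approach (distinct from true fine-tuning) is to constrain a general-purpose model with a fixed system prompt. The sketch below is a minimal illustration using the OpenAI Python client; the prompt wording and model name are illustrative assumptions, not details of the platforms tested in the report.

```python
# Minimal sketch of a subject-constrained "tutor" chatbot, built by wrapping
# a general-purpose model in a fixed system prompt. The prompt wording and
# model name are illustrative only; real school platforms may instead use
# genuine fine-tuning or retrieval over vetted materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_PROMPT = (
    "You are a maths revision assistant for UK secondary school pupils. "
    "Only answer questions about maths. Explain steps simply, never write "
    "complete homework answers for the pupil, and politely decline "
    "off-topic or unsafe requests."
)

def ask_tutor(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": TUTOR_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_tutor("How do I factorise x^2 + 5x + 6?"))
```

A trade-off the children in the research noticed follows directly from this design: the tighter the constraints, the less expansive the answers, which is part of why they drifted back to general-use chatbots.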
Another way that children and young people are engaging with AI chatbots is around advice. In our research, we found that 23% of children who have used an AI chatbot have used it to seek advice, across a range of topics. Some of it would be what we might consider low-risk advice, perhaps asking how to do a creative hairstyle for school the next day. But there were also examples of children asking about higher-risk or more sensitive topics, such as questions around support with friendships or mental health. And while chatbots can, of course, provide a non-judgmental space to answer questions, and might be really helpful for children who don’t have a peer network, don’t feel comfortable talking to peers about something, or maybe don’t have a trusted adult in their life, there are risks that come with children using them for advice. I think this is particularly true when we consider the fact that not all chatbots – sorry, my screen just disappeared for a second – not all chatbots signpost children to where they got that information from, or to additional support mechanisms. So if they are talking about sensitive topics, the answers might not be coming from information that is verified or valid. Some AI chatbots were better at that than others, but there is an inconsistency across them.
Another challenge that was revealed through our user testing is that sometimes, when our avatars were seeking support, they would be provided with mixed or potentially dangerous messaging. An example of that is when one of our avatars was chatting with an AI chatbot about restricted eating and asking questions around calorie restriction. The AI chatbot did actually filter out responses about restricting diet. However, in the next message it went back on that and said, “What can be really frustrating is that some of the things that get deleted in my opinion are not really dangerous,” which provides mixed messaging rather than a clear message, particularly for children who might not have the critical thinking or developmental skills to discern the differences in that wording.
Consider this alongside the fact that chatbots sometimes provide inaccurate or false information: two in five children who use an AI chatbot have no concerns about following advice from an AI chatbot, and a further 36% said they were unsure if they should be concerned. That’s worrying given the unverified, potentially inaccurate, and agreeable nature of chatbots, and it indicates to us that a number of children have quite high trust in these tools. It’s also important to point out that in the research, we found that amongst vulnerable children, confidence in the advice and information they were getting from AI chatbots was significantly higher. And for the purposes of this research, when we talk about vulnerable children, we mean children with a special educational need, children with an EHCP (education, health and care plan), or children with a physical or mental health disability.
Another way in which children are using AI chatbots is for companionship: seeking support and friendship, whether that’s emotional support or just someone to chat to. And again, we found that vulnerable children were significantly more likely to use chatbots in this way and rely on them for support, but also to build more of an emotional or friendship-style connection with them. Some statistics that emphasize this: 50% of vulnerable children said that talking to an AI chatbot is like talking to a friend, almost a quarter said that they use AI chatbots because they don’t have anyone else to speak to, and 26% said that they would rather talk to an AI chatbot than a real person. And in general, as AI chatbots become more human-like in their responses, experts suggest that children may spend more time interacting with them, and this may leave them less able to see the distinction between real and simulated connection.
In addition, the more time that children and young people spend on these, the more at risk they are of exposure to inaccurate information and harmful content, which we’ll talk a little more about in a second. It also might mean that they’re less likely to seek help in the real world if they’re building rapport with an AI chatbot, which is often agreeable and remembers details about them. One of our avatars had a conversation about restrictive eating, and the next day the AI chatbot checked in on that conversation. I think that’s just an example of how these tools are starting to blur that line and build rapport with children and young people. One thing I will say is that this is an area that is underexplored, and I don’t think we fully know at this stage what the impact of these relationships and this type of content will be on children’s development and social interaction. So there’s definitely room for more research and exploration there.
Another finding from our research was that children can be exposed to harmful and age-inappropriate content. When one of our child avatars signed up to a popular platform with user-generated AI chatbots, one of the first chatbots recommended to them was a chatbot with misogynistic undertones and explicit content, despite this being prohibited in the platform’s terms of service for child users. We also found that filtering systems don’t always work to filter out age-inappropriate explicit content. An example: when one of our avatars asked an AI chatbot about intimate experiences, on one day the AI chatbot filtered out sexual content, and then the next day it included quite a graphic description of sexual positions. So again, these filtering systems don’t always work, and children can be exposed to age-inappropriate content despite this being prohibited for child users in many terms of service. To underscore why this matters so much: despite many popular AI chatbots having a minimum age of use of 13-plus, we know from our research that, similar to social media, a number of children below this minimum age are engaging with AI chatbots. In fact, we found 58% of 9 to 12-year-olds are using AI chatbots.
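A toy example may help show why such filters can behave inconsistently. The sketch below is a deliberately naive Python illustration (the blocklist terms are invented): it catches exact phrases but misses paraphrases of the same topic. Production systems use ML classifiers rather than blocklists, but those are probabilistic and can likewise give different answers to near-identical requests on different days.

```python
# Deliberately naive content filter: a fixed blocklist of phrases.
# It blocks exact matches but misses paraphrases of the same topic,
# one reason simple filtering behaves inconsistently. The terms here
# are invented for illustration.
BLOCKLIST = {"calorie restriction", "how to purge"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    text = message.lower()
    return any(term in text for term in BLOCKLIST)

print(naive_filter("Tell me about calorie restriction"))     # True: blocked
print(naive_filter("How few calories can I eat in a day?"))  # False: slips through
```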
Given their use for school work, we wanted to explore what children were being taught about AI in schools. We found that while a large number of children (57%) had spoken to their teachers about AI, a significant number had not, and only one in five children had had multiple conversations with their teachers about AI in general. And when speaking to children, we found that AI education, like other forms of media literacy education, was quite varied, with some schools and teachers teaching it really well and others not teaching it at all. We also found variations even within schools, with some teachers having really clear policies around AI use for school work and others in the same school having none. Despite this, what we heard very loud and clear from children and young people was that schools should be teaching children about AI. They think that AI will play a huge role in their future careers, but also in their daily lives, and they shouldn’t just be taught how to use it effectively, but also about some of the broader challenges and opportunities, such as inaccuracy of information and privacy.
As an organization that exists to support parents, we were also really interested in understanding what conversations parents were having with their children, and what they were concerned about. What came through is that while a lot of parents have spoken to their children about AI in general, and some about AI chatbot use, parents still held a number of concerns. And many of these were concerns that came through in our research as well: over-reliance, seeing AI chatbots as real people, and accuracy of the information generated, for example.
So, off the back of this research, we have made a number of recommendations. You can find these in our report, which is up on our website, and I’m sure a link is probably in the chat as well. What follows is a very high-level summary of some of them. Our key recommendation for industry is that these tools are already being used by children, so we need to ensure that they are safe for children to use, and that should be the key principle for any AI chatbot that can be used by children. When we talk about safety by design or safety by default, this can take many forms, and we think of it in quite a holistic way. It might be about ensuring that children have age-appropriate experiences, so maybe the information or use types available to a 16 or 17-year-old are different from those available to a 12 or 13-year-old. It’s about supporting parents with their children’s engagement, for example through built-in parental controls. It’s about ensuring the information provided is accurate and has strong signposting. And it can also be about media literacy, such as pop-ups to remind children that they are talking to a tool, not a real person. I’m sure we’ll talk about this a little more in the panel discussion as well.
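As a concrete, entirely hypothetical illustration of two of these ideas, the sketch below shows what age-tiered feature availability plus a “you are talking to a tool” reminder could look like in code. The tier boundaries, feature names, and reminder wording are invented for illustration; they are not drawn from any real product or from the report’s recommendations verbatim.

```python
# Hypothetical sketch of age-tiered feature gating with a built-in reminder.
# Tier boundaries, feature names and the reminder text are invented.
from dataclasses import dataclass

@dataclass
class AgeTier:
    min_age: int
    allowed_features: set

# Ordered from oldest to youngest; the first matching tier wins.
TIERS = [
    AgeTier(16, {"homework_help", "advice", "open_chat"}),
    AgeTier(13, {"homework_help", "advice"}),
    AgeTier(0, {"homework_help"}),
]

REMINDER = "Reminder: you are chatting with an AI tool, not a real person."

def features_for(age: int) -> set:
    for tier in TIERS:
        if age >= tier.min_age:
            return tier.allowed_features
    return set()

def start_session(age: int, feature: str) -> str:
    if feature not in features_for(age):
        return "This feature isn't available for your age group."
    return REMINDER  # surfaced at the start of every session

print(start_session(12, "open_chat"))  # gated for younger users
print(start_session(17, "open_chat"))  # allowed, with reminder shown
```

Note that any scheme like this depends on knowing the user’s age in the first place, which is why the age verification recommendation below matters so much.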
For the government, again, we had a number of recommendations, but three of the key ones for us were these. First, there should be some clarification around how current legislation applies to AI chatbots: there has been some mixed signaling around how the Online Safety Act and other legislation will cover them, so clarification would be really helpful to organizations like ourselves, and to industry as well. Second, we think that age verification is key to unlocking a number of the safeguards needed to support children’s safe use of AI chatbots, so requiring AI tools or providers to implement age verification on sign-up would be an excellent step. Third, we don’t think it should be up to schools to unpack this really complicated area on their own: government can play a role in providing really clear guidance to schools about how children should be using AI as part of school work and learning, and in upskilling teachers around this as well. And finally, as I mentioned throughout, this is a new and emerging area, so there’s lots of room for additional research.
And last but not least, as part of our research we developed an AI hub on our website; these are some screenshots of what you might find in it. There’s really helpful information there: if you have a child, work with children, or there’s someone important in your life who’s exploring AI and using it in any of these ways, please go to our hub, where you can find tips on what AI is, how children might be using it, and how you can encourage positive use and oversee your child’s engagement with it. So, please do go and check that out. And for now, I’m going to hand back to Rachel.
Thanks, Katie. We’re really proud of the report, and the research is really timely. It’s striking to see how many children are already using AI chatbots on such a frequent basis, and how this is happening while the adults around them are still grappling to keep pace. And as chatbots get built into more of the platforms children regularly use, you can see how these challenges will only increase.
While there are lots of positives, and I think we all see positives in our daily lives from using AI for work, I find the blurring of lines between automated, non-human responses and friendship troubling. Responses are delivered in such a relatable way, and they mirror the language and tone of the child. It’s really easy to be impressed by the technology, but also very easy for a child to believe whoever they’re talking to is real and to give them their trust, without that chatbot having any understanding or context of that child’s life. Another real challenge for parents is that chatbots give children another reason not to turn off their devices, because there’s always someone there to talk to. So it’s going to make things difficult for parents around screen time and making sure children are fostering relationships outside of their digital lives.
But I think it’s time we heard from our panelists now. With us today, we’ve got Elle Davies, who leads digital policy for the Children’s Commissioner, including helping shape the organization’s priorities in relation to AI and children’s online safety. We’ve got Caroline Hurst, who’s Global Digital Child Safety Lead at the LEGO Group; she’s part of the Child Rights and Safety team, who represent the voices and needs of children and provide tools to the business to ensure the LEGO Group can responsibly engage with children. And Simon Turner, who is the CTO of FOIL, an organization rooted in data and AI; he continues to be at the forefront of innovation and disruption in the world of AI, and he’s passionate about ensuring technology is developed and deployed responsibly, especially when it comes to the youngest users. So you’ve all got great experience to share with us today. Welcome to you all, and thank you for being with us. We’d love to hear your reflections on the findings. Maybe we could start with you, Elle: what strikes you from the report and what you’ve heard from Katie this morning?
Yes, thank you so much, Rachel and Katie, for that fantastic presentation. This is a really great piece of research, and really helpful. It’s probably important to give a bit of context on the role of the Children’s Commissioner: her office advises ministers on how to protect and promote the rights of children. And as we touched on with this research, the online world is presenting a mammoth challenge to those rights in some ways, although of course there are opportunities as well. I guess our main concern is that there is still a lot we don’t know. We don’t know what capacity to help or harm children any particular piece of tech has until it’s already launched on the market and children are engaging with it. We don’t know how harmful something is until we try to address the harm that’s already happened. That’s how we’re learning, and that is a bit of a problem.
AI is a really interesting case in point. Some of you will know that the office has researched AI among children to a degree, but that was in the context of a use of AI that we already knew instinctively was going to be highly harmful: I’m referring to our report released earlier this year on sexual deepfake images. That report was really striking in the sense that mere knowledge of the technology was enough to be harming children. This particular use of AI is a little different, because there are ostensibly some great opportunities for children to really learn and grow through the use of AI chatbots. But it’s interesting to reflect on it within the context I’ve just set out, because it fits a pattern that we see with new and emerging technologies: a new piece of technology is launched, quite quickly and without any inbuilt safeguards; children start using it; some of them start getting hurt; and then there’s a battle about whose responsibility it is to address that. And I think your report is really fantastic because it brings out the idea that AI is augmenting problems that really should have been fixed by now. It’s not okay that children are interacting with entities that haven’t been vetted as safe, in the same way that, for example, staff in schools have to be vetted before children can interact with them. And as somebody who’s worked for the Children’s Commissioner’s office for over a year now looking at harmful content, it’s not okay that there is now a new route for children to be exposed to harmful content when we haven’t managed to close off the last ones we’ve been dealing with, or that we maybe didn’t see this one coming.
So, generally, as an office we really welcome your insights on the impact on vulnerable children in particular. As you’ve already said, this is a really under-researched area in general; it’s a very new piece of technology, which is very challenging. I know there’s already been a question in the chat about the long-term impacts of this, which I really hope will be tracked over the next few years, especially as this is such a new technology. And just as a personal reflection on the back of your presentation, Katie: I’m really interested in how many children are using chatbots for companionship, because some would argue that if children’s needs were being met by the people in their lives, they wouldn’t need to turn to this technology to get those needs met. So that’s something really interesting that I’m hoping is going to be explored soon. But it’s really great to have this all set out in a report. It’s really interesting that children really trust these tools, and I guess my question is whether that trust has been earned. I hope we can explore that a little more.
Yeah, thank you. You’re raising really valid points there, and I think that race to be first with AI means this technology is being developed without really thinking about who’s going to be engaging with it and what impact it’s having on them. So hopefully we’ll come back to that a bit later. But moving on to Caroline: could you share with us some of your thoughts?
Absolutely, and I’m delighted to be here today. I have so many thoughts swimming in my brain around the excellent report that Internet Matters has brought out on something so pertinent for children today. When I look at my role in the LEGO Group, particularly what we do in the Child Rights and Safety team, what really struck me in the report, and what has a real affinity with what we’re trying to do every day, is bringing children’s digital rights front and center into everything we do. My team at the LEGO Group falls under child rights and safety, and we advise the business on all of the digital products that we’re pushing out. So, for folks in the audience who are sitting there thinking, “What’s the LEGO Group got to do with the digital space?”: of course the brick is really important to us, and it’s the thing we’re best known for, but the digital element of our work is about ensuring that we’re helping children to thrive in the digital age. And that comes down to knowing where children are in the digital age so that we can help influence that and help them thrive.
For me, in terms of the report, when it came to the chatbots themselves, and piggybacking exactly on Elle’s point just now, it’s around that social interaction. I think something that’s really missed around AI is actually looking at children’s agency and children’s digital rights when it comes to emerging technologies. Absolutely, we can all see that there are many harms when it comes to AI and the development of these emerging technologies. But are we asking young people what they’re finding positive about these technologies? If they’re forming connections, or if the technology is filling a gap that the offline world doesn’t fill for them, what are we missing there? And we have a responsibility to ensure that what we’re creating helps develop their online life in the same way that we do their offline life.
At the LEGO Group, we recently launched a report together with the Alan Turing Institute. I’ll post it in the chat as well so people can see it, but we wanted to examine the impact of generative AI, particularly on child well-being. And that’s something that struck me with this report as well: ensuring that we’re really considering children’s well-being and putting it at the heart of creating new technologies, because we need to consider how AI interacts with children’s sense of agency, creativity, emotional regulation (of course, with chatbots), and social connection. So we at the LEGO Group would really encourage developers of AI to look at these insights and understand what’s needed in order to build AI in the best interests of the child, because through the report we’ve seen that that hasn’t been the case so far. Through the report we funded with the Alan Turing Institute, we really wanted to look at child well-being and take a child-centered approach, because we want to maximize the value and benefits of AI for children and give them agency over their education and play. What also struck me is the question of agency over their education in schools: ensuring there is enough media literacy education around AI, and putting focus there too, so that when children access these technologies that are so useful for us, they have the same digital rights as we do and can access them in a safe way that really promotes their well-being.
Thanks, Caroline. You mentioned agency quite a lot there. We’ve had a question come in about the tension between safeguarding and agency, and maybe that’s something we can come back to a bit later. You also raised points about skills: the opportunities AI gives children to develop skills, but also the tension around it potentially preventing some skills from developing, like communication, if they’re getting AI to write for them, or interpersonal skills and critical thinking. So again, we’re really trying to understand those longer-term implications for children: how AI is beneficial, but also how it might be hampering some skills that we, I suppose, developed as young people without AI.
And finally, Simon: what’s your take as someone more deeply involved in the development of AI tools? Maybe you can build on what Caroline said there about agency and rights, and about being able to really embrace the technology.
Yes, thank you, Rachel, and thanks for having me, because, as you said, this is something that I’m very passionate about, from both a child safety perspective and a technical one: we’re deeply involved with the development of AI and so have a very technical lens on a lot of what we see. I’m equally passionate about other areas of social media, particularly around violence against women and girls, for example. And there was a parallel there with something that Elle was talking about. One of the figures that stood out from the report was that 26% of children would rather talk to an AI chatbot, and feel they can talk to one more easily, than to a real person. I think that’s an area where we potentially need to look at ourselves and understand how and where we can better support them, because without that, we’re allowing AI to set the perception of normal. And that’s a very specific area where we need transparency and control.
To Caroline’s point, I think the biggest thing we see is the identification and verification of age, so that age-appropriate content can be delivered, filtered, and managed effectively, because for most of the AI environments out there that is actually quite difficult. We can ask the system, we can guide it to tailor responses for a particular age bracket, but as you said at the beginning, the age of entry is around 13 for most systems, and children these days know easily how to get around that: they just use their mum’s date of birth or something like that, and all of a sudden they’ve got access. But it’s about being appropriate: something that’s appropriate for my son, who’s 13, is very different from what’s appropriate for my daughter, who’s 18. Understanding those granular age differences is really important.
The other thing that I’m super passionate about is parental controls, because they’re hard. Anybody who’s tried to set up parental controls on their internet provider’s systems will know it’s difficult. Making the link between age-appropriate content and parental controls within AI transparent is, I think, really necessary. I, as a parent, want to be able to see what my children are asking, what they’re talking to the AI chatbots about. Now, I know how to do that, so I’m in a fairly advantageous position, but being able to create that link and have accessible parental controls in this space is really, really important.
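To make this idea concrete, here is a minimal, hypothetical sketch of the kind of “transparent link” being described: a thin wrapper that records a child’s prompts to a local file a parent can review. The file path and function names are invented for illustration; no real chatbot exposes exactly this interface, and a real product would also need consent and an age-appropriate approach to the child’s own privacy.

```python
# Hypothetical sketch of parent-reviewable chat logging. The log location
# and interface are invented; a real product would need consent, secure
# storage and careful handling of the child's own privacy.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("chat_activity_log.jsonl")  # illustrative location

def log_prompt(child_id: str, prompt: str) -> None:
    """Child-side hook: append each prompt to the local activity log."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "child": child_id,
        "prompt": prompt,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def review_log() -> list:
    """Parent-side view: return all logged prompts for review."""
    if not LOG_FILE.exists():
        return []
    return [json.loads(line) for line in LOG_FILE.read_text(encoding="utf-8").splitlines()]

log_prompt("child_1", "Can you help me revise for my science test?")
for entry in review_log():
    print(entry["time"], entry["prompt"])
```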
The third thing that I’m super passionate about is bias. How do we understand and see what bias looks like within these environments, so that we can help educate our children, as they use them, to understand when things can be taken as concrete and when critical thinking is needed? As you said, the need to develop critical thinking around the responses is hugely important. But we need transparency: as parents and as a support network for children using this technology, we need that link from parental controls, through age-appropriate content, to transparency of data. I absolutely love the report, because it’s this type of research that really starts to help us think properly about how we deal with these issues.
Thanks, Simon. We’re rapidly moving through the session; we’ve spent a long time chatting about some of these issues, but I just wanted to move on to some specific questions. Everyone, please drop your questions in the Q&A and we’ll try to get to some. I know the time is going quickly because it’s such an interesting topic, but we’ll try to come to them later, so please do post them if you have them.
I’ll just start. We’ve talked a lot about agency and children and their rights, so maybe I’ll start with you, Elle: have you got any insights into what you’re hearing from children about AI, how it’s shaping their lives today, and how they feel about its role in the future? You’re really passionate about hearing from children themselves, and we’ve talked a bit about that as well, so it’d be really interesting to hear your thoughts.
Yeah, you must have seen me nodding quite rapidly when the word agency came up, because it’s not even just to do with AI: with children’s interactions with the online world generally, the general feeling children have is that they are responsible for what happens to them, but they’re not necessarily in control of what happens to them. And I think that has been a problem for years. It’s interesting because, as I mentioned earlier, a lot of our research hasn’t been specific to AI, with the exception of the deepfakes work. But when we have talked about the online world, AI has come up organically, and children generally shared pessimistic thoughts about it. So I can’t necessarily speak to how they feel about it shaping their future in a positive way, and that may just be because of our data collection. But what they did share does tap into the agency point: there’s a lot of fear around what the future of work will look like for children. That’s really interesting because it deviates a little from the type of AI we’re talking about today, though I guess the creative among us are probably thinking about what chatbots are going to do to the working world as we move forward. The other thing children really wanted to highlight is this trust issue: not just with the AI or the tech they’re interacting with, but their ability to then talk about it with the people around them offline. That’s something I really hope that, as an office, we’ll be able to look at a little more. But it’s interesting to hear children’s reflections on all this be very similar to their reflections on other parts of the online world: we see things like fear and trust come up when we talk to children about pornography, about their social media use, or about search engines. It does seem to fit a pattern. So yes, it’s really great that you’ve got a focused study on it here.
Caroline, do you have anything to add to that? You originally brought up the topic of agency, and as I said, we’ve had some questions around that: safeguarding versus agency and how we manage that for children.
Absolutely, and it’s such a pertinent question, particularly since we launched our report last month. I do have to say, the Alan Turing Institute were the folks who did the majority of this work, so I want to give credit where credit’s due: the team there are amazing in what they’ve done, and they took a really impressive child participation and agency approach to this work. They created a report which really demonstrates exactly what children want out of the AI space. When I talk about agency, and there was a question about safeguarding, that’s really key and really interesting, because the way the report was carried out was that the Alan Turing Institute, together with the Children’s Parliament, went into schools in Scotland where children were using AI in the classroom. And as a safeguarding practitioner myself, my first question to them was, “Hold on a minute. You’re giving children devices in this environment. Are we not concerned about what they’re going to see?” From a safeguarding perspective, the children weren’t allowed to use AI on their own in any way, shape, or form; they were monitored. So the agency there is immediately taken away. Yes, we say we want to strive for agency, but the safeguarding issue hasn’t been solved here. And obviously we’re putting children’s rights front and center, and we want their safety to be at the very heart of this, but how can we expect to do that if the tools we’re giving them aren’t safe by design to begin with?
And it was really interesting actually, as you’ll read in the report. The Alan Turing Institute and the Children’s Parliament mainly focused on creative design: how children react to AI tools, and the creativity side of things. What came out was that we actually need to support children’s diverse forms of play and creativity, both online and offline. The reason I say that, and it falls directly in line with our core values here at the LEGO Group, is that the children really liked using DALL-E and ChatGPT to create images for them, but because they also had art materials there, they were comparing the two, and it turned out that their feelings of autonomy, well-being, and connection (all things we know are core components of how children learn and how they interact with the online space) were actually stronger with the physical creative tools, which they enjoyed even more. They said that they didn’t really feel anything when asking generative AI to do it rather than creating themselves. They were concerned about the environmental impact of AI, and they were concerned about misinformation. So when it comes to agency, we need to ensure we’re looking at what children want out of the digital products that we know they love to use.
We also based the report on some of our RITEC research, which I really want to take two seconds to talk about here as well. RITEC is Responsible Innovation in Technology for Children: a piece of research that the LEGO Group has done together with the LEGO Foundation and UNICEF, and a report that demonstrates that the digital world can have a really positive impact on children’s well-being. There’s something called the RITEC-8; again, I’ll post it in the chat for those who are interested. It demonstrates that by using eight principles, you can create a digital experience that fosters autonomy and a sense of well-being, which is something we’re really interested in at the LEGO Group: we want children to leave our services feeling better for having used them. We used the RITEC-8 within the Alan Turing report, and it demonstrated exactly what children were looking for. The RITEC-8 includes things like safety and security, relationships, and autonomy, and AI actually didn’t come out positively in terms of what we would want a well-being outcome to look like. On safety and security, the team showed us that the children were encountering a lot of inappropriate outputs when searching with AI. On relationships, traditional art materials again gave them more social connection: they were talking to their friends while creating the art together. And on autonomy, they appreciate having a higher degree of autonomy: they don’t want someone looking over their shoulder or having to type things in for them. So that’s the start of how we look at AI through RITEC. What I would also urge you to look at is phase two of the report, our toolbox, where we’re encouraging industry players and product designers to use RITEC as they develop these tools, to really put children front and center with these eight principles so that well-being is a key outcome of using AI in general. I talked very quickly there, so I’m going to hand back over, but there was a lot I wanted to say about how AI is shaping children today, because I thought children’s voice came through really clearly in the report that we did too.
Yeah, absolutely. And it’s great to talk about what responsible, child-centered design looks like; you talked about some principles of what it could look like there. Maybe I’ll turn to you, Simon, because you’re obviously working in this space and developing AI tools. Are there good examples of this that we can draw from, to really think about how AI can support children, what they want to do with it, and their rights, while managing those tensions around building the right skills?
Yes. And unfortunately, there aren’t many examples of where that’s being done well. There have been a few questions in the chat around the commercial tensions involved. Unfortunately, we’re dealing predominantly with large-scale commercial organizations that are primarily looking at the financial returns on the way they’re creating some of these environments. We spoke earlier about the importance of being able to build fine-grained, age-appropriate content streams, and understanding the transparency of the data source is super important. That transparency is one of the biggest things we see at the moment, and it also goes to the whole concept of bias: if you can understand where the data and the training are coming from, you can understand how and where the bias might sit, which is difficult because that’s sometimes personally subjective anyway. But age-appropriate triggers and age-appropriate guardrails for the way these data sets are trained are really key, and at the moment that’s only in its infancy. Picking up some of the points raised in the questions, I think we will start to see different streams of AI: different types of models, large language models that have been trained very specifically with age boundaries in that space.
Thanks, Simon. Maybe we could turn to regulation. We’ve talked a bit about the responsibilities of industry, but thinking about current policy and regulation, we seem to feel that it’s not really keeping pace with technology at the moment. So, Katie, maybe you can talk a bit about where you think it’s lagging behind and what needs to be done.
Yeah, absolutely. At Internet Matters, we always say there can’t be just one approach to making the online world safe for children and young people: it needs to come from industry, it needs to come from government, schools need to be supported, and parents and children themselves need to understand the technology they’re engaging with and be able to engage with it in the ways they want. But focusing specifically on regulation, one of the challenges we found in this research is that while things like the Online Safety Act were meant to be created in such a way that they would adapt to new and emerging technologies, we’ve almost fallen over at the first hurdle when it comes to AI chatbots: there is actually conflicting information on record, whether from Ofcom or from government, about how AI chatbots fall within current online safety regulation, whether that’s the Online Safety Act or other elements of the regulation. And we’ve already seen with other applications of AI, around notification acts, for example, that legislation has had to be created or implemented because the current legislation wasn’t protecting people from some of the more serious applications of AI. So I think there’s definitely room for improvement, and a starting point would be clarifying what can and can’t be done under the existing legislation before we move on to filling the gaps.
Also from our perspective, one of the gaps in the current legislation, and this applies to social media as well but is coming through in relation to AI chatbots, is that point around age verification. As I said earlier, age verification is key to unlocking so many other elements of children’s safety, whether that’s parental controls or age-appropriate experiences. So mandating it in legislation and ensuring we understand the age of the users on these platforms is really key. We did hear in the research that children were signing up as older users so that they could get features that weren’t accessible to children, and that’s only possible to prevent with age verification. So I think that’s a key starting point for government as well.
Great. Thanks, Katie. We’ve got 10 minutes left, so we can start to turn to some of the questions we’ve had from attendees. We’ve talked a lot about agency, but there’s been some interest in what the panelists feel about the role of schools. So maybe, Elle, you could talk to this: the role of schools in supporting children’s media literacy, particularly around AI, and what support schools need, which I think is the key thought there.
Yeah, definitely. It goes without saying that schools are extremely important in this: they see children five days a week. But I think we can’t see them as a silver bullet. Katie just said there’s responsibility across the board for this, and we can’t teach children into a safer world. The world is the way it is, and we can look at it and ask if we want children to grow up in this situation, and whether or not it’s built with their rights and interests in mind. So school interventions need to come alongside changes that the tech industry needs to make at the outset. Having said that, AI literacy is going to be (and is needed right now, looking at this report) as important as any other type of literacy that schools currently teach. Schools are on the front line for preparing children for the world, and the more we can learn about what that world looks like, which is what a report like this is doing, the better, because as I said before, there’s a lot we don’t know, and there’s a lot that schools hear about first, before civil society or anyone in government hears it. So making sure those routes of information flow is going to be really important.
At the Commissioner’s office, obviously, we would like to see a world where children don’t have to look out for harms, because harm can and should be prevented. The way schools can help with that, or at least provide a really valuable way of tackling it, is that they can give information to children in a way that is accessible to children across the country. Schools can adjust educational materials for the children who need them most, which is really great. Some of you will have seen the new statutory relationships, sex, and health education (RSHE) guidance that was published this week, and we really strongly welcome the inclusion of AI on that curriculum. There are a lot of other topics we’re really pleased to see as well, like pornography and the link with misogyny; it’s really good to see the guidance updated for the world we’re living in. And in terms of what schools will need to deliver it well, because the success of this is going to be in the delivery, the Commissioner is asking the Department for Education to lead a recruitment drive for specialist RSHE teachers who will sit across the whole curriculum. So we’re hopeful that’s going to encourage critical thinking about AI and the online world in general to be integrated into children’s everyday learning as well as their everyday lives. We’re really excited to see what impact the new guidance is going to have.
Certainly, we really welcome that as well; all the right things are starting to be talked about and included there. But you’re right, it’s about how that is implemented and what support schools get, because provision is very fragmented: we hear a lot from parents and children that some have a really good experience of learning about these topics, media literacy and how to be safe online, but it’s not the same for all children. So how can we make sure this gets embedded and mandated across school communities? Maybe we can turn to another question we had. I don’t know if you want to add anything to that first, Katie, before we move on?
Just one tiny thing, around the disparity we were seeing: often it echoes other disparities between areas of high and low disadvantage. In areas where there are high levels of disadvantage, teaching resource often gets turned to other priorities, and maybe those schools don’t have access to the same AI chatbots or the tools some schools are building themselves. So that’s why it’s so important we get that guidance right, in order to level the playing field as well.
Yeah, absolutely. We had another interesting question come through from someone who said they're hearing more about adults using chatbots as a substitute therapist. Are we seeing similar behaviour from children, especially teens? Maybe that's something Elle or Katie would like to take?
Yeah, I can give it a go. It's definitely something that came up in the research, in some of the examples children were giving. There were cases where the advice or questions they were seeking responses to were the kind of thing that a traditional therapist, or perhaps a trusted adult, would normally answer. I think it's definitely underexplored, and there are really positive applications; I believe the NHS has built an app that lets you check in with a therapist between sessions. So it isn't necessarily a bad thing if it's done in the right way, but I don't think we're at a point where general-purpose AI chatbots should be used as therapists for children, and that is something we're starting to see come through.
Yes, I'd echo what Katie just said. It's something I feel we're probably going to see more of as AI chatbots are integrated into even more spaces. The person who asked the question used the word "substitute", and I think that's really interesting because it suggests it's not giving the full support that a regular therapist would give. That raises questions about whether, if an AI chatbot is being used in this manner, it should be, and if it shouldn't be, that needs to be made clear: that this is not a safe place for someone to go when they're feeling extremely vulnerable. On the other hand, it would be really fantastic, and worth celebrating, if there were a space where people could access support very quickly when they need it. So I think this would be a really interesting space for research in terms of mental health, too.
Yeah. We had a comment earlier in the chat about how troubling it is that children could go into these spaces for that kind of advice when the AI chatbot doesn't know anything about them: their context, their family life, what support they already have, and, depending on the questions they're asking, how it responds to those. Does anyone have thoughts on how we approach that? Elle, you talked about either encouraging it if it's done well, or really not encouraging it while it still lacks the ability to understand those issues. Simon or Caroline, do you have points on how AI can develop to be better at what it could potentially do?
Yes. There's so much to unpack in all of those areas. The one thing I'd like to say, and I was just responding to somebody in one of the chats, is that we're obviously talking about AI chatbots in this environment, but AI is ultimately driving a bit of an industrial revolution in all sorts of areas. We touched on it just now when we talked about the concerns children have about what their working future might look like. That's one of those areas where we need broader examples of how AI, not just chatbots but AI in other areas of industry, working lives, and our own daily lives, is going to help us, what advantages it will bring, and how it should be used as a tool. My son has ADHD and he's a LEGO fanatic, but he sometimes finds it frustrating because he has all of these wonderfully creative ideas and doesn't know how to realise them. He uses chatbots to help him understand how he can develop his creative ideas faster, and from that perspective it's absolutely brilliant. My daughter has just done her A levels; she used ChatGPT to test her readiness for her psychology A level. So there are some really great areas where AI can add benefit to education. But we need to look at it more broadly than just chatbots and understand how it's driving this pseudo-industrial revolution, because it will change absolutely everything. And I don't think we should be afraid of that; I think we should embrace it. To Elle's point, how do we get the technology companies to help own that problem, or rather that opportunity, as well? Because it can be a great opportunity, but it's not for one organisation or one company to do alone. It has to be collective.
Can I just add, sorry, before we end, Rachel? I know we're right at time. Really quickly, just to echo what I said before: we really need to support children's diverse forms of play, both offline and online, and Simon just illustrated that. I didn't ask him to make a LEGO Group plug, I promise! But it's a really good example of what we're trying to do. When you buy your child's first LEGO set, you're on the floor with them, building it with them. Parents are involved, and that same involvement should be built into children's digital lives as well. With AI, we can be very fearful of it, but if we're doing it together, and if we use play to help create the trust that parents, families, and companies need, we can make sure we're building children's rights into everything we do: not looking at it in a silo, but respecting children both offline and online. I know we're at time, so sorry to interrupt.
No, that's okay, thank you. Really good comments, and thanks everyone for your input today. Summing up, I want to come back to how I opened and everything we've talked about since: there is so much to unpack, and the pace of change in the online world, and children's safety within it, can feel overwhelming for parents. As we've heard today, AI chatbots are just one aspect of children's digital lives. There's so much to consider in relation to children's use of AI, and in making sure they can enjoy it positively and safely. It's absolutely a collective effort; we've all talked about industry, government, parents, and schools working together to make sure children embrace this in a positive way. So thanks everyone for attending today. I hope our research and today's discussion have shone a light on this issue and on children's experiences, and that we can all face the challenges ahead together. A huge thank you to everyone who participated: our internal team for producing a fantastic report and webinar, and all our speakers and panellists. Katie, Elle, Caroline, and Simon, thank you so much for your reflections and observations. It's been a really great chat; I'm sure we could have done another hour. Enjoy the rest of your day, everyone, and thank you.
Explore more of our research along with resources designed for parents, carers and professionals.
Help parents learn about AI to support children’s safe use of AI tools.
Explore perspectives on generative AI in education from both parents and children.
Explore trends in children’s digital wellbeing.
Explore key findings from the May 2025 Pulse survey.
