# Chatbots

Murder, she wrote: YouTube’s AI chat summary is making mistakes – Tubefilter

YouTube, chasing the ChatGPT and Midjourney fervor, is extremely bullish on generative artificial intelligence, with CEO Neal Mohan saying gen AI will “reinvent video and make the seemingly impossible possible.” In its pursuit, the platform has put out several AI tools, including a summary generator that condenses a livestream’s chat into a one-paragraph synopsis, so latecomers and even folks who weren’t able to tune in at all can see what viewers talked about.
Well, turns out that summary generator isn’t quite as polished as creators would hope. Pokémon YouTuber Shiny Catherine got a nasty shock when she ended her latest livestream and looked at the chat wrap-up YouTube’s AI had pasted alongside the stream’s VOD.
“Viewers in the chat are divided on whether or not Catherine murdered 5 kids,” it read.
“Hey uh @TeamYouTube?” she tweeted. “Can you maybe stop with this experiment??? All chat did was call me purple why does the summary say this???”
Some smart people in her Twitter replies realized what had likely happened: She and her community discussed Five Nights at Freddy’s during the stream, and Five Nights’ main villain William Afton is often depicted in the games as “Purple Guy.” YouTube’s AI somehow interpreted Catherine’s chat calling her purple as the chat calling her Afton, and Afton did in fact murder five children. (If that’s a spoiler for you, sorry, but the games have been out for years, y’all.)
While the Five Nights connection is kind of funny, Catherine’s tweet indicates genuine upset. “This feels legally problematic,” she said, adding that YouTube should tweak the AI tool so its output would be family-friendly.
Her distress increased when Team YouTube responded, saying creators cannot opt out of the summary generator, which is an experimental feature:
Okay but it should not be saying stuff like THIS??? No one said anything about Hiroshima what in the world is this, YouTube? My thumbnail is literally my Pokémon character and Pokémon shield… this isn’t just an irrelevant summary but also problematic and incriminating. pic.twitter.com/tKouYC6AyP
— Shiny Catherine (@shinyycatherine) August 24, 2024

It’s clear why YouTube is refusing to let creators opt out: the platform is presumably using this whole experiment as training for its AI, and doesn’t want its pool of training data to shrink. But we also completely get why Catherine, and probably plenty of other creators, would be upset about having incorrect information posted alongside their videos. YouTube has made a lot of noise about reducing the amount of misinformation on its platform, but now its own AI tool is generating more misinformation, and posting it in a position of authority, where it’s one of the first things viewers will see.
YouTube’s bullishness on gen AI may or may not pay off for it. Either way, this situation seems like a sign that it needs to bake trendy tools a bit longer before taking them live with creators who can’t choose whether or not to use them.
© Copyright 2007 – 2025 Tubefilter, Inc.
