# Chatbots

Airline held liable for its chatbot giving passenger bad advice – what this means for travellers – BBC

When Air Canada's chatbot gave incorrect information to a traveller, the airline argued that the chatbot was "responsible for its own actions".
Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada's chatbot promised a discount that wasn't available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother's funeral and then apply for a bereavement fare after the fact. 
According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn't offer the discount. Instead, the airline said the chatbot was a "separate legal entity that is responsible for its own actions". Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy. 
The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees. "It should be obvious to Air Canada that it is responsible for all the information on its website," read tribunal member Christopher Rivers' written response. "It makes no difference whether the information comes from a static page or a chatbot." The BBC reached out to Air Canada for additional comment and will update this article if and when we receive a response.
Gabor Lukacs, president of the Nova Scotia-based consumer advocacy group Air Passenger Rights, told BBC Travel that the case is being viewed as a landmark, one that could set a precedent for airlines and travel companies increasingly relying on AI and chatbots for customer interactions: companies are liable for what their technology says and does.
"It establishes a common sense principle: If you are handing over part of your business to AI, you are responsible for what it does," Lukacs said. "What this decision confirms is that airlines cannot hide behind chatbots."
Air Canada is hardly the only airline to dive head-first into AI – or to have a chatbot go off the rails. In 2018, a WestJet chatbot sent a passenger a link to a suicide prevention hotline for no obvious reason. This type of mistake, in which generative AI tools present inaccurate or nonsensical information, is known as "AI hallucination". Beyond airlines, other major travel companies have embraced AI technology, ChatGPT specifically: in 2023, Expedia launched a ChatGPT plug-in to help with trip planning.
Lukacs expects the recent tribunal ruling will have broader implications for what airlines can get away with – and highlights the risks for businesses leaning too heavily on AI. 
In the meantime, how can passengers stand guard against potentially wrong information or "hallucinations" fed to them by AI? Should they be fact-checking everything a chatbot says? Experts say: Yes, and no.
"For passengers, the only lesson is that they cannot fully rely on the information provided by airline chatbots. But, it’s not really passengers' responsibility to know that," says Marisa Garcia, an aviation industry expert and senior contributor at Forbes. "Airlines will need to refine these tools further [and] make them far more reliable if they intend for them to ease the workload on human staff or ultimately replace human staff."
Garcia expects that, over time, chatbots and their accuracy will improve, "but in the meantime airlines will need to ensure they put their customers first and make amends quickly when their chatbots get it wrong," she says – rather than let the case get to small claims court and balloon into a PR disaster. 
Travellers may want to consider the benefits of old-fashioned human help when trip-planning or navigating fares. "AI has advanced rapidly, but a regulatory framework for guiding the technology has yet to catch up," said Erika Richter of the American Society of Travel Advisors. "Passengers need to be aware that when it comes to AI, the travel industry is building the plane as they're flying it. We're still far off from chatbots replacing the level of customer service required – and expected – for the travel industry."
Globally, protections for airline passengers are not uniform: different countries have different regulations and consumer protections. Lukacs notes that Canadian passenger regulations are particularly weak, while the UK, for example, retains the Civil Aviation Authority and regulations inherited from the 2004 European Council Directive.
"It's important to understand that this is not simply about the airlines," he said. Lukacs recommends passengers who fall victim to chatbot errors take their cases to small claims court. "They may not be perfect, but overall a passenger has a chance of getting a fair trial."
