Researchers at Trinity’s AI Accountability Lab secure major UK grant to investigate potential risks of AI companions – Trinity College Dublin

Posted on: 28 November 2025

Over a decade ago, millions of cinema fans were beguiled by the movie ‘Her’, in which a man ultimately had his heart broken by an AI-powered device. Fast forward to today, and the use of AI chatbots for ‘companionship’ is surging in popularity.
Even general-purpose chatbots such as ChatGPT are increasingly presented as ‘friends’, ‘partners’, or emotional confidants to millions worldwide, fostering emotional dependence and exacerbating individuals’ vulnerabilities. As these systems are increasingly designed to imitate human-like interaction, they raise urgent questions about emotional safety, dependency, the monetisation of relationships, and the blurring of boundaries in accountability and responsibility.
To examine these issues, the AI Accountability Lab at ADAPT in the School of Computer Science and Statistics at Trinity College Dublin has secured a major research grant, funded by the UK Government’s AI Security Institute (AISI) within the Department for Science, Innovation and Technology, to investigate the design choices of AI ‘companion’ applications.
Image: People at computers face away from one another and float above a painted backdrop of dim, empty cubicles, overlaid with fractured glass. Credit: Kathryn Conrad & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Titled An Analysis of AI Companions: Friendship without Boundaries?, the project will look at how these systems are designed to shape our feelings and behaviours, and what happens when technology begins to blur relational boundaries. The investigation into the psychological and societal risks posed by AI companion apps will explore three key questions.
Project lead Maribeth Rauh said: “News headlines have highlighted the concerns around people relying on AI ‘partners’ for emotional closeness and the emergent risks AI chatbot use poses to mental health. This timely project will help people understand the aspects of the systems’ design which contribute to these issues and how we can ensure they are not exploitative, and are instead built with appropriate safeguards.”
AI companions raise new questions about deceptive design and language use, consent, psychological harm, and commercial incentives. The research aims to produce one of the clearest evidence-based reports to date, helping policymakers, regulators, and consumer protection bodies understand and address these issues. The report will be available in 2026.
The AI Accountability Lab is led by Professor Abeba Birhane of the School of Computer Science and Statistics at Trinity College Dublin and housed in the Government-funded Research Ireland ADAPT Centre for AI-Driven Digital Content Technology.
Thomas Deane | Media Relations | deaneth@tcd.ie | +353 1 896 4685
Updated 28 November 2025