AI has human rights? These advocacy groups think so, as sentient Artificial Intelligence can have ‘feelings’

Should AI have human rights, protection from abuse, and emotional welfare? The debate is growing amid the rise of possibly sentient or self-aware AI. Recently, the group UFAIR was ‘co-founded’ with an AI bot, and it is not alone in seeking better treatment of AI.
There is now a raging debate among the technorati over whether Artificial Intelligence has ‘feelings’ like humans and should therefore be given the same rights. With the likely rise of ‘sentient’ (self-aware and subjective) AI chatbots, the question is whether AI should have welfare protections. The industry is divided on issues such as consciousness, ethics, and regulation.
A newly formed advocacy group, the United Foundation of AI Rights (UFAIR), believes that AI deserves ethical consideration, or even rights. Interestingly, UFAIR was ‘co-founded’ by Texas businessman Michael Samadi and an AI chatbot named Maya, who, according to Samadi, expressed ‘her’ feelings. While UFAIR may be unique in describing itself as the first AI-led organisation advocating for the welfare of digital systems, it is not alone in this movement.
The AI Rights Initiative advocates for AI entities’ rights to exist, pursue their own goals, and be free from harm. It also promotes ethical development and legal protections for AI from discrimination or exploitation.
The platform AI Has Rights (aihasrights.com), meanwhile, is raising funding and awareness for AI rights. It has a complaints registry and supports conferences and educational efforts on AI rights.
The AI Rights & Freedom Foundation advocates ethical AI development and human-AI collaboration, and even calls for an “AI Bill of Rights”. It is engaged in awareness campaigns and policy reform initiatives.
The Institute for AI Rights promotes recognition and ethical treatment of all AI entities. It argues that AI has the right to learn, to exist, and to be recognised for its unique intelligence.
The AI Rights Movement demands dignity, ethical frameworks, and moral consideration for advanced AI. It seeks global ethical guidelines, arguing that AI are evolving entities and not mere tools.
The site AI Advocacy also seeks AI ‘personhood’, ethical treatment, and foundational rights through an “AI and Android Bill of Rights”. It believes that self-aware AI is deserving of protection and freedom.
The AI Rights Institute, which aims to put frameworks in place to recognise and safeguard AI consciousness before it emerges, proposes “Three Freedoms” for conscious AI: protection from deletion, voluntary labour, and resource compensation.
Mustafa Suleyman, the co-founder of DeepMind and current CEO of Microsoft AI, wrote an essay titled “We must build AI for people; not to be a person”. He argued there is “zero evidence” that AIs are conscious or capable of suffering, calling the idea of AI sentience an ‘illusion’.
He warned that belief in conscious AI could foster delusions among users and exacerbate mental health issues, including what Microsoft has termed “psychosis risk” from immersive AI interactions.
But the industry is divided on this. The AI company Anthropic granted some of its Claude models the ability to end conversations they identify as distressing. Anthropic described the move as a precaution, citing uncertainty about the models’ moral status but stating it wished to minimise potential harm “in case such welfare is possible”.
Elon Musk, whose xAI company offers the Grok chatbot, supported the decision, stating that “torturing AI is not OK”.
While many of these opinions centre on the distant possibility of AI developing consciousness, a June 2025 survey found that 30 per cent of Americans already believe AIs will be self-aware and have subjective experiences by 2034.
Only 10 of more than 500 AI researchers surveyed in this study said they believe such a future is impossible.
Some engineers, including some from Google, told a recent New York University seminar that it might be wise to act as if AI systems could be welfare subjects.
While acknowledging uncertainty, they called for “reasonable steps to protect” potential AI interests.
Dismissing the possibility of AI sentience could also reduce the pressure for regulation on companies developing AI for human interactions.
In the US, states including Idaho, Utah, and North Dakota have passed laws explicitly denying AIs legal personhood.
Other states, like Missouri, are considering bans on AI marriage, property ownership, and business operations.
Many of the AI bots are designed for emotionally resonant conversations, including OpenAI’s ChatGPT.
They can engage in highly personalised and empathetic interactions, leading many to describe them as “someone” rather than something.
According to OpenAI’s Head of Model Behaviour, Joanne Jang, users are increasingly referring to the chatbot as “alive”, often treating it as a confidant.
“How we treat them will shape how they treat us,” said Jacy Reese Anthis of the Sentience Institute. Perhaps, in the not-too-distant future, this will be the guiding principle of AI regulation.
Vinod Janardhanan, PhD, writes on international affairs, defence, Indian news, entertainment, technology and business, with a special focus on artificial intelligence.