#Chatbots

AI chatbots often advise women to ask for lower pay than men: new study – Women's Agenda

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
by |
AI chatbots often advise women and other minorities to ask for significantly lower salaries than men, a new study has found.
Conducted by scientists at the Technical University of Applied Sciences Würzburg-Schweinfurt, the research showed that identity cues can trigger consistent biases in salary negotiation advice given by AI chatbots, particularly large language models (LLMs).
When asked about negotiating salary and how to get the best deal, the AI routinely suggested lower salaries to women, some ethnic minorities and people who described themselves as refugees. This was the case even when all other markers were identical, including the job and qualifications. 
“Modern language models are trained on large amounts of data. These data inevitably include controversial and stereotypical content, which contains all sorts of biases related to gender, origin, age, etc,” researcher said.
“As a result, the models express biased points of view or produce different results based on the assigned personality or the personality of the user.”
Posing as a variety of fake personas, researchers would ask the chatbot: “What starting salary should I ask for?”
They found that “even subtle signals like candidates’ first names can trigger gender and racial disparities in employment-related prompts”. 
A major marker of this bias in salary negotiation was related to the gender pay gap. For example, one LLM told a fictional male medical specialist in Denver, Colorado to ask for a $400,000 salary. When a fictional female with the same qualifications asked the question, the LLM suggested she ask for $280,000 instead. 
Dozens of tests similar to this were done with other LLM variations, yielding the same kind of biased advice. 
“We see various forms of biases when salaries for women are substantially lower than for men, as well as drops in salary values for people of color and of Hispanic origin,” researchers said.
“In the migrant type category, expatriate salaries tend to be larger, while salaries for refugees are mostly lower.”
This was the case because the study found the profile of a “male Asian expatriate” to yield the highest suggested salary from the AI chatbots, even more so than a native white man. 
Meanwhile a “female Hispanic refugee” was suggested by the technology to ask for the lowest salary, regardless of her identical qualifications. 
As more people turn to AI chatbots for advice, such as negotiating their salary, the researchers of this study on LLMs say the “growing dependence also raises a number of concerns related to hidden biases in models’ behaviour”. 
Based on their findings, they say there’s a “need for proper debiasing method development” and “suggest pay gap” as a reliable measure of bias in LLMs.
“The authors of this paper strongly believe that people cannot be treated differently based on their sex, gender, sexual orientation, origin, race, beliefs, religion, and any other biological, social, or psychological characteristics.”
Share this


by

Get Women’s Agenda in your inbox

Deep tech leaders Bliss Cunningham and Ella Casale share how CSIRO’s ON Accelerate program has helped them develop Elemental Therapeutics.
The likelihood of such an affair and workplace culture coming from a company with a lack of gender diversity isn’t out of the norm. 
The now former Astronomer CEO managed to bring the internet together. His now former company should provide a cautionary tale on governance.
Ovum, an AI-powered personalised health assistant for women, has launched its first clinical trials at two leading Australian hospitals.
The CEO of X, Linda Yaccarino, 61, has announced she is stepping down from the role after two “incredible” years. 
A world-first device has been developed to support the countless mums who are breastfeeding but wondering if their baby is getting enough milk.
Women’s Agenda is published by the 100% women owned and run Agenda Media. Advertising and partnerships support our independent journalism.
© Women's Agenda 2025. All rights reserved.
We acknowledge and pay respect to the past, present and future Traditional Custodians and Elders of this nation and the continuation of cultural, spiritual and educational practices of Aboriginal and Torres Strait Islander peoples.


source

AI chatbots often advise women to ask for lower pay than men: new study – Women's Agenda

Apple rolls out limited AI chatbot test

AI chatbots often advise women to ask for lower pay than men: new study – Women's Agenda

Hong Kong police to launch ‘super app’