Exploring the mechanism of sustained consumer trust in AI chatbots after service failures: a perspective based on attribution and CASA theories
Humanities and Social Sciences Communications volume 11, Article number: 1400 (2024)
In recent years, artificial intelligence (AI) technology has been widely employed in brand customer service. However, the inherent limitations of computer-generated natural language content occasionally lead to failures in human-computer interactions, potentially damaging a company’s brand image. Therefore, it is crucial to explore how to maintain consumer trust after AI chatbots fail to provide successful service. This study constructs a model to examine the impact of social interaction cues and anthropomorphic factors on users’ sustained trust by integrating the Computers As Social Actors (CASA) theory with attribution theory. An empirical analysis of 462 survey responses reveals that CASA factors (perceived anthropomorphic characteristics, perceived empathic abilities, and perceived interaction quality) can effectively enhance user trust in AI customer service following interaction failures. This process of sustaining trust is mediated through different attributions of failure. Furthermore, AI anxiety, as a cognitive characteristic of users, not only negatively impacts sustained trust but also significantly moderates the effect of internal attributions on sustained trust. These findings expand the research domain of human-computer interaction and provide insights for the practical development of AI chatbots in communication and customer service fields.
The world is currently undergoing a transformative era driven by artificial intelligence (AI). With the rise of AI in brand marketing and interactive communication, AI chatbots in customer service—one of their most critical applications—offer users a distinctive interactive experience while simultaneously reducing costs and increasing efficiency for businesses and brands (Roy and Naidoo, 2021; Canhoto and Clear, 2020). The compound annual growth rate of AI chatbots is projected to reach 31.6% by 2026, making it one of the fastest-growing segments in the customer service industry. Particularly during the COVID-19 pandemic, many offline services shifted online, forcing consumers to rely on digital tools such as AI chatbots to gather information, make brand choices, and complete purchasing decisions (Cheng and Jiang, 2022). A recent industry report further predicts that by 2025, 95% of company-consumer interactions will be enhanced or completed through AI chatbots (Mozafari et al. 2022). However, despite the increasing prevalence of AI chatbots in marketing and communication, research on the subject, especially empirical studies, remains relatively limited (Sands et al. 2021). Therefore, understanding how AI chatbots influence consumer interactions and their psychological responses during these engagements is crucial for helping businesses and brands manage consumer relationships more effectively (Kumar et al. 2020).
Although advancements in large language models have significantly enhanced the quality of human-AI interactions, the diversity of consumer scenarios means that AI chatbots based on natural language processing are not well-equipped to handle some subjective or context-detached questions posed by users (Lee et al. 2023). This often results in failures in human-AI interactions, which not only impair the user’s experience but also undermine their trust in both AI chatbots and the associated brands (Gillath et al. 2020). Trust is essential for the acceptance and use of information technology (Venkatesh et al. 2016) and directly influences users’ attitudes toward brands and their consumer behaviors (McLean et al. 2021). Therefore, this study systematically investigates a critical yet underexplored issue in the existing literature—the mechanisms for maintaining trust in AI chatbots during service failure scenarios. This research holds significant implications for both the study of human-AI interactions and the practical application and development of AI chatbots.

Despite the growing popularity and application of AI chatbots, how they maintain user trust during service failures remains underexplored, leaving certain research gaps. First, previous studies have predominantly focused on the direct impact of AI characteristics, such as response speed and accuracy, on user satisfaction (Nguyen et al. 2022), while paying insufficient attention to how AI can influence trust restoration through social and emotional dimensions after failures. Second, although some research acknowledges that enhancing the anthropomorphism and social presence of AI chatbots is an effective way to improve the user experience in human-AI interactions (Croes and Antheunis, 2021), current research on the application of anthropomorphism in service scenarios—particularly in contexts of interaction failures—remains limited (Shin et al. 2023). Finally, while some studies have focused on the importance of trust in AI systems and explored the impact of technological features on trust, others argue that the traditional Technology Acceptance Model (TAM) does not fully capture the traits of social intelligence in AI interactions, underscoring the need for a supplementary perspective that incorporates social factors (Fox and Gambino, 2021).
To address the aforementioned gaps, this study examines interaction failures between AI chatbots and consumers. Utilizing the Computers as Social Actors (CASA) theory and Attribution Theory, and introducing AI anxiety as a moderating cognitive factor, it explores the impact of social factors on users’ sustained trust in AI chatbots and the underlying mechanisms. A cross-sectional survey of 462 consumers who have interacted with AI chatbots reveals that social characteristic factors, including perceived anthropomorphic characteristics, perceived empathy, and interaction quality, effectively maintain consumers’ sustained trust in AI chatbots despite service failures. This sustained trust is mediated by different attribution styles for failure. Specifically, users’ perception of the anthropomorphic features of AI chatbots reduces the likelihood of attributing service failures to the AI’s inherent abilities and encourages them to consider external environmental factors that might have caused the failure, thereby sustaining their trust in AI chatbots. Similarly, perceptions of the AI’s empathy and interaction quality promote external attributions for service failures, helping to maintain trust. AI anxiety, as a psychological cognitive characteristic, significantly amplifies the negative impact of internal attributions on sustained trust. The findings of this study not only expand the research scope of AI-human interaction but also provide a more systematic understanding of the factors influencing AI chatbots’ service failures. Furthermore, the study offers new perspectives and strategies for optimizing AI chatbot technology and designing human-computer interactions. By deepening the understanding of users’ psychological and behavioral responses to service failures, this research presents valuable theoretical insights and practical recommendations for the AI chatbot field, particularly in maintaining trust after service failures. Additionally, the study provides valuable references for the practice and development of AI in marketing communication, consumer psychology, and behavioral fields.
Chatbots are defined as “machine chat systems that interact with human users through natural conversational language” (Shawar and Atwell, 2005). They are intangible chat agents that interact with humans via a text interface. An AI customer service chatbot specifically refers to a chatbot used in the customer service field (Chi et al. 2020). With the development of AI technology and the support of big data in recent years, many internet platforms, including Facebook, Skype, Amazon, WeChat, and eBay, have successively launched AI customer service systems (Luo et al. 2019). Similarly, numerous multinational companies and brands, such as Coca-Cola, Gucci, and Louis Vuitton, have employed AI chatbots to provide users with 24/7 services (Chung et al. 2020). This uninterrupted and efficient online service offers greater flexibility compared to traditional services and significantly reduces response times (Ciechanowski et al. 2019). Furthermore, AI customer service can save significant human resources for companies, with studies showing that AI chatbots can currently reduce annual costs by over eight billion dollars in the United States (Ashfaq et al. 2020). AI technology is fundamentally transforming the interaction and service modes between companies and users (Ledro et al. 2022).
The gradual replacement of some traditional human jobs by AI chatbots in the customer service sector is an inevitable trend, but we must not overlook their limitations. Since AI relies on natural language processing technology and computation based on big data to achieve anthropomorphic communication, not all linguistic symbols can be formalized or logicalized, and many aspects of language cannot be detached from their specific contexts. As sociologists Berger and Luckmann stated, “language originates in face-to-face situations.” Therefore, when communication scenarios involve language that computers cannot recognize or accurately interpret, it can lead to failures in AI systems, which often result in breakdowns in human-AI interactions (Berger and Luckmann, 2016). As more AI is applied in customer service, the frequency of AI service failures is also increasing, damaging corporate images and affecting users’ trust in brands (Brandtzaeg and Følstad, 2018; Cheng and Jiang, 2022). Given that the widespread practical application of AI is inevitable, it is essential to explore how to minimize the negative impacts of human-AI interaction failures (Luo et al. 2019). In this context, research on user trust in AI chatbots is crucial because trust not only influences customers’ continued usage behavior but also shapes their attitudes toward companies and brands (Youn and Jin, 2021). Therefore, this study focuses on investigating sustained user trust in the context of human-AI interaction failures.
Existing research indicates that customers’ reactions to interaction failures are often influenced by their own expectations. When customers have low expectations regarding the flexibility of chatbot services, even if the chatbot refuses service, they tend not to provide many negative evaluations. However, when a chatbot expresses apologies for refusing service through emotional expressions, it can paradoxically lead to more negative evaluations from customers (Yu et al. 2024). Similarly, the anthropomorphic features of chatbots often cause customers to have higher expectations of their services. When these services fail, the unmet expectations can result in more negative outcomes, leading to lower customer satisfaction (Crolic et al. 2022). Different expectations can influence individuals’ attribution styles, where attribution refers to perceptions or inferences about the causes of actions, events, and outcomes (Kelley and Michela, 1980). Heider (2013) proposed two types of attributions: internal and external. Internal attribution refers to attributing the cause to internal characteristics, such as abilities and intelligence, while external attribution attributes the cause to external factors, such as the environment, luck, or the faults of others. In the context of human-AI interaction, internal attribution in this study refers to attributing the failure of AI chatbots to their capabilities, such as algorithms and NLP decision trees, while external attribution refers to blaming environmental factors or human errors, such as not clearly stating service needs in a computer-logical language.
Attribution is believed to be related to sustained attitudes and behaviors (Lei and Rau, 2021) and has been applied in the research of human-AI interactions (Chang et al. 2012). Studies on trust models in human-AI interaction suggest that when users attribute service failures to the capabilities of AI systems, it leads to a decrease in sustained trust because the capabilities of AI are seen as relatively constant and not easily changed in the short term, meaning similar problems may recur. Conversely, attributing failures to external factors does not reduce users’ sustained trust in AI chatbots, as external factors are seen as more random and less likely to recur (Hancock et al. 2021). Therefore, this study proposes the following research hypotheses:
H1a: The internal attribution of AI chatbots’ service failures is negatively related to consumers’ sustained trust.
H1b: The external attribution of AI chatbots’ service failures is positively related to consumers’ sustained trust.
The theory of Computers As Social Actors posits that human interactions with computers are similar to interactions with other humans, where perceptions and responses occur instinctively rather than consciously, leading people to treat computers as if they were real humans (Nass et al. 1995). Despite consciously knowing that computers lack human personalities, people still unconsciously perceive computers in a human-like manner (Reeves and Nass, 1996). Given its focus on the social behaviors and relationships that emerge from interactions between humans and non-human artificial entities, this theory has been widely applied in studies of human-machine interaction. For example, Nowak and colleagues found that human-computer interactions can evoke perceptions of co-presence, social presence, and virtual presence—concepts typically associated with interpersonal interactions (Nowak and Biocca, 2003). Research by Zadro and colleagues (2004) further indicates that human self-esteem and sense of belonging can be negatively affected when individuals are excluded from human-machine interactions. These studies demonstrate that human-machine interactions can generate similar emotional responses to those found in interpersonal interactions and influence individual cognition. Similarly, humans unconsciously apply social scripts from interpersonal interactions to their interactions with AI, as our evolutionary brain cannot keep pace with the rapid development of new digital communication technologies.
Based on CASA theory, when AI chatbots generate social cues, people exhibit more social behaviors, leading to different cognitions and reactions (Nass and Moon, 2000). The factors influencing interpersonal interactions are thus analogized to those between humans and machines. Consequently, anthropomorphization has been adopted by many companies and researchers as a strategy to address the identity challenges of AI chatbots (Xu et al. 2022; Belanche et al. 2021) and is widely applied in online AI customer service chatbots to enhance the consumer interaction experience (Kull et al. 2021). Common factors influencing CASA include humans’ perceptions of AI’s anthropomorphic characteristics, perceived empathic abilities, and perceptions of past human-machine interaction quality (Pelau et al. 2021).
Existing research shows that, unlike in “human-human” interactions where internal and external attributions vary, in “human-machine” interactions, people tend to attribute errors to the machine (Kim and Hinds, 2006). With AI becoming more human-like than traditional machines, could this change attribution patterns? Based on the CASA paradigm, which posits that people unconsciously apply interpersonal interaction norms to human-machine interactions, we hypothesize that the factors outlined in CASA will influence user attribution in service failure contexts and further impact sustained trust.
A key factor in CASA is the perception of anthropomorphic characteristics in AI, defined as the application of human-like traits, behaviors, or mental states to non-human entities such as objects, brands, animals, and especially recent AI devices (Golossenko et al. 2020). These traits further influence people’s emotions and cognition (Aggarwal and McGill, 2012). Research on the effects of anthropomorphic perceptions of AI presents two contrasting views. On one hand, there are concerns that overly anthropomorphized AI can induce a perception of identity threat in humans, leading to aversion, commonly referred to as the “uncanny valley” effect (Pelau et al. 2021; Ciechanowski et al. 2019). Additionally, studies have shown that when customers are angry, the anthropomorphic features of chatbots can decrease their satisfaction with the service and result in lower evaluations of the company. This negative impact is driven by the inflated pre-interaction expectations caused by the anthropomorphization of the chatbot, which, when unmet, lead to expectation violations (Crolic et al. 2022). On the other hand, it is argued that anthropomorphism can effectively enhance users’ experiences and trust in AI devices (Klaus and Zaichkowsky, 2020). Furthermore, the anthropomorphic features of chatbots can meet people’s social needs and create positive interaction experiences, sometimes even promoting consumer purchasing behavior (Sheehan et al. 2020; Han, 2021). In previous research on attribution tendencies, it has been confirmed that people are more inclined to attribute failures to computers rather than humans (Moon, 2003). However, modern artificial intelligence, with its anthropomorphic features, makes computers more akin to “humans,” potentially altering attribution dynamics to resemble those found in “human-human” interactions.
The crucial difference between “human-machine” and “human-human” interactions lies in the presence of emotion. Traditional robots or computers are perceived to operate based on set rules to complete tasks, devoid of emotion, and are thus expected to be employed in standardized and procedural services (Yogeeswaran et al. 2016). Recent studies, however, indicate that AI chatbots, particularly those powered by large language models (LLM), foster expectations of more personalized responses and exhibit anthropomorphic traits that enable users to form emotional connections with them (Shahzad et al. 2024). Evidence suggests that consumers’ emotional bonds with customer service can activate feelings of kindness and sympathy, thereby reducing the propensity to blame the service provider’s competence for failures (Lastner et al. 2016). Moreover, according to Heider’s (2013) attribution theory, people tend to explain behaviors based on the known characteristics of the actor, whether human or machine. AI chatbots that display strong “human-like” qualities, distinct from traditional robots or chatbots, can enhance users’ perceptions of their capabilities (Chi and Hoang Vu, 2023). To maintain consistency in perceptions of AI chatbots’ capabilities, failures might be attributed to external circumstances or unforeseeable errors, rather than a lack of inherent ability in the AI. Furthermore, highly anthropomorphized AI, through its human-like interactive methods, can adjust users’ expectations of its capabilities (Yang et al. 2022). When users perceive AI to have complexities and limitations similar to humans, they are likely more lenient toward errors, considering them “human errors.” Based on the logic outlined above, this study hypothesizes that anthropomorphic perceptions of AI chatbots will suppress the tendency to attribute service failures to chatbots’ internal factors (blaming the AI chatbot’s lack of ability) and promote external attributions (attributing failures to external environmental factors rather than the AI chatbot’s abilities). Combining the previously discussed relationship between attribution and ongoing trust, this paper proposes the following hypothesis:
H2a: The perception of anthropomorphic characteristics in AI chatbots is negatively correlated with internal attributions (capability) in the context of service failures.
H2b: The perception of anthropomorphic characteristics in AI chatbots is positively correlated with external attributions (environment) in the context of service failures.
H2c: Internal attribution (capability) mediates the impact of perceived anthropomorphic characteristics on sustained trust.
H2d: External attribution (environment) mediates the impact of perceived anthropomorphic characteristics on sustained trust.
Empathy is defined as the human capacity to experience the emotions of others, meaning that one person can be influenced by the emotions of another (Cuff et al. 2016). Mutual understanding between customer service representatives and consumers is crucial, and empathy is therefore considered an important component in relationship marketing and service research (Iglesias et al. 2019). Empathy is also viewed as an essential human trait, shown to significantly enhance positive feelings toward companies and customer service personnel while reducing blame (Rozin and Royzman, 2001). Therefore, when AI chatbots exhibit empathetic traits, users perceive them as more human-like, and this reduces blame in situations of service failure. Consequently, similar to the logic discussed earlier, the following hypotheses are proposed:
H3a: The perception of empathetic ability in AI chatbots is negatively correlated with internal attributions (capability) in the context of service failures.
H3b: The perception of empathetic ability in AI chatbots is positively correlated with external attributions (environment) in the context of service failures.
H3c: Internal attribution (capability) mediates the impact of perceived empathetic ability on trust.
H3d: External attribution (environment) mediates the impact of perceived empathetic ability on trust.
Interpersonal communication and interaction are considered among the most important features of human society (Araujo, 2018). According to social cognitive theory, people’s attitudes, cognitions, and behaviors are influenced by past experiences (Bandura, 1986); therefore, people’s perceptions of AI chatbots are also shaped by previous interactions. High-quality interactions encourage people to engage more eagerly with AI, fostering the development of para-social relationships between humans and AI (Ashfaq et al. 2020). This suggests that users with positive past experiences interacting with AI chatbots are more likely to perceive the AI chatbots as human-like, and thus less likely to directly blame them in situations of service failure, enabling more objective attributions. Based on the above, the following hypotheses are proposed:
H4a: Perceived interaction quality with AI chatbots is negatively correlated with internal attributions (capability) in the context of service failures.
H4b: Perceived interaction quality with AI chatbots is positively correlated with external attributions (environment) in the context of service failures.
H4c: Internal attribution (capability) mediates the impact of perceived interaction quality on trust.
H4d: External attribution (environment) mediates the impact of perceived interaction quality on trust.
Robot anxiety, defined as a pre-existing anxiety toward robots among users, is a relatively stable psychological trait. This anxiety does not stem from distrust in the robot’s capabilities but rather from the uncertainty and discomfort triggered by interacting with robots (Nomura et al. 2008). In research contexts involving AI robots (or chatbots), this is often referred to as AI anxiety (Johnson and Verdicchio, 2017). Such anxiety can occur in both real and imagined human-robot interaction scenarios, and in studies of human-robot interaction, anxiety toward AI robots is considered one of the significant reasons why people avoid artificial intelligence (Celik and Yesilyurt, 2013). In scenarios where AI chatbots are employed, extensive research has confirmed the negative effects of AI anxiety. For example, studies by Mende et al. demonstrated that anxiety leads to negative attitudes and distrust toward service robots (Mende et al. 2019), and Meyer et al. indicated that high levels of anxiety worsen users’ attitudes toward human-robot interactions (Meyer et al. 2020). We hypothesize that comparable effects of AI anxiety may occur in our study; specifically, users with high levels of AI anxiety may have a lower willingness to sustain trust in AI chatbots in the context of service interaction failures, and AI anxiety may moderate the impact of internal attributions on sustained trust. Since external attributions involve attributing causes to agents other than the robot, this study does not consider the impact of AI anxiety on this pathway. Based on the above, the following research hypotheses are proposed:
H5a: AI anxiety is negatively correlated with sustained trust in AI chatbots in the context of service failure.
H5b: AI anxiety moderates the impact of internal attributions on sustained trust.
Conceptual model
Based on the CASA framework and attribution theory, the specific research model of this paper is depicted in Fig. 1. Additionally, in the model, we include gender, age, education, and average daily internet usage as covariates.
Fig. 1: Sustained trust impact model based on CASA theory.
The current study aims to explore the impact of CASA factors on the attribution of service failures and the sustained trust in AI chatbots. Therefore, the questionnaire is divided into two parts. The first part measures consumers’ perceptions of CASA factors and levels of AI anxiety, which are relatively stable cognitive aspects related to AI chatbots. After completing these measurements, participants are asked whether they have experienced service or interaction failures with AI chatbots. If the answer is no, the survey ends; if yes, they proceed to the second part, which aims to ensure that the sample has experienced service failures and to measure their attribution tendencies and willingness for sustained trust in such cases.
The study employs a survey method and recruits volunteers through social media platforms, given the high overlap between AI users and social media users. To ensure that the sample encompasses a broader audience, including different age groups, professional backgrounds, and social media usage habits, we posted recruitment information on multiple social media platforms, such as Facebook and TikTok. Initially, 600 questionnaires were collected and screened based on the following criteria: (1) The response to the question “Have you interacted with AI chatbots and experienced service failure?” must be “yes”; (2) Screening questions must be answered correctly; (3) Total response time must exceed 60 s; (4) Respondents must not choose the same value for more than eight consecutive questions. It is important to emphasize that before the formal start of the survey, we asked respondents to recall their most memorable service failure experience to facilitate situational activation. The criterion of “most memorable” was chosen because such experiences often leave a lasting impression on customers and can more accurately reflect how the CASA characteristics of AI chatbots influence customer sustained trust. After excluding 138 invalid questionnaires, 462 valid responses were obtained, resulting in a response rate of 77%. Basic sample information is presented in Table 1.
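To make the screening procedure concrete, the following is a minimal sketch (not the authors’ actual cleaning script) of how these four criteria could be applied in Python with pandas; the column names (experienced_failure, screening_correct, duration_s) and the list of Likert item columns are hypothetical placeholders.

```python
import pandas as pd


def screen_responses(df: pd.DataFrame, item_cols: list[str]) -> pd.DataFrame:
    """Apply the four screening criteria described above (hypothetical column names)."""
    # (1) Respondent reports having interacted with an AI chatbot and experienced a service failure
    mask = df["experienced_failure"] == "yes"
    # (2) Attention-check (screening) question answered correctly
    mask &= df["screening_correct"] == 1
    # (3) Total response time exceeds 60 seconds
    mask &= df["duration_s"] > 60

    # (4) No run of more than eight identical consecutive answers (straight-lining)
    def longest_run(row: pd.Series) -> int:
        values = row[item_cols].tolist()
        best = run = 1
        for prev, cur in zip(values, values[1:]):
            run = run + 1 if cur == prev else 1
            best = max(best, run)
        return best

    mask &= df.apply(longest_run, axis=1) <= 8
    return df[mask]
```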
This study includes seven latent variables to be measured, encompassing three CASA factors: perception of anthropomorphic characteristics, empathic abilities, and the past interaction quality of AI chatbots; the psychological trait of inherent AI anxiety in users; attribution methods for chatbots’ service failures, including internal attribution (related to capability) and external attribution (related to the environment); and users’ sustained trust in AI chatbots following service failures. The measurement scales for all latent variables are based on previously validated scales, with the average of the items within each scale used to form the score for each latent variable. A 7-point Likert scale is used throughout the questionnaire.
All scales are adapted for scenarios involving AI chatbots’ service failures. The scale for perceived anthropomorphic characteristics is adapted from Wang et al. (Wang, 2017), including 6 items (e.g., “I feel the AI customer service chatbot seems to have its own emotions”). The scale for perceived empathic abilities is sourced from Pelau et al. (Simon, 2013), comprising 4 items (e.g., “I feel the AI customer service chatbot can understand my feelings”). The scale for perceived interaction quality is adapted from Kim et al. (Kim and Baek, 2018), including 4 items (e.g., “The AI customer service chatbot can engage in effective two-way communication with me”). The AI anxiety scale is referenced from Song and Kim (Song and Kim, 2022), containing 5 items (e.g., “I am concerned about talking to AI chatbots because it might lead to the disclosure of my personal information”). As previously described, before measuring attribution tendencies, we activated the service failure scenario by asking participants whether they had experienced service failures with AI customer service chatbots. The scales for internal and external attribution are derived from Lei et al. (Lei and Rau, 2021), each containing 4 items (internal attribution example: “The service failure of the AI customer service chatbot is mainly due to its inadequate internal algorithms”; external attribution example: “The service failure of the AI customer service chatbot is mainly because the consumer did not clearly express their service demands”). The scale for sustained trust is sourced from Koufaris et al. (Koufaris and Hampton-Sosa, 2004), including 3 items (e.g., “I will continue to use the AI customer service chatbot in the future”).
Given that this study is exploratory in nature, the model architecture is constructed based on theoretical reasoning and logical derivation rather than existing models. Therefore, it is appropriate in this context to use Partial Least Squares Path Modeling (PLS-PM) with SmartPLS to test the research model (Hair et al. 2012). Furthermore, PLS-PM allows us to handle models with multiple complex structures, incorporating numerous latent and manifest variables, and to explore potentially intricate relationships between variables, which is particularly suited to the context of our study. Compared to traditional covariance-based Structural Equation Modeling (SEM), PLS-PM does not require strict assumptions about data distribution. It uses a component-based estimation approach, making it more flexible when dealing with non-normally distributed data. This is especially important for exploratory research, as researchers may not be able to ensure that data perfectly follow a normal distribution in the early stages. Finally, PLS-PM is suitable for small sample sizes, which is advantageous for exploratory studies. In the specific research context of this paper concerning AI chatbots’ service failures, obtaining large samples is challenging. However, PLS-PM can provide stable and reliable results even with small sample sizes (Hair et al. 2019).
To rigorously test our hypotheses regarding the mediating and moderating effects within our theoretical model, we employed a robust methodological framework in our analysis. The choice of mediation and moderation analyses was driven by the aim to uncover the underlying mechanisms and conditional processes influencing the relationships among our variables.
Mediation Analysis: We conducted mediation analysis to investigate the process through which the independent variable (perceived anthropomorphic characteristics, perceived empathic abilities, and perceived interaction quality) influences the dependent variable (sustained trust) via the mediator (internal and external attribution). This was essential to provide a deeper understanding of the psychological processes at play. We used the bootstrapping method, a non-parametric resampling technique, to test the indirect effects. This approach does not assume normality of the sampling distribution and thus provides more accurate confidence intervals for the indirect effects. We used 5,000 bootstrap samples and a 95% confidence interval to determine the significance of the mediation effects.
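The indirect-effect test described above can be illustrated with a simplified, observed-score sketch. The study’s analysis was run on latent variables in SmartPLS; the code below is only a minimal percentile-bootstrap approximation in Python/NumPy (5,000 resamples, 95% CI), with hypothetical arrays x (a CASA factor), m (an attribution score), and y (sustained trust).

```python
import numpy as np


def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=42):
    """Percentile-bootstrap CI for the indirect effect (a*b) of x on y via mediator m."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]             # path a: slope of m ~ x
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]  # path b: coefficient of m in y ~ x + m
        effects[i] = a * b                       # indirect effect for this resample
    lo, hi = np.percentile(effects, [2.5, 97.5])  # 95% percentile confidence interval
    return effects.mean(), (lo, hi)
```

A mediation effect is considered significant when the resulting 95% interval excludes zero, mirroring the criterion applied to the PLS bootstrap results reported below.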
Moderation Analysis: The moderation analysis was utilized to examine how different conditions affect the strength or direction of the relationship between our variables. We included interaction terms in our regression models to test for moderation effects. This approach allowed us to identify the circumstances under which the effects were more pronounced or diminished, providing insights into the variability of psychological impacts across different subgroups or conditions.
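As an illustration of the interaction-term approach, a simple OLS approximation of the moderation test might look as follows. The data file and column names (internal_attr, ai_anxiety, sustained_trust, and the covariates) are hypothetical placeholders, and the actual analysis was conducted within the PLS-PM framework.

```python
import pandas as pd
import statsmodels.formula.api as smf

# df is assumed to hold standardized composite scores with hypothetical column names.
df = pd.read_csv("survey_composites.csv")

# 'internal_attr * ai_anxiety' expands to both main effects plus their product term;
# a significant interaction coefficient indicates moderation by AI anxiety.
model = smf.ols(
    "sustained_trust ~ internal_attr * ai_anxiety"
    " + gender + age + education + daily_internet_use",
    data=df,
).fit()
print(model.summary())
```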
Covariates: To enhance the robustness of our findings, we controlled for potential confounding variables (gender, age, educational background, and average daily internet usage) that could influence the relationships being examined. This step was critical to ensure that the effects observed were not spuriously driven by other factors unrelated to the theoretical constructs under investigation.
As shown in Table 2, all factor loadings of the measurement items in this study range from 0.731 to 0.958, indicating that all measurement items are retained. Additionally, the Cronbach’s alpha values for the latent variables range from 0.778 to 0.958, demonstrating that the scale’s internal consistency meets the required standards. All Composite Reliability (CR) values exceed the standard value of 0.7, confirming that the scale’s composite reliability is satisfactory (Hair et al. 2019). The Average Variance Extracted (AVE) values for all variables exceed the acceptable threshold of 0.5, indicating that the convergent validity of the variables meets the standards (Fornell and Larcker, 1981). Furthermore, the Variance Inflation Factor (VIF) values for each factor are below 10, suggesting that there is no multicollinearity issue in the measurement scale of this study (Hair, 2009).
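For reference, the reliability and convergent-validity indices reported above follow standard formulas; the sketch below shows how Cronbach’s alpha, composite reliability (CR), and AVE can be computed from item scores and standardized loadings. The example loadings are illustrative values within the reported range, not the study’s actual estimates.

```python
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k) matrix of item scores for one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)


def composite_reliability(loadings: np.ndarray) -> float:
    """CR from standardized loadings: (sum(lam))^2 / ((sum(lam))^2 + sum(1 - lam^2))."""
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())


def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = sum(lam^2) / k."""
    return (loadings ** 2).mean()


# Illustrative loadings in the reported range (0.731 to 0.958)
lam = np.array([0.80, 0.85, 0.78, 0.90])
print(composite_reliability(lam), average_variance_extracted(lam))
```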
The discriminant validity among the variables was examined, and the results are presented in Table 3. The square root of all variables’ AVE values (on the diagonal) is greater than the Pearson correlation coefficients between variables, indicating that the discriminant validity of the scales meets the required standards. Furthermore, we employed Harman’s single-factor test to examine the presence of common method bias. There were seven factors with eigenvalues greater than 1, and the variance explained by the first unrotated factor was 29.012%, which is below the critical threshold of 40%. Therefore, this study does not exhibit common method bias (Podsakoff et al. 2003).
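Harman’s single-factor test can be approximated by examining how much of the total item variance the first unrotated factor explains. A rough sketch using a principal-component approximation is shown below; the 40% threshold and the reported 29.012% come from the text above, and the item matrix is a hypothetical input.

```python
import numpy as np


def harman_first_factor_share(items: np.ndarray) -> float:
    """Share of total variance captured by the first principal component of the
    item correlation matrix (a PCA approximation of an unrotated factor solution)."""
    corr = np.corrcoef(items, rowvar=False)        # item correlation matrix
    eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues sorted in descending order
    return eigenvalues[0] / eigenvalues.sum()      # compare against the 40% threshold

# The study reports 29.012% for the first unrotated factor, below the 40% cutoff.
```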
The fitness of the research model was assessed next. Initially, using the PLS Algorithm, the R² values of the variables were all greater than the accepted threshold of 0.1, indicating good predictive accuracy of the model (Hair et al. 2017). Subsequently, through Blindfolding calculations, the Stone-Geisser Q² values for the variables were all above 0, demonstrating the model’s robust predictive relevance (Dijkstra and Henseler, 2015). Additionally, the SRMR value was 0.054, below the required 0.08, indicating a good model fit, and the RMS Theta value was 0.115. Given the sensitivity of this index to sample size, we primarily focused on the CFI, TLI, and NFI values (CFI = 0.923 > 0.9, TLI = 0.912 > 0.9, NFI = 0.903 > 0.9), all indicating a good fit. These results suggest that the research model exhibits good fitness.
A bootstrapping test with a sample size of 5000 was conducted on the collected raw data to explore the path coefficients and their significance within the model. The final test results of the model are presented in Table 4.
The data analysis results indicate that internal attribution for AI customer service chatbot service failures is significantly negatively correlated with sustained trust (β = −0.409, p = 0.000, 95% Boot CI = [−0.502, −0.319]), while external attribution is significantly positively correlated with sustained trust (β = 0.429, p = 0.000, 95% Boot CI = [0.337, 0.513]), supporting hypotheses H1a and H1b. The perception of anthropomorphic characteristics of AI chatbots reduces users’ tendency for internal attribution (β = −0.158, p = 0.006, 95% Boot CI = [−0.272, −0.052]) and promotes external attribution (β = 0.336, p = 0.000, 95% Boot CI = [0.230, 0.437]), thereby influencing sustained trust. Internal attribution (β = 0.067, 95% Boot CI = [0.019, 0.120]) and external attribution (β = 0.143, 95% Boot CI = [0.087, 0.204]) act as mediators in this process, thus confirming H2a, H2b, H2c, and H2d. However, the perception of the chatbot’s empathic abilities does not significantly reduce internal attribution tendencies (β = 0.029, p = 0.650, 95% Boot CI = [−0.091, 0.155]), but it does promote external attribution (β = 0.107, p = 0.013, 95% Boot CI = [0.025, 0.196]), affecting sustained trust (β = 0.046, 95% Boot CI = [0.010, 0.084]); thus H3b and H3d are confirmed, while H3a and H3c are not supported. Similarly, the perceived quality of interaction with AI chatbots does not significantly reduce users’ internal attribution tendencies (β = 0.050, p = 0.405, 95% Boot CI = [−0.064, 0.168]), but it does promote external attribution (β = 0.349, p = 0.000, 95% Boot CI = [0.247, 0.455]), affecting sustained trust (β = 0.150, 95% Boot CI = [0.099, 0.207]); thus H4b and H4d are confirmed, while H4a and H4c are not supported. AI anxiety significantly reduces sustained trust (β = −0.472, p = 0.006, 95% Boot CI = [−0.595, −0.353]), supporting H5a; it also significantly moderates the impact of internal attribution on sustained trust (β = −0.208, p = 0.000, 95% Boot CI = [−0.282, −0.121]). When individual AI anxiety levels are high (+1 SD), the negative impact of internal attribution on sustained trust is strong, whereas it is weak when AI anxiety levels are low (−1 SD), confirming H5b. The specific moderation effects are illustrated in Fig. 2.
Fig. 2: Moderating effect of AI anxiety.
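The practical size of this moderation can be recovered from the reported standardized coefficients: the simple slope of internal attribution on sustained trust equals β_internal + β_interaction × (AI anxiety in SD units). A brief arithmetic sketch, assuming fully standardized variables:

```python
beta_internal = -0.409      # main effect of internal attribution on sustained trust
beta_interaction = -0.208   # internal attribution x AI anxiety interaction

for anxiety_z in (-1, 0, 1):                      # AI anxiety at -1 SD, the mean, +1 SD
    slope = beta_internal + beta_interaction * anxiety_z
    print(f"AI anxiety at {anxiety_z:+d} SD: simple slope = {slope:.3f}")
# -> -0.201 at -1 SD, -0.409 at the mean, -0.617 at +1 SD: the negative effect of
#    internal attribution is roughly three times stronger for highly anxious users
#    than for low-anxiety users.
```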
The current study aims to explore the impact mechanism of CASA (Computers Are Social Actors) characteristics exhibited by AI customer service chatbots on consumer sustained trust in the context of service failures. By integrating attribution theory, this research introduces internal attribution (related to capability) and external attribution (related to the environment) as mediators to explain this mechanism. Additionally, AI anxiety is introduced as a moderating factor to examine the effects of psychological trait differences among consumers. The specific research results are presented in Fig. 3. This study provides valuable references for brands on how to properly handle service failures of AI customer service chatbots and offers directions for future enhancements of AI chatbots’ services.
Fig. 3: Tested structural model of sustained trust.
The current study validates the positive role of human-like social interaction traits displayed by AI customer service chatbots in maintaining user trust during service failure scenarios. This process is primarily mediated by individual attributions of service failures to either the internal capabilities of AI chatbots or external environmental factors. Specifically, when consumers attribute service failures to the internal capabilities of AI chatbots, their sustained trust in the AI diminishes. Conversely, attributing failures to external environmental factors tends to sustain their trust. Furthermore, consumers’ perceptions of high anthropomorphic characteristics in AI customer service chatbots not only reduce the tendency to blame service failures on the internal capabilities of the AI but also encourage consideration of external environmental factors as potential causes, thereby promoting sustained trust in AI chatbots. Additionally, the perception of AI’s empathetic abilities fosters sustained trust by encouraging attributions of service failures to external factors; however, this perception does not significantly correlate with internal attributions. The perception of past interaction quality with AI follows a similar logic, facilitating external attributions that sustain trust while showing no significant association with internal attributions.
It is noteworthy that our research findings do not align with some previous studies, which indirectly highlights the complexity of the impact of anthropomorphic features in the study of AI chatbots. Our results indicate that anthropomorphism can mitigate customers’ attributions of incompetence to AI chatbots in service failure scenarios, thereby maintaining trust in the AI. This finding contrasts significantly with the conclusions of Crolic et al. (2022), who noted that anthropomorphic features exacerbate negative evaluations of AI chatbots when consumers are angry. Our interpretation of this discrepancy is as follows: Firstly, the emotional state of the customer is a critical factor. In our study, it is possible that customers did not reach a state of anger when encountering service failures. Consequently, the anthropomorphized chatbot, by emulating human empathy and understanding, could effectively alleviate customers’ disappointment, thereby reducing doubts about the chatbot’s capabilities. However, in the study by Crolic et al. (2022), when customers were already in a state of anger, anthropomorphic features might be perceived as provocative, leading to heightened expectations of AI chatbots. As a result, any minor errors or deficiencies could be magnified, intensifying negative evaluations. Secondly, the nature and severity of the service failure might also influence the effects of anthropomorphism. In instances of minor service failures, an anthropomorphized chatbot might rectify customer dissatisfaction through displays of understanding and care. In contrast, in severe service failure scenarios, this strategy may not be sufficient, as customers might expect more practical solutions rather than just emotional support. Therefore, we advocate for future research to explore more conditions that might affect the efficacy of anthropomorphic features, to better understand the complex role of anthropomorphism in AI services.
Moreover, this study suggests that, compared to general anthropomorphism, empathy and high-quality past service represent more tangible, higher-level human-like social interaction cues, which can elevate customer expectations (Park et al. 2022). When faced with service failures, particularly minor errors, higher levels of anthropomorphic perception may contradict these expectations (Cheng, 2023). This contradiction means that consumers may not reduce their attribution of faults to the internal factors of the AI, aligning with previous research indicating that advanced anthropomorphism and elevated social interaction expectations might not reduce, and may even exacerbate, the negative impacts of service failures (Puntoni et al. 2021). Overall, the CASA (Computers Are Social Actors) factors play a constructive role in maintaining user trust in AI chatbots in service failure scenarios, echoing past research suggesting that improving user experiences with AI systems through increased anthropomorphism and social factors can be beneficial (Ho et al. 2018).
This research introduces AI anxiety, a cognitive perception, as both a direct influencing factor and a moderating variable. It was found that levels of AI anxiety not only directly reduce consumers’ willingness to maintain trust in AI customer service in the context of service failures but also exacerbate the negative impact of internal attributions on sustained trust. Specifically, consumers with higher levels of AI anxiety experience a more severe negative impact on their trust in AI chatbots due to internal attributions. Conversely, those with lower levels of AI anxiety exhibit a relatively mild negative impact on trust under similar circumstances. Consumers with high levels of AI anxiety tend to hold more negative and resistant attitudes toward AI systems. When service failures occur, these users are more likely to rely on stereotypical views of AI, leading to a more pronounced reduction in sustained trust. However, those with lower levels of AI anxiety, despite perceiving that the service failure may be due to internal factors of the AI, adopt a more tolerant attitude and thus experience a lesser impact on trust (Chuah and Yu, 2021).
AI anxiety, as an individual psychological state, reflects consumers’ insecurity about artificial intelligence technology and concerns about potential negative consequences. This insecurity may stem from a lack of understanding of the technology, previous negative experiences, or negative media portrayals (Kaya et al. 2024). Therefore, brands employing AI chatbots must consider how to alleviate users’ AI anxiety, potentially by enhancing users’ AI literacy and improving communication transparency to increase understanding and trust in the technology (Su et al. 2023). This study aligns with existing literature, as previous research has highlighted that consumers’ technological anxiety significantly affects their acceptance and willingness to use technology. The introduction of AI anxiety provides a new perspective on understanding and improving AI customer service experiences, particularly in managing users’ emotions and expectations in the context of service failures. As the widespread adoption of AI in customer service is inevitable, AI service providers should consider implementing strategies to reduce users’ AI anxiety, such as offering reassurances about AI’s safety and effectiveness and developing more user-friendly and understandable interfaces. These measures can help maintain consumer trust during inevitable service failures.
The theoretical contributions of this study include the following: First, by integrating CASA theory into the research context of AI customer service chatbots and consumer trust, and combining it with attribution theory, this study develops a framework to explain the impact on sustained trust. While some past research focused on the functionalist perspective, emphasizing the impact of technological features on trust in AI robots, this study approaches the issue from the perspective of social interaction attributes, extending the research domain of human-AI communication and enriching empirical research in this field. This provides a theoretical analytical perspective that can be referenced in future related research. Second, the study reveals the varying impacts of different levels of anthropomorphism within CASA on sustained trust. There is ongoing debate in academia about the effects of AI anthropomorphism. While most studies acknowledge that anthropomorphism and social factors can be effective pathways to improve human-AI interaction experiences (Skjuve et al. 2021), some argue that AI’s anthropomorphic characteristics not only trigger users’ excessive expectations but also elicit negative emotions toward smart technologies, reducing trust (Lu et al. 2019). Although the findings of this study generally align with the majority of research, supporting the positive role of anthropomorphic characteristics through empirical evidence, the varying impacts of different levels of CASA factors also illustrate the dual effects of anthropomorphism, opening up new avenues for future research exploration. Third, this study introduces the psychological factor of AI anxiety, which advances the integration and development of service marketing theories in AI contexts with individual psychological traits. Past research has verified differences in attitudes and behavioral choices toward AI systems between individuals characterized as “technologists” and “conservatives,” as well as the impact of factors such as individual time orientation (Huang et al. 2019). The findings encourage future researchers to focus more on the endogenous factors affecting users’ attitudes and behaviors toward AI.
The practical contributions of this study include the following: First, as AI customer service becomes increasingly widespread in global marketing and service sectors, scenarios of human-AI interaction failure are inevitable, affecting not only user trust but also damaging companies and brands. However, designing customer service chatbots with increased anthropomorphism and social cues can effectively maintain users’ sustained trust. Second, companies should design AI customer service chatbots based on user profiles, particularly for users with higher levels of AI anxiety. Human-centric and dynamic service adjustments will be a direction for the future development of human-AI communication and AI customer service practices.
While the current study has unearthed some valuable findings, it also has its limitations. Firstly, our investigation primarily focused on sustained trust, but trust encompasses many dimensions, including initial trust, cognitive trust, and behavioral trust, among others. Future research could delve deeper into these dimensions (Li et al. 2014). Secondly, this study utilized cross-sectional surveys, which are limited in their ability to explain causality, particularly since trust is a dynamically changing process. Thirdly, although we incorporated covariates in our data analysis to enhance the accuracy and reliability of the results, alternative explanatory factors may still exist. Therefore, we encourage future research to systematically explore and validate potential alternative explanations. Moreover, as this study primarily recruited participants through social media platforms, representativeness issues arise, as social media users tend to be younger, potentially skewing the sample toward younger demographics. This may result in a lack of representation from older or less frequent social media users, affecting the generalizability of the study findings. Hence, future studies might benefit from adopting experimental methodologies to advance this line of inquiry. Lastly, this research encourages future studies to integrate both functional and social factors to more comprehensively explore the combined impact of AI chatbots on consumer psychology and behavior.
The datasets generated and/or analyzed during the current study are not publicly available due to privacy issues. Making the full data set publicly available could potentially breach the privacy that was promised to participants when they agreed to take part, and may breach the ethics approval for the study. The data are available from the corresponding author on reasonable request.
Aggarwal P, McGill AL (2012) When Brands Seem Human, Do Humans Act Like Brands? Automatic Behavioral Priming Effects of Brand Anthropomorphism. J. Consum Res 39(2):307–323. https://doi.org/10.1086/662614
Araujo T (2018) Living up to the chatbot hype: The influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Hum. Behav. 85:183–189. https://doi.org/10.1016/j.chb.2018.03.051
Ashfaq M, Yun J, Yu S, Loureiro SMC (2020) I, Chatbot: Modeling the determinants of users’ satisfaction and continuance intention of AI-powered service agents. Telemat. Inf. 54:101473. https://doi.org/10.1016/j.tele.2020.101473
Belanche D, Casaló LV, Flavián C (2021) Frontline robots in tourism and hospitality: service enhancement or cost reduction? Electron Mark. 31(3):477–492. https://doi.org/10.1007/s12525-020-00432-5
Bandura A (1986) Social foundations of thought and action. Prentice-Hall, Englewood Cliffs, NJ
Berger P & Luckmann T (2016). The social construction of reality. In Social theory re-wired (pp. 110-122). Routledge
Brandtzaeg PB, Følstad A (2018) Chatbots: changing user needs and motivations. Interactions 25(5):38–43. https://doi.org/10.1145/3236669
Canhoto AI, Clear F (2020) Artificial intelligence and machine learning as business tools: A framework for diagnosing value destruction potential. Bus. Horiz. 63(2):183–193. https://doi.org/10.1016/j.bushor.2019.11.003
Celik V, Yesilyurt E (2013) Attitudes to technology, perceived computer self-efficacy and computer anxiety as predictors of computer supported education. Comput Educ. 60(1):148–158. https://doi.org/10.1016/j.compedu.2012.06.008
Chang WL, White JP, Park J, Holm A, & Šabanović S (2012, September). The effect of group size on people’s attitudes and cooperative behaviors toward robots in interactive gameplay. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (pp. 845-850). IEEE. https://doi.org/10.1109/ROMAN.2012.6343857
Cheng LK (2023) Effects of service robots’ anthropomorphism on consumers’ attribution toward and forgiveness of service failure. J Consum Behav 22(1):67–81
Cheng Y, Jiang H (2022) Customer-brand relationship in the era of artificial intelligence: understanding the role of chatbot marketing efforts. J. Prod. Brand Manag 31(2):252–264. https://doi.org/10.1108/JPBM-05-2020-2907
Chi NTK, Hoang Vu N (2023) Investigating the customer trust in artificial intelligence: The role of anthropomorphism, empathy response, and interaction. Caai T Intell. Techno 8(1):260–273. https://doi.org/10.1049/cit2.12133
Chi OH, Denton G, Dogan G (2020) Artificially intelligent device use in service delivery: a systematic review, synthesis, and research agenda. J. Hosp. Mark. Manag 29(7):757–786. https://doi.org/10.1080/19368623.2020.1721394
Chuah SHW, Yu J (2021). The future of service: The power of emotion in human-robot interaction. J Retail Consum Serv 61. https://doi.org/10.1016/j.jretconser.2021.102551
Chung M, Ko E, Joung H, Kim SJ (2020) Chatbot e-service and customer satisfaction regarding luxury brands. J. Bus. Res 117:587–595. https://doi.org/10.1016/j.jbusres.2018.10.004
Ciechanowski L, Przegalinska A, Magnuski M, Gloor P (2019) In the shades of the uncanny valley: An experimental study of human-chatbot interaction. Future Gener. Comp. Sy 92:539–548. https://doi.org/10.1016/j.future.2018.01.055
Croes EAJ, Antheunis ML (2021) Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot. J. Soc. Pers. Relat. 38(1):279–300. https://doi.org/10.1177/0265407520959463
Crolic C, Thomaz F, Hadi R, Stephen AT (2022) Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. J Mark 86(1):132–148. https://doi.org/10.1177/00222429211045687
Cuff BMP, Brown SJ, Taylor L, Howat DJ (2016) Empathy: A Review of the Concept. Emot. Rev. 8(2):144–153. https://doi.org/10.1177/1754073914558466
Dijkstra TK, Henseler J (2015) Consistent and asymptotically normal PLS estimators for linear structural equations. Comput Stat. Data 81(1):10–23. https://doi.org/10.1016/j.csda.2014.07.008
Fornell C, Larcker DF (1981) Structural Equation Models with Unobservable Variables and Measurement Error: Algebra and Statistics. J. Mark. Res 18(3):382–388. https://doi.org/10.1177/002224378101800313
Fox J, Gambino A (2021) Relationship Development with Humanoid Social Robots: Applying Interpersonal Theories to Human/Robot Interaction. Cyberpsych Beh Soc. N. 24(5):294–299. https://doi.org/10.1089/cyber.2020.0181
Gillath O, Ai T, Branicky MS, Keshmiri S, Davison et al. (2020) Attachment and trust in artificial intelligence. Comput Hum Behav 115:106607. https://doi.org/10.1016/j.chb.2020.106607
Golossenko A, Pillai KG, Aroean L (2020) Seeing brands as humans: Development and validation of a brand anthropomorphism scale. Int J. Res Mark. 37(4):737–755. https://doi.org/10.1016/j.ijresmar.2020.02.007
Hancock PA, Kessler TT, Kaplan AD et al. (2021) Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses. Hum. Factors 63(7):1196–1229. https://doi.org/10.1177/0018720820922080
Hair JF (2009) Multivariate data analysis. Prentice Hall
Hair JF, Hult GTM, Ringle C, Sarstedt M (2017) A primer on partial least squares structural equation modeling (PLS-SEM). Sage Publications, Los Angeles, CA
Hair JF, Ringle CM, Gudergan SP et al. (2019) Partial least squares structural equation modeling-based discrete choice modeling: an illustration in modeling retailer choice. Bus. Res 12:115–142. https://doi.org/10.1007/s40685-018-0072-4
Hair JF, Sarstedt M, Ringle CM, Mena JA (2012) An assessment of the use of partial least squares structural equation modeling in marketing research. J. Acad. Mark. Sci. 40(3):414–433. https://doi.org/10.1007/s11747-011-0261-6
Han MC (2021) The impact of anthropomorphism on consumers’ purchase decision in chatbot commerce. J. Internet Commer. 20(1):46–65. https://doi.org/10.1080/15332861.2020.1863022
Heider F (2013) The psychology of interpersonal relations. Psychology Press
Ho A, Hancock J, Miner AS (2018) Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot. J. Commun. 68(4):712–733. https://doi.org/10.1093/joc/jqy026
Huang MH, Rust R, Maksimovic V (2019) The Feeling Economy: Managing in the Next Generation of Artificial Intelligence (AI). Calif. Manag. Rev. 61(4):43–65. https://doi.org/10.1177/0008125619863436
Iglesias O, Markovic S, Rialp J (2019) How does sensory brand experience influence brand equity? Considering the roles of customer satisfaction, customer affective commitment, and employee empathy. J. Bus. Res 96:343–354. https://doi.org/10.1016/j.jbusres.2018.05.043
Johnson DG, Verdicchio M (2017) AI anxiety. J. Assoc. Inf. Sci. Technol. 68(9):2267–2270. https://doi.org/10.1002/asi.23867
Kaya F, Aydin F, Schepman A et al. (2024) The roles of personality traits, AI anxiety, and demographic factors in attitudes toward artificial intelligence. Int J. Hum.-Comput. Interact. 40(2):497–514. https://doi.org/10.1080/10447318.2022.2151730
Kelley HH, Michela JL (1980) Attribution theory and research. Annu Rev. Psychol. 31:457–501. https://doi.org/10.1146/annurev.ps.31.020180.002325
Kim S, Baek TH (2018) Examining the antecedents and consequences of mobile app engagement. Telemat. Inf. 35(1):148–158. https://doi.org/10.1016/j.tele.2017.10.008
Kim T, Hinds P (2006) Who should I blame? Effects of autonomy and transparency on attributions in human-robot interaction. In: RO-MAN 2006: The 15th IEEE International Symposium on Robot and Human Interactive Communication, IEEE, pp. 80–85. https://doi.org/10.1109/ROMAN.2006.314398
Klaus P, Zaichkowsky J (2020) AI voice bots: a services marketing research agenda. J. Serv. Mark. 34(3):389–398
Koufaris M, Hampton-Sosa W (2004) The development of initial trust in an online company by new customers. Inf. Manag. 41(3):377–397. https://doi.org/10.1016/j.im.2003.08.004
Kull AJ, Romero M, Monahan L (2021) How may I help you? Driving brand engagement through the warmth of an initial chatbot message. J. Bus. Res 135:840–850. https://doi.org/10.1016/j.jbusres.2021.03.005
Kumar V, Ramachandran D, Kumar B (2020) Influence of new-age technologies on marketing: A research agenda. J. Bus. Res 125:864–877. https://doi.org/10.1016/j.jbusres.2020.01.007
Lastner MM, Folse JAG, Mangus SM et al. (2016) The road to recovery: Overcoming service failures through positive emotions. J. Bus. Res 69(10):4278–4286. https://doi.org/10.1016/j.jbusres.2016.04.002
Ledro C, Nosella A, Vinelli A (2022) Artificial intelligence in customer relationship management: literature review and future research directions. J. Bus. Ind. Mark. 37(13):48–63
Lee SE, Ju N, Lee KH (2023) Service chatbot: Co-citation and big data analysis toward a review and research agenda. Technol. Forecast. Soc. Change 194:122722. https://doi.org/10.1016/j.techfore.2023.122722
Lei X, Rau PLP (2021) Effect of Robot Tutor’s Feedback Valence and Attributional Style on Learners. Int J. Soc. Robot 13(7):1579–1597. https://doi.org/10.1007/s12369-020-00741-x
Li H, Jiang JH, Wu MJ (2014) The effects of trust assurances on consumers’ initial online trust: A two-stage decision-making process perspective. Int J. Inf. Manag. 34(3):395–405. https://doi.org/10.1016/j.ijinfomgt.2014.02.004
Lu L, Cai R, Gursoy D (2019) Developing and validating a service robot integration willingness scale. Int J. Hosp. Manag 80:36–51. https://doi.org/10.1016/j.ijhm.2019.01.005
Luo XM, Tong SL, Fang Z, Qu Z (2019) Frontiers: Machines vs. Humans: The Impact of Artificial Intelligence Chatbot Disclosure on Customer Purchases. Mark. Sci. 38(6):937–947. https://doi.org/10.1287/mksc.2019.1192
McLean G, Osei-Frimpong K, Barhorst J (2021) Alexa, do voice assistants influence consumer brand engagement? – Examining the role of AI powered voice assistants in influencing consumer brand engagement. J. Bus. Res 124:312–328. https://doi.org/10.1016/j.jbusres.2020.11.045
Mende M, Scott ML, van Doorn J et al. (2019) Service robots rising: How humanoid robots influence service experiences and elicit compensatory consumer responses. J. Mark. Res 56(4):535–556. https://doi.org/10.1177/0022243718822827
Meyer P, Jonas JM, Roth A (2020) Frontline employees’ acceptance of and resistance to service robots in stationary retail-an exploratory interview study. SMR-J. Serv. Manag. Res. 4(1):21–34. https://doi.org/10.15358/2511-8676-2020-1-21
Moon Y (2003) Don’t blame the computer: When self-disclosure moderates the self-serving bias. J. Consum Psychol. 13(1-2):125–137
Mozafari N, Weiger WH, Hammerschmidt M (2022) Trust me, I’m a bot-repercussions of chatbot disclosure in different service frontline settings. J. Serv. Manag. 33(2):221–245. https://doi.org/10.1108/JOSM-10-2020-0380
Nass C, Moon Y, Fogg BJ et al. (1995) Can computer personalities be human personalities? Int J. Hum.-Comput. Stud. 43:223–239. https://doi.org/10.1006/ijhc.1995.1042
Nass C, Moon Y (2000) Machines and mindlessness: Social responses to computers. J. Soc. Issues 56(1):81–103. https://doi.org/10.1111/0022-4537.00153
Nguyen TM, Quach S, Thaichon P (2022) The effect of AI quality on customer experience and brand relationship. J. Consum Behav. 21(3):481–493. https://doi.org/10.1002/cb.1974
Nomura T, Kanda T, Suzuki T, Kato K (2008) Prediction of human behavior in human–robot interaction using psychological scales for anxiety and negative attitudes toward robots. IEEE Trans. Robot. 24(2):442–451. https://doi.org/10.1109/TRO.2007.914004
Nowak KL, Biocca F (2003) The effect of the agency and anthropomorphism on users’ sense of telepresence, copresence, and social presence in virtual environments. Presence Teleoper. Virtual Environ. 12(5):481–494. https://doi.org/10.1162/105474603322761289
Park G, Yim MC, Chung JY, Lee S (2022) Effect of AI chatbot empathy and identity disclosure on willingness to donate: the mediation of humanness and social presence. Behav. Inf. Technol. https://doi.org/10.1080/0144929X.2022.2105746
Pelau C, Dabija DC, Ene I (2021) What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput Hum Behav 122:106855. https://doi.org/10.1016/j.chb.2021.106855
Pelau C, Ene I, Pop MI (2021) The impact of artificial intelligence on consumers’ identity and human skills. Amfiteatru Econ. 23(56):33–45
Podsakoff PM, MacKenzie SB, Lee JY, Podsakoff NP (2003) Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl Psychol. 88(5):879–903. https://doi.org/10.1037/0021-9010.88.5.879
Puntoni S, Reczek RW, Giesler M, Botti S (2021) Consumers and Artificial Intelligence: An Experiential Perspective. J. Mark. 85(1):131–151. https://doi.org/10.1177/0022242920953847
Reeves B, Nass C (1996) The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press, Cambridge, England
Roy R, Naidoo V (2021) Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. J. Bus. Res 126:23–34. https://doi.org/10.1016/j.jbusres.2020.12.051
Rozin P, Royzman EB (2001) Negativity bias, negativity dominance, and contagion. Pers. Soc. Psychol. Rev. 5(4):296–320. https://doi.org/10.1207/S15327957PSPR0504_2
Sands S, Ferraro C, Campbell C, Tsao HY (2021) Managing the human-chatbot divide: how service scripts influence service experience. J. Serv. Manag. 32(2):246–264. https://doi.org/10.1108/JOSM-06-2019-0203
Shahzad MF, Xu S, An X et al. (2024) Assessing the impact of AI-chatbot service quality on user e-brand loyalty through chatbot user trust, experience and electronic word of mouth. J. Retail Consum Serv. 79:103867. https://doi.org/10.1016/j.jretconser.2024.103867
Shawar BA, Atwell ES (2005) Using corpora in machine-learning chatbot systems. Int J. Corpus Linguist. 10(4):489–516. https://doi.org/10.1075/ijcl.10.4.06sha
Sheehan B, Jin HS, Gottlieb U (2020) Customer service chatbots: Anthropomorphism and adoption. J. Bus. Res 115:14–24. https://doi.org/10.1016/j.jbusres.2020.04.030
Shin H, Bunosso I, Levine LR (2023) The influence of chatbot humour on consumer evaluations of services. Int J. Consum Stud. 47(2):545–562. https://doi.org/10.1111/ijcs.12849
Simon F (2013) The influence of empathy in complaint handling: Evidence of gratitudinal and transactional routes to loyalty. J. Retail Consum Serv. 20:599–608. https://doi.org/10.1016/j.jretconser.2013.05.003
Skjuve M, Folstad A, Fostervold KI, Brandtzaeg PB (2021) My Chatbot Companion – a Study of Human-Chatbot Relationships. Int J. Hum.-Comput. Stud. 149:102601. https://doi.org/10.1016/j.ijhcs.2021.102601
Song CS, Kim YK (2022) The role of the human-robot interaction in consumers’ acceptance of humanoid retail service robots. J. Bus. Res 146:489–503. https://doi.org/10.1016/j.jbusres.2022.03.087
Su J, Ng DTK, Chu SKW (2023) Artificial intelligence (AI) literacy in early childhood education: The challenges and opportunities. Computers Educ.: Artif. Intell. 4:100124. https://doi.org/10.1016/j.caeai.2023.100124
Venkatesh V, Thong JYL, Xu X (2016) Unified Theory of Acceptance and Use of Technology: A Synthesis and the Road Ahead. J. Assoc. Inf. Syst. 17(5):328–376
Wang WH (2017) Smartphones as Social Actors? Social dispositional factors in assessing anthropomorphism. Comput Hum. Behav. 68:334–344. https://doi.org/10.1016/j.chb.2016.11.022
Xu K, Chen XB, Huang LL (2022) Deep mind in social responses to technologies: A new approach to explaining the Computers are Social Actors phenomena. Comput Hum Behav 134:107321. https://doi.org/10.1016/j.chb.2022.107321
Yang Y, Liu Y, Lv X et al. (2022) Anthropomorphism and customers’ willingness to use artificial intelligence service agents. J. Hosp. Mark. Manag 31(1):1–23. https://doi.org/10.1080/19368623.2021.1926037
Yogeeswaran K, Złotowski J, Livingstone M et al. (2016) The interactive effects of robot anthropomorphism and robot ability on perceived threat and support for robotics research. J. Hum.-Robot Interact. 5(2):29–47. https://doi.org/10.5898/JHRI.5.2.Yogeeswaran
Youn S, Jin SV (2021) “In AI we trust?” The effects of parasocial interaction and technopian versus luddite ideological views on chatbot-based customer relationship management in the emerging “feeling economy”. Comput Hum Behav 119:106721. https://doi.org/10.1016/j.chb.2021.106721
Yu S, Xiong J, Shen H (2024) The rise of chatbots: The effect of using chatbot agents on consumers’ responses to request rejection. J. Consum Psychol. 34(1):35–48. https://doi.org/10.1002/jcpy.1330
Zadro L, Williams KD, Richardson R (2004) How low can you go? Ostracism by a computer is sufficient to lower self-reported levels of belonging, control, self-esteem, and meaningful existence. J Exp Soc Psychol 40(4):560–567
The authors thank all the participants of this study. Special thanks to Dr. Guo Lei from Fuzhou University, Dr. Li Zitian from Xiamen University of Technology, and Dr. Wei Juan from Xiamen University for their assistance in this research. The participants were all informed about the purpose and content of the study and voluntarily agreed to participate. Funding for this study was provided by Minjiang University Research Start-up Funds (No. 324-32404314).
These authors contributed equally: Chenyu Gu, Linhao Zeng.
School of Journalism and Communication, Minjiang University, Fuzhou, China
Chenyu Gu & Yu Zhang
Fujian Digital Media Economy Research Center, Fujian Social Science Research Base, Fuzhou, China
Chenyu Gu
School of Journalism and Communication, Renmin University of China, Beijing, China
Linhao Zeng
Conceptualization, C.G., L.Z.; methodology, C.G.; software, C.G.; validation, C.G., L.Z.; formal analysis, C.G.; investigation, C.G.; resources, C.G., Y.Z.; data curation, C.G., Y.Z.; writing—original draft preparation, C.G., L.Z.; writing—review and editing, C.G., Y.Z.; visualization, C.G.; project administration, C.G., L.Z. All authors have read and agreed to the published version of the manuscript.
Correspondence to Linhao Zeng.
The questionnaire and methodology for this study were approved by the School of Journalism and Communication, Minjiang University, Committee on Ethical Research (Ref: MJUCER20240211). The procedures used in this study adhere to the tenets of the Declaration of Helsinki.
Informed consent for this study was obtained in writing through the Wenjuan Star platform prior to data collection. The consent process was initiated on 09/01/2023, and participants were required to review the details of the study and provide their agreement to participate by selecting the consent option provided within the online questionnaire. Participants in this research have been fully informed of the nature and purpose of the study. All data collected will be used solely for academic purposes, and participants’ anonymity is fully assured. Their responses will remain confidential, with no personally identifiable information being collected or stored. There are no anticipated risks associated with their participation, and they were informed of their right to withdraw from the study at any time without any consequences.
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
Gu, C., Zhang, Y. & Zeng, L. Exploring the mechanism of sustained consumer trust in AI chatbots after service failures: a perspective based on attribution and CASA theories. Humanit Soc Sci Commun 11, 1400 (2024). https://doi.org/10.1057/s41599-024-03879-5