
[JM] Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions (2021)

이제니 2022. 3. 21. 03:10

 

© rocknrollmonkey, source: Unsplash

When Humanlike Chatbots Miss the Mark in Customer Service Interactions

October 20, 2021 | Cammy Crolic, Felipe Thomaz, Rhonda Hadi, and Andrew T. Stephen

 

Chatbots are increasingly replacing human customer-service agents on companies’ websites, social media pages, and messaging services. Designed to mimic humans, these bots often have human names (e.g., Amazon’s Alexa), humanlike appearances (e.g., avatars), and the capability to converse like humans. The assumption is that having humanlike qualities makes chatbots more effective in customer service roles. However, a new Journal of Marketing study suggests that this is not always the case.

Our research team finds that when customers are angry, deploying humanlike chatbots can negatively impact customer satisfaction, overall firm evaluation, and subsequent purchase intentions. Why? Because humanlike chatbots raise unrealistic expectations of how helpful they will be.

To better understand how humanlike chatbots impact customer service, we conducted five studies.

In Study 1, we analyzed nearly 35,000 chat sessions between an international mobile telecommunications company’s chatbot and its customers. We found that when a customer was angry, the humanlike appearance of the chatbot had a negative effect on the customer’s satisfaction.

In Study 2, we created a series of mock customer-service scenarios and chats where 201 participants were either neutral or angry and the chatbot was either humanlike or non-humanlike. Again, we saw that angry customers displayed lower overall satisfaction when the chatbot was humanlike than when it was not.

In Study 3, we demonstrated that the negative effect extends to overall company evaluations, but not when the chatbot effectively resolves the problem (i.e., meets expectations). We had 419 angry participants engage in a simulated chat with a humanlike or non-humanlike chatbot, but their problems were either effectively resolved or not during the interaction. As expected, when their problems were not effectively resolved, participants reported lower evaluations of the company when they interacted with a humanlike chatbot compared to a non-humanlike one. Yet, when their problems were effectively resolved, the company evaluations were higher, with no difference based on the type of chatbot.

In Study 4, we conducted an experiment with 192 participants that provided evidence that this negative effect is driven by the inflated expectations customers hold for humanlike chatbots: people expect humanlike chatbots to perform better than non-humanlike ones, and when those expectations go unmet, purchase intentions drop.

In Study 5, we showed that explicitly lowering customers’ expectations of the humanlike chatbot before the chat reduced angry customers’ negative responses to it. When people no longer held unrealistic expectations of how helpful the humanlike chatbot would be, angry customers stopped penalizing it with negative ratings.

Our findings provide a clear roadmap for how best to deploy chatbots when dealing with hostile, angry, or complaining customers. Marketers should carefully design chatbots and consider the context in which they are used, particularly when it comes to handling customer complaints or resolving problems. Firms should attempt to gauge whether a customer is angry before they enter the chat (e.g., via natural language processing) and then deploy the more effective type of chatbot: if the customer is not angry, assign a humanlike chatbot; if the customer is angry, assign a non-humanlike one. If this sophisticated strategy is not technically feasible, companies could assign non-humanlike chatbots in customer service situations where customers tend to be angry, such as complaint centers. Alternatively, companies could downplay the capabilities of humanlike chatbots (e.g., Slack’s chatbot introduces itself by saying “I try to be helpful (But I’m still just a bot. Sorry!)” or “I am not a human. Just a bot, a simple bot, with only a few tricks up my metaphorical sleeve!”). These strategies should help avoid or mitigate the lower customer satisfaction, overall firm evaluation, and subsequent purchase intentions that angry customers report after dealing with humanlike chatbots.
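To make the routing idea concrete, here is a minimal sketch in Python of how such a policy could work. Everything in it is assumed for illustration: the keyword lexicon stands in for a real sentiment or emotion classifier, and the two personas (“Jamie” and “Support Bot”) are hypothetical examples, not the bots used in the studies.

```python
from dataclasses import dataclass
from typing import Optional

# Toy anger lexicon. A real deployment would use a trained
# sentiment/emotion classifier; this stand-in is only illustrative.
ANGER_CUES = {"angry", "furious", "ridiculous", "unacceptable",
              "worst", "terrible", "outraged"}

@dataclass
class BotPersona:
    name: str
    avatar: Optional[str]  # path to an avatar image, if any
    greeting: str

# Humanlike persona: human name, avatar, conversational greeting.
HUMANLIKE = BotPersona(
    name="Jamie",  # hypothetical persona
    avatar="avatars/jamie.png",
    greeting="Hi, I'm Jamie! How can I help you today?",
)

# Non-humanlike persona: no avatar, and the greeting downplays
# its capabilities up front (the expectation-lowering strategy
# from Study 5).
NON_HUMANLIKE = BotPersona(
    name="Support Bot",
    avatar=None,
    greeting="Automated assistant. I'm just a bot, so I may need "
             "to pass you to a human agent for complex issues.",
)

def looks_angry(message: str) -> bool:
    """Crude anger check: does any anger cue appear in the message?"""
    text = message.lower()
    return any(cue in text for cue in ANGER_CUES)

def route(first_message: str) -> BotPersona:
    """Angry customers get the non-humanlike persona;
    everyone else gets the humanlike one."""
    return NON_HUMANLIKE if looks_angry(first_message) else HUMANLIKE

print(route("Hi, I'd like to change my plan.").name)        # Jamie
print(route("This is unacceptable. I want a refund!").name)  # Support Bot
```

Swapping the lexicon for a classifier trained on customers’ opening messages would not change the structure: the routing decision stays the same two-way branch, and the expectation-lowering greeting covers the case where routing is not feasible.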

 


From: Cammy Crolic, Felipe Thomaz, Rhonda Hadi, and Andrew Stephen, “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions,” Journal of Marketing.


 

References: AMA

https://www.ama.org/2021/10/20/when-humanlike-chatbots-miss-the-mark-in-customer-service-interactions/


Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions

Cammy Crolic, Felipe Thomaz, Rhonda Hadi, Andrew T. Stephen

First published: November 3, 2021 · Research Article

https://doi.org/10.1177/00222429211045687

 

Abstract

Chatbots have become common in digital customer service contexts across many industries. While many companies choose to humanize their customer service chatbots (e.g., giving them names and avatars), little is known about how anthropomorphism influences customer responses to chatbots in service settings. Across five studies, including an analysis of a large real-world data set from an international telecommunications company and four experiments, the authors find that when customers enter a chatbot-led service interaction in an angry emotional state, chatbot anthropomorphism has a negative effect on customer satisfaction, overall firm evaluation, and subsequent purchase intentions. However, this is not the case for customers in nonangry emotional states. The authors uncover the underlying mechanism driving this negative effect (expectancy violations caused by inflated pre-encounter expectations of chatbot efficacy) and offer practical implications for managers. These findings suggest that it is important to both carefully design chatbots and consider the emotional context in which they are used, particularly in customer service interactions that involve resolving problems or handling complaints.

 

Keywords

customer service, artificial intelligence, conversational agents, chatbots, anthropomorphism, anger, expectancy violations

 

Citation

Crolic C, Thomaz F, Hadi R, Stephen AT. Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions. Journal of Marketing. 2022;86(1):132-148. doi:10.1177/00222429211045687

 

References: SAGE journals

https://journals.sagepub.com/doi/10.1177/00222429211045687

 


 

Key Points

- Examines how humanizing customer service chatbots affects customer responses, based on real-world data from a telecommunications carrier and four experiments

- When customers interact with a chatbot service while angry, chatbot humanization negatively affects customer satisfaction, firm evaluation, and purchase intentions; no such effect appears when customers are not angry

- Identifies the underlying mechanism: humanization inflates pre-encounter expectations of chatbot efficacy, and the negative effect arises when those expectations go unmet

- Suggests that, in customer service interactions involving problem resolution or complaint handling, chatbot design must take the emotional context into account