Latest FOMI study on humanised chatbots
When customers are angry, humanlike chatbots can make them even angrier, with a negative effect on customer satisfaction and brand reputation.
This is the result of a study carried out by a research team at the Oxford Future of Marketing Initiative (FOMI) at Saïd Business School.
The study, published in the Journal of Marketing, is titled “Blame the Bot: Anthropomorphism and Anger in Customer-Chatbot Interactions” and is authored by Andrew Stephen, L’Oréal Professor of Marketing, and Associate Professors of Marketing Cammy Crolic, Felipe Thomaz and Rhonda Hadi.
Chatbots are increasingly replacing human customer-service agents on companies’ websites, social media pages, and messaging services. Designed to mimic humans, these bots often have human names (e.g., Amazon’s Alexa), humanlike appearances (e.g., avatars), and the capability to converse like humans. The assumption is that having human qualities makes chatbots more effective in customer service roles.
However, FOMI’s research team demonstrated that humanised chatbots raise unrealistic expectations of how helpful they will be. This can increase the frustration in customers who are angry when they shop, whereas there is no significant impact on customers who are in a neutral mood.
The research was conducted through five studies in which the team analysed nearly 35,000 chat sessions and tested hundreds of participants in mock customer-service scenarios or in simulated conversations with humanised chatbots.
A final study, in which the humanlike chatbot explicitly lowered customers’ expectations before the chat began, demonstrated that once people no longer held unrealistic expectations of how helpful the humanlike chatbot would be, the response from angry customers was no longer negative.
Professor Andrew Stephen, Director of FOMI, said: “The emotional state of consumers when they shop is a factor that cannot be underestimated. People often shop to lift their mood after a bad day. If they encounter any issue in their customer journey, they expect the friendly customer service advisor on the other side to have the problem-solving skills to satisfy their needs. Chatbots, however humanised they may be, do not have that level of human sophistication yet.
“Our findings present a clear roadmap for how best to deploy chatbots when dealing with hostile, angry or complaining customers. It is important for marketers to carefully design chatbots and consider the context in which they are used, particularly when it comes to handling customer complaints or resolving problems.”
FOMI recommendations include:
- First assessing whether a customer is angry before they enter the chat (e.g., via natural language processing on their opening message), then deploying whichever chatbot is most effective: a humanlike chatbot if the customer is not angry, a non-humanlike chatbot if they are (see the sketch after this list).
- Where such conditional routing is not technically feasible, assigning non-humanlike chatbots by default in customer-service situations where customers tend to be angry, such as complaint centres.
- Where humanlike chatbots are the only option, downplaying the capabilities of humanlike chatbots (e.g., Slack’s chatbot introduces itself by saying “I try to be helpful. But I’m still just a bot. Sorry!” or “I am not a human. Just a bot, a simple bot, with only a few tricks up my metaphorical sleeve!”).
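To make the first recommendation concrete, here is a minimal Python sketch of such a pre-chat routing step. This is an illustration only, not FOMI’s implementation: `detect_anger` is a toy keyword heuristic standing in for a real natural-language-processing classifier, and the names `BotPersona` and `ANGER_MARKERS` are hypothetical.

```python
from enum import Enum
import string

class BotPersona(Enum):
    HUMANLIKE = "humanlike"          # named, avatar-driven, conversational bot
    NON_HUMANLIKE = "non-humanlike"  # plain bot that is openly machine-identified

# Hypothetical word list; a toy stand-in for a trained emotion classifier.
ANGER_MARKERS = {"angry", "furious", "ridiculous", "unacceptable", "worst"}

def detect_anger(message: str) -> bool:
    """Crude keyword check; a real deployment would use an NLP sentiment model."""
    words = {w.strip(string.punctuation) for w in message.lower().split()}
    return bool(words & ANGER_MARKERS)

def assign_chatbot(opening_message: str) -> BotPersona:
    """Route angry customers away from humanlike bots, per the study's finding."""
    if detect_anger(opening_message):
        return BotPersona.NON_HUMANLIKE
    return BotPersona.HUMANLIKE

# Example routing decisions:
print(assign_chatbot("Hi, where can I track my order?"))         # BotPersona.HUMANLIKE
print(assign_chatbot("This is unacceptable, I want a refund!"))  # BotPersona.NON_HUMANLIKE
```

The routing decision itself is the point: the classifier can be as simple or sophisticated as the deployment allows, so long as angry customers are steered to the non-humanlike persona.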