Reply Bots – The Risk-Reward Tradeoff

Businesses today use AI-powered chatbots to offer responsive customer service online, at all hours, both on the company website and as live chat in the mobile app. They are seen as a welcome alternative for human employees who are pressed for time and bored by the tedium of digital work. They are also used to flag safety concerns, manage maintenance schedules and streamline performance. The trend is now heading toward AI-powered market targeting, which means creating personalized emails and promotions based on collected customer data.

Chatbots are currently deployed across customer interactions in sales, customer service and marketing. They are expected to improve service quality and automate administrative tasks.

  • By deploying a bot, the business commits to being available to customers around the clock, including hours when human intervention is not possible.
  • It adds predictability to responsiveness.
  • With a chatbot, a customer interaction can have a faster turnaround time (TAT).

However, chatbots also pose risks to a business, even before adding cybersecurity concerns to the mix: unencrypted channels, unauthorized access to stored conversations, or hackers exploiting the bot for phishing attacks. This is because:

  • Your customers are looking for a human behind the chat. Bots are the antithesis of everything ‘social’ stands for, and the last thing likely to enthuse a stressed or irate customer is a bot offering to deal with their issue.
  • A chatbot which is meant for human interaction is also vulnerable to human intervention with malicious intentions.
  • The bot could be taught terrible communication habits, much like Microsoft’s Tay, a chatbot that started spewing offensive, racist language thanks to its machine learning capabilities and the ‘training’ it received from trolls. Given enough adversarial interactions, a bot could be trained to insist that the Sun rises in the West or to divert customers to malware sites. Such potential for sabotage and subversion should make any business cautious before deploying a chatbot.
  • A stolen device with a chatbot session open could yield sensitive customer information, which could prove detrimental to the business and its interests.

In summary, launching a chatbot is easy, but ensuring that it works effectively is not. Given these risks, letting a bot do anything beyond listening to the customer and responding with a reassuring “We will get back to you …” is not worth the exposure.

The way forward with this still-evolving technology is to build workflows that alert the right people within a response team. The system can also be trained to escalate to the right level by detecting the urgency of the situation. Establishing distributed ownership of responses, coupled with oversight of the response process, is a far better alternative to fully automated replies. With Auris, our listening platform, we have done just that.
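The escalation idea above can be sketched in a few lines: classify an incoming message by urgency, then route it to the appropriate owner rather than auto-replying. This is a minimal illustration, not Auris's actual implementation; the keyword lists, urgency tiers and team names are all hypothetical placeholders.

```python
# Hypothetical escalation workflow: route a customer message to the
# right response-team owner based on detected urgency, instead of
# letting a bot answer on its own.

URGENCY_KEYWORDS = {
    "high": ["outage", "data breach", "legal", "refund"],
    "medium": ["not working", "complaint", "delayed"],
}

# Placeholder team roster: who owns messages at each urgency level.
RESPONSE_TEAM = {
    "high": "on-call-manager",
    "medium": "support-agent",
    "low": "triage-queue",
}

def detect_urgency(message: str) -> str:
    """Return 'high', 'medium', or 'low' from simple keyword matching."""
    text = message.lower()
    for level in ("high", "medium"):
        if any(keyword in text for keyword in URGENCY_KEYWORDS[level]):
            return level
    return "low"

def route(message: str) -> str:
    """Return the team member or queue that should own this message."""
    return RESPONSE_TEAM[detect_urgency(message)]
```

In practice the keyword matcher would be replaced by a trained classifier, but the routing structure stays the same: every message gets a named human owner, which is the distributed-ownership model the paragraph describes.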