Let’s assume that you have a very simple business and you want to deploy a chatbot for customer support. Let’s say 100 questions and answers (Q/A)s would cover all your issues. It looks very simple, and you may be tempted to deploy a deep learning method to build your chatbot. Here are the problems you are going to face:
COMBINATORIAL EXPLOSION IN NATURAL LANGUAGES
Unless you are a trained linguist, you can easily underestimate how flexible natural language is, and how explosively variations emerge from a single question. Say your first Q/A starts with a basic complaint like “I have a problem with my cable.” This simple statement can be expressed in more than a dozen ways (“My cable isn’t working,” “There’s something wrong with my cable,” “My cable is acting up,” and so on), and the combinations do not end there!
Take any one of those expressions, and morphological and synonym variations alone can yield another dozen combinations: problem/issue/trouble, cable/cable service/cable connection, and so on.
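The way these variations multiply can be sketched with a few lines of Python. The word lists below are purely illustrative, not from any real dataset; the point is that independent substitution slots multiply, so even short lists explode quickly:

```python
from itertools import product

# Hypothetical substitution slots for the single complaint
# "I have a problem with my cable" (illustrative word lists only).
openers  = ["I have", "I've got", "There is", "I'm having"]
problems = ["a problem", "an issue", "trouble", "some difficulty"]
targets  = ["with my cable", "with my cable service", "with the cable connection"]

# Every combination of one choice per slot is a distinct surface form
# the chatbot must recognize as the same complaint.
variants = [" ".join(parts) for parts in product(openers, problems, targets)]

print(len(variants))  # 4 * 4 * 3 = 48 surface forms from one complaint
print(variants[0])    # "I have a problem with my cable"
```

Three small slots already produce 48 phrasings; add word-order changes, contractions, and typos, and one Q/A easily spans a hundred or more surface forms.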
As you can see, this is only the first Q/A from your set of 100. Just imagine if some of the (Q/A)s you have are more complicated than this simple starting expression.
Your set of 100 (Q/A)s can easily balloon into 10,000 equivalent expressions that users may type, all of which must be detected and understood by your chatbot software.
SO, WHAT IS THE PROBLEM?
The problem is not the deep learning method itself, but what it needs to function properly. You need to have a data set of 10,000 questions, if not more, that are linguistically equivalent expressions as shown above. Also, these 10,000 questions should map to 100 answers in this hypothetical case. Unless someone sits down and types them one by one, such a data set will be a nightmare to acquire.
If you already have a customer support system and have collected, let’s say, 1 million (Q/A)s, there is still no guarantee that these 1 million (Q/A)s will cover the 10,000 linguistic variations needed to detect the 100 main (Q/A)s. Because user phrasings follow a heavily skewed distribution, with a few wordings dominating and a long tail of rare ones, 1 million (Q/A)s may well cover less than 30% of your required data set. Your chatbot solution will remain vulnerable to undetected responses after all that trouble.
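This coverage gap can be sketched with a quick simulation. The numbers below are hypothetical, and the Zipf-like frequency distribution (exponent 2) is an illustrative assumption about how unevenly users phrase things, not measured data:

```python
import random

NUM_VARIANTS = 10_000   # distinct phrasings the bot must recognize (hypothetical)
NUM_LOGGED = 1_000_000  # historical (Q/A)s pulled from an existing support system

# Assume a heavy-tailed (Zipf-like) distribution over phrasings:
# a few wordings dominate, most appear rarely. Exponent 2 is illustrative.
weights = [1.0 / (rank ** 2) for rank in range(1, NUM_VARIANTS + 1)]

random.seed(42)  # fixed seed so the sketch is reproducible
seen = set(random.choices(range(NUM_VARIANTS), weights=weights, k=NUM_LOGGED))

coverage = len(seen) / NUM_VARIANTS
print(f"Coverage after {NUM_LOGGED:,} logged questions: {coverage:.0%}")
```

Under these assumptions, a million logged questions still leave most of the rare phrasings unseen, because the same popular wordings keep repeating while the long tail goes unsampled.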
Consequently, anyone deploying a deep learning method will quickly find themselves in a data crisis. No matter what type of deep learning method you deploy, the data requirement described here holds. Neural networks cannot discover the equivalent variations of natural language on their own without being provided ample examples. And I want to underline the word “ample” here.
OTHER TYPES OF DATA CRISES WITH DEEP LEARNING
Go back to the hypothetical case where you have a service operation and can pull 1 million (Q/A)s. To make sure this data set will not cause any harm, someone must manually go through it to clean it up. You cannot just dump data into a deep learning system without verifying it. Remember the Microsoft case, where Tay, the chatbot trained on Twitter feeds, started to produce racist statements.
The learn-as-you-go approach also poses problems. Deep learning methods require a training process and convergence before deployment, which can take a long time. Once trained, the system cannot simply absorb new data incrementally; the entire data set must be trained again. As a result, if you plan to add new data to your chatbot every week, you need a team of AI specialists retraining the system and redeploying it every week. As one can imagine, this does not seem like a scalable business solution.
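The weekly workflow this implies can be sketched as follows. Here `train` is a hypothetical stand-in for a full deep-learning training run; the point is that new examples cannot simply be appended to a live model, so the whole (old + new) data set goes through training again:

```python
def train(dataset):
    # Stand-in for a full training run: a real one converges
    # over the *entire* dataset, which can take a long time.
    return {"examples_seen": len(dataset)}

# Week 1: initial training and deployment (toy data, for illustration).
dataset = [("I have a problem with my cable", "cable_issue")] * 100
model = train(dataset)

# Week 2: new (Q/A)s arrive. There is no incremental "absorb" step,
# so the combined dataset must be trained again from scratch.
new_examples = [("my bill looks wrong", "billing_issue")] * 20
dataset += new_examples
model = train(dataset)

print(model["examples_seen"])  # 120: every example reprocessed, old and new
```

Every weekly update repeats the full cycle: merge data, retrain, validate, redeploy, which is exactly the staffing burden described above.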
WILL USING BUTTONS SOLVE THE PROBLEM?
When Facebook launched its chatbot platform, it assumed that buttonizing conversations could solve part of the combinatorial explosion problem described here. First of all, let’s make one thing clear:
If the user is not allowed to enter free expressions at any time during the conversation, it is no longer a chatbot or conversational AI. It is a toy.
Most Facebook chatbot developers jumped on the idea of buttonizing entire conversations, yielding nothing more than toys. As several articles reported ahead of Facebook’s recent summit, most of the 30,000-plus chatbots developed in this fashion flopped; only a few succeeded. Entirely buttonized conversations succeed only for a narrow range of business types. If buttons are used alongside successfully detected free expressions, however, the combination can be powerful.
WHERE IS THE SOLUTION?
I intend to write more about the solutions later. In a nutshell, however, solving the chatbot problem requires independent NLP solutions before a deep learning method can be used. One thing is for sure: deep learning alone is not a good fit, and the “silver bullet” engineering mentality behind it has no future here.