Getting computers to understand and react appropriately to natural language is very difficult. Humans have evolved sophisticated natural language capabilities over the past 150,000 or so years, and emulating those capabilities in far less advanced computer systems is challenging. Given that we're not much closer to artificial general intelligence than we were decades ago, chatbots have to be constrained to solving specific problems. Many common front-line customer support applications, for instance, can be effectively automated with a chatbot.
Although chatbots have nowhere near human capabilities in speech and language, their abilities are steadily expanding, and they are now useful for a variety of applications that were a pipe dream a few years ago. Any kind of repetitive, relatively formulaic interaction can now often be handled by artificial intelligence.
That said, building a useful chatbot framework that can be adapted to a variety of circumstances is an unfinished endeavor. As natural language processing (NLP) technology continues to advance, more and more specific uses are becoming possible with chatbot technology.
What does a good chatbot framework have to do?
To make a chatbot framework that is as broadly applicable as possible, there’s a variety of features and other requirements that should be implemented:
An advanced natural language parser. Humans are bad at typing, especially in a hurry, so we don’t always spell things correctly. Parsers that rely on specific and correct spellings and phrasings are going to miss out on a lot of natural human interactions.
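One simple way to tolerate misspellings is fuzzy string matching against a known vocabulary. The sketch below uses Python's standard-library `difflib`; the vocabulary and cutoff value are invented for illustration, and real frameworks typically use more sophisticated techniques such as edit-distance tries or learned embeddings.

```python
import difflib

# Hypothetical intent vocabulary; a real framework's dictionary
# would be far larger and often learned from data.
KNOWN_WORDS = ["refund", "shipping", "password", "cancel"]

def normalize(word, vocabulary=KNOWN_WORDS, cutoff=0.7):
    """Map a possibly misspelled word to the closest known term,
    or return None if nothing is similar enough."""
    matches = difflib.get_close_matches(word.lower(), vocabulary,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

print(normalize("pasword"))    # close enough to "password"
print(normalize("xylophone"))  # no plausible match -> None
```

With this in place, "I forgot my pasword" can still be routed to the password-reset intent instead of falling through to a generic error reply.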
Contextual information gathering. When two people talk to one another, they don’t forget what the other one was saying midway through the conversation. Similarly, chatbots should always have an idea of the topic of conversation so that the human can use pronouns and write normally. This is somewhat rare in common chatbot software. Even applications like Siri from huge companies are just beginning to integrate advanced contextual awareness. It’s a surprisingly hard problem.
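At its simplest, contextual awareness means the bot carries state between turns so that a pronoun like "it" can resolve to the last topic mentioned. The sketch below is a deliberately naive illustration with invented topic words and messages, not how any particular assistant implements context.

```python
class Context:
    """Minimal conversational memory: remember the last topic seen
    so follow-up questions can refer to it with a pronoun."""

    def __init__(self):
        self.topic = None

    def update(self, message):
        # Naive topic detection: keep the last product word we saw.
        # Real systems use coreference resolution and dialogue state
        # tracking instead of a hard-coded word list.
        for word in ("laptop", "phone"):
            if word in message.lower():
                self.topic = word
        return self.topic

ctx = Context()
ctx.update("I'm looking at a laptop")
topic = ctx.update("How much does it cost?")  # "it" resolves via stored topic
print(topic)  # -> laptop
```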
Ease of extension. New ways of integrating chatbot technology are always being created. Building a chatbot framework in a modular way makes it easier to extend with new concepts for new applications.
Ease of use at scale. NLP applications often serve many users simultaneously or run in large companies that require the framework to scale. Certain architectural decisions, such as modular API design and containerization, help some chatbot frameworks scale better than others.
Why is NLP so hard?
Humans communicate using hundreds of different intricate written and spoken languages. Even if a chatbot only works in English, for instance, the myriad ways to express a particular concept make it very difficult to determine meaning without human intuition. Over the years, there have been many attempts to create generic natural language understanding and natural language generation systems.
Traditional symbolic artificial intelligence systems tried to describe the way that language works from a logical perspective. The idea was that writing out rules for how language worked would allow the computer to understand and generate convincingly realistic language if the rules were specific enough. It turns out that natural language is complicated enough that it is practically impossible to specify rules that work any significant fraction of the time.
More recently, now that computers are much more powerful, statistical machine learning has taken over as the dominant way to parse and generate natural language. The full-sized GPT-2 model, for example, does an impressive job of generating text. Still, its output often fails to make complete sense, even when it is grammatically correct and superficially logical.
Building a machine learning model for natural language requires training the model on millions of pieces of text from diverse authors. It usually also requires a ton of GPU power to perform the training process, which only recently became cost-effective.
How does a good chatbot framework work?
Since chatbots will not be as fluent as humans across a wide range of topics for a very long time, we have to select specific tasks that can be automated with chatbots to produce a usable product.
In order to build a chatbot using a framework, the developer must:
Define a dictionary that maps different types of user input to their meanings.
Define a set of behavioral rules that perform certain actions when certain conditions are triggered.
Build templates to output appropriate responses.
Package it all up and deploy it.
For example, a chatbot for redirecting customers to the appropriate customer support division in a large company might ask questions about the type of product the user wants help with. Depending on the customer’s answer (using the dictionary and behavioral rules), it might then ask additional questions to refine its understanding of the customer’s desires. Finally, it would dispatch the user to an appropriate support channel. This job would be very boring and repetitive for a human, so it’s a prime target for automation using a powerful chatbot framework.
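The four steps above can be sketched in a few lines for the support-routing example. Everything here is invented for illustration (the phrases, intents, channel names, and templates); a real framework would load these definitions from configuration and layer fuzzy matching and context on top.

```python
# 1. Dictionary: user phrases mapped to their meanings (intents).
DICTIONARY = {
    "printer": "hardware",
    "laptop": "hardware",
    "invoice": "billing",
    "refund": "billing",
}

# 2. Behavioral rules: when an intent is recognized, pick an action
#    (here, a support channel to route the customer to).
RULES = {
    "hardware": "hardware-support",
    "billing": "billing-support",
}

# 3. Templates for the bot's outgoing responses.
TEMPLATES = {
    "routed": "Connecting you to {channel}.",
    "unknown": "Which product or service do you need help with?",
}

def respond(message):
    """Match the message against the dictionary, apply the rules,
    and fill in the appropriate response template."""
    for phrase, intent in DICTIONARY.items():
        if phrase in message.lower():
            channel = RULES[intent]
            return TEMPLATES["routed"].format(channel=channel)
    # No intent recognized: ask a clarifying question instead.
    return TEMPLATES["unknown"]

print(respond("My printer is jammed"))  # routes to hardware support
print(respond("Hello there"))           # asks a clarifying question
```

Step 4, packaging and deployment, is exactly what a good framework should take off the developer's hands.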
A well-designed chatbot framework makes it easy to specify these parameters for simple, common applications and makes it possible to define them for novel and complex projects. For its part, the framework should automatically check user input for typos, retain contextual information for subsequent questions, make it easy to integrate the chatbot API into various types of apps, and make deployment and scaling a breeze.
Given the complexity of natural language processing and the difficulties inherent in designing a modular, scalable, easy-to-use chatbot framework, good ones are rare. Frameworks that are also free and open source software and not tied to a particular provider's cloud infrastructure are rarer still. That's why E.D.D.I from Labs.ai is so cool: it's an advanced chatbot framework provided for free with the source code.