Ever since IBM’s Deep Blue beat reigning world champion Garry Kasparov in a chess match, the world has been fascinated with machines’ potential to outwit humans. But while artificial intelligence (AI) technology still has a long way to go, that’s not stopping tech titans from waging their own battle to create the next generation of AI-powered virtual assistants. Meanwhile, brands and businesses are simply trying to understand how to build a smart bot.
Apple was an early entrant in voice-activated assistants, adding Siri to iOS several years ago, and more recently Amazon has made great strides in conversational agents with Alexa, which lives on the Echo, its voice-controlled connected speaker.
During its developer conference in mid-May, Google unveiled Google Assistant, an upgraded version of Google Now that lets users carry on two-way "conversations" with their own Google-powered assistant, whether that's pulling up photos taken over the weekend, making a restaurant reservation or mapping directions to their next business meeting.
Then there’s Viv, an AI-powered platform from the creators of Siri that promises to take personal digital assistants to the next level. Rather than simply providing weather conditions in response to basic queries, Viv uses natural language processing to figure out a user’s true intent.
And in April, Facebook announced the beta launch of Bot Engine, a tool that lets developers create conversational bots that interact with users in a variety of ways. The hope is that Bot Engine will enable more complex conversations by continuously learning over time, interpreting the user’s intent and developing a deeper understanding of context.
Smart Bot: the basics
In light of these advancements, it's not surprising that more progress has been made in the field of AI in the last three years alone than in the two decades before that.
Natural language processing, voice recognition, automatic speech recognition, question and intent analysis — they’re all components of a new generation of AI-powered tools that can help brands eventually build smart bots, and create better experiences for their customers.
But when it comes to making a truly smart bot, these are still early days. To help explain where the technology stands, we've broken down bot intelligence into four levels, from kindergarten to graduate school, each enabling different experiences today and tomorrow.
Kindergarten: Basic if/then logic
Today's most basic bots run on simple if/then logic: if the user says X, respond with Y. Beyond that, the majority of today's bots have neither user-context awareness nor the ability to learn from previous interactions. Because they rely on this basic logic, they are ill-equipped to gracefully handle unplanned paths, aggregate information or predict outcomes.
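To make the limitation concrete, here is a minimal sketch of that if/then style of bot. The function name, rules and replies are invented for illustration; no real product is implied:

```python
def rule_based_bot(message: str) -> str:
    """A 'kindergarten-level' bot: hard-coded if/then rules, no memory."""
    text = message.lower()
    words = text.split()
    if "hours" in text:
        return "We're open 9am-5pm, Monday through Friday."
    if "refund" in text:
        return "To request a refund, reply with your order number."
    if "hello" in words or "hi" in words:
        return "Hi! Ask me about store hours or refunds."
    # Anything off-script falls through to a canned apology: no context,
    # no aggregation, no ability to learn from the miss.
    return "Sorry, I didn't understand that."
```

Every unplanned path lands in the final branch, which is exactly why these bots feel brittle the moment a user strays from the script.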
Grade School: Language processing
A grade-school-level smart bot boasts some language processing capabilities that enable it to understand basic requests expressed in natural language and to deal with some of the complexities and ambiguities that come with them. These slightly more sophisticated bots can also build a vocabulary and a knowledge base that more closely mimic how people communicate.
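A common way to sketch this grade-school level is a small bag-of-words intent matcher: the bot keeps a vocabulary per intent and picks the intent with the most word overlap. The intents and phrases below are invented examples, not a real system:

```python
# Each intent maps to a tiny vocabulary of trigger words.
INTENTS = {
    "weather": {"weather", "forecast", "rain", "sunny", "temperature"},
    "greeting": {"hello", "hi", "hey"},
    "booking": {"book", "reserve", "table", "reservation"},
}

def classify_intent(message: str) -> str:
    """Score each intent by vocabulary overlap; no matches means unknown."""
    words = set(message.lower().replace("?", "").replace("!", "").split())
    best, best_score = "unknown", 0
    for intent, vocab in INTENTS.items():
        score = len(words & vocab)
        if score > best_score:
            best, best_score = intent, score
    return best
```

This already tolerates varied phrasing better than exact if/then matching, but it has no notion of word order, negation or context, which is why such bots still misread anything ambiguous.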
But despite their ability to string together a sequence of words, these bots still fall short of the classic definition of artificial intelligence, which typically includes elements of learning or problem solving.
Consider, for example, Hi Poncho, a Facebook Messenger bot that delivers personalized weather forecasts. It's a quirky weather service limited in its ability to create complex interactions and a more personalized experience. That's because it lacks contextual information: the bot can't see mobile OS-level data about you, nor does it have access to any data beyond the Messenger chat session you're currently in. Without that data, the bot can't build the in-depth user knowledge base that would enable more personalized and relevant interactions.
College: Natural language processing, speech recognition and context
To get to this level, a smart bot needs to understand user context — which means having access to device, system, and user information. Bots that live on messaging platforms have yet to graduate to this level because they simply can’t see the data that lives outside the application.
Personal assistants like Google Now and Siri can tap into the user data stored on the phone to better understand a user’s context, such as location information, past behavior, personal files, photos, emails, calendar events, and more. They are also capable of performing natural language processing and have speech recognition technology to understand the spoken word. Amazon’s Echo and Google Home use similar intelligence to power their home assistants.
Armed with this information, a smart chatbot can proactively tell you how long your commute to your next meeting will take and what time you need to wake up in the morning. Or it might let you know when a contact from LinkedIn or a friend from Facebook is at the same event you are.
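The proactive commute example can be sketched as a simple function that folds calendar and commute context into a nudge. This is an illustrative toy, not any vendor's actual API; the function name and message wording are assumptions:

```python
from datetime import datetime, timedelta

def commute_alert(now: datetime, meeting_title: str,
                  meeting_start: datetime, commute_minutes: int) -> str:
    """Turn calendar + commute context into a proactive suggestion."""
    leave_at = meeting_start - timedelta(minutes=commute_minutes)
    if now >= leave_at:
        return f"Leave now for {meeting_title}."
    minutes_left = int((leave_at - now).total_seconds() // 60)
    return f"Leave in {minutes_left} minutes for {meeting_title}."
```

The logic itself is trivial; what makes the college level hard is the access question, since the bot must be allowed to read the calendar, the location and the traffic data in the first place.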
But the best part about these smart assistants is that they are moving towards being able to learn over time — and anticipate a user’s needs for a truly AI-powered experience.
Graduate School: AI-driven, truly smart bot
We may be years away from developing a deep-learning truly smart bot with a vast knowledge base. But with all the research being done, and tech investments being made, chatbots are well on their way to moving beyond a basic understanding of human questions to simulating the human mind.
Amazon has a team of more than 1,000 workers dedicated to its Alexa project. Startup Hu:toma is working on using deep learning technology to help companies and developers create "emotionally evolved" conversations with users via text or voice. What makes Hu:toma different is that it actually learns from past conversations, assembling answers on the fly rather than resorting to pre-programmed, stock answers. Efforts like these are clear signs of the industry's focus on smart bot experiences, and a preview of the more innovative developments we can expect from AI.