Teaching your chatbot concepts

One of the first steps a Conversation Designer will take when starting a conversational project is to establish just what queries people will be making of the chatbot or voice app in question. If they’re lucky, they’ll have access to a nice clean set of real-life queries that users have made to humans through live chat, telephone or other channels. But even then, those queries won’t necessarily account for more abstract versions of the same queries.

For example, someone saying “My iPhone 8 has a cracked screen” is being very explicit, as the phrase provides a lot of context that is useful to a bot:

iPhone = Product

8 = Model of product

Cracked = description of damage

Screen = Part of product that is damaged
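To make this concrete, here is a minimal sketch of how such pieces of context (often called “slots”) might be pulled out of an explicit query with simple keyword matching. The slot names and keyword lists are purely illustrative assumptions, not part of Twyla Canvas:

```python
# Minimal sketch: extract context ("slots") from an explicit query.
# Slot names and keyword lists are illustrative, not a Canvas API.

KNOWN_PRODUCTS = {"iphone": "iPhone", "galaxy": "Galaxy"}
KNOWN_DAMAGE = {"cracked", "busted", "broken"}
KNOWN_PARTS = {"screen", "battery", "camera"}

def extract_slots(utterance: str) -> dict:
    """Return whatever context the utterance makes explicit."""
    slots = {}
    tokens = utterance.lower().replace(",", " ").split()
    for i, tok in enumerate(tokens):
        if tok in KNOWN_PRODUCTS:
            slots["product"] = KNOWN_PRODUCTS[tok]
            # A bare number right after the product name is read as the model.
            if i + 1 < len(tokens) and tokens[i + 1].isdigit():
                slots["model"] = tokens[i + 1]
        elif tok in KNOWN_DAMAGE:
            slots["damage"] = tok
        elif tok in KNOWN_PARTS:
            slots["part"] = tok
    return slots

print(extract_slots("My iPhone 8 has a cracked screen"))
# -> {'product': 'iPhone', 'model': '8', 'damage': 'cracked', 'part': 'screen'}
print(extract_slots("My phone is busted"))
# -> {'damage': 'busted'}
```

Note how the explicit query fills every slot, while the abstract one fills almost none; the gap between the two is exactly what the dialogue will have to close.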

Chances are, with this much context a Conversation Designer can easily design the dialogue flow and content that is required to resolve this specific query.

However, what happens if far less context is provided for the same query?

“My phone is busted”

In this instance, in order to get the user to the same nicely worded resolution that the designer has created for the more explicit query above, the machine will need to gather more context. It will need to establish what product and model of product are in question as well as what part of the product is damaged.

As you can see, a chatbot or voice app understands things in ‘layers’ of abstraction. Someone who phrases something abstractly (“My phone is busted”) may well share the same query, or intent, as the user who supplied much more context to the machine.

Any well-designed conversation will need to account for several layers of abstraction in order to achieve the best result for the end user.

Twyla’s Canvas software provides two powerful features that work in unison to enable Conversation Designers to design for abstractions:

1. Dialogue Layers

2. Flow Connectors

Creating concepts using dialogue layers

So, what does it even mean to “teach a concept”? Well, let’s start by examining the two layers of abstraction described above:

Layer 1: “My phone is busted”

Layer 2: “My iPhone 8 has a cracked screen” -> Resolution

Layer 2 is the version of the query with the most context, and therefore it has the resolution attached to it: the content written by the designer to resolve the user’s query.

Layer 1 will need a response to clarify the missing pieces of context in order to get the user to the same resolution:

Human: “My phone is busted”

Bot: “Oh no! What type of phone is it?”

Human: “iPhone”

Bot: “And what model of iPhone?”

Human: “8”

Bot: “Got it, now please describe the issue with your iPhone 8”

Human: “The screen cracked after I dropped it on the sidewalk”

This dialogue flow acts as navigation from one layer of abstraction down to the layer where the resolution lies.
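This navigation pattern amounts to classic slot-filling: ask for whichever piece of context is still missing, and deliver the resolution once everything has been gathered. A rough sketch of the idea, where the prompt wording and slot names are assumptions rather than Canvas features:

```python
# Sketch of slot-filling navigation: keep asking clarifying questions
# until the required context is complete, then hand off to the resolution.
# Prompts and slot names are illustrative, not a Canvas feature.

CLARIFYING_PROMPTS = {
    "product": "Oh no! What type of phone is it?",
    "model": "And what model is it?",
    "part": "Which part is damaged?",
}
REQUIRED_SLOTS = ["product", "model", "part"]

def next_bot_turn(slots: dict) -> str:
    """Pick the next question, or the resolution once context is complete."""
    for slot in REQUIRED_SLOTS:
        if slot not in slots:
            return CLARIFYING_PROMPTS[slot]
    return f"Here's how to fix the {slots['part']} on your {slots['product']} {slots['model']}."

# Starting from the abstract query "My phone is busted" (no slots filled yet):
print(next_bot_turn({}))                    # asks for the product
print(next_bot_turn({"product": "iPhone"})) # asks for the model
print(next_bot_turn({"product": "iPhone", "model": "8", "part": "screen"}))
```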

Now let’s look at the concepts contained within these two layers.

What Twyla Canvas provides is a discrete “layer” in the bot’s natural language logic where Conversation Designers can design responses to these high-level concepts, for cases where no other specific context is supplied by the user.

So, what if someone says something that contains “iPhone” but no other contextual information relating to the concept of the iPhone?

In Canvas you can create a conceptual query called “iPhone” and a response that provides the first step in navigating down through the layers of abstraction:

Human: “Blablabla iPhone yadayada”

Bot: “Okay, I see you’re referring to an iPhone here but I’m not quite getting the specifics. Do you have a problem with your iPhone?”

Using a Flow Connector in Canvas, we can now tell the bot to connect this dialogue to the one we established earlier, which clarifies what model of iPhone is in question:

Human: “Yes”

Bot: “And what model of iPhone?”

Human: “8”

Bot: “Got it, now please describe the issue with your iPhone 8”

Human: “The screen cracked after I dropped it on the sidewalk”

You’ll notice that the bot’s two responses are the same as in the earlier example, because we connected these existing flows, making the content (knowledge) of the bot much more streamlined and easier to manage or repurpose.
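One way to picture a Flow Connector is as an edge in a graph of dialogue flows: two different entry points converge on the same downstream flow, so the clarification content only has to exist once. A hypothetical sketch, where the node names are illustrative rather than the actual Canvas data model:

```python
# Sketch of a Flow Connector as an edge in a graph of dialogue flows.
# Node names are illustrative, not the Canvas data model.

def path_from(entry: str, flows: dict) -> list:
    """Follow connectors from an entry flow until a flow with no outgoing connector."""
    path = [entry]
    while path[-1] in flows:
        path.append(flows[path[-1]])
    return path

flows = {
    "concept_iphone": "ask_model",  # "Okay, I see you're referring to an iPhone..."
    "phone_busted": "ask_product",  # "Oh no! What type of phone is it?"
    "ask_product": "ask_model",
    "ask_model": "ask_issue",       # shared flow: "And what model of iPhone?"
}

# Both entry points converge on the same clarification flow, written only once:
print(path_from("concept_iphone", flows))
# -> ['concept_iphone', 'ask_model', 'ask_issue']
print(path_from("phone_busted", flows))
# -> ['phone_busted', 'ask_product', 'ask_model', 'ask_issue']
```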

So, we now have a response available for the most abstract way a user might query the bot, one that navigates to the layer containing the resolution:

Layer 1: “iPhone”

Layer 2: “My iPhone 8 has a cracked screen” -> Resolution

It’s worth mentioning here that providing buttons to steer a user is a good idea in addition to simply detecting natural language. If you know, for example, that your bot can only resolve a limited range of issues, you can manage a user’s expectations by showing buttons that effectively list what the bot knows, and by letting them speak to a human or read content elsewhere if their issue isn’t covered by one of the buttons.
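For illustration, a prompt paired with such buttons might be modelled as a simple payload like the one below; the shape is a generic assumption, not any particular channel’s or Canvas’s API:

```python
# Sketch of pairing a prompt with buttons that list what the bot covers.
# The payload shape and flow names are generic assumptions, not a real API.

def coverage_prompt() -> dict:
    return {
        "text": "I can help with a few common issues. Pick one, or talk to a human:",
        "buttons": [
            {"label": "Cracked screen", "go_to": "flow_cracked_screen"},
            {"label": "Battery problems", "go_to": "flow_battery"},
            {"label": "Talk to a human", "go_to": "handover"},
        ],
    }
```

The escape hatch (“Talk to a human”) sits alongside the covered issues, so expectations are set before the user types a query the bot cannot handle.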

Create a conceptual map before starting your project

Establishing these conversational layers of abstraction is an interesting challenge, but it provides a good framework for starting a conversational project, especially if you don’t have the benefit of any pre-existing queries to base your dialogue design on.

A good place to start is with a simple mindmap that describes what the ‘things’ are in the subject matter at hand, what the characteristics are of those things and what actions can be performed against them.

This method gives you a basis from which to teach your bot the core concepts of the subject matter before it even needs to understand precise natural language queries.
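A conceptual map like this can be captured directly as a nested structure of things, their characteristics, and the actions that can be performed against them. The domain content below is purely illustrative:

```python
# Sketch of a conceptual map: things, their characteristics, and the
# actions that can be performed against them. Content is illustrative.

concept_map = {
    "iPhone": {
        "characteristics": ["model", "colour", "storage"],
        "parts": ["screen", "battery", "camera"],
        "actions": ["repair", "replace", "insure"],
    },
    "contract": {
        "characteristics": ["duration", "tariff"],
        "actions": ["upgrade", "cancel"],
    },
}

# Each thing becomes a top-level concept the bot can respond to, and each
# characteristic becomes a slot to clarify on the way to a resolution.
for thing, details in concept_map.items():
    print(thing, "->", ", ".join(details["actions"]))
```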

At Twyla we’re constantly looking to push a deeper understanding of Conversation Design as a practice, and to develop features in our platform to better enable Conversation Design professionals to apply that understanding.

Feel free to reach out for more information about any of these techniques and features and let us know what you think, especially if you’re a Conversation Design practitioner.

The future is conversational.