Yellow Messenger’s Robust NLP Engine | Part 2

Aavi Mathaun

Welcome back, readers!

The last few weeks have been nothing short of eventful for us at Yellow Messenger. The number of conversations on every platform has risen tremendously, and interest in our services is flooding in (not to toot our own horn). While it seemed as though we had bitten off more than we could chew, Billy here took control of it all, and this second part discusses the NLP concepts that made Billy the monster salesperson he is today. Sometimes, it feels like he’s running the show.

Sales folks reading this, Billy says, ‘You might learn something today, kid. To share this, add it to your dating bio’.

On this note, the sales team called Billy braggadocious. Marketing has no comments.

So let’s begin.

Last time, we tested the performance of the Yellow Messenger NLP engine against Google’s Dialogflow and Microsoft’s LUIS. As you know, our NLP engine outperformed both, providing accurate results even when fed 50% less data than the other two.

The power of a conversational interface lies in its capacity to handle an unbounded range of inputs. Without copious training data, bot accuracy and adoption can drop drastically. But we have tackled this issue with a zero-shot learning model in our NLP engine: even when no training data is available, our chatbots are still capable of finding an accurate answer, no matter what the domain is.

But how does the bot give accurate answers when no concrete data is available?

  1. Data modeling via Knowledge Graph - A knowledge graph is a conceptual organization of data specific to a domain. It defines relationships between unique items - specifically, entities, intents, and the relationships between them. How this helps is that when there is no concrete answer, the virtual assistant derives one from the relationships between entities. To explain this better, let’s play our own game - Bots Against Humanity.

 No humans were harmed in the making of this game.

Data modelling via Knowledge Graph

In the above image, we have 3 decks of cards. The decks symbolize a stack of data.

  • Set A - Destructive Programs: This is the bots’ deck of cards. Donald Glover, Kanye West, Taylor Swift, and Priyanka Chopra are bots that are up to no good. These are called entities. You may also think of them as objects, in object-oriented-programming terms. 
  • Set B - Intent: As the name suggests, intent is the deck of cards of actions. Every action is also stored as an object. 
  • Set C - Events: This deck of cards stores events.

Imagine you, dear reader, are playing against us. An event card is pulled at random each round. Sets A and B are shuffled and distributed between the two of us. Ready?

The first card drawn at random is Mental Health Awareness. 

The rule is simple. Combine any two cards, one from Set A and one from Set B, to make a hilariously wrong statement about this event. The funniest answer wins. We’ll go first.

Donald Glover was found skateboarding at the mental health awareness event.

What did you make?

Since the criterion of the game is hilarity, we made absurd connections between entities and intents. But what if we wanted to make connections between objects for a company’s human resources department?

Knowledge Graph

We built this for one of the world’s largest oilfield services companies with a presence in 100+ countries. Since the bot has the relationships between intents and entities modeled, it handles queries even if they are unclear or ambiguous (e.g. my eligibility, maternity, etc.).

This solves the following problems -

  • Speedy deployment - Just add the responses to the graph, upload the bot, and it’s live!  
  • Flexible - The same schema or framework can be reused for different country policies, requirements, etc.
  • Domain knowledge modeling - The Knowledge graph as a framework allows us to model key relationships of a domain.
  • Business user friendly - The pre-built app has everything ready for you! All you need to do is update the schema. No code needed.
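The relationship-based lookup described above can be sketched in a few lines of Python. This is a minimal sketch assuming a toy triple store; the entities, relations, and keyword tokenization here are illustrative, not Yellow Messenger’s actual schema.

```python
# A toy knowledge graph: facts stored as (subject, relation, object)
# triples, with a keyword index so vague queries still land somewhere.
from collections import defaultdict

triples = [
    ("maternity_leave", "is_a", "leave_policy"),
    ("maternity_leave", "has_attribute", "eligibility"),
    ("maternity_leave", "has_attribute", "duration"),
    ("sick_leave", "is_a", "leave_policy"),
    ("sick_leave", "has_attribute", "eligibility"),
]

# Index every triple under each keyword that mentions it.
index = defaultdict(set)
for subj, rel, obj in triples:
    for token in subj.split("_") + obj.split("_"):
        index[token].add((subj, rel, obj))

def resolve(query):
    """Return every fact touching any keyword of an ambiguous query."""
    hits = set()
    for token in query.lower().split():
        hits |= index.get(token, set())
    return sorted(hits)

# An underspecified query like "my eligibility" still surfaces
# candidate facts via the modeled relationships, instead of dead-ending.
for fact in resolve("my eligibility"):
    print(fact)
```

Because the relationships are part of the data model, an ambiguous word like “eligibility” pulls in every policy it is connected to, which is exactly what lets the bot respond to queries like “my eligibility” or “maternity”.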

In a case where data is finite, it is important that we have multiple levels of fallbacks to ensure that users always find some relevant responses to their queries.

2. Intelligent Query Handling - Fallbacks


It would be disappointing for a user to ask a customer support bot for an order status and get a dead end or no response at all. Just as real conversations are in pursuit of acknowledgment, so should a chat with a virtual assistant be. Let’s look at the same knowledge graph from our previous example. Imagine this is the database of a media company.

Kanye West was crying at Drake’s birthday party, and Taylor Swift lost her belongings at a charity event - two facts that we know from this data. I query the virtual assistant on these data points and ask -

‘Did someone weep at Drake’s birthday’?

Since the bot knows the fact, but the fact doesn’t exactly match the query, it uses a fallback and says -

“Did you mean Kanye West crying at Drake’s Birthday Party?”
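In code, that close-match fallback might look like the sketch below - a toy keyword-overlap scorer with hypothetical confidence thresholds, not the actual engine.

```python
# Layered fallbacks: confident answer, "did you mean", graceful miss.
facts = [
    "Kanye West was crying at Drake's birthday party",
    "Taylor Swift lost her belongings at the charity event",
]

def score(query, fact):
    """Fraction of the query's words that also appear in the fact."""
    q = set(query.lower().replace("?", "").split())
    f = set(fact.lower().split())
    return len(q & f) / len(q)

def answer(query, strong=0.8, weak=0.3):
    best = max(facts, key=lambda f: score(query, f))
    s = score(query, best)
    if s >= strong:                         # level 1: confident answer
        return f"Yes. {best}"
    if s >= weak:                           # level 2: "did you mean" fallback
        return f"Did you mean: {best}?"
    return "Sorry, I don't know that yet."  # level 3: graceful miss

print(answer("Was Kanye crying?"))
print(answer("Did someone weep at Drake's birthday?"))
```

“Was Kanye crying?” overlaps the first fact almost entirely, so it gets a direct answer; “Did someone weep…?” only partially matches, so it lands in the “did you mean” layer - the same behavior shown in the exchange above.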

How a fallback is handled is up to us. A good conversation is one where all fallbacks are taken care of so smoothly that the bot seems to have all the answers. Some other scenarios would look like this.

  1. Me - ‘Was Kanye crying?’

     Bot - “Yes. Kanye West was crying at Drake’s birthday party.”

  2. Me - ‘Taylor Swift was lost at the charity event?’

     Bot - “No. Taylor Swift lost her belongings at the charity event. I was sorry to hear it the first time.”

  3. Let’s take a different case, where my question to Yellow Messenger’s virtual assistant is -

     Me - ‘Does YM offer customer support?’

     In the initial days of deploying the bot, it would use a fallback and respond with -

     Bot - “Did you mean any of the following?”

     [Customer Support Automation] [Customer Engagement]

When the user selects an option, we know what the user meant by the query. We now have a mapping from the user’s initial question to the entity/intent it falls under (based on the user’s choice).

We use this information to retrain our model, so that the next time that user (or any other user) types the same query, instead of going to the fallback (with “did you mean”), the bot pulls up the correct response directly.
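That retrain-on-selection loop could be approximated with a simple in-memory mapping. This is a sketch only: a production system would retrain its intent classifier rather than keep a lookup table, and the intent names and responses below are illustrative.

```python
# Learning from fallback selections: the user's click on a
# "did you mean" option teaches the bot which intent the query maps to.
learned = {}  # query text -> intent chosen in a past fallback

intents = {
    "Customer Support Automation":
        "Yellow Messenger offers customer support automation, "
        "among many other solutions.",
    "Customer Engagement":
        "Yellow Messenger helps brands drive customer engagement.",
}

def handle(query):
    """Answer directly if we've learned this query, else fall back."""
    if query in learned:
        return intents[learned[query]]
    return ("Did you mean any of the following?", sorted(intents))

def record_choice(query, chosen_intent):
    """Store the user's selection so the next ask skips the fallback."""
    learned[query] = chosen_intent

q = "Does YM offer customer support"
first = handle(q)                               # fallback with options
record_choice(q, "Customer Support Automation")  # user clicks an option
second = handle(q)                               # direct answer now
```

The first ask hits the fallback; after the selection is recorded, the same query gets the correct response straight away - which is what retraining on fallback data achieves at scale.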

Now, our monster salesperson, Billy says -

Billy - “Yellow Messenger offers customer support automation amongst many other solutions. Our virtual assistant deployed for a large Indian NBFC has helped get 2300% more revenue in upselling! Would you want to get a demo?”

Sounds like a real conversation?

Want to see what else the virtual assistant can do? Talk to Billy, in the bottom-right corner here, and get a full demo today!


Aavi Mathaun

Aavi Mathaun is a marketer, web dev, and content creator at Yellow Messenger. Her core interests lie in the space of technology and brand psychology. Her natural affinity for design, content, and storytelling is refreshingly perceptive and insightful. She lives with her happy puppy in Bangalore, India.
