“Alexa, play Kanye West”
Playing music is one of the main perks of owning an Echo. At some point, you've probably asked your Echo to play Kanye West (no judgment). The first time you requested a song, your device likely suggested that you connect Pandora, Spotify, or Amazon Music. That, too, is a completely normal interaction.
However, has your Echo ever replied, “Kanye West is playing a concert tonight at Philips Arena. There are tickets available on StubHub. Would you like to purchase tickets to this concert?”
Trick question: I already know that has never happened. Until now, you were having a very simple conversation with your Echo. You asked Alexa to do something, and she did it. It's as simple as “If A, then B.”
Did you notice I said “until now” a few seconds ago? That's because it is now possible to have a more involved conversation with Alexa.
The second conversation is more complex because it both accounts for and creates context. The context that comes naturally in human dialogue is typically far too complex for a bot to process.
For example, if a person asks a device to tell them about New York, the bot needs to understand whether they are asking about New York the state, the university, the song by Taylor Swift, or New York City, before determining how to respond. The advanced natural language processing algorithms required for an analysis like this can be difficult to build and expensive to implement.
Bots, Bots, Bots!
Thanks to the Amazon Lex bot creation console, your applications can now leverage this kind of deep learning. You can build a bot, design its conversations, and test it exactly as an end user would experience it.
If you want to build a bot capable of making hotel reservations, you can design it in the Amazon Lex console to ask follow-up questions about room preferences, check-in times, locations, pet accommodations, and so on. Once the bot is configured, you can use an AWS Lambda function to fulfill the user's booking request.
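To make that concrete, here is a minimal sketch of what such a Lambda fulfillment function could look like. The `BookHotel` intent and its slot names (`Location`, `CheckInDate`, `Nights`) are hypothetical examples, not part of any real bot; the event and response shapes follow the Lex (V1) Lambda format.

```python
# Hypothetical AWS Lambda fulfillment handler for a "BookHotel" Lex intent.
# The intent name and slot names (Location, CheckInDate, Nights) are
# illustrative only. Event/response shapes follow the Lex V1 Lambda format.

def lambda_handler(event, context):
    """Read the slots Lex collected and confirm the booking."""
    slots = event["currentIntent"]["slots"]
    location = slots.get("Location")
    check_in = slots.get("CheckInDate")
    nights = slots.get("Nights")

    message = (
        f"Okay, I've booked {nights} night(s) in {location} "
        f"starting {check_in}."
    )

    # "Close" ends the conversation; "Fulfilled" tells Lex the intent succeeded.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }
```

In a real bot, the body of the handler would call your booking system before returning the confirmation message.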
When people interact with your bot, Amazon Lex intelligently processes the user's text, matches it against the sample utterances you provided, and determines how to respond. The response may prompt the user for more information or execute an action that completes the request.
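The "prompt for more information" branch can be sketched the same way. The example below is a hypothetical dialog code hook that asks Lex to elicit a missing slot; the intent and slot names are illustrative, and the response shape again follows the Lex (V1) Lambda format.

```python
# Hypothetical dialog code hook: if a required slot is still empty, tell Lex
# to prompt the user for it; otherwise hand control back to the bot's
# configured flow. Names are illustrative; shapes follow the Lex V1 format.

def dialog_handler(event, context):
    intent = event["currentIntent"]
    slots = intent["slots"]

    if not slots.get("CheckInDate"):
        # "ElicitSlot" makes Lex prompt the user for the named slot.
        return {
            "dialogAction": {
                "type": "ElicitSlot",
                "intentName": intent["name"],
                "slots": slots,
                "slotToElicit": "CheckInDate",
                "message": {
                    "contentType": "PlainText",
                    "content": "What day would you like to check in?",
                },
            }
        }

    # All required information is present; let Lex continue as configured.
    return {"dialogAction": {"type": "Delegate", "slots": slots}}
```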
When you’re ready to deploy your bot, you can integrate it with your app right from the Lex interface. Because the service runs in the AWS Cloud, you have a direct path to hosting, managing, and scaling your bot’s infrastructure.
What kind of bots will you build?
Always be in the Know, Subscribe to the Relus Cloud Blog!