If you use Amazon Alexa, you’re probably aware of the limits of Alexa’s voice recognition skills, which are compatible only with specific applications. If you’re wary of the always-on microphone, set that aside for a moment and consider the ability to build Alexa skills and custom commands into enterprise applications. It’s intriguing.

Amazon Lex, introduced at AWS re:Invent 2016, uses automatic speech recognition and natural language understanding to process speech and respond appropriately to a request. The service, the same core system that powers Alexa, expands voice recognition capabilities for developers who want full-fledged conversational interfaces built into an application. Amazon Lex aims to open up Alexa’s technology for broader developmental use.

With Alexa, users need a keyword and the application name to trigger a response. While AWS provides application programming interfaces (APIs) to run Alexa on devices, they are mostly limited to listening to audio, converting it to text, and running a command. Users must know about a particular skill before they can enable it. Amazon, meanwhile, is preparing to leverage this same AI to change its call center and customer engagement strategy.

More about that later.

What’s Natural Language Understanding?

Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies within the reach of every developer. Based on the same proven, highly scalable products and services built by the thousands of deep learning and machine learning experts across Amazon, Amazon AI services provide high-quality, high-accuracy AI capabilities that are scalable and cost-effective.

In addition, the AWS Deep Learning AMI provides a way for AI developers and researchers to quickly and easily begin using any of the major deep learning frameworks to train sophisticated, custom AI models, experiment with new algorithms, and acquire new deep learning skills and techniques on AWS’ massive compute infrastructure.

Inside Lex’s Head

Alexa skills and Amazon Lex bots are nearly identical in terms of terminology and overall structure. Accessible through the AWS Management Console, this developer preview shows us what’s next for Lex.

The top level of Lex is a bot, which is the first thing you have to set up to get things going. Developers can have multiple bots, each with its own set of skills, and populate them with Intents, or requests to an application. A user triggers an Intent with one or more spoken words or phrases called Utterances. Within an Utterance, developers define placeholders, called Slots, that accept a range of inputs, like a state name, generic location, or time. This enables them to specify a template of a sentence to respond to, such as, “Play {song} at {time}.” That phrase would match a string such as, “Play Bruno Mars at 5 p.m. tonight.” It also allows a developer to prompt for exact parameters if they’re not specified. If the user doesn’t specify what time to play the song, Lex would ask, “When would you like me to play Bruno Mars?” The user then responds and completes the Intent.
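To make the template idea concrete, here is a minimal sketch (not Lex’s actual matching engine, which uses NLU rather than regular expressions) of how an Utterance template like “Play {song} at {time}” could be matched against user input, with the braces capturing slot values:

```python
import re

def compile_utterance(template):
    """Turn an utterance template like "Play {song} at {time}" into a
    compiled regex whose named groups capture the slot values."""
    parts = re.split(r"\{(\w+)\}", template)
    pattern = ""
    for i, part in enumerate(parts):
        if i % 2 == 0:
            pattern += re.escape(part)      # literal text between slots
        else:
            pattern += f"(?P<{part}>.+?)"   # a named, non-greedy slot capture
    return re.compile("^" + pattern + "$", re.IGNORECASE)

def match_utterance(template, text):
    """Return a dict of slot values if `text` fits `template`, else None."""
    m = compile_utterance(template).match(text)
    return m.groupdict() if m else None
```

With this sketch, `match_utterance("Play {song} at {time}", "Play Bruno Mars at 5 p.m. tonight")` yields `{"song": "Bruno Mars", "time": "5 p.m. tonight"}`; a real Lex bot additionally elicits any Slot the user left out, as described above.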

Lex can also return enriched responses, known as Cards, in addition to plain text. This enables developers to show images, videos, or chosen responses instead of requiring users to respond with text. Not all platforms support Cards, so it’s important to ensure all applications can function without them.
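As a rough illustration, a bot response carrying a Card looks something like the following sketch, which mirrors the generic-attachment card shape used by the Lex runtime (field names here are based on the Lex V1 response format; treat the exact structure as an assumption to verify against the current docs). Note the plain-text `message` is always present, so platforms without Card support still get a usable answer:

```python
def build_card_response(message, title=None, image_url=None, buttons=None):
    """Build a Lex-style fulfillment response: plain text plus an optional
    response card. `buttons` is a list of (label, value) pairs."""
    response = {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            # Plain-text fallback for platforms that can't render Cards.
            "message": {"contentType": "PlainText", "content": message},
        }
    }
    if title:
        attachment = {"title": title}
        if image_url:
            attachment["imageUrl"] = image_url
        if buttons:
            attachment["buttons"] = [{"text": t, "value": v} for t, v in buttons]
        response["dialogAction"]["responseCard"] = {
            "version": 1,
            "contentType": "application/vnd.amazonaws.card.generic",
            "genericAttachments": [attachment],
        }
    return response
```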

Amazon Lex connects to different platforms via Channels. Facebook is currently the only supported Channel; however, AWS plans to add more in the future, including Slack, Skype, and Twilio. These Channels allow developers to provide a single chatbot application that connects to users on different platforms.

Developers can also use an API to submit text or audio to the platform to build custom AI logic directly into any web or mobile application. This will potentially enable enterprise IT to build Lex into any standard search system to provide answers to common questions. The API can also help developers integrate custom help systems into their platforms.
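Submitting text to the platform can be as simple as one runtime call. The sketch below uses the Lex runtime’s `PostText` operation via boto3 (the AWS SDK for Python); the bot name and alias are placeholders for your own bot, and the injectable `client` parameter is my own addition so the helper can be exercised without live AWS credentials:

```python
def ask_bot(text, user_id, client=None, bot_name="MyBot", bot_alias="prod"):
    """Send one text utterance to a Lex bot and return (reply, dialog state).

    `bot_name`/`bot_alias` are hypothetical placeholders; `client` lets
    callers pass a stub for testing instead of a real Lex runtime client.
    """
    if client is None:
        import boto3  # AWS SDK for Python; requires configured credentials
        client = boto3.client("lex-runtime")
    resp = client.post_text(
        botName=bot_name,
        botAlias=bot_alias,
        userId=user_id,   # stable per-user ID so Lex can track the session
        inputText=text,
    )
    return resp.get("message"), resp.get("dialogState")
```

An enterprise search or help system could call `ask_bot` behind its existing input box, looping while the returned dialog state indicates Lex is still eliciting Slots.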

You Rang?

A few months ago, I suggested that Amazon could have the ability to leverage Lex and Lambda to drive innovation in cloud-based IP call centers. Well, AWS just unveiled a new service for running call centers. Dubbed Amazon Connect, the service leverages the same technology used by Amazon.com’s own customer service system to route and manage calls with automatic speech recognition and AI.

Amazon Connect integrates with existing AWS services, such as DynamoDB, Amazon Redshift, and Amazon Aurora, as well as third-party CRM and analytics services, which has Amazon customers jumping to utilize the new offering. Salesforce says it’s integrating its Service Cloud Einstein with Amazon Connect, which uses a graphical interface to let companies set up call workflows without coding.

Just like other AWS services, Amazon Connect is charged by time used, with no long-term commitments, upfront charges, or minimum monthly fee. You are charged based on the number of minutes you use Amazon Connect to engage with your end customers, at a specified per-minute rate. Pricing is not based on capacity, agent seats, or maintenance.

What We’ll Be Hiring For Next

If Salesforce is working to connect their AI to Amazon Connect and Lex, then we should get ready for the next big hiring push. Some early adopters are building proofs of concept with Alexa that use internet-of-things (IoT) tags to manage conference rooms, control interactions with lab equipment, and improve healthcare workflows. In the long term, a wide variety of enterprise use cases will develop, owing to the flexibility of voice user interfaces.

Conversational platforms seem like an obvious choice for the next generation of enterprise mobile and IoT applications. However, in order to effectively venture into the enterprise, a conversational interface platform needs to provide the right combination of capabilities that can adapt voice processing interfaces to business scenarios. Amazon Echo’s DNA combines characteristics that can make the platform a foundational piece of the next wave of business applications.

The Growing AWS Ecosystem

There are several popular voice and conversational ecosystems coming to market, including Apple’s Siri and HomeKit, Google Assistant and Google Home, as well as Microsoft Cortana. Amazon has placed a significant bet on Alexa as the future of conversational interfaces with heavy venture funding and backing of more than 1,000 developers.

Amazon is empowering IT professionals to develop voice apps using the Alexa Skills Kit (ASK – funny, right?), which makes it easy to develop voice commands that work with any web service or with AWS Lambda functions for specific tasks. Developers can use ASK to query software-as-a-service applications, drive business processes, or simplify the control of devices in the office.
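An ASK-backed Lambda function is just a handler that receives the skill’s request JSON and returns a speech response. Here is a minimal sketch for the conference-room scenario mentioned earlier; the `ConferenceRoomStatus` intent, the `Room` slot, and the canned reply are all hypothetical stand-ins for a real skill definition and lookup:

```python
def lambda_handler(event, context):
    """Minimal Alexa skill handler sketch using the ASK request/response JSON.
    Intent and slot names here are hypothetical examples."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        text, end = "Welcome. Ask me about a conference room.", False
    elif (request["type"] == "IntentRequest"
          and request["intent"]["name"] == "ConferenceRoomStatus"):
        room = request["intent"]["slots"].get("Room", {}).get("value", "that room")
        # Stand-in for a real calendar or IoT lookup.
        text, end = f"{room} is free for the next hour.", True
    else:
        text, end = "Sorry, I didn't catch that.", True
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end,
        },
    }
```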

The Amazon Alexa Voice Service enables enterprise device makers to add new input capabilities to office equipment, such as conferencing systems and kiosks, as well as to devices in medical facilities that could benefit from touch-free control. These capabilities come with tight integration into the AWS ecosystem.

Eating The Cloud

In doing this, Amazon is moving into a business area that is already pretty crowded with companies offering different aspects of cloud-based contact center solutions. Some of these, including Zendesk, Zoho, and Freshdesk, are actually partnering with Amazon for this service.

Taken together, these services are disrupting many of the more costly, traditional ways of serving customers, both for technical and other kinds of support. Many traditional solutions are not cloud-based and rely on outsourced or in-house teams and infrastructure, in a business that is projected to be worth nearly $10 billion by 2019, with current market leaders including Avaya, Cisco, and Genesys.

How are you utilizing these services? Share your experience in the comments section below!




About Brian Fink

As a member of Relus' recruiting team, Brian Fink focuses on driving talent towards opportunity. Eager to help stretch the professional capabilities of everyone he works with, he's helping startups grow and successfully scale their IT, Recruiting, Big Data, Product, and Executive Leadership teams. An active keynote speaker and commentator, Fink thrives on discovery and building a better recruiting mousetrap.
