As Alexa-enabled devices continue to expand into new countries, finding information across languages that use different scripts becomes a more pressing challenge. For example, a Japanese music catalogue may contain names written in English or in the various scripts used in Japanese — Kanji, Katakana, or Hiragana. When an Alexa customer anywhere in the world asks for a certain song, album, or artist, there can be a mismatch between Alexa’s transcription of the request and the script used in the corresponding catalogue.
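To make the mismatch concrete, here is a toy illustration (a sketch only; the alias table and matching scheme below are invented for this example and are not the method described in the post): an exact string lookup fails when the transcription and the catalogue entry use different scripts, but mapping both to a shared canonical form lets them match.

```python
# Toy illustration: the same artist can appear under different scripts,
# so exact string matching between transcription and catalogue fails.
catalogue = {"宇多田ヒカル": "song_id_123"}   # entry stored in Japanese script

query = "Utada Hikaru"                        # ASR transcription in Latin script
print(query in catalogue)                     # False: script mismatch

# One simple (hypothetical) fix: map every surface form to a canonical key
# before lookup. Real systems would use transliteration or cross-lingual matching.
aliases = {
    "宇多田ヒカル": "utada hikaru",
    "Utada Hikaru": "utada hikaru",
}
canonical_catalogue = {aliases[name]: sid for name, sid in catalogue.items()}
print(aliases[query] in canonical_catalogue)  # True after normalization
```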
Contextual Clues Can Help Improve Alexa’s Speech Recognizers
Automatic speech recognition systems, which convert spoken words into text, are an important component of conversational agents such as Alexa. These systems generally comprise an acoustic model, a pronunciation model, and a statistical language model. The role of the statistical language model is to assign a probability to the next word in a sentence, given the previous ones. For instance, the phrases “Pulitzer Prize” and “pullet surprise” may have very similar acoustic profiles, but statistically, one is far more likely to conclude a question that begins “Alexa, what playwright just won a … ?”
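As a hedged illustration of that role (a toy bigram model over an invented corpus, nothing like the scale of a production language model), the sketch below estimates how likely “pulitzer” and “pullet” are to follow the context word “a” — the kind of evidence that lets the recognizer prefer “Pulitzer Prize” over “pullet surprise”.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the text a statistical language model is trained on.
corpus = [
    "what playwright just won a pulitzer prize",
    "she won a pulitzer prize for drama",
    "the novel won a pulitzer prize",
    "he got a pullet surprise at the farm",
]

# Collect bigram counts: how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def next_word_prob(prev, word):
    """P(word | prev) estimated from bigram counts."""
    counts = bigrams[prev]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

# Given the context "... won a", "pulitzer" is far more probable than "pullet".
print(next_word_prob("a", "pulitzer"))  # 0.75 in this toy corpus
print(next_word_prob("a", "pullet"))    # 0.25 in this toy corpus
```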
How Alexa can use song-playback duration to learn customers’ preferences
To be as useful as possible to customers, Alexa should be able to make educated guesses about the meanings of ambiguous utterances. If, for instance, a customer says, “Alexa, play the song ‘Hello’”, Alexa should be able to infer from the customer’s listening history whether the song requested is the one by Adele or the one by Lionel Richie.
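One simple way to use that kind of signal, sketched under the assumption (suggested by the headline, not spelled out in this excerpt) that longer playbacks count as implicit positive feedback: score each candidate by total listening time and resolve the ambiguous request toward the higher-scoring one. The log and song IDs below are invented.

```python
# Hypothetical playback log: (song_id, seconds_played). Longer playbacks are
# treated as implicit evidence that the customer preferred that version of "Hello".
playback_log = [
    ("hello_adele", 280),
    ("hello_adele", 295),
    ("hello_lionel_richie", 12),   # skipped after a few seconds
]

def preference_score(song_id, log):
    """Total listening time as a crude proxy for preference."""
    return sum(seconds for sid, seconds in log if sid == song_id)

candidates = ["hello_adele", "hello_lionel_richie"]

# Resolve the ambiguous request "play the song 'Hello'" toward the candidate
# with the strongest listening-history signal.
best = max(candidates, key=lambda sid: preference_score(sid, playback_log))
print(best)  # hello_adele
```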
2018 Amazon Research Awards CFP launch announcement
This month, Amazon announced the 11 focus areas of the 2018 Amazon Research Awards.
HypRank: How Alexa determines what skill can best meet a customer’s need
Amazon Alexa currently has more than 40,000 third-party skills, which customers use to get information, perform tasks, play games, and more. To make it easier for customers to find and engage with skills, we are moving toward skill invocation that doesn’t require mentioning a skill by name (as highlighted in a recent post).
The Scalable Neural Architecture behind Alexa’s Ability to Select Skills
Alexa is a cloud-based service with natural-language-understanding capabilities that powers devices like Amazon Echo, Echo Show, Echo Plus, Echo Spot, Echo Dot, and more. Alexa-like voice services have traditionally supported small numbers of well-separated domains, such as calendar or weather. To extend what Alexa can do, Amazon released the Alexa Skills Kit in 2015 so that third-party developers could add their own voice-driven capabilities. We refer to these new third-party capabilities as skills, and Alexa currently has more than 40,000.
New way to annotate training data should enable more sophisticated Alexa interactions
Developing a new Alexa skill typically means training a machine-learning system with annotated data, and the skill’s ability to “understand” natural-language requests is limited by the expressivity of the semantic representation used to do the annotation. So far, the techniques used to represent natural language have been fairly simple, so Alexa has been able to handle only relatively simple requests.
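To make “expressivity” concrete, here is an invented illustration (these structures are not Alexa’s actual annotation scheme): a flat intent-and-slots annotation covers a simple request, but a request that refers to the result of another action calls for a nested representation.

```python
# Invented annotations, purely to illustrate the expressivity of a semantic
# representation; they are not Alexa's actual internal format.

# A flat intent/slot annotation is enough for a simple request:
# "play hello by adele"
flat = {
    "intent": "PlayMusic",
    "slots": {"song": "hello", "artist": "adele"},
}

# A more complex request embeds one action inside another:
# "add the song you just played to my workout playlist"
# A flat list of slots can't express that "the song you just played" is itself
# the result of a previous action, so a nested, tree-like representation helps.
nested = {
    "intent": "AddToPlaylist",
    "slots": {
        "playlist": "workout",
        "song": {
            "reference": "ResultOf",
            "intent": "PlayMusic",   # points back to the earlier request
        },
    },
}

print(flat["intent"], nested["slots"]["song"]["reference"])
```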
Machine translation accelerates how Alexa learns new languages
As Alexa-enabled devices continue to expand into new countries, we propose an approach for quickly bootstrapping machine-learning models in new languages, with the aim of more efficiently bringing Alexa to new customers around the world.
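A rough sketch of the general bootstrapping idea, under assumptions not spelled out in this excerpt: existing annotated English utterances are machine-translated into the target language so their labels can be reused. The translate() function below is a toy placeholder, and in practice slot-level annotations would also have to be projected onto the translated text.

```python
# Hypothetical sketch: reuse annotated English training utterances to bootstrap
# an NLU model in a new language by machine-translating the utterance text.

english_training_data = [
    {"text": "play some jazz", "intent": "PlayMusic"},
    {"text": "what's the weather tomorrow", "intent": "GetWeather"},
]

def translate(text, target_lang):
    """Stand-in for a real machine translation system (toy lookup table)."""
    toy_mt = {
        ("play some jazz", "de"): "spiel etwas jazz",
        ("what's the weather tomorrow", "de"): "wie wird das wetter morgen",
    }
    return toy_mt[(text, target_lang)]

def bootstrap(data, target_lang):
    """Translate utterance text; intent labels carry over directly.
    (Slot spans, not modeled here, would also need to be projected.)"""
    return [
        {"text": translate(ex["text"], target_lang), "intent": ex["intent"]}
        for ex in data
    ]

print(bootstrap(english_training_data, "de"))
```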
Amazon scientists use transfer learning to accelerate development of new Alexa capabilities
Amazon scientists are continuously expanding Alexa’s natural-language-understanding (NLU) capabilities to make Alexa smarter, more useful, and more engaging.
Amazon Scientist Outlines Multilayer System For Smart Speaker Echo Cancellation And Voice Enhancement
Smart speakers, such as the Amazon Echo family of products, are growing in popularity among consumer and business audiences. To improve the automatic speech recognition (ASR) and full-duplex voice communication (FDVC) performance of these smart speakers, acoustic echo cancellation (AEC) and noise reduction systems are required. These systems reduce the noise and echoes that can interfere with operation, such as an Echo device’s ability to accurately hear the wake word “Alexa.”
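As a hedged illustration of what an acoustic echo canceller does (a textbook normalized-LMS adaptive filter on synthetic signals, not the multilayer system the post describes): the device knows the far-end signal it is playing through its own speaker, adaptively estimates how that signal leaks back into the microphone, and subtracts the estimate so that mostly the near-end speech remains.

```python
import numpy as np

# Textbook NLMS echo canceller on synthetic data, for illustration only.
rng = np.random.default_rng(0)

fs = 16000
far_end = rng.standard_normal(fs)              # signal the speaker is playing
echo_path = np.array([0.6, 0.3, 0.1])          # toy room impulse response
echo = np.convolve(far_end, echo_path)[:fs]    # echo picked up by the mic
near_end = 0.1 * rng.standard_normal(fs)       # user's speech + noise (toy)
mic = echo + near_end                          # what the microphone records

taps, mu, eps = 8, 0.5, 1e-6
w = np.zeros(taps)                             # adaptive estimate of the echo path
out = np.zeros(fs)

for n in range(taps, fs):
    x = far_end[n - taps + 1:n + 1][::-1]      # recent far-end samples
    y_hat = w @ x                              # estimated echo
    e = mic[n] - y_hat                         # residual after echo removal
    out[n] = e
    w += mu * e * x / (x @ x + eps)            # NLMS weight update

# The residual should be much closer to the near-end signal than the raw mic feed.
print("mic power:     ", np.mean(mic**2))
print("residual power:", np.mean(out**2))
```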