Embedding entity names from diverse skills in a shared representation space enables the system to suggest neglected entity names with 88.5% accuracy.
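A minimal sketch of the idea, not the system described in the article: embed each skill's entity names into a shared vector space and, for a given skill, suggest nearby names drawn from other skills' catalogs. The embed() placeholder, the skill names, and the entity lists below are all invented for illustration.

    # Sketch: suggest entity names a skill may have neglected via
    # nearest-neighbor search in a shared embedding space.
    import numpy as np

    def embed(name):
        # Placeholder pseudo-embedding so the sketch runs standalone;
        # a real system would use a learned phrase encoder here.
        rng = np.random.default_rng(abs(hash(name)) % (2**32))
        v = rng.standard_normal(64)
        return v / np.linalg.norm(v)

    # Invented example catalogs from two different skills.
    skill_entities = {
        "coffee_skill": ["latte", "espresso", "cappuccino"],
        "trivia_skill": ["mocha", "flat white", "geography"],
    }

    def suggest(target_skill, k=2):
        # Candidates are names from every other skill's catalog.
        known = set(skill_entities[target_skill])
        candidates = [n for s, names in skill_entities.items()
                      if s != target_skill for n in names if n not in known]
        target_vecs = np.stack([embed(n) for n in skill_entities[target_skill]])
        scored = []
        for cand in candidates:
            sims = target_vecs @ embed(cand)  # cosine similarity (unit vectors)
            scored.append((float(sims.max()), cand))
        return [c for _, c in sorted(scored, reverse=True)[:k]]

    print(suggest("coffee_skill"))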
More-Efficient Machine Learning Models for On-Device Operation
Neural networks are responsible for most of the recent advances in artificial intelligence, including many of Alexa's latest capabilities. But neural networks tend to be large and unwieldy, and in recent years, the Alexa team has been investigating techniques for making them efficient enough to run on-device.
Representing Data at Three Levels of Generality Improves Multitask Machine Learning
Alexa currently has more than 90,000 skills, or abilities contributed by third-party developers — the Uber ride-sharing skill, the Jeopardy! trivia game skill, the Starbucks drink-ordering skill, and so on.
Who’s on First? How Alexa Is Learning to Resolve Referring Terms
This year, at the Association for Computational Linguistics’ Workshop on Natural-Language Processing for Conversational AI, my colleagues and I won one of two best-paper awards for our work on slot carryover.
Teaching computers to answer complex questions
Computerized question-answering systems usually take one of two approaches. Either they do a text search and try to infer the semantic relationships between entities named in the text, or they explore a hand-curated knowledge graph, a data structure that directly encodes relationships among entities.
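As a toy illustration of the second approach (the graph layout, relation names, and query helper below are invented, though the facts themselves are real), a knowledge graph stores entities as nodes and typed relationships as edges, so a multi-hop question is answered by following edges rather than by inferring relations from raw text:

    # Tiny illustrative knowledge graph: (subject, relation) -> list of objects.
    from collections import defaultdict

    edges = defaultdict(list)

    def add(subj, rel, obj):
        edges[(subj, rel)].append(obj)

    add("Jeopardy!", "created_by", "Merv Griffin")
    add("Merv Griffin", "born_in", "San Mateo")

    def query(subj, *relations):
        # Follow a chain of relations through the graph.
        frontier = [subj]
        for rel in relations:
            frontier = [o for s in frontier for o in edges[(s, rel)]]
        return frontier

    # "Where was the creator of Jeopardy! born?" becomes a two-hop traversal.
    print(query("Jeopardy!", "created_by", "born_in"))  # ['San Mateo']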
2019 Amazon Research Awards CFP launch announcement
This month, Amazon announced the 11 focus areas of the 2019 Amazon Research Awards.
Bringing the Power of Neural Networks to the Problem of Search
Using machine learning to train information retrieval models — such as Internet search engines — is difficult because it requires so much manually annotated data. Of course, training most machine learning systems requires manually annotated data, but because information retrieval models must handle such a wide variety of queries, they require a lot of data. Consequently, most information retrieval systems rely primarily on mechanisms other than machine learning.
Amazon Mentors Help UMass Graduate Students Make Concrete Advances on Vital Machine Learning Problems
Earlier this month, Varun Sharma and Akshit Tyagi, two master’s students from the University of Massachusetts Amherst, began summer internships at Amazon, where, like many other scientists in training, they will be working on Alexa’s spoken-language-understanding systems.
How to do fast, accurate multi-category classification
Many of today’s most useful AI systems are multilabel classifiers: they map input data into multiple categories at once. An object recognizer, for instance, might classify a given image as containing sky, sea, and boats but not desert or clouds.
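A minimal sketch of how such a classifier emits several labels at once (the label set mirrors the example above; the scores are invented): each label gets an independent sigmoid score, and every label whose score clears a threshold is kept, rather than a single best class being picked.

    import numpy as np

    LABELS = ["sky", "sea", "boats", "desert", "clouds"]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Pretend per-label logits from some image model (invented numbers).
    logits = np.array([3.1, 2.4, 1.7, -2.9, -0.8])
    scores = sigmoid(logits)

    # Independent thresholding: each label is a separate yes/no decision.
    predicted = [lab for lab, s in zip(LABELS, scores) if s >= 0.5]
    print(predicted)  # ['sky', 'sea', 'boats']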
Active learning: Algorithmically selecting training data to improve Alexa’s natural-language understanding
Alexa’s ability to respond to customer requests is largely the result of machine learning models trained on annotated data. The models are fed sample texts such as “Play the Prince song 1999” or “Play River by Joni Mitchell”. In each text, labels are attached to particular words — SongName for “1999” and “River”, for instance, and ArtistName for Prince and Joni Mitchell. By analyzing annotated data, the system learns to classify unannotated data on its own.
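The annotated examples above might be represented as token-level tags like this; BIO-style tagging and the extract_slots helper are illustrative conventions, not necessarily the format Alexa's models use:

    # Slot-annotated utterances as token/label pairs (BIO tagging is an
    # assumption; the post only names the slots SongName and ArtistName).
    annotated = [
        {
            "tokens": ["Play", "the", "Prince", "song", "1999"],
            "labels": ["O", "O", "B-ArtistName", "O", "B-SongName"],
        },
        {
            "tokens": ["Play", "River", "by", "Joni", "Mitchell"],
            "labels": ["O", "B-SongName", "O", "B-ArtistName", "I-ArtistName"],
        },
    ]

    def extract_slots(tokens, labels):
        # Collect contiguous B-/I- spans into slot values.
        slots, current = {}, None
        for tok, lab in zip(tokens, labels):
            if lab.startswith("B-"):
                current = lab[2:]
                slots.setdefault(current, []).append(tok)
            elif lab.startswith("I-") and current == lab[2:]:
                slots[current][-1] += " " + tok
            else:
                current = None
        return slots

    for ex in annotated:
        print(extract_slots(ex["tokens"], ex["labels"]))
    # {'ArtistName': ['Prince'], 'SongName': ['1999']}
    # {'SongName': ['River'], 'ArtistName': ['Joni Mitchell']}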