Many of today’s most popular AI systems are, at their core, classifiers. They classify inputs into different categories: this image is a picture of a dog, not a cat; this audio signal is an instance of the word “Boston”, not the word “Seattle”; this sentence is a request to play a video, not a song. But what happens if you need to add a new class to your classifier — if, say, someone releases a new type of automated household appliance that your smart-home system needs to be able to control?
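The method itself is described in the full post; purely as a rough illustration of the underlying problem (not the approach from the post), the sketch below widens a trained PyTorch classifier's output layer by one class while reusing the weights already learned for the existing classes. The model, its dimensions, and the follow-up fine-tuning step are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical classifier: a feature encoder followed by a linear output layer.
class SimpleClassifier(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

model = SimpleClassifier(num_classes=10)
# ... assume the model has already been trained on the original 10 classes ...

# Naive way to support an 11th class: widen the output layer, copying the
# existing class weights so only the new row starts from scratch, then fine-tune.
old_head = model.head
new_head = nn.Linear(old_head.in_features, old_head.out_features + 1)
with torch.no_grad():
    new_head.weight[: old_head.out_features] = old_head.weight
    new_head.bias[: old_head.out_features] = old_head.bias
model.head = new_head
```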
More-efficient “kernel methods” dramatically reduce training time for natural-language-understanding systems
Machine learning systems often act on “features” extracted from input data. In a natural-language-understanding system, for instance, the features might include words’ parts of speech, as assessed by an automatic syntactic parser, or whether a sentence is in the active or passive voice.
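The post doesn't name a particular parser; as a hedged sketch of the kinds of features it describes, the snippet below uses spaCy (an assumption, not the system from the post) to extract part-of-speech tags and a rough passive-voice flag.

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def extract_features(sentence):
    doc = nlp(sentence)
    # Part-of-speech tag for every token.
    pos_tags = [(token.text, token.pos_) for token in doc]
    # A passive-subject dependency is a rough indicator of passive voice.
    is_passive = any(token.dep_ == "nsubjpass" for token in doc)
    return {"pos_tags": pos_tags, "passive_voice": is_passive}

print(extract_features("The song was played by the speaker."))
```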
AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games with increasing complexity that capture different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered to be one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all time, has emerged by consensus as a grand challenge for AI research.
Leveraging unannotated data to bootstrap Alexa functions more quickly
Developing a new natural-language-understanding system usually requires training it on thousands of sample utterances, which can be costly and time-consuming to collect and annotate. That’s particularly burdensome for small developers, like many who have contributed to the library of more than 70,000 third-party skills now available for Alexa.
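As a purely illustrative example of what that annotation burden looks like (the field names and schema below are hypothetical, not Alexa's actual format), each training sample typically pairs the raw utterance text with an intent label and slot values, and a skill needs thousands of them:

```python
# Hypothetical annotated utterance for a smart-home skill; a developer would
# need thousands of such examples, each labeled by hand.
annotated_utterance = {
    "text": "turn the living room lights down to twenty percent",
    "intent": "SetBrightnessIntent",
    "slots": {
        "device": "living room lights",
        "brightness": "twenty percent",
    },
}
```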
New method for compressing neural networks better preserves accuracy
Neural networks have been responsible for most of the top-performing AI systems of the past decade, but they tend to be big, which means they tend to be slow. That’s a problem for systems like Alexa, which depend on neural networks to process spoken requests in real time.
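The new compression method is described in the full post; purely as a generic point of reference (not Amazon's technique), the sketch below applies off-the-shelf post-training dynamic quantization in PyTorch, which stores the weights of Linear layers as 8-bit integers to shrink a model and speed up inference.

```python
import torch
import torch.nn as nn

# Toy model standing in for a much larger network.
model = nn.Sequential(
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Replace Linear layers with dynamically quantized (int8-weight) versions.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)
```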
Manifold: A Model-Agnostic Visual Debugging Tool for Machine Learning at Uber
Machine learning (ML) is widely used across the Uber platform to support intelligent decision making and forecasting for features such as ETA prediction and fraud detection. For optimal results, we invest a lot of resources in developing accurate predictive …
Creating a Zoo of Atari-Playing Agents to Catalyze the Understanding of Deep Reinforcement Learning
This research was conducted with valuable help from collaborators at Google Brain and OpenAI.
A selection of trained agents populating the Atari zoo.
Some of the most exciting advances in AI recently have come from the field of deep reinforcement …
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and their Solutions through the Paired Open-Ended Trailblazer
Jeff Clune and Kenneth O. Stanley were co-senior authors.
We are interested in open-endedness at Uber AI Labs because it offers the potential for generating a diverse and ever-expanding curriculum for machine learning entirely on its own. Having vast amounts …