Three months ago we announced the opening of DeepMind's first ever international AI research laboratory in Edmonton, Canada. Today, we are thrilled to announce that we are strengthening our commitment to the Canadian AI community with the opening of a DeepMind office in Montreal, in close collaboration with McGill University.

Opening a second office is a natural next step for us in Canada, a country that is globally recognised as a leader in artificial intelligence research. We have always had strong links with the thriving research community in Canada and Montreal, where large companies, startups, incubators and government come together with ground-breaking teams, such as those at the Montreal Institute for Learning Algorithms (MILA) and McGill University.

We are delighted that DeepMind Montreal will be led by one of the pioneers of this community, Doina Precup, Associate Professor in the School of Computer Science at McGill, Senior Fellow of the Canadian Institute for Advanced Research, and a member of MILA. Doina's expertise is in reinforcement learning – one of DeepMind's specialities – which is critical for areas such as reasoning and planning. In her new position, Doina will continue to focus on fundamental research at McGill, MILA, and DeepMind.

Read More
WaveNet launches in the Google Assistant
Just over a year ago we presented WaveNet, a new deep neural network for generating raw audio waveforms that is capable of producing better and more realistic-sounding speech than existing techniques. At that time, the model was a research prototype and was too computationally intensive to work in consumer products.

But over the last 12 months we have worked hard to significantly improve both the speed and quality of our model, and today we are proud to announce that an updated version of WaveNet is being used to generate the Google Assistant voices for US English and Japanese across all platforms. Using the new WaveNet model results in a range of more natural-sounding voices for the Assistant.

Read More
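To give a feel for the core architectural idea: WaveNet generates audio one sample at a time, conditioning each sample on thousands of previous ones through a stack of dilated causal convolutions whose receptive field doubles with each layer. The sketch below is a minimal NumPy illustration of that building block, not DeepMind's production model; the filter values and dilation schedule are illustrative assumptions.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], ...

    x: input signal, shape (T,)
    w: filter taps, shape (K,)
    dilation: spacing between taps.
    """
    T, K = len(x), len(w)
    # Left-pad so each output uses only present and past samples.
    pad = dilation * (K - 1)
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[k] * xp[pad + t - k * dilation] for k in range(K))
        for t in range(T)
    ])

# Stack layers with doubling dilations (1, 2, 4, ...): the receptive
# field grows exponentially with depth, which is what lets a WaveNet-style
# model condition on a long audio history at modest cost.
signal = np.random.randn(32)
h = signal
for d in [1, 2, 4, 8]:
    h = np.tanh(causal_dilated_conv(h, np.array([0.5, 0.5]), d))
```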
Why we launched DeepMind Ethics & Society
At DeepMind, we're proud of the role we've played in pushing forward the science of AI, and of our track record of exciting breakthroughs and major publications. We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work.

As history attests, technological innovation in itself is no guarantee of broader social progress. The development of AI raises important and complex questions. Its impact on society – and on all our lives – is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built in from the beginning. But in a field as complex as AI, this is easier said than done.

As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work. At DeepMind, we start from the premise that all AI applications should remain under meaningful human control and be used for socially beneficial purposes. Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.

Read More
The hippocampus as a predictive map
Think about how you choose a route to work, where to move house, or even which move to make in a game like Go. All of these scenarios require you to estimate the likely future reward of your decision. This is tricky because the number of possible scenarios explodes as one peers farther and farther into the future. Understanding how we do this is a major research question in neuroscience, while building systems that can effectively predict rewards is a major focus in AI research.

In our new paper in Nature Neuroscience, we apply a neuroscience lens to a longstanding mathematical theory from machine learning to provide new insights into the nature of learning and memory. Specifically, we propose that the area of the brain known as the hippocampus offers a unique solution to this problem by compactly summarising future events using what we call a predictive map.

The hippocampus has traditionally been thought to represent only an animal's current state, particularly in spatial tasks such as navigating a maze. This view gained significant traction with the discovery of place cells in the rodent hippocampus, which fire selectively when the animal is in specific locations. While this theory accounts for many neurophysiological findings, it does not fully explain why the hippocampus is also involved in other functions, such as memory, relational reasoning, and decision making.

Read More
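The machine-learning theory in question is the successor representation from reinforcement learning: for a fixed policy, the predictive map M(s, s') stores the expected discounted number of future visits to state s' when starting from state s, so the value of every state falls out of a single matrix–vector product with the reward vector. A minimal sketch, using an illustrative toy transition matrix rather than anything from the paper:

```python
import numpy as np

# Toy 4-state chain: the agent mostly moves right, sometimes stays put.
# (Illustrative transition matrix, not from the paper.)
T = np.array([
    [0.2, 0.8, 0.0, 0.0],
    [0.0, 0.2, 0.8, 0.0],
    [0.0, 0.0, 0.2, 0.8],
    [0.0, 0.0, 0.0, 1.0],  # absorbing goal state
])
gamma = 0.9  # discount factor

# Successor representation:
#   M[s, s'] = E[ sum_t gamma^t * 1(state_t == s') | state_0 = s ]
# which for a fixed policy has the closed form M = (I - gamma * T)^-1.
M = np.linalg.inv(np.eye(4) - gamma * T)

# Values of all states come from one matrix-vector product: this is the
# "compact summary of future events" the blurb describes.
reward = np.array([0.0, 0.0, 0.0, 1.0])  # reward only at the goal
V = M @ reward
print(V)
```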
DeepMind and Blizzard open StarCraft II as an AI research environment
DeepMind's scientific mission is to push the boundaries of AI by developing systems that can learn to solve complex problems. To do this, we design agents and test their ability in a wide range of environments, from the purpose-built DeepMind Lab to established games such as Atari and Go.

Testing our agents in games that are not specifically designed for AI research, and where humans play well, is crucial to benchmark agent performance. That is why we, along with our partner Blizzard Entertainment, are excited to announce the release of SC2LE, a set of tools that we hope will accelerate AI research in the real-time strategy game StarCraft II. The SC2LE release includes:

- A Machine Learning API developed by Blizzard that gives researchers and developers hooks into the game. This includes the release of tools for Linux for the first time.
- A dataset of anonymised game replays, which will increase from 65k to more than half a million in the coming weeks.
- An open source version of DeepMind's toolset, PySC2, to allow researchers to easily use Blizzard's feature-layer API with their agents.
- A series of simple RL mini-games to allow researchers to test the performance of agents on specific tasks.
- A joint paper that outlines the environment and reports initial baseline results on the mini-games, supervised learning from replays, and the full 1v1 ladder game against the built-in AI.

Read More
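To make the PySC2 piece concrete, here is a minimal agent sketch. It follows the pattern of the released toolset, where an agent subclasses a base class and implements a step method mapping each observation to an action, but module paths and signatures may differ between PySC2 releases, so treat it as a hedged illustration rather than canonical usage.

```python
# A minimal do-nothing PySC2 agent, in the style of the bundled examples.
# Module paths and signatures reflect the initial PySC2 release and may
# have changed in later versions.
from pysc2.agents import base_agent
from pysc2.lib import actions


class NoOpAgent(base_agent.BaseAgent):
    """Issues the no-op action on every step: a starting point to build on."""

    def step(self, obs):
        super().step(obs)  # base-class bookkeeping (step and reward counters)
        return actions.FunctionCall(actions.FUNCTIONS.no_op.id, [])

# The toolset ships a runner, so an agent like this can be launched with
# something like:
#   python -m pysc2.bin.agent --map Simple64 --agent mymodule.NoOpAgent
```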
DeepMind papers at ICML 2017 (part one)
The first of our three-part series, which gives brief descriptions of the papers we are presenting at the ICML 2017 Conference in Sydney, Australia.

Read More
DeepMind papers at ICML 2017 (part three)
The final part of our three-part series, which gives an overview of the papers we are presenting at the ICML 2017 Conference in Sydney, Australia.

Read More
DeepMind papers at ICML 2017 (part two)
The second of our three-part series, which gives an overview of the papers we are presenting at the ICML 2017 Conference in Sydney, Australia.

Read More
AI and Neuroscience: A virtuous circle
Recent progress in AI has been remarkable. Artificial systems now outperform expert humans at Atari video games, the ancient board game Go, and high-stakes matches of heads-up poker. They can also produce handwriting and speech indistinguishable from those of humans, translate between multiple languages, and even reformat your holiday snaps in the style of Van Gogh masterpieces.

These advances are attributed to several factors, including the application of new statistical approaches and the increased processing power of computers. But in a recent Perspective in the journal Neuron, we argue that one often overlooked contribution is the use of ideas from experimental and theoretical neuroscience.

Psychology and neuroscience have played a key role in the history of AI. Founding figures such as Donald Hebb, Warren McCulloch, Marvin Minsky and Geoff Hinton were all originally motivated by a desire to understand how the brain works. In fact, throughout the late 20th century, much of the key work developing neural networks took place not in mathematics or physics labs, but in psychology and neurophysiology departments.

Read More
Going beyond average for reinforcement learning
Consider the commuter who toils backwards and forwards each day on a train. Most mornings, her train runs on time and she reaches her first meeting relaxed and ready. But she knows that once in a while the unexpected happens: a mechanical problem, a signal failure, or even just a particularly rainy day. Invariably these hiccups disrupt her pattern, leaving her late and flustered.

Randomness is something we encounter every day, and it has a profound effect on how we experience the world. The same is true in reinforcement learning (RL) applications: systems that learn by trial and error and are motivated by rewards. Typically, an RL algorithm predicts the average reward it receives from multiple attempts at a task, and uses this prediction to decide how to act. But random perturbations in the environment can alter its behaviour by changing the exact amount of reward the system receives. In a new paper, we show it is possible to model not only the average but also the full variation of this reward, what we call the value distribution.

Read More
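The commuter example translates directly into code. The toy simulation below uses illustrative numbers, not figures from the paper, and a Monte Carlo estimate as a stand-in for the paper's distributional Bellman machinery; it shows what the single average hides and what the full value distribution keeps.

```python
import numpy as np

rng = np.random.default_rng(0)

def commute_minutes():
    """One trial: usually on time, occasionally badly delayed."""
    if rng.random() < 0.9:
        return rng.normal(30, 2)   # a normal morning
    return rng.normal(60, 10)      # signal failure, mechanical problem, rain

samples = np.array([commute_minutes() for _ in range(100_000)])

# A standard RL prediction keeps only this single number ...
print("mean commute:", samples.mean())  # ~33 minutes

# ... but the value distribution keeps the whole shape, so the rare
# bad days stay visible instead of being averaged away.
print("90th / 99th percentile:",
      np.percentile(samples, 90), np.percentile(samples, 99))
```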