In a world of fiercely complex, emergent, and hard-to-master systems – from our climate to the diseases we strive to conquer – we believe that intelligent programs will help unearth new scientific knowledge that we can use for social benefit. To achieve this, we believe we'll need general-purpose learning systems that are capable of developing their own understanding of a problem from scratch, and of using this to identify patterns and breakthroughs that we might otherwise miss. This is the focus of our long-term research mission at DeepMind.
Bringing the best of mobile technology to Imperial College Healthcare NHS Trust
We're really excited to announce that we've agreed a five-year partnership with Imperial College Healthcare NHS Trust, helping them make the most of the opportunity for mobile clinical applications to improve care. This is now our second NHS partnership for clinical apps, following a similar partnership we announced last month with the Royal Free London NHS Foundation Trust.

Over the last two years, the Trust has moved from paper to electronic patient records, and mobile technology is the natural next stage of this work. By giving clinicians access to cutting-edge healthcare apps that link to electronic patient records, they'll be able to access information on the move, react quickly in response to changing patient needs, and ultimately provide even better care.

We'll be working with the Trust to deploy our clinical app, Streams, which supports clinicians in caring for patients at risk of deterioration, particularly with conditions where early intervention can make all the difference. Like breaking news alerts on a mobile phone, the technology will notify nurses and doctors immediately when test results show a patient is at risk of becoming seriously ill. It will also enable clinicians at the Trust to securely assign and communicate about clinical tasks, and give them the information they need to make diagnoses and decisions.
DeepMind Papers @ NIPS (Part 3)
Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes

Authors: J Rae, JJ Hunt, T Harley, I Danihelka, A Senior, G Wayne, A Graves, T Lillicrap

We can recall vast numbers of memories, making connections between superficially unrelated events. As you read a novel, you'll likely remember quite precisely the last few things you've read, but also plot summaries, connections and character traits from far back in the novel.

Many machine learning models of memory, such as Long Short-Term Memory, struggle at these sorts of tasks. The computational cost of these models scales quadratically with the number of memories they can store, so they are quite limited in how many memories they can have. More recently, memory-augmented neural networks such as the Differentiable Neural Computer or Memory Networks have shown promising results by keeping memory separate from computation, and solving tasks such as reading short stories and answering questions (e.g. bAbI).

However, while these new architectures show promising results on small tasks, they use "soft attention" for accessing their memories, meaning that at every timestep they touch every word in memory. So while they can scale to short stories, they're a long way from reading novels.

In this work, we develop a set of techniques to use sparse approximations of such models to dramatically improve their scalability.
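The difference between soft attention, which touches every memory slot at every timestep, and a sparse approximation that reads only a handful of slots can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's implementation; a simple top-k selection stands in for the paper's sparse read machinery:

```python
import numpy as np

def dense_read(memory, key):
    # Soft attention: score and weight every memory slot, so the cost of a
    # single read grows with the total number of memories.
    scores = memory @ key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

def sparse_read(memory, key, k=4):
    # Sparse approximation: attend only to the k best-matching slots and
    # treat every other slot as having zero weight.
    scores = memory @ key
    top_k = np.argpartition(scores, -k)[-k:]
    weights = np.exp(scores[top_k] - scores[top_k].max())
    weights /= weights.sum()
    return weights @ memory[top_k]

memory = np.random.randn(1024, 32)   # 1024 slots of 32-dimensional words
key = np.random.randn(32)
dense_word = dense_read(memory, key)
sparse_word = sparse_read(memory, key, k=8)
```

When k equals the number of slots, the sparse read recovers the dense one exactly; the scalability gain comes from keeping k small and fixed as memory grows.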
DeepMind Papers @ NIPS (Part 2)
The second blog post in this series, sharing brief descriptions of the papers we are presenting at the NIPS 2016 Conference in Barcelona.

Sequential Neural Models with Stochastic Layers

Authors: Marco Fraccaro, Søren Kaae Sønderby, Ulrich Paquet, Ole Winther

Much of our reasoning about the world is sequential, from listening to sounds and voices and music, to imagining our steps to reach a destination, to tracking a tennis ball through time. All these sequences have some amount of latent random structure in them. Two powerful and complementary models, recurrent neural networks (RNNs) and stochastic state space models (SSMs), are widely used to model sequential data like these. RNNs are excellent at capturing longer-term dependencies in data, while SSMs model uncertainty in the sequence's underlying latent random structure, and are great for tracking and control.

Is it possible to get the best of both worlds? In this paper we show how you can, by carefully layering deterministic (RNN) and stochastic (SSM) layers. We show how you can efficiently reason about a sequence's present latent structure, given its past (filtering) and also its past and future (smoothing).

For further details and related work, please see the paper: https://arxiv.org/abs/1605.07571

Check it out at NIPS:
Tue Dec 6th 05:20 – 05:40 PM @ Area 1+2 (Oral) in Deep Learning
Tue Dec 6th 06:00 – 09:30 PM @ Area 5+6+7+8 #179
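The layering idea can be sketched generatively: a deterministic recurrent state carries long-range context, and a stochastic latent state is sampled at each step conditioned on it. Below is a toy NumPy sketch with made-up dimensions and random, untrained weights — not the model from the paper, which learns these transitions and performs filtering and smoothing with variational inference:

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 3, 8, 2

# Randomly initialised weights; in the actual model these are learned.
W_xh = rng.standard_normal((h_dim, x_dim)) * 0.1
W_hh = rng.standard_normal((h_dim, h_dim)) * 0.1
W_hz = rng.standard_normal((z_dim, h_dim)) * 0.1
W_zz = rng.standard_normal((z_dim, z_dim)) * 0.1

def generate(xs, noise_scale=0.1):
    """One deterministic (RNN) layer feeding one stochastic (SSM) layer."""
    h = np.zeros(h_dim)
    z = np.zeros(z_dim)
    zs = []
    for x in xs:
        # Deterministic layer: an ordinary recurrent update.
        h = np.tanh(W_hh @ h + W_xh @ x)
        # Stochastic layer: the latent state depends on its previous value
        # and on the deterministic state, with Gaussian noise on top.
        z = np.tanh(W_zz @ z + W_hz @ h) + noise_scale * rng.standard_normal(z_dim)
        zs.append(z)
    return np.stack(zs)
```

The point of the ordering is that randomness enters only through the stochastic layer, while the deterministic layer below it remembers the distant past.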
Open-sourcing DeepMind Lab
DeepMind's scientific mission is to push the boundaries of AI, developing systems that can learn to solve any complex problem without needing to be taught how. To achieve this, we work from the premise that AI needs to be general. Agents should operate across a wide range of tasks and be able to automatically adapt to changing circumstances. That is, they should not be pre-programmed, but rather, able to learn automatically from their raw inputs and reward signals from the environment. There are two parts to this research program: (1) designing ever-more intelligent agents capable of more and more sophisticated cognitive skills, and (2) building increasingly complex environments where agents can be trained and evaluated.
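The premise that an agent receives only raw observations and reward signals from its environment corresponds to a simple interaction loop. A minimal sketch with a toy corridor environment — all names here are hypothetical, and DeepMind Lab's actual API differs:

```python
import random

class ToyCorridor:
    """Toy stand-in for an environment like DeepMind Lab: the agent sees
    only a raw observation and a scalar reward, never the task's rules."""

    def __init__(self, length=5):
        self.length = length
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # the raw observation

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.pos = max(0, min(self.length - 1, self.pos + action))
        done = self.pos == self.length - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done

def run_random_agent(env, steps=200, seed=0):
    """A (deliberately naive) random agent driving the interaction loop."""
    random.seed(seed)
    obs, total = env.reset(), 0.0
    for _ in range(steps):
        obs, reward, done = env.step(random.choice([-1, 1]))
        total += reward
        if done:
            obs = env.reset()
    return total
```

A learning agent would replace the random action choice with a policy improved from the observed rewards; the environment side of the loop stays the same.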
DeepMind Papers @ NIPS (Part 1)
Over the next three blog posts, we're going to share with you brief descriptions of the papers we are presenting at the NIPS 2016 Conference in Barcelona.
Working with the NHS to build lifesaving technology
We're very proud to announce a groundbreaking five-year partnership with the Royal Free London NHS Foundation Trust.

Doctors and nurses in the NHS do a phenomenal job caring for patients, but they're being badly let down by technology. Pagers, fax machines and paper records are still standard in most NHS hospitals, and too often top-down IT systems don't meet clinical needs because they are built far away from the frontline of patient care.

This slow and outdated technology means that important changes in a patient's condition often don't get brought to the attention of the right clinician in time to prevent further serious illness. When the right information doesn't reach the right clinician in time, the consequences for patients can be severe, and even fatal. At least ten thousand people a year die in UK hospitals from entirely preventable causes, and some 40% of patients could avoid being admitted to intensive care if the right clinician were able to take the right action sooner.

Our partnership aims to change that, by taking a very different approach to building IT for patient care. Together we are creating world-leading technology, in close collaboration with clinicians themselves, to ensure that the right patient information gets to the right clinicians at the right time, reducing preventable deaths and illnesses.
Reinforcement learning with unsupervised auxiliary tasks
Our primary mission at DeepMind is to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how. Our reinforcement learning agents have achieved breakthroughs in Atari 2600 games and the game of Go. Such systems, however, can require a lot of data and a long time to learn, so we are always looking for ways to improve our generic learning algorithms.
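The auxiliary-tasks idea named in this post's title amounts to adding extra self-supervised losses, such as reward prediction or pixel control, on top of the main reinforcement learning objective, all sharing the agent's representation. A minimal sketch of how such losses might be combined — the task names and weighting scheme here are illustrative, not the paper's exact objective:

```python
def combined_loss(rl_loss, aux_losses, aux_weights):
    """Total objective: the base RL loss plus weighted auxiliary losses.

    aux_losses and aux_weights are dicts keyed by (hypothetical) task
    names such as 'pixel_control' or 'reward_prediction'; tasks without
    a weight contribute nothing.
    """
    total = rl_loss
    for task, loss in aux_losses.items():
        total += aux_weights.get(task, 0.0) * loss
    return total
```

Because the auxiliary predictions reuse the same network trunk, the extra gradient signal shapes the shared representation even when environment rewards are sparse.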
DeepMind and Blizzard to release StarCraft II as an AI research environment
Today at BlizzCon 2016 in Anaheim, California, we announced our collaboration with Blizzard Entertainment to open up StarCraft II to AI and Machine Learning researchers around the world.
Differentiable neural computers
In a recent study in Nature, we introduce a form of memory-augmented neural network called a differentiable neural computer, and show that it can learn to use its memory to answer questions about complex, structured data, including artificially generated stories, family trees, and even a map of the London Underground. We also show that it can solve a block puzzle game using reinforcement learning.
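One core mechanism behind this kind of memory use is content-based addressing: the controller emits a read key, compares it against every memory row, and reads back a similarity-weighted blend of the rows. A minimal NumPy sketch of that single mechanism — the full differentiable neural computer also has write heads, usage tracking and temporal links, none of which are shown here:

```python
import numpy as np

def content_read(memory, key, beta=10.0):
    """Read from memory by cosine similarity between key and each row.

    memory: (slots, width) array; key: (width,) array;
    beta: sharpness — larger values push the read towards the best match.
    """
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = (memory @ key) / norms
    weights = np.exp(beta * similarity)
    weights /= weights.sum()
    return weights @ memory
```

Because the read weights are a smooth function of the key, gradients flow through memory access, which is what makes the whole computer trainable end to end.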