Learning to write programs that generate images

Through human eyes, the world is much more than just the images reflected in our corneas. For example, when we look at a building and admire the intricacies of its design, we can appreciate the craftsmanship it requires. This ability to interpret objects through the tools that created them gives us a richer understanding of the world and is an important aspect of our intelligence. We would like our systems to create similarly rich representations of the world. For example, when observing an image of a painting, we would like them to understand the brush strokes used to create it and not just the pixels that represent it on a screen. In this work, we equipped artificial agents with the same tools that we use to generate images and demonstrated that they can reason about how digits, characters and portraits are constructed. Crucially, they learn to do this by themselves, without the need for human-labelled datasets. This contrasts with recent research, which has so far relied on learning from human demonstrations, a process that can be time-intensive.

Understanding deep learning through neuron deletion

Deep neural networks are composed of many individual neurons, which combine in complex and counterintuitive ways to solve a wide range of challenging tasks. This complexity grants neural networks their power but also earns them their reputation as confusing and opaque black boxes. Understanding how deep neural networks function is critical for explaining their decisions and enabling us to build more powerful systems. For instance, imagine the difficulty of trying to build a clock without understanding how individual gears fit together. One approach to understanding neural networks, both in neuroscience and deep learning, is to investigate the role of individual neurons, especially those which are easily interpretable. Our investigation into the importance of single directions for generalisation, soon to appear at the Sixth International Conference on Learning Representations (ICLR), uses an approach inspired by decades of experimental neuroscience exploring the impact of damage to determine: how important are small groups of neurons in deep neural networks? Are more easily interpretable neurons also more important to the network's computation?
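The deletion idea is easy to prototype: zero out a unit's activation and measure how much the network's output moves. The toy network below is entirely illustrative (random weights, no training, names of our choosing), not the networks studied in the paper.

```python
import numpy as np

# Toy one-hidden-layer network; "deleting" a neuron means zeroing its
# activation and comparing the output against the intact network.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input -> hidden
W2 = rng.normal(size=(8, 3))   # hidden -> output

def forward(x, ablate=None):
    h = np.maximum(0, x @ W1)          # ReLU hidden activations
    if ablate is not None:
        h[:, ablate] = 0.0             # delete one hidden neuron
    return h @ W2

x = rng.normal(size=(16, 4))
baseline = forward(x)

# Importance of each neuron = mean output change when it is removed.
importance = [np.abs(forward(x, ablate=i) - baseline).mean() for i in range(8)]
print(importance)
```

The same loop over groups of units, rather than single units, gives the group-deletion experiments the paper describes.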

Stop, look and listen to the people you want to help

"I like to take things slow. Take it slowly and get it right first time," one participant said, but was quickly countered by someone else around the table: "But I'm impatient, I want to see the benefits now." This exchange neatly captures many of the conversations I heard at DeepMind Health's recent Collaborative Listening Summit. It also represents, in layman's terms, the debate that tech thinkers and policy-makers are having right now about the future of artificial intelligence. The Collaborative Listening Summit brought together members of the public, patient representatives and stakeholders, and was facilitated by Ipsos MORI. The objective of the Summit was to explore how principles, co-created in earlier events with the public, patients and stakeholders, should govern DeepMind Health's operating practices and engagement with the NHS. These principles ranged from the technical (for example, how evidence should inform DeepMind's practice) to the societal (for example, operating in the best interests of society). The challenge of how technology companies and the NHS should interact has left many of us, including myself, cautious about the risk of big technology firms leveraging their finance and power over an NHS that is under seemingly endless pressure.

Learning by playing

Getting children (and adults) to tidy up after themselves can be a challenge, but we face an even greater challenge trying to get our AI agents to do the same. Success depends on the mastery of several core visuo-motor skills: approaching an object, grasping and lifting it, opening a box and putting things inside it. To make matters more complicated, these skills must be applied in the right sequence. Control tasks, like tidying up a table or stacking objects, require an agent to determine how, when and where to coordinate the nine joints of its simulated arms and fingers to move correctly and achieve its objective. The sheer number of possible combinations of movements at any given time, along with the need to carry out a long sequence of correct actions, constitutes a serious exploration problem, making this a particularly interesting area for reinforcement learning research. Techniques like reward shaping, apprenticeship learning or learning from demonstrations can help with the exploration problem. However, these methods rely on a considerable amount of knowledge about the task; learning complex control problems from scratch with minimal prior knowledge is still an open challenge. Our new paper proposes a learning paradigm called Scheduled Auxiliary Control (SAC-X) which seeks to overcome this exploration issue.
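The core scheduling idea can be caricatured in a few lines: keep a set of auxiliary tasks alongside the sparse main task, and let a scheduler choose which task's reward the agent pursues in each episode. The task names and uniform scheduler below are illustrative simplifications of our own, not the SAC-X algorithm itself (which also learns per-task policies and can learn the schedule rather than sampling it uniformly).

```python
import random

# Illustrative tasks for a simulated arm: "stack" stands in for the
# sparse main task, the others for easier auxiliary rewards whose
# experience can still be reused to train every task's policy.
TASKS = ["reach", "grasp", "lift", "stack"]

class UniformScheduler:
    """Pick which task's reward the agent follows for the next episode."""
    def choose(self):
        return random.choice(TASKS)

scheduler = UniformScheduler()
schedule = [scheduler.choose() for _ in range(10)]
print(schedule)
```

Even this uniform version conveys why scheduling helps exploration: episodes spent on easy auxiliary tasks still move objects around and generate experience that the main stacking task would almost never produce on its own.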

Researching patient deterioration with the US Department of Veterans Affairs

We're excited to announce a medical research partnership with the US Department of Veterans Affairs (VA), one of the world's leading healthcare organisations, responsible for providing high-quality care to veterans and their families across the United States. This project will see us analyse patterns from historical, depersonalised medical records to predict patient deterioration. Patient deterioration is a significant global health problem that often has fatal consequences. Studies estimate that 11% of all in-hospital deaths are due to patient deterioration not being recognised early enough or acted on in the right way. Alongside world-renowned clinicians and researchers at the VA, we are analysing patterns from approximately 700,000 historical, depersonalised medical records in order to determine whether machine learning can accurately identify the risk factors for patient deterioration and correctly predict its onset. We're focusing on Acute Kidney Injury (AKI), one of the most common conditions associated with patient deterioration and an area where DeepMind and the VA both have expertise. This is a complex challenge: not only is the onset of AKI sudden and often asymptomatic, but the risk factors associated with it are commonplace throughout hospitals.

Scalable agent architecture for distributed training

Deep Reinforcement Learning (DeepRL) has achieved remarkable success in a range of tasks, from continuous control problems in robotics to playing games like Go and Atari. The improvements seen in these domains have so far been limited to individual tasks, where a separate agent has been tuned and trained for each task. In our most recent work, we explore the challenge of training a single agent on many tasks. Today we are releasing DMLab-30, a set of new tasks that span a large variety of challenges in a visually unified environment with a common action space. Training an agent to perform well on many tasks requires massive throughput and efficient use of every data point. To this end, we have developed a new, highly scalable agent architecture for distributed training, the Importance Weighted Actor-Learner Architecture (IMPALA), which uses a new off-policy correction algorithm called V-trace. DMLab-30 is a collection of new levels designed using our open-source RL environment DeepMind Lab. These environments enable any DeepRL researcher to test systems on a large spectrum of interesting tasks, either individually or in a multi-task setting.
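V-trace itself is compact enough to sketch: it corrects n-step value targets computed from a behaviour policy's trajectories using clipped importance ratios. The numpy version below follows the published recursion, with variable names of our choosing; it is a sketch, not the production implementation.

```python
import numpy as np

def vtrace(values, rewards, rhos, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets for one trajectory.

    values:  V(x_0)..V(x_T), length T+1
    rewards: r_0..r_{T-1}
    rhos:    importance ratios pi(a_t|x_t) / mu(a_t|x_t), length T
    """
    T = len(rewards)
    clipped_rhos = np.minimum(rho_bar, rhos)   # clipped rho_t weights
    cs = np.minimum(c_bar, rhos)               # "trace-cutting" coefficients
    # Temporal-difference errors, importance-weighted.
    deltas = clipped_rhos * (rewards + gamma * values[1:] - values[:-1])
    targets = np.zeros(T)
    acc = 0.0
    for t in reversed(range(T)):               # backward recursion
        acc = deltas[t] + gamma * cs[t] * acc
        targets[t] = values[t] + acc
    return targets

# Sanity check: on-policy (all ratios 1), the corrections telescope and
# the targets reduce to ordinary n-step returns.
targets = vtrace(np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0]),
                 np.ones(2), gamma=0.5)
# targets[0] = 1 + 0.5*1 + 0.25*V(x_2) = 1.75; targets[1] = 1 + 0.5*1 = 1.5
```

The clipping is what makes this work at scale: actors can lag behind the learner, and the ratios bound how much stale experience is trusted.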

Learning explanatory rules from noisy data

Suppose you are playing football. The ball arrives at your feet, and you decide to pass it to the unmarked striker. What seems like one simple action requires two different kinds of thought. First, you recognise that there is a football at your feet. This recognition requires intuitive perceptual thinking: you cannot easily articulate how you come to know that there is a ball at your feet, you just see that it is there. Second, you decide to pass the ball to a particular striker. This decision requires conceptual thinking. Your decision is tied to a justification: the reason you passed the ball to the striker is that she was unmarked. The distinction is interesting to us because these two types of thinking correspond to two different approaches to machine learning: deep learning and symbolic program synthesis. Deep learning concentrates on intuitive perceptual thinking, whereas symbolic program synthesis focuses on conceptual, rule-based thinking. Each approach has different merits: deep learning systems are robust to noisy data but are difficult to interpret and require large amounts of data to train, whereas symbolic systems are much easier to interpret and require less training data but struggle with noisy data.

Open-sourcing Psychlab

Consider the simple task of going shopping for your groceries. If you fail to pick up an item that is on your list, what does that tell us about the functioning of your brain? It might indicate that you have difficulty shifting your attention from object to object while searching for the item on your list. It might indicate a difficulty with remembering the grocery list. Or it could be something to do with executing both skills simultaneously.

Game-theory insights into asymmetric multi-agent games

As AI systems start to play an increasing role in the real world, it is important to understand how different systems will interact with one another. In our latest paper, published in the journal Scientific Reports, we use a branch of game theory to shed light on this problem. In particular, we examine how two intelligent systems behave and respond in a particular type of situation known as an asymmetric game; asymmetric games include Leduc poker and various board games such as Scotland Yard. They also naturally model certain real-world scenarios, such as automated auctions where buyers and sellers operate with different motivations. Our results give us new insights into these situations and reveal a surprisingly simple way to analyse them. While our interest is in how this theory applies to the interaction of multiple AI systems, we believe the results could also be of use in economics, evolutionary biology and empirical game theory, among other fields.
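As a flavour of the kind of analysis involved, here is how one checks a small asymmetric two-player game for pure-strategy Nash equilibria by looking for mutual best responses. The payoff matrices are invented for illustration and are not the games studied in the paper.

```python
import numpy as np

# Asymmetric game: the row and column players have different payoff
# matrices, as in a buyer/seller interaction. Entries are made up.
A = np.array([[3, 0],    # row player's payoffs
              [5, 1]])
B = np.array([[2, 4],    # column player's payoffs
              [0, 1]])

def pure_nash(A, B):
    """Cells where each player's action is a best response to the other's."""
    eq = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max():
                eq.append((i, j))
    return eq

print(pure_nash(A, B))  # -> [(1, 1)]
```

Checking every cell like this only works for tiny games; the appeal of the theory in the paper is that it lets larger asymmetric games be analysed through simpler, related symmetric ones.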

2017: DeepMind’s year in review

In July, the world number one Go player Ke Jie spoke after a streak of 20 wins, two months after he had played AlphaGo at the Future of Go Summit in Wuzhen, China. "After my match against AlphaGo, I fundamentally reconsidered the game, and now I can see that this reflection has helped me greatly," he said. "I hope all Go players can contemplate AlphaGo's understanding of the game and style of thinking, all of which is deeply meaningful. Although I lost, I discovered that the possibilities of Go are immense and that the game has continued to progress."