Deep neural networks have learnt to do an amazing array of tasks – from recognising and reasoning about objects in images to playing Atari and Go at super-human levels. As these tasks and network architectures become more complex, the solutions that neural networks learn become more difficult to understand.
Enhancing patient safety at Taunton and Somerset NHS Foundation Trust
We're delighted to announce our first partnership outside of London to help doctors and nurses break new ground in the NHS's use of digital technology.

Streams is our secure mobile app that helps doctors and nurses give faster urgent care to patients showing signs of deterioration by giving them the right information more quickly. Over the next five years, we'll be rolling it out at Taunton and Somerset NHS Foundation Trust as part of a new partnership. You can find out more on the trust's website.

Our collaboration with Taunton and Somerset follows on from our work with Imperial College Healthcare NHS Trust and the Royal Free NHS Foundation Trust. Nurses already using Streams at the Royal Free tell us that the app is saving them up to two hours a day, allowing them to redirect valuable time back into targeted patient care.

Where some current systems can take hours, Streams uses breaking-news-style alerts to notify clinicians within seconds when a test result indicates that one of their patients shows signs of becoming ill.
Learning through human feedback
We believe that Artificial Intelligence will be one of the most important and widely beneficial scientific advances ever made, helping humanity tackle some of its greatest challenges, from climate change to delivering advanced healthcare. But for AI to deliver on this promise, we know that the technology must be built in a responsible manner and that we must consider all potential challenges and risks. That is why DeepMind co-founded initiatives like the Partnership on AI to Benefit People and Society, and why we have a team dedicated to technical AI safety. Research in this field needs to be open and collaborative to ensure that best practices are adopted as widely as possible, which is why we are also collaborating with OpenAI on research in technical AI safety.

One of the central questions in this field is how we allow humans to tell a system what we want it to do and – importantly – what we don't want it to do. This is increasingly important as the problems we tackle with machine learning grow more complex and are applied in the real world.

The first results from our collaboration demonstrate one method to address this: allowing humans with no technical experience to teach a reinforcement learning (RL) system – an AI that learns by trial and error – a complex goal. This removes the need for the human to specify a goal for the algorithm in advance.
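As a rough illustration of that idea – our own sketch, not code from the collaboration – the snippet below trains a hypothetical reward model to agree with pairwise human judgements: the human is shown two short trajectory segments, picks the one they prefer, and the model is nudged to assign that segment the higher total predicted reward. An RL agent can then be trained against this learned reward instead of a hand-specified one. All names and shapes here are illustrative.

```python
import tensorflow as tf

# Hypothetical reward model: maps each observation in a segment to a scalar reward.
reward_model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

def preference_loss(segment_a, segment_b, human_prefers_a):
    """Cross-entropy on which of two trajectory segments the human preferred."""
    r_a = tf.reduce_sum(reward_model(segment_a))  # total predicted reward of segment A
    r_b = tf.reduce_sum(reward_model(segment_b))  # total predicted reward of segment B
    # The preferred segment should receive the higher summed reward.
    label = tf.constant(1.0 if human_prefers_a else 0.0)
    return tf.nn.sigmoid_cross_entropy_with_logits(labels=label, logits=r_a - r_b)

# Example: two 10-step segments of 4-dimensional observations; the human preferred A.
seg_a = tf.random.normal([10, 4])
seg_b = tf.random.normal([10, 4])
loss = preference_loss(seg_a, seg_b, human_prefers_a=True)
```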
A neural approach to relational reasoning
Consider the reader who pieces together the evidence in an Agatha Christie novel to predict the culprit of the crime, a child who runs ahead of her ball to prevent it rolling into a stream, or even a shopper who compares the relative merits of buying kiwis or mangos at the market.

We carve our world into relations between things. And we understand how the world works through our capacity to draw logical conclusions about how these different things – such as physical objects, sentences, or even abstract ideas – are related to one another. This ability is called relational reasoning and is central to human intelligence.

We construct these relations from the cascade of unstructured sensory inputs we experience every day. For example, our eyes take in a barrage of photons, yet our brain organises this blooming, buzzing confusion into the particular entities that we need to relate.
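To make the neural approach concrete: the module described in the accompanying paper scores every pair of "objects" with a shared network and sums the results before a final readout. The sketch below is our own rough rendering of that pattern, not DeepMind's code; `g` and `f` stand in for small MLPs and the shapes are arbitrary.

```python
import tensorflow as tf

# Illustrative pair-scoring and readout networks.
g = tf.keras.Sequential([tf.keras.layers.Dense(256, activation="relu"),
                         tf.keras.layers.Dense(256, activation="relu")])
f = tf.keras.Sequential([tf.keras.layers.Dense(256, activation="relu"),
                         tf.keras.layers.Dense(10)])

def relation_network(objects):
    """objects: [batch, n, d] tensor of per-object embeddings."""
    n = objects.shape[1]
    o_i = tf.tile(objects[:, :, None, :], [1, 1, n, 1])  # [batch, n, n, d]
    o_j = tf.tile(objects[:, None, :, :], [1, n, 1, 1])  # [batch, n, n, d]
    pairs = tf.concat([o_i, o_j], axis=-1)               # all ordered object pairs
    relations = g(pairs)                                  # relate each pair with a shared MLP
    return f(tf.reduce_sum(relations, axis=[1, 2]))       # aggregate, then reason over the sum

logits = relation_network(tf.random.normal([8, 5, 32]))   # e.g. 5 objects of size 32
```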
AlphaGo’s next move
With just three stones on the board, it was clear that this was going to be no ordinary game of Go. Chinese Go Grandmaster and world number one Ke Jie departed from his typical style of play and opened with a 3-3 point strategy – a highly unusual approach aimed at quickly claiming corner territory at the start of the game. The placement is rare amongst Go players, but it's a favoured position of our program AlphaGo. Ke Jie was playing AlphaGo at its own game.

Ke Jie's thoughtful positioning of that single black stone was a fitting motif for the opening match of The Future of Go Summit in Wuzhen, China, an event dedicated to exploring the truth of this beautiful and ancient game. Over the last five days we have been honoured to witness games of the highest calibre.
Exploring the mysteries of Go with AlphaGo and China’s top players
Just over a year ago, we saw a major milestone in the field of artificial intelligence: DeepMind's AlphaGo took on and defeated one of the world's top Go players, the legendary Lee Sedol. Even then, we had no idea how this moment would affect the 3,000-year-old game of Go and the growing global community of devotees to this beautiful board game.
Innovations of AlphaGo
One of the great promises of AI is its potential to help us unearth new knowledge in complex domains. We've already seen exciting glimpses of this, when our algorithms found ways to dramatically improve energy use in data centres – as well as, of course, with our program AlphaGo.
Open sourcing Sonnet – a new library for constructing neural networks
It's now nearly a year since DeepMind made the decision to switch the entire research organisation to using TensorFlow (TF). It's proven to be a good choice – many of our models learn significantly faster, and the built-in features for distributed training have hugely simplified our code.

Along the way, we found that the flexibility and adaptiveness of TF lends itself to building higher-level frameworks for specific purposes, and we've written one for quickly building neural network modules with TF. We are actively developing this codebase, but what we have so far fits our research needs well, and we're excited to announce that today we are open sourcing it. We call this framework Sonnet.
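To give a flavour of the modular style Sonnet encourages – configuration in a module's constructor, graph construction when the module is connected, with variable sharing handled for you – here is a minimal sketch loosely following the TF 1-era API of the original release. It is illustrative only; exact signatures in the released library may differ.

```python
import sonnet as snt
import tensorflow as tf

class TwoLayerMLP(snt.AbstractModule):
    """Configuration lives in the constructor; graph construction in _build."""

    def __init__(self, hidden_size, output_size, name="two_layer_mlp"):
        super(TwoLayerMLP, self).__init__(name=name)
        self._hidden_size = hidden_size
        self._output_size = output_size

    def _build(self, inputs):
        hidden = tf.nn.relu(snt.Linear(output_size=self._hidden_size)(inputs))
        return snt.Linear(output_size=self._output_size)(hidden)

mlp = TwoLayerMLP(hidden_size=128, output_size=10)
inputs = tf.placeholder(tf.float32, [None, 784])
logits = mlp(inputs)        # connects the module into the graph
logits_again = mlp(inputs)  # reconnecting reuses the same variables automatically
```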
Distill: Communicating the science of machine learning
As in every field of science, the importance of clear communication in machine learning research cannot be over-emphasised: it helps to drive forward the state of the art by allowing the research community to share, discuss and build upon new findings.

For this reason, we at DeepMind are enthusiastic supporters of Distill, a new independent, web-based medium for clear and open – demystified – machine learning research, comprising a journal, prizes recognising outstanding work, and tools to create interactive essays.
Enabling Continual Learning in Neural Networks
Computer programs that learn to perform tasks also typically forget them very quickly. We show that the learning rule can be modified so that a program can remember old tasks when learning a new one. This is an important step towards more intelligent programs that are able to learn progressively and adaptively.
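The modification described in the accompanying paper (elastic weight consolidation) adds a quadratic penalty when training on a new task, anchoring each parameter to the value it had after the old task in proportion to how important it was for that task – for example, via a diagonal Fisher information estimate. A minimal, illustrative sketch of such a penalty, with hypothetical variable names, might look like this:

```python
import tensorflow as tf

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """Elastic-weight-consolidation-style regulariser.

    params:     current trainable variables (list of tensors)
    old_params: snapshots of those variables after training on the old task
    fisher:     per-parameter importance estimates (e.g. diagonal Fisher), same shapes
    lam:        how strongly to protect the old task relative to the new one
    """
    return 0.5 * lam * tf.add_n([
        tf.reduce_sum(f * tf.square(p - p_old))
        for p, p_old, f in zip(params, old_params, fisher)
    ])

# Usage sketch: total_loss = new_task_loss + ewc_penalty(model.trainable_variables,
#                                                        saved_params, fisher_estimates)
```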