We believe that artificial intelligence will be one of the most important and widely beneficial scientific advances ever made, helping humanity tackle some of its greatest challenges, from climate change to delivering advanced healthcare. But for AI to deliver on this promise, we know that the technology must be built in a responsible manner and that we must consider all potential challenges and risks. That is why DeepMind co-founded initiatives like the Partnership on AI to Benefit People and Society, and why we have a team dedicated to technical AI safety. Research in this field needs to be open and collaborative to ensure that best practices are adopted as widely as possible, which is why we are also collaborating with OpenAI on research in technical AI safety. One of the central questions in this field is how we allow humans to tell a system what we want it to do and, importantly, what we don't want it to do. This becomes increasingly important as the problems we tackle with machine learning grow more complex and are applied in the real world.

The first results from our collaboration demonstrate one method to address this: allowing humans with no technical experience to teach a reinforcement learning (RL) system – an AI that learns by trial and error – a complex goal. This removes the need for the human to specify a goal for the algorithm in advance.
A neural approach to relational reasoning
Consider the reader who pieces together the evidence in an Agatha Christie novel to predict the culprit of the crime, a child who runs ahead of her ball to prevent it rolling into a stream, or even a shopper who compares the relative merits of buying kiwis or mangos at the market.

We carve our world into relations between things. And we understand how the world works through our capacity to draw logical conclusions about how these different things – such as physical objects, sentences, or even abstract ideas – are related to one another. This ability is called relational reasoning and is central to human intelligence.

We construct these relations from the cascade of unstructured sensory inputs we experience every day. For example, our eyes take in a barrage of photons, yet our brain organises this blooming, buzzing confusion into the particular entities that we need to relate.
AlphaGo’s next move
With just three stones on the board, it was clear that this was going to be no ordinary game of Go. Chinese Go Grandmaster and world number one Ke Jie departed from his typical style of play and opened with a 3:3 point strategy – a highly unusual approach aimed at quickly claiming corner territory at the start of the game. The placement is rare amongst Go players, but it's a favoured position of our program AlphaGo. Ke Jie was playing AlphaGo at its own game.

Ke Jie's thoughtful positioning of that single black stone was a fitting motif for the opening match of The Future of Go Summit in Wuzhen, China, an event dedicated to exploring the truth of this beautiful and ancient game. Over the last five days we have been honoured to witness games of the highest calibre.
Exploring the mysteries of Go with AlphaGo and China’s top players
Just over a year ago, we saw a major milestone in the field of artificial intelligence: DeepMind's AlphaGo took on and defeated one of the world's top Go players, the legendary Lee Sedol. Even then, we had no idea how this moment would affect the 3,000-year-old game of Go and the growing global community of devotees to this beautiful board game.
Innovations of AlphaGo
One of the great promises of AI is its potential to help us unearth new knowledge in complex domains. We've already seen exciting glimpses of this, when our algorithms found ways to dramatically improve energy use in data centres – as well as, of course, with our program AlphaGo.
Open sourcing Sonnet – a new library for constructing neural networks
It's now nearly a year since DeepMind made the decision to switch the entire research organisation to using TensorFlow (TF). It's proven to be a good choice – many of our models learn significantly faster, and the built-in features for distributed training have hugely simplified our code. Along the way, we found that the flexibility and adaptiveness of TF lends itself to building higher-level frameworks for specific purposes, and we've written one for quickly building neural network modules with TF. We are actively developing this codebase, but what we have so far fits our research needs well, and we're excited to announce that today we are open sourcing it. We call this framework Sonnet.
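The central idea behind module-based libraries like Sonnet is that a module's parameters are constructed once and then shared across every application of the module. The dependency-free toy below illustrates that pattern in plain NumPy; the class name and lazy-initialisation scheme are purely illustrative and are not Sonnet's actual API.

```python
import numpy as np

class ToyLinear:
    """Toy module in the Sonnet spirit: weights are created lazily on
    first use, then reused every time the module is applied."""

    def __init__(self, output_size):
        self.output_size = output_size
        self._w = None  # constructed on first call, then shared

    def __call__(self, inputs):
        if self._w is None:
            rng = np.random.default_rng(0)
            self._w = rng.standard_normal((inputs.shape[-1], self.output_size))
        return inputs @ self._w

layer = ToyLinear(4)
a = layer(np.ones((2, 3)))   # creates weights of shape (3, 4)
b = layer(np.zeros((5, 3)))  # reuses the same weights
```

Because both calls go through the same `ToyLinear` instance, the two outputs are guaranteed to be produced by identical weights – the variable-sharing property that module libraries make automatic.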
Distill: Communicating the science of machine learning
As in every field of science, clear communication is vital in machine learning research: it helps to drive forward the state of the art by allowing the research community to share, discuss and build upon new findings.

For this reason, we at DeepMind are enthusiastic supporters of Distill, a new independent, web-based medium for clear and open – demystified – machine learning research, comprising a journal, prizes recognising outstanding work, and tools to create interactive essays.
Enabling Continual Learning in Neural Networks
Computer programs that learn to perform tasks also typically forget them very quickly. We show that the learning rule can be modified so that a program can remember old tasks when learning a new one. This is an important step towards more intelligent programs that are able to learn progressively and adaptively.
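The modified learning rule referred to here is elastic weight consolidation (EWC), which adds a quadratic penalty to the loss that anchors parameters important for old tasks near their old values, weighted by an importance estimate (the Fisher information). A minimal NumPy sketch of that penalty, with illustrative names and toy values:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC regulariser: 0.5 * lam * sum_i F_i * (theta_i - theta*_i)^2.
    Parameters that mattered for the old task (large F_i) are pulled
    strongly back towards their old values theta*_i."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

theta_star = np.array([1.0, -2.0])  # parameters after learning task A
fisher = np.array([10.0, 0.1])      # per-parameter importance for task A
theta = np.array([1.5, 0.0])        # candidate parameters while learning task B

penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)
```

During training on the new task, this penalty is simply added to the new task's loss, so parameters the old task barely used (small Fisher values) remain free to change.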
Trust, confidence and Verifiable Data Audit
Data can be a powerful force for social progress, helping our most important institutions to improve how they serve their communities. As cities, hospitals, and transport systems find new ways to understand what people need from them, they're unearthing opportunities to change how they work today and identifying exciting ideas for the future.

Data can only benefit society if it has society's trust and confidence, and here we all face a challenge. Now that you can use data for so many more purposes, people aren't just asking about who's holding information and whether it's being kept securely – they also want greater assurances about precisely what is being done with it.

In that context, auditability becomes an increasingly important virtue. Any well-built digital tool will already log how it uses data, and be able to show and justify those logs if challenged. But the more powerful and secure we can make that audit process, the easier it becomes to establish real confidence about how data is being used in practice.

Imagine a service that could give mathematical assurance about what is happening with each individual piece of personal data, without possibility of falsification or omission. Imagine the ability for the inner workings of that system to be checked in real time, to ensure that data is only being used as it should be.
A milestone for DeepMind Health and Streams
In November we announced a groundbreaking five-year partnership with the Royal Free London to deploy and expand on Streams, our secure clinical app that aims to improve care by getting the right information to the right clinician at the right time.

The first version of Streams has now been deployed at the Royal Free, and we're delighted that the early feedback from nurses, doctors and patients has so far been really positive. Some of the nurses using Streams at the hospital estimate that the app is saving them up to two hours per day, giving them more time to spend with patients in need. And we're starting to hear the first stories of patients whose conditions were picked up and acted on faster thanks to Streams alerts.

Patients like Afia Ahmed, who was seen more quickly thanks to the instant alerts. You can read more about the deployment and some of the early positive signs over on the Royal Free's website.