Deep neural networks have learnt to do an amazing array of tasks – from recognising and reasoning about objects in images to playing Atari and Go at super-human levels. As these tasks and network architectures become more complex, the solutions that neural networks learn become more difficult to understand.
Enhancing patient safety at Taunton and Somerset NHS Foundation Trust
We're delighted to announce our first partnership outside of London to help doctors and nurses break new ground in the NHS's use of digital technology.

Streams is our secure mobile app that helps doctors and nurses give faster urgent care to patients showing signs of deterioration by giving them the right information more quickly. Over the next five years, we'll be rolling it out at Taunton and Somerset NHS Foundation Trust as part of a new partnership. You can find out more on the trust's website.

Our collaboration with Taunton and Somerset follows on from our work with Imperial College Healthcare NHS Trust and the Royal Free NHS Foundation Trust. Nurses already using Streams at the Royal Free tell us that the app is saving them up to two hours a day, allowing them to redirect valuable time back into targeted patient care.

Where some current systems can take hours, Streams uses breaking-news-style alerts to notify clinicians within seconds when a test result indicates that one of their patients shows signs of becoming ill.
Learning through human feedback
We believe that Artificial Intelligence will be one of the most important and widely beneficial scientific advances ever made, helping humanity tackle some of its greatest challenges, from climate change to delivering advanced healthcare. But for AI to deliver on this promise, we know that the technology must be built in a responsible manner and that we must consider all potential challenges and risks. That is why DeepMind co-founded initiatives like the Partnership on AI to Benefit People and Society, and why we have a team dedicated to technical AI Safety.

Research in this field needs to be open and collaborative to ensure that best practices are adopted as widely as possible, which is why we are also collaborating with OpenAI on research in technical AI Safety.

One of the central questions in this field is how we allow humans to tell a system what we want it to do and – importantly – what we don't want it to do. This is increasingly important as the problems we tackle with machine learning grow more complex and are applied in the real world.

The first results from our collaboration demonstrate one method to address this, by allowing humans with no technical experience to teach a reinforcement learning (RL) system – an AI that learns by trial and error – a complex goal. This removes the need for the human to specify a goal for the algorithm in advance.
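To make the idea concrete, here is a minimal sketch of learning a reward function from human comparisons. This is an illustrative toy, not the actual system from the collaboration: it assumes a linear reward model over hand-made features and simulated (rather than real) human labels, and fits the model so that the clip a human prefers receives the higher predicted reward.

```python
import numpy as np

# Illustrative sketch only (an assumption, not DeepMind/OpenAI's code):
# a human compares pairs of short behaviour clips, and we fit a reward
# model so preferred clips score higher (a Bradley-Terry-style loss).

rng = np.random.default_rng(0)

def predicted_reward(w, clip):
    """Linear reward model: per-step reward is a dot product with features."""
    return sum(w @ step for step in clip)

def preference_prob(w, clip_a, clip_b):
    """Model's probability that the human prefers clip_a over clip_b."""
    ra, rb = predicted_reward(w, clip_a), predicted_reward(w, clip_b)
    return 1.0 / (1.0 + np.exp(rb - ra))

# Hypothetical data: the hidden "true" goal rewards only the first feature.
true_w = np.array([1.0, 0.0])
clips = [[rng.normal(size=2) for _ in range(5)] for _ in range(200)]

# Simulated human labels: 1 if clip_a is preferred, else 0.
comparisons = []
for _ in range(300):
    a, b = rng.integers(len(clips), size=2)
    label = 1.0 if predicted_reward(true_w, clips[a]) > predicted_reward(true_w, clips[b]) else 0.0
    comparisons.append((clips[a], clips[b], label))

# Fit the reward model by gradient ascent on the label log-likelihood.
w = np.zeros(2)
lr = 0.05
for _ in range(100):
    grad = np.zeros(2)
    for clip_a, clip_b, label in comparisons:
        p = preference_prob(w, clip_a, clip_b)
        feat_diff = sum(clip_a) - sum(clip_b)  # gradient of (ra - rb) w.r.t. w
        grad += (label - p) * feat_diff
    w += lr * grad / len(comparisons)

# The learned w should recover the hidden preference: w[0] dominates.
```

In the real setting the reward model is a neural network over raw observations and the RL agent is trained against it in a loop, but the core mechanism is the same: preferences over behaviour, not a hand-specified goal, drive learning.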
A neural approach to relational reasoning
Consider the reader who pieces together the evidence in an Agatha Christie novel to predict the culprit of the crime, a child who runs ahead of her ball to prevent it rolling into a stream, or even a shopper who compares the relative merits of buying kiwis or mangos at the market.

We carve our world into relations between things. And we understand how the world works through our capacity to draw logical conclusions about how these different things – such as physical objects, sentences, or even abstract ideas – are related to one another. This ability is called relational reasoning and is central to human intelligence.

We construct these relations from the cascade of unstructured sensory inputs we experience every day. For example, our eyes take in a barrage of photons, yet our brain organises this blooming, buzzing confusion into the particular entities that we need to relate.
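One way to give a neural network this pairwise structure is a relation-network-style module: compute a learned relation g(o_i, o_j) for every pair of objects and aggregate the results with a second function f. The sketch below is an assumption for illustration (random weights, tiny MLPs), not code from the post, but it shows the characteristic shape of the computation.

```python
import numpy as np

# Illustrative sketch (an assumption, not the post's implementation):
# a relation-network-style forward pass, RN(O) = f(sum_{i,j} g(o_i, o_j)).

rng = np.random.default_rng(0)

def mlp(weights, x):
    """Tiny MLP: linear layers with ReLU between them, linear final layer."""
    for i, w in enumerate(weights):
        x = w @ x
        if i < len(weights) - 1:
            x = np.maximum(x, 0.0)
    return x

obj_dim, hidden, g_out, f_out = 4, 8, 8, 2
g_weights = [rng.normal(size=(hidden, 2 * obj_dim)), rng.normal(size=(g_out, hidden))]
f_weights = [rng.normal(size=(hidden, g_out)), rng.normal(size=(f_out, hidden))]

def relation_network(objects):
    """Sum the relation g over all ordered object pairs, then apply f."""
    pair_sum = np.zeros(g_out)
    for o_i in objects:
        for o_j in objects:
            pair_sum += mlp(g_weights, np.concatenate([o_i, o_j]))
    return mlp(f_weights, pair_sum)

objects = [rng.normal(size=obj_dim) for _ in range(5)]
out = relation_network(objects)

# Key design property: because relations are summed over all pairs,
# the output does not depend on the order the objects arrive in.
assert np.allclose(out, relation_network(objects[::-1]))
```

The sum over pairs is what builds the relational bias in: every entity is explicitly compared with every other, and the order of the entities does not matter.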