When we set up DeepMind Health, we believed that pioneering technology should be matched with pioneering oversight. That's why, when we launched in February 2016, we did so with an unusual and additional mechanism: a panel of Independent Reviewers, who meet regularly throughout the year to scrutinise our work. This is an innovative approach within tech companies – one that forces us to question not only what we are doing, but how and why we are doing it – and we believe that their robust challenges make us better.

In their report last year, the Independent Reviewers asked us important questions about our engagement with stakeholders, data governance, and the behavioural elements that need to be considered when deploying new technologies in clinical environments. We've done a lot over the past twelve months to address these questions, and we're proud that this year's Annual Report recognises the progress we've made.

Of course, this year's report includes a series of new recommendations for areas where we can continue to improve, which we'll be working on in the coming months. In particular, we're developing our longer-term business model and roadmap, and we look forward to sharing our ideas once they're further along. Rather than charging for the early stages of our work, our first priority has been to prove that our technologies can help improve patient care and reduce costs.

Read More
Neural scene representation and rendering
There is more than meets the eye when it comes to how we understand a visual scene: our brains draw on prior knowledge to reason and to make inferences that go far beyond the patterns of light that hit our retinas. For example, when entering a room for the first time, you instantly recognise the items it contains and where they are positioned. If you see three legs of a table, you will infer that there is probably a fourth leg with the same shape and colour hidden from view. Even if you can't see everything in the room, you'll likely be able to sketch its layout, or imagine what it looks like from another perspective.

These visual and cognitive tasks are seemingly effortless for humans, but they represent a significant challenge for our artificial systems. Today, state-of-the-art visual recognition systems are trained on large datasets of annotated images produced by humans. Acquiring this data is a costly and time-consuming process, requiring individuals to label every aspect of every object in each scene in the dataset. As a result, often only a small subset of a scene's overall contents is captured, which limits the artificial vision systems trained on that data.

Read More
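As a rough illustration of the alternative hinted at here – learning a scene representation from a scene's own observations rather than from human labels – the sketch below pairs a representation network that encodes (image, viewpoint) observations with a renderer that predicts the view from a new viewpoint. The module names, dimensions, and training signal are assumptions made for illustration, not the architecture described in the full post.

```python
# Illustrative sketch only: a scene-representation model trained without human
# labels. Shapes, module names, and the training signal (predicting a held-out
# view from other views of the same scene) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepresentationNet(nn.Module):
    """Encodes each (image, viewpoint) observation into a scene vector."""
    def __init__(self, img_dim=1024, view_dim=7, rep_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_dim + view_dim, 512), nn.ReLU(),
            nn.Linear(512, rep_dim))

    def forward(self, images, viewpoints):
        # images: (batch, n_views, img_dim), viewpoints: (batch, n_views, view_dim)
        codes = self.encoder(torch.cat([images, viewpoints], dim=-1))
        # Summing lets any number of observations be combined into one scene code.
        return codes.sum(dim=1)

class Renderer(nn.Module):
    """Predicts the image seen from a query viewpoint, given the scene vector."""
    def __init__(self, img_dim=1024, view_dim=7, rep_dim=256):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(rep_dim + view_dim, 512), nn.ReLU(),
            nn.Linear(512, img_dim))

    def forward(self, scene_rep, query_viewpoint):
        return self.decoder(torch.cat([scene_rep, query_viewpoint], dim=-1))

def training_step(rep_net, renderer, ctx_images, ctx_views, query_view, target_image):
    """One step: reconstruct an unseen view of the scene; no annotation needed."""
    scene_rep = rep_net(ctx_images, ctx_views)
    prediction = renderer(scene_rep, query_view)
    return F.mse_loss(prediction, target_image)
```

The point of the sketch is that the supervision comes from the scene itself: the target is simply another view of the same room, so no human labelling of objects is required.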
Royal Free London publishes findings of legal audit in use of Streams
Last July, the Information Commissioner concluded an investigation into the use of the Streams app at the Royal Free London NHS Foundation Trust. As part of the investigation, the Royal Free signed up to a set of undertakings, one of which was to commission a third party to audit the Royal Free's current data processing arrangements with DeepMind, to ensure that they fully complied with data protection law and respected the privacy and confidentiality rights of its patients.

You can read the full report on the Royal Free's website here, and the Information Commissioner's Office's response here. The report also makes three recommendations that relate to DeepMind Health:

- It recommends a minor amendment to our information processing agreement so that it contains an express obligation on us to inform the Royal Free if, in our opinion, the Royal Free's instructions infringe data protection laws. We're working with the Royal Free to make this change to the agreement.
- It recommends that we continue to review and audit the activity of staff who have been approved for remote access to these systems.
- It recommends that the Royal Free terminate the historical memorandum of understanding (MOU) with DeepMind. This was originally signed in January 2016 to detail the services that we then planned to develop with the Trust.

Read More
HypRank: How Alexa determines what skill can best meet a customer’s need
Amazon Alexa currently has more than 40,000 third-party skills, which customers use to get information, perform tasks, play games, and more. To make it easier for customers to find and engage with skills, we are moving toward skill invocation that doesn’t require mentioning a skill by name (as highlighted in a recent post).

Read More
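Name-free invocation means Alexa itself must decide which skill should handle an utterance; the title refers to HypRank, a hypothesis ranker. As a hedged illustration of the general idea only – not the published model – the sketch below scores a shortlist of candidate skill interpretations and routes the request to the highest-scoring one. The feature set and the hand-weighted scoring function are illustrative assumptions.

```python
# Minimal, illustrative sketch of ranking skill hypotheses for name-free
# invocation. Features and weights are assumptions, not the HypRank model.
from dataclasses import dataclass
from typing import List

@dataclass
class SkillHypothesis:
    skill_id: str              # candidate skill that could handle the utterance
    intent_confidence: float   # NLU confidence for this skill's interpretation
    popularity: float          # prior usage signal for the skill
    context_match: float       # how well the skill fits the dialogue context

def score(h: SkillHypothesis) -> float:
    # A hand-weighted linear score stands in for a learned ranking model.
    return 0.6 * h.intent_confidence + 0.25 * h.context_match + 0.15 * h.popularity

def rank_hypotheses(candidates: List[SkillHypothesis]) -> List[SkillHypothesis]:
    """Return candidate skills ordered from best to worst match."""
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        SkillHypothesis("recipe_skill", intent_confidence=0.82, popularity=0.4, context_match=0.7),
        SkillHypothesis("trivia_skill", intent_confidence=0.55, popularity=0.9, context_match=0.2),
    ]
    best = rank_hypotheses(candidates)[0]
    print(f"Routing utterance to: {best.skill_id}")
```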
The Scalable Neural Architecture behind Alexa’s Ability to Select Skills
Alexa is a cloud-based service with natural-language-understanding capabilities that powers devices like Amazon Echo, Echo Show, Echo Plus, Echo Spot, Echo Dot, and more. Voice services like Alexa have traditionally supported small numbers of well-separated domains, such as calendar or weather. To extend Alexa’s capabilities, Amazon released the Alexa Skills Kit in 2015 so that third-party developers could add to Alexa’s voice-driven capabilities. We refer to these new third-party capabilities as skills, and Alexa currently has more than 40,000.

Read More
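Selecting among tens of thousands of skills is above all a scaling problem: the system cannot afford a separate model per skill for every utterance. As a hedged sketch of one common way to keep this tractable – an assumption for illustration, not the specific architecture the post describes – the example below encodes the utterance once and scores it against a matrix of learned skill embeddings in a single matrix multiplication, so adding a skill only adds a row to that matrix.

```python
# Illustrative sketch only: scoring one utterance against many skills with a
# shared encoder and a skill-embedding matrix. Dimensions and modules are
# assumptions; this is not the architecture from the post.
import torch
import torch.nn as nn

class SkillSelector(nn.Module):
    def __init__(self, vocab_size=50_000, embed_dim=128, num_skills=40_000):
        super().__init__()
        # Bag-of-words utterance encoder, shared across all skills.
        self.word_embed = nn.EmbeddingBag(vocab_size, embed_dim)
        # One learned vector per skill; new skills just add rows here.
        self.skill_embed = nn.Parameter(torch.randn(num_skills, embed_dim) * 0.01)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word ids
        utterance_vec = self.word_embed(token_ids)      # (batch, embed_dim)
        scores = utterance_vec @ self.skill_embed.T     # (batch, num_skills)
        return scores.topk(k=5, dim=-1)                 # shortlist of 5 skills

model = SkillSelector()
tokens = torch.randint(0, 50_000, (2, 6))   # two toy utterances of six tokens
top_scores, top_skill_ids = model(tokens)
```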
New way to annotate training data should enable more sophisticated Alexa interactions
Developing a new Alexa skill typically means training a machine-learning system with annotated data, and the skill’s ability to “understand” natural-language requests is limited by the expressivity of the semantic representation used to do the annotation. So far, the techniques used to represent natural language have been fairly simple, so Alexa has been able to handle only relatively simple requests.

Read More
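To make the expressivity point concrete, here is a made-up, hedged example contrasting a flat intent-and-slot annotation with a nested representation that can capture a request embedded inside a request. The label names and structure are illustrative assumptions, not Alexa's actual annotation scheme.

```python
# Made-up example of how the annotation scheme bounds what a model can learn.
# Label names and structure are illustrative assumptions, not Alexa's scheme.

utterance = "play the song that was playing when I got home yesterday"

# Flat intent/slot annotation: fine for simple requests, but it cannot express
# that the song is defined by an embedded event ("when I got home yesterday").
flat_annotation = {
    "intent": "PlayMusic",
    "slots": {"song": "the song that was playing when I got home yesterday"},
}

# A nested (tree-structured) annotation can represent the embedded request,
# so a model trained on such data can resolve the reference before playback.
nested_annotation = {
    "intent": "PlayMusic",
    "slots": {
        "song": {
            "intent": "ResolveSongByEvent",
            "slots": {"event": "arrived home", "date": "yesterday"},
        }
    },
}
```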