Many novel machine learning innovations contribute to AlphaFold's current level of accuracy. We give a high-level overview of the system below; for a technical description of the network architecture see our AlphaFold methods paper and especially its extensive Supplementary Information.
Putting the power of AlphaFold into the world’s hands
In partnership with EMBL-EBI, we're incredibly proud to be launching the AlphaFold Protein Structure Database.
Melting Pot: an evaluation suite for multi-agent reinforcement learning
Here we introduce Melting Pot, a scalable evaluation suite for multi-agent reinforcement learning. Melting Pot assesses generalisation to novel social situations involving both familiar and unfamiliar individuals, and has been designed to test a broad range of social interactions such as cooperation, competition, deception, reciprocation, trust and stubbornness. Melting Pot offers researchers a set of 21 MARL "substrates" (multi-agent games) on which to train agents, and over 85 unique test scenarios on which to evaluate these trained agents.
An update on our racial justice efforts
To help combat racism and advance racial equity, we've made donations to organisations that support Black communities in the AI/ML space.
Advancing sports analytics through AI research
Sports analytics is in the midst of a remarkably important era, offering interesting opportunities for AI researchers and sports leaders alike.
Game theory as an engine for large-scale data analysis
Our research explored a new approach to an old problem: we reformulated principal component analysis (PCA), a type of eigenvalue problem, as a competitive multi-agent game we call EigenGame.
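To make the reformulation concrete, here is a minimal NumPy sketch of the idea: each "player" owns one candidate principal component and ascends a utility that rewards captured variance while penalising alignment with the players ranked before it. This is an illustrative sketch in the spirit of EigenGame rather than the published implementation; the step size, iteration count and sequential update schedule are assumptions chosen for the example.

import numpy as np

def eigengame_pca(X, k=3, steps=1000, lr=0.05, seed=0):
    # Each row of V is one player's unit vector; M is the (scaled) covariance.
    rng = np.random.default_rng(seed)
    M = X.T @ X / len(X)
    d = M.shape[0]
    V = rng.normal(size=(k, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        for i in range(k):
            # Reward variance along V[i]; penalise overlap with the
            # directions already claimed by earlier players j < i.
            penalty = sum(
                (V[i] @ M @ V[j]) / (V[j] @ M @ V[j]) * (M @ V[j]) for j in range(i)
            )
            grad = 2 * (M @ V[i] - penalty)
            grad -= (grad @ V[i]) * V[i]   # project onto the sphere's tangent space
            V[i] = V[i] + lr * grad
            V[i] /= np.linalg.norm(V[i])   # retract back onto the unit sphere
    return V                               # rows approximate the top-k components

# Example: on anisotropic data the rows of V should approximately align
# (up to sign) with the leading eigenvectors of the covariance matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 10)) * np.linspace(3.0, 1.0, 10)
components = eigengame_pca(X)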
Alchemy: A structured task distribution for meta-reinforcement learning
There has been rapidly growing interest in developing methods for meta-learning within deep RL. Although there has been substantive progress toward such 'meta-reinforcement learning,' research in this area has been held back by a shortage of benchmark tasks. In the present work, we aim to ease this problem by introducing (and open-sourcing) Alchemy, a useful new benchmark environment for meta-RL, along with a suite of analysis tools.
Data, Architecture, or Losses: What Contributes Most to Multimodal Transformer Success?
In this work, we examine which aspects of multimodal transformers – attention, losses, and pretraining data – are important to their success at multimodal pretraining. We find that multimodal attention, where both language and image transformers attend to each other, is crucial for these models' success. Models with other types of attention (even with more depth or parameters) fail to achieve results comparable to shallower and smaller models with multimodal attention.
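As a concrete illustration of what "multimodal attention" means here, the sketch below shows single-head cross-attention in which language token embeddings attend over image patch embeddings and vice versa. It is a simplified stand-in for the full transformer blocks studied in the paper; the dimensions, random weights and single head are assumptions made for the example.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, Wq, Wk, Wv):
    # One modality (queries) attends over the other (context) with
    # scaled dot-product attention, single head.
    q = queries @ Wq
    k = context @ Wk
    v = context @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 64
text = rng.normal(size=(12, d))    # 12 language-token embeddings (illustrative)
image = rng.normal(size=(49, d))   # 49 image-patch embeddings (illustrative)
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d ** -0.5 for _ in range(3))

text_to_image = cross_attention(text, image, Wq, Wk, Wv)   # language attends to vision
image_to_text = cross_attention(image, text, Wq, Wk, Wv)   # vision attends to language

Self-attention within a single modality would call the same function with queries and context drawn from one stream; the finding above is that letting the two streams attend to each other is what matters most.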
MuZero: Mastering Go, chess, shogi and Atari without rules
Planning winning strategies in unknown environments is a step forward in the pursuit of general-purpose algorithms.
Imitating Interactive Intelligence
We first create a simulated environment, the Playroom, in which virtual robots can engage in a variety of interesting interactions by moving around, manipulating objects, and speaking to each other. The Playroom's dimensions can be randomised, as can its allocation of shelves, furniture, landmarks like windows and doors, and an assortment of children's toys and domestic objects. The diversity of the environment enables interactions involving reasoning about space and object relations, ambiguity of references, containment, construction, support, occlusion, and partial observability. We embedded two agents in the Playroom to provide a social dimension for studying joint intentionality, cooperation, communication of private knowledge, and so on.