OpenAI Standardizes on PyTorch
We are standardizing OpenAI’s deep learning framework on PyTorch. In the past, we implemented projects in many frameworks depending on their relative strengths. We’ve now chosen to standardize to make it easier for our team to create and share optimized implementations of our models.
As part of this move, we’ve just released a PyTorch-enabled version of Spinning Up in Deep RL, an open-source educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning. We are also in the process of writing PyTorch bindings for our highly-optimized blocksparse kernels, and will open-source those bindings in upcoming months.
The main reason we’ve chosen PyTorch is to increase our research productivity at scale on GPUs. It is very easy to try and execute new research ideas in PyTorch; for example, switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days. We’re also excited to be joining a rapidly-growing developer community, including organizations like Facebook and Microsoft, in pushing scale and performance on GPUs.
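To illustrate the define-by-run style behind this kind of rapid iteration, here is a minimal sketch — our own toy example, not OpenAI code — assuming only the core `torch` package. Because the model is ordinary Python, a research idea can be tried by editing the forward pass and simply rerunning the script:

```python
import torch

torch.manual_seed(0)

# Toy model and data for illustration only.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.Tanh(),
    torch.nn.Linear(8, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)  # toy inputs
y = torch.randn(16, 1)  # toy targets

losses = []
for step in range(20):
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()      # autograd differentiates the Python code just executed
    optimizer.step()
    losses.append(loss.item())
```

Because the computation graph is rebuilt on every forward call, changing the architecture or loss is an ordinary code edit, with standard Python debugging available at every step.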
Going forward, we’ll primarily use PyTorch as our deep learning framework, though we’ll sometimes use others when there’s a specific technical reason to do so. Many of our teams have already made the switch, and we look forward to contributing to the PyTorch community in upcoming months.
OpenAI
Web Search and Data Mining conference is “extraordinarily selective”
Only 15% of submissions to “boutique” conference are accepted; four Amazon papers to be presented.
Betty Mohler: Amazon is a great place to further the state-of-the-art in the world of digital humans
Principal research scientist Betty Mohler talks about virtual reality, digital humans and Amazon.
7 must-see presentations from AWS re:Invent 2019
Spotting deepfakes, indoor farming, precision cancer treatment, and more.
Preserving privacy in analyses of textual data
New “Mad Libs” technique for replacing words in individual sentences is grounded in metric differential privacy.
How we taught Alexa to correct her own defects
Self-learning system uses customers’ rephrased requests as implicit error signals.
Amazon’s internal conferences build a sense of community: Kevin Small
Kevin Small has been involved in organizing many of Amazon’s internal conferences in his more than five years at Amazon. In this conversation, Kevin explains how Amazon’s internal conferences facilitate important breakthroughs, forge collaborations between groups, and help advance one’s career.
How AWS gets ideas for its new AI products and services
At re:Invent 2019, Amazon executive Swami Sivasubramanian spoke about a commitment to democratizing machine learning, and making its benefits available to all.
The research behind Alexa’s popular whispered speech
According to listener tests, whispers produced by a new machine learning model sound as natural as vocoded human whispers.