Amazon’s Bernhard Schölkopf and Dominik Janzing are the first and second authors of a “breakthrough” 2012 paper.
Introducing the PlayTorch app: Rapidly Create Mobile AI Experiences
In December, we announced PyTorch Live, a toolkit for building AI-powered mobile prototypes in minutes. The initial release included a command-line interface to set up a development environment and an SDK for building AI-powered experiences in React Native. Today, we’re excited to share that PyTorch Live will now be known as PlayTorch. This new release provides an improved and simplified developer experience. PlayTorch development is independent of the PyTorch project, and the PlayTorch code repository is moving into the Meta Research GitHub organization.
A New Workflow: The PlayTorch App
The PlayTorch team is excited to announce that we have partnered with Expo to change the way AI-powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. You will now be able to build cross-platform, AI-powered prototypes from the very browser you are using to read this blog.
To make this happen, we are releasing the PlayTorch app, which can run AI-powered experiences built in the Expo Snack web-based code editor.
The PlayTorch app can be downloaded from the Apple App Store and Google Play Store. With the app installed, you can head over to playtorch.dev/snack and write the code for your AI-powered PlayTorch Snack. When you want to try what you’ve built, you can use the PlayTorch app’s QR code scanner to scan the QR code on the Snack page and load the code to your device.
NOTE: PlayTorch Snacks will not work in the Expo Go app.
More to Explore in the PlayTorch App
AI Demos
The PlayTorch app comes with several examples of how you can build AI-powered experiences with a variety of machine learning models, from object detection to natural language processing. See what can be built with the PlayTorch SDK and get inspired to make something of your own as you play with the examples.
Sharing Your Creations
Any PlayTorch Snack that you run in the PlayTorch app can be shared with others in an instant. When they open the link on their device, the PlayTorch app will instantly load what you’ve built from the cloud so they can experience it first hand.
When you have something you want to share, let us know on Discord or Twitter or embed the PlayTorch Snack on your own webpage.
SDK Overhaul
We learned a lot from the community after our initial launch in December and have been hard at work over the past several months to make the PlayTorch SDK (formerly known as PyTorch Live) simpler, more performant, and more robust. In our initial version, the SDK relied on config files to define how a model ingested and produced data.
Today, we are happy to announce the next version of our SDK can handle data processing in JavaScript for your prototypes with the new PlayTorch API that leverages the JavaScript Interface (JSI) to directly call C++ code. Not only have we completely redone the way you can interact with models, but we have also greatly expanded the variety of supported model architectures.
A New Data Processing API for Prototyping
With this JSI API, we now give users direct access to tensors (the core data format for machine learning). Instead of being limited to predefined transformations, you can now manipulate tensors however you like for your prototypes.
No more switching back and forth between code and config. You can now write everything in JavaScript and take advantage of the type annotations and autocomplete features available in JavaScript and TypeScript.
Check out our tutorials to see the new Data Processing API in action, take a deeper dive in the API docs, or inspect the code yourself on GitHub.
Expanded Use Cases
With the new version of the SDK, we have added support for several cutting edge models.
Image-to-image transformations are now supported thanks to our robust JSI API, so you can see what your world would look like if it were an anime.
Translate French to English with an AI powered translator using the Seq2Seq model.
Use DeepLab V3 to segment images!
Start Playing
If you want to start creating AI experiences yourself, head over to playtorch.dev and try out our tutorials. Each tutorial will guide you through building a simple AI powered experience that you can instantly run on your phone and share with others.
How to Get Support
Join us on Discord, collaborate with us on GitHub, or follow us on Twitter. Got questions or feedback? We’d love to hear from you!
Meta Research PhD Fellowship Spotlight: Making the most of data with meta-learning
As a continuation of our Fellowship spotlight series, we’re highlighting Misha Khodak, a 2021 Meta Research PhD Fellow in machine learning.
Explained: How to tell if artificial intelligence is working the way we want it to
About a decade ago, deep-learning models started achieving superhuman results on all sorts of tasks, from beating world-champion board game players to outperforming doctors at diagnosing breast cancer.
These powerful deep-learning models are usually based on artificial neural networks, which were first proposed in the 1940s and have become a popular type of machine learning. A computer learns to process data using layers of interconnected nodes, or neurons, that mimic the human brain.
As the field of machine learning has grown, artificial neural networks have grown along with it.
Deep-learning models are now often composed of millions or billions of interconnected nodes in many layers that are trained to perform detection or classification tasks using vast amounts of data. But because the models are so enormously complex, even the researchers who design them don’t fully understand how they work. This makes it hard to know whether they are working correctly.
For instance, maybe a model designed to help physicians diagnose patients correctly predicted that a skin lesion was cancerous, but it did so by focusing on an unrelated mark that happens to frequently occur when there is cancerous tissue in a photo, rather than on the cancerous tissue itself. This is known as a spurious correlation. The model gets the prediction right, but it does so for the wrong reason. In a real clinical setting where the mark does not appear on cancer-positive images, it could result in missed diagnoses.
With so much uncertainty swirling around these so-called “black-box” models, how can one unravel what’s going on inside the box?
This puzzle has led to a new and rapidly growing area of study in which researchers develop and test explanation methods (also called interpretability methods) that seek to shed some light on how black-box machine-learning models make predictions.
What are explanation methods?
At their most basic level, explanation methods are either global or local. A local explanation method focuses on explaining how the model made one specific prediction, while global explanations seek to describe the overall behavior of an entire model. This is often done by developing a separate, simpler (and hopefully understandable) model that mimics the larger, black-box model.
But because deep-learning models work in fundamentally complex and nonlinear ways, developing an effective global explanation model is particularly challenging. This has led researchers to turn much of their recent focus onto local explanation methods instead, explains Yilun Zhou, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who studies models, algorithms, and evaluations in interpretable machine learning.
The most popular types of local explanation methods fall into three broad categories.
The first and most widely used type of explanation method is known as feature attribution. Feature attribution methods show which features were most important when the model made a specific decision.
Features are the input variables that are fed to a machine-learning model and used in its prediction. When the data are tabular, features are drawn from the columns in a dataset (they are transformed using a variety of techniques so the model can process the raw data). For image-processing tasks, on the other hand, every pixel in an image is a feature. If a model predicts that an X-ray image shows cancer, for instance, the feature attribution method would highlight the pixels in that specific X-ray that were most important for the model’s prediction.
Essentially, feature attribution methods show what the model pays the most attention to when it makes a prediction.
“Using this feature attribution explanation, you can check to see whether a spurious correlation is a concern. For instance, it will show if the pixels in a watermark are highlighted or if the pixels in an actual tumor are highlighted,” says Zhou.
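For readers who want a concrete picture, here is a minimal sketch of one common flavor of feature attribution, a gradient-based saliency map, written in PyTorch. The model and image here are placeholders, and published attribution methods (SHAP, integrated gradients, and others) are more sophisticated than this sketch.

```python
import torch

# Minimal gradient-saliency sketch: attribute a classifier's prediction to
# input pixels. `model` and `image` are placeholders supplied by the user.
def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    model.eval()
    x = image.clone().unsqueeze(0).requires_grad_(True)   # add a batch dimension
    scores = model(x)                                      # shape: (1, num_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                        # d(score) / d(pixel)
    # A pixel's importance is its gradient magnitude, maxed over color channels
    return x.grad.abs().amax(dim=1).squeeze(0)             # (H, W) heat map
```

In the skin-lesion example above, a heat map that lights up over a stray mark instead of the lesion itself is exactly the kind of spurious correlation Zhou describes.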
A second type of explanation method is known as a counterfactual explanation. Given an input and a model’s prediction, these methods show how to change that input so it falls into another class. For instance, if a machine-learning model predicts that a borrower would be denied a loan, the counterfactual explanation shows what factors need to change so her loan application is accepted. Perhaps her credit score or income, both features used in the model’s prediction, need to be higher for her to be approved.
“The good thing about this explanation method is it tells you exactly how you need to change the input to flip the decision, which could have practical usage. For someone who is applying for a mortgage and didn’t get it, this explanation would tell them what they need to do to achieve their desired outcome,” he says.
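As a rough illustration only: for a differentiable model over continuous features, a counterfactual can be searched for by nudging the input until the decision flips while staying close to the original. The model and applicant below are placeholders, and practical counterfactual methods add constraints for plausibility and for features that cannot be changed.

```python
import torch

# Bare-bones counterfactual search. `model` maps a feature vector to a single
# logit (> 0 means the loan is approved); `applicant` is the original input.
def counterfactual(model, applicant: torch.Tensor, steps: int = 500, lr: float = 0.05):
    x = applicant.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logit = model(x).sum()
        # Push the decision toward approval while staying near the original input
        loss = torch.relu(-logit) + 0.1 * (x - applicant).pow(2).sum()
        loss.backward()
        optimizer.step()
        if model(x).sum().item() > 0:          # decision flipped
            break
    return x.detach()   # e.g., the income or credit-score features that changed
```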
The third category of explanation methods are known as sample importance explanations. Unlike the others, this method requires access to the data that were used to train the model.
A sample importance explanation will show which training sample a model relied on most when it made a specific prediction; ideally, this is the most similar sample to the input data. This type of explanation is particularly useful if one observes a seemingly irrational prediction. There may have been a data entry error that affected a particular sample that was used to train the model. With this knowledge, one could fix that sample and retrain the model to improve its accuracy.
How are explanation methods used?
One motivation for developing these explanations is to perform quality assurance and debug the model. With more understanding of how features impact a model’s decision, for instance, one could identify that a model is working incorrectly and intervene to fix the problem, or toss the model out and start over.
Another, more recent, area of research explores the use of machine-learning models to discover scientific patterns that humans haven’t uncovered before. For instance, a cancer-diagnosing model that outperforms clinicians could be faulty, or it could actually be picking up on hidden patterns in an X-ray image that represent an early pathological pathway for cancer, patterns that were either unknown to human doctors or thought to be irrelevant, Zhou says.
It’s still very early days for that area of research, however.
Words of warning
While explanation methods can sometimes be useful for machine-learning practitioners when they are trying to catch bugs in their models or understand the inner workings of a system, end users should proceed with caution when trying to use them in practice, says Marzyeh Ghassemi, an assistant professor and head of the Healthy ML Group in CSAIL.
As machine learning has been adopted in more disciplines, from health care to education, explanation methods are being used to help decision makers better understand a model’s predictions so they know when to trust the model and use its guidance in practice. But Ghassemi warns against using these methods in that way.
“We have found that explanations make people, both experts and nonexperts, overconfident in the ability or the advice of a specific recommendation system. I think it is very important for humans not to turn off that internal circuitry asking, ‘let me question the advice that I am given,’” she says.
Scientists also know from other recent work that explanations make people overconfident, she adds, citing some recent studies by Microsoft researchers.
Far from a silver bullet, explanation methods have their share of problems. For one, Ghassemi’s recent research has shown that explanation methods can perpetuate biases and lead to worse outcomes for people from disadvantaged groups.
Another pitfall of explanation methods is that it is often impossible to tell if the explanation method is correct in the first place. One would need to compare the explanations to the actual model, but since the user doesn’t know how the model works, this is circular logic, Zhou says.
He and other researchers are working on improving explanation methods so they are more faithful to the actual model’s predictions, but Zhou cautions that even the best explanation should be taken with a grain of salt.
“In addition, people generally perceive these models to be human-like decision makers, and we are prone to overgeneralization. We need to calm people down and hold them back to really make sure that the generalized model understanding they build from these local explanations is balanced,” he adds.
Zhou’s most recent research seeks to do just that.
What’s next for machine-learning explanation methods?
Rather than focusing on providing explanations, Ghassemi argues that more effort needs to be done by the research community to study how information is presented to decision makers so they understand it, and more regulation needs to be put in place to ensure machine-learning models are used responsibly in practice. Better explanation methods alone aren’t the answer.
“I have been excited to see that there is a lot more recognition, even in industry, that we can’t just take this information and make a pretty dashboard and assume people will perform better with that. You need to have measurable improvements in action, and I’m hoping that leads to real guidelines about improving the way we display information in these deeply technical fields, like medicine,” she says.
And in addition to new work focused on improving explanations, Zhou expects to see more research related to explanation methods for specific use cases, such as model debugging, scientific discovery, fairness auditing, and safety assurance. By identifying fine-grained characteristics of explanation methods and the requirements of different use cases, researchers could establish a theory that would match explanations with specific scenarios, which could help overcome some of the pitfalls that come from using them in real-world scenarios.
Putting the power of AlphaFold into the world’s hands
When we announced AlphaFold 2 last December, it was hailed as a solution to the 50-year-old protein folding problem. Last week, we published the scientific paper and source code explaining how we created this highly innovative system, and today we’re sharing high-quality predictions for the shape of every single protein in the human body, as well as for the proteins of 20 additional organisms that scientists rely on for their research.
Training Generalist Agents with Multi-Game Decision Transformers
Current deep reinforcement learning (RL) methods can train specialist artificial agents that excel at decision-making on various individual tasks in specific environments, such as Go or StarCraft. However, little progress has been made in extending these results to generalist agents that would not only be capable of performing many different tasks, but also of doing so across a variety of environments with potentially distinct embodiments.
Looking across recent progress in the fields of natural language processing, vision, and generative models (such as PaLM, Imagen, and Flamingo), we see that breakthroughs in making general-purpose models are often achieved by scaling up Transformer-based models and training them on large and semantically diverse datasets. It is natural to wonder, can a similar strategy be used in building generalist agents for sequential decision making? Can such models also enable fast adaptation to new tasks, similar to PaLM and Flamingo?
As an initial step to answer these questions, in our recent paper “Multi-Game Decision Transformers” we explore how to build a generalist agent to play many video games simultaneously. Our model trains an agent that can play 41 Atari games simultaneously at close-to-human performance and that can also be quickly adapted to new games via fine-tuning. This approach significantly improves upon the few existing alternatives to learning multi-game agents, such as temporal difference (TD) learning or behavioral cloning (BC).
A Multi-Game Decision Transformer (MGDT) can play multiple games at a desired level of competency after training on a range of trajectories spanning all levels of expertise.
Don’t Optimize for Return, Just Ask for Optimality
In reinforcement learning, reward refers to the incentive signals that are relevant to completing a task, and return refers to cumulative rewards in a course of interactions between an agent and its surrounding environment. Traditional deep reinforcement learning agents (DQN, SimPLe, Dreamer, etc.) are trained to optimize decisions to achieve the optimal return. At every time step, an agent observes the environment (some also consider the interactions that happened in the past) and decides what action to take to help itself achieve a higher return in future interactions.
In this work, we use Decision Transformers as our backbone approach to training an RL agent. A Decision Transformer is a sequence model that predicts future actions by considering past interactions between an agent and the surrounding environment, and (most importantly) a desired return to be achieved in future interactions. Instead of learning a policy to achieve high return magnitude as in traditional reinforcement learning, Decision Transformers map diverse experiences, ranging from expert-level to beginner-level, to their corresponding return magnitude during training. The idea is that training an agent on a range of experiences (from beginner to expert level) exposes the model to a wider range of variations in gameplay, which in turn helps it extract useful rules of gameplay that allow it to succeed under any circumstance. So during inference, the Decision Transformer can achieve any return value in the range it has seen during training, including the optimal return.
But, how do you know if a return is both optimal and stably achievable in a given environment? Previous applications of Decision Transformers relied on customized definitions of the desired return for each individual task, which required manually defining a plausible and informative range of scalar values that are appropriately interpretable signals for each specific game — a task that is non-trivial and rather unscalable. To address this issue, we instead model a distribution of return magnitudes based on past interactions with the environment during training. At inference time, we simply add an optimality bias that increases the probability of generating actions that are associated with higher returns.
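To make the idea concrete, here is a rough sketch (not the paper’s exact implementation) of sampling a target return with an optimality bias from a learned, discretized return distribution. The function name, the binning scheme, and the exponential tilt are illustrative assumptions.

```python
import numpy as np

def sample_target_return(return_logits: np.ndarray,
                         bin_values: np.ndarray,
                         kappa: float = 10.0) -> float:
    """Sample a return to condition on, biased toward high (near-optimal) values.

    return_logits: model's logits over discretized return bins for this game.
    bin_values: the scalar return associated with each bin.
    kappa: strength of the optimality bias (illustrative value).
    """
    probs = np.exp(return_logits - return_logits.max())
    probs /= probs.sum()                              # learned P(return)
    norm = (bin_values - bin_values.min()) / (bin_values.ptp() + 1e-8)
    tilted = probs * np.exp(kappa * norm)             # tilt toward higher returns
    tilted /= tilted.sum()
    return float(np.random.choice(bin_values, p=tilted))
```

The sampled return is then placed in the input sequence as the conditioning token, and the model predicts the next action given that (deliberately high) target.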
To more comprehensively capture spatial-temporal patterns of agent-environment interactions, we also modified the Decision Transformer architecture to consider image patches instead of a global image representation. Patches allow the model to focus on local dynamics, which helps model game specific information in further detail.
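As a small sketch of what patch-based tokenization looks like (the frame shape and patch size here are illustrative, not necessarily the paper’s exact configuration):

```python
import torch

def patchify(frame: torch.Tensor, patch: int = 14) -> torch.Tensor:
    """Split a (C, H, W) game frame into non-overlapping patch tokens."""
    c, h, w = frame.shape
    tokens = frame.unfold(1, patch, patch).unfold(2, patch, patch)  # C, H/P, W/P, P, P
    tokens = tokens.permute(1, 2, 0, 3, 4).reshape(-1, c * patch * patch)
    return tokens  # (num_patches, patch_dim), ready for a linear embedding layer
```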
These pieces together give us the backbone of Multi-Game Decision Transformers.
Training a Multi-Game Decision Transformer to Play 41 Games at Once
We train one Decision Transformer agent on a large (~1B) and broad set of gameplay experiences from 41 Atari games. In our experiments, this agent, which we call the Multi-Game Decision Transformer (MGDT), clearly outperforms existing reinforcement learning and behavioral cloning methods — by almost 2 times — on learning to play 41 games simultaneously and performs near human-level competency (100% in the following figure corresponds to the level of human gameplay). These results hold when comparing across training methods in both settings where a policy must be learned from static datasets (offline) as well as those where new data can be gathered from interacting with the environment (online).
This result indicates that Decision Transformers are well-suited for multi-task, multi-environment, and multi-embodiment agents.
A concurrent work, “A Generalist Agent”, shows a similar result, demonstrating that large transformer-based sequence models can memorize expert behaviors very well across many more environments. In addition, their work and our work have nicely complementary findings: They show it’s possible to train across a wide range of environments beyond Atari games, while we show it’s possible and useful to train across a wide range of experiences.
In addition to the performance shown above, we found empirically that an MGDT trained on a wide variety of experience outperforms an MGDT trained only on expert-level demonstrations or one that simply clones demonstration behaviors.
Scaling Up Multi-Game Model Size to Achieve Better Performance
Arguably, scale has become the main driving force in many recent machine learning breakthroughs, and it is usually achieved by increasing the number of parameters in a transformer-based model. Our observation on Multi-Game Decision Transformers is similar: performance increases predictably with larger model size. In particular, its performance appears to have not yet hit a ceiling, and compared to other learning systems, performance gains are more significant with increases in model size.
Performance of the Multi-Game Decision Transformer (shown by the blue line) increases predictably with larger model size, whereas other models do not.
Pre-trained Multi-Game Decision Transformers Are Fast Learners
Another benefit of MGDTs is that they can learn how to play a new game from very few gameplay demonstrations (which don’t need to all be expert-level). In that sense, MGDTs can be considered pre-trained models capable of being fine-tuned rapidly on small new gameplay data. Compared with other popular pre-training methods, it clearly shows consistent advantages in obtaining higher scores.
Multi-Game Decision Transformer pre-training (DT pre-training, shown in light blue) demonstrates consistent advantages over other popular models in adaptation to new tasks.
Where Is the Agent Looking?
In addition to the quantitative evaluation, it’s insightful (and fun) to visualize the agent’s behavior. By probing the attention heads, we find that the MGDT model consistently places attention weight on areas of the observed images that contain meaningful game entities. We visualize the model’s attention when predicting the next action for various games and find it consistently attends to entities such as the agent’s on-screen avatar, the agent’s free movement space, non-agent objects, and key environment features. For example, in an interactive setting, having an accurate world model requires knowing how and when to focus on known objects (e.g., currently present obstacles) as well as expecting and/or planning over future unknowns (e.g., negative space). This diverse allocation of attention to many key components of each environment ultimately improves performance.
Here we can see the amount of weight the model places on each key asset of the game scene. Brighter red indicates more emphasis on that patch of pixels.
The Future of Large-Scale Generalist Agents
This work is an important step in demonstrating the possibility of training general-purpose agents across many environments, embodiments, and behavior styles. We have shown the benefit of increased scale on performance and the potential with further scaling. These findings seem to point to a generalization narrative similar to other domains like vision and language — we look forward to exploring the great potential of scaling data and learning from diverse experiences.
We look forward to future research towards developing performant agents for multi-environment and multi-embodiment settings. Our code and model checkpoints can soon be accessed here.
Acknowledgements
We’d like to thank all remaining authors of the paper, including Igor Mordatch, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Sergio Guadarrama, Ian Fischer, Eric Jang, and Henryk Michalewski.
Organize your machine learning journey with Amazon SageMaker Experiments and Amazon SageMaker Pipelines
The process of building a machine learning (ML) model is iterative until you find the candidate model that is performing well and is ready to be deployed. As data scientists iterate through that process, they need a reliable method to easily track experiments to understand how each model version was built and how it performed.
Amazon SageMaker allows teams to take advantage of a broad range of features to quickly prepare, build, train, deploy, and monitor ML models. Amazon SageMaker Pipelines provides a repeatable process for iterating through model build activities, and is integrated with Amazon SageMaker Experiments. By default, every SageMaker pipeline is associated with an experiment, and every run of that pipeline is tracked as a trial in that experiment. Then your iterations are automatically tracked without any additional steps.
In this post, we take a closer look at the motivation behind having an automated process to track experiments with Experiments and the native capabilities built into Pipelines.
Why is it important to keep your experiments organized?
Let’s take a step back for a moment and try to understand why it’s important to have experiments organized for machine learning. When data scientists approach a new ML problem, they have to answer many different questions, from data availability to how they will measure model performance.
At the start, the process is full of uncertainty and is highly iterative. As a result, this experimentation phase can produce multiple models, each created from their own inputs (datasets, training scripts, and hyperparameters) and producing their own outputs (model artifacts and evaluation metrics). The challenge then is to keep track of all these inputs and outputs of each iteration.
Data scientists typically train many different model versions until they find the combination of data transformation, algorithm, and hyperparameters that results in the best-performing version of a model. Each of these unique combinations is a single experiment. With a traceable record of the inputs, algorithms, and hyperparameters that were used by that trial, the data science team can easily reproduce their steps.
Having an automated process in place to track experiments improves the ability to reproduce as well as deploy specific model versions that are performing well. The Pipelines native integration with Experiments makes it easy to automatically track and manage experiments across pipeline runs.
Benefits of SageMaker Experiments
SageMaker Experiments allows data scientists to organize, track, compare, and evaluate their training iterations.
Let’s start first with an overview of what you can do with Experiments:
- Organize experiments – Experiments structures experimentation with a top-level entity called an experiment that contains a set of trials. Each trial contains a set of steps called trial components. Each trial component is a combination of datasets, algorithms, and parameters. You can picture experiments as the top-level folder for organizing your hypotheses, your trials as the subfolders for each group test run, and your trial components as your files for each instance of a test run.
- Track experiments – Experiments allows data scientists to track their experiments. It can automatically assign SageMaker jobs to a trial via simple configurations and via the tracking SDKs.
- Compare and evaluate experiments – The integration of Experiments with Amazon SageMaker Studio makes it easy to produce data visualizations and compare different trials. You can also access the trial data via the Python SDK to generate your own visualization using your preferred plotting libraries.
To learn more about Experiments APIs and SDKs, we recommend the following documentation: CreateExperiment and Amazon SageMaker Experiments Python SDK.
If you want to dive deeper, we recommend looking into the amazon-sagemaker-examples/sagemaker-experiments GitHub repository for further examples.
Integration between Pipelines and Experiments
The model building pipelines that are part of Pipelines are purpose-built for ML and allow you to orchestrate your model build tasks using a pipeline tool that includes native integrations with other SageMaker features as well as the flexibility to extend your pipeline with steps run outside SageMaker. Each step defines an action that the pipeline takes. The dependencies between steps are defined by a directed acyclic graph (DAG) built using the Pipelines Python SDK. You can build a SageMaker pipeline programmatically via the same SDK. After a pipeline is deployed, you can optionally visualize its workflow within Studio.
Pipelines integrates with Experiments by automatically creating an experiment and a trial for every run of the pipeline before the steps run, unless one or both of these inputs are specified. While running the pipeline’s SageMaker jobs, the pipeline associates the trial with the experiment and associates with the trial every trial component created by the jobs. Specifying your own experiment or trial programmatically lets you fine-tune how your experiments are organized.
The workflow we present in this example consists of a series of steps: a preprocessing step to split our input dataset into train, test, and validation datasets; a tuning step to tune our hyperparameters and kick off training jobs to train a model using the XGBoost built-in algorithm; and finally a model step to create a SageMaker model from the best trained model artifact. Pipelines also offers several natively supported step types outside of what is discussed in this post. We also illustrate how you can track your pipeline workflow and generate metrics and comparison charts. Furthermore, we show how to associate the new trial generated to an existing experiment that might have been created before the pipeline was defined.
SageMaker Pipelines code
You can review and download the notebook from the GitHub repository associated with this post. We look at the Pipelines-specific code to understand it better.
Pipelines enables you to pass parameters at run time. Here we define the processing and training instance types and counts at run time with preset defaults:
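As a minimal sketch with the SageMaker Python SDK (the parameter names and default values here are illustrative, not necessarily the notebook’s exact choices):

```python
from sagemaker.workflow.parameters import ParameterInteger, ParameterString

# Run-time pipeline parameters with preset defaults (names and values illustrative)
processing_instance_type = ParameterString(
    name="ProcessingInstanceType", default_value="ml.m5.xlarge")
processing_instance_count = ParameterInteger(
    name="ProcessingInstanceCount", default_value=1)
training_instance_type = ParameterString(
    name="TrainingInstanceType", default_value="ml.m5.xlarge")
training_instance_count = ParameterInteger(
    name="TrainingInstanceCount", default_value=1)
```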
Next, we set up a processing script that downloads and splits the input dataset into train, test, and validation parts. We use SKLearnProcessor for running this preprocessing step. To do so, we define a processor object with the instance type and count needed to run the processing job.
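A sketch of the processor and processing step, building on the parameters above; the script name, execution role, input data location, and step name are placeholder assumptions rather than the notebook’s exact values:

```python
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep

# Processor that runs preprocess.py to split the dataset (script name assumed)
sklearn_processor = SKLearnProcessor(
    framework_version="1.0-1",
    role=role,                                  # execution role assumed defined earlier
    instance_type=processing_instance_type,
    instance_count=processing_instance_count,
)

step_process = ProcessingStep(
    name="PreprocessData",
    processor=sklearn_processor,
    inputs=[ProcessingInput(source=input_data_uri,            # assumed S3 input
                            destination="/opt/ml/processing/input")],
    outputs=[
        ProcessingOutput(output_name="train", source="/opt/ml/processing/train"),
        ProcessingOutput(output_name="validation", source="/opt/ml/processing/validation"),
        ProcessingOutput(output_name="test", source="/opt/ml/processing/test"),
    ],
    code="preprocess.py",
)
```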
Pipelines allows us to achieve data versioning in a programmatic way by using execution-specific variables like ExecutionVariables.PIPELINE_EXECUTION_ID, which is the unique ID of a pipeline run. We can, for example, create a unique key for storing the output datasets in Amazon Simple Storage Service (Amazon S3) that ties them to a specific pipeline run. For the full list of variables, refer to Execution Variables.
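A sketch of how such a run-specific S3 prefix can be built; the bucket and prefix names are assumptions:

```python
import sagemaker
from sagemaker.workflow.execution_variables import ExecutionVariables
from sagemaker.workflow.functions import Join

default_bucket = sagemaker.Session().default_bucket()

# Build an S3 prefix unique to each pipeline run, so output datasets are
# versioned by the run ID (the "pipeline-datasets" prefix is illustrative)
output_s3_uri = Join(
    on="/",
    values=["s3:/", default_bucket, "pipeline-datasets",
            ExecutionVariables.PIPELINE_EXECUTION_ID],
)
# e.g., pass output_s3_uri as the destination of a ProcessingOutput
```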
Then we move on to create an estimator object to train an XGBoost model. We set some static hyperparameters that are commonly used with XGBoost:
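A sketch of the estimator and its static hyperparameters; the container version, output location, and hyperparameter values are illustrative assumptions:

```python
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker import image_uris

region = sagemaker.Session().boto_region_name
xgb_image = image_uris.retrieve("xgboost", region=region, version="1.5-1")

xgb_estimator = Estimator(
    image_uri=xgb_image,
    role=role,                                  # execution role assumed defined
    instance_type=training_instance_type,
    instance_count=training_instance_count,
    output_path=f"s3://{default_bucket}/model-artifacts",
)

# Common static hyperparameters for XGBoost regression (values illustrative)
xgb_estimator.set_hyperparameters(
    objective="reg:squarederror",
    num_round=100,
    max_depth=5,
    eta=0.2,
    subsample=0.8,
)
```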
We do hyperparameter tuning of the models we create by using a ContinuousParameter range for lambda. Choosing one metric as the objective metric tells the tuner (the instance that runs the hyperparameter tuning jobs) to evaluate each training job based on that specific metric. The tuner returns the combination of hyperparameters with the best value of this objective metric, meaning the combination that minimizes the root mean square error (RMSE).
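A sketch of the tuner and the tuning step; the parameter range, job counts, and step name are assumptions, and the processing outputs come from the earlier sketch:

```python
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter
from sagemaker.workflow.steps import TuningStep
from sagemaker.inputs import TrainingInput

# Tune the regularization term `lambda`, minimizing validation RMSE
tuner = HyperparameterTuner(
    estimator=xgb_estimator,
    objective_metric_name="validation:rmse",
    objective_type="Minimize",
    hyperparameter_ranges={"lambda": ContinuousParameter(0.01, 10, scaling_type="Logarithmic")},
    max_jobs=10,
    max_parallel_jobs=2,
)

step_tuning = TuningStep(
    name="TuneXGBoost",
    tuner=tuner,
    inputs={
        "train": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri,
            content_type="text/csv",
        ),
        "validation": TrainingInput(
            s3_data=step_process.properties.ProcessingOutputConfig.Outputs["validation"].S3Output.S3Uri,
            content_type="text/csv",
        ),
    },
)
```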
The tuning step runs multiple trials with the goal of determining the best model among the parameter ranges tested. With the method get_top_model_s3_uri, we rank the top 50 performing versions of the model artifact by S3 URI and extract only the best-performing one (we specify k=0 for the best) to create a SageMaker model.
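A sketch of turning the best tuning result into a SageMaker model; it reuses xgb_image and default_bucket from the earlier sketches, and the step and instance names are illustrative:

```python
from sagemaker.model import Model
from sagemaker.inputs import CreateModelInput
from sagemaker.workflow.steps import CreateModelStep

# Take the best (top_k=0) model artifact produced by the tuning step
best_model = Model(
    image_uri=xgb_image,
    model_data=step_tuning.get_top_model_s3_uri(top_k=0, s3_bucket=default_bucket),
    role=role,
)

step_create_model = CreateModelStep(
    name="CreateBestModel",
    model=best_model,
    inputs=CreateModelInput(instance_type="ml.m5.xlarge"),
)
```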
When the pipeline runs, it creates trial components for each hyperparameter tuning job and each SageMaker job created by the pipeline steps.
You can further configure the integration of Pipelines with Experiments by creating a PipelineExperimentConfig object and passing it to the pipeline object. Its two parameters define the name of the experiment that will be created and the name of the trial that will refer to the whole run of the pipeline.
If you want to associate a pipeline run with an existing experiment, you can pass that experiment’s name, and Pipelines will associate the new trial with it. You can prevent the creation of an experiment and trial for a pipeline run by setting pipeline_experiment_config to None.
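A sketch of such a configuration; the experiment name is illustrative:

```python
from sagemaker.workflow.pipeline_experiment_config import PipelineExperimentConfig
from sagemaker.workflow.execution_variables import ExecutionVariables

# Name the experiment explicitly and use the run ID as the trial name, so every
# pipeline run lands in the same (possibly pre-existing) experiment
pipeline_experiment_config = PipelineExperimentConfig(
    experiment_name="customer-churn-experiment",        # illustrative name
    trial_name=ExecutionVariables.PIPELINE_EXECUTION_ID,
)
```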
We pass in the instance types and counts as parameters and chain the preceding steps in order as follows. The pipeline workflow is implicitly defined by the outputs of one step being the inputs of another.
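A sketch that wires the pieces from the earlier snippets together; the pipeline name is illustrative:

```python
from sagemaker.workflow.pipeline import Pipeline

pipeline = Pipeline(
    name="xgboost-tuning-pipeline",            # illustrative name
    parameters=[
        processing_instance_type, processing_instance_count,
        training_instance_type, training_instance_count,
    ],
    pipeline_experiment_config=pipeline_experiment_config,
    steps=[step_process, step_tuning, step_create_model],
)
```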
The full-fledged pipeline is now created and ready to go. We add an execution role to the pipeline and start it. From here, we can go to the SageMaker Studio Pipelines console and visually track every step. You can also access the linked logs from the console to debug a pipeline.
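Continuing the sketch, attaching the execution role and starting the run might look like this:

```python
# Create (or update) the pipeline definition with an execution role, then run it
pipeline.upsert(role_arn=role)
execution = pipeline.start()
execution.wait()   # optionally block until the run finishes
```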
The preceding screenshot shows a successfully completed pipeline run in green. We can obtain the metrics of one trial from a pipeline run with the following code:
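A sketch using ExperimentAnalytics; the experiment name must match the one used in the PipelineExperimentConfig sketch above:

```python
from sagemaker.analytics import ExperimentAnalytics

# Export the experiment's trial-component metrics and parameters to a DataFrame
trial_component_analytics = ExperimentAnalytics(
    experiment_name="customer-churn-experiment",
)
analytic_table = trial_component_analytics.dataframe()
print(analytic_table.head())
```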
Compare the metrics for each trial component
You can plot the results of hyperparameter tuning in Studio or via other Python plotting libraries. We show both ways of doing this.
Explore the training and evaluation metrics in Studio
Studio provides an interactive user interface where you can generate interactive plots. The steps are as follows:
- Choose Experiments and Trials from the SageMaker resources icon on the left sidebar.
- Choose your experiment to open it.
- Choose (right-click) the trial of interest.
- Choose Open in trial component list.
- Press Shift to select the trial components representing the training jobs.
- Choose Add chart.
- Choose New chart and customize it to plot the collected metrics that you want to analyze. For our use case, choose the following:
  - For Data type, select Summary Statistics.
  - For Chart type, select Scatter Plot.
  - For X-axis, choose lambda.
  - For Y-axis, choose validation:rmse_last.
The new chart appears at the bottom of the window, labeled as ‘8’.
You can include more or fewer training jobs by pressing Shift and choosing the eye icon for a more interactive experience.
Analytics with SageMaker Experiments
When the pipeline run is complete, we can quickly visualize how different variations of the model compare in terms of the metrics collected during training. Earlier, we exported all trial metrics to a Pandas DataFrame using ExperimentAnalytics. We can reproduce the plot obtained in Studio by using the Matplotlib library.
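A sketch of such a plot, reusing the DataFrame from the ExperimentAnalytics sketch above; the exact column names produced by ExperimentAnalytics can vary by setup:

```python
import matplotlib.pyplot as plt

# Scatter of the final validation RMSE against the tuned lambda value
df = analytic_table.dropna(subset=["validation:rmse - Last"])
plt.scatter(df["lambda"].astype(float), df["validation:rmse - Last"])
plt.xlabel("lambda")
plt.ylabel("validation:rmse (last value)")
plt.title("Tuning results across trial components")
plt.show()
```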
Conclusion
The native integration between SageMaker Pipelines and SageMaker Experiments allows data scientists to automatically organize, track, and visualize experiments during model development activities. You can create experiments to organize all your model development work, such as the following:
- A business use case you’re addressing, such as creating an experiment to predict customer churn
- An experiment owned by the data science team regarding marketing analytics, for example
- A specific data science and ML project
In this post, we dove into Pipelines to show how you can use it in tandem with Experiments to organize a fully automated end-to-end workflow.
As a next step, you can use these three SageMaker features – Studio, Experiments and Pipelines – for your next ML project.
Suggested readings
- Amazon SageMaker now supports cross-account lineage tracking and multi-hop lineage querying
- Announcing Amazon SageMaker Inference Recommender
- Introducing the Well-Architected Framework for Machine Learning
- Machine Learning Lens: AWS Well-Architected Framework
- Roundup of re:Invent 2021 Amazon SageMaker announcements
About the authors
Paolo Di Francesco is a solutions architect at AWS. He has experience in telecommunications and software engineering. He is passionate about machine learning and is currently focusing on using his experience to help customers reach their goals on AWS, in particular in discussions around MLOps. Outside of work, he enjoys playing football and reading.
Mario Bourgoin is a Senior Partner Solutions Architect for AWS, an AI/ML specialist, and the global tech lead for MLOps. He works with enterprise customers and partners deploying AI solutions in the cloud. He has more than 30 years of experience doing machine learning and AI at startups and in enterprises, starting with creating one of the first commercial machine learning systems for big data. Mario spends his free time playing with his three Belgian Tervurens, cooking dinner for his family, and learning about mathematics and cosmology.
Ganapathi Krishnamoorthi is a Senior ML Solutions Architect at AWS. Ganapathi provides prescriptive guidance to startup and enterprise customers helping them to design and deploy cloud applications at scale. He is specialized in machine learning and is focused on helping customers leverage AI/ML for their business outcomes. When not at work, he enjoys exploring outdoors and listening to music.
Valerie Sounthakith is a Solutions Architect for AWS, working in the gaming industry and with partners deploying AI solutions. She is aiming to build her career around computer vision. In her free time, Valerie enjoys traveling, discovering new food spots, and redecorating her home.
Shifting Into High Gear: Lunit, Maker of FDA-Cleared AI for Cancer Analysis, Goes Public in Seoul
South Korean startup Lunit, developer of two FDA-cleared AI models for healthcare, went public this week on the country’s Kosdaq stock market.
The move marks the maturity of the Seoul-based company — which was founded in 2013 and has for years been part of the NVIDIA Inception program that nurtures cutting-edge startups.
Lunit’s AI software for chest X-rays and mammograms is used in 600 healthcare sites across 40 countries. In its home market alone, around 4 million chest X-rays a year are analyzed by Lunit AI models.
Lunit has partnered with GE Healthcare, Fujifilm, Philips and Guardant Health to deploy its AI products. Last year, it achieved FDA clearance for two AI tools: one that analyzes mammograms for signs of breast cancer, and another that triages critical findings in chest X-rays. It’s also received the CE mark in Europe for these, as well as a third model that analyzes tumors in cancer tissue samples.
“By going public, which is just one step in our long journey, I strongly believe that we will succeed and accomplish our mission to conquer cancer through AI,” said Brandon Suh, CEO of Lunit.
Lunit raised $60 million in venture capital funding late last year, and its current market cap is some $320 million, based on its latest closing price. Following its recent regulatory approvals, the startup is expanding its presence in the U.S. and the European Union. It’s also developing additional AI models for 3D mammography.
Forging Partnerships to Deploy AI for Radiology, Oncology
Lunit has four AI products to help radiologists and pathologists detect cancer and deliver care:
- INSIGHT CXR: Trained on a dataset of 3.5 million cases, this tool detects 10 of the most common findings in chest X-rays with 97-99% accuracy.
- INSIGHT MMG: This product reduces by 50% the chance that physicians overlook breast cancer in screening mammography.
- SCOPE IO: Demonstrating 94% accuracy, this AI helps identify 50% more patients eligible for immunotherapy by analyzing tissue slide images of more than 15 types of cancer, including lung, breast and colorectal cancer.
- SCOPE PD-L1: Trained on more than 1 million annotated cell images, the tool helps accurately quantify expression levels of PD-L1, a protein that influences immune response.
GE Healthcare made eight AI algorithms from INSIGHT CXR available through its Thoracic Care Suite to flag abnormalities in lung X-rays, including pneumonia, tuberculosis and lung nodules.
Fujifilm incorporated INSIGHT CXR into its AI-powered product for analyzing chest X-rays. Lunit AI connects to Fujifilm’s X-ray devices and PACS imaging system, and is already used in more than 130 sites across Japan to detect chest nodules, collapsed lungs, and fluid or other foreign substances in the lungs.
Philips, too, is adopting INSIGHT CXR, making the software accessible to users of its diagnostic X-ray solutions. And Guardant Health, a liquid biopsy company, made a $26 million strategic investment in Lunit to support the company’s innovation in precision oncology through the Lunit SCOPE tissue analysis products.
Accelerating Insights With NVIDIA AI
Lunit develops its AI models using various NVIDIA Tensor Core GPUs, including NVIDIA A100 GPUs, in the cloud. Its customers can deploy Lunit’s AI with an NVIDIA GPU-powered server on premises or in the cloud — or within a medical imaging device using the NVIDIA Jetson edge AI platform.
The company also uses NVIDIA TensorRT software to optimize its trained AI models for real-world deployment.
“The goal here is to optimize our AI in actual user settings — for the specific NVIDIA GPUs that operate the AI,” said Donggeun Yoo, chief of research at Lunit.
Over the years, Lunit has presented its work at NVIDIA GTC and as an NVIDIA Inception member at the prestigious RSNA conference for radiology.
“It was very helpful for us to build credibility as a startup,” said Yoo. “I believe joining Inception helped trigger the bigger acknowledgements that followed from the healthcare industry.”
Join the NVIDIA Inception community of over 10,000 technology startups, and register for NVIDIA GTC, running online Sept. 19-22, to hear more from leaders in healthcare AI.
Subscribe to NVIDIA healthcare news.
The post Shifting Into High Gear: Lunit, Maker of FDA-Cleared AI for Cancer Analysis, Goes Public in Seoul appeared first on NVIDIA Blog.
ICML: Where causality meets machine learning
Amazon’s Dominik Janzing on the history and promise of the young field of causal machine learning.