Learn from the winner of the AWS DeepComposer Chartbusters Spin the Model Challenge
AWS is excited to announce the winner of the second AWS DeepComposer Chartbusters challenge, Lena Taupier. AWS DeepComposer gives developers a creative way to get started with machine learning (ML). In June, we launched the Chartbusters challenge, a global competition where developers use AWS DeepComposer to create original compositions and compete to showcase their ML and generative AI skills. The second challenge, Spin the Model, required developers to bring their own data and create a custom genre model using a sample Amazon SageMaker notebook.
When Lena Taupier first attended the AWS DeepComposer workshop at re:Invent 2019, she had no idea she would be the winner of the Spin the Model challenge. Lena, a software developer for Blubrry, helps lead the company’s cloud infrastructure and applications development team. She also has her own blog in which she creates tutorials to make AWS skills more accessible. She describes herself as an ML novice and never would have thought she’d be experimenting with machine learning today.
We interviewed Lena about her experience competing in the second Chartbusters challenge, which ran from July 31 to August 23, and asked her to tell us more about how she created her winning composition.
Lena with her AWS DeepComposer keyboard
Getting started with machine learning
Lena has a background in classical piano, so when she first learned about AWS DeepComposer, she was intrigued to learn more.
“When I was younger, I studied classical piano pretty seriously and I still enjoy playing piano very much. I was at re:Invent last year when AWS DeepComposer was announced, and I was so excited by the thought of learning about AI while creating music. I ended up waiting in line for several hours to attend one of the demo sessions, but I was so eager to try it out that I didn’t even mind!”
Lena first heard about the AWS DeepComposer Chartbusters challenge through the AWS blog, and thought the challenge was a great way to get started with ML.
Building in AWS DeepComposer
To get started, Lena used the AWS DeepComposer learning capsules to learn more about AR-CNN models. The learning capsules provide easy-to-consume, bite-size content to help you learn the concepts of generative AI algorithms.
“The first thing I did was to go through the learning capsules about autoregressive convolutional neural networks and how to train AR-CNN models. It was a great resource for learning about different generative AI techniques.”
The Chartbusters Spin the Model challenge required developers to get creative and make a custom genre model by bringing their own dataset to train. Lena drew from her own background, having grown up in St. Lucia, an island with a rich tradition of oral and folk music.
“Once I had a good understanding, I started brainstorming about what kind of music I wanted to use to train my model. I’m from St. Lucia, a small island in the Caribbean, where there is a rich history of unique music, so I thought it would be interesting to incorporate songs from there. I decided to create some of my own music clips inspired by Calypso and St. Lucian folk music to supplement my dataset.”
Lena’s workstation for the AWS DeepComposer Chartbusters challenge
Next, Lena began training her model using Amazon SageMaker.
“Once I had my dataset, I created a Jupyter notebook within Amazon SageMaker, using the repository provided as a starting point. I experimented with the hyperparameters and then let the training run overnight because I knew it would take many hours to process. The next day, I was finally able to use my trained model to make new music!”
Lena used her AWS DeepComposer keyboard and the music studio to generate different melodies and compositions until she was satisfied with her two final compositions.
“I submitted two AI-generated songs. The main theme in “Little Banjo” was inspired by a famous St. Lucian folk song. Layered on top of the melody generated by my AR-CNN model, I also used the MuseGAN Rock model to generate additional instruments for accompaniment. The other song is meant to resemble the style of Calypso, and has a rich beat with trumpet lines to complement the melody. I named it “Home Sweet Home” because I started feeling nostalgic about home after listening to so much St. Lucian music for this project!”
Lena working on her compositions in the AWS DeepComposer console
You can listen to Lena’s winning composition, “Home Sweet Home,” on the AWS DeepComposer SoundCloud page.
Conclusion
The AWS DeepComposer Chartbusters challenge Spin the Model helped Lena learn about generative AI through a hands-on and fun experience.
“By participating in this challenge, I was able to learn a lot about different generative AI techniques in a very hands-on way, which is the best way to learn. As someone with very little experience in AI and machine learning, it was a great feeling of accomplishment to be able to train a custom AR-CNN model and actually generate results.”
The Chartbusters challenge empowered Lena to go from beginner-level ML knowledge to creating winning compositions with AWS DeepComposer.
“I think AWS DeepComposer is such a great tool for reducing the barrier of entry into machine learning and making those concepts accessible to more people […] Even just a few months ago, I never would have thought I’d be experimenting with AI/ML. This challenge was such a great learning experience! I know there’s so much more to learn so I will definitely continue to explore and dive deeper.”
Her advice to future competitors? Now is the time to get started with ML.
“As a developer, I think it’s such an exciting time to have access to the cloud, because it really widens your horizons on what you can do […] The Chartbusters challenge is the perfect opportunity to get involved and start learning in a fun, creative, and hands-on manner!”
Congratulations to Lena for her well-deserved win!
We hope Lena’s story has inspired you to learn more about ML and get started with AWS DeepComposer. Check out the next AWS DeepComposer Chartbusters challenge, The Sounds of Science, running now until September 23.
About the Author
Paloma Pineda is a Product Marketing Manager for AWS Artificial Intelligence Devices. She is passionate about the intersection of technology, art, and human centered design. Out of the office, Paloma enjoys photography, watching foreign films, and cooking French cuisine.
Amazon Personalize now available in EU (Frankfurt) Region
Amazon Personalize is a machine learning (ML) service that enables you to personalize your website, app, ads, emails, and more with private, custom ML models that you can create with no prior ML experience. We’re excited to announce the general availability of Amazon Personalize in the EU (Frankfurt) Region. You can use Amazon Personalize to create higher-quality recommendations that respond to the specific needs, preferences, and changing behavior of your users, improving engagement and conversion. For more information, see Amazon Personalize Is Now Generally Available.
To use Amazon Personalize, you need to provide the service with user interaction (event) data, such as page views, sign-ups, and purchases, from your applications, along with optional user demographic information (such as age or location) and a catalog of the items you want to recommend (such as articles, products, videos, or music). This data can be provided via Amazon S3 or sent as a stream of user events via a JavaScript tracker or a server-side integration (learn more). Amazon Personalize then automatically processes and examines the data, identifies what is meaningful, and trains and optimizes a personalization model that is customized for your data. You can then invoke the Amazon Personalize APIs from your business application to fetch personalized recommendations for your users.
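Once a campaign is deployed, fetching recommendations is a single API call. The following is a minimal sketch using boto3 in the EU (Frankfurt) Region; the campaign ARN and user ID are placeholders for illustration only.

import boto3

# Placeholder campaign ARN and user ID, for illustration only.
personalize_runtime = boto3.client('personalize-runtime', region_name='eu-central-1')

response = personalize_runtime.get_recommendations(
    campaignArn='arn:aws:personalize:eu-central-1:123456789012:campaign/my-campaign',
    userId='user-42',
    numResults=10
)

# Each entry in itemList contains an itemId (and, for some recipes, a score).
for item in response['itemList']:
    print(item['itemId'])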
Learn how our customers are using Amazon Personalize to improve product and content recommendations and for targeted marketing communications.
For more information about all the Regions Amazon Personalize is available in, see the AWS Region Table. Get started with Amazon Personalize by visiting the Amazon Personalize console and Developer Guide.
About the Author
Vaibhav Sethi is the Product Manager for Amazon Personalize. He focuses on delivering products that make it easier to build machine learning solutions. In his spare time, he enjoys hiking and reading.
Reducing training time with Apache MXNet and Horovod on Amazon SageMaker
Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to build, train, and deploy machine learning (ML) models quickly. Amazon SageMaker removes the heavy lifting from each step of the ML process to make it easier to develop high-quality models. As datasets continue to increase in size, additional compute is required to reduce the amount of time it takes to train. One method to scale horizontally and add these additional resources on Amazon SageMaker is through the use of Horovod and Apache MXNet. In this post, we show how you can reduce training time with MXNet and Horovod on Amazon SageMaker. We also demonstrate how to further improve performance with advanced sections on Horovod autotuning, Horovod Timeline, Horovod Fusion, and MXNet optimization.
Distributed training
Distributed training of neural networks for computer vision (CV) and natural language processing (NLP) applications has become ubiquitous. With Apache MXNet, you only need to modify a few lines of code to enable distributed training.
Distributed training allows you to reduce training time by scaling horizontally. The goal is to split training tasks into independent subtasks and run these across multiple devices. There are primarily two approaches for training in parallel:
- Data parallelism – You distribute the data and share the model across multiple compute resources
- Model parallelism – You distribute the model and share transformed data across multiple compute resources
In this post, we focus on data parallelism. Specifically, we discuss how Horovod and MXNet allow you to train efficiently on Amazon SageMaker.
Horovod overview
Horovod is an open-source distributed deep learning framework. It uses efficient inter-GPU and inter-node communication methods such as the NVIDIA Collective Communications Library (NCCL) and Message Passing Interface (MPI) to distribute and aggregate model parameters between workers. Horovod makes distributed deep learning fast and easy by taking a single-GPU training script and scaling it across many GPUs in parallel. It's built on top of the ring-allreduce communication protocol. This approach allows each training process (such as a process running on a single GPU device) to talk to its peers and exchange gradients by averaging (called reduction) on a subset of gradients. The following diagram illustrates how ring-allreduce works.
Apache MXNet integrates with Horovod through the distributed training APIs defined in Horovod, and you can convert a non-distributed training script by following the high-level code skeleton that we show later in this post.
Although this greatly simplifies the process of using Horovod, you must consider other complexities. For example, you may need to install additional software and libraries and resolve incompatibilities to make distributed training work. Horovod requires a certain version of Open MPI, and if you want high-performance training on NVIDIA GPUs, you need to install the NCCL libraries. These complexities are amplified when you scale across multiple devices, because you need to make sure all the software and libraries on the new nodes are properly installed and configured. Amazon SageMaker includes all the required libraries to run distributed training with MXNet and Horovod. Prebuilt Amazon SageMaker Docker images come with popular open-source deep learning frameworks and pre-configured CUDA, cuDNN, MPI, and NCCL libraries. Amazon SageMaker manages the difficult process of properly installing and configuring your cluster. Together, Amazon SageMaker and MXNet simplify training with Horovod by managing the complexities of supporting distributed training at scale.
Test problem and dataset
To benchmark the efficiencies realized by Horovod, we trained the notoriously resource-intensive model architectures Mask-RCNN and Faster-RCNN. These model architectures were first introduced in 2017 and 2015, respectively, and are currently considered the baseline model architectures for two popular CV tasks: instance segmentation (Mask-RCNN) and object detection (Faster-RCNN). Mask-RCNN builds upon Faster-RCNN by adding a mask for segmentation. Apache MXNet provides pre-built Mask-RCNN and Faster-RCNN models as part of the GluonCV model zoo, simplifying the process of training these models.
To train our object detection and instance segmentation models, we used the popular COCO2017 dataset. This dataset provides more than 200,000 images and their corresponding labels. The COCO2017 dataset is considered an industry standard for benchmarking CV models.
GluonCV is a CV toolkit built on top of MXNet. It provides out-of-the-box support for various CV tasks, including data loading and preprocessing for many common algorithms available within its model zoo. It also provides a tutorial on getting the COCO2017 dataset.
To make this process replicable for Amazon SageMaker users, we show an entire end-to-end process for training Mask-RCNN and Faster-RCNN with Horovod and MXNet. To begin, we first open the Jupyter environment in the Amazon SageMaker notebook instance and use the conda_mxnet_p36 kernel. Next, we install the required Python packages:
! pip install gluoncv
! pip install pycocotools
We use the GluonCV toolkit to download the COCO2017 dataset onto our Amazon SageMaker notebook:
import gluoncv as gcv
gcv.utils.download('https://gluon-cv.mxnet.io/_downloads/b6ade342998e03f5eaa0f129ad5eee80/mscoco.py',path='./')
#Now to install the dataset. Warning, this may take a while
! python mscoco.py --download-dir data
We upload COCO2017 to the specified Amazon Simple Storage Service (Amazon S3) bucket using the following command:
! aws s3 cp './data/' s3://<INSERT BUCKET NAME>/ --recursive --quiet
Training script with Horovod support
To use Horovod in your training script, you only need to make a few modifications. For code samples and instructions, see Horovod with MXNet. In addition, many GluonCV models in the model zoo have scripts that already support Horovod out of the box. In this section, we review the key changes required for Horovod to correctly work on Amazon SageMaker with Apache MXNet. The following code follows directly from the Horovod documentation:
import mxnet as mx
import horovod.mxnet as hvd
from mxnet import autograd

# Initialize Horovod. This has to be done first because it activates Horovod.
hvd.init()

# GPU setup: local_rank is the specific GPU on this instance.
context = [mx.gpu(hvd.local_rank())]
num_gpus = hvd.size()  # Total number of GPUs you will be using.

# Typically, you shard your dataset in the data loader. For example, in the
# train_mask_rcnn.py script:
train_sampler = gcv.nn.sampler.SplitSortedBucketSampler(
    ...,
    num_parts=hvd.size() if args.horovod else 1,
    part_index=hvd.rank() if args.horovod else 0)

# Normally, we would shard the dataset first for Horovod.
val_loader = mx.gluon.data.DataLoader(dataset, len(context), ...)  # ... is for your other arguments

# You build and initialize your model as usual.
model = ...

# Fetch and broadcast the parameters from the root rank to all workers.
params = model.collect_params()
if params is not None:
    hvd.broadcast_parameters(params, root_rank=0)

# Create DistributedTrainer, a subclass of gluon.Trainer, with your optimizer (opt).
trainer = hvd.DistributedTrainer(params, opt)

# Create your loss function and train your model as usual.
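As a complement to the skeleton above, the following is a minimal sketch, not taken from the GluonCV training script, of what the per-batch loop typically looks like with the Horovod DistributedTrainer; num_epochs, train_loader, and the loss function are illustrative placeholders.

loss_fn = mx.gluon.loss.SoftmaxCrossEntropyLoss()  # illustrative loss function

for epoch in range(num_epochs):                    # num_epochs is a placeholder
    for data, label in train_loader:               # train_loader uses the sharded sampler above
        data = data.as_in_context(context[0])
        label = label.as_in_context(context[0])
        with autograd.record():
            output = model(data)
            loss = loss_fn(output, label)
        loss.backward()
        # hvd.DistributedTrainer averages gradients across all workers before the update.
        trainer.step(data.shape[0])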
Training job configuration
The Amazon SageMaker MXNet estimator class supports Horovod via the distributions parameter. We need to add a predefined mpi parameter with the enabled flag, and define the following additional parameters:
- processes_per_host (int) – Number of processes MPI should launch on each host. This parameter is usually equal to the number of GPU devices available on any given instance.
- custom_mpi_options (str) – Any custom mpirun flags passed in this field are added to the mpirun command and run by Amazon SageMaker for Horovod training.
The following example code initializes the distributions parameter:
distributions = {'mpi': {
'enabled': True,
'processes_per_host': 8, #Each instance has 8 gpus
'custom_mpi_options': '-verbose --NCCL_DEBUG=INFO'
}
}
Next, we need to configure other parameters of our training job, such as hyperparameters, and the input and output Amazon S3 locations. To do this, we use the MXNet estimator class from the Amazon SageMaker Python SDK:
#Define the basic configuration of your Horovod-enabled SageMaker training
# cluster.
num_instances = 2 # How many nodes you want to use.
instance_family = 'ml.p3dn.24xlarge' # Which instance type you want to use.
estimator = MXNet(
    entry_point='<source_name>.py',      #Script entry point.
    source_dir='./source',               #Script location.
    role=role,
    train_instance_type=instance_family,
    train_instance_count=num_instances,
    framework_version='1.6.0',           #MXNet version.
    train_volume_size=100,               #EBS volume size (GB) for the dataset.
    py_version='py3',                    #Python version.
    hyperparameters=hyperparameters,
    distributions=distributions          #For use with Horovod.
)
We’re now ready to start our first Horovod-powered training job with the following command:
estimator.fit(
{'data':'s3://' + bucket_name + '/data'}
)
Results
We performed these benchmarks on two similar GPU instance types: the p3.16xlarge and the more powerful p3dn.24xlarge. Although both have 8 NVIDIA V100 GPUs, the latter instance is designed with distributed training in mind. In addition to a high-throughput network interface amenable to the inter-node data transfers inherent in distributed training, the p3dn.24xlarge boasts more compute and additional memory over the p3.16xlarge.
We ran benchmarks in three different use cases. In the first and second use cases, we trained the models on a single instance using all 8 local GPUs, to demonstrate the efficiencies gained by using Horovod to manage local training across multiple GPUs. In the third use case, we used Horovod for distributed training across multiple instances, each with 8 local GPUs, to demonstrate the additional efficiency increase by scaling horizontally.
The following table summarizes the time and accuracy for each training scenario.
Model | Instance Type | 1 Instance, 8 GPUs w/o Horovod: Training Time | Accuracy (mAP) | 1 Instance, 8 GPUs with Horovod: Training Time | Accuracy (mAP) | 3 Instances, 8 GPUs with Horovod: Training Time | Accuracy (mAP) |
Faster RCNN | p3.16xlarge | 35 h 47 m | 37.6 | 8 h 26 m | 37.5 | 4 h 58 m | 37.4 |
Faster RCNN | p3dn.24xlarge | 32 h 24 m | 37.5 | 7 h 27 m | 37.5 | 3 h 37 m | 37.3 |
Mask RCNN | p3.16xlarge | 45 h 28 m | 38.5 (bbox), 34.8 (segm) | 10 h 28 m | 34.4 (bbox), 31.3 (segm) | 5 h 34 m | 36.8 (bbox), 33.5 (segm) |
Mask RCNN | p3dn.24xlarge | 40 h 49 m | 38.3 (bbox), 34.8 (segm) | 8 h 41 m | 34.6 (bbox), 31.5 (segm) | 4 h 2 m | 37.0 (bbox), 33.4 (segm) |
As expected, when using Horovod to distribute training across multiple instances, the time to convergence is significantly reduced. Additionally, even when training on a single instance, Horovod substantially increases training efficiency when using multiple local GPUs, as compared to the default parameter-server approach. Horovod’s simplified APIs and abstractions enable you to unlock efficiency gains when training across multiple GPUs, both on a single machine or many. For more information about using this approach for scaling batch size and learning rate, see Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.
With the improvement in training time enabled by Horovod and Amazon SageMaker, you can focus more on improving your algorithms instead of waiting for jobs to finish training. You can train in parallel across multiple instances with marginal impact to mean Average Precision (mAP).
Optimizing Horovod training
Horovod provides several additional utilities that allow you to analyze and optimize training performance.
Horovod autotuning
Finding the optimal combinations of parameters for a given combination of model and cluster size may require several iterations of trial and error.
The autotune feature allows you to automate this trial-and-error activity within a single training job, and uses Bayesian optimization to search the parameter space for the most performant combination of parameters. Horovod searches for the best combination of parameters during the first cycles of a training job. Once it identifies the best combination, Horovod writes it to the autotune log and uses this combination for the remainder of the training job. For more information, see Autotune: Automated Performance Tuning.
To enable autotuning and capture the search log, pass the following parameters in your MPI configuration:
{
'mpi':
{
'enabled': True,
'custom_mpi_options': '-x HOROVOD_AUTOTUNE=1 -x HOROVOD_AUTOTUNE_LOG=/opt/ml/output/autotune_log.csv'
}
}
Horovod Timeline
Horovod Timeline is a report available after training completion that captures all activities in the Horovod ring. This is useful to understand which operations are taking the longest and identify optimization opportunities. For more information, see Analyze Performance.
To generate a timeline file, add the following parameters in your MPI command:
{
'mpi':
{
'enabled': True,
'custom_mpi_options': '-x HOROVOD_TIMELINE=/opt/ml/output/timeline.json'
}
}
The /opt/ml/output directory has a special purpose: after the training job is complete, Amazon SageMaker automatically archives all files in this directory and uploads the archive to an Amazon S3 location that you define through the Amazon SageMaker Python SDK.
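That S3 location is controlled by the estimator's output_path parameter. The following is a minimal sketch, repeating the earlier estimator configuration with an illustrative script name and a placeholder bucket, of setting output_path so the archived contents of /opt/ml/output (including timeline.json) land where you expect:

from sagemaker.mxnet import MXNet

estimator = MXNet(
    entry_point='train_faster_rcnn.py',   # illustrative script name
    source_dir='./source',
    role=role,
    train_instance_type='ml.p3dn.24xlarge',
    train_instance_count=2,
    framework_version='1.6.0',
    py_version='py3',
    hyperparameters=hyperparameters,
    output_path='s3://<INSERT BUCKET NAME>/horovod-output/',  # /opt/ml/output is archived to this prefix
    distributions=distributions
)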
Tensor Fusion
The Tensor Fusion feature allows you to perform batch allreduce
operations at training time. This typically results in better overall performance. For more information, see Tensor Fusion. By default, Tensor Fusion is enabled and has a buffer size of 64 MB. You can modify buffer size using a custom MPI flag as follows (for our use case, we override the default 64 MB buffer value with 32 MB):
{
'mpi':
{
'enabled': True,
'custom_mpi_options': '-x HOROVOD_FUSION_THRESHOLD=33554432'
}
}
You can also adjust batch cycles using the HOROVOD_CYCLE_TIME parameter. Cycle time is defined in milliseconds. See the following code:
{
'mpi':
{
'enabled': True,
'custom_mpi_options': '-x HOROVOD_CYCLE_TIME=5'
}
}
Optimizing MXNet models
Another optimization technique is to tune the MXNet model itself. We recommend first running the code with os.environ['MXNET_CUDNN_AUTOTUNE_DEFAULT'] = '1', and then copying the best-performing environment variable settings for future training runs. In our testing, we found the following settings to give the best results:
os.environ['MXNET_GPU_MEM_POOL_TYPE'] = 'Round'
os.environ['MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF'] = '26'
os.environ['MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN_FWD'] = '999'
os.environ['MXNET_EXEC_BULK_EXEC_MAX_NODE_TRAIN_BWD'] = '25'
os.environ['MXNET_GPU_COPY_NTHREADS'] = '1'
os.environ['MXNET_OPTIMIZER_AGGREGATION_SIZE'] = '54'
Conclusion
In this post, we demonstrated how to reduce training time with Horovod and Apache MXNet on Amazon SageMaker. You can train your model out of the box without worrying about any additional complexities.
For more information about deep learning and MXNet, see the MXNet crash course and Dive into Deep Learning book. You can also get started on the MXNet website and MXNet GitHub examples directory. If you’re new to distributed training and want to dive deeper, we highly recommend reading the paper Horovod: fast and easy distributed deep learning in TensorFlow. If you use the AWS Deep Learning Containers and AWS Deep Learning AMIs, you can learn how to set up this workflow in that environment in our recent post How to run distributed training using Horovod and MXNet on AWS DL containers and AWS Deep Learning AMIs.
About the Authors
Vadim Dabravolski is an AI/ML Solutions Architect on the FinServe team. He is focused on Computer Vision and NLP technologies and how to apply them to business use cases. After hours Vadim enjoys jogging in NYC boroughs, reading non-fiction (business, history, culture, politics, you name it), and rarely just doing nothing.
Corey Barrett is a Data Scientist in the Amazon ML Solutions Lab. As a member of the ML Solutions Lab, he leverages Machine Learning and Deep Learning to solve critical business problems for AWS customers. Outside of work, you can find him enjoying the outdoors, sipping on scotch, and spending time with his family.
Chaitanya Bapat is a Software Engineer with the AWS Deep Learning team. He works on Apache MXNet and integrating the framework with Amazon SageMaker, DLC, and DLAMI. In his spare time, he loves watching sports and enjoys reading books and learning Spanish.
Karan Jariwala is a Software Development Engineer on the AWS Deep Learning team. His work focuses on training deep neural networks. Outside of work, he enjoys hiking, swimming, and playing tennis.
The importance of forgetting in artificial and animal intelligence
The surprising dynamics related to learning that are common to artificial and biological systems.
Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks
The new Amazon SageMaker Studio Image Build convenience package allows data scientists and developers to easily build custom container images from your Studio notebooks via a new CLI. The new CLI eliminates the need to manually set up and connect to Docker build environments for building container images in Amazon SageMaker Studio.
Amazon SageMaker Studio provides a fully integrated development environment for machine learning (ML). Amazon SageMaker offers a variety of built-in algorithms, built-in frameworks, and the flexibility to use any algorithm or framework by bringing your own container images. The Amazon SageMaker Studio Image Build CLI lets you build Amazon SageMaker-compatible Docker images directly from your Amazon SageMaker Studio environments. Prior to this feature, you could only build your Docker images from Amazon SageMaker Studio notebooks by setting up and connecting to secondary Docker build environments.
You can now easily create container images directly from Amazon SageMaker Studio by using the simple CLI. The CLI abstracts the previous need to set up a secondary build environment and allows you to focus and spend time on the ML problem you’re trying to solve as opposed to creating workflows for Docker builds. The new CLI automatically sets up your reusable build environment that you interact with via high-level commands. You essentially tell the CLI to build your image, without having to worry about the underlying workflow orchestrated through the CLI, and the output is a link to your Amazon Elastic Container Registry (Amazon ECR) image location. The following diagram illustrates this architecture.
The CLI uses the following underlying AWS services:
- Amazon S3 – The new CLI packages your Dockerfile and container code, along with a buildspec.yml file used by AWS CodeBuild, into a .zip file stored in Amazon Simple Storage Service (Amazon S3). By default, this file is automatically cleaned up following the build to avoid unnecessary storage charges.
- AWS CodeBuild – CodeBuild is a fully managed build service that builds your Docker images in a transient build environment. CodeBuild is dependent on a buildspec.yml file that contains build commands and settings that it uses to run your build. The new CLI takes care of automatically generating this file. The CLI automatically kicks off the container build using the packaged files from Amazon S3. CodeBuild pricing is pay-as-you-go and based on build minutes and the build compute used. By default, the CLI uses general1.small compute.
- Amazon ECR – Built Docker images are tagged and pushed to Amazon ECR. Amazon SageMaker expects training and inference images to be stored in Amazon ECR, so after the image is successfully pushed to the repository, you're ready to go. The CLI returns the URI of the image, which you can include in your Amazon SageMaker training and hosting calls (a brief sketch follows this list).
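As a quick illustration, the following is a minimal sketch of passing the URI returned by the CLI to a generic Amazon SageMaker estimator; the image URI, S3 path, and instance type are placeholders, and the parameter names shown follow the SageMaker Python SDK v2 (v1 uses image_name, train_instance_count, and train_instance_type).

import sagemaker
from sagemaker.estimator import Estimator

# Placeholder image URI and training data path, for illustration only.
image_uri = '<account-id>.dkr.ecr.us-east-1.amazonaws.com/sagemaker-studio-<studioID>:default-<hash>'
role = sagemaker.get_execution_role()

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type='ml.m5.xlarge'
)
estimator.fit('s3://<your-bucket>/training-data/')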
Now that we’ve outlined the underlying AWS services and benefits of using the new Amazon SageMaker Studio Image Build convenience package to abstract your container build environments, let’s explore how to get started using the CLI!
Prerequisites
To use the CLI, we need to ensure the Amazon SageMaker execution role used by your Studio notebook environment (or another AWS Identity and Access Management (IAM) role, if you prefer) has the required permissions to interact with the resources used by the CLI, including access to CodeBuild and Amazon ECR.
Your role should have a trust policy with CodeBuild. See the following code:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"codebuild.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
You also need to make sure the appropriate permissions are included in your role to run the build in CodeBuild, create a repository in Amazon ECR, and push images to that repository. The following code is an example policy that you should modify as necessary to meet your needs and security requirements:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"codebuild:DeleteProject",
"codebuild:CreateProject",
"codebuild:BatchGetBuilds",
"codebuild:StartBuild"
],
"Resource": "arn:aws:codebuild:*:*:project/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": "logs:CreateLogStream",
"Resource": "arn:aws:logs:*:*:log-group:/aws/codebuild/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": [
"logs:GetLogEvents",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:log-group:/aws/codebuild/sagemaker-studio*:log-stream:*"
},
{
"Effect": "Allow",
"Action": "logs:CreateLogGroup",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ecr:CreateRepository",
"ecr:BatchGetImage",
"ecr:CompleteLayerUpload",
"ecr:DescribeImages",
"ecr:DescribeRepositories",
"ecr:UploadLayerPart",
"ecr:ListImages",
"ecr:InitiateLayerUpload",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage"
],
"Resource": "arn:aws:ecr:*:*:repository/sagemaker-studio*"
},
{
"Effect": "Allow",
"Action": "ecr:GetAuthorizationToken",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::sagemaker-*/*"
},
{
"Effect": "Allow",
"Action": [
"s3:CreateBucket"
],
"Resource": "arn:aws:s3:::sagemaker*"
},
{
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:ListRoles"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::*:role/*",
"Condition": {
"StringLikeIfExists": {
"iam:PassedToService": "codebuild.amazonaws.com"
}
}
}
]
}
You must also install the package in your Studio notebook environment to be able to use the convenience package. To install it, simply run pip install within your notebook environment:
!pip install sagemaker-studio-image-build
Using the CLI
After completing these prerequisites, you’re ready to start taking advantage of the new CLI to easily build your custom bring-your-own Docker images from Amazon SageMaker Studio without worrying about the underlying setup and configuration of build services.
To use the CLI, you can navigate to the directory containing your Dockerfile and enter the following code:
sm-docker build .
Alternatively, you can explicitly identify the path to your Dockerfile using the --file argument:
sm-docker build . --file /path/to/Dockerfile
It’s that simple! The command automatically logs build output to your notebook and returns the image URI of your Docker image. See the following code:
[Container] 2020/07/11 06:07:24 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2020/07/11 06:07:24 Phase context status code: Message:
Image URI: <account-id>.dkr.ecr.us-east-1.amazonaws.com/sagemaker-studio-<studioID>:default-<hash>
The CLI takes care of the rest. Let’s take a deeper look at what the CLI is actually doing. The following diagram illustrates this process.
The workflow contains the following steps:
- The CLI automatically zips the directory containing your Dockerfile, generates the buildspec for AWS CodeBuild, and adds both to the final .zip package. By default, the final .zip package is put in the Amazon SageMaker default session S3 bucket. Alternatively, you can specify a custom bucket using the --bucket argument.
- After packaging your files for the build, the CLI creates an ECR repository if one doesn't exist. By default, the ECR repository created uses the naming convention sagemaker-studio-<studioID>.
- The final step performed by the CLI is to create a temporary build project in CodeBuild and start the build, which builds your container image, tags it, and pushes it to the ECR repository.
The great part about the CLI is you no longer have to set any of this up or worry about the underlying activities to easily build your container images from Amazon SageMaker Studio.
You can also optionally customize your build environment by using supported arguments such as the following code:
--repository mynewrepo:1.0 <== By default, the ECR repository uses the naming
sagemaker-studio-<studio-domainid>. You can set
this parameter to push to an existing repository
or create a new repository with your preferred
naming. The default tagging strategy uses *user-profile-name*.
This parameter can also be used to customize the
tagging strategy.
Usage: sm-docker build . --repository mynewrepo:1.0
--role <iam-role-name> <== By default, the CLI uses the SageMaker Execution
Role for interacting with the AWS Services the CLI
uses (CodeBuild, ECR). You can optionally specify
an alternative role that has the required permissions
specified in the prerequisites
Usage: sm-docker build . --role build-cli-role
--bucket <bucket-name> <== By default, the CLI uses the SageMaker default
session bucket for storing your packaged input
sent to CodeBuild. You can optionally specify a
preferred S3 bucket to use.
Usage: sm-docker build . --bucket codebuild-tmp-build
--no-logs <== By default, the CLI will show the output logs of the
running CodeBuild build. This is typically useful
in case you need to debug the build; however, you
can optionally set this argument to suppress log
output.
Usage: sm-docker build . --no-logs
Changes from Amazon SageMaker classic notebooks
To help illustrate the changes required when moving from bring-your-own Amazon SageMaker example notebooks or your own custom developed notebooks, we’ve provided two example notebooks showing the changes required to use the Amazon SageMaker Studio Image Build CLI:
- The TensorFlow Bring Your Own example notebook is based on the existing TensorFlow Bring Your Own and adapted to use the new CLI with Amazon SageMaker Studio.
- The BYO XGBoost notebook demonstrates a typical data science user flow of data exploration and feature engineering, model training using a custom XGBoost container built using the CLI, and using Amazon SageMaker batch transform for offline or batch inference.
The key change required to adapt your existing notebooks to the new CLI in Amazon SageMaker Studio is removing the build_and_push.sh script from your directory structure. The build_and_push.sh script used in classic notebook instances builds your Docker image and pushes it to Amazon ECR; the new CLI replaces it for Studio. The following image compares the directory structures.
Summary
This post discussed how you can simplify the build of your Docker images from Amazon SageMaker Studio by using the new Amazon SageMaker Studio Image Build CLI convenience package. It abstracts the setup of your Docker build environments by automatically setting up the underlying services and workflow necessary for building Docker images. This package allows you to interact with an abstracted build environment through simple CLI commands in Amazon SageMaker Studio so you can focus on building models! For more information, see the GitHub repo.
About the Authors
Shelbee Eigenbrode is a solutions architect at Amazon Web Services (AWS). Her current areas of depth include DevOps combined with machine learning and artificial intelligence. She’s been in technology for 22 years, spanning multiple roles and technologies. In her spare time she enjoys reading, spending time with her family, friends and her fur family (aka. dogs).
Jaipreet Singh is a Senior Software Engineer on the Amazon SageMaker Studio team. He has been working on Amazon SageMaker since its inception in 2017 and has contributed to various Project Jupyter open-source projects. In his spare time, he enjoys hiking and skiing in the PNW.
Sam Liu is a product manager at Amazon Web Services (AWS). His current focus is the infrastructure and tooling of machine learning and artificial intelligence. Beyond that, he has 10 years of experience building machine learning applications in various industries. In his spare time, he enjoys making short videos for technical education or animal protection.
Stefan Natu is a Sr. Machine Learning Specialist at Amazon Web Services. He is focused on helping financial services customers build and operationalize end-to-end machine learning solutions on AWS. His academic background is in theoretical physics, and in the past, he worked on a number of data science problems in retail and energy verticals. In his spare time, he enjoys reading machine learning blogs, traveling, playing the guitar, and exploring the food scene in New York City.
Application period for next Alexa Prize challenge opens
University teams have until October 6, 2020 to submit their applications.
Hear from the Alexa Prize SocialBot Grand Challenge 3 winners
Winning teams from the third annual Alexa Prize competition present their research in new video.
How Kabbage improved the PPP lending experience with Amazon Textract
This is a guest post by Anthony Sabelli, Head of Data Science at Kabbage, a data and technology company providing small business cash flow solutions.
Kabbage is a data and technology company providing small business cash flow solutions. One way in which we serve our customers is by providing them access to flexible lines of credit through automation. Small businesses connect their real-time business data to Kabbage to receive a fully-automated funding decision in minutes, and this efficiency has led us to provide over 500,000 small businesses access to more than $16 billion of working capital, including the Paycheck Protection Program (PPP).
At the onset of COVID-19, when the nation was shutting down and small businesses were forced to close their doors, we had to overcome multiple technical challenges while navigating new and ever-changing underwriting criteria for what became the largest federal relief effort in the Small Business Administration’s (SBA) history. Prior to the PPP, Kabbage had never issued an SBA loan before. But in a matter of 2 weeks, the team stood up a fully automated system for any eligible small business—including new customers, regardless of size or stature—to access government funds.
Kabbage has always based its underwriting on the real-time business data and revenue performance of customers, not payroll and tax data, which were the primary criteria for the PPP. Without an established API to the IRS to help automate verification and underwriting, we needed to fundamentally adapt our systems to help small businesses access funding as quickly as possible. Additionally, we were a team of just a few hundred joining the ranks of thousands of seasoned SBA lenders with hundreds of thousands of employees and trillions of dollars in assets at their disposal.
In this post, we share our experience of how Amazon Textract helped support 80% of Kabbage’s PPP applicants to receive a fully automated lending experience and reduced approval times from multiple days to a median speed of 4 hours. By the end of the program, Kabbage became the second largest PPP lender in the nation by application volume, surpassing the major US banks—including Chase, the largest bank in America—serving over 297,000 small businesses, and preserving an estimated 945,000 jobs across America.
Implementing Amazon Textract
As one of the few PPP lenders that accepted applications from new customers, Kabbage saw an increased demand as droves of small businesses unable to apply with their long-standing bank turned to other lenders.
Businesses were required to upload documents from tax filings to proof of business documentation and forms of ID, and initially, all loans were underwritten manually. A human had to review, verify, and input values from various documents to substantiate the prescribed payroll calculation and subsequently submit the application to the SBA on behalf of the customer. However, in a matter of days, Kabbage had tens of thousands of small businesses submitting hundreds to thousands of documents that quickly climbed to millions. The task demanded automation.
We needed to break it down into parts. Our system already excelled at automating the verification processes commonly referred to as Know Your Business (KYB) and Know Your Customers (KYC), which allowed us to let net-new businesses in the door, totaling 97% of Kabbage’s PPP customers. Additionally, we needed to standardize the loan calculation process so we could automate document ingestion, verification, and review to extract only the appropriate values required to underwrite the loan.
To do so, we codified a loan calculation for different business types, including sole proprietors and independent contractors (which totaled 67% of our PPP customer base), around specific values found on various IRS forms. We bootstrapped an initial classifier for key IRS forms within 48 hours. The final hurdle was to accurately extract the values to issue loans compliant to the program. Amazon Textract was instrumental in getting over this final hurdle. We went from POC to full implementation within a week, and to full production within two weeks.
Integrating Amazon Textract into our pipelines was incredibly easy. Specifically, we used StartDocumentAnalysis and GetDocumentAnalysis, which allow us to interact with Amazon Textract asynchronously. We also found that the forms feature type (FeatureTypes) was well suited to processing tax documents. In the end, Amazon Textract was accurate, and it scaled to process a substantial backlog. After we finished integrating Amazon Textract, we were able to clear our backlog, and it remained a key step in our PPP flow through the end of the program.
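For readers curious what that integration looks like, the following is a minimal sketch (not Kabbage's production code) of the asynchronous Textract calls described above; the bucket, document key, and polling loop are illustrative, and a production system would typically consume an Amazon SNS notification instead of polling.

import time
import boto3

textract = boto3.client('textract')

# Placeholder S3 location of an uploaded tax document.
start_response = textract.start_document_analysis(
    DocumentLocation={'S3Object': {'Bucket': '<your-bucket>', 'Name': 'documents/tax-form.pdf'}},
    FeatureTypes=['FORMS']
)
job_id = start_response['JobId']

# Poll until the asynchronous job finishes.
while True:
    result = textract.get_document_analysis(JobId=job_id)
    if result['JobStatus'] in ('SUCCEEDED', 'FAILED'):
        break
    time.sleep(5)

# Key-value pairs extracted from the form appear as KEY_VALUE_SET blocks.
key_value_blocks = [b for b in result['Blocks'] if b['BlockType'] == 'KEY_VALUE_SET']
print('Found {} key/value blocks'.format(len(key_value_blocks)))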
Big impact on small businesses
For perspective, Kabbage customers accessed nearly $3 billion in working capital loans in 2019, driven by almost 60,000 new customers. In just 4 months, we delivered more than double the amount of funding ($7 billion) to roughly five times the number of new customers (297,000). With an average loan size of $23,000 and a median loan size of $12,700, over 90% of all PPP customers have 10 or fewer employees, representing businesses often most vulnerable to crises yet overlooked when seeking financial aid. Kabbage’s platform allowed it to serve the far-reaching and remote areas of the country, delivering loans in all 50 US states and territories, with one third of loans issued to businesses in zip codes with an average household income of less than $50,000.
We’re proud of what our team and technology accomplished, outperforming the nation’s largest banks with a fraction of the resources. For every 790 employees at a major US bank, Kabbage has one employee. Yet, we surpassed their volume of loans, serving nearly 300,000 of the smallest businesses in America for over $7 billion.
The path forward
At Kabbage, we always strive to find new data sources to enhance our cash flow platform to increase access to financial services to small businesses. Amazon Textract allowed us to add a new arrow to our quiver; we had never extracted values from tax filings prior to the PPP. It opens the opportunity for us to make our underwriting models more rich. This adds another viewpoint into the financial health and performance of small businesses when helping our customers access funding, and provides more insights into their cash flow to build a stronger business.
Conclusion
COVID-19 further revealed that the financial system in America underserves Main Street businesses, even though they represent 99% of all companies, half of all jobs, and half of the non-farm GDP. Technology can fix this. It requires creative solutions such as what we built and delivered for the PPP to fundamentally shift how customers expect to access financial services in the future.
Amazon Textract was an important function that allowed us to successfully become the second-largest PPP lender in the nation and fund so many small businesses when they needed it the most. We found the entire process of integrating the APIs into our workflow simple and straightforward, which allowed us to focus more time on ensuring more small businesses—the backbone of our economy—received critical funding when they needed it the most.
About the Author
Anthony Sabelli is the Head of Data Science for Kabbage, a data and technology company providing small businesses cash flow solutions. Anthony holds a Ph.D. from Cornell University and an undergraduate degree from Brown University, both in applied mathematics. At Kabbage, Anthony leads the global data science team, analyzing the more than two million live data connections from its small business customers to improve business performance and underwriting models.
Right-sizing resources and avoiding unnecessary costs in Amazon SageMaker
Amazon SageMaker is a fully managed service that allows you to build, train, deploy, and monitor machine learning (ML) models. Its modular design allows you to pick and choose the features that suit your use cases at different stages of the ML lifecycle. Amazon SageMaker offers capabilities that abstract the heavy lifting of infrastructure management and provides the agility and scalability you desire for large-scale ML activities with different features and a pay-as-you-use pricing model.
In this post, we outline the pricing model for Amazon SageMaker and offer some best practices on how you can optimize your cost of using Amazon SageMaker resources to effectively and efficiently build, train, and deploy your ML models. In addition, the post offers programmatic approaches for automatically stopping or detecting idle resources that are incurring costs, allowing you to avoid unnecessary charges.
Amazon SageMaker pricing
Machine Learning is an iterative process with different computational needs for prototyping the code and exploring the dataset, processing, training, and hosting the model for real-time and offline predictions. In a traditional paradigm, estimating the right amount of computational resources to support different workloads is difficult, and often leads to over-provisioning resources. The modular design of Amazon SageMaker offers flexibility to optimize the scalability, performance, and costs for your ML workloads depending on each stage of the ML lifecycle. For more information about how Amazon SageMaker works, see the following resources:
The following diagram is a simplified illustration of the modular design for each stage of the ML lifecycle. Each environment, called build, train (and tune), and deploy, uses separate compute resources with different pricing.
For more information about the costs involved in your ML journey on Amazon SageMaker, see Lowering total cost of ownership for machine learning and increasing productivity with Amazon SageMaker.
With Amazon SageMaker, you pay only for what you use. Pricing within Amazon SageMaker is broken down by ML stage: building, processing, training, and model deployment (or hosting). Each is explained further in this section.
Build environment
Amazon SageMaker offers two environments for building your ML models: SageMaker Studio Notebooks and on-demand notebook instances. Amazon SageMaker Studio is a fully integrated development environment for ML, using a collaborative, flexible, and managed Jupyter notebook experience. You can now access Amazon SageMaker Studio, the first fully integrated development environment (IDE), for free, and you only pay for the AWS services that you use within Studio. For more information, see Amazon SageMaker Studio Tour.
Prices for compute instances are the same for both Studio and on-demand instances, as outlined in Amazon SageMaker Pricing. With Studio, your notebooks and associated artifacts such as data files and scripts are persisted on Amazon Elastic File System (Amazon EFS). For more information about storage charges, see Amazon EFS Pricing.
An Amazon SageMaker on-demand notebook instance is a fully managed compute instance running the Jupyter Notebook app. Amazon SageMaker manages creating the instance and related resources. Notebooks contain everything needed to run or recreate an ML workflow. You can use Jupyter notebooks in your notebook instance to prepare and process data, write code to train models, deploy models to Amazon SageMaker hosting, and test or validate your models.
Processing
Amazon SageMaker Processing lets you easily run your preprocessing, postprocessing, and model evaluation workloads on a fully managed infrastructure. Amazon SageMaker manages the instances on your behalf, and launches the instances for the job and terminates the instances when the job is done. For more information, see Amazon SageMaker Processing – Fully Managed Data Processing and Model Evaluation.
Training and tuning
Depending on the size of your training dataset and how quickly you need the results, you can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. Amazon SageMaker manages these resources on your behalf, and provisions, launches, and then stops and terminates the compute resources automatically for the training jobs. With Amazon SageMaker training and tuning, you only pay for the time the instances were consumed for training. For more information, see Train and tune a deep learning model at scale.
Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify on a cluster of instances you define. Similar to training, you only pay for the resources consumed during the tuning time.
Deployment and hosting
You can perform model deployment for inference in two different ways (a brief sketch of both options follows this list):
- ML hosting for real-time inference – After you train your model, you can deploy it to get predictions in real time using a persistent endpoint with Amazon SageMaker hosting services
- Batch transform – You can use Amazon SageMaker batch transform to get predictions on an entire dataset offline
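The following is a minimal sketch of both options, assuming a previously trained estimator object; the instance types and S3 input path are illustrative, and the parameter names follow the SageMaker Python SDK v2.

# Real-time inference: a persistent endpoint that is billed per instance-hour until you delete it.
predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.large'
)
# ... call predictor.predict(payload) from your application ...
predictor.delete_endpoint()  # stop incurring charges when the endpoint is no longer needed

# Batch transform: transient instances that are billed only while the offline job runs.
transformer = estimator.transformer(
    instance_count=1,
    instance_type='ml.m5.large'
)
transformer.transform('s3://<your-bucket>/batch-input/', content_type='text/csv')
transformer.wait()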
The Amazon SageMaker pricing model
The following table summarizes the pricing model for Amazon SageMaker.
ML Stage | ML Compute Instance | Storage | Data Processing In/Out |
Build (On-Demand Notebook Instances) | Per instance-hour consumed while the notebook instance is running. | Cost per GB-month of provisioned storage. | No cost. |
Build (Studio Notebooks) | Per instance-hour consumed while the instance is running. | See Amazon Elastic File System (Amazon EFS) pricing. | No cost. |
Processing | Per instance-hour consumed for each instance while the processing job is running. | Cost per GB-month of provisioned storage. | No cost. |
Training and Tuning | On-Demand Instances: per instance-hour consumed for each instance, from the time an instance is available for use until it is terminated or stopped; each partial instance-hour consumed is billed per second. Spot Training: save up to 90% compared to On-Demand Instances by using managed Spot Training. | Cost per GB-month of provisioned storage. | No cost. |
Batch Transform | Per instance-hour consumed for each instance while the batch transform job is running. | No cost. | No cost. |
Deployment (Hosting) | Per instance-hour consumed for each instance while the endpoint is running. | Cost per GB-month of provisioned storage. | GB of data processed in and GB of data processed out of the endpoint instance. |
You can also get started with Amazon SageMaker with the free tier. For more information about pricing, see Amazon SageMaker Pricing.
Right-sizing compute resources for Amazon SageMaker notebooks, processing jobs, training, and deployment
With the pricing broken down based on time and resources you use in each stage of an ML lifecycle, you can optimize the cost of Amazon SageMaker and only pay for what you really need. In this section, we discuss general guidelines to help you choose the right resources for your Amazon SageMaker ML lifecycle.
Amazon SageMaker currently offers ML compute instances on the following instance families:
- T – General-purpose burstable performance instances (when you don’t need consistently high levels of CPU, but benefit significantly from having full access to very fast CPUs when you need them)
- M – General-purpose instances
- C – Compute-optimized instances (ideal for compute bound applications)
- R – Memory-optimized instances (designed to deliver fast performance for workloads that process large datasets in memory)
- P, G, and Inf – Accelerated compute instances (using hardware accelerators or co-processors)
- EIA – Inference acceleration instances (used for Amazon Elastic Inference)
Instance type consideration for a computational workload running on an Amazon SageMaker ML compute instance is no different than running on an Amazon Elastic Compute Cloud (Amazon EC2) instance. For more information about instance specifications, such as number of virtual CPU and amount of memory, see Amazon SageMaker Pricing.
Build environment
The Amazon SageMaker notebook instance environment is suitable for interactive data exploration, script writing, and prototyping of feature engineering and modeling. We recommend using notebooks with instances that are smaller in compute for interactive building and leaving the heavy lifting to ephemeral training, tuning, and processing jobs with larger instances, as explained in the following sections. This way, you don’t keep a large instance (or a GPU) constantly running with your notebook. This can help you minimize your build costs by selecting the right instance.
For the building stage, the size of an Amazon SageMaker on-demand notebook instance depends on the amount of data you need to load in-memory for meaningful exploratory data analyses (EDA) and the amount of computation required. We recommend starting small with general-purpose instances (such as T or M families) and scale up as needed.
The burstable T family of instances is ideal for notebook activity because computation comes in bursts when you run a cell, but you want full CPU power when it does. For example, ml.t2.medium is sufficient for most basic data processing, feature engineering, and EDA that deal with small datasets that can be held within 4 GB of memory. You can select an instance with larger memory capacity, such as ml.m5.12xlarge (192 GB memory), if you need to load significantly more data into memory for feature engineering. If your feature engineering involves heavy computational work (such as image processing), you can use one of the compute-optimized C family instances, such as ml.c5.xlarge.
The benefit of Studio notebooks over on-demand notebook instances is that with Studio, the underlying compute resources are fully elastic and you can change the instance on the fly, allowing you to scale the compute up and down as your compute demand changes, for example from ml.t3.medium to ml.g4dn.xlarge as your build compute demand increases, without interrupting your work or managing infrastructure. Moving from one instance to another is seamless, and you can continue working while the instance launches. With on-demand notebook instances, you need to stop the instance, update the setting, and restart with the new instance type.
To keep your build costs down, we recommend stopping your on-demand notebook instances or shutting down your Studio instances when you don’t need them. In addition, you can use AWS Identity and Access Management (IAM) condition keys as an effective way to restrict certain instance types, such as GPU instances, for specific users, thereby controlling costs. We go into more detail in the section Recommendations for avoiding unnecessary costs.
Processing environment
After you complete data exploration and prototyping with a subset of your data and are ready to apply the preprocessing and transformations to the entire dataset, you can launch an Amazon SageMaker Processing job with the processing script you authored during the EDA phase, without scaling up the relatively small notebook instance you have been using. Amazon SageMaker Processing dispatches everything needed to process the entire dataset, such as code, container, and data, to a compute infrastructure separate from the Amazon SageMaker notebook instance, and takes care of resource provisioning, data and artifact transfer, and shutdown of the infrastructure when the job finishes.
The benefit of using Amazon SageMaker Processing is that you only pay for the processing instances while the job is running. Therefore, you can take advantage of powerful instances without worrying too much about the cost. For example, as a general recommendation, you can use an ml.m5.4xlarge for medium jobs (MBs to GBs of data), an ml.c5.18xlarge for workloads requiring heavy computational capacity, or an ml.r5.8xlarge when you want to load multiple GBs of data in memory for processing, and only pay for the duration of the processing job. Sometimes, using a larger instance gets the job done quicker and ends up costing less overall.
Alternatively, for distributed processing, you can use a cluster of smaller instances by increasing the instance count. For this purpose, you can shard input objects by Amazon Simple Storage Service (Amazon S3) key by setting s3_data_distribution_type='ShardedByS3Key' inside a ProcessingInput so that each instance receives about the same number of input objects, which lets you use smaller instances in the cluster and can lead to cost savings. Furthermore, you can run the processing job asynchronously with .run(…, wait=False): you submit the job and get your notebook cell back immediately for other activities, leading to more efficient use of your build compute instance time.
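To make this concrete, the following is a minimal sketch of such a distributed, asynchronous Processing job using the SageMaker Python SDK; the bucket paths and the preprocessing.py script name are placeholders.
# A minimal sketch of a sharded, asynchronous Processing job.
# Bucket paths and the script name are placeholders.
from sagemaker import get_execution_role
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

processor = SKLearnProcessor(
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type='ml.m5.xlarge',   # several smaller instances instead of one large one
    instance_count=4,
)

processor.run(
    code='preprocessing.py',        # the script authored during the EDA phase
    inputs=[
        ProcessingInput(
            source='s3://your-bucket/raw-data/',
            destination='/opt/ml/processing/input',
            # Each instance receives roughly an equal share of the S3 objects
            s3_data_distribution_type='ShardedByS3Key',
        )
    ],
    outputs=[
        ProcessingOutput(
            source='/opt/ml/processing/output',
            destination='s3://your-bucket/processed-data/',
        )
    ],
    wait=False,   # return the notebook cell immediately; the job runs remotely
)
Because the job runs on its own ephemeral cluster, the notebook instance itself can stay small (or be stopped) while the data is processed.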
Training and tuning environment
The same compute paradigm and benefits of Amazon SageMaker Processing apply to Amazon SageMaker Training and Tuning. When you use fully managed Amazon SageMaker Training, it dispatches everything needed for a training job, such as code, container, and data, to a compute infrastructure separate from the Amazon SageMaker notebook instance. Therefore, your training jobs aren’t limited by the compute resources of the notebook instance. The Amazon SageMaker Python SDK also supports asynchronous training when you call .fit(…, wait=False). You get your notebook cell back immediately for other activities, such as calling .fit() again for another training job on a different ML compute instance for profiling purposes, or with a variation of the hyperparameter settings for experimentation. Because ML training is often a compute-intensive and time-consuming part of the ML lifecycle, and training jobs run asynchronously on remote compute infrastructure, you can safely shut down the notebook instance for cost-optimization purposes if starting a training job is the last task of your day. We discuss how to automatically shut down unused, idle on-demand notebook instances in the section Recommendations for avoiding unnecessary costs.
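For illustration, here is a minimal sketch of launching a training job asynchronously with the SageMaker Python SDK, using the built-in XGBoost container; the S3 paths and hyperparameters are placeholders.
# A minimal sketch of an asynchronous training job. Paths and hyperparameters
# are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Built-in XGBoost container (a CPU-based, memory-bound algorithm)
image_uri = sagemaker.image_uris.retrieve('xgboost', session.boto_region_name, version='1.2-1')

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type='ml.m5.2xlarge',   # general-purpose instance with ample memory
    output_path='s3://your-bucket/model-artifacts/',
    hyperparameters={'objective': 'reg:squarederror', 'num_round': 100},
)

# wait=False returns control to the notebook immediately; the job keeps
# running remotely even if the notebook instance is stopped afterwards.
estimator.fit(
    {'train': TrainingInput('s3://your-bucket/train/', content_type='text/csv')},
    wait=False,
)
You can later reattach to the job with Estimator.attach(training_job_name) if you want to inspect it from a new notebook session.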
Cost-optimization factors that you need to consider when selecting instances for training include the following:
- Instance family – What type of instance is suitable for the training? You need to optimize for the overall cost of training; sometimes selecting a larger instance leads to much faster training and therefore a lower total cost. Also consider whether the algorithm can even utilize a GPU instance.
- Instance size – What is the minimum compute and memory capacity your algorithm requires to run the training? Can you use distributed training?
- Instance count – If you can use distributed training, what instance type (CPU or GPU) can you use in the cluster, and how many?
As for the choice of instance type, you could base your decision on what algorithms or frameworks you use for the workload. If you use the Amazon SageMaker built-in algorithms, which give you a head start without any sophisticated programming, see Instance types for built-in algorithms for detailed guidelines. For example, XGBoost currently only trains using CPUs. It is a memory-bound (as opposed to compute-bound) algorithm, so a general-purpose compute instance (for example, M5) is a better choice than a compute-optimized instance (for example, C4).
Furthermore, we recommend having enough total memory in the selected instances to hold the training data. Although XGBoost supports the use of disk space to handle data that doesn’t fit into main memory (the out-of-core feature available with the libsvm input mode), writing cache files onto disk slows the algorithm’s processing time. For the object detection algorithm, we support the following GPU instances for training:
- ml.p2.xlarge
- ml.p2.8xlarge
- ml.p2.16xlarge
- ml.p3.2xlarge
- ml.p3.8xlarge
- ml.p3.16xlarge
We recommend using GPU instances with more memory for training with large batch sizes. You can also run the algorithm on multi-GPU and multi-machine settings for distributed training.
If you’re bringing your own algorithms with script mode or with custom containers, you first need to clarify whether the framework or algorithm supports CPU, GPU, or both in order to decide the instance type to run the workload. For example, scikit-learn doesn’t support GPU, so training on accelerated compute instances doesn’t result in any material gain in runtime but leads to overpaying for the instance. To determine the instance type and, if training in a distributed fashion, the number of instances for your workload, it’s highly recommended to profile your jobs to find the sweet spot between number of instances and runtime, which translates to cost. For more information, see Amazon Web Services achieves fastest training times for BERT and Mask R-CNN. You should also find the balance between instance type, number of instances, and runtime. For more information, see Train ALBERT for natural language processing with TensorFlow on Amazon SageMaker.
When it comes to the GPU-powered P and G families of instances, you need to consider the differences between them. For example, P3 GPU compute instances are designed to handle large distributed training jobs for the fastest time to train, whereas G4 instances are suitable for cost-effective, small-scale training jobs.
Another factor to consider in training is that you can select from either On-Demand Instances or Spot Instances. On-demand ML instances for training let you pay for ML compute capacity based on the time the instance is consumed, at on-demand rates. However, for jobs that can be interrupted or don’t need to start and stop at specific times, you can choose managed Spot Instances (Managed Spot Training). Amazon SageMaker can reduce the cost of training models by up to 90% over On-Demand Instances, and manages the Spot interruptions on your behalf.
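As a minimal sketch of how this looks in the SageMaker Python SDK (reusing the placeholder bucket and built-in XGBoost container from the earlier example), Managed Spot Training is enabled with a few estimator parameters:
# A minimal sketch of Managed Spot Training. max_wait must be at least
# max_run; the checkpoint path is a placeholder and lets SageMaker resume
# training after a Spot interruption (for algorithms that support checkpointing).
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()
image_uri = sagemaker.image_uris.retrieve('xgboost', session.boto_region_name, version='1.2-1')

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type='ml.m5.2xlarge',
    output_path='s3://your-bucket/model-artifacts/',
    use_spot_instances=True,       # request Spot capacity instead of On-Demand
    max_run=3600,                  # cap on actual training time, in seconds
    max_wait=7200,                 # cap on training time plus time spent waiting for Spot capacity
    checkpoint_s3_uri='s3://your-bucket/checkpoints/',
)

estimator.fit({'train': TrainingInput('s3://your-bucket/train/', content_type='text/csv')})
The training job details report billable time, so you can compare the actual cost against the equivalent On-Demand run.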
Deployment/hosting environment
In many cases, up to 90% of the infrastructure spend for developing and running an ML application is on inference, making the need for high-performance, cost-effective ML inference infrastructure critical. This is mainly because the build and training jobs aren’t frequent and you only pay for the duration of build and training, but an endpoint instance is running all the time (while the instance is in service). Therefore, selecting the right way to host and the right type of instance can have a large impact on the total cost of ML projects.
For model deployment, it’s important to work backwards from your use case. What is the frequency of the prediction? Do you expect live traffic to your application and real-time response to your clients? Do you have many models trained for different subsets of data for the same use case? Does the prediction traffic fluctuate? Is latency of inference a concern?
There are hosting options from Amazon SageMaker for each of these situations. If your inference data comes in batches, Amazon SageMaker batch transform is a cost-effective way to obtain predictions with fully managed infrastructure provisioning and tear-down. If you have trained multiple models for a single use case, a multi-model endpoint is a great way to save cost on hosting ML models that are trained on a per-user or per-segment basis. For more information, see Save on inference costs by using Amazon SageMaker multi-model endpoints.
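As an example of the batch option, the following is a minimal sketch of a batch transform job with the SageMaker Python SDK; the model name, paths, and content type are placeholders.
# A minimal sketch of batch inference with batch transform; you only pay for
# the instances while the job processes the input. Names and paths are placeholders.
import sagemaker
from sagemaker.transformer import Transformer

transformer = Transformer(
    model_name='my-trained-model',              # an existing SageMaker model
    instance_count=1,
    instance_type='ml.m5.xlarge',
    output_path='s3://your-bucket/batch-predictions/',
    sagemaker_session=sagemaker.Session(),
)

transformer.transform(
    data='s3://your-bucket/batch-input/',
    content_type='text/csv',
    split_type='Line',      # send one CSV line per inference request
)
# The transform instances are provisioned for the job and torn down when it finishes.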
After you decide how to host your models, load testing is the best practice to determine the appropriate instance type and fleet size, with or without autoscaling for your live endpoint to avoid over-provisioning and paying extra for capacity you don’t need. Algorithms that train most efficiently on GPUs might not benefit from GPUs for efficient inference. It’s important to load test to determine the most cost-effective solution. The following flowchart summarizes the decision process.
Amazon SageMaker offers different options for instance families that you can use for inference, from general-purpose instances to compute-optimized and GPU-powered instances. Each family is optimized for a different application, and not all instance types are suitable for inference jobs. For example, Amazon Inf1 instances offer high throughput and low latency and have the lowest cost per inference in the cloud. G4 instances offer the lowest cost per inference among GPU instances, with strong performance and low latency. P3 instances, in contrast, are optimized for training and designed to handle large distributed training jobs for the fastest time to train, so they are typically not fully utilized for inference.
Another way to lower inference cost is to use Elastic Inference for cost savings of up to 75% on inference jobs. Picking an instance type and size for inference may not be easy, given the many factors involved. For example, for larger models, the inference latency of CPUs may not meet the needs of online applications, while the cost of a full-fledged GPU may not be justified. In addition, resources like RAM and CPU may be more important to the overall performance of your application than raw inference speed. With Elastic Inference, you attach just the right amount of GPU-powered inference acceleration to any Amazon compute instance. This is also available for Amazon SageMaker notebook instances and endpoints, bringing acceleration to built-in algorithms and to deep learning environments. This lets you select the best price/performance ratio for your application. For example, an ml.c5.large instance configured with eia1.medium acceleration costs about 75% less than an ml.p2.xlarge, but with only 10–15% slower performance. For more information, see Amazon Elastic Inference – GPU-Powered Deep Learning Inference Acceleration.
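To illustrate, here is a minimal sketch of creating an endpoint that pairs a CPU instance with an Elastic Inference accelerator, using boto3; the model, endpoint, and configuration names are placeholders, and the model is assumed to be packaged with an Elastic Inference-enabled framework container.
# A minimal sketch of hosting on a CPU instance with an attached Elastic
# Inference accelerator. All names are placeholders.
import boto3

sm = boto3.client('sagemaker')

sm.create_endpoint_config(
    EndpointConfigName='my-eia-endpoint-config',
    ProductionVariants=[{
        'VariantName': 'AllTraffic',
        'ModelName': 'my-existing-model',       # model built from an EI-enabled container
        'InitialInstanceCount': 1,
        'InstanceType': 'ml.c5.large',          # inexpensive CPU host
        'AcceleratorType': 'ml.eia1.medium',    # fractional GPU acceleration for inference
    }],
)

sm.create_endpoint(
    EndpointName='my-eia-endpoint',
    EndpointConfigName='my-eia-endpoint-config',
)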
In addition, you can use Auto Scaling for Amazon SageMaker to add and remove capacity or accelerated instances to your endpoints automatically, whenever needed. With this feature, instead of having to closely monitor inference volume and change the endpoint configuration in response, your endpoint automatically adjusts the number of instances up or down in response to actual workloads, determined by using Amazon CloudWatch metrics and target values defined in the policy. For more information, see AWS Auto Scaling.
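For example, the following is a minimal sketch of configuring target-tracking autoscaling on an endpoint variant with the Application Auto Scaling API; the endpoint name, variant name, policy name, capacities, and target value are placeholders you would tune from load testing.
# A minimal sketch of endpoint autoscaling on the InvocationsPerInstance metric.
# Names, capacities, and the target value are placeholders.
import boto3

autoscaling = boto3.client('application-autoscaling')
resource_id = 'endpoint/my-endpoint/variant/AllTraffic'

autoscaling.register_scalable_target(
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    MinCapacity=1,
    MaxCapacity=4,
)

autoscaling.put_scaling_policy(
    PolicyName='invocations-target-tracking',
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 1000.0,    # desired invocations per instance per minute
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance'
        },
        'ScaleInCooldown': 300,   # seconds to wait before scaling in again
        'ScaleOutCooldown': 60,   # seconds to wait before scaling out again
    },
)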
Recommendations for avoiding unnecessary costs
Certain Amazon SageMaker resources (such as processing, training, tuning, and batch transform instances) are ephemeral: Amazon SageMaker launches them automatically and terminates them when the job is done. However, other resources (such as build compute resources or hosting endpoints) aren’t ephemeral, and you control when they are stopped or terminated. Therefore, knowing how to identify idle resources and stop them can lead to better cost optimization. This section outlines some useful methods for automating these processes.
Build environment: Automatically stopping idle on-demand notebook instances
One way to avoid the cost of idle notebook instances is to automatically stop idle instances using lifecycle configurations. With a lifecycle configuration in Amazon SageMaker, you can customize your notebook environment by installing packages or sample notebooks on your notebook instance, configuring networking and security for it, or otherwise using a shell script to customize it. This flexibility gives you more control over how your notebook environment is set up and run.
AWS maintains a public repository of notebook lifecycle configuration scripts that address common use cases for customizing notebook instances, including a sample bash script for stopping idle notebooks.
You can configure your notebook instance using a lifecycle configuration to automatically stop itself if it’s idle for a certain period of time (a parameter that you set). The idle state for a Jupyter notebook is defined in the following GitHub issue. To create a new lifecycle configuration for this purpose, follow these steps:
- On the Amazon SageMaker console, choose Lifecycle configurations.
- Choose Create a new lifecycle configuration (if you are creating a new one).
- For Name, enter a name using alphanumeric characters and hyphens (-), but no spaces. The name can have a maximum of 63 characters; for example, Stop-Idle-Instance.
- To create a script that runs when you create the notebook and every time you start it, choose Start notebook.
- In the Start notebook editor, enter the script.
- Choose Create configuration.
The bash script to use for this purpose is available in the AWS Samples repository for lifecycle configuration samples. The script runs a cron job that stops the instance after a specific period of idle time, defined by the IDLE_TIME parameter in the script. You can change this time to your preference and edit the script as needed on the Lifecycle configuration page.
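If you prefer to create the lifecycle configuration programmatically instead of on the console, the following is a minimal boto3 sketch; it assumes the auto-stop bash script is saved locally as on-start.sh (a placeholder file name), and the script content must be base64-encoded.
# A minimal sketch of creating the lifecycle configuration with boto3.
# The local file name on-start.sh is a placeholder for the auto-stop script.
import base64
import boto3

sm = boto3.client('sagemaker')

with open('on-start.sh', 'rb') as f:
    on_start_script = base64.b64encode(f.read()).decode('utf-8')

sm.create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName='Stop-Idle-Instance',
    OnStart=[{'Content': on_start_script}],
)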
For this script to work, the notebook should meet these two criteria:
- The notebook instance has internet connectivity to fetch the example config Python script (autostop.py) from the public repository
- The notebook instance execution role has permission for SageMaker:StopNotebookInstance to stop the notebook and SageMaker:DescribeNotebookInstance to describe the notebook
If you create notebook instances in a VPC that doesn’t allow internet connectivity, you need to add the Python script inline in the bash script. The script is available on the GitHub repo. Enter it in your bash script as follows, and use this for lifecycle configuration instead:
#!/bin/bash
set -e
# PARAMETERS
IDLE_TIME=3600
echo "Creating the autostop.py"
cat << EOF > autostop.py
##
## [PASTE PYTHON SCRIPT FROM GIT REPO HERE]
##
EOF
echo "Starting the SageMaker autostop script in cron"
(crontab -l 2>/dev/null; echo "*/5 * * * * /usr/bin/python $PWD/autostop.py --time $IDLE_TIME --ignore-connections") | crontab -
The following screenshot shows how to choose the lifecycle configuration on the Amazon SageMaker console.
Alternatively, you can store the script on Amazon S3 and connect to the script through a VPC endpoint. For more information, see New – VPC Endpoint for Amazon S3.
Now that you have created the lifecycle configuration, you can assign it to your on-demand notebook instance when creating a new one or when updating an existing notebook. To create a notebook with your lifecycle configuration (for this post, Stop-Idle-Instance), you need to assign the script to the notebook under the Additional configuration section. All other steps are the same as outlined in Create an On-Demand Notebook Instance. To attach the lifecycle configuration to an existing notebook, you first need to stop the on-demand notebook instance and choose Update settings to make changes to the instance. You attach the lifecycle configuration in the Additional configuration section.
Build environment: Scheduling start and stop of on-demand notebook instances
Another approach is to schedule your notebooks to start and stop at specific times. For example, if you want to start your notebooks (such as notebooks of specific groups or all notebooks in your account) at 7:00 AM and stop all of them at 9:00 PM during weekdays (Monday through Friday), you can accomplish this by using Amazon CloudWatch Events and AWS Lambda functions. For more information about configuring your Lambda functions, see Configuring functions in the AWS Lambda console. To build the schedule for this use case, you can follow the steps in the following sections.
Starting notebooks with a Lambda function
To start your notebooks with a Lambda function, complete the following steps:
- On the Lambda console, create a Lambda function for starting on-demand notebook instances with specific keywords in their name. For this post, our development team’s on-demand notebook instances have names starting with dev-.
- Use Python as the runtime for the function, and name the function start-dev-notebooks.
Your Lambda function should have the SageMakerFullAccess policy attached to its execution IAM role.
- Enter the following script into the Function code editing area:
# Code to start stopped notebook instances that contain specific keywords in their name
# Change "dev-" in NameContains to your specific use case
import boto3

client = boto3.client('sagemaker')

def lambda_handler(event, context):
    try:
        # Find stopped notebook instances whose names contain the keyword
        response_nb_list = client.list_notebook_instances(
            NameContains='dev-',  # Change this to your specific use case
            StatusEquals='Stopped'
        )
        # Start each matching notebook instance
        for nb in response_nb_list['NotebookInstances']:
            client.start_notebook_instance(
                NotebookInstanceName=nb['NotebookInstanceName']
            )
        return {"Status": "Success"}
    except Exception:
        return {"Status": "Failure"}
- Under Basic Settings, change Timeout to 15 minutes (max).
This step makes sure the function has the maximum allowable timeout range during stopping and starting multiple notebooks.
- Save your function.
Stopping notebooks with a Lambda function
To stop your notebooks with a Lambda function, follow the same steps, use the following script, and name the function stop-dev-notebooks:
# Code to stop InService notebook instances that contain specific keywords in their name
# Change "dev-" in NameContains to your specific use case
import boto3

client = boto3.client('sagemaker')

def lambda_handler(event, context):
    try:
        # Find InService notebook instances whose names contain the keyword
        response_nb_list = client.list_notebook_instances(
            NameContains='dev-',  # Change this to your specific use case
            StatusEquals='InService'
        )
        # Stop each matching notebook instance
        for nb in response_nb_list['NotebookInstances']:
            client.stop_notebook_instance(
                NotebookInstanceName=nb['NotebookInstanceName']
            )
        return {"Status": "Success"}
    except Exception:
        return {"Status": "Failure"}
Creating a CloudWatch event
Now that you have created the functions, you need to create an event to trigger these functions on a specific schedule.
We use cron expression format for the schedule. For more information about creating your custom cron expression, see Schedule Expressions for Rules. All scheduled events use UTC time zone, and the minimum precision for schedules is 1 minute.
For example, the cron expression for 7:00 AM Monday through Friday throughout the year is 0 7 ? * MON-FRI *, and for 9:00 PM on the same days it is 0 21 ? * MON-FRI *.
To create the event for stopping your instances on a specific schedule, complete the following steps:
- On the CloudWatch console, under Events, choose Rules.
- Choose Create rule.
- Under Event Source, select Schedule, and then select Cron expression.
- Enter your cron expression (for example, 0 21 ? * MON-FRI * for 9:00 PM Monday through Friday).
- Under Targets, choose Lambda function.
- Choose your function from the list (for this post, stop-dev-notebooks).
- Choose Configure details.
- Add a name for your event, such as Stop-Notebooks-Event, and a description.
- Leave Enabled selected.
- Choose Create.
You can follow the same steps to create a scheduled event that starts your notebooks on a schedule, such as 7:00 AM on weekdays, so that when your staff start their day, the notebooks are ready and in service.
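If you prefer to set up the schedule programmatically rather than on the console, the following is a minimal boto3 sketch that creates the 9:00 PM rule and wires it to the stop-dev-notebooks function; the rule name, statement ID, and the function ARN (including the account ID) are placeholders.
# A minimal sketch of creating the schedule with boto3. The function ARN,
# rule name, and statement ID are placeholders.
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:stop-dev-notebooks'

# Rule that fires at 9:00 PM UTC, Monday through Friday
rule_arn = events.put_rule(
    Name='Stop-Notebooks-Event',
    ScheduleExpression='cron(0 21 ? * MON-FRI *)',
    State='ENABLED',
)['RuleArn']

# Allow CloudWatch Events to invoke the Lambda function
lambda_client.add_permission(
    FunctionName='stop-dev-notebooks',
    StatementId='allow-events-stop-notebooks',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn,
)

# Point the rule at the function
events.put_targets(
    Rule='Stop-Notebooks-Event',
    Targets=[{'Id': 'stop-dev-notebooks', 'Arn': function_arn}],
)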
Hosting environment: Automatically detecting idle Amazon SageMaker endpoints
You can deploy your ML models as endpoints to test the model for real-time inference. Sometimes these endpoints are accidentally left in service, leading to ongoing charges on the account. You can automatically detect these endpoints and take corrective actions (such as deleting them) by using CloudWatch Events and Lambda functions. For example, you can detect endpoints that have been idle for a certain period, such as no invocations over the past 24 hours. The function script we provide in this section detects idle endpoints and publishes a message to an Amazon Simple Notification Service (Amazon SNS) topic with the list of idle endpoints. You can subscribe the account admins to this topic, and they receive emails with the list of idle endpoints when detected. To create this scheduled event, follow these steps:
- Create an SNS topic and subscribe your email or phone number to it.
- Create a Lambda function with the following script.
- Your Lambda function should have the following policies attached to its IAM execution role: CloudWatchReadOnlyAccess, AmazonSNSFullAccess, and AmazonSageMakerReadOnly.
import boto3
from datetime import datetime
from datetime import timedelta

def lambda_handler(event, context):
    idle_threshold_hr = 24  # Change this to your threshold in hours

    cw = boto3.client('cloudwatch')
    sm = boto3.client('sagemaker')
    sns = boto3.client('sns')

    try:
        # List endpoints that are currently in service
        inservice_endpoints = sm.list_endpoints(
            SortBy='CreationTime',
            SortOrder='Ascending',
            MaxResults=100,
            # NameContains='string',  # for example 'dev-'
            StatusEquals='InService'
        )

        idle_endpoints = []
        for ep in inservice_endpoints['Endpoints']:
            ep_describe = sm.describe_endpoint(
                EndpointName=ep['EndpointName']
            )

            # Sum the Invocations metric over the idle threshold window
            metric_response = cw.get_metric_statistics(
                Namespace='AWS/SageMaker',
                MetricName='Invocations',
                Dimensions=[
                    {
                        'Name': 'EndpointName',
                        'Value': ep['EndpointName']
                    },
                    {
                        'Name': 'VariantName',
                        'Value': ep_describe['ProductionVariants'][0]['VariantName']
                    }
                ],
                StartTime=datetime.utcnow() - timedelta(hours=idle_threshold_hr),
                EndTime=datetime.utcnow(),
                Period=int(idle_threshold_hr * 60 * 60),
                Statistics=['Sum'],
                Unit='None'
            )

            # No datapoints means no invocations in the window, so the endpoint is idle
            if len(metric_response['Datapoints']) == 0:
                idle_endpoints.append(ep['EndpointName'])

        if len(idle_endpoints) > 0:
            sns.publish(
                TopicArn='YOUR SNS TOPIC ARN HERE',
                Message="The following endpoints have been idle for over {} hrs. "
                        "Log on to the Amazon SageMaker console to take action.\n\n{}".format(
                            idle_threshold_hr, '\n'.join(idle_endpoints)),
                Subject='Automated Notification: Idle Endpoints Detected',
                MessageStructure='string'
            )

        return {'Status': 'Success'}
    except Exception:
        return {'Status': 'Fail'}
You can also revise this code to filter the endpoints based on resource tags. For more information, see AWS Python SDK Boto3 documentation.
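For example, a minimal sketch of a tag-based exemption you could call inside the loop above, right after the describe_endpoint call; the keep-alive tag key and value are placeholders.
# A minimal sketch: skip endpoints that carry a "keep-alive" tag so they are
# never flagged as idle. The tag key and value are placeholders.
import boto3

def is_exempt(sm_client, endpoint_arn, tag_key='keep-alive', tag_value='true'):
    """Return True if the endpoint is tagged to be excluded from idle detection."""
    tags = sm_client.list_tags(ResourceArn=endpoint_arn)['Tags']
    return any(t['Key'] == tag_key and t['Value'] == tag_value for t in tags)

# Inside the loop, after describe_endpoint:
#     if is_exempt(sm, ep_describe['EndpointArn']):
#         continue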
Investigating endpoints
This script sends an email (or text message, depending on how the SNS topic is configured) with the list of detected idle endpoints. You can then sign in to the Amazon SageMaker console and investigate those endpoints, and delete them if you find them to be unused stray endpoints. To do so, complete the following steps:
- On the Amazon SageMaker console, under Inference, choose Endpoints.
You can see the list of all endpoints on your account in that Region.
- Select the endpoint that you want to investigate, and under Monitor, choose View invocation metrics.
- Under All metrics, select Invocations.
You can see the invocation activities on the endpoint. If you notice no invocation events (or activity) for the duration of interest, it means the endpoint isn’t in use and you can delete it.
- When you’re confident you want to delete the endpoint, go back to the list of endpoints, select the endpoint you want to delete, and under the Actions menu, choose Delete.
Conclusion
This post walked you through how Amazon SageMaker pricing works, best practices for right-sizing Amazon SageMaker compute resources for different stages of an ML project, and best practices for avoiding unnecessary costs of unused resources by either automatically stopping idle on-demand notebook instances or automatically detecting idle Amazon SageMaker endpoints so you can take corrective actions.
By understanding how Amazon SageMaker works and the pricing model for Amazon SageMaker resources, you can take steps in optimizing your total cost of ML projects even further.
About the authors
Nick Minaie is an Artificial Intelligence and Machine Learning (AI/ML) Specialist Solution Architect, helping customers on their journey to well-architected machine learning solutions at scale. In his spare time, Nick enjoys family time, abstract painting, and exploring nature.
Michael Hsieh is a Senior AI/ML Specialist Solutions Architect. He works with customers to advance their ML journey with a combination of AWS ML offerings and his ML domain knowledge. As a Seattle transplant, he loves exploring the great mother nature the city has to offer such as the hiking trails, scenic kayaking in the SLU, and the sunset at the Shilshole Bay.