Learn how the winner of the AWS DeepComposer Chartbusters Keep Calm and Model On challenge used Transformer algorithms to create music

AWS is excited to announce the winner of the AWS DeepComposer Chartbusters Keep Calm and Model On challenge, Nari Koizumi. AWS DeepComposer gives developers a creative way to get started with machine learning (ML) by creating an original piece of music in collaboration with artificial intelligence (AI). In June 2020, we launched Chartbusters, a global competition where developers use AWS DeepComposer to create original AI-generated compositions and compete to showcase their ML skills. The Keep Calm and Model On challenge, which ran from December 2020 to January 2021, challenged developers to use the newly launched Transformers algorithm to extend an input melody by up to 30 seconds and create new and interesting musical scores.

We interviewed Nari to learn more about his experience competing in the Keep Calm and Model On Chartbusters challenge, and asked him to tell us more about how he created his winning composition.

Learning about AWS DeepComposer

Nari currently works in the TV and media industry and describes himself as a creator. Before getting started with AWS DeepComposer, Nari had no prior ML experience.

“I have no educational background in machine learning, but I’m an artist and creator. I always look for artificial intelligence services for creative purposes. I’m working on a project, called Project 52, which is about making artwork every day. I always set a theme each month, and this month’s theme was about composition and audio visualization.”

Nari discovered AWS DeepComposer when he was gathering ideas for his new project.

“I was searching one day for ‘AI composition music’, and that’s how I found out about AWS DeepComposer. I knew that AWS had many, many services and I was surprised that AWS was doing something with entertainment and AI.”

Nari at his work station.

Building in AWS DeepComposer

Nari saw AWS DeepComposer as an opportunity to see how he could combine his creative side with his interest in learning more about AI. To get started, Nari first played around in the AWS DeepComposer Music Studio and used the learning capsules provided to understand the generative AI models offered by AWS DeepComposer.

“I thought AWS DeepComposer was very easy to use and make music. I checked through all the learning capsules and pages to help get started.”

For the Keep Calm and Model On Chartbusters challenge, participants were challenged to use the newly launched Transformers algorithm, which can extend an input melody by up to 30 seconds. The Transformer is a state-of-the-art architecture that works on sequential data, for tasks such as predicting stock prices or natural language tasks such as translation. Learn more about the Transformer technique in the learning capsule provided on the AWS DeepComposer console.

“I used my keyboard and connected it to the Music Studio, and made a short melody and recorded in the Music Studio. What’s interesting is you can extend your own melody using Transformers and it will make a 30-second song from only 5 seconds of input. That was such an interesting moment for me; how I was able to input a short melody, and AI created the rest of the song.”

The Transformers feature used in Nari’s composition in the AWS DeepComposer Music Studio.

After playing around with his keyboard, Nari chose one of the input melodies. The Transformers model allows developers to experiment with parameters such as creative risk, track length, and note length.

“I chose one of the melodies provided, and then played around with a couple parameters. I made seven songs, and tweaked until I liked the final output. You can also export the MIDI file and continue to play around with parts of the song. That was a fun part, because I exported the file and continued to play with the melody to customize with other instruments. It was so much fun playing around and making different sounds.”

Nari composing his melody.

You can listen to Nari’s winning composition “P.S. No. 11 Ext.” on the AWS DeepComposer SoundCloud page. Check out Nari’s Instagram, where he created audio visualization to one of the tracks he created using AWS DeepComposer.

Conclusion

Nari found competing in the challenge to be a rewarding experience because he was able to go from no experience in ML to developing an understanding of generative AI in less than an hour.

“What’s great about AWS DeepComposer is it’s easy to use. I think AWS has so many services and many can be hard or intimidating to get started with for those who aren’t programmers. When I first found out about AWS DeepComposer, I knew it was exciting. But at the same time, I thought it was AWS and I’m not an engineer and I wasn’t sure if I had the knowledge to get started. But even the setup was super easy, and it took only 15 minutes to get started, so it was very easy to use.”

Nari is excited to see how AI will continue to transform the creative industry.

“Even though I’m not an engineer or programmer, I know that AI has huge potential for creative purposes. I think it’s getting more interesting in creating artwork with AI. There’s so much potential with AI not just within music, but also in the media world in general. It’s a pretty exciting future.”

By participating in the challenge, Nari hopes that he will inspire future participants to get started in ML.

“I’m on the creative side, so I hope I can be a good example that someone who’s not an engineer or programmer can create something with AWS DeepComposer. Try it out, and you can do it!”

Congratulations to Nari for his well-deserved win!

We hope Nari’s story inspired you to learn more about ML and AWS DeepComposer. Check out the new skill-based AWS DeepComposer Chartbusters challenge and start composing today.


About the Authors

Paloma Pineda is a Product Marketing Manager for AWS Artificial Intelligence Devices. She is passionate about the intersection of technology, art, and human centered design. Out of the office, Paloma enjoys photography, watching foreign films, and cooking French cuisine.

Read More

Speed up YOLOv4 inference to twice as fast on Amazon SageMaker

Machine learning (ML) models have been deployed successfully across a variety of use cases and industries, but due to the high computational complexity of recent ML models such as deep neural networks, inference deployments have been limited by performance and cost constraints. To add to the challenge, preparing a model for inference involves packaging the model in the right format and optimizing the model for each target hardware such as CPU, GPU, or AWS Inferentia. ML acceleration technologies have evolved to close the gap between productivity-focused ML frameworks and performance-oriented and efficiency-oriented hardware backends. However, optimizing a model for target hardware still involves assembling a complex tool chain of framework-specific converters and hardware-specific compilers, each with their own dependencies and configuration choices that can be difficult to understand, and then using it to compile the model.

Amazon SageMaker is a fully managed service that enables data scientists and developers to build, train, and deploy ML models at 50% lower total cost of ownership than self-managed deployments on Amazon Elastic Compute Cloud (Amazon EC2). Amazon SageMaker Neo is a capability of SageMaker that automatically compiles ML models for any ML framework and to any target hardware. With Neo, you don’t need to set up third-party or framework-specific compiler software, or tune the model manually for optimizing inference performance. We’re continually updating Neo to support more operators and expand model coverage for frameworks, including TensorFlow, PyTorch, XGBoost, MXNet, Darknet, and ONNX.

In this post, we show you how to deploy a PyTorch YOLOv4 model on a SageMaker ML CPU-based instance. You download a pre-trained model artifact, compile your pre-trained model using Neo, set up a SageMaker endpoint for both compiled and uncompiled model versions, and benchmark performance to evaluate latency, comparing a compiled and uncompiled YOLOv4 model on the same instance.

In our performance comparison, deploying YOLOv4 with Neo improved performance on SageMaker ML instances. Benchmark testing on a SageMaker ML c5.9xlarge instance showed that the Neo compiled model achieved roughly half the latency of an uncompiled model running on the same instance type; in other words, inference was twice as fast.

You Only Look Once

Object detection stands out as a computer vision (CV) task that has seen large accuracy improvements due to deep learning (DL) model architectures. An object detection model tries to localize and classify objects in an image, allowing for applications ranging from real-time inspection of manufacturing defects to medical imaging.

YOLO (You Only Look Once) is part of the DL single-stage object detection model family, which includes models such as Single Shot Detector (SSD) and RetinaNet. These models are built by stacking neural networks (backbone, neck, and head) that together perform detection and classification tasks. The prediction outputs are bounding boxes with confidence scores for identified objects and associated classes.

The backbone network takes care of extracting features of the input image, while the head gets trained on a supervised prediction task to predict the edges of the bounding box and classify its contents. The addition of a neck neural network allows the head network to process features from intermediate steps of the backbone. The whole pipeline processes the images only once, hence the name You Only Look Once.

Single-stage models can produce multiple predictions for the same object in a single image. These predictions get disambiguated by a process called non-maximal suppression (NMS), which keeps only the highest-probability bounding box among each group of significantly overlapping detections. It’s a less computationally expensive workflow than the two-stage approach and is commonly used in real-time inference. With YOLOv4, you can achieve real-time inference above the human perception threshold of around 30 frames per second (FPS). In this post, you explore ways to push the performance of this model even further using Neo as an accelerator for real-time object detection.
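To make the NMS step concrete, the following is a minimal sketch of greedy NMS for axis-aligned boxes. It is illustrative only; the packaged YOLOv4 artifact used later in this post applies its own NMS logic.

import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximal suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]  # process highest-confidence boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the kept box with the remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Suppress candidates that overlap the kept box too much
        order = order[1:][iou <= iou_threshold]
    return keep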

Prerequisites

For this walkthrough, you need an AWS account and an environment running Python 3.x.

Setup

First, we need to ensure we have SageMaker Python SDK 1.x and import the necessary Python packages. If you’re using SageMaker notebook instances, select conda_pytorch_p36 as your kernel. You may have to restart your kernel after upgrading packages. Use the following code to import your packages:

import numpy as np
import time
import json
import requests
import boto3
import os
import sagemaker

Next, we get the AWS Identity and Access Management (IAM) execution role and a few other SageMaker-specific variables from our notebook environment, so that SageMaker can access resources in your AWS account later:

from sagemaker import get_execution_role
from sagemaker.session import Session

role = get_execution_role()
sess = Session()
region = sess.boto_region_name
bucket = sess.default_bucket()

import torch
print(torch.__version__)

1.6.0

import sys
print(sys.version)

3.6.13 | packaged by conda-forge | (default, Feb 19 2021, 05:36:01)
[GCC 9.3.0]

Import pre-trained YOLOv4

The original pre-trained model is from GitHub. For this post, we provide a traced version of the model artifact packaged in a tarball. Tracing requires no changes to your Python code and converts your PyTorch model to TorchScript, a more portable format for usage with the model server included in SageMaker containers. See the following code:

model_archive = 'yolov4.tar.gz'
!wget https://aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com/yolov4.tar.gz
--2021-03-30 20:07:02--  https://aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com/yolov4.tar.gz
Resolving aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com (aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com)... 52.219.84.136
Connecting to aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com (aws-ml-blog-artifacts.s3.us-east-2.amazonaws.com)|52.219.84.136|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 239656714 (229M) [application/x-gzip]
Saving to: ‘yolov4.tar.gz’

yolov4.tar.gz       100%[===================>] 228.55M  87.7MB/s    in 2.6s    

2021-03-30 20:07:05 (87.7 MB/s) - ‘yolov4.tar.gz’ saved [239656714/239656714]

We upload the model archive to Amazon Simple Storage Service (Amazon S3) with the following code:

from sagemaker.utils import name_from_base
compilation_job_name = name_from_base('torchvision-yolov4-neo-1')
prefix = compilation_job_name+'/model'
model_path = sess.upload_data(path=model_archive, key_prefix=prefix)
compiled_model_path = 's3://{}/{}/output'.format(bucket, compilation_job_name)

Create a SageMaker model and endpoint

Now that the model archive is in Amazon S3, we can create a SageMaker model and deploy it to a SageMaker endpoint. An entry_point script isn’t necessary and can be a blank file. The environment variables in the env parameter are also optional. Create the model and deploy it with the following code:

framework_version = '1.6'
py_version = 'py3'
instance_type = 'ml.c5.9xlarge'
from sagemaker.pytorch.model import PyTorchModel
from sagemaker.predictor import Predictor

sm_model = PyTorchModel(model_data=model_path,
                               framework_version=framework_version,
                               role=role,
                               sagemaker_session=sess,
                               entry_point='code/inference.py',
                               py_version=py_version,
                               env={"COMPILEDMODEL": 'False', 'MMS_MAX_RESPONSE_SIZE': '100000000', 'MMS_DEFAULT_RESPONSE_TIMEOUT': '500'}
                              )
uncompiled_predictor = sm_model.deploy(initial_instance_count=1, instance_type=instance_type)
-------------!

Use Neo to compile the model

Next, we can compile the model using Neo. The resulting compiled_model is also a SageMaker model and can be deployed to a SageMaker endpoint. When the compiled model is deployed, SageMaker automatically integrates the TVM runtime to interpret the compiled model. Compile the model with the following code:

input_layer_name = 'input0'
input_shape = [1,3,416,416]
data_shape = json.dumps({input_layer_name: input_shape})
target_device = 'ml_c5'
framework = 'PYTORCH'
compiled_env = {"MMS_DEFAULT_WORKERS_PER_MODEL":'1', "TVM_NUM_THREADS": '36', "COMPILEDMODEL": 'True', 'MMS_MAX_RESPONSE_SIZE': '100000000', 'MMS_DEFAULT_RESPONSE_TIMEOUT': '500'}
sm_model_compiled = PyTorchModel(model_data=model_path,
                               framework_version = framework_version,
                               role=role,
                               sagemaker_session=sess,
                               entry_point='code/inference.py',
                               py_version=py_version,
                               env=compiled_env
                              )
compiled_model = sm_model_compiled.compile(target_instance_family=target_device, 
                                         input_shape=data_shape,
                                         job_name=compilation_job_name,
                                         role=role,
                                         framework=framework.lower(),
                                         framework_version=framework_version,
                                         output_path=compiled_model_path
                                        )
?...............................................!
compiled_model.env = compiled_env

Deploy the compiled model as an optimized predictor with the following code:

optimized_predictor = compiled_model.deploy(initial_instance_count = 1,
                                  instance_type = instance_type
                                 )
--------------------------!!

Make predictions using the endpoints

Finally, we can compare the performance between the uncompiled and compiled models. We run 1,000 sequential iterations and calculate the round trip latency for each endpoint request:

iters = 1000
warmup = 100
client = boto3.client('sagemaker-runtime', region_name=region)

content_type = 'application/x-image'

sample_img_url = "https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg"
body = requests.get(sample_img_url).content
   
compiled_perf = []
uncompiled_perf = []
  
for i in range(iters):
    t0 = time.time()
    response = client.invoke_endpoint(EndpointName=optimized_predictor.endpoint_name, Body=body, ContentType=content_type)
    t1 = time.time()
    #convert to millis
    compiled_elapsed = (t1-t0)*1000

    t0 = time.time()
    response = client.invoke_endpoint(EndpointName=uncompiled_predictor.endpoint_name, Body=body, ContentType=content_type)
    t1 = time.time()
    #convert to millis
    uncompiled_elapsed = (t1-t0)*1000
    

    if warmup == 0:
        compiled_perf.append(compiled_elapsed)
        uncompiled_perf.append(uncompiled_elapsed)
    else:
        print(f'warmup ({i}, {iters}) : c - {compiled_elapsed} ms . uc - {uncompiled_elapsed} ms')
        warmup = warmup - 1
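After the loop finishes, one way to summarize the collected measurements is to compute the mean and 95th percentile latency for each endpoint. This is a quick sketch rather than part of the original benchmark code; the summarize helper is only illustrative.

def summarize(label, latencies_ms):
    print(f'{label}: mean {np.mean(latencies_ms):.0f} ms, p95 {np.percentile(latencies_ms, 95):.0f} ms')

summarize('Neo compiled', compiled_perf)
summarize('Uncompiled', uncompiled_perf)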

Performance comparison

The following graph shows the measured latency speedup of the compiled model compared with an uncompiled model on the same instance. The default SageMaker PyTorch container uses Intel oneDNN libraries for inference acceleration, so any speedup from Neo is on top of what’s provided by the Intel libraries. Speedup is specific to the model and instance type, so the performance gain achieved with Neo varies based on your model architecture and target instance type.

On the ml.c5.9xlarge instance, we see an average latency of 397 milliseconds for the baseline endpoint and 188 milliseconds for the Neo optimized endpoint. Similarly, for the tail latency (95th percentile), we see 446 milliseconds for the baseline endpoint and 254 milliseconds for the Neo optimized endpoint. Optimizing the model with Neo resulted in twice as fast performance.

Speedup across common models and frameworks

As you saw in the preceding section, using Neo for model compilation provides a speedup over an uncompiled model using the Intel oneDNN libraries alone. The following table lists latency speedups that you might see from a few other common models across frameworks on CPU and GPU instances.

Task                   Framework    Model         Target   SageMaker Speedup
Image Classification   TensorFlow   mobilenetv2   GPU      200%
Image Classification   TensorFlow   resnet50      CPU      286%
Image Classification   PyTorch      resnet152     CPU      33%
Semantic Segmentation  TensorFlow   u-net         CPU      22%

These numbers are only benchmarks and vary for your specific model, instance type, and payload. The numbers in the table are measured end to end on SageMaker. Other optimizations such as pruning and quantization are also worth looking into as part of your overall model optimization strategy.

Summary

In this post, we deployed a PyTorch YOLOv4 model on a SageMaker ML CPU-based instance and compared performance between an uncompiled model and a model compiled with Neo. We saw a performance increase in the Neo compiled model—twice as fast compared to an uncompiled model on the same SageMaker ML instance.

We continue to improve Neo’s operator coverage and performance across different frameworks and models. If you have any questions or comments, use the Amazon SageMaker Discussion Forums or send an email to amazon-ei-feedback@amazon.com.


About the Author

Santosh Bhavani is a Senior Technical Product Manager with the Amazon SageMaker Elastic Inference team. He focuses on helping SageMaker customers accelerate model inference and deployment. In his spare time, he enjoys traveling, playing tennis, and drinking lots of Pu’er tea.


Vamshidhar Dantu is a Software Developer with AWS Deep Learning. He focuses on building scalable and easily deployable deep learning systems. In his spare time, he enjoys spending time with family and playing badminton.

 

Read More

Amazon Lookout for Vision Accelerator Proof of Concept (PoC) Kit

Amazon Lookout for Vision is a machine learning service that spots defects and anomalies in visual representations using computer vision. With Amazon Lookout for Vision, manufacturing companies can increase quality and reduce operational costs by quickly identifying differences in images of objects at scale.

Basler and Amazon Lookout for Vision have collaborated to launch the “Amazon Lookout for Vision Accelerator PoC Kit” (APK) to help customers complete a Lookout for Vision PoC in less than six weeks. The APK is an “out-of-the-box” vision system (hardware + software) to capture and transmit images to the Lookout for Vision service and train/evaluate Lookout for Vision models. The APK simplifies camera selection/installation and capturing/analyzing images, enabling you to quickly validate Lookout for Vision performance before moving to a production setup.

Most manufacturing and industrial customers have multiple use cases (such as multiple production lines or multiple product SKUs) in which Amazon Lookout for Vision can provide support in automated visual inspection. The APK enables customers to use the kit to test Lookout for Vision functionality for their use case first and then decide on purchasing a customized vision solution for multiple lines. Without the APK, you would have to procure and set up a vision system that integrates with Amazon Lookout for Vision, which is resource- and time-consuming and can delay PoC starts. The integrated hardware and software design of the APK comprises an automated AWS Cloud connection, image preprocessing, and direct image transmission to Amazon Lookout for Vision, saving you time and resources.

The APK is intended to be set up and installed by technical staff with easy-to-follow instructions.

The APK enables you to quickly capture and transmit images, train Amazon Lookout for Vision models, run inferences to detect anomalies, and assess model performance. The following diagram illustrates our solution architecture.

The kit comes equipped with the following:

  1. Basler ace camera
  2. Camera lens
  3. USB cable
  4. Network cable
  5. Power cable for the ring light
  6. Basler standard ring light
  7. Basler camera mount
  8. NVIDIA Jetson Nano development board (in its housing)
  9. Development board power supply

See corresponding items in the following image:

In the next section, we will walk through the steps for acquiring an image, extracting the region of interest (ROI) with image preprocessing, uploading training images to an Amazon Simple Storage Service (Amazon S3) bucket, training an Amazon Lookout for Vision model, and running inference on test images. The train and test images are of a printed circuit board. The Lookout for Vision model will learn to classify images into normal and anomaly (scratches, bent pins, bad solder, and missing components). In this blog, we will create a training dataset using the Lookout for Vision auto-split feature on the console with a single dataset. You can also set up a separate training and test dataset using the kit.

Kit Setup

After you unbox the kit, complete the following steps:

  1. Firmly screw the lens onto the camera mount.
  2. Connect the camera to the board with the supplied USB cable.
  3. For poorly lighted areas, use the supplied ring light. Note: If you use the ring light for training images, you should also use it to capture inference images.

  1. Connect the board to the network using a network cable (you can optionally use the supplied cable).
  2. Connect the board to its power supply and plug it in. Note that the camera stand and base platform shown in the following image are an example setup; they aren’t provided as part of the APK.

  1. A monitor, keyboard, and mouse have to be attached when turning on the system for the first time.
  2. On the first boot, accept the end user license agreement from NVIDIA. You will see a series of prompts to set up the location, user name, password, and so on. For more information, see the first boot section of the initial setup.
  3. Log in to the APK with the user name and password. You will see the following screen. Bring up the Linux terminal window using the search icon (the green icon on the top left).

  1. Enter the ip addr show command. This displays the APK IP address (for example, 192.168.0.22, as shown in the following screenshot).

  1. Go to your Chrome browser on a machine on the same network and enter the APK IP address. The kit’s webpage should come up with a live stream from the camera.

Now we can do the optical setup (as described in the next section), and start taking pictures.

Image acquisition, preprocessing, and cloud connection setup

  1. With the browser running and showing the webpage of the kit, choose Configuration.

In a few seconds, a live image from the camera appears.

  2. Create an AWS account if you don’t have one. Creating an account is free, and new accounts have access to the AWS Free Tier for the first 12 months. For more information, see creating and activating a new AWS account.
  3. Next, set up the connection in the cloud to your AWS account.
  3. Choose Create AWS Resources.

  1. In the dialog box that appears, choose Create AWS Resources.

You are redirected to the AWS Management Console, where you are asked to run the AWS CloudFormation stack.

  1. As part of creating the stack, create an S3 bucket in your specified Region. Select the check box to acknowledge the creation of AWS Identity and Access Management (IAM) resources.
  2. Choose Create Stack.

  1. When the stack is created, on the Outputs tab, copy the value for DeviceCertUrl.

  1. Return to the kit’s webpage and enter the URL value.
  2. Choose OK
  3. You are redirected back to the live image; the setup is now complete.
  4. Place the camera some distance away from the object to be inspected so that the object is fully in the live camera view and fills up the view as much as possible.
  5. As a general guideline, the operator should be able to see the anomaly in the image so that the Amazon Lookout for Vision models can learn to distinguish defects from normal images. Because the supplied lens has a minimum object distance of 100 millimeters, place the object at or beyond that distance.
  6. If the object at this distance doesn’t fill up the image, you can cut out the background using the region-of-interest (ROI) tool described below.
  7. Check the focus, and adjust either the object’s distance to the lens or the focus ring on the lens (most likely a combination of both).
  8. If the live image appears too dark or too light, adjust the Gain and Exposure Time settings. Note: Too much gain causes more noise in the image, and a long exposure time causes blurriness if the object is moving.

  1. If the object is focused and takes up a large part of the picture, use the ROI tool to reduce the unnecessary “background information”.

  1. The ROI tool selects the relevant part of the image and reduces background information. The image in the ROI is sent to the Amazon S3 bucket and will be used for Lookout for Vision training and inference.

  1. Choose Apply to reconfigure the camera to concentrate on this region.
  2. You can see the ROI on the live view. If you change the camera angle or distance to the object, you may need to change or reset the ROI. You can do this by choosing “Select Region of Interest” again and repeating the process.

Upload training images

 We are now ready to upload our training images.

  1. Choose the Training tab on the browser webpage.

  1. On the drop-down menu, choose Training: Normal or Training: Anomaly. Images are sent to the appropriate folder in the Amazon S3 bucket.

  1. Choose Trigger to trigger images from an object with and without anomalies. The camera can also be triggered by a hardware trigger wired directly to its I/O pins. For more information, see connector pin numbering and assignments.

It’s essential that each image captured is of a unique object and not the same object captured multiple times. If you repeat the same image, the model will not learn normal, defect-free variations of your object, and it could negatively impact model performance.

  1. After every trigger, the image is sent to the S3 bucket. At a minimum, you need to capture 20 normal and 10 anomalous images to use the single dataset auto-split option on the Amazon Lookout for Vision console. In general, the more images you capture, the better model performance you can expect. A table on the website shows the last image sent as a thumbnail and the number of images in each category.

Lookout for Vision Model Dataset and Training

 In this step, we prepare the dataset and start training.

  1. Choose the Add to Lookout for Vision button when you have a minimum of 20 normal and 10 anomalous images. Because we’re using a single dataset with the auto-split option, it’s OK to have no test images. The auto-split option automatically divides the 30 images into a training and test dataset internally.

  1. Choose Create Dataset in Lookout for Vision

  1. You are redirected to the Amazon Lookout for Vision console.
  2. Select Create a single dataset.

  1. Select Import images from S3 bucket

  1. For S3 URL, enter the URL of the S3 training images directory, as shown in the following picture.
  2. Select Automatically attach labels to images based on the folder name. This option imports the images with the correct labels in the dataset.
  3. Choose Create dataset.

  1. Choose Train model to start training.

On the Models page, you can see the status Training in progress, which changes to Training complete when the model is trained.

  1. Choose your model to see the model performance.

The model reports precision, recall, and F1 scores. Precision measures how many of the predicted anomalies are actual anomalies. Recall measures how many of the actual anomalies the model correctly predicted. The F1 score combines precision and recall into a single measure.
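More precisely, the F1 score is the harmonic mean of precision and recall, so a model only scores well when both are reasonably high. A quick illustrative calculation (the numbers are made up for the example):

def f1_score(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.90, 0.60))  # 0.72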

In general, you can improve model performance by adding more training images and providing a consistent lighting setup. Note that lighting can change during the day depending on your environment (such as sunlight coming through the windows). You can control the lighting by closing the curtains and using the provided ring light. For more information, see how to light up your vision system.

Run Inference on new images

To run inferences on new images, complete the following steps:

  1. On the kit webpage, choose the Inference tab.
  2. Choose Start the model to host the Lookout for Vision model.

  1. On the drop-down menu, choose the project you want to use and the model version.

  1. Place a new object that the model hasn’t seen before in front of the camera, and choose Trigger on the kit’s webpage.

Make sure the object pose and lighting are similar to the training object pose and lighting. This is important to prevent the model from identifying a false anomaly due to lighting or pose changes.

Inference results for the current image are shown in the browser window. You can repeat this exercise with new objects and test your model performance on different anomaly types.

The cumulative inference results are available on the Amazon Lookout for Vision console on the Dashboard page.

In most cases, you can expect to implement these steps in a few hours, get a quick assessment of your use case fit by running inferences on unseen test images, and correlate the inference results with the model precision, recall, and F1 scores.

Conclusion

Basler and Amazon Web Services collaborated on an “Amazon Lookout for Vision Accelerator PoC Kit” (APK). The APK is a testing camera system that customers can use for fast prototyping of their Lookout for Vision application. It includes out-of-the-box vision hardware (camera, processing unit, lighting, and accessories) with integrated software components to quickly connect to the AWS Cloud and Lookout for Vision.

With direct integration with Lookout for Vision, the APK offers you a new and efficient approach for rapid prototyping and shortens your proof-of-concept evaluation by weeks. The APK can give you the confidence to evaluate your anomaly detection model performance before moving to production. Because the kit is a bundle of fixed components, changes to the hardware and software may be necessary for the next step, depending on the customer application. After completing your PoC with the APK, Basler and AWS offer customers a gap analysis to determine whether the scope of the kit met your use case requirements or whether adjustments are needed in the form of a customized solution.

Note: To help ensure the highest level of success in your prototyping efforts, we require you to have a kit qualification discussion with Basler before purchase.

Contact Basler today to discuss your use case fit for APK: AWSBASLER@baslerweb.com

Learn more | Basler Tools for Component Selection


About the Authors

Amit Gupta is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.


Mark Hebbel is Head of IoT and Applications at Basler AG. He and his team implement camera based solutions for customers in the machine vision space. He has a special interest in decentralized architectures.

Read More

Prepare data for predicting credit risk using Amazon SageMaker Data Wrangler and Amazon SageMaker Clarify

For data scientists and machine learning (ML) developers, data preparation is one of the most challenging and time-consuming tasks of building ML solutions. In an often iterative and highly manual process, data must be sourced, analyzed, cleaned, and enriched before it can be used to train an ML model.

Typical tasks associated with data preparation include:

  • Locating data – Finding where raw data is stored and getting access to it
  • Data visualization – Examining statistical properties for each column in the dataset, building histograms, studying outliers
  • Data cleaning – Removing duplicates, dropping or filling entries with missing values, removing outliers
  • Data enrichment and feature engineering – Processing columns to build more expressive features, selecting a subset of features for training

Data scientists and developers typically iterate through these tasks until a model reaches the desired level of accuracy. This iterative process can be tedious, error-prone, and difficult to replicate as a deployable data pipeline. Fortunately, with Amazon SageMaker Data Wrangler, you can reduce the time it takes to prepare data for ML from weeks to minutes by accelerating the process of data preparation and feature engineering. With Data Wrangler, you can complete each step of the ML data preparation workflow, including data selection, cleansing, exploration, and visualization, with little to no code, which simplifies the data preparation process.

In a previous post introducing Data Wrangler, we highlighted its main features and walked through a basic example using the well-known Titanic dataset. For this post, we dive deeper into Data Wrangler and its integration with other Amazon SageMaker features to help you get started quickly.

Now, let’s get started with Data Wrangler.

Solution overview

In this post, we use Data Wrangler to prepare data for creating ML models to predict credit risk and help financial institutions more easily approve loans. The result is an exportable data flow capturing the data preparation steps required to prepare the data for modeling. We use a sample dataset containing information on 1,000 potential loan applications, built from the German Credit Risk dataset. This dataset contains categorical and numeric features covering the demographic, employment, and financial attributes of loan applicants, as well as a label indicating whether the individual is high or low credit risk. The features require cleaning and manipulation before we can use them as training data for an ML model. A modified version of the dataset, which we use in this post, has been saved in a sample data Amazon Simple Storage Service (Amazon S3) bucket. In the next section, we walk through how to download the sample data and upload it to your own S3 bucket.

The main ML workflow components that we focus on are data preparation, analysis, and feature engineering. We also discuss Data Wrangler’s integration with other SageMaker features as well as how to export the data flow for ease of use as a deployable data pipeline or submission to Amazon SageMaker Feature Store.

Data preparation and initial analysis

In this section, we download the sample data and save it in our own S3 bucket, import the sample data from the S3 bucket, and explore the data using Data Wrangler analysis features and custom transforms.

To get started with Data Wrangler, you need to first onboard to Amazon SageMaker Studio and create a Studio domain for your AWS account within a given Region. For instructions on getting started with Studio, see Onboard to Amazon SageMaker Studio or watch the video Onboard Quickly to Amazon SageMaker Studio. To follow along with this post, you need to download and save the sample dataset in the default S3 bucket associated with your SageMaker session, or in another S3 bucket of your choice. Run the following code in a SageMaker notebook to download the sample dataset and then upload it to your own S3 bucket:

from sagemaker.s3 import S3Uploader
import sagemaker
sagemaker_session = sagemaker.Session()
#specify target location (modify to specify a location of your choosing)
bucket = sagemaker_session.default_bucket()
prefix = 'data-wrangler-demo'

#download data from sample data Amazon S3 bucket
!wget https://sagemaker-sample-files.s3.amazonaws.com/datasets/tabular/uci_statlog_german_credit_data/german_credit_data.csv

#upload data to your own Amazon S3 bucket
dataset_uri = S3Uploader.upload('german_credit_data.csv', 's3://{}/{}'.format(bucket,prefix))
print('Demo data uploaded to: {}'.format(dataset_uri))

Data Wrangler simplifies the data import process by offering connections to Amazon S3, Amazon Athena, and Amazon Redshift, which makes loading multiple datasets as easy as a couple of clicks. You can easily load tabular data into Amazon S3 and directly import it, or you can import the data using Athena. Alternatively, you can seamlessly connect to your Amazon Redshift data warehouse and quickly load your data. The ability to upload multiple datasets from different sources enables you to connect disparate data across sources.

With any ML solution, you iterate through exploratory data analysis (EDA) and data transformation until you have a suitable dataset for training a model. With Data Wrangler, switching between these tasks is as easy as adding a transform or analysis step into the data flow using the visual interface.

To start off, we import our German credit dataset, german_credit_data.csv, from Amazon S3 with a few clicks.

  1. On the Studio console, on the File menu, under New, choose Flow.

After we create this new flow, the first window we see has options related to the location of the data source that you want to import. You can import data from Amazon S3, Athena, or Amazon Redshift.

  1. Select Amazon S3 and navigate to the german_credit_data.csv dataset that we stored in an S3 bucket.

You can review the details of the dataset, including a preview of the data in the Preview pane.

  1. Choose Import dataset.

We’re now ready to start exploring and transforming the data in our new Data Wrangler flow.

After the dataset is loaded, we can start by creating an analysis step to look at some summary statistics.

  1. From the data flow view, choose the plus sign (+ icon) and choose Add analysis.

This opens a new analysis view in which we can explore the DataFrame using visualizations such as histograms or scatterplots. You can also quickly view summary statistics.

  1. For Analysis type, choose Table Summary.
  2. Choose Preview.

Data Wrangler displays a table of statistics similar to the pandas DataFrame.describe() method.

It may also be useful to understand the presence of null values in the data and view column data types.

  1. Navigate back to the data flow view by choosing the Prepare tab.
  2. In the data flow, choose Add Transform.

In this transform view, data transformation options are listed in the pane on the right, including an option to add a custom transform step.

  1. On the Custom Transform drop-down menu, choose Python (Pandas).
  2. Enter df.info() into the code editor.
  3. Choose Preview to run the snippet of Python code.

We can inspect the DataFrame information in the right pane while also looking at the dataset in the left pane.

  1. Return to the data flow view and choose Add analysis to analyze the data attributes.

Let’s look at the distribution of the target variable: credit risk.

  1. On the Analysis type menu, choose Histogram.
  2. For X axis, choose risk.

This creates a histogram that shows the risk distribution of applicants. We see that approximately 2/3 of applicants are labeled as low risk and approximately 1/3 of applicants are labeled as high risk.

Next, let’s look at the distribution of the age of credit applicants, colored by risk. We see that in younger age groups, a higher proportion of applicants have high risk.

We can continue to explore the distributions of other features such as risk by sex, housing type, job, or amount in savings account. We can use the Facet by option to explore the relationships between additional variables. In the next section, we move to the data transformation stage.

Data transformation and feature engineering

In this section, we complete the following:

  • Separate concatenated string columns
  • Recode string categorical variables to numeric ordinal and nominal categorical variables
  • Scale numeric continuous variables
  • Drop obsolete features
  • Reorder columns

Data Wrangler contains numerous built-in data transformations so you can quickly clean, normalize, transform, and combine features. You can use these built-in data transformations without writing any code, or you can write custom transforms to make additional changes to the dataset such as encoding string categorical variables to specific numerical values.

  1. In the data flow view, choose the plus sign and choose Add transform.

A new view appears that shows the first few lines of the dataset, as well as a list of over 300 built-in transforms. Let’s start with modifying the status_sex column. This column contains two values: sex and marital status. We first split the string into a list of two values using the delimiter ':'.

  1. Choose Search and Edit.
  2. On the Transform menu, choose Split string by delimiter.
  3. For Input column, choose the status_sex column.
  4. For Delimiter, enter :.
  5. For Output column, enter a name (for this post, we use vec).

We can further flatten this column in a later step.

  1. Choose Preview to review the changes.
  2. Choose Add.

  1. To flatten the column vec we just created, we can apply a Manage vectors transformation and choose Flatten.

The outputs are two columns: sex_split_0, the Sex column, and sex_split_1, the Marital Status column.

  1. To easily identify the features, we can rename these two columns to sex and marital_status using the Manage columns transformation by choosing Rename column.

The current credit risk classification is indicated by string values. Low risk means that the user has good credit, and high risk means that the user has bad credit. We need to encode this target or label variable as a numeric categorical variable where 0 indicates low risk and 1 indicates high risk.

  1. To do that, we choose Encode categorical and choose the transform Ordinal encode.
  2. Output this revised feature to the output column target.

The classification column now indicates 0 for low risk and 1 for high risk.

Next, let’s encode the other categorical string variables.

  1. Starting with existingchecking, we can again use Ordinal encode if we consider the categories no account, none, little, and moderate to have an inherent order.

  1. For greater control over the encoding of ordinal variables, you can choose Custom Transform and use Python (Pandas) to create a new custom transform for the dataset.

Starting with savings, we represent the amount of money available in a savings account with the following map: {'unknown': 0, 'little': 1, 'moderate': 2, 'high': 3, 'very high': 4}.

When you create custom transforms using Python (Pandas), the DataFrame is referenced as df.

  1. Enter the following code into the code editor cell:
# 'Savings' custom transform pandas code
savings_map = {'unknown': 0, 'little': 1,'moderate': 2,'high': 3,'very high': 4}
df['savings'] = df['savings'].map(savings_map).fillna(df['savings'])
  1. We do the same for employmentsince:
# 'Employmentsince' custom transform pandas code 
employment_map = { 'unemployed': 0,'1 year': 1,'1 to 4 years': 2,'4 to 7 years': 3,'7+ years': 4}
df['employmentsince'] = df['employmentsince'].map(employment_map).fillna(df['employmentsince'])

For more information about encoding categorical variables, see Custom Formula.

Other categorical variables that don’t have an inherent order also need to be transformed. We can use the Encode Categorical transform to one-hot encode these nominal variables. We one-hot encode housing, job, sex, and marital_status.
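For intuition, one-hot encoding expands each nominal column into one binary indicator column per category. Outside of Data Wrangler, the equivalent pandas operation looks roughly like the following sketch (not the code that Data Wrangler generates):

import pandas as pd

# Expand each nominal column into one 0/1 indicator column per category
df = pd.get_dummies(df, columns=['housing', 'job', 'sex', 'marital_status'])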

  1. Let’s start by encoding housing by choosing One-hot encode on the Transform drop-down menu.
  2. For Output style, choose Columns.

  1. Repeat for the remaining three nominal variables: job, sex, and marital_status.

After we encode all the categorical variables, we can address the numerical values. In particular, we can scale the numerical values in order to improve the performance of our future ML model.

  1. We do this by again choosing Add Transform and then Process numeric.
  2. From here you have the option of selecting between standard, robust, min-max, or max absolute scalers.
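As a point of reference, min-max scaling simply rescales a numeric column to the [0, 1] range. The following minimal pandas sketch shows the idea (the column name is only an example):

# Min-max scaling: map a numeric column to the [0, 1] range
def min_max_scale(df, column):
    col_min, col_max = df[column].min(), df[column].max()
    df[column] = (df[column] - col_min) / (col_max - col_min)
    return df

# Example usage (column name is illustrative)
# df = min_max_scale(df, 'creditamount')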

Before exporting the data, we remove the original string categorical columns that we encoded to numeric columns, so that our feature dataset contains only numbers and therefore is machine-readable for training ML models.

  1. Choose Manage columns and choose the transform Drop column.
  2. Drop all the original categorical columns that contain string values such as status_sex, risk, and the temporary column vec.

As a final step, some ML libraries, such as XGBoost, expect the first column in the dataset to be the label or target variable.

  1. Use the Manage columns transform to move the target variable to the first column in the dataset.

We used custom and built-in transforms to create a training dataset that is ready for training an ML model. One tip for building out a data flow is to take advantage of the Previous steps tab in the right pane to walk through each step and view how the table changes after each transform. To change a step that is upstream, you have to delete all the downstream steps as well.

Further analysis and integration

In this section, we discuss opportunities for further data analysis and integration with SageMaker features.

Detect bias with Amazon SageMaker Clarify

Let’s explore Data Wrangler’s integration with other SageMaker features. In addition to the data analysis options available within Data Wrangler, you can also use Amazon SageMaker Clarify to detect potential bias during data preparation, after model training, and in your deployed model. In many use cases for detecting and analyzing bias in data and models, Clarify can be a great asset, including this credit application use case.

In this use case, we use Clarify to check for class imbalance and bias against one feature: sex. Clarify is integrated into Data Wrangler as part of the analysis capabilities, so we can easily create a bias report by adding a new analysis, choosing our target column, and selecting the column we want to analyze for bias. In this case, we use the sex column as an example, but you could continue to explore and analyze bias for other columns.

The bias report is generated by Clarify, which operates within Data Wrangler. This report provides the following default metrics: class imbalance, difference in positive proportions in labels, and the Jensen-Shannon divergence. A short description provides instructions on how to read each metric. In our example, the report indicates that the data may be imbalanced. We should consider using sampling methods to correct this imbalance in our training data. For more information about Clarify capabilities and how to create a Clarify processing job using the SageMaker Python SDK, see New – Amazon SageMaker Clarify Detects Bias and Increases the Transparency of Machine Learning Models.
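Conceptually, the first two metrics reduce to simple ratios over group and label counts. The following sketch shows commonly used formulations of class imbalance and difference in positive proportions in labels (DPL); it is illustrative only and not Clarify’s internal implementation:

def class_imbalance(n_group_a, n_group_d):
    # Ranges from -1 to 1; values far from 0 indicate an imbalanced facet
    return (n_group_a - n_group_d) / (n_group_a + n_group_d)

def dpl(pos_a, n_group_a, pos_d, n_group_d):
    # Difference in the proportion of positive labels between the two groups
    return pos_a / n_group_a - pos_d / n_group_d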

Support rapid model prototyping with Data Wrangler Quick Model visualization

Now that our data preparation data flow is complete, let’s use Data Wrangler’s Quick Model analysis, which allows you to quickly evaluate your data and produce an importance score for each potential feature that you may consider including in an ML model. The Quick Model analysis gives you a feature importance score for each variable in the data, indicating how useful a feature is at predicting the target label.

This Quick Model also provides an overall model score. For a classification problem, such as our use case of predicting high or low credit risk, the Quick Model also provides an F1 score. This gives an indication of potential model fit using the data as you’ve prepared it in your complete data flow. For regression problems, the model provides a mean squared error (MSE) score. In the following screenshot, we can see which features contribute most to the predicted outcome: existing checking, credit amount, duration of loan, and age. We can use this information to inform our model development approach or make additional adjustments to our data flow, such as dropping additional columns with low feature importance.

Use Data Wrangler data flows in ML deployments

After you complete your data transformation steps and analysis, you can conveniently export your data preparation workflow flow. When you export your data flow, you have the option of exporting to the following:

  • A notebook running the data flow as a Data Wrangler job – Exporting as a Data Wrangler job and running the resulting notebook takes the data processing steps defined in your .flow file and generates a SageMaker processing job to run these steps on your entire source dataset, providing a way to save processed data as a CSV or Parquet file to Amazon S3.
  • A notebook running the data flow as an Amazon SageMaker Pipelines workflow – With Amazon SageMaker Pipelines, you can create end-to-end workflows that manage and deploy SageMaker jobs responsible for data preparation, model training, and model deployment. By exporting your Data Wrangler flow to Pipelines, a Jupyter notebook is created that, when run, defines a data transformation pipeline following the data processing steps defined in your .flow file.
  • Python code replicating the steps in the Data Wrangler data flow – Exporting as a Python file enables you to manually integrate the data processing steps defined in your flow into any data processing workflow.
  • A notebook pushing your processed features to Feature Store – When you export to Feature Store and run the resulting notebook, your data can be processed as a SageMaker processing job, and then ingested into an online and offline feature store.

Conclusion

In this post, we explored the German credit risk dataset to understand the transformation steps needed to prepare the data for ML modeling so financial institutions can approve loans more easily. We then created ordinal and one-hot encoded features from the categorical variables, and finally scaled our numerical features—all using Data Wrangler. We now have a complete data transformation data flow that has transformed our raw dataset into a set of features ready for training an ML model to predict credit risk among credit applicants.

The options to export our Data Wrangler data flow allow us to use the transformation pipeline as a Data Wrangler processing job, create a feature store to better store and track features to be used in future modeling, or save the transformation steps as part of a complete SageMaker pipeline in your ML workflow. Data Wrangler makes it easy to work interactively on data preparation steps before transforming them into code that can be used immediately for ML model experimentation and into production.

To learn more about Amazon SageMaker Data Wrangler, visit the webpage. Give Data Wrangler a try, and let us know what you think in the comments!


About the Author

Courtney McKay is a Senior Principal with Slalom Consulting. She is passionate about helping customers drive measurable ROI with AI/ML tools and technologies. In her free time, she enjoys camping, hiking and gardening.

Read More

Maximize TensorFlow performance on Amazon SageMaker endpoints for real-time inference

Machine learning (ML) is realized in inference. The business problem you want your ML model to solve is the inferences or predictions that you want your model to generate. Deployment is the stage in which a model, after being trained, is ready to accept inference requests. In this post, we describe the parameters that you can tune to maximize performance of both CPU-based and GPU-based Amazon SageMaker real-time endpoints. SageMaker is a managed, end-to-end service for ML. It provides data scientists and MLOps teams with the tools to enable ML at scale. It provides tools to facilitate each stage in the ML lifecycle, including deployment and inference.

SageMaker supports both real-time inference with SageMaker endpoints and offline and temporary inference with SageMaker batch transform. In this post, we focus on real-time inference for TensorFlow models.

Performance tuning and optimization

For model inference, we seek to optimize costs, latency, and throughput. In a typical application powered by ML models, we can measure latency at various time points. Throughput is usually bounded by latency. Costs are calculated based on instance usage, and price/performance is calculated based on throughput and SageMaker ML instance cost per hour. Finally, as we continue to advance rapidly in all aspects of ML including low-level implementations of mathematical operations in chip design, hardware-specific libraries will play a greater role in performance optimization. Rapid experimentation that SageMaker facilitates is the lynchpin in achieving business objectives in a cost-effective, timely, and performant manner.

Performance tuning and optimization is an empirical field. The number of parameters to tune is combinatorial such that each set of configuration parameter values aren’t independent of each other. Various factors such as payload size, network hops, nature of hops, model graph features, operators in the model, and the model’s CPU, GPU, memory, and I/O profiles affect the optimal parameter tuning. The distribution of these effects on performance is a vast unexplored space. Therefore, we begin by describing these different parameters and recommend an empirical approach to tune these parameters and understand their effects on your model performance.

Based on our past observations, the effect of these parameters on an inference workload is approximately plateau-shaped or Gaussian-uniform. The values that maximize the performance of an endpoint lie along the ascendant curve of this distribution, demarcated by latencies. Typically, latencies increase with an increase in throughput. Throughput improvements level out or plateau at a point where further increases in concurrent connections don’t result in any significant improvement. Certain cases may show a detrimental effect from increasing certain parameters, such that the throughput rapidly decreases as the system is saturated with overhead.

The following chart illustrates transactions per second demarcated by latencies.

SageMaker TensorFlow Deep Learning Containers (DLCs) recently introduced new parameters to help with performance optimization of a CPU-based or GPU-based endpoint. As we discussed earlier, an ideal value of each of these parameters is subjective to factors such as model, model input size, batch size, endpoint instance type, and payload. What follows next is a description of these tunable parameters.

TensorFlow Serving

We start with the parameters related to TensorFlow Serving.

SAGEMAKER_TFS_INSTANCE_COUNT

For TensorFlow-based models, the tensorflow_model_server binary is the operational piece responsible for loading a model into memory, running inputs through the model graph, and deriving outputs. Typically, a single instance of this binary is launched to serve models on an endpoint. The binary is internally multi-threaded and spawns multiple threads to respond to an inference request. If you observe that the CPU is reasonably utilized (over 30%) but memory is underutilized (less than 10%), increasing this parameter might help. In our experiments, increasing the number of tensorflow_model_server processes available to serve requests typically increased the throughput of an endpoint.

SAGEMAKER_TFS_FRACTIONAL_GPU_MEM_MARGIN

This parameter governs the fraction of the available GPU memory reserved for initializing CUDA/cuDNN and other GPU libraries. For example, a value of 0.2 reserves 20% of the available GPU memory for these libraries, and the remaining 80% is allocated equally across the TensorFlow Serving processes. GPU memory is pre-allocated unless the allow_growth option is enabled.
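
As a quick illustration of the arithmetic (the variable names below are only illustrative, not real configuration keys), the following sketch shows how the margin and the number of TensorFlow Serving processes determine the GPU memory each process receives:

# Illustrative arithmetic only: how the margin and the number of TFS processes
# divide the available GPU memory.
gpu_mem_margin = 0.2        # SAGEMAKER_TFS_FRACTIONAL_GPU_MEM_MARGIN
tfs_instance_count = 4      # SAGEMAKER_TFS_INSTANCE_COUNT

per_process_fraction = (1.0 - gpu_mem_margin) / tfs_instance_count
print(f"Each TFS process receives {per_process_fraction:.0%} of GPU memory")
# On a 16 GiB GPU: 3.2 GiB is reserved for CUDA/cuDNN and other libraries,
# and each of the 4 processes is allocated 3.2 GiB.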

Deep learning operators

Operators are nodes in a deep learning graph that perform mathematical operations on data. These nodes can be independent of each other and therefore can run in parallel. In addition, individual operators such as tf.matmul() and tf.reduce_sum() can be parallelized internally. Next, we describe two parameters that control how these operators run using the TensorFlow threadpool.

SAGEMAKER_TFS_INTER_OP_PARALLELISM

This ties back to the inter_op_parallelism_threads variable, which determines the number of threads used by independent, non-blocking operations. A value of 0 means that the system picks an appropriate number.

SAGEMAKER_TFS_INTRA_OP_PARALLELISM

This ties back to the intra_op_parallelism_threads variable, which determines the number of threads that can be used to speed up individual operations such as matrix multiplication and reductions. A value of 0 means that the system picks an appropriate number.
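
To build intuition for what these two settings control, the following minimal sketch shows the equivalent TensorFlow 2.x threading API that you can experiment with locally, outside tensorflow_model_server. The values shown are arbitrary and purely for illustration; they must be set before any TensorFlow operations run.

import tensorflow as tf

# Equivalent knobs in the TensorFlow 2.x API, useful for local experimentation
# before setting the endpoint environment variables.
tf.config.threading.set_inter_op_parallelism_threads(4)  # threads for independent ops
tf.config.threading.set_intra_op_parallelism_threads(1)  # threads within one op (matmul, reductions)

print(tf.config.threading.get_inter_op_parallelism_threads())
print(tf.config.threading.get_intra_op_parallelism_threads())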

Architecture for serving an inference request over HTTP

Before we look at the next set of parameters, let's review the typical arrangement in which Nginx and Gunicorn front tensorflow_model_server. Nginx listens on port 8080, accepts a connection, and forwards it to Gunicorn, a Python WSGI HTTP server. Gunicorn replies to /ping and handles /invocations by invoking tensorflow_model_server with the request payload.

The following diagram illustrates the anatomy of a SageMaker endpoint.

SAGEMAKER_GUNICORN_WORKERS

This governs the number of worker processes that Gunicorn is requested to spawn for handling requests. This value is used in combination with the other parameters to derive a set that maximizes inference throughput. In addition, SAGEMAKER_GUNICORN_WORKER_CLASS governs the type of workers spawned, typically an asynchronous class such as gevent.

OpenMP (Open Multi-Processing)

OpenMP is an implementation of multithreading, a method of parallelization whereby a primary thread (a series of instructions run consecutively) forks a specified number of sub-threads and the system divides a task among them. The threads then run concurrently, with the runtime environment allocating threads to different processors. Various parameters control the behavior of this library; in this post, we explore the impact of changing one of them. For a full list of the available parameters and their intended use, refer to Environment Variables.

OMP_NUM_THREADS

Python internally uses OpenMP to implement multithreading within processes. Typically, one thread is spawned per CPU core. However, on top of Simultaneous Multithreading (SMT), such as Intel's Hyper-Threading, a process might oversubscribe a core by spawning twice as many threads as there are physical CPU cores. In certain cases, a Python binary can end up spawning up to four times as many threads as there are physical processor cores. Therefore, if you have oversubscribed the available cores with worker threads, an ideal setting for this parameter is 1, or half the number of CPU cores on an SMT-enabled CPU.
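
As a rough illustration of this guidance (the heuristic and variable names below are purely illustrative), the following sketch derives a starting value for OMP_NUM_THREADS from the logical core count:

import multiprocessing

# Illustrative heuristic only: derive a starting value for OMP_NUM_THREADS.
# On an SMT (Hyper-Threading) CPU, logical cores are typically twice the physical
# cores, so half the logical core count is roughly one OpenMP thread per physical core.
logical_cores = multiprocessing.cpu_count()

# If the endpoint is already oversubscribed with many Gunicorn/TFS workers, start at 1;
# otherwise start at half the logical cores and tune empirically from there.
oversubscribed = True  # set according to your SAGEMAKER_GUNICORN_WORKERS / TFS count
omp_num_threads = 1 if oversubscribed else max(1, logical_cores // 2)

print({'OMP_NUM_THREADS': str(omp_num_threads)})  # value to pass in the env dict shown later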

In our experiments, we changed the values of these parameters as a tuple rather than independently, so all of the following results and guidance assume that approach. As the results illustrate, we observed throughput increases ranging from over 87% to over 1,900%, depending on the model.

The following table shows an increase in TPS by adjusting parameters for a retrieval type model on an ml.c5.9xlarge instance.

Number of workers | Number of TFS | OMP_NUM_THREADS | Inter Op Parallelization | Intra Op Parallelization | TPS
1 | 1 | 36 | 36 | 36 | 15.87
36 | 1 | 1 | 36 | 36 | 164
1 | 1 | 1 | 1 | 1 | 33.0834
36 | 1 | 1 | 1 | 1 | 67.5118
36 | 8 | 1 | 1 | 1 | 319.078

The following table shows an increase in TPS by adjusting parameters for a Single Shot Detector type model on an ml.p3.2xlarge instance.

Number of workers | Number of TFS | OMP_NUM_THREADS | Inter Op Parallelization | Intra Op Parallelization | TPS
1 | 1 | 36 | 36 | 36 | 16.4613
1 | 1 | 1 | 36 | 36 | 17.1414
36 | 1 | 1 | 36 | 36 | 22.7277
1 | 1 | 1 | 1 | 1 | 16.7216
36 | 1 | 1 | 1 | 1 | 22.0933
1 | 4 | 1 | 1 | 1 | 16.6026
16 | 4 | 1 | 1 | 1 | 31.1001
36 | 4 | 1 | 1 | 1 | 30.9372

The following diagram shows the resultant increase in TPS by adjusting parameters.

Observe results in your own environments

Now that you know about these various parameters, how can you try them out in your environments? We first discuss how to set them up, then describe a tool and methodology to test them and observe variations in latency and throughput.

Set up an endpoint with custom parameters

When you create a SageMaker endpoint, you can set values of these parameters by passing them in a dictionary for the env parameter in sagemaker.model.Model. See the following example code:

from sagemaker import get_execution_role
from sagemaker.model import Model

# Pass the tuning parameters as environment variables on the model.
# image_uri, model_location_in_s3, test_instance_type, and endpoint_name
# are defined elsewhere in your notebook.
sagemaker_model = Model(image_uri=image_uri,
                        model_data=model_location_in_s3,
                        role=get_execution_role(),
                        env={'SAGEMAKER_GUNICORN_WORKERS': '10',
                             'SAGEMAKER_TFS_INSTANCE_COUNT': '20',
                             'OMP_NUM_THREADS': '1',
                             'SAGEMAKER_TFS_INTER_OP_PARALLELISM': '4',
                             'SAGEMAKER_TFS_INTRA_OP_PARALLELISM': '1'})
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type=test_instance_type,
                                   wait=True,
                                   endpoint_name=endpoint_name)

Test for success

Now that our parameters are set up, how do we test for success, and how do we standardize a test so that it's uniform across runs? We recommend the open-source tool Locust. In its simplest form, it lets us control the number of concurrent connections sent to a target (in this case, SageMaker endpoints). Each concurrent connection (a user in Locust parlance) invokes inference (using invoke_endpoint) sequentially, as fast as possible. So although the connections are concurrent, the invocations against the endpoint from any single user are sequential.
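
If you prefer not to set up Locust right away, the following minimal sketch approximates the same idea with plain boto3 and threads: each thread plays the role of one Locust user and invokes the endpoint sequentially. The endpoint name and payload are placeholders that you would replace with your own model's input.

import threading
import time

import boto3

# Simplified stand-in for the Locust setup described above (not Locust itself):
# each "user" is a thread that calls invoke_endpoint sequentially, so concurrency
# comes only from the number of threads.
ENDPOINT_NAME = 'my-tf-endpoint'                 # placeholder
PAYLOAD = b'{"instances": [[1.0, 2.0, 3.0]]}'    # placeholder request body
NUM_USERS = 10
TEST_DURATION_SECONDS = 60

runtime = boto3.client('sagemaker-runtime')
latencies = []
lock = threading.Lock()

def user_loop(stop_at):
    # One sequential "user": invoke as fast as possible until the test window ends.
    while time.time() < stop_at:
        start = time.time()
        runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                ContentType='application/json',
                                Body=PAYLOAD)
        with lock:
            latencies.append(time.time() - start)

stop_at = time.time() + TEST_DURATION_SECONDS
threads = [threading.Thread(target=user_loop, args=(stop_at,)) for _ in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

latencies.sort()
p95 = latencies[max(0, int(0.95 * len(latencies)) - 1)]
print(f"TPS: {len(latencies) / TEST_DURATION_SECONDS:.1f}, p95 latency: {p95 * 1000:.1f} ms")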

The following graph shows invocations tracked with respect to Locust users, peaking at just over 45,000 (with 95 TFS servers spawned).

The following graph shows invocations for the same instance peaking at around 11,000 (with 1 TFS server spawned).

As an output of this Locust run, we can observe the end-to-end P95 latency and TPS for the duration of the test. Roughly speaking, lower latency and higher TPS are better. As we tune the parameters, we watch the TPS delta between consecutive user counts (n and n+1) and look for the point at which each additional user no longer yields a meaningful increase in TPS. Past that point, latency usually explodes because of resource contention in the endpoint. The point just before this latency explosion is where the endpoint is at its functional best.

While observing TPS and latency as you tune parameters, you should also track two other metrics: average CPU utilization and average memory utilization. When adjusting SAGEMAKER_GUNICORN_WORKERS and SAGEMAKER_TFS_INSTANCE_COUNT, aim to drive both CPU and memory toward their maximum, and treat that as a soft threshold for understanding the high-water mark of this particular endpoint's throughput. The hard threshold is the latency that you can tolerate.

The following graph tracks an increase in ModelLatency with respect to increased load.

The following graph tracks an increase in CPUUtilization with respect to increased load.

The following graph tracks an increase in MemoryUtilization with respect to increased load.
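
To track these metrics programmatically during a test run, you can query Amazon CloudWatch with boto3, as in the following sketch. The endpoint and variant names are placeholders, and the namespaces shown are where SageMaker typically publishes invocation and per-container utilization metrics; verify them in your own account.

from datetime import datetime, timedelta

import boto3

# Placeholders: replace the endpoint and variant names with your own.
cloudwatch = boto3.client('cloudwatch')
dimensions = [{'Name': 'EndpointName', 'Value': 'my-tf-endpoint'},
              {'Name': 'VariantName', 'Value': 'AllTraffic'}]

def average_datapoints(namespace, metric_name):
    # Pull one hour of 1-minute averages for the given metric.
    response = cloudwatch.get_metric_statistics(
        Namespace=namespace,
        MetricName=metric_name,
        Dimensions=dimensions,
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=['Average'])
    return [point['Average'] for point in response['Datapoints']]

model_latency = average_datapoints('AWS/SageMaker', 'ModelLatency')  # reported in microseconds
cpu_utilization = average_datapoints('/aws/sagemaker/Endpoints', 'CPUUtilization')
memory_utilization = average_datapoints('/aws/sagemaker/Endpoints', 'MemoryUtilization')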

Other optimizations to consider

You should consider a few other optimizations to further maximize the performance of your endpoint:

  • To further enhance performance, optimize the model graph by compiling, pruning, fusing, and so on.
  • You can also export models to an intermediate representation such as ONNX and use ONNX runtime for inference.
  • Inputs can be batched, serialized, compressed, and passed over the wire in binary format to save bandwidth and maximize utilization.
  • You can compile the TensorFlow Model Server binary to use hardware-specific optimizations (such as Intel optimizations like AVX-512 and MKL) or model optimizations such as compilation provided by SageMaker Neo. You can also use an optimized inference chip such as AWS Inferentia to further improve performance.
  • In SageMaker, you can gain an additional performance boost by deploying models with automatic scaling.

Conclusion

In this post, we explored several parameters that you can use to maximize the performance of a TensorFlow-based SageMaker real-time endpoint. In essence, these parameters overprovision serving processes and adjust their parallel-processing capabilities. As the tables show, this overprovisioning and adjustment leads to better resource utilization and higher throughput, sometimes an increase of as much as 1,000%.

Although the best way to derive the correct values is through experimentation, by observing how combinations of parameters affect performance across ML models and SageMaker ML instances, you can start to build empirical knowledge on performance tuning and optimization.

SageMaker provides the tools to remove the undifferentiated heavy lifting from each stage of the ML lifecycle, thereby facilitating rapid experimentation and exploration needed to fully optimize your model deployments.

For more information, see Maximize TensorFlow* Performance on CPU: Considerations and Recommendations for Inference Workloads, Meaning of inter_op_parallelism_threads and intra_op_parallelism_threads, and the SageMaker inference API.


About the Authors

Chaitanya Hazarey is a Senior ML Architect with the Amazon SageMaker team. He focuses on helping customers design, deploy, and scale end-to-end ML pipelines in production on AWS. He is also passionate about improving explainability, interpretability, and accessibility of AI solutions.

Karan Kothari is a software engineer at Amazon Web Services. He is on the Elastic Inference team working on building Model Server focused towards low latency inference workloads.

Liang Ma is a software engineer at Amazon Web Services and is fascinated with enabling customers on their AI/ML journey in the cloud to become AWSome. He is also passionate about serverless architectures, data visualization, and data systems.

Santosh Bhavani is a Senior Technical Product Manager with the Amazon SageMaker Elastic Inference team. He focuses on helping SageMaker customers accelerate model inference and deployment. In his spare time, he enjoys traveling, playing tennis, and drinking lots of Pu’er tea.

Aaron Keller is a Senior Software Engineer at Amazon Web Services. He works on the real-time inference platform for Amazon SageMaker. In his spare time, he enjoys video games and amateur astrophotography.
