Build a computer vision model using Amazon Rekognition Custom Labels and compare the results with a custom trained TensorFlow model

Building accurate computer vision models to detect objects in images requires deep knowledge of each step in the process: labeling, processing, and preparing the training and validation data; choosing the right model; and tuning its hyperparameters to achieve maximum accuracy. Fortunately, these complex steps are simplified by Amazon Rekognition Custom Labels, a feature of Amazon Rekognition that enables you to build your own custom computer vision models for image classification and object detection without any prior computer vision expertise or advanced programming skills.

In this post, we showcase how we can train a model to detect bees in images using Amazon Rekognition Custom Labels. We also compare these results against a custom-trained TensorFlow model (DIY model). We use Amazon SageMaker as the platform to develop and train our model. Finally, we demonstrate how to build a serverless architecture to process new images using Amazon Rekognition APIs.

When and where to use each model

Before diving deeper, it is important to understand the use cases that drive the decision of which model to use, whether it’s an Amazon Rekognition Custom Labels model or a DIY model.

Amazon Rekognition Custom Labels models are a great choice when the goal is to achieve maximum-quality results quickly. These models are heavily optimized and fine-tuned to perform with high accuracy and recall. Because this is a cloud service, after the model is trained, images must be uploaded to the cloud to be analyzed. A great advantage of this service is that you don’t need prior expertise to run the training pipeline. You can do it on the AWS Management Console with just a few clicks, and the service takes care of the heavy lifting of training and fine-tuning the model for you. A simple set of API calls, tailored to this specific model, is then offered for you to apply when needed.

DIY models are the choice for advanced users with expertise in machine learning (ML). They allow you to control every aspect of the model, and tune the training data and the necessary parameters as needed. This requires advanced coding skills. These models trade off accuracy for latency: you can run them faster at the expense of lower qualitative performance. This lower latency fits really well in low bandwidth scenarios where the model needs to be deployed on the edge. For instance, IoT devices that support these models can host and run them and only upload the inference results to the cloud, which reduces the amount of data sent upstream.

Overview of solution

To build our DIY model, we follow the solution from the GitHub repo TensorFlow 2 Object Detection API SageMaker, which consists of these steps:

  1. Download and prepare our bee dataset.
  2. Train the model using a SageMaker custom container instance.
  3. Test the model using a SageMaker model endpoint.

After we have our DIY model, we can proceed with the steps to build our bee detection model using Amazon Rekognition Custom Labels:

  1. Deploy a serverless architecture using AWS CloudFormation.
  2. Download and prepare our bee dataset.
  3. Create a project in Amazon Rekognition Custom Labels and import the dataset.
  4. Train the Amazon Rekognition Custom Labels model.
  5. Test the Amazon Rekognition Custom Labels model through the automatically generated API endpoint, triggered by Amazon Simple Storage Service (Amazon S3) events.

Amazon Rekognition Custom Labels lets you manage the ML model training process on the Amazon Rekognition console, which simplifies the end-to-end process. After we train both models, we can compare them.

Set up the environment

We prepare our serverless environment using the CloudFormation template on GitHub. On the AWS CloudFormation console, we create a new stack and use the template.yaml file present in the root folder of our code repository. We provide a unique Amazon Simple Storage Service (Amazon S3) bucket name when prompted, where our images are downloaded for further processing. We also provide a name for the inference processing Amazon Simple Queue Service (Amazon SQS) queue, as well as an AWS Key Management Service (AWS KMS) alias to securely encrypt the inference pipeline.
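If you prefer to script the stack deployment instead of using the console, the following is a minimal sketch with boto3. The parameter keys shown here (BucketName, QueueName, KMSAlias) are placeholders; use the parameter names actually defined in template.yaml.

import boto3

cfn = boto3.client("cloudformation")

# Parameter keys below are placeholders; they must match the parameters declared in template.yaml.
with open("template.yaml") as f:
    cfn.create_stack(
        StackName="bee-detection-stack",
        TemplateBody=f.read(),
        Parameters=[
            {"ParameterKey": "BucketName", "ParameterValue": "your-unique-bucket-name"},
            {"ParameterKey": "QueueName", "ParameterValue": "bee-inference-queue"},
            {"ParameterKey": "KMSAlias", "ParameterValue": "alias/bee-inference"},
        ],
        Capabilities=["CAPABILITY_IAM"],  # typically required when the template creates IAM resources
    )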

The architecture diagram is as follows, and it is used for detecting objects in new images as they are uploaded to our bucket.

Following the first notebook (1_prepare_data), we download and store our images in a bucket in Amazon S3. The dataset is already curated and annotated, and the images used have been licensed under CC0. For convenience, the dataset is stored in a single .zip archive: dataset.zip.

Inside the dataset folder, the manifest file output.manifest contains the bounding box annotations of the dataset. The Amazon S3 references of these images belong to a different S3 bucket where the images were annotated originally. To import this manifest in Amazon Rekognition Custom Labels, the notebook rewrites the manifest file according to the bucket name we chose.
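The rewrite is a small text transformation. The following is a minimal sketch of the idea, assuming each manifest line is a JSON object whose source-ref field points at the original annotation bucket; the actual logic lives in the 1_prepare_data notebook, and the bucket names below are placeholders.

import json

OLD_BUCKET = "original-annotation-bucket"  # placeholder for the bucket referenced in output.manifest
NEW_BUCKET = "your-bucket"                 # the bucket created for this walkthrough

with open("dataset/output.manifest") as src, open("dataset/output_rewritten.manifest", "w") as dst:
    for line in src:
        record = json.loads(line)
        # Point the image reference at the bucket we control.
        record["source-ref"] = record["source-ref"].replace(
            f"s3://{OLD_BUCKET}/", f"s3://{NEW_BUCKET}/"
        )
        dst.write(json.dumps(record) + "\n")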

Train your DIY model

To establish a comparison between the DIY and Amazon Rekognition Custom Labels models, we follow the steps in the public repository mentioned earlier, which demonstrates how to train a TensorFlow 2 model using the same dataset.

We follow the steps described in this repository to train an EfficientNet object detector using our bee dataset. We modify the training notebook so that it runs for 10,000 steps. The model trains for about 2 hours, achieving an average precision of 83% and a recall of 56%.

Create your Amazon Rekognition Custom Labels project

To create your bee detection project, complete the following steps:

  1. On the Amazon Rekognition console, choose Amazon Rekognition Custom Labels.
  2. Choose Get Started.
  3. For Project name, enter bee-detection.
  4. Choose Create project.

Import your dataset

We created a manifest using the first notebook (1_prepare_data) that contains the Amazon S3 URIs of our image annotations. We follow these steps to import our manifest into Amazon Rekognition Custom Labels:

  1. On the Amazon Rekognition Custom Labels console, choose Create dataset.
  2. Select Import images labeled by Amazon SageMaker Ground Truth.
  3. Name your dataset (for example, bee_dataset).
  4. Enter the Amazon S3 URI of the manifest file that we created.
  5. Copy the bucket policy that appears on the console.
  6. Open the Amazon S3 console in a new tab and access the bucket where the images are stored.
  7. On the Permissions tab, add the copied bucket policy to allow Amazon Rekognition Custom Labels to access the dataset.
  8. Go back to the dataset creation console and choose Submit.

Train your model

After the dataset is imported into Amazon Rekognition Custom Labels, we can train a model immediately.

  1. Choose Train Model from the dataset page.
  2. For Choose project, choose your bee-detection project.
  3. For Choose training dataset, choose your bee_dataset dataset.

As part of model training, Amazon Rekognition Custom Labels requires a labeled test dataset to validate the model training. Amazon Rekognition Custom Labels uses the test dataset to verify how well your trained model predicts the correct labels and to generate evaluation metrics. Images in the test dataset are not used to train your model and should represent the same types of images you use your model to analyze.

  1. For Create test set, select how you want to provide your test dataset.

Amazon Rekognition Custom Labels provides three options:

  • Choose an existing test dataset
  • Create a new test dataset
  • Split training dataset

For this post, we choose to split our training dataset, which sets aside 20% of our dataset for testing the model.

  1. Select Split training dataset.
  2. Choose Train.

Our model took approximately 1.5 hours to train. The model achieved an average precision of 99% with a recall of 90% on the test data. The training time required for your model depends on many factors, including the number of images provided in the dataset and the complexity of the model. When training is complete, Amazon Rekognition Custom Labels outputs key quality metrics including F1 score, precision, recall, and the assumed threshold for each label. For more information about metrics, see Metrics for evaluating your model.

Serverless inference architecture

After our model is trained, Amazon Rekognition Custom Labels provides the API calls for starting, using, and stopping your model. In the environment setup section, we set up a serverless architecture to process test images that are uploaded to our S3 bucket via Amazon S3 events. It uses an AWS Lambda function to call the inference API, and manages these API calls using Amazon SQS.

We’re ready now to start applying our trained model to new images. We first need to start the project model version via the Amazon Rekognition Custom Labels console.

We take note of our model’s ARN and update the Lambda function bee-detection-inference with it. This indicates which endpoint we must invoke to retrieve the object detection results. We can also change the assumed threshold to accept or reject results with a low confidence score.
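As an illustration of what the inference function does, the following is a minimal sketch of a Lambda handler that calls the DetectCustomLabels API for each image referenced in the queued S3 event and writes the response next to the image. The deployed bee-detection-inference function may differ in detail; MODEL_ARN and MIN_CONFIDENCE stand in for the values you configure.

import json
import boto3

rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

MODEL_ARN = "arn:aws:rekognition:region:account:project/bee-detection/version/..."  # your model's ARN
MIN_CONFIDENCE = 90  # assumed threshold; raise or lower it to trade recall for precision

def handler(event, context):
    # Each SQS record body carries the original S3 event notification.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            response = rekognition.detect_custom_labels(
                ProjectVersionArn=MODEL_ARN,
                Image={"S3Object": {"Bucket": bucket, "Name": key}},
                MinConfidence=MIN_CONFIDENCE,
            )
            # Store the raw detection result next to the image, with a .json suffix.
            s3.put_object(
                Bucket=bucket,
                Key=key + ".json",
                Body=json.dumps(response["CustomLabels"]),
            )

If you prefer not to use the console, the model can also be started and stopped programmatically with the StartProjectVersion and StopProjectVersion APIs.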

Now it’s time to start uploading our test images to our S3 bucket prefix (s3://your-bucket/test_images). We can either use the Amazon S3 console or the AWS Command Line Interface (AWS CLI). We choose some test images present in our bee detection dataset and upload them using the console. As the images are uploaded, they’re queued in Amazon SQS and then processed by our Lambda function, leaving the result with the same file name, plus the .json suffix.
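For example, to upload a test image with boto3 instead of the console (the bucket name and local file name are placeholders for your own values):

import boto3

s3 = boto3.client("s3")
# Uploading under the test_images/ prefix triggers the S3 event, the SQS queue, and the Lambda function.
s3.upload_file("bee_photo.jpg", "your-bucket", "test_images/bee_photo.jpg")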

We visualize the results of the JSON response from our Amazon Rekognition Custom Labels model using the second notebook (2_visualize_images). The following is an example of a response output:

{'CustomLabels': [{'Name': 'bee',
   'Confidence': 99.9679946899414,
   'Geometry': {'BoundingBox': {'Width': 0.17472000420093536,
     'Height': 0.23267999291419983,
     'Left': 0.34907999634742737,
     'Top': 0.36125999689102173}}}],
 'ResponseMetadata': {'RequestId': '4f98fdc8-a7d3-4251-b21e-484baf958efb',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'content-type': 'application/x-amz-json-1.1',
   'date': 'Thu, 11 Mar 2021 15:23:39 GMT',
   'x-amzn-requestid': '4f98fdc8-a7d3-4251-b21e-484baf958efb',
   'content-length': '202',
   'connection': 'keep-alive'},
  'RetryAttempts': 0}}

This bee is detected with a confidence of 99.97%.

In the following image on the left, we find six bees over 99.4% confidence, which is our optimal threshold. The image on the right shows the same result with a threshold of 90% (15 bees).

Clean up

When you’re done, remember to follow these steps to avoid incurring unnecessary charges:

  1. Stop the model version on the Amazon Rekognition Custom Labels console.
  2. Empty the S3 bucket that was created where images were uploaded.
  3. Delete the CloudFormation stack to remove all provisioned resources.

Comparison with a custom DIY model

The performance of our Amazon Rekognition Custom Labels model is quantitatively better than our DIY model, achieving almost perfect precision (99%). It is also noticeably better at avoiding false negatives, yielding a very robust recall of 90%, far surpassing the 56% recall of our DIY model. This is partly due to the optimized tuning that Amazon Rekognition Custom Labels applies to the model, and the assumed thresholds that it yields after training to achieve the best performance at test time.

In the first example, the DIY model detects our single bee at a much lower confidence score (64%), and with a rather large bounding box that doesn’t accurately reflect the size of the bee.

For the more challenging picture, we must lower our threshold to 81% to find the very first detection (left), and lower it even more to 50% to find 7 bees (right).

Playing with this threshold can be risky. Setting a very low threshold can detect more bees (better recall), but at the same time find false detections, lowering our model precision. However, Amazon Rekognition Custom Labels can detect bees with a much higher confidence, which allows us to set a higher threshold for a much better overall performance.

Conclusion

In this post, we showed you how to create a computer vision object detection model with Amazon Rekognition Custom Labels using annotated data, and compared the results with a custom DIY model. Amazon Rekognition Custom Labels brings a great advantage over building your own models: it enables you to build and optimize specialized computer vision models to detect unique objects without the need for advanced programming knowledge.

With more experiments with other model architectures and hyperparameters, an ML scientist can improve the DIY model we tested in this post. The Amazon Rekognition Custom Labels value proposition is that it does these experiments on your behalf, thereby reducing the time to get a usable model and its development costs. Finally, we also showed how to set up a minimal serverless architecture to process new images using our trained model.

For more information about using custom labels, see What Is Amazon Rekognition Custom Labels?


About the Author

Raúl Díaz García is a Sr Data Scientist in the EMEA SDT IoT Team. Raúl works with customers across the EMEA region, where he helps them enable solutions related to Computer Vision and Machine Learning in the IoT space.

Read More

Build GAN with PyTorch and Amazon SageMaker

GAN is a generative ML model that is widely used in advertising, games, entertainment, media, pharmaceuticals, and other industries. You can use it to create fictional characters and scenes, simulate facial aging, change image styles, produce synthetic data such as chemical formulas, and more.

For example, the following images show the effect of picture-to-picture conversion.

The following images show the effect of synthesizing scenery based on semantic layout.

This post walks you through building your first GAN model using Amazon SageMaker. This is a journey of learning GAN from the perspective of practical engineering experiences, as well as opening a new AI/ML domain of generative models.

We also introduce a use case of one of the hottest GAN applications in the synthetic data generation area. We hope this gives you a tangible sense on how GAN is used in real-life scenarios.

Overview of solution

Among the following two pictures of handwritten digits, one of them is actually generated by a GAN model. Can you tell which one?

The main topic of this article is using ML techniques to generate synthetic handwritten digits. To achieve this goal, you personally experience the training of a GAN model. Generating synthetic handwritten digits follows essentially the same principles and engineering process as portrait generation, although the data, algorithm complexity, and accuracy requirements differ.

A generative adversarial network (GAN), introduced by Ian Goodfellow et al., is a deep neural network architecture consisting of a generator network and a discriminator network. The generator synthesizes data and tries to deceive the discriminator, whereas the discriminator authenticates the data and tries to correctly identify all synthesized data. In the process of training iterations, the two networks continue to evolve and confront each other until they reach an equilibrium state (Nash equilibrium). The discriminator can no longer distinguish synthesized data, at which point the training process is over.

To train a GAN model, we need to start with some tools and services that are efficient and necessary for ML practices on AWS. As the working environment, SageMaker is a fully managed ML service. It offers all mainstream ML frameworks as managed container images, such as Scikit-Learn, XGBoost, MXNet, TensorFlow, PyTorch, and more. The SageMaker SDK is an open-source development kit for SageMaker that allows you to use SageMaker and other AWS services, for example, accessing data in an Amazon Simple Storage Service (Amazon S3) bucket, or training a model with a managed Amazon Elastic Compute Cloud (Amazon EC2) instance.

With SageMaker end-to-end ML functionality, you can focus on the model building work and easily train a variety of GAN models, without overheads in infrastructure and framework maintenance.

The following diagram illustrates our architecture.

The training data comes from the S3 storage bucket, and is loaded into the local storage of the training instance. The managed training frameworks and managed algorithms serve in the form of container images in Amazon Elastic Container Registry (Amazon ECR), which are combined with the custom training code when the training container is launched. The training output is collected and sent to a specified S3 bucket. In the following sections, we learn how to use these resources via the SageMaker SDK.

We use AWS services such as Amazon SageMaker and Amazon S3, which incur certain cloud resource usage fees.

Set up the development environment

SageMaker provides a managed Jupyter notebook instance, for model building, training, and more. You can carry out ML activities effectively and easily via Jupyter notebooks. For instructions on setting up your Jupyter notebook working environment, see Get Started with Amazon SageMaker Notebook Instances.

Alternatively, you may want to work with Amazon SageMaker Studio. For instructions, see Get Started with Studio Notebooks.

Download the source code

The source code is available in SageMaker Examples GitHub repository.

  1. On the Git menu, choose Clone a Repository.
  2. Enter the clone URI of the repository (https://github.com/aws/amazon-sagemaker-examples.git).
  3. Choose Clone.

When the download is complete, browse the source code structure through the file browser.

  1. Open the notebook build_gan_with_pytorch.ipynb, which is under the folder /amazon-sagemaker-examples/advanced_functionality/pytorch_bring_your_own_gan/.
  2. In the Select Kernel pop-up, choose conda_pytorch_latest_p36.

If using a Studio environment, select the Python3 (PyTorch 1.6 Python 3.6 GPU Optimized) kernel instead.

The code and notebooks used in this post are available on GitHub, and are all verified with Python 3.6, PyTorch 1.5, and SageMaker-managed JupyterLab.

Deep convolutional generative adversarial network (DCGAN)

In 2016, Alec Radford et al. published the paper “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”. This pioneered the application of convolutional neural networks to GANs. In the algorithm design, the fully connected layers are replaced with convolutional layers, which improves the stability of training in image generation scenarios.

Network structure

The generator network uses strided transposed convolutional layers to increase the resolution of the tensor. The input shape is (batch_size, 100) and the output shape is (batch_size, 64, 64, 3). In other words, the network accepts a 100-dimensional uniform distribution vector, and then undergoes continuous transformation until the final image is generated.

The discriminator network receives pictures in (64, 64, 3) format, uses 2D convolutional layers for downsampling, and finally passes them to the fully connected layer for classification.
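To make these shapes concrete, the following is a small PyTorch sketch of the first generator block and the first discriminator block. It only illustrates the tensor dimensions involved; it is not the exact implementation in train.py, and the sizes nz, ngf, ndf, and nc are illustrative.

import torch
import torch.nn as nn

nz, ngf, ndf, nc = 100, 64, 64, 3  # latent size, generator/discriminator feature maps, image channels

# First generator block: project the 100-dimensional noise vector to a 4x4 feature map.
gen_head = nn.Sequential(
    nn.ConvTranspose2d(nz, ngf * 8, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(ngf * 8),
    nn.ReLU(inplace=True),
)

# First discriminator block: downsample a 64x64 image to 32x32 feature maps.
disc_head = nn.Sequential(
    nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1, bias=False),
    nn.LeakyReLU(0.2, inplace=True),
)

z = torch.randn(16, nz, 1, 1)       # noise input, one vector per image in the batch
print(gen_head(z).shape)            # torch.Size([16, 512, 4, 4])
img = torch.randn(16, nc, 64, 64)   # image input
print(disc_head(img).shape)         # torch.Size([16, 64, 32, 32])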

The training process of the DCGAN model can be roughly divided into three sub-processes.

First, the generator network uses a random noise vector as input to generate a synthetic picture. Then, the authentic picture and the synthetic picture are used to train the discriminator network and update its parameters. Finally, the generator network parameters are updated.
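A single batch iteration of these three sub-processes might look roughly like the following sketch. The actual logic lives in the DCGAN.train_step method of train.py and may differ in detail.

import torch

def train_step(generator, discriminator, opt_g, opt_d, real_images, criterion, nz, device):
    """One batch of DCGAN training: update D on real and fake images, then update G."""
    noise = torch.randn(real_images.size(0), nz, 1, 1, device=device)

    # 1) Generate a batch of synthetic images from random noise.
    fake_images = generator(noise)

    # 2) Update the discriminator: real images should score 1, synthetic images 0.
    opt_d.zero_grad()
    d_real = discriminator(real_images)
    d_fake = discriminator(fake_images.detach())  # detach so the generator is not updated here
    loss_d = criterion(d_real, torch.ones_like(d_real)) + criterion(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # 3) Update the generator: its output should now be scored as real by the discriminator.
    opt_g.zero_grad()
    d_fake_for_g = discriminator(fake_images)
    loss_g = criterion(d_fake_for_g, torch.ones_like(d_fake_for_g))
    loss_g.backward()
    opt_g.step()

    return loss_d.item(), loss_g.item()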

Code structure

The file structure of the project directory pytorch_bring_your_own_gan is as follows:

├── data
├── src
│   └── train.py
├── tmp
└── build_gan_with_pytorch.ipynb

The file train.py contains three classes: the generator network Generator, the discriminator network Discriminator, and a wrapper class for a single batch training process. See the following code:

class Generator(nn.Module):
...

class Discriminator(nn.Module):
...

class DCGAN(object):
    """
    A wrapper class for Generator and Discriminator,
    'train_step' method is for single batch training.
    """
...

The train.py file also contains several functions, which are used to facilitate training of the networks of Generator and Discriminator. Some of the major functions are as follows:

def parse_args():
...

def get_datasets(dataset_name, ...):
...

def train(dataloader, hps, ...):
...

Model development

During development, you may run the train.py script directly from the Linux command line. You can specify input data channels, model hyperparameters, and training output storage via command line arguments (for more information, see Use PyTorch with the SageMaker Python SDK):

python src/train.py --dataset mnist \
        --model-dir '/home/myhome/byos-pytorch-gan/model' \
        --output-dir '/home/myhome/byos-pytorch-gan/tmp' \
        --data-dir '/home/myhome/byos-pytorch-gan/data' \
        --hps '{"beta1":0.5, "dataset":"mnist", "epochs":18,
            "learning-rate":0.0002, "log-interval":64, "nz":100, "ngf":28, "ndf":28}'

This design of the training script parameters not only provides a convenient way to debug, but also serves as the protocol and prerequisite for integration with SageMaker containers. It balances the flexibility of model development with the portability of the training environment.
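Concretely, the SageMaker training toolkit injects environment variables such as SM_MODEL_DIR, SM_OUTPUT_DATA_DIR, and SM_CHANNEL_MNIST into the training container. Using them as argument defaults lets the same script run locally and on SageMaker. The following is a sketch of how parse_args might wire this up; the real train.py may differ in detail.

import argparse
import json
import os

def parse_args():
    parser = argparse.ArgumentParser()
    # Inside the SageMaker container these environment variables are set automatically;
    # on a local machine the defaults fall back to local paths or the values passed on the command line.
    parser.add_argument("--dataset", type=str, default="mnist")
    parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR", "./model"))
    parser.add_argument("--output-dir", type=str, default=os.environ.get("SM_OUTPUT_DATA_DIR", "./tmp"))
    parser.add_argument("--data-dir", type=str, default=os.environ.get("SM_CHANNEL_MNIST", "./data"))
    parser.add_argument("--hps", type=json.loads, default=os.environ.get("SM_HPS", "{}"))
    return parser.parse_args()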

Model training and validation

Find and open the notebook file build_gan_with_pytorch.ipynb, which introduces and runs the training process. Some of the code in this section is omitted; refer to the notebook for details.

Download data

Many public datasets are available on the internet that are very helpful for ML engineering and scientific research, such as algorithm study and evaluation. We use the MNIST dataset, which is a handwritten digits dataset, to train a DCGAN model, and eventually generate some synthetic handwritten digits. See the following code:

from sagemaker.s3 import S3Downloader as s3down
s3down.download('s3://sagemaker-sample-files/datasets/image/MNIST/pytorch/', './data')

Prepare the data

The PyTorch framework has a torchvision.datasets package, which provides access to several datasets. You can use the following commands to read the pre-downloaded MNIST dataset from local storage, for later use:

from torchvision import datasets

dataroot = './data'
trainset = datasets.MNIST(root=dataroot, train=True, download=False)
testset = datasets.MNIST(root=dataroot, train=False, download=False)

The SageMaker SDK creates a default S3 bucket for you to access various files and data that you may need in the ML engineering lifecycle. We can get the name of this bucket through the default_bucket method of the sagemaker.session.Session class in the SageMaker SDK:

from sagemaker.session import Session

sess = Session()

# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sess.default_bucket()

The SageMaker SDK provides tools for operating AWS services. For example, the S3Downloader class is used to download objects in Amazon S3, and S3Uploader is used to upload local files to Amazon S3. You upload the dataset files to Amazon S3 for model training. During model training, we don’t download data from the internet, which avoids the network latency of fetching data remotely as well as possible security risks due to direct internet access. See the following code:

import os
from sagemaker.s3 import S3Uploader as s3up

s3_data_location = s3up.upload(os.path.join(dataroot, "MNIST"),
    f"s3://{bucket}/{prefix}/data/mnist")

Train the model

Via the sagemaker.get_execution_role() method, the notebook can get the role pre-assigned to the notebook instance. This role is used to obtain training resources, such as downloading training framework images, allocating EC2 instances, and so on.
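In code, this is a single call; the resulting role is the one passed to the estimator shown below:

import sagemaker

# The execution role attached to the notebook instance (or Studio user profile)
role = sagemaker.get_execution_role()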

The hyperparameters used in the model training task can be defined in the notebook so that it’s separated from the algorithm and training code. The hyperparameters are passed in when the training task is created and dynamically combined with the training task. See the following code:

import json

hps = {
         'seed': 0,
         'learning-rate': 0.0002,
         'epochs': 18,
         'pin-memory': 1,
         'beta1': 0.5,
         'nz': 100,
         'ngf': 28,
         'ndf': 28,
         'batch-size': 128,
         'log-interval': 20,
     }

The PyTorch class from the sagemaker.pytorch package is an estimator for the PyTorch framework. You can use it to create and run training tasks. In the parameter list, instance_type specifies the type of the training instance, such as CPU or GPU instances. The directory containing the training script and model code is specified by source_dir, and the training script name must be clearly defined by entry_point. These parameters are passed to the training job along with other parameters, and they determine the environment settings of the training task. See the following code:

from sagemaker.pytorch import PyTorch

my_estimator = PyTorch(role=role,
                        entry_point='train.py',
                        source_dir='src',
                        output_path=s3_model_artifacts_location,
                        code_location=s3_custom_code_upload_location,
                        instance_count=1,
                        instance_type='ml.g4dn.2xlarge',
                        use_spot_instances=False,
                        framework_version='1.5.0',
                        py_version='py3',
                        hyperparameters=hps)

Pay special attention to the use_spot_instances parameter. Setting it to True means that you want to use Spot Instances to train the model. Because ML training usually requires a large amount of computing resources to run for a long time, Spot Instances can help you control your cost; they may save up to 90% compared to On-Demand Instances. Depending on the instance type, Region, and time, the actual savings might differ.

Now that you have created the PyTorch estimator object, you can use it to fit the training data previously uploaded to Amazon S3. The following command initiates the training job, and the training data is loaded into the training instance local storage in the form of an input channel named MNIST. When the training task starts, the training data is already available on the local file system of the training instance, and the training script train.py can access the data from the local disk afterwards.

# Start training
my_estimator.fit({'MNIST': s3_data_location}, wait=False)

Depending on the training instance you choose, the training process may last from dozens of minutes to hours. We recommend setting the wait parameter to False, which detaches the notebook from the training job. In scenarios with long training time and many training logs, it can prevent the notebook context from being lost due to network interruption or session timeout. After the notebook is detached from the training task, the output is temporarily invisible. Run the following code to allow the notebook to obtain and resume the previous training session:

%%time
from sagemaker.estimator import Estimator

# Attaching previous training session
training_job_name = my_estimator.latest_training_job.name
attached_estimator = Estimator.attach(training_job_name)

Because the model was designed to use GPU power to accelerate training, it’s much faster on GPU instances than on CPU instances. For example, the g4dn.2xlarge instance takes about 12 minutes, whereas the c5.xlarge instance may take more than 6 hours. The current model doesn’t support multi-instance training, so an instance_count value greater than 1 doesn’t bring extra benefits in training time optimization.

When the training job is complete, the trained model is collected and uploaded to Amazon S3. The upload location is specified by the output_path parameter, which is provided when creating the PyTorch object.
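To retrieve the artifact from the notebook, you can read its location from the estimator and download it with the SDK. The following sketch assumes train.py saved the generator weights as model.pth inside the model archive, which is what the test step below expects.

import tarfile
from sagemaker.s3 import S3Downloader as s3down

# model_data points at the model.tar.gz produced by the training job under output_path.
model_artifact_uri = attached_estimator.model_data
s3down.download(model_artifact_uri, "./tmp")

# Unpack the archive so that ./tmp/model.pth is available locally.
with tarfile.open("./tmp/model.tar.gz") as tar:
    tar.extractall(path="./tmp")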

Test the model

You download the trained model from Amazon S3 to the local file system of the notebook instance, where this Jupyter notebook is running. The following code loads and runs the model, and then generates a picture of handwritten digits from a random number as input:

import matplotlib.pyplot as plt
import numpy as np
import torch
from src.train import Generator
# load_model and generate_fake_handwriting are helper functions provided with the example (not shown in this excerpt)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

params = {'nz': hps['nz'], 'nc': 1, 'ngf': hps['ngf']}
model = load_model("./tmp/model.pth", model_cls=Generator, params=params, device=device, strict=False)
img = generate_fake_handwriting(model, num_images=64, nz=hps['nz'], device=device)

plt.imshow(np.asarray(img))

Use case: Synthetic data boosting handwritten text recognition

GAN and DCGAN have spawned a remarkable number of variants that address different problems in their respective domains. Let’s look at one use case, which is designed to reduce the effort and cost in data collection and annotation, as well as improve the performance of a handwriting text recognition system.

ScrabbleGAN (see also the GitHub repo), introduced by scientists from Amazon, is a semi-supervised approach to synthesize handwritten text images that are versatile both in style and lexicon. It relies on a novel generative model that can generate images of words with an arbitrary length. The generator can manipulate the resulting text style, for instance, whether the text is cursive, or how thin the pen stroke is.

Problem definition

Optical character recognition (OCR), especially handwritten text recognition (HTR) systems, have seen significant performance improvements in the deep learning era. However, deep learning-based HTR is limited by the number of training examples. In other words, data gathering and labeling are challenging and costly tasks.

Targeting the lack of versatile, annotated handwritten text, and the difficulty to obtain it, Amazon scientists introduced a semi-supervised learning solution by creating realistic, synthesized text, reducing the need for annotations and enriching the variety of training data in both style and lexicon.

Network architecture

In contrast to the vast majority of text-related networks that rely on recurrent neural networks (RNNs), ScrabbleGAN introduces a novel fully convolutional handwritten text generation architecture, which allows for arbitrarily long outputs. This architecture learns character embeddings without the need for character-level annotation.

Handwriting is a local process—each letter is influenced by its predecessor and successor. The attention of the synthesizer is focused on the immediate neighbors of the current letter, and the generator G is designed to mimic this process. Instead of generating the image out of an entire word representation, each convolutional-upsampling layer widens the receptive field, as well as the overlap between two neighboring characters. This overlap allows adjacent characters to interact, and creates a smooth transition. The style of each image is controlled by a noise vector z given as input to the network. To generate the same style for the entire word or sentence, this noise vector is kept constant throughout the generation of all the characters in the input.

The purpose of the discriminator D is to identify synthetic images generated by G from the real ones. It also discriminates between such images based on the handwriting output style. The discriminator architecture has to account for the varying length of the generated image, therefore it’s designed to be convolutional, and is essentially a concatenation of separate binary classifiers with overlapping receptive fields. Because it’s designed not to rely on character-level annotations, it doesn’t use class supervision for each of these classifiers, therefore unlabeled images can be used to train D. A pooling layer aggregates scores from all classifiers into the final discriminator output.

While discriminator D promotes real-looking images, the recognizer R promotes readable text, in essence distinguishing between gibberish and real text. Generated images are penalized by comparing the recognized text in the output of R to the text that was given as input to G. R is trained only on real, labeled, handwritten samples.

Most recognition networks use a recurrent module, which learns an implicit language model that helps it identify the correct character even if it’s not written clearly. Although this quality is usually desired in a handwriting recognition model, in this synthetic data case, it may lead the network to correctly read characters that weren’t written clearly by the generator G. Therefore, the recurrent head of the recognition network is excluded, and only the convolutional backbone is used.

Conclusion

The PyTorch framework, one of the most popular deep learning frameworks, has been advancing rapidly, and is widely recognized and applied in recent years. More and more new models have been composed with PyTorch, and a remarkable number of existing models are being migrated from other frameworks to PyTorch. It has already become one of the de facto mainstream deep learning frameworks.

SageMaker is closely integrated with a variety of AWS services, such as EC2 instances of various types, Amazon S3, and Amazon ECR. It provides an end-to-end, consistent ML experience for ML practitioners of all frameworks. SageMaker continues to support mainstream ML frameworks, including PyTorch. ML algorithms and models developed with PyTorch can be easily transplanted to a SageMaker environment by using the fully managed Jupyter notebook, Spot training instances, Amazon ECR, the SageMaker SDK, and more. This lowers the overhead of ML engineering and infrastructure operation, improves productivity and efficiency, and reduces operation and maintenance costs.

Synthetic data, generated by GAN, is rich and versatile in features, and can be produced in substantial amounts. Therefore, you can use it to improve the performance of a model by enriching the training set. Moreover, this technique can reduce effort and cost in data gathering and labeling.

DCGAN is a landmark in the field of generative adversarial networks, and it’s the cornerstone of many modern complex generative adversarial networks today. We explore some of the most recent and interesting variants of GANs in later posts. The introduction and engineering practices discussed in this post can help you understand the principles and engineering methods for GAN in general. Try out your first generative model, available as an example of SageMaker, have fun, and see you next time.


About the Author

Laurence MIAO, Solutions Architect at AWS. Laurence is specialized in AI/ML. He helps customers empower their business with AI/ML on AWS. Before AWS, Laurence served in a variety of software projects and organizations. His tech spectrum covers high-performance internet applications, enterprise information system integration, DevOps, cloud computing, and Machine Learning.

Read More

Process Amazon Redshift data and schedule a training pipeline with Amazon SageMaker Processing and Amazon SageMaker Pipelines

Customers in many different domains tend to work with multiple sources for their data: object-based storage like Amazon Simple Storage Service (Amazon S3), relational databases like Amazon Relational Database Service (Amazon RDS), or data warehouses like Amazon Redshift. Machine learning (ML) practitioners are often driven to work with objects and files instead of databases and tables from the different frameworks they work with. They also prefer local copies of such files in order to reduce the latency of accessing them.

Nevertheless, ML engineers and data scientists might be required to directly extract data from data warehouses with SQL-like queries to obtain the datasets that they can use for training their models.

In this post, we use the Amazon SageMaker Processing API to run a query against an Amazon Redshift cluster, create CSV files, and perform distributed processing. As an extra step, we also train a simple model to predict the total sales for new events, and build a pipeline with Amazon SageMaker Pipelines to schedule it.

Prerequisites

This post uses the sample data that is available when creating a Free Tier cluster in Amazon Redshift. As a prerequisite, you should create your cluster and attach to it an AWS Identity and Access Management (IAM) role with the correct permissions. For instructions on creating the cluster with the sample dataset, see Using a sample dataset. For instructions on associating the role with the cluster, see Authorizing access to the Amazon Redshift Data API.

You can then use your IDE of choice to open the notebooks. This content has been developed and tested using SageMaker Studio on an ml.t3.medium instance. For more information about using Studio, refer to the Amazon SageMaker Studio documentation.

Define the query

Now that your Amazon Redshift cluster is up and running, and loaded with the sample dataset, we can define the query to extract data from our cluster. According to the documentation for the sample database, this application helps analysts track sales activity for the fictional TICKIT website, where users buy and sell tickets online for sporting events, shows, and concerts. In particular, analysts can identify ticket movement over time, success rates for sellers, and the best-selling events, venues, and seasons.

Analysts may be tasked to solve a very common ML problem: predict the number of tickets sold given the characteristics of an event. Because we have two fact tables and five dimensions in our sample database, we have some data that we can work with. For the sake of this example, we try to use information from the venue in which the event takes place as well as its date. The SQL query looks like the following:

SELECT sum(s.qtysold) AS total_sold, e.venueid, e.catid, d.caldate, d.holiday
FROM sales s, event e, date d
WHERE s.eventid = e.eventid AND e.dateid = d.dateid
GROUP BY e.venueid, e.catid, d.caldate, d.holiday

We can run this query in the query editor to test the outcomes and change it to include additional information if needed.
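You can also run the same query programmatically with the Amazon Redshift Data API, which is convenient for quick checks from a notebook. The cluster identifier, database, and database user below are placeholders for your own values.

import time
import boto3

redshift_data = boto3.client("redshift-data")

QUERY = """
SELECT sum(s.qtysold) AS total_sold, e.venueid, e.catid, d.caldate, d.holiday
FROM sales s, event e, date d
WHERE s.eventid = e.eventid AND e.dateid = d.dateid
GROUP BY e.venueid, e.catid, d.caldate, d.holiday
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="redshift-cluster-1",  # placeholder
    Database="dev",                          # placeholder
    DbUser="awsuser",                        # placeholder
    Sql=QUERY,
)

# Poll until the statement finishes, then print a few rows of the result.
while redshift_data.describe_statement(Id=response["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(2)
print(redshift_data.get_statement_result(Id=response["Id"])["Records"][:5])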

Extract the data from Amazon Redshift and process it with SageMaker Processing

Now that we’re happy with our query, we need to make it part of our training pipeline.

A typical training pipeline consists of three phases:

  • Preprocessing – This phase reads the raw dataset and transforms it into a format that matches the input required by the model for its training
  • Training – This phase reads the processed dataset and uses it to train the model
  • Model registration – In this phase, we save the model for later usage

Our first task is to use a SageMaker Processing job to load the dataset from Amazon Redshift, preprocess it, and store it to Amazon S3 for the training model to pick up. SageMaker Processing allows us to directly read data from different resources, including Amazon S3, Amazon Athena, and Amazon Redshift. SageMaker Processing allows us to configure access to the cluster by providing the cluster and database information, and use our previously defined SQL query as part of a RedshiftDatasetDefinition. We use the SageMaker Python SDK to create this object, and you can check the definition and the parameters needed on the GitHub page. See the following code:

from sagemaker.dataset_definition.inputs import RedshiftDatasetDefinition

rdd = RedshiftDatasetDefinition(
    cluster_id="THE-NAME-OF-YOUR-CLUSTER",
    database="THE-NAME-OF-YOUR-DATABASE",
    db_user="YOUR-DB-USERNAME",
    query_string="THE-SQL-QUERY-FROM-THE-PREVIOUS-STEP",
    cluster_role_arn="THE-IAM-ROLE-ASSOCIATED-TO-YOUR-CLUSTER",
    output_format="CSV",
    output_s3_uri="WHERE-IN-S3-YOU-WANT-TO-STORE-YOUR-DATA"
)

Then, you can define the DatasetDefinition. This object is responsible for defining how SageMaker Processing uses the dataset loaded from Amazon Redshift:

from sagemaker.dataset_definition.inputs import DatasetDefinition

dd = DatasetDefinition(
    data_distribution_type='ShardedByS3Key', # This tells SM Processing to shard the data across instances
    local_path='/opt/ml/processing/input/data/', # Where SM Processing will save the data in the container
    redshift_dataset_definition=rdd # The RedshiftDatasetDefinition defined above
)

Finally, you can use this object as input of your processor of choice. For this post, we wrote a very simple scikit-learn script that cleans the dataset, performs some transformations, and splits the dataset for training and testing. You can check the code in the file processing.py.
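For orientation, a processing script of this kind typically reads the CSV files that SageMaker Processing materializes under the input path, applies the transformations, and writes the train and test splits to the output paths. The following sketch shows the general shape only; it is not the exact processing.py from the repository, and the transformations are illustrative.

import glob
import os
import pandas as pd
from sklearn.model_selection import train_test_split

INPUT_DIR = "/opt/ml/processing/input/data"
TRAIN_DIR = "/opt/ml/processing/train"
TEST_DIR = "/opt/ml/processing/test"

if __name__ == "__main__":
    # The RedshiftDatasetDefinition delivers the query result as one or more CSV files.
    input_files = glob.glob(os.path.join(INPUT_DIR, "*"))
    df = pd.concat([pd.read_csv(f) for f in input_files], ignore_index=True)

    # Illustrative transformations: drop rows with missing values and encode the holiday flag.
    df = df.dropna()
    df["holiday"] = df["holiday"].astype(int)

    train, test = train_test_split(df, test_size=0.2, random_state=42)
    os.makedirs(TRAIN_DIR, exist_ok=True)
    os.makedirs(TEST_DIR, exist_ok=True)
    train.to_csv(os.path.join(TRAIN_DIR, "train.csv"), index=False)
    test.to_csv(os.path.join(TEST_DIR, "test.csv"), index=False)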

We can now instantiate the SKLearnProcessor object, where we define the framework version that we plan on using, the amount and type of instances that we spin up as part of our processing cluster, and the execution role that contains the right permissions. Then, we can pass the parameter dataset_definition as the input of the run() method. This method accepts our processing.py script as the code to run, given some inputs (namely, our RedshiftDatasetDefinition), generates some outputs (a train and a test dataset), and stores both to Amazon S3. We run this operation synchronously thanks to the parameter wait=True:

from sagemaker.sklearn import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker import get_execution_role

skp = SKLearnProcessor(
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type='ml.m5.large',
    instance_count=1
)
skp.run(
    code='processing/processing.py',
    inputs=[ProcessingInput(
        dataset_definition=dd,
        destination='/opt/ml/processing/input/data/',
        s3_data_distribution_type='ShardedByS3Key'
    )],
    outputs = [
        ProcessingOutput(
            output_name="train", 
            source="/opt/ml/processing/train"
        ),
        ProcessingOutput(
            output_name="test", 
            source="/opt/ml/processing/test"
        ),
    ],
    wait=True
)

With the outputs created by the processing job, we can move to the training step, by means of the sagemaker.sklearn.SKLearn() Estimator:

from sagemaker.sklearn import SKLearn

s = SKLearn(
    entry_point='training/script.py',
    framework_version='0.23-1',
    instance_type='ml.m5.large',
    instance_count=1,
    role=get_execution_role()
)
s.fit({
    'train':skp.latest_job.outputs[0].destination, 
    'test':skp.latest_job.outputs[1].destination
})

To learn more about the SageMaker Training API and Scikit-learn Estimator, see Using Scikit-learn with the SageMaker Python SDK.

Define a training pipeline

Now that we have proven that we can read data from Amazon Redshift, preprocess it, and use it to train a model, we can define a pipeline that reproduces these steps, and schedule it to run. To do so, we use SageMaker Pipelines. Pipelines is the first purpose-built, easy-to-use continuous integration and continuous delivery (CI/CD) service for ML. With Pipelines, you can create, automate, and manage end-to-end ML workflows at scale.

Pipelines are composed of steps. These steps define the actions that the pipeline takes, and the relationships between steps using properties. We already know that our pipeline is composed of three steps: preprocessing, training, and model registration.

Furthermore, to make the pipeline definition dynamic, Pipelines allows us to define parameters, which are values that we can provide at runtime when the pipeline starts.

The following code is a snippet that shows the definition of a processing step. The step requires the definition of a processor, which is very similar to the one defined previously during the preprocessing discovery phase, but this time using the parameters of Pipelines. The other parameters, code, inputs, and outputs are the same as we defined previously:

#### PROCESSING STEP #####
from sagemaker.workflow.parameters import ParameterInteger, ParameterString
from sagemaker.workflow.steps import ProcessingStep

# PARAMETERS
processing_instance_type = ParameterString(name='ProcessingInstanceType', default_value='ml.m5.large')
processing_instance_count = ParameterInteger(name='ProcessingInstanceCount', default_value=2)

# PROCESSOR
skp = SKLearnProcessor(
    framework_version='0.23-1',
    role=get_execution_role(),
    instance_type=processing_instance_type,
    instance_count=processing_instance_count
)

# DEFINE THE STEP
processing_step = ProcessingStep(
    name='ProcessingStep',
    processor=skp,
    code='processing/processing.py',
    inputs=[ProcessingInput(
        dataset_definition=dd,
        destination='/opt/ml/processing/input/data/',
        s3_data_distribution_type='ShardedByS3Key'
    )],
    outputs = [
        ProcessingOutput(output_name="train", source="/opt/ml/processing/output/train"),
        ProcessingOutput(output_name="test", source="/opt/ml/processing/output/test"),
    ]
)

Very similarly, we can define the training step, but we use the outputs from the processing step as inputs:

# TRAINING STEP
from sagemaker.workflow.steps import TrainingStep
from sagemaker.inputs import TrainingInput

training_step = TrainingStep(
    name='TrainingStep',
    estimator=s,
    inputs={
        "train": TrainingInput(s3_data=processing_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri),
        "test": TrainingInput(s3_data=processing_step.properties.ProcessingOutputConfig.Outputs["test"].S3Output.S3Uri)
    }
)

Finally, let’s add the model step, which registers the model to SageMaker for later use (for real-time endpoints and batch transform):

# MODEL STEP
from sagemaker.workflow.steps import CreateModelStep
from sagemaker.inputs import CreateModelInput

# 'model' is a sagemaker.model.Model built from the training step's artifacts;
# its definition is omitted from this snippet.
model_step = CreateModelStep(
    name="Model",
    model=model,
    inputs=CreateModelInput(instance_type='ml.m5.xlarge')
)

With all the pipeline steps now defined, we can define the pipeline itself as a pipeline object comprising a series of those steps. ParallelStep and Condition steps are also possible. Then we can update and insert (upsert) the definition to Pipelines with the .upsert() command:

#### PIPELINE ####
from sagemaker.workflow.pipeline import Pipeline
pipeline = Pipeline(
    name = 'Redshift2Pipeline',
    parameters = [
        processing_instance_type, processing_instance_count,
        training_instance_type, training_instance_count,
        inference_instance_type
    ],
    steps = [
        processing_step, 
        training_step,
        model_step
    ]
)
pipeline.upsert(role_arn=role)

After we upsert the definition, we can start the pipeline with the pipeline object’s start() method, and wait for the end of its run:

execution = pipeline.start()
execution.wait()

After the pipeline starts running, we can view the run on the SageMaker console. In the navigation pane, under Components and registries, choose Pipelines. Choose the Redshift2Pipeline pipeline, and then choose the specific run to see its progress. You can choose each step to see additional details such as the output, logs, and additional information. Typically, this pipeline should take about 10 minutes to complete.
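You can also follow the run programmatically from the notebook with the execution object returned by start():

# Print the overall status and the status of each step.
print(execution.describe()["PipelineExecutionStatus"])
for step in execution.list_steps():
    print(step["StepName"], step["StepStatus"])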

Conclusions

In this post, we created a SageMaker pipeline that reads data from Amazon Redshift natively without requiring additional configuration or services, processed it via SageMaker Processing, and trained a scikit-learn model. We can now do the following:

If you want additional notebooks to play with, check out the following:


About the Author

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.

Read More

Add AutoML functionality with Amazon SageMaker Autopilot across accounts

AutoML is a powerful capability, provided by Amazon SageMaker Autopilot, that allows non-experts to create machine learning (ML) models to invoke in their applications.

The problem that we want to solve arises when, due to governance constraints, Amazon SageMaker resources can’t be deployed in the same AWS account where they are used.

Examples of such a situation are:

  • A multi-account enterprise setup of AWS where the Autopilot resources must be deployed in a specific AWS account (the trusting account), and should be accessed from trusted accounts
  • A software as a service (SaaS) provider that offers AutoML to their users and adopts the resources in the customer AWS account so that the billing is associated to the end customer

This post walks through an implementation using the SageMaker Python SDK. It’s divided into two sections:

  • Create the AWS Identity and Access Management (IAM) resources needed for cross-account access
  • Perform the Autopilot job, deploy the top model, and make predictions from the trusted account accessing the trusting account

The solution described in this post is provided in the Jupyter notebook available in this GitHub repository.

For a full explanation of Autopilot, you can refer to the examples available in GitHub, particularly Top Candidates Customer Churn Prediction with Amazon SageMaker Autopilot and Batch Transform (Python SDK).

Prerequisites

We have two AWS accounts:

  • Customer (trusting) account – Where the SageMaker resources are deployed
  • SaaS (trusted) account – Drives the training and prediction activities

You have to create a user for each account, with programmatic access enabled and the IAMFullAccess managed policy associated.

You have to configure the user profiles in the .aws/credentials file:

  • customer_config for the user configured in the customer account
  • saas_config for the user configured in the SaaS account

To update the SageMaker SDK, run the following command in your Python environment:

!pip install --upgrade sagemaker

The procedure has been tested in the SageMaker environment conda_python3.

Common modules and initial definitions

Import common Python modules used in the script:

import boto3
import json
import sagemaker
from botocore.exceptions import ClientError

Let’s define the AWS Region that will host the resources:

REGION = boto3.Session().region_name

and the reference to the dataset for the training of the model:

DATASET_URI = "s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt"

Set up the IAM resources

The following diagram illustrates the IAM entities that we create, which allow the cross-account implementation of the Autopilot job.

On the customer account, we define the single role customer_trusting_saas, which consolidates the permissions for Amazon Simple Storage Service (Amazon S3) and SageMaker access needed for the following:

  • The local SageMaker service that performs the Autopilot actions
  • The principal in the SaaS account that initiates the actions in the customer account

On the SaaS account, we define the following:

  • The AutopilotUsers group with the policy required to assume the customer_trusting_saas role via AWS Security Token Service (AWS STS)
  • The saas_user, which is a member of the AutopilotUsers group and is the actual principal triggering the Autopilot actions

For additional security, in the cross-account trust relationship, we use the external ID to mitigate the confused deputy problem.

Let’s proceed with the setup.

For each of the two accounts, we complete the following tasks:

  1. Create the Boto3 session with the profile of the respective configuration user.
  2. Retrieve the AWS account ID by means of AWS STS.
  3. Create the IAM client that performs the configuration steps in the account.

For the customer account, use the following code:

customer_config_session = boto3.session.Session(profile_name="customer_config")
CUSTOMER_ACCOUNT_ID = customer_config_session.client("sts").get_caller_identity()["Account"]
customer_iam_client = customer_config_session.client("iam")

Use the following code in the SaaS account:

saas_config_session = boto3.session.Session(profile_name="saas_config")
SAAS_ACCOUNT_ID = saas_config_session.client("sts").get_caller_identity()["Account"]
saas_iam_client = saas_config_session.client("iam")

Set up the IAM entities in the customer account

Let’s first define the role needed to perform cross-account tasks from the SaaS account in the customer account.

For simplicity, the same role is adopted for trusting SageMaker in the customer account. Ideally, consider splitting this role into two roles with fine-grained permissions in line with the principle of granting least privilege.

The role name and the references to the ARN of the SageMaker AWS managed policies are as follows:

CUSTOMER_TRUST_SAAS_ROLE_NAME = "customer_trusting_saas"
CUSTOMER_TRUST_SAAS_ROLE_ARN = "arn:aws:iam::{}:role/{}".format(CUSTOMER_ACCOUNT_ID, CUSTOMER_TRUST_SAAS_ROLE_NAME)
SAGEMAKERFULLACCESS_POLICY_ARN = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"

The following customer managed policy gives the role the permissions to access the Amazon S3 resources that are needed for the SageMaker tasks and for the cross-account copy of the dataset.

We restrict the access to the S3 buckets dedicated to SageMaker in the AWS Region for the customer account. See the following code:

CUSTOMER_S3_POLICY_NAME = "customer_s3"
CUSTOMER_S3_POLICY = {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
         "s3:GetObject",
         "s3:PutObject",
         "s3:DeleteObject",
         "s3:ListBucket"
      ],
      "Resource": [
         "arn:aws:s3:::sagemaker-{}-{}".format(REGION, CUSTOMER_ACCOUNT_ID),
         "arn:aws:s3:::sagemaker-{}-{}/*".format(REGION, CUSTOMER_ACCOUNT_ID)
      ]
    }
  ]
}

Then we define the external ID to mitigate the confused deputy problem:

EXTERNAL_ID = "XXXXX"

The trust relationships policy allows the principals from the trusted account and SageMaker to assume the role:

CUSTOMER_TRUST_SAAS_POLICY = {
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::{}:root".format(SAAS_ACCOUNT_ID)
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": EXTERNAL_ID
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "sagemaker.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

For simplicity, we don’t include the management of the exceptions in the following snippets. See the Jupyter notebook for the full code.

We create the customer managed policy in the customer account, create the new role, and attach the two policies. We use the maximum session duration parameter to manage long-running jobs. See the following code:

MAX_SESSION_DURATION = 10800
create_policy_response = customer_iam_client.create_policy(PolicyName=CUSTOMER_S3_POLICY_NAME,
                                                           PolicyDocument=json.dumps(CUSTOMER_S3_POLICY))
customer_s3_policy_arn = create_policy_response["Policy"]["Arn"]

create_role_response = customer_iam_client.create_role(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
                                                       AssumeRolePolicyDocument=json.dumps(CUSTOMER_TRUST_SAAS_POLICY),
                                                       MaxSessionDuration=MAX_SESSION_DURATION)

customer_iam_client.attach_role_policy(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
                                       PolicyArn=customer_s3_policy_arn)
customer_iam_client.attach_role_policy(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME,
                                       PolicyArn=SAGEMAKERFULLACCESS_POLICY_ARN)

Set up IAM entities in the SaaS account

We define the following in the SaaS account:

  • A group of users allowed to perform the Autopilot job in the customer account
  • A policy associated with the group for assuming the role defined in the customer account
  • A policy associated with the group for uploading data to Amazon S3 and managing bucket policies
  • A user that is responsible for the implementation of the Autopilot jobs – the user has programmatic access
  • A user profile to store the user access key and secret in the file for the credentials

Let’s start with defining the name of the group (AutopilotUsers):

SAAS_USER_GROUP_NAME = "AutopilotUsers"

The first policy refers to the customer account ID and the role:

SAAS_ASSUME_ROLE_POLICY_NAME = "saas_assume_customer_role"
SAAS_ASSUME_ROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::{}:role/{}".format(CUSTOMER_ACCOUNT_ID, CUSTOMER_TRUST_SAAS_ROLE_NAME)
        }
    ]
}

The second policy is needed to download the dataset, and to manage the Amazon S3 bucket used by SageMaker:

SAAS_S3_POLICY_NAME = "saas_s3"
SAAS_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::{}".format(DATASET_URI.split('://')[1])
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:CreateBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:PutBucketPolicy",
                "s3:DeleteBucketPolicy"
            ],
            "Resource": [
                "arn:aws:s3:::sagemaker-{}-{}".format(REGION, SAAS_ACCOUNT_ID),
                "arn:aws:s3:::sagemaker-{}-{}/*".format(REGION, SAAS_ACCOUNT_ID)
            ]
        }
    ]
}

For simplicity, we use the same value for the user name and the user profile:

SAAS_USER_PROFILE = SAAS_USER_NAME = "saas_user"

Now we create the two new managed policies. Next, we create the group, attach the policies to the group, create the user with programmatic access, and add the user to the group. See the following code:

create_policy_response = saas_iam_client.create_policy(PolicyName=SAAS_ASSUME_ROLE_POLICY_NAME,
                                                       PolicyDocument=json.dumps(SAAS_ASSUME_ROLE_POLICY))      
saas_assume_role_policy_arn = create_policy_response["Policy"]["Arn"]

create_policy_response = saas_iam_client.create_policy(PolicyName=SAAS_S3_POLICY_NAME,
                                                       PolicyDocument=json.dumps(SAAS_S3_POLICY))
saas_s3_policy_arn = create_policy_response["Policy"]["Arn"]

saas_iam_client.create_group(GroupName=SAAS_USER_GROUP_NAME)

saas_iam_client.attach_group_policy(GroupName=SAAS_USER_GROUP_NAME,PolicyArn=saas_assume_role_policy_arn)
saas_iam_client.attach_group_policy(GroupName=SAAS_USER_GROUP_NAME,PolicyArn=saas_s3_policy_arn)

saas_iam_client.create_user(UserName=SAAS_USER_NAME)
create_akey_response = saas_iam_client.create_access_key(UserName=SAAS_USER_NAME)

saas_iam_client.add_user_to_group(GroupName=SAAS_USER_GROUP_NAME, UserName=SAAS_USER_NAME)

Update the credentials file

Create the user profile for saas_user in the .aws/credentials file:

from pathlib import Path
import configparser

credentials_config = configparser.ConfigParser()
credentials_config.read(str(Path.home()) + "/.aws/credentials")

if not credentials_config.has_section(SAAS_USER_PROFILE):
    credentials_config.add_section(SAAS_USER_PROFILE)
    
credentials_config[SAAS_USER_PROFILE]["aws_access_key_id"] = create_akey_response["AccessKey"]["AccessKeyId"]
credentials_config[SAAS_USER_PROFILE]["aws_secret_access_key"] = create_akey_response["AccessKey"]["SecretAccessKey"]

with open(str(Path.home()) + "/.aws/credentials", "w") as configfile:
    credentials_config.write(configfile, space_around_delimiters=False)

This completes the configuration of IAM entities that are needed for the cross-account implementation of the Autopilot job.
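
As an optional sanity check (not part of the original notebook), you can confirm that the new profile resolves to the SaaS account before moving on. Keep in mind that freshly created IAM users and access keys can take a few seconds to propagate:

import boto3

check_session = boto3.session.Session(profile_name=SAAS_USER_PROFILE, region_name=REGION)
caller_identity = check_session.client("sts").get_caller_identity()

# Confirm that the credentials belong to the saas_user in the SaaS account
print("Authenticated as {} in account {}".format(caller_identity["Arn"],
                                                 caller_identity["Account"]))
assert caller_identity["Account"] == SAAS_ACCOUNT_ID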

Autopilot cross-account access

This is the core objective of the post, where we demonstrate the main differences with respect to the single-account scenario.

First, we prepare the dataset the Autopilot job uses for training the models.

Data

We reuse the same dataset adopted in the SageMaker example: Top Candidates Customer Churn Prediction with Amazon SageMaker Autopilot and Batch Transform (Python SDK).

For a full explanation of the data, refer to the original example.

We skip the data inspection and proceed directly to the focus of this post, which is the cross-account Autopilot job invocation.

Download the churn dataset with the following AWS Command Line Interface (AWS CLI) command:

!aws s3 cp $DATASET_URI ./ --profile saas_user

Split the dataset for the Autopilot job and the inference phase

After you load the dataset, split it into two parts:

  • 80% for the Autopilot job to train the top model
  • 20% for testing the model that we deploy

Autopilot applies a cross-validation resampling procedure to the input dataset for all candidate algorithms, to test their ability to predict data they haven’t been trained on.

Split the dataset with the following code:

import pandas as pd
import numpy as np

churn = pd.read_csv("./churn.txt")
train_data = churn.sample(frac=0.8,random_state=200)
test_data = churn.drop(train_data.index)
test_data_no_target = test_data.drop(columns=["Churn?"])

Let’s save the training data to a local file that we pass to the fit method of the AutoML estimator:

train_file = "train_data.csv"
train_data.to_csv(train_file, index=False, header=True)

Autopilot training job, deployment, and prediction overview

The training, deployment, and prediction process is illustrated in the following diagram.

The following are the steps for the cross-account invocation:

  1. Initiate a session as saas_user in the SaaS account and load the profile from the credentials.
  2. Assume the role in the customer account via the AWS STS.
  3. Set up and train the AutoML estimator in the customer account.
  4. Deploy the top candidate model proposed by AutoML in the customer account.
  5. Invoke the deployed model endpoint for the prediction on test data.

Initiate the user session in the SaaS account

The setup procedure of IAM entities, explained at the beginning of the post, created the saas_user, identified by the saas_user profile in the .aws/credentials file. We initiate a Boto3 session with this profile:

saas_user_session = boto3.session.Session(profile_name=SAAS_USER_PROFILE, 
                                          region_name=REGION)

The saas_user inherits from the AutopilotUsers group the permission to assume the customer_trusting_saas role in the customer account.

Assume the role in the customer account via AWS STS

AWS STS provides the credentials for a temporary session that is initiated in the customer account:

saas_sts_client = saas_user_session.client("sts", region_name=REGION)

The default session duration (the DurationSeconds parameter) is 1 hour. We set it to the maximum session duration configured for the role. If the session expires, you can recreate it by performing the following steps again. See the following code:

assumed_role_object = saas_sts_client.assume_role(RoleArn=CUSTOMER_TRUST_SAAS_ROLE_ARN,
                                                  RoleSessionName="sagemaker_autopilot",
                                                  ExternalId=EXTERNAL_ID,
                                                  DurationSeconds=MAX_SESSION_DURATION)

assumed_role_credentials = assumed_role_object["Credentials"]
			 
assumed_role_session = boto3.Session(aws_access_key_id=assumed_role_credentials["AccessKeyId"],
                                     aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
                                     aws_session_token=assumed_role_credentials["SessionToken"],
                                     region_name=REGION)
									 
sagemaker_session = sagemaker.Session(boto_session=assumed_role_session)

The sagemaker_session parameter is needed for using the high-level AutoML estimator.

Set up and train the AutoML estimator in the customer account

We use the AutoML estimator from the SageMaker Python SDK to invoke the Autopilot job to train a set of candidate models for the training data.

The setup of the AutoML object is similar to the single-account scenario, but with the following differences for the cross-account invocation:

  • The role for SageMaker access in the customer account is CUSTOMER_TRUST_SAAS_ROLE_ARN
  • The sagemaker_session is the temporary session created by AWS STS

See the following code:

from sagemaker import AutoML
from time import gmtime, strftime, sleep

timestamp_suffix = strftime("%d-%H-%M-%S", gmtime())
base_job_name = "automl-churn-sdk-" + timestamp_suffix

target_attribute_name = "Churn?"
target_attribute_values = np.unique(train_data[target_attribute_name])
target_attribute_true_value = target_attribute_values[1] # 'True.'

automl = AutoML(role=CUSTOMER_TRUST_SAAS_ROLE_ARN,
                target_attribute_name=target_attribute_name,
                base_job_name=base_job_name,
                sagemaker_session=sagemaker_session,
                max_candidates=10)

We now launch the Autopilot job by calling the fit method of the AutoML estimator in the same way as in the single-account example. We consider the following alternative options for providing the training dataset to the estimator.

First option: upload a local file and train by fit method

We simply pass the training dataset by referring to the local file that the fit method uploads into the default Amazon S3 bucket used by SageMaker in the customer account:

automl.fit(train_file, job_name=base_job_name, wait=False, logs=False)

Second option: cross-account copy

Most likely, the training dataset is located in an Amazon S3 bucket owned by the SaaS account. We copy the dataset from the SaaS account into the customer account and refer to the URI of the copy in the fit method.

  1. Upload the dataset into a local bucket of the SaaS account. For convenience, we use the SageMaker default bucket in the Region.
    DATA_PREFIX = "auto-ml-input-data"
    local_session = sagemaker.Session(boto_session=saas_user_session)
    local_session_bucket = local_session.default_bucket()
    train_data_s3_path = local_session.upload_data(path=train_file,key_prefix=DATA_PREFIX)

  2. To allow the cross-account copy, we set the following policy in the local bucket, only for the time needed for the copy operation:
    train_data_s3_arn = "arn:aws:s3:::{}/{}/{}".format(local_session_bucket,DATA_PREFIX,train_file)
    bucket_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": CUSTOMER_TRUST_SAAS_ROLE_ARN
                },
                "Action": "s3:GetObject",
                "Resource": train_data_s3_arn
            }
        ]
    }
    bucket_policy = json.dumps(bucket_policy)
    
    saas_s3_client = saas_user_session.client("s3")
    saas_s3_client.put_bucket_policy(Bucket=local_session_bucket,Policy=bucket_policy)

  3. Then the copy is performed by the assumed role in the customer account:
    assumed_role_s3_client = boto3.client("s3",
                                           aws_access_key_id=assumed_role_credentials["AccessKeyId"],
                                           aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
                                           aws_session_token=assumed_role_credentials["SessionToken"])
    target_train_key = "{}/{}".format(DATA_PREFIX, train_file)
    assumed_role_s3_client.copy_object(Bucket=sagemaker_session.default_bucket(), 
                                       CopySource=train_data_s3_path.split("://")[1], 
                                       Key=target_train_key)

  4. Delete the bucket policy so that access is granted only for the duration of the copy:
    saas_s3_client.delete_bucket_policy(Bucket=local_session_bucket)

  5. Finally, we launch the Autopilot job, passing the URI of the object copy:
target_train_uri = "s3://{}/{}".format(sagemaker_session.default_bucket(), 
                                       target_train_key)
automl.fit(target_train_uri, job_name=base_job_name, wait=False, logs=False)

Another option is to refer directly to the URI of the source dataset in the SaaS account bucket. In this case, the bucket policy must remain in place for the entire duration of the training and must also allow the s3:ListBucket action on the source bucket, with a statement like the following:

{
  "Effect": "Allow",
  "Principal": {
     "AWS": "arn:aws:iam::CUSTOMER_ACCOUNT_ID:role/customer_trusting_saas"
  },
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::sagemaker-REGION-SAAS_ACCOUNT_ID"
}

We can use the describe_auto_ml_job method to track the status of our SageMaker Autopilot job:

describe_response = automl.describe_auto_ml_job()
print (describe_response["AutoMLJobStatus"] + " - " + describe_response["AutoMLJobSecondaryStatus"])
job_run_status = describe_response["AutoMLJobStatus"]

while job_run_status not in ("Failed", "Completed", "Stopped"):
    describe_response = automl.describe_auto_ml_job()
    job_run_status = describe_response["AutoMLJobStatus"]
    
    print(describe_response["AutoMLJobStatus"] + " - " + describe_response["AutoMLJobSecondaryStatus"])
    sleep(30)

An Autopilot job can take a long time. If the session token expires during the fit, you can create a new session by following the steps described earlier and reattach to the running Autopilot job with the following code:

automl = AutoML.attach(auto_ml_job_name=base_job_name,sagemaker_session=sagemaker_session)

Deploy the top candidate model proposed by AutoML

The Autopilot job trains and returns a set of trained candidate models, identifying among them the top candidate that optimizes the evaluation metric related to the ML problem.

In this post, we only demonstrate the deployment of the top candidate proposed by AutoML, but you can choose a different candidate that better fits your business criteria.
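
For example, the following sketch shows how you might inspect the other candidates with the SDK’s list_candidates method and deploy one of them explicitly; which candidate to pick is up to your business criteria:

# List the candidates trained by the Autopilot job, best objective metric first
candidates = automl.list_candidates(sort_by="FinalObjectiveMetricValue",
                                    sort_order="Descending",
                                    max_results=10)
for candidate in candidates:
    print(candidate["CandidateName"],
          candidate["FinalAutoMLJobObjectiveMetric"]["Value"])

# To deploy a candidate other than the default best one, pass it explicitly:
# predictor = automl.deploy(initial_instance_count=1,
#                           instance_type="ml.m5.large",
#                           candidate=candidates[1],
#                           ...)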

First, we review the performance achieved by the top candidate in the cross-validation:

best_candidate = automl.describe_auto_ml_job()["BestCandidate"]
best_candidate_name = best_candidate["CandidateName"]
print("\n")
print("CandidateName: " + best_candidate_name)
print("FinalAutoMLJobObjectiveMetricName: " + best_candidate["FinalAutoMLJobObjectiveMetric"]["MetricName"])
print("FinalAutoMLJobObjectiveMetricValue: " + str(best_candidate["FinalAutoMLJobObjectiveMetric"]["Value"]))

If the performance is good enough for our business criteria, we deploy the top candidate in the customer account:

from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import CSVDeserializer

inference_response_keys = ["predicted_label", "probability"]

predictor = automl.deploy(initial_instance_count=1,
                          instance_type="ml.m5.large",
                          inference_response_keys=inference_response_keys,
                          predictor_cls=Predictor,
                          serializer=CSVSerializer(),
                          deserializer=CSVDeserializer())

print("Created endpoint: {}".format(predictor.endpoint_name))

The endpoint instance is deployed in, and billed to, the customer account.

Prediction on test data

Finally, we access the model endpoint for the prediction of the label output for the test data:

predictor.predict(test_data_no_target.to_csv(sep=",", 
                                             header=False, 
                                             index=False))
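
Because we held back 20% of the data together with its labels, a quick way to sanity-check the deployed model is to compare the predictions against the true Churn? values. The following is a sketch, assuming the CSV deserializer returns one [predicted_label, probability] row per record:

# Predict on the held-out rows and compare against the true labels
predictions = predictor.predict(test_data_no_target.to_csv(sep=",",
                                                           header=False,
                                                           index=False))
predicted_labels = pd.Series([row[0] for row in predictions])
true_labels = test_data["Churn?"].reset_index(drop=True)

print("Accuracy on the held-out data: {:.3f}".format((predicted_labels == true_labels).mean()))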

If the session token expires after the deployment of the endpoint, you can create a new session by following the steps described earlier and connect to the already deployed endpoint with the following code:

predictor = Predictor(predictor.endpoint_name, 
                      sagemaker_session = sagemaker_session,
                      serializer=CSVSerializer(), 
                      deserializer=CSVDeserializer())

Clean up

To avoid incurring unnecessary charges, delete the endpoints and resources that were created when deploying the model after they are no longer needed.

Delete the model endpoint

The model endpoint runs on a container that is always active. We delete it first to stop incurring charges:

predictor.delete_endpoint()

Delete the artifacts generated by the Autopilot job

Delete all the artifacts created by the Autopilot job, such as the generated candidate models, scripts, and notebook.

We use the high-level resource for Amazon S3 to simplify the operation:

assumed_role_s3_resource = boto3.resource("s3",
                                          aws_access_key_id=assumed_role_credentials["AccessKeyId"],
                                          aws_secret_access_key=assumed_role_credentials["SecretAccessKey"],
                                          aws_session_token=assumed_role_credentials["SessionToken"])

s3_bucket = assumed_role_s3_resource.Bucket(automl.sagemaker_session.default_bucket())
s3_bucket.objects.filter(Prefix=base_job_name).delete()

Delete the training dataset copied into the customer account

Delete the training dataset in the customer account with the following code:

from urllib.parse import urlparse

train_data_uri = automl.describe_auto_ml_job()["InputDataConfig"][0]["DataSource"]["S3DataSource"]["S3Uri"]

o = urlparse(train_data_uri, allow_fragments=False)
assumed_role_s3_resource.Object(o.netloc, o.path.lstrip("/")).delete()

Clean up IAM resources

We delete the IAM resources in the reverse order of their creation.

  1. Remove the user from the group, remove the profile from the credentials file, and delete the user:
    saas_iam_client.remove_user_from_group(GroupName = SAAS_USER_GROUP_NAME,
                                           UserName = SAAS_USER_NAME)
                                          
    credentials_config.remove_section(SAAS_USER_PROFILE)
    with open(str(Path.home()) + "/.aws/credentials", "w") as configfile:
        credentials_config.write(configfile, space_around_delimiters=False)
        
    user_access_keys = saas_iam_client.list_access_keys(UserName=SAAS_USER_NAME)
    for AccessKeyId in [element["AccessKeyId"] for element in user_access_keys["AccessKeyMetadata"]]:
        saas_iam_client.delete_access_key(UserName=SAAS_USER_NAME, AccessKeyId=AccessKeyId)
    	
    saas_iam_client.delete_user(UserName=SAAS_USER_NAME)

  2. Detach the policies from the group in the SaaS account, and delete the group and policies:
    attached_group_policies = saas_iam_client.list_attached_group_policies(GroupName=SAAS_USER_GROUP_NAME)
    for PolicyArn in [element["PolicyArn"] for element in attached_group_policies["AttachedPolicies"]]:
        saas_iam_client.detach_group_policy(GroupName=SAAS_USER_GROUP_NAME, PolicyArn=PolicyArn)
        
    saas_iam_client.delete_group(GroupName=SAAS_USER_GROUP_NAME)
    saas_iam_client.delete_policy(PolicyArn=saas_assume_role_policy_arn)
    saas_iam_client.delete_policy(PolicyArn=saas_s3_policy_arn)

  3. Detach the AWS policies from the role in the customer account, then delete the role and the policy:
    attached_role_policies = customer_iam_client.list_attached_role_policies(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME)
    for PolicyArn in [element["PolicyArn"] for element in attached_role_policies["AttachedPolicies"]]:
        customer_iam_client.detach_role_policy(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME, PolicyArn=PolicyArn)
    
    customer_iam_client.delete_role(RoleName=CUSTOMER_TRUST_SAAS_ROLE_NAME)
    customer_iam_client.delete_policy(PolicyArn=customer_s3_policy_arn)

Conclusion

This post described a possible implementation, using the SageMaker Python SDK, of an Autopilot training job, model deployment, and prediction in a cross-account configuration. The originating account owns the data for the training and it delegates the activities to the account hosting the SageMaker resources.

You can use the API calls shown in this post to incorporate AutoML capabilities into a SaaS application, by delegating the management and billing of SageMaker resources to the customer account.

SageMaker decouples the environment where the data scientist drives the analysis from the containers that perform each phase of the ML process.

This capability simplifies other cross-account scenarios. For example, a SaaS provider that owns sensitive data could, instead of sharing the data with the customer, expose certified training algorithms and generate models on behalf of the customer. The customer receives the trained model at the end of the Autopilot job.

For more examples of how to integrate Autopilot into SaaS products, see the related posts on the AWS Machine Learning Blog.


About the Authors

Francesco Polimeni is a Sr Solutions Architect at AWS with focus on Machine Learning. He has over 20 years of experience in professional services and pre-sales organizations for IT management software solutions.

Mehmet Bakkaloglu is a Sr Solutions Architect at AWS. He has vast experience in data analytics and cloud architecture, having provided technical leadership for transformation programs and pre-sales activities in a variety of sectors.

Read More

Train and deploy a FairMOT model with Amazon SageMaker

Multi-object tracking (MOT) in video analysis is increasingly in demand in many industries, such as live sports, manufacturing, surveillance, and traffic monitoring. For example, in live sports, MOT can track soccer players in real time to analyze physical performance such as real-time speed and moving distance.

Previously, most methods separated MOT into two tasks: object detection and association. The object detection task detects objects first. The association task extracts re-identification (re-ID) features from the image region of each detected object, and uses those features to link each detection to an existing track or to create a new track. Real-time inference is challenging in scenes with many objects, because the two tasks extract features separately and the association task must run re-ID feature extraction for every detected object. Some proposed one-shot MOT methods add a re-ID branch to the object detection network to perform detection and association simultaneously. This reduces the inference time, but sacrifices tracking performance.

FairMOT is a one-shot tracking method with two homogeneous branches for detecting objects and extracting re-ID features. It achieves higher performance than the two-step methods and reaches a speed of about 30 FPS on the MOT challenge datasets. This improvement makes MOT practical in many industrial scenarios.

Amazon SageMaker is a fully managed service that provides every developer and data scientist with the ability to prepare, build, train, and deploy machine learning (ML) models quickly. SageMaker provides several built-in algorithms and container images that you can use to accelerate training and deployment of ML models. Additionally, custom algorithms such as FairMOT can also be supported via custom-built Docker container images.

This post demonstrates how to train and deploy a FairMOT model with SageMaker, optimize it using hyperparameter tuning, and make predictions in real time as well as batch mode.

Overview of the solution

Our solution consists of the following high-level steps:

  1. Set up your resources.
  2. Use SageMaker to train a FairMOT model and tune hyperparameters on the MOT challenge dataset.
  3. Run real-time inference.
  4. Run batch inference.

Prerequisites

Before getting started, complete the following prerequisites:

  1. Create an AWS account or use an existing AWS account.
  2. Make sure that you have a minimum of one ml.p3.16xlarge instance for the training job.
  3. Make sure that you have a minimum of one ml.p3.2xlarge instance for the inference endpoint.
  4. Make sure that you have a minimum of one ml.p3.2xlarge instance for processing jobs.

If this is your first time training a model, deploying a model, or running a processing job on the previously mentioned instance sizes, you must request a service quota increase for the corresponding SageMaker resources (training jobs, endpoints, and processing jobs).
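
If you prefer to request the increase programmatically rather than through the console, the Service Quotas API can be used from the same account. The quota code below is a placeholder that you need to look up for your Region and instance type:

import boto3

quotas_client = boto3.client("service-quotas")

# Look up the quota code for the instance type and job type you need, for example:
# quotas_client.list_service_quotas(ServiceCode="sagemaker")

quotas_client.request_service_quota_increase(
    ServiceCode="sagemaker",
    QuotaCode="L-XXXXXXXX",  # placeholder: quota code for ml.p3.16xlarge training job usage
    DesiredValue=1,
)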

Set up your resources

After you complete all the prerequisites, you’re ready to deploy the necessary resources.

  1. Create a SageMaker notebook instance. For this task, we recommend the ml.t3.medium instance type. The default volume size is 5 GB; you must increase the volume size to 100 GB. For your AWS Identity and Access Management (IAM) role, choose an existing role or create a new role, and attach the AmazonSageMakerFullAccess and AmazonElasticContainerRegistryPublicFullAccess policies to the role.
  2. Clone the GitHub repo to the notebook you created.
  3. Create a new Amazon Simple Storage Service (Amazon S3) bucket or use an existing bucket.

Train a FairMOT model

To train your FairMOT model, we use the fairmot-training.ipynb notebook. The following diagram outlines the logical flow implemented in this code.

In the Initialize SageMaker section, we define the S3 bucket location and dataset name, and choose either to train on the entire dataset (by setting the half_val parameter to 0) or split it into training and validation (half_val is set to 1). We use the latter mode for hyperparameter tuning.

Next, the prepare-s3-bucket.sh script downloads the dataset from MOT challenge, converts it, and uploads it to the S3 bucket. We tested training the model using the MOT17 and MOT20 datasets, but you can try training with other MOT datasets as well.

In the Build and push SageMaker training image section, we create a custom container image with the FairMOT training algorithm. You can find the definition of the Docker image in the container-dp folder. Because this container image consumes about 13.5 GB of disk space, the prepare-docker.sh script changes the default directory that Docker uses for temporary images in order to avoid a “no space” error. The build_and_push.sh script then builds the container image and pushes it to Amazon Elastic Container Registry (Amazon ECR). You should be able to validate the result on the Amazon ECR console.

Finally, the Define a training job section initiates the model training. You can observe the model training on the SageMaker console on the Training Jobs page. The model shows an In progress status first and changes to Completed in about 3 hours (if you’re running the notebook as is). You can access corresponding training metrics on the training job details page, as shown in the following screenshot.

Training metrics

The FairMOT model is based on a backbone network with object detection and re-ID branches on top. The object detection branch has three parallel heads to estimate heatmaps, object center offsets, and bounding box sizes. During the training phase, each head has a corresponding loss value: hm_loss for heatmap, offset_loss for center offsets, and wh_loss for bounding box sizes. The re-ID branch has an id_loss for the re-ID feature learning. Based on these four loss values, a total loss named loss is calculated for the entire network. We monitor all loss values on both the training and validation datasets. During hyperparameter tuning, we rely on ObjectiveMetric to select the best-performing model.

When the training job is complete, note the URI of your model in the Output section of the job details page.

Finally, the last section of the notebook demonstrates SageMaker hyperparameter optimization (HPO). The right combination of hyperparameters can improve performance of ML models; however, finding one manually is time-consuming. SageMaker hyperparameter tuning helps automate the process. We simply define the range for each tuning hyperparameter and the objective metric, while HPO does the rest.

To accelerate the process, SageMaker HPO can run multiple training jobs in parallel. In the end, the best training job provides the most optimal hyperparameters for the model, which you can then use for training on the entire dataset.
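
The notebook handles this wiring for you; the following sketch only illustrates the standard SageMaker pattern it builds on. The image URI, role, S3 paths, hyperparameter name and range, and the metric regex are assumptions for illustration, not the exact values used in the FairMOT notebook:

from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter

# Custom FairMOT training container pushed to Amazon ECR in the previous section (assumed URI)
estimator = Estimator(image_uri=training_image_uri,
                      role=role,
                      instance_count=1,
                      instance_type="ml.p3.16xlarge",
                      output_path=s3_output_location)

tuner = HyperparameterTuner(estimator,
                            objective_metric_name="ObjectiveMetric",
                            objective_type="Minimize",
                            hyperparameter_ranges={"lr": ContinuousParameter(1e-5, 1e-3)},
                            # Illustrative regex: capture the validation loss printed in the training logs
                            metric_definitions=[{"Name": "ObjectiveMetric",
                                                 "Regex": "val_loss=([0-9\\.]+)"}],
                            max_jobs=4,
                            max_parallel_jobs=2)

# Channel name and S3 location are assumptions
tuner.fit({"train": s3_train_data})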

Perform real-time inference

In this section, we use the fairmot-inference.ipynb notebook. Similar to the training notebook, we begin by initializing SageMaker parameters and building a custom container image. The inference container is then deployed with the model we built earlier. The model is referenced via the s3_model_uri variable—you should double-check to make sure it links to the correct URI (adjust manually if necessary).

The following diagram illustrates the inference flow.

After our custom container is deployed on a SageMaker inference endpoint, we’re ready to test. First, we download a test video from MOT16-03. Next, in our inference loop, we use OpenCV to split the video into individual frames, convert them to base64, and make predictions by calling the deployed inference endpoint.
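
As a minimal sketch of the frame-splitting step (the video path and output directory are illustrative), the OpenCV part looks roughly like this:

import os
import cv2

cap = cv2.VideoCapture("MOT16-03.mp4")          # assumption: local path of the downloaded test video
frame_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

os.makedirs("frames", exist_ok=True)
frame_id = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frame_path = os.path.join("frames", "{:06d}.jpg".format(frame_id))
    cv2.imwrite(frame_path, frame)              # each frame is then encoded and sent to the endpoint
    frame_id += 1
cap.release()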

The following code demonstrates the per-frame request logic, where client is a SageMaker Runtime boto3 client:

# Inside the per-frame inference loop
frame_path = ...  # the path of the current frame
with open(frame_path, "rb") as image_file:
    img_data = base64.b64encode(image_file.read())

data = {"frame_id": frame_id}
data["frame_data"] = img_data.decode("utf-8")
if frame_id == 0:
    data["frame_w"] = frame_w
    data["frame_h"] = frame_h
    data["batch_size"] = 1
body = json.dumps(data).encode("utf-8")

os.remove(frame_path)
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/json",
    Accept="application/json",
    Body=body,
)
body = response["Body"].read()

The resulting video is stored in {root_directory}/datasets/test.mp4. The following is a sample frame. The same person in consecutive frames is wrapped by a bounding box with a unique ID.

Perform batch inference

Now that we implemented and validated the FairMOT model using a frame-by-frame inference endpoint, we build a container that can process the entire video as a whole. This allows us to use FairMOT as a step in more complex video processing pipelines. We use a SageMaker processing job to achieve this goal, as demonstrated in the fairmot-batch-inference.ipynb notebook.

Once again, we begin with SageMaker initialization and building a custom container image. This time we encapsulate the frame-by-frame inference loop into the container itself (the predict.py script). Our test data is MOT16-03, pre-staged in the S3 bucket. As in the previous steps, make sure that the s3_model_uri variable refers to the correct model URI.

SageMaker processing jobs rely on Amazon S3 for input and output data placement. The following diagram demonstrates our workflow.

In the Run batch inference section, we create an instance of ScriptProcessor and define the path for input and output data, as well as the target model. We then run the processor, and the resulting video is placed into the location defined in the s3_output variable. It looks the same as the resulting video generated in the previous section.
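
A condensed sketch of that section follows. The processing image URI, role, test video location, and container paths are placeholders; s3_model_uri and s3_output are the variables described earlier:

from sagemaker.processing import ScriptProcessor, ProcessingInput, ProcessingOutput

# Custom batch inference container in Amazon ECR (assumed URI)
processor = ScriptProcessor(image_uri=batch_inference_image_uri,
                            command=["python3"],
                            role=role,
                            instance_count=1,
                            instance_type="ml.p3.2xlarge")

processor.run(code="predict.py",
              inputs=[ProcessingInput(source=s3_test_video,     # placeholder: MOT16-03 input location in S3
                                      destination="/opt/ml/processing/input"),
                      ProcessingInput(source=s3_model_uri,
                                      destination="/opt/ml/processing/model")],
              outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                                        destination=s3_output)])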

Clean up

To avoid unnecessary costs, delete the resources you created as part of this solution, including the inference endpoint.

Conclusion

This post demonstrated how to use SageMaker to train and deploy an object tracking model based on FairMOT. You can use a similar approach to implement other custom algorithms. Although we used public datasets in this example, you can certainly accomplish the same with your own dataset. Amazon SageMaker Ground Truth can help you with the labeling, and SageMaker custom containers simplify implementation.


About the Author

Gordon Wang is a Data Scientist on the Professional Services team at Amazon Web Services. He supports customers in many industries, including media, manufacturing, energy, and healthcare. He is passionate about computer vision, deep learning, and MLOps. In his spare time, he loves running and hiking.

Read More

Distributed Mask RCNN training with Amazon SageMakerCV

Computer vision algorithms are at the core of many deep learning applications. Self-driving cars, security systems, healthcare, logistics, and image processing all incorporate various aspects of computer vision. But despite their ubiquity, training computer vision algorithms, like Mask or Cascade RCNN, is hard. These models employ complex architectures, train on large datasets, and require computer clusters, often requiring dozens of GPUs.

Last year at AWS re:Invent we announced record-breaking Mask RCNN training times of 6:45 minutes on PyTorch and 6:12 minutes on TensorFlow, which we achieved through a series of algorithmic, system, and infrastructure improvements. Our model made heavy use of half precision computation, state-of-the-art optimizers and loss functions, the AWS Elastic Fabric Adapter, and a new parameter server distribution approach.

Now, we’re making these optimizations available in Amazon SageMaker in our new SageMakerCV package. SageMakerCV takes all the high performance tools we developed last year and combines them with the convenience features of SageMaker, such as interactive development in SageMaker Studio, Spot training, and streaming data directly from Amazon Simple Storage Service (Amazon S3).

The challenge of training object detection and instance segmentation

Object detection models, like Mask RCNN, have complex architectures. They typically involve a pretrained backbone, such as a ResNet model, a region proposal network, classifiers, and regression heads. Essentially, these models are a collection of neural networks working on slightly different, but related, tasks. On top of that, developers often need to modify these models for their own use case. For example, along with the classifier, we might want a model that can identify human poses, as part of an autonomous vehicle project, in order to predict movement and behavior. This involves adding an additional network to the model, alongside the classifier and regression heads.

Mask RCNN architecture

The following diagram illustrates the Mask RCNN architecture.

For more information on Mask RCNN, see the following blog posts:

Modifying models like this is a time-consuming process. The updated model might train slower, or not converge as well as the previous model. SageMakerCV solves these issues by simplifying both the model modification and optimization process. The modification process is streamlined by modularizing the models, and using the interactive development environment in Studio. At the same time, we can apply all the optimizations we developed for our record training time to the new model.

GPU and algorithmic improvements

Several pieces of Mask RCNN are difficult to optimize for GPUs. For example, as part of the region proposal step, we want to reduce the number of regions using non-max suppression (NMS), the process of removing overlapping boxes. Many implementations of Mask RCNN run NMS on the CPU, which means moving a lot of data off the GPU in the middle of training. Other parts of the model, such as anchor generation and assignment, and ROI align, encounter similar problems.

As part of our Mask RCNN optimizations in 2020, we worked with NVIDIA to develop efficient CUDA implementations of NMS, ROI align, and anchor tools, all of which are built into SageMakerCV. This means data stays on the GPU and models train faster. Options for mixed and half precision training means larger batch sizes, shorter step times, and higher GPU utilization.

SageMakerCV also includes the same improved optimizers and loss functions we used in our record Mask RCNN training. NovoGrad means you can now train a model on batch sizes as large as 512. GIoU loss boosts both box and mask performance by around 5%. Combined, these improvements make it possible to train Mask RCNN to state-of-the-art levels of performance in under 7 minutes.

The following table summarizes the benchmark training times for Mask RCNN trained to MLPerf convergence levels using SageMakerCV on SageMaker ml.p4d.24xlarge instances. Total time refers to the entire elapsed time, including SageMaker instance setup, Docker and data download, training, and evaluation.

Framework  | Nodes | Total Time | Training Time | Box mAP | Seg mAP
PyTorch    | 1     | 1:33:04    | 1:25:59       | 37.8    | 34.1
PyTorch    | 2     | 0:57:05    | 0:50:21       | 38.0    | 34.4
PyTorch    | 4     | 0:36:27    | 0:29:40       | 37.9    | 34.3
TensorFlow | 1     | 2:23:52    | 2:18:24       | 37.7    | 34.3
TensorFlow | 2     | 1:09:02    | 1:03:29       | 37.8    | 34.5
TensorFlow | 4     | 0:48:55    | 0:42:33       | 38.0    | 34.8

Interactive development

Our goal with SageMakerCV was not only to provide fast training models to our users, but also to make developing new models easier. To that end, we provide a series of template object detection models in a highly modularized format, with a simple registry structure for adding new pieces. We also provide tools to modify and test models directly in Studio, so you can quickly go from prototyping a model to launching a distributed training cluster.

For example, say you want to add a custom keypoint head to Mask RCNN in TensorFlow. You first build your new head using the TensorFlow 2 Keras API, and add the SageMakerCV registry decorator at the top. The registry is a set of dictionaries organized into sections of the model. For example, the HEADS section triggers when the build_detector function is called, and the KeypointHead value from the configuration file tells the build to include the new ROI head. See the following code:

import tensorflow as tf
from sagemakercv.builder import HEADS

@HEADS.register("KeypointHead")
class KeypointHead(tf.keras.Model):
    def __init__(self, cfg):
        ...

Then you can call your new head by adding it to a YAML configuration file:

MODEL:
    RCNN:
        ROI_HEAD: "KeypointHead"

You provide this new configuration when building a model:

from configs.default_config import _C as cfg
from sagemakercv.detection import build_detector

cfg.merge_from_file('keypoint_config.yaml')

model = build_detector(cfg)

We know that building a new model is never as straightforward as we’re describing here, so we provide example notebooks of how to prototype models in Studio. This allows developers to quickly iterate on and debug their ideas.

Distributed training

SageMakerCV uses the distributed training capabilities of SageMaker right out of the box. You can go from prototyping a model on a single GPU to launching training on dozens of GPUs with just a few lines of code. SageMakerCV automatically supports SageMaker Distributed Data Parallel, which uses EFA to provide unmatched multi-node scaling efficiency. We also provide support for DDP in PyTorch, and Horovod in TensorFlow. By default, SageMakerCV automatically selects the optimal distributed training strategy for the cluster configuration you select. All you have to do is set your instance type and number of nodes, and SageMakerCV takes care of the rest.

Distributed training also typically involves huge amounts of data, often on the order of many terabytes. Getting all that data onto the training instances can take time, assuming it even fits. To address this problem, SageMakerCV provides built-in support for streaming data directly from Amazon S3 with our recently released S3 plugin, reducing startup times and training costs.

Get started

We provide detailed tutorial notebooks that walk you through the entire process, from getting the COCO dataset, to building a model in Studio, to launching a distributed cluster. What follows is a brief overview.

Follow the instructions in Onboard to Amazon SageMaker Studio Using Quick Start. On your Studio instance, open a system terminal and clone the SageMakerCV repo.

git clone https://github.com/aws-samples/amazon-sagemaker-cv

Create a new Studio notebook with the PyTorch DLC, and install SageMakerCV in editable mode:

cd amazon-sagemaker-cv/pytorch
pip install -e .

In your notebook, create a new training configuration:

from configs import cfg

cfg.SOLVER.OPTIMIZER="NovoGrad" 
cfg.SOLVER.BASE_LR=0.042
cfg.SOLVER.LR_SCHEDULE="COSINE"
cfg.SOLVER.IMS_PER_BATCH=384 
cfg.SOLVER.WEIGHT_DECAY=0.001 
cfg.SOLVER.MAX_ITER=5000
cfg.OPT_LEVEL="O1"

Set your data sources by using either channels, or an S3 location to stream data during training:

import os

S3_DATA_LOCATION = 's3://my-bucket/coco/'
CHANNELS_DIR = '/opt/ml/input/data/'  # on the training node, set by SageMaker

channels = {'validation': os.path.join(S3_DATA_LOCATION, 'val2017'),
            'weights': S3_WEIGHTS_LOCATION,
            'annotations': os.path.join(S3_DATA_LOCATION, 'annotations')}
            
cfg.INPUT.VAL_INPUT_DIR = os.path.join(CHANNELS_DIR, 'validation') 
cfg.INPUT.TRAIN_ANNO_DIR = os.path.join(CHANNELS_DIR, 'annotations', 'instances_train2017.json')
cfg.INPUT.VAL_ANNO_DIR = os.path.join(CHANNELS_DIR, 'annotations', 'instances_val2017.json')
cfg.MODEL.WEIGHT=os.path.join(CHANNELS_DIR, 'weights', R50_WEIGHTS) 
cfg.INPUT.TRAIN_INPUT_DIR = os.path.join(S3_DATA_LOCATION, "train2017") 
cfg.OUTPUT_DIR = '/opt/ml/checkpoints' # SageMaker output dir

# Save the new configuration file
from contextlib import redirect_stdout

dist_config_file = "configs/dist-training-config.yaml"
with open(dist_config_file, 'w') as outfile:
    with redirect_stdout(outfile):
        print(cfg.dump())
    
hyperparameters = {"config": dist_config_file}

Finally, we can launch a distributed training job. For example, we can request four ml.p4d.24xlarge instances and train a model to state-of-the-art convergence in about 45 minutes:

from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
                entry_point='train.py', 
                source_dir='.', 
                py_version='py3',
                framework_version='1.8.1',
                role=get_execution_role(),
                instance_count=4,
                instance_type='ml.p4d.24xlarge',
                distribution={ "smdistributed": { "dataparallel": { "enabled": True } } } ,
                output_path='s3://my-bucket/output/',
                checkpoint_s3_uri='s3://my-bucket/checkpoints/',
                model_dir='s3://my-bucket/model/',
                hyperparameters=hyperparameters,
                volume_size=500,
)

estimator.fit(channels)

Clean up

After training your model, be sure to check that all your training instances are complete or stopped by using the SageMaker console and choosing Training Jobs in the navigation pane.

Also, make sure to stop all Studio instances by choosing the Studio session monitor (square inside a circle icon) at the left of the page in Studio. Choose the power icon next to any running instances to shut them down. Your files are saved on your Studio Amazon EBS volume.

Conclusion

SageMakerCV started life as our project to break training records for computer vision models. In the process, we developed new tools and techniques to boost both training speed and accuracy. Now, we’ve combined those advances with SageMaker’s unified machine learning development experience. By combining the latest algorithmic advances, GPU hardware, EFA, and the ability to stream huge datasets from Amazon S3, SageMakerCV is the ideal place to develop the most advanced computer vision models. We look forward to seeing what new models and applications the machine learning community develops, and welcome any and all contributions. To get started, see our comprehensive tutorial notebooks in PyTorch and TensorFlow on GitHub.


About the Authors

Ben Snyder is an applied scientist with AWS Deep Learning. His research interests include computer vision models, reinforcement learning, and distributed optimization. Outside of work, he enjoys cycling and backcountry camping.

Khaled ElGalaind is the engineering manager for AWS Deep Engine Benchmarking, focusing on performance improvements for AWS Machine Learning customers. Khaled is passionate about democratizing deep learning. Outside of work, he enjoys volunteering with the Boy Scouts, BBQ, and hiking in Yosemite.

Sami Kama is a software engineer in AWS Deep Learning with expertise in performance optimization, HPC/HTC, Deep learning frameworks and distributed computing. Sami aims to reduce the environmental impact of Deep Learning by increasing the computation efficiency. He enjoys spending time with his kids, catching up with science and technology and occasional video games.

Read More

Machine learning inference at the edge using Amazon Lookout for Vision and AWS IoT Greengrass

Discrete and continuous manufacturing lines produce a high volume of products with cycle times ranging from milliseconds to a few seconds. To identify defects at the same throughput as production, camera image streams must be processed at low latency. Additionally, factories may have low network bandwidth or intermittent cloud connectivity. In such scenarios, you may need to run the defect detection system on your on-premises compute infrastructure and upload the processed results to the AWS Cloud for further development and monitoring. This hybrid approach, combining local edge hardware with the cloud, addresses the low-latency requirements and helps reduce storage and network transfer costs to the cloud. It can also help fulfill data privacy and other regulatory requirements.

In this post, we show you how to detect defective parts using Amazon Lookout for Vision machine learning (ML) models running on your on-premises edge appliance.

Lookout for Vision is an ML service that helps spot product defects using computer vision to automate the quality inspection process in your manufacturing lines, with no ML expertise required. The fully managed service enables you to build, train, optimize, and deploy models in the AWS Cloud or at the edge. You can use the cloud APIs, or deploy Amazon Lookout for Vision models on any NVIDIA Jetson edge appliance or x86 compute platform running Linux with an NVIDIA GPU accelerator. You can use AWS IoT Greengrass to deploy and manage your edge-compatible customized models across your fleet of devices.

Solution overview

In this post, we use a printed circuit board dataset composed of normal and defective images such as scratches, solder blobs, and damaged components on the board. We train a Lookout for Vision model in the cloud to identify defective and normal printed circuit boards. We compile the model to a target ARM architecture, package the trained Lookout for Vision model as an AWS IoT Greengrass component, and deploy the model to an NVIDIA Jetson edge device using the AWS IoT Greengrass console. Finally, we demonstrate a Python-based sample application running on the NVIDIA Jetson edge device that sources the printed circuit board image from the edge device file system, runs the inference on the Lookout for Vision model using the gRPC interface, and sends the inference data to an MQTT topic in the AWS Cloud.

The following diagram illustrates the solution architecture.

The solution has the following workflow:

  1. Upload a training dataset to Amazon Simple Storage Service (Amazon S3).
  2. Train a Lookout for Vision model in the cloud.
  3. Compile the model to the target architecture (ARM) and deploy the model to the NVIDIA Jetson edge device using the AWS IoT Greengrass console.
  4. Source images from local disk.
  5. Run inferences on the deployed model via the gRPC interface.
  6. Post the inference results to an MQTT client running on the edge device.
  7. Receive the MQTT message on a topic in AWS IoT Core in the AWS Cloud for further monitoring and visualization.

Steps 4, 5, and 6 are coordinated by the sample Python application.

Prerequisites

Before you get started, complete the following prerequisites:

  1. Create an AWS account.
  2. On your NVIDIA Jetson edge device, complete the following:
    1. Set up your edge device (we set the IoT thing name to l4vJetsonXavierNx when installing AWS IoT Greengrass V2).
    2. Clone the sample project containing the Python-based sample application (warmup-model.py to load the model, and sample-client-file-mqtt.py to run inferences). Load the Python modules. See the following code:
git clone https://github.com/aws-samples/ds-peoplecounter-l4v-workshop.git
cd ds-peoplecounter-l4v-workshop 
pip3 install -r requirements.txt
cd lab2/inference_client  
# Replace the ENDPOINT variable in sample-client-file-mqtt.py with the
# value shown on the AWS IoT console under Things -> l4vJetsonXavierNx -> Interact,
# under HTTPS. It is of the form <prefix>-ats.iot.<region>.amazonaws.com

Dataset and model training

We use the printed circuit board dataset to demonstrate the solution. The dataset contains normal and anomalous images. Here are a few sample images from the dataset.

The following image shows a normal printed circuit board.

The following image shows a printed circuit board with scratches.

The following image shows a printed circuit board with a soldering defect.

To train a Lookout for Vision model, we follow the steps outlined in Amazon Lookout for Vision – New ML Service Simplifies Defect Detection for Manufacturing. After you complete these steps, you can navigate to the project and the Models page to check the performance of the trained model. You can start the process of exporting the model to the target edge device any time after the model is trained.
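
If you want to sanity-check the trained model with the cloud API before packaging it for the edge, the following is a minimal boto3 sketch. The project name, model version, and image path are placeholders, and the hosted model is billed until you stop it:

import boto3

lookout_client = boto3.client("lookoutvision")

# Start the hosted model (asynchronous; wait until describe_model reports HOSTED)
lookout_client.start_model(ProjectName="circuit-board",   # placeholder project name
                           ModelVersion="1",
                           MinInferenceUnits=1)

with open("test-board.jpg", "rb") as image:                # placeholder test image
    response = lookout_client.detect_anomalies(ProjectName="circuit-board",
                                               ModelVersion="1",
                                               ContentType="image/jpeg",
                                               Body=image.read())

result = response["DetectAnomalyResult"]
print("Anomalous:", result["IsAnomalous"], "Confidence:", result["Confidence"])

# Stop the hosted model when you're done to avoid further charges
lookout_client.stop_model(ProjectName="circuit-board", ModelVersion="1")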

Compile and package the model as an AWS IoT Greengrass component

In this section, we walk through the steps to compile the printed circuit board model to our target edge device and package the model as an AWS IoT Greengrass component.

  1. On the Lookout for Vision console, choose your project.
  2. In the navigation pane, choose Edge model packages.
  3. Choose Create model packaging job.

  1. For Job name, enter a name.
  2. For Job description, enter an optional description.
  3. Choose Browse models.

  1. Select the model version (the printed circuit board model built in the previous section).
  2. Choose Choose.

  1. Select Target device and enter the compiler options.

Our target device runs JetPack 4.5.1. Refer to the Lookout for Vision documentation for additional details on supported platforms. You can find the supported compiler options, such as trt-ver and cuda-ver, in the NVIDIA JetPack 4.5.1 archive.

  1. Enter the details for Component name, Component description (optional), Component version, and Component location.

Amazon Lookout for Vision stores the component recipes and artifacts in this Amazon S3 location.

  1. Choose Create model packaging job.

You can see your job name and status showing as In progress. The model packaging job may take a few minutes to complete.

When the model packaging job is complete, the status shows as Success.

  1. Choose your job name (in our case it’s ComponentCircuitBoard) to see the job details.

The Greengrass component and model artifacts have been created in your AWS account.

  1. Choose Continue deployment to Greengrass to deploy the component to the target edge device.

Deploy the model

In this section, we walk through the steps to deploy the printed circuit board model to the edge device using the AWS IoT Greengrass console.

  1. Choose Deploy to initiate the deployment steps.

  1. Select Core device (because the deployment is to a single device) and enter a name for Target name.

The target name is the same name you used to name the core device during the AWS IoT Greengrass V2 installation process.

  1. Choose your component. In our case, the component name is ComponentCircuitBoard, which contains the circuit board model.
  2. Choose Next.

  1. Configure the component (optional).
  2. Choose Next.

  1. Expand Deployment policies.

  1. For Component update policy, select Notify components.

This allows an already deployed component (a prior version of the component) to defer the update until it is ready.

  1. For Failure handling policy, select Don’t roll back.

In case of a failure, this option allows us to investigate the errors in deployment.

  1. Choose Next.

  1. Review the list of components that will be deployed on the target (edge) device.
  2. Choose Next.

You should see the message Deployment successfully created.

  1. To validate the model deployment was successful, run the following command on your edge device:
sudo /greengrass/v2/bin/greengrass-cli component list

You should see output similar to the following, with the ComponentCircuitBoard component running:

 Components currently running in Greengrass:
 
 Component Name: aws.iot.lookoutvision.EdgeAgent
    Version: 0.1.34
    State: RUNNING
    Configuration: {"Socket":"unix:///tmp/aws.iot.lookoutvision.EdgeAgent.sock"}
 Component Name: ComponentCircuitBoard
    Version: 1.0.0
    State: RUNNING
    Configuration: {"Autostart":false}

Run inferences on the model

We’re now ready to run inferences on the model. On your edge device, run the following command to load the model:

# run command to load the model
# This will load the model into running state 
python3 warmup-model.py

To generate inferences, run the following command with the source file name:

python3 sample-client-file-mqtt.py /path/to/images

The following screenshot shows that the model correctly predicts the image as anomalous (bent pin) with a confidence score of 0.999766.

The following screenshot shows that the model correctly predicts the image as anomalous (solder blob) with a confidence score of 0.7701461.

The following screenshot shows that the model correctly predicts the image as normal with a confidence score of 0.9568462.

The following screenshot shows the inference data posted to an MQTT topic in AWS IoT Core.

Customer stories

With AWS IoT Greengrass and Amazon Lookout for Vision, you can now automate visual inspection with CV for processes like quality control and defect assessment – all on the edge and in real time. You can proactively identify problems such as parts damage (like dents, scratches, or poor welding), missing product components, or defects with repeating patterns, on the production line itself – saving you time and money. Customers like Tyson and Baxter are discovering the power of Amazon Lookout for Vision to increase quality and reduce operational costs by automating visual inspection.

“Operational excellence is a key priority at Tyson Foods. Predictive maintenance is an essential asset for achieving this objective by continuously improving overall equipment effectiveness (OEE). In 2021, Tyson Foods launched a machine learning based computer vision project to identify failing product carriers during production to prevent them from impacting Team Member safety, operations, or product quality.

The models trained using Amazon Lookout for Vision performed well. The pin detection model achieved 95% accuracy across both classes. The Amazon Lookout for Vision model was tuned to perform at 99.1% accuracy for failing pin detection. By far the most exciting result of this project was the speedup in development time. Although this project utilizes two models and a more complex application code, it took 12% less developer time to complete. This project for monitoring the condition of the product carriers at Tyson Foods was completed in record time using AWS managed services such as Amazon Lookout for Vision.”

Audrey Timmerman, Sr Applications Developer, Tyson Foods.

“We use Amazon Lookout for Vision to automate inspection tasks and solve complex process management problems that can’t be addressed by manual inspection or traditional machine vision alone. Lookout for Vision’s cloud and edge capabilities provide us the ability to leverage computer vision and AI/ML-based solutions at scale in a rapid and agile manner, helping us to drive efficiencies on the manufacturing shop floor and enhance our operator’s productivity and experience.”

K. Karan, Global Senior Director – Digital Transformation, Integrated Supply Chain, Baxter International Inc.

Conclusion

In this post, we described a typical scenario for industrial defect detection at the edge. We walked through the key components of the cloud and edge lifecycle with an end-to-end example using Lookout for Vision and AWS IoT Greengrass. With Lookout for Vision, we trained an anomaly detection model in the cloud using the printed circuit board dataset, compiled the model to a target architecture, and packaged the model as an AWS IoT Greengrass component. With AWS IoT Greengrass, we deployed the model to an edge device. We demonstrated a Python-based sample application that sources printed circuit board images from the edge device local file system, runs the inferences on the Lookout for Vision model at the edge using the gRPC interface, and sends the inference data to an MQTT topic in the AWS Cloud.

In a future post, we will show how to run inferences on a real-time stream of images using a GStreamer media pipeline.

Start your journey towards industrial anomaly detection and identification by visiting the Amazon Lookout for Vision and AWS IoT Greengrass resource pages.


About the Authors

Amit Gupta is an AI Services Solutions Architect at AWS. He is passionate about enabling customers with well-architected machine learning solutions at scale.

 Ryan Vanderwerf is a partner solutions architect at Amazon Web Services. He previously provided Java virtual machine-focused consulting and project development as a software engineer at OCI on the Grails and Micronaut team. He was chief architect/director of products at ReachForce, with a focus on software and system architecture for AWS Cloud SaaS solutions for marketing data management. Ryan has built several SaaS solutions in several domains such as financial, media, telecom, and e-learning companies since 1996.

Prathyusha Cheruku is an AI/ML Computer Vision Product Manager at AWS. She focuses on building powerful, easy-to-use, no code/ low code deep learning-based image and video analysis services for AWS customers.

Read More