The ARA recipient is pioneering geometric deep learning, an approach that promises not only breakthroughs but also a way to unify the machine learning “zoo”.
Improving explainable AI’s explanations
Causal analysis improves both the classification accuracy and the relevance of the concepts identified by popular concept-based explanatory models.
Automatically scale Amazon Kendra query capacity units with Amazon EventBridge and AWS Lambda
Data is proliferating inside the enterprise, and employees are using more applications than ever before to get their jobs done. In fact, according to Okta Inc., the number of software apps deployed by large firms across all industries worldwide has increased 68%, reaching an average of 129 apps per company.
As employees continue to self-serve and the number of applications they use grows, so will the likelihood that critical business information will remain hard to find or get lost between systems, negatively impacting workforce productivity and operating costs.
Amazon Kendra is an intelligent search service powered by machine learning (ML). Unlike conventional search technologies, Amazon Kendra reimagines search by unifying unstructured data across multiple data sources as part of a single searchable index. Its deep learning and natural language processing capabilities then make it easy for you to get relevant answers when you need them.
Amazon Kendra Enterprise Edition includes storage capacity for 500,000 documents (150 GB of storage) and a query capacity of 40,000 queries per day (0.5 queries per second), and allows you to adjust index capacity by increasing or decreasing your query and storage capacity units as needed.
However, usage patterns and business needs are not always predictable. In this post, we demonstrate how you can automatically scale your Amazon Kendra index based on a time schedule using Amazon EventBridge and AWS Lambda. By doing this, you can increase capacity for peak usage, avoid service throttling, maintain flexibility, and control costs.
Solution overview
Amazon Kendra provides a dashboard that allows you to evaluate the average number of queries per second for your index. With this information, you can estimate the number of additional capacity units your workload requires at a specific point in time.
For example, the following graph shows that during business hours, a surge occurs in the average queries per second, but after hours, the number of queries reduces. We base our solution on this pattern to set up an EventBridge scheduled event that triggers the automatic scaling Lambda function.
The following diagram illustrates our architecture.
You can deploy the solution into your account two different ways:
- Deploy an AWS Serverless Application Model (AWS SAM) template:
- Clone the project from the aws-samples repository on GitHub and follow the instructions.
- Create the resources by using the AWS Management Console. In this post, we walk you through the following steps:
- Set up the Lambda function for scaling
- Configure permissions for the function
- Test the function
- Set up an EventBridge scheduled event
Set up the Lambda function
To create the Lambda function that we use for scaling, create a function using the Python 3.8 runtime.
Use the following code as the content of your lambda_function.py file:
#
# Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this
# software and associated documentation files (the "Software"), to deal in the Software
# without restriction, including without limitation the rights to use, copy, modify,
# merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
'''
Changes the number of Amazon Kendra Enterprise Edition index capacity units
Parameters
----------
event : dict
Lambda event
Returns
-------
The additional capacity action or an error
'''
import boto3

# Amazon Kendra client
KENDRA = boto3.client("kendra")

# Define your Amazon Kendra Enterprise Edition index ID
INDEX_ID = "<YOUR-INDEX-ID>"
# Define your baseline number of query capacity units
DEFAULT_UNITS = 0
# Define the number of query capacity units needed for increased capacity
ADDITIONAL_UNITS = 1


def add_capacity(index_id, capacity_units):
    # Add query capacity units to the index
    try:
        response = KENDRA.update_index(
            Id=index_id,
            CapacityUnits={
                'QueryCapacityUnits': int(capacity_units),
                'StorageCapacityUnits': 0
            })
        return response
    except Exception as e:
        raise e


def reset_capacity(index_id, default_units):
    # Reset query capacity units to the baseline
    try:
        response = KENDRA.update_index(
            Id=index_id,
            CapacityUnits={
                'QueryCapacityUnits': int(default_units),
                'StorageCapacityUnits': 0
            })
        return response
    except Exception as e:
        raise e


def current_capacity(index_id):
    # Describe the index to get its current capacity and status
    try:
        response = KENDRA.describe_index(Id=index_id)
        return response
    except Exception as e:
        raise e


def lambda_handler(event, context):
    print("Checking for query capacity units......")
    response = current_capacity(INDEX_ID)
    currentunits = response['CapacityUnits']['QueryCapacityUnits']
    print("Current query capacity units are: " + str(currentunits))
    status = response['Status']
    print("Current index status is: " + status)
    # If the index is already being updated, don't attempt to change the capacity
    if status == "UPDATING":
        return "Index is currently being updated. No changes have been applied."
    if status == "ACTIVE":
        # At baseline capacity: scale up. Otherwise: scale back down to the baseline.
        if currentunits == DEFAULT_UNITS:
            print("Adding query capacity...")
            response = add_capacity(INDEX_ID, ADDITIONAL_UNITS)
            print(response)
            return response
        else:
            print("Removing query capacity....")
            response = reset_capacity(INDEX_ID, DEFAULT_UNITS)
            print(response)
            return response
    return "Index is not ready to modify capacity. No changes have been applied."
You must modify the following variables to match your environment:
# Define your Amazon Kendra Enterprise Edition index ID
INDEX_ID = "<YOUR-INDEX-ID>"
# Define your baseline units
DEFAULT_UNITS = 1
# Define the number of Query Capacity Units needed for increased capacity
ADDITIONAL_UNITS = 4
- INDEX_ID – The ID for your index; you can check it on the Amazon Kendra console.
- DEFAULT_UNITS – The number of query capacity units that your Amazon Kendra Enterprise Edition index requires to operate at minimum capacity. This number can range from 0–20 (you can request more capacity). A value of 0 means that no extra capacity units are provisioned to your Amazon Kendra Enterprise Edition index, which leaves it with the default capacity of 0.5 queries per second.
- ADDITIONAL_UNITS – The number of query capacity units you require at times when additional capacity is needed. This value can range from 1–20 (you can request additional capacity).
Configure function permissions
To query the status of your index and to modify the number of query capacity units, you need to attach a policy with those permissions to your Lambda function's AWS Identity and Access Management (IAM) execution role.
- On the Lambda console, navigate to your function.
- On the Permissions tab, choose the execution role.
The IAM console opens automatically.
- On the Permissions tab, choose Attach policies.
- Choose Create policy.
A new tab opens.
- On the JSON tab, add the following content (make sure to provide your Region, account, and index information):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "MyPolicy",
"Effect": "Allow",
"Action": [
"kendra:UpdateIndex",
"kendra:DescribeIndex"
],
"Resource": "arn:aws:kendra:<YOUR-AWS-REGION>:<YOUR-ACCOUNT-ID>:index/<YOUR-INDEX-ID>"
}
]
}
- Choose Next: Tags.
- Choose Next: Review.
- For Name, enter a policy name (for this post, we use AmazonKendra_UpdateIndex).
- Choose Create policy.
- On the Attach permissions page, choose the refresh icon.
- Filter to find the policy you created.
- Select the policy and choose Attach policy.
Test the function
You can test your Lambda function by running a test event. For more information, see Invoke the Lambda function.
- On the Lambda console, navigate to your function.
- Create a new test event by choosing Test.
- Select Create new test event.
- For Event template, because your function doesn’t require any input from the event, you can choose the hello-world event template.
- Choose Create.
- Choose Test.
On the Lambda function logs, you can see the following messages:
Function Logs
START RequestId: 9b2382b7-0229-4b2b-883e-ba0f6b149513 Version: $LATEST
Checking for capacity units......
Current capacity units are: 1
Current index status is: ACTIVE
Adding capacity...
Set up an EventBridge scheduled event
An EventBridge scheduled event is an EventBridge event that is triggered on a regular schedule. This section shows how to create an EventBridge scheduled event that runs every day at 7 AM UTC and at 8 PM UTC to trigger the scaling Lambda function (lambda_function_kendra_index_handler). This allows your index to scale up with the additional query capacity units at 7 AM and scale down at 8 PM.
When you set up EventBridge scheduled events, you do so in the UTC time zone, so you need to calculate the time offset. For example, to run the event at 7 AM Central Standard Time (CST), you need to set the time to 1 PM UTC. If you want to accommodate daylight saving time, you have to create a different rule to account for the difference.
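If you prefer to script the schedule instead of using the console, the following is a minimal boto3 sketch that creates the rule, grants EventBridge permission to invoke the function, and attaches the function as a target. The function ARN is a placeholder; the rule name and cron expression match the ones used in the console steps that follow.
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:<region>:<account-id>:function:<your-scaling-function>"  # placeholder

# Rule that fires every day at 7 AM and 8 PM UTC
rule = events.put_rule(
    Name="kendra-index-scaler",
    ScheduleExpression="cron(0 7,20 * * ? *)",
    State="ENABLED",
)

# Allow EventBridge to invoke the Lambda function
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="kendra-index-scaler-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the function
events.put_targets(
    Rule="kendra-index-scaler",
    Targets=[{"Id": "kendra-index-scaler-target", "Arn": FUNCTION_ARN}],
)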
- On the EventBridge console, in the navigation pane, under Events, choose Rules.
- Choose Create rule.
- For Name, enter a name for your rule (for this post, we use kendra-index-scaler).
- In the Define pattern section, select Schedule.
- Select Cron expression and enter 0 7,20 * * ? *.
We use this cron expression to trigger the EventBridge event every day at 7 AM and 8 PM UTC.
- In the Select event bus section, select AWS default event bus.
- In the Select targets section, for Target, choose Lambda function.
- For Function, enter the function you created earlier (lambda_function_kendra_index_handler).
- Choose Create.
You can check Amazon CloudWatch Logs for the lambda_function_kendra_index_handler
function and see how it behaves depending on your index’s query capacity units.
Conclusion
In this post, you deployed a mechanism to automatically scale additional query capacity units for your Amazon Kendra Enterprise Edition index.
As a next step, you could periodically review your usage patterns in order to plan the schedule to accommodate your query volume. To learn more about Amazon Kendra’s use cases, benefits, and how to get started with it, visit the webpage!
About the Authors
Juan Bustos is an AI Services Specialist Solutions Architect at Amazon Web Services, based in Dallas, TX. Outside of work, he loves spending time writing and playing music as well as trying random restaurants with his family.
Tapodipta Ghosh is a Senior Architect. He leads the Content And Knowledge Engineering Machine Learning team that focuses on building models related to AWS Technical Content. He also helps our customers with AI/ML strategy and implementation using our AI Language services like Amazon Kendra.
Tom McMahon is a Product Marketing Manager on the AI Services team at AWS. He’s passionate about technology and storytelling and has spent time across a wide-range of industries including healthcare, retail, logistics, and ecommerce. In his spare time he enjoys spending time with family, music, playing golf, and exploring the amazing Pacific northwest and its surrounds.
Automate multi-modality, parallel data labeling workflows with Amazon SageMaker Ground Truth and AWS Step Functions
This is the first in a two-part series on the Amazon SageMaker Ground Truth hierarchical labeling workflow and dashboards. In Part 1, we look at creating multi-step labeling workflows for hierarchical label taxonomies using AWS Step Functions. In Part 2 (coming soon), we look at how to build dashboards for analyzing dataset annotations and worker performance metrics on data lakes generated as output from the complex workflows and derive insights.
Data labeling often requires a single data object to include multiple types of annotations, or multi-type, such as 2D boxes (bounding boxes), lines, and segmentation masks, all on a single image. Additionally, to create high-quality machine learning (ML) models using labeled data, you need a way to monitor the quality of the labels. You can do this by creating a workflow in which labeled data is audited and adjusted as needed. This post introduces a solution to address both of these labeling challenges using an automotive dataset, and you can extend this solution for use with any type of dataset.
For our use case, assume you have a large quantity of automotive video data filmed from one or more angles on a moving vehicle (for example, some Multi-Object Tracking (MOT) scenes) and you want to annotate the data using multiple types of annotations. You plan to use this data to train a cruise control, lane-keeping ML algorithm. Given the task at hand, it’s imperative that you use high-quality labels to train the model.
First, you must identify the types of annotations you want to add to your video frames. Some of the most important objects to label for this use case are other vehicles in the frame, road boundaries, and lanes. To do this, you define a hierarchical label taxonomy, which defines the type of labels you want to add to each video, and the order in which you want the labels to be added. The Ground Truth video tracking labeling job supports bounding box, polyline, polygon, and keypoint annotations. In this use case, vehicles are annotated using 2-dimensional boxes, or bounding boxes, and road boundaries and curves are annotated with a series of flexible line segments, referred to as polylines.
Second, you need to establish a workflow to ensure label quality. To do this, you can create an audit workflow to verify the labels generated by your pipeline are of high enough quality to be useful for model training. In this audit workflow, you can greatly improve label accuracy by building a multi-step review pipeline that allows annotations to be audited, and if necessary, adjusted by a second reviewer who may be a subject matter expert.
Based on the size of the dataset and data objects, you should also consider the time and resources required to create and maintain this pipeline. Ideally, you want this series of labeling jobs to be started automatically, only requiring human operation to specify the input data and workflow.
The solution used in this post uses Ground Truth, AWS CloudFormation, Step Functions, and Amazon DynamoDB to create a series of labeling jobs that run in a parallel and hierarchical fashion. You use a hierarchical label taxonomy to create labeling jobs of different modalities (polylines and bounding boxes), and you add secondary human review steps to improve annotation quality and final results.
For this post, we demonstrate the solution in the context of the automotive space, but you can easily apply this general pipeline to labeling pipelines involving images, videos, text, and more. In addition, we demonstrate a workflow that is extensible, allowing you to reduce the total number of frames that need human review by adding automated quality checks and maintaining data quality at scale. In this use case, we use these checks to find anomalies in MOT time series data like video object tracking annotations.
We walk through a use case in which we generate multiple types of annotations for an automotive scene. Specifically, we run four labeling jobs per input video clip: an initial labeling of vehicles, initial labeling of lanes, and then an adjustment job per initial job with a separate quality assurance workforce.
We demonstrate the various extension points in our Step Functions workflow that allow you to run automated quality assurance checks. This allows for clip filtering between and after jobs have completed, which can result in high-quality annotations for a fraction of the cost.
AWS services used to implement this solution
This solution creates and manages Ground Truth labeling jobs to label video frames using multiple types of annotations. Ground Truth has native support for video datasets through its video frame object tracking task type.
This task type allows workers to create annotations across a series of video frames, providing tools to predict the next location of a bounding box in subsequent frames. It also supports multiple annotation types such as bounding boxes or polylines through the label category configuration files provided during job creation. We use these tools in this tutorial, running a job for vehicle bounding boxes and a job for lane polylines.
We use Step Functions to manage the labeling job. This solution abstracts labeling job creation so that you specify the overall workflow you want to run using a hierarchical label taxonomy, and all job management is handled by Step Functions.
The solution is implemented using CloudFormation templates that you can deploy in your AWS account. The interface to the solution is an API managed by Amazon API Gateway, which provides the ability to submit annotation tasks to the solution, which are then translated into Ground Truth labeling jobs.
Estimated costs
By deploying and using this solution, you incur a maximum cost of approximately $20, excluding human labeling costs, because the solution only uses fully managed compute resources on demand. Amazon Simple Storage Service (Amazon S3), AWS Lambda, Amazon SageMaker, API Gateway, Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), AWS Glue, and Step Functions are included in the AWS Free Tier, with charges for additional use. For more information, see the following pricing pages:
Ground Truth pricing depends on the type of workforce that you use. If you’re a new user of Ground Truth, we suggest that you use a private workforce and include yourself as a worker to test your labeling job configuration. For more information, see Amazon SageMaker Ground Truth pricing.
Solution overview
In this two-part series, we discuss an architecture pattern that allows you to build a pipeline for orchestrating multi-step data labeling workflows that have workers add different types of annotation in parallel using Ground Truth. You also learn how you can analyze the dataset annotations produced by the workflow as well as worker performance. The first post covers the Step Functions workflow that automates advanced ML data labeling workflows using Ground Truth for chaining and hierarchical label taxonomies. The second post describes how to build data lakes on dataset annotations from Ground Truth and worker metrics and use these data lakes to derive insights or analyze the performance of your workers and dataset annotation quality using advanced analytics.
The following diagram depicts the hierarchical workflow, which you can use to run groups of labeling jobs in sequential steps, or levels, in which each labeling job in a single level runs in parallel.
The solution is composed of two main parts:
- Use an API to trigger the orchestration workflow.
- Run the individual steps of the workflow to achieve the labeling pipeline.
Trigger the orchestration workflow with an API
The CloudFormation template launched in this solution uses API Gateway to expose an endpoint for you to trigger batch labeling jobs. After you send the post request to the API Gateway endpoint, it runs a Lambda function to trigger the workflow.
The following table contains the two main user-facing APIs relevant to running a batch, which represents a group of multi-level labeling jobs.
| URL | Request Type | Description |
| --- | --- | --- |
| {endpointUrl}/batch/create | POST | Triggers a new batch of labeling jobs |
| {endpointUrl}/batch/show | GET | Describes the current state of a batch job run |
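For reference, a call to the batch creation API from Python might look like the following sketch. The payload fields shown here are hypothetical placeholders; the exact request schema is defined by the solution and demonstrated in the accompanying notebook, and depending on the API's authorizer you may also need to sign the request with SigV4.
import requests  # assumes the requests library is installed

# Hypothetical payload -- see the accompanying notebook for the exact schema
payload = {
    "batchId": "nb-demo-0001",
    "inputManifestS3Uri": "s3://<your-bucket>/tracking_manifests/demo.manifest",
}

endpoint_url = "https://<api-id>.execute-api.<region>.amazonaws.com/prod"  # from API Gateway
response = requests.post(f"{endpoint_url}/batch/create", json=payload)
print(response.status_code, response.text)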
Run the workflow
For the orchestration of steps, we use Step Functions as a managed solution. When the batch job creation API is triggered, a Lambda function triggers a Step Functions workflow like the following. This begins the annotation input processing.
Let’s discuss the steps in more detail.
Transformation step
The first step is to preprocess the data. The current implementation converts the notebook inputs into the internal manifest file data type shared across multiple steps. This step doesn’t currently perform any complex processing, but you can further customize this step by adding custom data preprocessing logic to this function. For example, if your dataset was encoded in raw videos, you could perform frame splitting and manifest generation within transformation rather than in a separate notebook. Alternatively, if you’re using this solution to create a 3D point cloud labeling pipeline, you may want to add logic to extract pose data in a world coordinate system using the camera and LiDAR extrinsic matrices.
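As an illustration of that kind of customization, a frame-splitting step could look like the following sketch. It assumes OpenCV (cv2) is packaged with the function and that the raw video has already been downloaded locally; paths and the sampling step are placeholders, not part of the solution's current code.
import cv2  # assumes the opencv-python package is available to the function


def split_video_into_frames(video_path, output_dir, frame_step=2):
    """Write every Nth frame of a local video file as a JPEG and return the frame paths."""
    capture = cv2.VideoCapture(video_path)
    frame_paths = []
    index = 0
    while True:
        success, frame = capture.read()
        if not success:
            break
        if index % frame_step == 0:
            path = f"{output_dir}/{index:06d}.jpg"
            cv2.imwrite(path, frame)
            frame_paths.append(path)
        index += 1
    capture.release()
    return frame_paths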
TriggerLabelingFirstLevel
When the data preprocessing is complete, the Ground Truth API operation CreateLabelingJob is used to launch labeling jobs. These labeling jobs are responsible for annotating datasets that are tied to the first level.
CheckForFirstLevelComplete
This step waits for the FIRST_LEVEL Ground Truth labeling jobs triggered from the TriggerLabelingFirstLevel step to be created, and then waits for all the created labeling jobs to complete. An external listener Lambda function monitors the status of the labeling jobs, and when all the pending labeling jobs are done, it calls the Step Functions SendTaskSuccess API to signal this state to proceed to the next step. Failure cases are handled using appropriate error clauses and timeouts in the step definition.
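A minimal sketch of how such a listener might resume the waiting state using the task token callback pattern follows; how the token and job names are stored and retrieved is an assumption, not the solution's exact implementation.
import json

import boto3

sagemaker = boto3.client("sagemaker")
sfn = boto3.client("stepfunctions")


def signal_if_complete(labeling_job_names, task_token):
    """Check pending Ground Truth jobs and resume the waiting state when all are done."""
    statuses = [
        sagemaker.describe_labeling_job(LabelingJobName=name)["LabelingJobStatus"]
        for name in labeling_job_names
    ]
    if any(status == "Failed" for status in statuses):
        # Surface the failure so the state machine's error clauses can handle it
        sfn.send_task_failure(taskToken=task_token, error="LabelingJobFailed")
    elif all(status == "Completed" for status in statuses):
        # Resume the waiting state (for example, CheckForFirstLevelComplete)
        sfn.send_task_success(taskToken=task_token, output=json.dumps({"status": "COMPLETE"}))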
SendSecondLevelSNSAndCheckResponse
This step performs postprocessing on the output of the first-level job. For example, if your requirements are to only send 10% of frames to the adjustment jobs, you can implement this logic here by filtering the set of outputs from the first job.
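A minimal sketch of that kind of filtering, assuming the postprocessing step reads and rewrites an output manifest in S3, follows. The bucket, key, and sampling ratio are placeholders, and in practice you might filter at the frame level inside each sequence instead of sampling whole manifest entries.
import random

import boto3

s3 = boto3.client("s3")


def sample_output_manifest(bucket, key, ratio=0.1):
    """Keep a random subset of output manifest lines for second-level review."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    lines = [line for line in body.splitlines() if line.strip()]
    sampled = [line for line in lines if random.random() < ratio]
    filtered_key = key.replace(".manifest", "-sampled.manifest")
    s3.put_object(Bucket=bucket, Key=filtered_key, Body="\n".join(sampled).encode("utf-8"))
    return filtered_key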
TriggerLabelingSecondLevel
When the data postprocessing from the first level is complete, the CreateLabelingJob operation
is used to launch labeling jobs to complete annotations at the second level. At this stage, a private workforce reviews the quality of annotations of the first-level labeling jobs and updates annotations as needed.
CheckForSecondLevelComplete
This step is the same wait step as CheckForFirstLevelComplete
, but this step simply waits for the jobs that are created from the second level.
SendThirdLevelSNSAndCheckResponse
This step is the same post-processing step as SendSecondLevelSNSAndCheckResponse
, but it postprocesses the second-level output and feeds it as input to the third-level labeling jobs.
TriggerLabelingThirdLevel
This is the same logic as TriggerLabelingSecondLevel
, but the labeling jobs that are triggered are annotated as third level. At this stage, the private workforce reviews and updates the annotations produced by the second-level labeling jobs.
CopyLogsAndSendBatchCompleted
This Lambda function logs and sends SNS messages to notify users that the batch is complete. It’s also a placeholder for any postprocessing logic that you may want to run. Common postprocessing includes transforming the labeled data into a customer-specific data format.
Prerequisites
Before getting started, make sure you have the following prerequisites:
- An AWS account.
- A notebook AWS Identity and Access Management (IAM) role with the permissions required to complete this walkthrough. If you don’t require granular permissions, attach the following AWS managed policies:
AmazonS3FullAccess
AmazonAPIGatewayInvokeFullAccess
AmazonSageMakerFullAccess
- Familiarity with Ground Truth, AWS CloudFormation, and Step Functions.
- A SageMaker workforce. For this post, we use a private workforce. You can create a workforce on the SageMaker console. Note the Amazon Cognito user pool identifier and the app client identifier after your workforce is created. You use these values to tell the CloudFormation stack deployment which workforce to use when creating work teams, which represent groups of labelers. You can find these values in the Private workforce summary section on the console after you create your workforce, or when you call DescribeWorkteam.
The following GIF demonstrates how to create a private workforce. For step-by-step instructions, see Create an Amazon Cognito Workforce Using the Labeling Workforces Page.
Launch the CloudFormation stack
Now that we’ve seen the structure of the solution, we deploy it into our account so we can run an example workflow. All our deployment steps are managed by AWS CloudFormation—it creates resources in Lambda, Step Functions, DynamoDB, and API Gateway for you.
You can launch the stack in AWS Region us-east-1
on the CloudFormation console by choosing Launch Stack:
On the CloudFormation console, select Next, and then modify the following template parameters to customize the solution.
You can locate the CognitoUserPoolClientId and CognitoUserPoolId in the SageMaker console.
- CognitoUserPoolClientId: App client ID of your private workforce.
- CognitoUserPoolId: ID of the user pool associated with your private workforce.
To locate these values in the console:
- Open the SageMaker console at https://console.aws.amazon.com/sagemaker/
- Select Labeling workforces in the navigation pane.
- Choose the Private tab.
- Use the values in the Private workforce summary section: the App client value for the CognitoUserPoolClientId and the Amazon Cognito user pool value for the CognitoUserPoolId.
For this tutorial, you can use the default values for the following parameters.
- GlueJobTriggerCron: Cron expression to use when scheduling the reporting AWS Glue job. The annotations generated with SageMaker Ground Truth and the worker performance metrics are used to create a dashboard in Amazon QuickSight; this is explained in detail in the second post of this series. The outputs from SageMaker annotations and worker performance metrics show up in Athena queries after the data is processed with AWS Glue. By default, the AWS Glue cron jobs run every hour.
- JobCompletionTimeout: Number of seconds to wait before treating a labeling job as failed and moving to the BatchError state.
- LoggingLevel: Logging level to change the verbosity of logs. Accepts values DEBUG and PROD. This is used internally and can be ignored.
- Prefix: A prefix to use when naming resources used to create and manage labeling jobs and worker metrics.
To launch the stack in a different AWS Region, use the instructions found in the README of the GitHub repository.
After you deploy the solution, two new work teams are in the private workforce you created earlier: smgt-workflow-first-level
and smgt-workflow-second-level
. These are the default work teams used by the solution if no overrides are specified, and the smgt-workflow-second-level
work team is used for labeling second-level and third-level jobs. You should add yourself to both work teams to see labeling tasks created by the solution. To learn how to add yourself to a private work team, see Add or Remove Workers.
You also need to go to the API Gateway console and look for the deployed API prefixed with smgt-workflow
and note its ID. The notebook needs to reference this ID so it can determine which API URL to call.
Launch the notebook
After you deploy the solution into your account, you’re ready to launch a notebook to interact with it and start new workflows. In this section, we walk through the following steps:
- Set up and access the notebook instance.
- Obtain the example dataset.
- Prepare Ground Truth input files.
Set up the SageMaker notebook instance
In this example notebook, you learn how to map a simple taxonomy consisting of a vehicle class and a lane class to Ground Truth label category configuration files. A label category configuration file is used to define the labels that workers use to annotate your images. Next, you learn how to launch and configure the solution that runs the pipeline using a CloudFormation template. You can also further customize this code, for example by customizing the batch creation API call to run labeling for a different combination of task types.
To create a notebook instance and access the notebook used in this post, complete the following steps:
- Create a notebook instance with the following parameters:
- Use ml.t2.medium to launch the notebook instance.
- Increase the ML storage volume size to at least 10 GB.
- Select the notebook IAM role described in prerequisites. This role allows your notebook to upload your dataset to Amazon S3 and call the solution APIs.
- Open Jupyter Lab or Jupyter to access your notebook instances.
- In Jupyter, choose the SageMaker Examples tab. In JupyterLab, choose the SageMaker icon.
- Choose Ground Truth Labeling Jobs and then choose the sagemaker_ground_truth_workflows.ipynb notebook.
- If you’re using Jupyter, choose Use to copy the notebook to your instance and run it. If you’re in JupyterLab, choose Create a Copy.
Obtain the example dataset
Complete the following steps to set up your dataset:
- Download MOT17.zip using the Download Dataset section of the notebook.
This download is approximately 5 GB and takes several minutes.
- Unzip MOT17.zip using the notebook’s Unzip dataset section.
- Under the Copy Data to S3 header, run the cell to copy one set of video frames to Amazon S3.
Prepare the Ground Truth input files
To use the solution, we need to create a manifest file. This file tells Ground Truth where your dataset is. We also need two label category configuration files to describe our label names, and the labeling tool to use for each (bounding box or polyline).
- Run the cells under Generate Manifest to obtain a list of frames in a video from the dataset. We take 150 frames at half the frame rate of the video as an example.
- Continue running cells under Generate Manifest to build a sequence file describing our video frames, and then to create a manifest file referring to our sequence file.
- Run the cell under Generate Label Category Configuration Files to create two new files: a vehicle label configuration file (which uses the bounding box tool), and a lane label configuration file (which uses the polyline tool).
- Copy the manifest file and label category configuration files to Amazon S3 by running the Send data to S3 cell.
At this point, you have prepared all inputs to the labeling jobs and are ready to begin operating the solution.
To learn more about Ground Truth video frame labeling jobs and chaining, see the following references:
Run an example workflow
In this section, we walk through the steps to run an example workflow on the automotive dataset. We create a multi-modality workflow, perform both initial and audit labeling, then view our completed annotations.
Create a workflow batch
This solution orchestrates a workflow of Ground Truth labeling jobs to run both video object tracking bounding box jobs and polyline jobs, as well as automatically create adjustment jobs after the initial labeling. This workflow batch is configured through the batch_create
API available to the solution.
Run the cell under Batch Creation Demo in the notebook. This passes your input manifest and label category configuration S3 URIs to a new workflow batch.
The cell should output the ID of the newly created workflow batch, for example:
Batch processor successfully triggered with BatchId : nb-ccb0514c
Complete the first round of labeling tasks
To simulate workers completing labeling, we log in as a worker in the first-level Ground Truth work team and complete the labeling task.
- Run the cell under Sign-in To Worker Portal to get a link to log in to the worker portal.
An invitation should have already been sent to your email address if you invited yourself to the solution-generated first-level and second-level work teams.
- Log in and wait for the tasks to appear in the worker portal.
Two tasks should be available, one ending in vehicle and one ending in lane, corresponding to the two jobs we created during workflow batch creation.
- Open each task and add some dummy labels by choosing and dragging on the image frames.
- Choose Submit on each task.
Complete the second round of labeling tasks
Our workflow specified we wanted adjustment jobs auto-launched for each first-level job. We now complete the second round of labeling tasks.
- Still in the worker portal, wait for tasks ending in vehicle-audit and lane-audit to appear.
- Open each task in the worker portal, noting that the prior level’s labels are still visible.
These adjustment tasks could be performed by a more highly trained quality assurance group in a different work team.
- Make adjustments as desired and choose Pass or Fail on each annotation.
- When you’re finished, choose Submit.
View the completed annotations
We can view details about the completed workflow batch by running the batch show API.
- Run the cell under Batch Show Demo.
This queries the solution’s database for all complete workflow run batches, and should output your batch ID when your batch is complete.
- We can get more specific details about a batch by running the cell under Batch Detailed Show Demo.
This takes the ID of a batch in the system and returns status information and the locations of all input and output manifests for each created job.
- Copy the jobOutputS3Url field value for any of the jobs, open it, and verify that the manifest file for that job is downloaded.
This file contains a reference to your input data sequence as well as the S3 URI of the output annotations for each sequence.
Final results
When all labeling jobs in the pipeline are complete, an SNS message is published to the default status SNS topic. You can subscribe an email address to the topic to verify the solution’s functionality. The message includes the batch ID used during batch creation, a message about the batch completion, and the same information the batch/show
API provides under a batchInfo
key. You can parse this message to extract metadata about the completed labeling jobs in the second level of the pipeline.
{
"batchId": "nb-track-823f6d3e",
"message": "Batch processing has completed successfully.",
"batchInfo": {
"batchId": "nb-track-823f6d3e",
"status": "COMPLETE",
"inputLabelingJobs": [
{
"jobName": "nb-track-823f6d3e-vehicle",
"taskAvailabilityLifetimeInSeconds": "864000",
"inputConfig": {
"inputManifestS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/MOT17-13-SDP.manifest"
},
"jobModality": "VideoObjectTracking",
"taskTimeLimitInSeconds": "604800",
"maxConcurrentTaskCount": "100",
"workteamArn": "arn:aws:sagemaker:us-west-2:322552456788:workteam/private-crowd/smgt-workflow-1-first-level",
"jobType": "BATCH",
"jobLevel": "1",
"labelCategoryConfigS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/vehicle_label_category.json"
},
{
"jobName": "nb-track-823f6d3e-lane",
"taskAvailabilityLifetimeInSeconds": "864000",
"inputConfig": {
"inputManifestS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/MOT17-13-SDP.manifest"
},
"jobModality": "VideoObjectTracking",
"taskTimeLimitInSeconds": "604800",
"maxConcurrentTaskCount": "100",
"workteamArn": "arn:aws:sagemaker:us-west-2:322552456788:workteam/private-crowd/smgt-workflow-1-first-level",
"jobType": "BATCH",
"jobLevel": "1",
"labelCategoryConfigS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/lane_label_category.json"
},
{
"jobName": "nb-track-823f6d3e-vehicle-audit",
"taskAvailabilityLifetimeInSeconds": "864000",
"inputConfig": {
"chainFromJobName": "nb-track-823f6d3e-vehicle"
},
"jobModality": "VideoObjectTrackingAudit",
"taskTimeLimitInSeconds": "604800",
"maxConcurrentTaskCount": "100",
"workteamArn": "arn:aws:sagemaker:us-west-2:322552456788:workteam/private-crowd/smgt-workflow-1-first-level",
"jobType": "BATCH",
"jobLevel": "2"
},
{
"jobName": "nb-track-823f6d3e-lane-audit",
"taskAvailabilityLifetimeInSeconds": "864000",
"inputConfig": {
"chainFromJobName": "nb-track-823f6d3e-lane"
},
"jobModality": "VideoObjectTrackingAudit",
"taskTimeLimitInSeconds": "604800",
"maxConcurrentTaskCount": "100",
"workteamArn": "arn:aws:sagemaker:us-west-2:322552456788:workteam/private-crowd/smgt-workflow-1-first-level",
"jobType": "BATCH",
"jobLevel": "2"
}
],
"firstLevel": {
"status": "COMPLETE",
"numChildBatches": "2",
"numChildBatchesComplete": "2",
"jobLevels": [
{
"batchId": "nb-track-823f6d3e-first_level-nb-track-823f6d3e-lane",
"batchStatus": "COMPLETE",
"labelingJobName": "nb-track-823f6d3e-lane",
"labelAttributeName": "nb-track-823f6d3e-lane-ref",
"labelCategoryS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/lane_label_category.json",
"jobInputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/MOT17-13-SDP.manifest",
"jobInputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-input.s3.amazonaws.com/tracking_manifests/MOT17-13-SDP.manifest?...",
"jobOutputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectDetection/nb-track-823f6d3e-first_level-nb-track-823f6d3e-lane/output/nb-track-823f6d3e-lane/manifests/output/output.manifest",
"jobOutputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectDetection/nb-track-823f6d3e-first_level-nb-track-823f6d3e-lane/output/nb-track-823f6d3e-lane/manifests/output/output.manifest?..."
},
{
"batchId": "nb-track-823f6d3e-first_level-nb-track-823f6d3e-vehicle",
"batchStatus": "COMPLETE",
"labelingJobName": "nb-track-823f6d3e-vehicle",
"labelAttributeName": "nb-track-823f6d3e-vehicle-ref",
"labelCategoryS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/vehicle_label_category.json",
"jobInputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/MOT17-13-SDP.manifest",
"jobInputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-input.s3.amazonaws.com/tracking_manifests/MOT17-13-SDP.manifest?...",
"jobOutputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectTracking/nb-track-823f6d3e-first_level-nb-track-823f6d3e-vehicle/output/nb-track-823f6d3e-vehicle/manifests/output/output.manifest",
"jobOutputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectTracking/nb-track-823f6d3e-first_level-nb-track-823f6d3e-vehicle/output/nb-track-823f6d3e-vehicle/manifests/output/output.manifest?..."
}
]
},
"secondLevel": {
"status": "COMPLETE",
"numChildBatches": "2",
"numChildBatchesComplete": "2",
"jobLevels": [
{
"batchId": "nb-track-823f6d3e-second_level-nb-track-823f6d3e-vehicle-audit",
"batchStatus": "COMPLETE",
"labelingJobName": "nb-track-823f6d3e-vehicle-audit",
"labelAttributeName": "nb-track-823f6d3e-vehicle-audit-ref",
"labelCategoryS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/label_category_input/nb-track-823f6d3e-second_level-nb-track-823f6d3e-vehicle-audit/category-file.json",
"jobInputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectTracking/nb-track-823f6d3e-first_level-nb-track-823f6d3e-vehicle/output/nb-track-823f6d3e-vehicle/manifests/output/output.manifest",
"jobInputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectTracking/nb-track-823f6d3e-first_level-nb-track-823f6d3e-vehicle/output/nb-track-823f6d3e-vehicle/manifests/output/output.manifest?...",
"jobOutputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectTrackingAudit/nb-track-823f6d3e-second_level-nb-track-823f6d3e-vehicle-audit/output/nb-track-823f6d3e-vehicle-audit/manifests/output/output.manifest",
"jobOutputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectTrackingAudit/nb-track-823f6d3e-second_level-nb-track-823f6d3e-vehicle-audit/output/nb-track-823f6d3e-vehicle-audit/manifests/output/output.manifest?..."
},
{
"batchId": "nb-track-823f6d3e-second_level-nb-track-823f6d3e-lane-audit",
"batchStatus": "COMPLETE",
"labelingJobName": "nb-track-823f6d3e-lane-audit",
"labelAttributeName": "nb-track-823f6d3e-lane-audit-ref",
"labelCategoryS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/label_category_input/nb-track-823f6d3e-second_level-nb-track-823f6d3e-lane-audit/category-file.json",
"jobInputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectDetection/nb-track-823f6d3e-first_level-nb-track-823f6d3e-lane/output/nb-track-823f6d3e-lane/manifests/output/output.manifest",
"jobInputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectDetection/nb-track-823f6d3e-first_level-nb-track-823f6d3e-lane/output/nb-track-823f6d3e-lane/manifests/output/output.manifest?...",
"jobOutputS3Uri": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectDetectionAudit/nb-track-823f6d3e-second_level-nb-track-823f6d3e-lane-audit/output/nb-track-823f6d3e-lane-audit/manifests/output/output.manifest",
"jobOutputS3Url": "https://smgt-workflow-1-322552456788-us-west-2-batch-processing.s3.amazonaws.com/batch_manifests/VideoObjectDetectionAudit/nb-track-823f6d3e-second_level-nb-track-823f6d3e-lane-audit/output/nb-track-823f6d3e-lane-audit/manifests/output/output.manifest?..."
}
]
},
"thirdLevel": {
"status": "COMPLETE",
"numChildBatches": "0",
"numChildBatchesComplete": "0",
"jobLevels": []
}
},
"token": "arn:aws:states:us-west-2:322552456788:execution:smgt-workflow-1-batch-process:nb-track-823f6d3e-8432b929",
"status": "SUCCESS"
}
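The following is a minimal sketch of a Lambda function subscribed to the status topic that parses this message and extracts the second-level jobs and their output manifests, based on the message layout shown above.
import json


def lambda_handler(event, context):
    # The batch completion message arrives as the SNS record body
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    second_level_jobs = [
        job
        for job in message["batchInfo"]["inputLabelingJobs"]
        if job["jobLevel"] == "2"
    ]
    output_manifests = [
        level["jobOutputS3Uri"]
        for level in message["batchInfo"]["secondLevel"]["jobLevels"]
    ]
    print("Second-level jobs:", [job["jobName"] for job in second_level_jobs])
    print("Output manifests:", output_manifests)
    return output_manifests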
Within each job metadata blob, a jobOutputS3Url
field contains a presigned URL to access the output manifest of this particular job. The output manifest contains the results of data labeling in augmented manifest format, which you can parse to retrieve annotations by indexing the JSON object with <jobName>-ref
. This field points to an S3 location containing all annotations for the given video clip.
{
"source-ref": "s3://smgt-workflow-1-322552456788-us-west-2-batch-input/tracking_manifests/MOT17-13-SDP_seq.json",
"nb-track-93aa7d01-vehicle-ref": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectTracking/nb-track-93aa7d01-first_level-nb-track-93aa7d01-vehicle/output/nb-track-93aa7d01-vehicle/annotations/consolidated-annotation/output/0/SeqLabel.json",
"nb-track-93aa7d01-vehicle-ref-metadata": {
"class-map": {"0": "Vehicle"},
"job-name": "labeling-job/nb-track-93aa7d01-vehicle",
"human-annotated": "yes",
"creation-date": "2021-04-05T17:43:02.469000",
"type": "groundtruth/video-object-tracking",
},
"nb-track-93aa7d01-vehicle-audit-ref": "s3://smgt-workflow-1-322552456788-us-west-2-batch-processing/batch_manifests/VideoObjectTrackingAudit/nb-track-93aa7d01-second_level-nb-track-93aa7d01-vehicle-audit/output/nb-track-93aa7d01-vehicle-audit/annotations/consolidated-annotation/output/0/SeqLabel.json",
"nb-track-93aa7d01-vehicle-audit-ref-metadata": {
"class-map": {"0": "Vehicle"},
"job-name": "labeling-job/nb-track-93aa7d01-vehicle-audit",
"human-annotated": "yes",
"creation-date": "2021-04-05T17:55:33.284000",
"type": "groundtruth/video-object-tracking",
"adjustment-status": "unadjusted",
},
}
For example, for bounding box jobs, the SeqLabel.json
file contains bounding box annotations for each annotated frame (in this case, only the first frame is annotated):
{
"tracking-annotations": [
{
"annotations": [
{
"height": 66,
"width": 81,
"top": 547,
"left": 954,
"class-id": "0",
"label-category-attributes": {},
"object-id": "3c02d0f0-9636-11eb-90fe-6dd825b8de95",
"object-name": "Vehicle:1"
},
{
"height": 98,
"width": 106,
"top": 545,
"left": 1079,
"class-id": "0",
"label-category-attributes": {},
"object-id": "3d957ee0-9636-11eb-90fe-6dd825b8de95",
"object-name": "Vehicle:2"
}
],
"frame-no": "0",
"frame": "000001.jpg",
"frame-attributes": {}
}
]
}
Because the batch completion SNS message contains all output manifest files from the jobs launched in parallel, you can perform any postprocessing of your annotations based on this message. For example, if you have a specific serialization format for these annotations that combines vehicle bounding boxes and lane annotations, you can get the output manifest of the lane job as well as the vehicle job, then merge based on frame number and convert to your desired final format.
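For example, the following sketch downloads the SeqLabel.json files for a vehicle job and a lane job and groups their annotations by frame number. It assumes the lane job's SeqLabel.json follows the same tracking-annotations layout shown above for the vehicle job.
import json
from collections import defaultdict
from urllib.parse import urlparse

import boto3

s3 = boto3.client("s3")


def load_seq_label(s3_uri):
    """Download and parse a SeqLabel.json file given its s3:// URI."""
    parsed = urlparse(s3_uri)
    body = s3.get_object(Bucket=parsed.netloc, Key=parsed.path.lstrip("/"))["Body"].read()
    return json.loads(body)


def merge_by_frame(vehicle_seq_label_uri, lane_seq_label_uri):
    """Combine vehicle boxes and lane annotations into one record per frame."""
    merged = defaultdict(dict)
    for kind, uri in [("vehicles", vehicle_seq_label_uri), ("lanes", lane_seq_label_uri)]:
        for frame in load_seq_label(uri)["tracking-annotations"]:
            merged[frame["frame-no"]][kind] = frame["annotations"]
            merged[frame["frame-no"]]["frame"] = frame["frame"]
    return dict(merged)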
To learn more about Ground Truth output data formats, see Output Data.
Clean up
To avoid incurring future charges, run the Clean up section of the notebook to delete all the resources including S3 objects and the CloudFormation stack. When the deletion is complete, make sure to stop and delete the notebook instance that is hosting the current notebook script.
Conclusion
This two-part series provides you with a reference architecture to build an advanced data labeling workflow that comprises a multi-step data labeling pipeline, adjustment jobs, and data lakes for the corresponding dataset annotations and worker metrics, as well as updated dashboards.
In this post, you learned how to take video frame data and trigger a workflow to run multiple Ground Truth labeling jobs, generating two different types of annotations (bounding boxes and polylines). You also learned how you can extend the pipeline to audit and verify the labeled dataset and how to retrieve the audited results. Lastly, you saw how to reference the current progress of batch jobs using the BatchShow API.
For more information about the data lake for Ground Truth dataset annotations and worker metrics, check back on the Ground Truth blog for the second post in this series (coming soon).
Try out the notebook and customize it for your input datasets by adding additional jobs or audit steps, or by modifying the data modality of the jobs. Further customization of the solution could include, but is not limited to:
- Adding additional types of annotations such as semantic segmentation masks or keypoints
- Adding automated quality assurance and filtering to the Step Functions workflow to only send low-quality annotations to the next level of review
- Adding third or fourth levels of quality review for additional, more specialized types of reviews
This solution is built using serverless technologies on top of Step Functions, which makes it highly customizable and applicable for a wide variety of applications.
About the Authors
Vidya Sagar Ravipati is a Deep Learning Architect at the Amazon ML Solutions Lab, where he leverages his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption. Previously, he was a Machine Learning Engineer in Connectivity Services at Amazon who helped to build personalization and predictive maintenance platforms.
Jeremy Feltracco is a Software Development Engineer with the Amazon ML Solutions Lab at Amazon Web Services. He uses his background in computer vision, robotics, and machine learning to help AWS customers accelerate their AI adoption.
Jae Sung Jang is a Software Development Engineer. His passion lies with automating manual process using AI Solutions and Orchestration technologies to ensure business execution.
Talia Chopra is a Technical Writer in AWS specializing in machine learning and artificial intelligence. She works with multiple teams in AWS to create technical documentation and tutorials for customers using Amazon SageMaker, MxNet, and AutoGluon.
3 questions with Michael Kearns: Designing socially aware algorithms and models
Kearns is a featured speaker at the first virtual Amazon Web Services Machine Learning Summit on June 2.
Segment paragraphs and detect insights with Amazon Textract and Amazon Comprehend
Many companies extract data from scanned documents containing tables and forms, such as PDFs. Some examples are audit documents, tax documents, whitepapers, or customer review documents. For customer reviews, you might be extracting text such as product reviews, movie reviews, or feedback. Further understanding of the individual and overall sentiment of the user base from the extracted text can be very useful.
You can extract data through manual data entry, which is slow, expensive, and prone to errors. Alternatively, you can use simple optical character recognition (OCR) techniques, which require manual configuration and changes for different inputs. The process of extracting meaningful information from this data is often manual, time-consuming, and may require expert knowledge and skills around data science, machine learning (ML), and natural language processing (NLP) techniques.
To overcome these manual processes, we have AWS AI services such as Amazon Textract and Amazon Comprehend. AWS pre-trained AI services provide ready-made intelligence for your applications and workflows. Because we use the same deep learning technology that powers Amazon.com, you get quality and accuracy from continuously learning APIs. And best of all, AI services on AWS don’t require ML experience.
Amazon Textract uses ML to extract data from documents such as printed text, handwriting, forms, and tables without the need for any manual effort or custom code. Amazon Textract extracts complete text from given documents and provides key information such as page numbers and bounding boxes.
Based on the document layout, you may need to separate paragraphs and headers into logical sections to get more insights from the document at a granular level. This is more useful than simply extracting all of the text. Amazon Textract provides information such as the bounding box location of each detected text and its size and indentation. This information can be very useful for segmenting text responses from Amazon Textract in the form of paragraphs.
In this post, we cover some key paragraph segmentation techniques to postprocess responses from Amazon Textract, and use Amazon Comprehend to generate insights such as sentiment and entity extraction:
- Identify paragraphs by font sizes by postprocessing the Amazon Textract response
- Identify paragraphs by indentation using bounding box information
- Identify segments of the document or paragraphs based on the spacing between lines
- Identify the paragraphs or statements in the document based on full stops
Gain insights from extracted paragraphs using Amazon Comprehend
After you segment the paragraphs using any of these techniques, you can gain further insights from the segmented text by using Amazon Comprehend for the following use cases:
- Detecting key phrases in technical documents – For documents such as whitepapers and request for proposal documents, you can segment the document by paragraphs using the library provided in the post and then use Amazon Comprehend to detect key phrases.
- Detecting named entities from financial and legal documents – In some use cases, you may want to identify key entities associated with paragraph headings and subheadings. For example, you can segment legal documents and financial documents by headings and paragraphs and detect named entities using Amazon Comprehend.
- Sentiment analysis of product or movie reviews – You can perform sentiment analysis using Amazon Comprehend to check when the sentiments of a paragraph changes in product review documents and act accordingly if the reviews are negative.
In this post, we cover the sentiment analysis use case specifically.
We use two different sample movie review PDFs for this use case, which are available on GitHub. The documents contain movie names as the headers for individual paragraphs and reviews as the paragraph content. We identify the overall sentiment of each movie as well as the sentiment for each review. However, analyzing an entire page as a single entity isn’t ideal for getting an overall sentiment. Therefore, we extract the text, identify reviewer names and comments, and generate the sentiment of each review.
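For a single review paragraph, the sentiment call itself is a one-liner with Amazon Comprehend; the following is a minimal sketch (the example review text is made up):
import boto3

comprehend = boto3.client("comprehend")


def review_sentiment(review_text):
    """Return the dominant sentiment and confidence scores for one review paragraph."""
    response = comprehend.detect_sentiment(Text=review_text, LanguageCode="en")
    return response["Sentiment"], response["SentimentScore"]


# Example usage with a made-up review
sentiment, scores = review_sentiment("The pacing dragged, but the cinematography was stunning.")
print(sentiment, scores)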
Solution overview
This solution uses the following AI services, serverless technologies, and managed services to implement a scalable and cost-effective architecture:
- Amazon Comprehend – An NLP service that uses ML to find insights and relationships in text.
- Amazon DynamoDB – A key-value and document database that delivers single-digit millisecond performance at any scale.
- AWS Lambda – Runs code in response to triggers such as changes in data, shifts in system state, or user actions. Because Amazon S3 can directly trigger a Lambda function, you can build a variety of real-time serverless data-processing systems.
- Amazon Simple Notification Service (Amazon SNS) – A fully managed messaging service that Amazon Textract uses to send a notification when the extraction process is complete.
- Amazon Simple Storage Service (Amazon S3) – Serves as an object store for your documents and allows for central management with fine-tuned access controls.
- Amazon Textract – Uses ML to extract text and data from scanned documents in PDF, JPEG, or PNG formats.
The following diagram illustrates the architecture of the solution.
Our workflow includes the following steps:
- A movie review document gets uploaded into the designated S3 bucket.
- The upload triggers a Lambda function using Amazon S3 Event Notifications.
- The Lambda function triggers an asynchronous Amazon Textract job to extract text from the input document. Amazon Textract runs the extraction process in the background.
- When the process is complete, Amazon Textract sends an SNS notification. The notification message contains the job ID and the status of the job. The code for Steps 3 and 4 is in the file textraction-inovcation.py.
- Lambda listens to the SNS notification and calls Amazon Textract to get the complete text extracted from the document. Lambda uses the text and bounding box data provided by Amazon Textract. The code for the bounding box data extraction can be found in lambda-helper.py.
- The Lambda function uses the bounding box data to identify the headers and paragraphs. We discuss two types of document formats in this post: a document with left indentation differences between headers and paragraphs, and a document with font size differences. The Lambda code that uses left indentation can be found in blog-code-format2.py and the code for font size differences can be found in blog-code-format1.py.
- After the headers and paragraphs are identified, Lambda invokes Amazon Comprehend to get the sentiment. After the sentiment is identified, Lambda stores the information in DynamoDB.
- DynamoDB stores the information extracted and the insights identified for each document. The document name is the key, and the insights and paragraphs are the values (a sketch of this write follows the list).
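A minimal sketch of the DynamoDB write in the last two steps follows; the table and attribute names are assumptions rather than the solution's exact schema.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("textract-demo-results")  # hypothetical table name


def store_document_insights(document_name, paragraphs_with_sentiment):
    """Persist the segmented paragraphs and their sentiment, keyed by document name."""
    # Note: numeric values must be DynamoDB-serializable (for example, Decimal rather than float)
    table.put_item(
        Item={
            "document_name": document_name,
            "paragraphs": paragraphs_with_sentiment,
        }
    )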
Deploy the architecture with AWS CloudFormation
You deploy an AWS CloudFormation template to provision the necessary AWS Identity and Access Management (IAM) roles, services, and components of the solution, including Amazon S3, Lambda, Amazon Textract, and Amazon Comprehend.
- Launch the following CloudFormation template in the US East (N. Virginia) Region:
- For BucketName, enter textract-demo-<date> (adding a date as a suffix makes the bucket name unique).
- Choose Next.
- In the Capabilities and transforms section, select all three check boxes to acknowledge that AWS CloudFormation may create IAM resources.
- Choose Create stack.
This template uses AWS Serverless Application Model (AWS SAM), which simplifies how to define functions and APIs for serverless applications, and also has features for these services, like environment variables.
The following screenshot of the stack details page shows the status of the stack as CREATE_IN_PROGRESS
. It can take up to 5 minutes for the status to change to CREATE_COMPLETE
. When it’s complete, you can view the outputs on the Outputs tab.
Process a file through the pipeline
When the setup is complete, the next step is to walk through the process of uploading a file and validating the results after the file is processed through the pipeline.
To process a file and get the results, upload your documents to your new S3 bucket, then choose the S3 bucket URL corresponding to the s3BucketForTextractDemo key on the stack Outputs tab.
You can download the sample document used in this post from the GitHub repo and upload it to the s3BucketForTextractDemo S3 URL. For more information about uploading files, see How do I upload files and folders to an S3 bucket?
After the document is uploaded, the textraction-inovcation.py Lambda function is invoked. This function calls the Amazon Textract StartDocumentTextDetection API, which sets up an asynchronous job to detect text from the PDF you uploaded. The code uses the S3 object location, IAM role, and SNS topic created by the CloudFormation stack. The role ARN and SNS topic ARN were set as environment variables to the function by AWS CloudFormation. The code can be found in textract-post-processing-CFN.yml.
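As a rough sketch of what this invocation function does (not the exact code in textraction-inovcation.py; the environment variable names here are assumptions), the call looks like the following:
import os
import boto3

textract = boto3.client('textract')

def lambda_handler(event, context):
    # The S3 event notification carries the bucket and key of the uploaded document
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']
    # Role and SNS topic ARNs are injected as environment variables by AWS CloudFormation
    response = textract.start_document_text_detection(
        DocumentLocation={'S3Object': {'Bucket': bucket, 'Name': key}},
        NotificationChannel={
            'RoleArn': os.environ['TEXTRACT_ROLE_ARN'],           # assumed variable name
            'SNSTopicArn': os.environ['TEXTRACT_SNS_TOPIC_ARN'],  # assumed variable name
        })
    # The returned job ID is matched later against the SNS completion notification
    return {'JobId': response['JobId']}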
Postprocess the Amazon Textract response to segment paragraphs
When the document is submitted to Amazon Textract for text detection, we get pages, lines, words, or tables as a response. Amazon Textract also provides bounding box data, which is derived from the position of the text in the document. The bounding box data describes the text's position from the left and top of the page, the size of the characters, and the width of the text.
We can use the bounding box data to identify many kinds of segments in a document, for example, paragraphs in a whitepaper, movie reviews, auditing documents, or items on a menu. After these segments are identified, you can use Amazon Comprehend to find sentiment or key phrases to get insights from the document. For example, we can identify the technologies or algorithms used in a whitepaper or understand the sentiment of each reviewer for a movie.
In this section, we demonstrate the following techniques to identify the paragraphs:
- Identify paragraphs by font sizes by postprocessing the Amazon Textract response
- Identify paragraphs by indentation using Amazon Textract bounding box information
- Identify segments of the document or paragraphs based on the spacing between lines
- Identify the paragraphs or statements in the document based on full stops
Identify headers and paragraphs based on font size
The first technique we discuss is identifying headers and paragraphs based on the font size. If the headers in your document are bigger than the text, you can use font size for the extraction. For example, see the following sample document, which you can download from GitHub.
First, we need to extract all the lines from the Amazon Textract response and the corresponding bounding box data to understand the font sizes. Because the response has a lot of additional information, we extract only the lines and bounding box data. We separate the text with different font sizes and order them by size to determine headers and paragraphs. This header extraction is done in the get_headers_to_child_mapping method in lambda-helper.py.
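The following is a minimal sketch of the idea, not the repository's get_headers_to_child_mapping implementation; it assumes a list of Amazon Textract LINE blocks and uses the bounding box height as a proxy for font size:
def split_headers_and_paragraphs_by_font_size(blocks, ratio=1.2):
    """Classify LINE blocks as headers or body text by comparing each line's
    bounding box height against the median line height of the document."""
    lines = [b for b in blocks if b['BlockType'] == 'LINE']
    heights = sorted(b['Geometry']['BoundingBox']['Height'] for b in lines)
    median_height = heights[len(heights) // 2]
    headers, paragraphs = [], []
    for line in lines:
        height = line['Geometry']['BoundingBox']['Height']
        # Lines noticeably taller than the median are treated as headers
        (headers if height > ratio * median_height else paragraphs).append(line['Text'])
    return headers, paragraphs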
The step-by-step flow is as follows:
- A Lambda function gets triggered by every file drop event using the textract-invocation function.
- Amazon Textract completes the process of text detection and sends notification to the SNS topic.
- The blog-code-format1.py function gets triggered based on the SNS notification.
- Lambda uses the method get_text_results_from_textract from lambda-helper.py and extracts the complete text by calling Amazon Textract repeatedly for all the pages.
- After the text is extracted, the method get_text_with_required_info identifies the bounding box data and creates a mapping of line number, left indentation, and font size for each line of the extracted document text.
- We use the bounding box data to call the get_headers_to_child_mapping method to get the header information.
- After the header information is collected, we use get_headers_and_their_line_numbers to get the line numbers of the headers.
- After the headers and their line numbers are identified, the get_header_to_paragraph_data method gets the complete text for each paragraph and creates a mapping with each header and its corresponding paragraph text.
- With the header and paragraph information collected, the update_paragraphs_info_in_dynamodb method invokes Amazon Comprehend for each paragraph and stores the header, its corresponding paragraph text, and the sentiment information in DynamoDB.
Identify paragraphs based on indentation
As a second technique, we explain how to derive headers and paragraphs based on the left indentation of the text. In the following document, the headers are aligned at the left of the page, and all the paragraph text is indented a bit further to the right. You can download this sample PDF on GitHub.
In this document, the main difference between the headers and paragraphs is the left indentation. Similar to the process described earlier, first we need to get the line numbers and indentation information. After we have this information, all we have to do is separate the text based on the indentation and extract the text between two headers by using the line numbers.
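The following sketch illustrates the indentation idea under simplified assumptions (Amazon Textract LINE blocks as input); it is not the code in blog-code-format2.py:
def group_paragraphs_by_indentation(blocks, indent_threshold=0.02):
    """Treat lines aligned with the left margin as headers and attach every
    following, more indented line to the current header."""
    lines = [b for b in blocks if b['BlockType'] == 'LINE']
    min_left = min(b['Geometry']['BoundingBox']['Left'] for b in lines)
    mapping, current_header = {}, None
    for line in lines:
        left = line['Geometry']['BoundingBox']['Left']
        if left - min_left < indent_threshold:   # aligned with the margin: header
            current_header = line['Text']
            mapping[current_header] = []
        elif current_header is not None:         # indented: paragraph text
            mapping[current_header].append(line['Text'])
    return {header: ' '.join(text) for header, text in mapping.items()}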
The step-by-step flow is as follows:
- The textract-invocation Lambda function gets triggered whenever a file drop event occurs.
- Amazon Textract completes the process of text detection and sends a notification to the SNS topic.
- The blog-code-format2.py function gets triggered based on the SNS notification.
- Lambda uses the method get_text_results_from_textract from lambda-helper.py and extracts the complete text by calling Amazon Textract repeatedly for all the pages.
- After the text is extracted, we use the method get_text_with_required_info to identify the bounding box data and create a mapping of line number, left indentation, and font size for each line of the extracted document text.
- We use the bounding box data to call the get_header_info method, which returns the line numbers of all the headers.
- After the headers and their line numbers are identified, we use the get_header_to_paragraph_data method to get the complete text for each paragraph and create a mapping with each header and its corresponding paragraph text.
- With the header and paragraph information collected, we use the update_paragraphs_info_in_dynamodb method to invoke Amazon Comprehend for each paragraph and store the header, its corresponding paragraph text, and the sentiment information in DynamoDB.
Identify paragraphs based on line spacing
Similar to the preceding approach, we can use line spacing to identify the paragraphs on a page. We calculate line spacing using the top indentation: the difference between the top indentation of the current line and that of the next (or previous) line gives us the line spacing. Wherever the spacing is noticeably larger, we can start a new segment. The detailed code can be found on GitHub. You can also download the sample document from GitHub.
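As a simplified sketch of this approach (not the actual code on GitHub), you can compare the Top coordinates of consecutive lines and start a new segment when the gap is noticeably larger than the typical gap:
def split_segments_by_line_spacing(blocks, gap_factor=1.5):
    """Start a new segment whenever the vertical gap between consecutive
    lines is noticeably larger than the median gap on the page."""
    lines = [b for b in blocks if b['BlockType'] == 'LINE']
    if not lines:
        return []
    tops = [b['Geometry']['BoundingBox']['Top'] for b in lines]
    gaps = [t2 - t1 for t1, t2 in zip(tops, tops[1:])]
    typical_gap = sorted(gaps)[len(gaps) // 2] if gaps else 0.0
    segments, current = [], [lines[0]['Text']]
    for prev_top, line in zip(tops, lines[1:]):
        gap = line['Geometry']['BoundingBox']['Top'] - prev_top
        if gap > gap_factor * typical_gap:       # large gap: close the current segment
            segments.append(' '.join(current))
            current = []
        current.append(line['Text'])
    segments.append(' '.join(current))
    return segments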
Identify segments or paragraphs based on full stops
We also provide a technique to extract segments or paragraphs of the document based on full stops. Consider the preceding document as an example. After the Amazon Textract response is parsed and the lines are separated, we can iterate through each line; whenever we find a line that ends with a full stop, we can consider it the end of a paragraph, and any line thereafter is part of the next paragraph. This is another helpful technique to identify various segments of the document. The code to perform this can be found on GitHub.
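A minimal sketch of this technique, assuming the line texts have already been extracted from the Amazon Textract response, might look like the following:
def split_paragraphs_by_full_stops(line_texts):
    """Treat any line that ends with a full stop as the end of a paragraph."""
    paragraphs, current = [], []
    for text in line_texts:
        current.append(text)
        if text.strip().endswith('.'):
            paragraphs.append(' '.join(current))
            current = []
    if current:                                  # trailing text without a final full stop
        paragraphs.append(' '.join(current))
    return paragraphs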
Get the sentiment of paragraphs or segments of the page
As we described in the preceding processes, we can collect the text using various techniques. After the list of paragraphs is identified, we can use Amazon Comprehend to get the sentiment of each paragraph and the key phrases of the text. Amazon Comprehend provides intelligent insights based on the text, which is valuable to businesses because it lets you understand the sentiment of each individual segment.
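For example, a paragraph can be sent to Amazon Comprehend with a few lines of boto3 code; the helper below is a sketch, not the exact code used in the Lambda functions:
import boto3

comprehend = boto3.client('comprehend')

def get_paragraph_insights(paragraph_text):
    """Return the sentiment label and key phrases for one paragraph or segment."""
    sentiment = comprehend.detect_sentiment(Text=paragraph_text, LanguageCode='en')
    key_phrases = comprehend.detect_key_phrases(Text=paragraph_text, LanguageCode='en')
    return {
        'sentiment': sentiment['Sentiment'],     # POSITIVE, NEGATIVE, NEUTRAL, or MIXED
        'key_phrases': [kp['Text'] for kp in key_phrases['KeyPhrases']],
    }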
Query sentiments per paragraph in DynamoDB
After you process the file, you can query the results for each paragraph.
- On the DynamoDB console, choose Tables in the navigation pane.
You should see two tables:
- Textract-job-details – Contains information of the Amazon Textract processing job
- Textract-post-process-data – Contains the sentiment of each paragraph header
- Choose the Textract-post-process-data table.
You can see a mix of review sentiments.
- Scan or query the table to find the negative customer reviews.
The DynamoDB table data includes the file path, header, paragraph data, and sentiment for each paragraph, as shown in the following screenshot.
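If you prefer to query programmatically, a scan like the following returns only the negative reviews; the attribute names here are assumptions about the table schema:
import boto3
from boto3.dynamodb.conditions import Attr

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Textract-post-process-data')

# Filter the table for paragraphs whose sentiment was classified as NEGATIVE
response = table.scan(FilterExpression=Attr('sentiment').eq('NEGATIVE'))
for item in response['Items']:
    print(item.get('header'), '->', item.get('sentiment'))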
Conclusion
This post demonstrated how to extract and process data from a PDF and visualize it to review sentiments. We separated the headers and paragraphs via custom coding and ran sentiment analysis for each section separately.
Processing scanned image documents helps you uncover large amounts of data, which can provide meaningful insights. With managed ML services like Amazon Textract and Amazon Comprehend, you can gain insights into your previously undiscovered data. For example, you can build a custom application to get text from scanned legal documents, purchase receipts, and purchase orders.
If this post helps you or inspires you to solve a problem, we would love to hear about it! The code for this solution is available on the GitHub repo for you to use and extend. Contributions are always welcome!
About the Authors
Srinivasarao Daruna is a Data Lab Architect at Amazon Web Services and comes from a strong big data and analytics background. In his role, he helps customers with architecture and solutions to their business problems. He enjoys learning new things and solving complex problems for customers.
Mona Mona is a Senior AI/ML Specialist Solutions Architect based out of Arlington, VA. She works with public sector customers and helps them adopt machine learning on a large scale. She is passionate about NLP and ML explainability areas in AI/ML and has published multiple blog posts on these topics in the AWS AI/ML Blogs.
Divyesh Sah is a Sr. Enterprise Solutions Architect in AWS focusing on financial services customers, helping them with cloud transformation initiatives in the areas of migrations, application modernization, and cloud-native solutions. He has over 18 years of technical experience specializing in AI/ML, databases, big data, containers, and BI and analytics. Prior to AWS, he has experience in areas of sales, program management, and professional services.
Sandeep Kariro is an Enterprise Solutions Architect in the Telecom space. Having worked in cloud technologies for over 7 years, Sandeep provides strategic and tactical guidance to enterprise customers around the world. Sandeep also has in-depth experience in data-centric design solutions optimal for cloud deployments while keeping cost, security, compliance, and operations as top design principles. He loves traveling around the world and has traveled to several countries around the globe in the last decade.
Achieve 12x higher throughput and lowest latency for PyTorch Natural Language Processing applications out-of-the-box on AWS Inferentia
AWS customers like Snap, Alexa, and Autodesk have been using AWS Inferentia to achieve the highest performance and lowest cost on a wide variety of machine learning (ML) deployments. Natural language processing (NLP) models are growing in popularity for real-time and offline batched use cases. Our customers deploy these models in many applications like support chatbots, search, ranking, document summarization, and natural language understanding. With AWS Inferentia, you can also achieve the highest performance and lowest cost out of the box on open-source NLP models, without the need for customizations.
In this post, you learn how to maximize throughput for both real-time applications with tight latency budgets and batch processing where maximum throughput and lowest cost are key performance goals on AWS Inferentia. For this post, you deploy an NLP-based solution using HuggingFace Transformers pretrained BERT base models, with no modifications to the model and one-line code change at the PyTorch framework level. The solution achieves 12 times higher throughput at 70% lower cost on AWS Inferentia, as compared to deploying the same model on GPUs.
To maximize inference performance of Hugging Face models on AWS Inferentia, you use the AWS Neuron PyTorch framework integration. Neuron is a software development kit (SDK) that integrates with popular ML frameworks, such as TensorFlow and PyTorch, expanding the frameworks' APIs so you can run high-performance inference easily and cost-effectively on Amazon EC2 Inf1 instances. With a minimal code change, you can compile and optimize your pretrained models to run on AWS Inferentia. The Neuron team is consistently releasing updates with new features and increased model performance. With the v1.13 release, the performance of transformer-based models improved by an additional 10%–15%, pushing the boundaries of minimal latency and maximum throughput, even for larger NLP workloads.
To test out the Neuron SDK features yourself, check out the latest Utilizing Neuron Capabilities tutorials for PyTorch.
The NeuronCore Pipeline mode explained
Each AWS Inferentia chip, available through the Inf1 instance family, contains four NeuronCores. The different instance sizes provide 1 to 16 chips, totaling 64 NeuronCores on the largest instance size, the inf1.24xlarge. The NeuronCore is a compute unit that runs the operations of the Neural Network (NN) graph.
When you compile a model without Pipeline mode, the Neuron compiler optimizes the supported NN operations to run on a single NeuronCore. You can combine the NeuronCores into groups, even across AWS Inferentia chips, to run the compiled model. This configuration allows you to use multiple NeuronCores in data parallel mode across AWS Inferentia chips. This means that, even on the smallest instance size, four models can be active at any given time. A data parallel implementation of four (or more) models provides the highest throughput and lowest cost in most cases. This performance boost comes with minimal impact on latency, because AWS Inferentia is optimized to maximize throughput at small batch sizes.
With Pipeline mode, the Neuron compiler optimizes the partitioning and placement of a single NN graph across a requested number of NeuronCores, in a completely automatic process. This allows for efficient use of the hardware, because the NeuronCores in the pipeline run streaming inference requests and use a faster on-chip cache to hold the model weights. When one of the cores in the pipeline finishes processing a first request, it can start processing the following requests without waiting for the last core to complete processing the first request. This streaming pipeline inference increases per-core hardware utilization, even when running inference at small batch sizes in real-time applications, such as batch size 1.
Finding the optimum number of NeuronCores to fit a single large model is an empirical process. A good starting point is to use the following approximate formula, but we recommend experimenting with multiple configurations to achieve an optimum deployment:
neuronCore_pipeline_cores = 4*round(number-of-weights-in-model/(2E7))
The compiler directly takes the value of the neuroncore-pipeline-cores compilation flag, and that is all there is to it! To enable this feature, add the argument to the usual compilation flow of your desired framework.
In TensorFlow Neuron, use the following code:
import numpy as np
import tensorflow.neuron as tfn
example_input = np.zeros([1,224,224,3], dtype='float16')
tfn.saved_model.compile("<Path to your saved model>",
"<Path to write compiled model>/1",
model_feed_dict={'input_1:0' : example_input },
compiler_args = ['--neuroncore-pipeline-cores', '8'])
In PyTorch Neuron, use the following code:
import torch
import torch_neuron
model = torch.jit.load(<Path to your traced model>)
inputs = torch.zeros([1, 3, 224, 224], dtype=torch.float32)
model_compiled = torch.neuron.trace(model,
example_inputs=inputs,
compiler_args = ['--neuroncore-pipeline-cores', '8'])
For more information about the NeuronCore Pipeline and other Neuron features, see Neuron Features.
Run HuggingFace question answering models in AWS Inferentia
To run a Hugging Face BertForQuestionAnswering model on AWS Inferentia, you only need to add a single extra line of code to the usual Transformers implementation, besides importing the torch_neuron framework. You can adapt the forward pass as shown in the following snippet:
from transformers import BertTokenizer, BertForQuestionAnswering
import torch
import torch_neuron
tokenizer = BertTokenizer.from_pretrained('twmkn9/bert-base-uncased-squad2')
model = BertForQuestionAnswering.from_pretrained('twmkn9/bert-base-uncased-squad2',return_dict=False)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
neuron_model = torch.neuron.trace(model,
example_inputs = (inputs['input_ids'],inputs['attention_mask']),
verbose=1)
outputs = neuron_model(*(inputs['input_ids'],inputs['attention_mask']))
The one extra line in the preceding code is the call to the torch.neuron.trace() method. This call compiles the model and returns a compiled neuron_model() object that you can use to run inference over the original inputs, as shown in the last line of the script. If you want to test this example, see PyTorch Hugging Face pretrained BERT Tutorial.
The ability to compile and run inference using the pretrained models—or even fine-tuned, as in the preceding code—directly from the Hugging Face model repository is the initial step towards optimizing deployments in production. This first step can already produce two times greater performance with 70% lower cost when compared to a GPU alternative (which we discuss later in this post). When you combine NeuronCore Groups and Pipelines features, you can explore many other ways of packaging the models within a single Inf1 instance.
Optimize model deployment with NeuronCore Groups and Pipelines
The HuggingFace question answering deployment requires some of the model's parameters to be set a priori. Neuron is an ahead-of-time (AOT) compiler, which requires knowledge of the tensor shapes at compile time. For that, we define both the batch size and the sequence length for our model deployment. In the previous example, the Neuron framework inferred those from the example input passed on the trace call: (inputs['input_ids'], inputs['attention_mask']).
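For example, one way to fix both shapes before tracing is to pad the tokenized input to a constant sequence length. The following snippet is a sketch based on standard Hugging Face tokenizer options rather than part of the tutorial code:
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('twmkn9/bert-base-uncased-squad2')
# Pad and truncate to a fixed length so the traced graph has static shapes;
# batch size 1 and sequence length 128 match the benchmark configuration in this post
inputs = tokenizer("Who was Jim Henson?", "Jim Henson was a nice puppet",
                   padding='max_length', truncation=True, max_length=128,
                   return_tensors='pt')
example_inputs = (inputs['input_ids'], inputs['attention_mask'])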
Besides those two model parameters, you can set the compiler argument --neuroncore-pipeline-cores and the environment variable NEURONCORE_GROUP_SIZES to fine-tune how your model server consumes the NeuronCores on the AWS Inferentia chip.
For example, to maximize the number of concurrent server workers processing inference requests on a single AWS Inferentia chip (four cores), you set NEURONCORE_GROUP_SIZES="1,1,1,1" and set --neuroncore-pipeline-cores to 1, or leave it out as a compiler argument. The following image depicts this split. It's a full data parallel deployment.
For minimum latency, you can set --neuroncore-pipeline-cores to 4 and NEURONCORE_GROUP_SIZES="4" so that the process consumes all four NeuronCores at once, for a single model. The AWS Inferentia chip can then process four inference requests concurrently, as a stream. The model pipeline parallel deployment looks like the following figure.
Data parallel deployments favor throughput with multiple workers processing requests concurrently. The pipeline parallel, however, favors latency, but can also improve throughput due to the stream processing behavior. With these two extra parameters, you can fine-tune the serving application architecture according to the most important serving metrics for your use case.
Optimize for minimum latency: Multi-core pipeline parallel
Consider an application that requires minimum latency, such as sequence classification as part of an online chatbot workflow. As the user submits text, a model running on the backend classifies the intent of a single user input and is bounded by how fast it can infer. The model most likely has to provide responses to single input (batch size 1) requests.
The following table compares the performance and cost of Inf1 instances vs. the g4dn.xlarge (the most optimized GPU instance family for inference in the cloud) while running the HuggingFace BERT base model in a data parallel vs. pipeline parallel configuration at batch size 1. Looking at the 95th percentile (p95) latency, we get lower values in Pipeline mode for both the 4-core inf1.xlarge and the 16-core inf1.6xlarge instances. The best configuration among the Inf1 instances is the 16-core case, with a 58% reduction in latency, reaching 6 milliseconds.
| Instance | Batch Size | Inference Mode | NeuronCores per model | Throughput [sentences/sec] | Latency p95 [seconds] | Cost per 1M inferences | Throughput ratio [inf1/g4dn] | Cost ratio [inf1/g4dn] |
|---|---|---|---|---|---|---|---|---|
| inf1.xlarge | 1 | Data Parallel | 1 | 245 | 0.0165 | $0.42 | 1.6 | 43% |
| inf1.xlarge | 1 | Pipeline Parallel | 4 | 291 | 0.0138 | $0.35 | 2.0 | 36% |
| inf1.6xlarge | 1 | Data Parallel | 1 | 974 | 0.0166 | $0.54 | 6.5 | 55% |
| inf1.6xlarge | 1 | Pipeline Parallel | 16 | 1793 | 0.0069 | $0.30 | 12.0 | 30% |
| g4dn.xlarge | 1 | – | – | 149 | 0.0082 | $0.98 | – | – |
The model tested was the PyTorch version of HuggingFace bert-base-uncased, with sequence length 128. On AWS Inferentia, we compile the model to use all available cores and run full pipeline parallel. For the data parallel cases, we compile the models for a single core and configure the NeuronCore Groups to run a worker model per core. The GPU deployment used the same setup as AWS Inferentia, where the model was traced with TorchScript JIT and cast to mixed precision using PyTorch AMP Autocast.
Throughput also increased 1.84 times with Pipeline mode on AWS Inferentia, reaching 1,793 sentences per second, which is 12 times the throughput of g4dn.xlarge. The cost of inference on this configuration also favors the inf1.6xlarge over the most cost-effective GPU option, even at a higher cost per hour. The cost per million sentences is 70% lower based on Amazon Elastic Compute Cloud (Amazon EC2) On-Demand instance pricing. For latency sensitive applications that can’t utilize the full throughput of the inf1.6xlarge, or for smaller models such as BERT Small, we recommend using Pipeline mode on inf1.xlarge for a cost-effective deployment.
Optimize for maximum throughput: Single-core data parallel
An NLP use case that favors increased throughput over minimum latency is extractive question answering as part of a search and document retrieval pipeline. In this case, increasing the number of document sections processed in parallel can speed up the search result or improve the quality and breadth of searched answers. In such a setup, inferences are more likely to run in batches (batch size larger than 1).
To achieve maximum throughput, we found through experimentation the optimum batch size to be 6 on AWS Inferentia, for the same model tested before. On g4dn.xlarge, we ran batch 64 without running out of GPU memory. The following results help show how batch size 6 can provide 9.2 times more throughput on inf1.6xlarge at 61% lower cost, when compared to GPU.
| Instance | Batch Size | Inference Mode | NeuronCores per model | Throughput [sentences/sec] | Latency p95 [seconds] | Cost per 1M inferences | Throughput ratio [inf1/g4dn] | Cost ratio [inf1/g4dn] |
|---|---|---|---|---|---|---|---|---|
| inf1.xlarge | 6 | Data Parallel | 1 | 985 | 0.0249 | $0.10 | 2.3 | 30% |
| inf1.xlarge | 6 | Pipeline Parallel | 4 | 945 | 0.0259 | $0.11 | 2.2 | 31% |
| inf1.6xlarge | 6 | Data Parallel | 1 | 3880 | 0.0258 | $0.14 | 9.2 | 39% |
| inf1.6xlarge | 6 | Pipeline Parallel | 16 | 2302 | 0.0310 | $0.23 | 5.5 | 66% |
| g4dn.xlarge | 64 | – | – | 422 | 0.1533 | $0.35 | – | – |
In this application, cost considerations can also impact the final serving infrastructure design. The most cost-efficient way of running the batched inferences is using the inf1.xlarge instance. It achieves 2.3 times higher throughput than the GPU alternative, at 70% lower cost. Choosing between inf1.xlarge and inf1.6xlarge depends only on the main objective: minimum cost or maximum throughput.
To test out the NeuronCore Pipeline and Groups feature yourself, check out the latest Utilizing Neuron Capabilities tutorials for PyTorch.
Conclusion
In this post, we explored ways to optimize your NLP deployments using the NeuronCore Groups and Pipeline features. The native integration of the AWS Neuron SDK and PyTorch allowed you to compile and optimize the HuggingFace Transformers model to run on AWS Inferentia with minimal code change. By tuning the deployment architecture to be pipeline parallel, the BERT models achieve minimum latency for real-time applications, with 12 times higher throughput than a g4dn.xlarge alternative, while costing 70% less to run. For batch inferencing, we achieve 9.2 times higher throughput at 61% lower cost.
The Neuron SDK features described in this post also apply to other ML model types and frameworks. For more information, see the AWS Neuron Documentation.
Learn more about the AWS Inferentia chip and the Amazon EC2 Inf1 instances to get started running your own custom ML pipelines on AWS Inferentia using the Neuron SDK.
About the Authors
Fabio Nonato de Paula is a Sr. Manager, Solutions Architect for Annapurna Labs at AWS. He helps customers use AWS Inferentia and the AWS Neuron SDK to accelerate and scale ML workloads in AWS. Fabio is passionate about democratizing access to accelerated ML and putting deep learning models in production. Outside of work, you can find Fabio riding his motorcycle on the hills of Livermore valley or reading ComiXology.
Mahadevan Balasubramaniam is a Principal Solutions Architect for Autonomous Computing with nearly 20 years of experience in the area of physics infused deep learning, building and deploying digital twins for industrial systems at scale. Mahadevan obtained his PhD in Mechanical Engineering from Massachusetts Institute of Technology and has over 25 patents and publications to his credit.
How Endel’s AI-powered Focus soundscapes earned the backing of neuroscience
A new study has found that when compared to curated playlists and silence, personalized AI soundscapes generated by Alexa Fund company Endel are more effective in helping people focus.Read More
Creating an end-to-end application for orchestrating custom deep learning HPO, training, and inference using AWS Step Functions
Amazon SageMaker hyperparameter tuning provides a built-in solution for scalable training and hyperparameter optimization (HPO). However, for some applications (such as those with a preference for different HPO libraries or customized HPO features), we need custom machine learning (ML) solutions that allow retraining and HPO. This post offers a step-by-step guide to building a custom deep learning web application on AWS from scratch, following the Bring Your Own Container (BYOC) paradigm. We show you how to create a web application that enables non-technical end users to orchestrate different deep learning operations and perform advanced tasks such as HPO and retraining from a UI. You can modify the example solution to create a deep learning web application for any regression or classification problem.
Solution overview
Creating a custom deep learning web application consists of two main steps:
- ML component (focusing on how to dockerize a deep learning solution)
- Full-stack application to use ML component
In the first step, we need to create a custom Docker image and register it in Amazon Elastic Container Registry (Amazon ECR). Amazon SageMaker uses this image to run Bayesian HPO, training/retraining, and inference. Details of dockerizing deep learning code are described in Appendix A.
In the second step, we deploy a full-stack application with the AWS Serverless Application Model (AWS SAM). We use AWS Step Functions and AWS Lambda to orchestrate the different stages of the ML pipeline. Then we create the frontend application hosted in Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront. We also use AWS Amplify with Amazon Cognito for authentication. The following diagram shows the solution architecture.
After you deploy the application, you can authenticate with Amazon Cognito to trigger training or HPO jobs from the UI (Step 2 in the diagram). User requests go through Amazon API Gateway to Step Functions, which is responsible for orchestrating the training or HPO (Step 3). When it’s complete, you can submit a set of input parameters through the UI to API Gateway and Lambda to get the inference results (Step 4).
Deploy the application
For instructions on deploying the application, see the GitHub repo README file. This application consists of four main components:
- machine-learning – Contains SageMaker notebooks and scripts for building an ML Docker image (for HPO and training), discussed in Appendix A
- shared-infra – Contains AWS resources used by both the backend and frontend, defined in an AWS CloudFormation template
- backend – Contains the backend code: the APIs, a Step Functions state machine for retraining the model and running HPO, and an Amazon DynamoDB database
- frontend – Contains the UI code and infrastructure to host it.
Deployment details can be found here.
Create a step for HPO and training in Step Functions
Training a model for inference using Step Functions requires multiple steps:
- Create a training job.
- Create a model.
- Create an endpoint configuration.
- Optionally, delete the old endpoint.
- Create a new endpoint.
- Wait until the new endpoint is deployed.
Running HPO is simpler because we only create an HPO job and output the result to Amazon CloudWatch Logs. We orchestrate both model training and HPO using Step Functions. We can define these steps as a state machine, using the Amazon States Language (ASL). The following figure is the graphical representation of this state machine.
As the first step, we use the Choice state to decide whether to have an HPO or training mode using the following code:
"Mode Choice": {
"Type": "Choice",
"Choices": [
{
"Variable": "$.Mode",
"StringEquals": "HPO",
"Next": "HPOFlow"
}
],
"Default": "TrainingModelFlow"
},
Many states have names like Create a … Record and Update Status to…. These steps either create or update records in DynamoDB tables. The API queries these tables to return the status of the job and the ARNs of created resources (for example, the endpoint ARN for making an inference).
Each record has the Step Functions execution ID as a key and a field called status. As the state changes, the status moves from TRAINING_MODEL all the way to READY. The state machine records important outputs like the S3 model output location, model ARN, endpoint config ARN, and endpoint ARN.
For example, the following state runs right before endpoint deployment. The endpointConfigArn field is updated in the record.
"Update Status to DEPLOYING_ENDPOINT": {
"Type": "Task",
"Resource": "arn:aws:states:::dynamodb:updateItem",
"Parameters": {
"TableName": "${ModelTable}",
"Key": {
"trainingId": {
"S.$": "$$.Execution.Id"
},
"created": {
"S.$": "$$.Execution.StartTime"
}
},
"UpdateExpression": "SET #st = :ns, #eca = :cf",
"ExpressionAttributeNames": {
"#st" : "status",
"#eca" : "endpointConfigArn"
},
"ExpressionAttributeValues": {
":ns" : {
"S": "DEPLOYING_ENDPOINT"
},
":cf" : {
"S.$": "$.EndpointConfigArn"
}
}
},
"ResultPath": "$.taskresult",
"Next": "Deploy"
}
The following screenshot shows the content in the DynamoDB table.
In the preceding screenshot, the last job is still running. It finished training and creating an endpoint configuration, but hasn't deployed the endpoint yet. Therefore, there is no endpointArn in this record.
Another important state is Delete Old Endpoint. When you deploy an endpoint, an Amazon Elastic Compute Cloud (Amazon EC2) instance is running 24/7. As you train more models and create more endpoints, your inference cost grows linearly with the number of models. Therefore, we create this state to delete the old endpoint to reduce our cost.
The Delete Old Endpoint state calls a Lambda function that deletes the oldest endpoint if the number of endpoints exceeds the specified maximum. The default value is 5, but you can change it in the parameters of the CloudFormation template for the backend. Although you can change this value to any arbitrary number, SageMaker has a soft limit on how many endpoints you can have at a given time, and there is also a limit per instance type.
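A simplified sketch of such a cleanup function might look like the following; it is not the actual Lambda code in the repository, and the environment variable name is an assumption:
import os
import boto3

sagemaker = boto3.client('sagemaker')
MAX_ENDPOINTS = int(os.environ.get('MAX_ENDPOINTS', '5'))  # assumed variable name

def lambda_handler(event, context):
    # List endpoints oldest first and delete any endpoints above the allowed maximum
    endpoints = sagemaker.list_endpoints(
        SortBy='CreationTime', SortOrder='Ascending', MaxResults=100)['Endpoints']
    excess = max(len(endpoints) - MAX_ENDPOINTS, 0)
    for endpoint in endpoints[:excess]:
        sagemaker.delete_endpoint(EndpointName=endpoint['EndpointName'])
    return {'deleted': excess}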
Finally, we have states for updating the status to ERROR (one for HPO and another for model training). These states are used in the Catch field when any part of the flow throws an error. They update the DynamoDB record with the error and errorCause fields from Step Functions (see the following screenshot).
Although we can retrieve this data from the Step Functions APIs, we keep them in DynamoDB records so that the front end can retrieve all the related information in one place.
Automate state machine creation with AWS CloudFormation
We can use the state machine definition to recreate this state machine in any account. The template contains several variables, such as the DynamoDB table names for tracking job status or the Lambda functions that are triggered by states. The ARNs of these resources change with each deployment, so we use AWS SAM to inject these variables. You can find the state machine resource here. The following code is an excerpt that shows how we refer to the ASL file and how resource ARNs are passed:
TrainingModelStateMachine:
Type: AWS::Serverless::StateMachine
Properties:
DefinitionUri: statemachine/model-training.asl.json
DefinitionSubstitutions:
DeleteOldestEndpointFunctionArn: !GetAtt DeleteOldestEndpointFunction.Arn
CheckDeploymentStatusFunctionArn: !GetAtt CheckDeploymentStatusFunction.Arn
ModelTable: !Ref ModelTable
HPOTable: !Ref HPOTable
Policies:
- LambdaInvokePolicy:
FunctionName: !Ref DeleteOldestEndpointFunction
# .. the rest of policies is omitted for brevity
ModelTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: "trainingId"
AttributeType: "S"
- AttributeName: "created"
AttributeType: "S"
# .. the rest of policies is omitted for brevity
AWS::Serverless::StateMachine is an AWS SAM resource type. The DefinitionUri refers to the state machine definition we discussed in the last step. The definition has some variables, such as ${ModelTable}. See the following code:
"Update Status to READY": {
"Type": "Task",
"Resource": "arn:aws:states:::dynamodb:updateItem",
"Parameters": {
"TableName": "${ModelTable}",
"Key": {
…
When we run the AWS SAM CLI, the variables in this template are replaced by the key-value pairs declared in DefinitionSubstitutions. In this case, ${ModelTable} is replaced by the table name of the ModelTable resource created by AWS CloudFormation.
This way, the template is reusable and can be redeployed multiple times without any change to the state machine definition.
Build an API for the application
This application has five APIs:
- POST /infer – Retrieves the inference result for the given model
- GET /model – Retrieves all model information
- POST /model – Starts a new model training job with data in the given S3 path
- GET /hpo – Retrieves all HPO job information
- POST /hpo – Starts a new HPO job with data in the given S3 path
We create each API with an AWS SAM template. The following code is a snippet of the POST /model endpoint:
StartTrainingFunction:
Type: AWS::Serverless::Function
Properties:
CodeUri: functions/api/
Handler: start_job.post
Runtime: python3.7
Environment:
Variables:
MODE: "MODEL"
TRAINING_STATE_MACHINE_ARN: !Ref TrainingModelStateMachine
# Other variables removed for brevity
Policies:
- AWSLambdaExecute
- DynamoDBCrudPolicy:
TableName: !Ref ModelTable
- Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- states:StartExecution
Resource: !Ref TrainingModelStateMachine
Events:
PostModel:
Type: Api
Properties:
Path: /model
Method: post
Auth:
Authorizer: MyCognitoAuth
Layers:
- !Ref APIDependenciesLayer
We utilize several features from the AWS SAM template in this Lambda function. First, we pass the created state machine ARN via environment variables, using !Ref. Because the ARN isn't available until stack creation time, we use this method to avoid hardcoding.
Second, we follow the security best practice of least privilege by using DynamoDBCrudPolicy in the AWS SAM policy template to grant permission to modify the data in the specific DynamoDB table. For the permissions that aren't available as a policy template (states:StartExecution), we define the policy statement directly.
Third, we control access to this API by setting the Authorizer property. In the following example code, we allow only users authenticated by an Amazon Cognito user pool to call this API. The authorizer is defined in the global section because it's shared by all functions.
Globals:
# Other properties are omitted for brevity…
Api:
Auth:
Authorizers:
MyCognitoAuth:
UserPoolArn: !GetAtt UserPool.Arn # Can also accept an array
Finally, we use the Layers section to install the API dependencies. This reduces the code package size and the build time during the development cycle. The referenced APIDependenciesLayer is defined as follows:
APIDependenciesLayer:
Type: AWS::Serverless::LayerVersion
Properties:
LayerName: APIDependencies
Description: Dependencies for API
ContentUri: dependencies/api
CompatibleRuntimes:
- python3.7
Metadata:
BuildMethod: python3.7 # This line tells SAM to install the library before packaging
Other APIs follow the same pattern. With this setup, our backend resources are managed in a .yaml file that you can version in Git and redeploy in any other account.
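To make the flow concrete, the following is a hypothetical sketch of what a handler such as start_job.post could do with the injected state machine ARN; the payload fields are assumptions rather than the repository's actual code:
import json
import os
import uuid
import boto3

stepfunctions = boto3.client('stepfunctions')

def post(event, context):
    """Start the training state machine for a POST /model request."""
    body = json.loads(event['body'])
    execution = stepfunctions.start_execution(
        stateMachineArn=os.environ['TRAINING_STATE_MACHINE_ARN'],
        name=f'training-{uuid.uuid4()}',
        # The state machine's Choice state inspects $.Mode to pick HPO or model training
        input=json.dumps({'Mode': os.environ.get('MODE', 'MODEL'),
                          'DataPath': body.get('dataPath')}))  # assumed payload field
    return {'statusCode': 200,
            'body': json.dumps({'executionArn': execution['executionArn']})}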
Build the front end and call the API
We build our front end using the React framework, which is hosted in an S3 bucket and CloudFront. We use the following template to deploy those resources and a shell script to build the static site and upload to the bucket.
We use the Amplify library to reduce coding efforts. We create a config file to specify which Amazon Cognito user pool to sign in to and which API Gateway URL to use. The example config file can be found here. The installation script generates the actual deployment file from the template and updates the pool ARN and URL automatically.
When we first open the website, we’re prompted to sign in with an Amazon Cognito user.
This authentication screen is generated by the Amplify library’s withAuthenticator() function in the App.js file. This function wraps the existing component and checks if the user has already logged in to the configured Amazon Cognito pool. If not, it shows the login screen before showing the component. See the following code:
import {withAuthenticator} from '@aws-amplify/ui-react';
// ...create an App that extends React.Component
// Wrap the application inside the Authenticator to require user to log in
export default withAuthenticator(withRouter(App));
After we sign in, the app component is displayed.
We can upload data to an S3 bucket and start HPO or train a new model. The UI also uses Amplify to upload data to Amazon S3. Amplify handles the authentication details for us, so we can easily upload files using the following code:
import { Storage} from "aws-amplify";
// … React logic to get file object when we click the Upload button
const stored = await Storage.vault.put(file.name, file, {
contentType: file.type,
});
// stored.key will be passed to API for training
After we train a model, we can switch to inference functionality by using the drop-down menu on the top right.
On the next page, we select the model endpoint that has the READY status. Then we need to change the number of inputs. The number of inputs has to be the same as the number of features in the input file used to train the model. For example, if your input file has 19 features and one target value, we need to enter the first 18 inputs. For the last input, we have a range for the values from 1.1, 1.2, 1.3, all the way to 3.0. The purpose of allowing the last input to vary in a certain range is to understand the effects of changing that parameter on the model outcomes.
When we choose Predict, the front end calls the API to retrieve the result and display it in a graph.
The graph shows the target value as a function of values for the last input. Here, we can discover how the last input affects the target value, for the first given 18 inputs.
In the code, we also use Amplify to call the APIs. Just like in the Amazon S3 scenario, Amplify handles the authentication automatically, so we can call the API with the following code:
import {API} from "aws-amplify";
// Code to retrieve inputs and the selected endpoint from drop down box
const inferResult = await API.post("pyapi", `infer`, {
body: {
input: inputParam,
modelName: selectedEndpoint,
range: rangeInput
}
});
Summary
In this post, we learned how to create a web application for performing custom deep learning model training and HPO using SageMaker. We learned how to orchestrate training, HPO, and endpoint creation using Step Functions. Finally, we learned how to create APIs and a web application to upload training data to Amazon S3, start and monitor training and HPO jobs, and perform inference.
Appendix A: Dockerize custom deep learning models on SageMaker
When working on deep learning projects, you can either use pre-built Docker images in SageMaker or build your own custom Docker image from scratch. In the latter case, you can still use SageMaker for training, hosting, and inference. This method allows developers and data scientists to package software into standardized units that run consistently on any platform that supports Docker. Containerization packages the code, runtime, system tools, system libraries, and settings all in the same place, isolating it from its surroundings and ensuring a consistent runtime regardless of where it runs.
When you develop a model in SageMaker, you can provide separate Docker images for the training code and the inference code, or you can combine them into a single Docker image. In this post, we build a single image to support both training and hosting.
We build on the approach used in the post Train and host Scikit-Learn models in Amazon SageMaker by building a Scikit Docker container, which uses the following example container folder to explain how SageMaker runs Docker containers for training and hosting your own algorithms. We strongly recommend you first review the aforementioned post, because it contains many details about how to run Docker containers on SageMaker. In this post, we skip the details of how containers work on SageMaker and focus on how to create them from an existing notebook that runs locally. If you use the folder structure that was described in preceding references, the key files are shown in the following container:
container/
scripts/
nginx.conf
predictor.py
serve
train
wsgi.py
Dockerfile
We use Flask to launch an API to serve HTTP requests for inference. If you choose to run Flask for your service, you can use the nginx.conf, serve, and wsgi.py files from the SageMaker sample notebooks as is.
Therefore, you only need to modify three files:
- Dockerfile
- train
- predictor.py
We provide the local version of the code and briefly explain how to transform it into train and predictor.py formats that you can use inside a Docker container. We recommend you write your local code in a format that can be easily used in a Docker container. For training, there is not a significant difference between the two versions (local vs. Docker). However, the inference code requires significant changes.
Before going into details of how to prepare the train and predictor.py files, let’s look at the Dockerfile, which is a modified version of the previous work:
FROM python:3.6
RUN apt-get -y update && apt-get install -y --no-install-recommends \
    wget \
    python \
    nginx \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# Install all of the packages
RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py
# install code dependencies
COPY "requirements.txt" .
RUN ["pip", "install", "-r", "requirements.txt"]
RUN pip list
# Env Variables
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/ml:${PATH}"
# Set up the program in the image
COPY scripts /opt/ml
WORKDIR /opt/ml
We use a different name (scripts) for the folder that contains the train and inference scripts.
SageMaker stores external model artifacts, training data, and other configuration information available to Docker containers in /opt/ml/. This is also where SageMaker processes model artifacts. We create a local /opt/ml/ folder to make local testing similar to what happens inside the Docker container.
The easiest way to understand how to modify your local code (in a Jupyter or SageMaker notebook) for use in a Docker container is to compare it to what it looks like inside a Docker container.
The following notebook contains code (along with some dummy data after cloning the GitHub repo) for running Bayesian HPO and training for a deep learning regression model using Keras (with a TensorFlow backend) and Hyperopt library (for Bayesian HPO).
The notebook contains an example of running Bayesian HPO or training (referred to as Final Training in the code) for regression problems. Although HPO and Final Training are very similar processes, we treat these two differently in the code.
HPO and Final Training setup and parameters are quite similar. However, they have some important differences:
- Only a fraction of the training data is used for HPO to reduce the runtime (controlled by the parameter used_data_percentage in the code).
- Each iteration of HPO is run for only a very small number of epochs.
- The constructed networks allow different numbers of layers for the deep network (the optimal number of layers is found using HPO).
- The number of nodes for each layer can be optimized.
For example, for a neural network with six dense layers, the network structure (controlled by user input) looks like the following visualizations.
The following image shows a neural network with five dense layers.
The following image shows a neural network with five dense layers, which also has dropout and batch normalization.
You have the option to include both dropout and batch normalization, only one of them, or neither in your network.
The notebook loads the required libraries (Section 1) and preprocesses the data (Section 2). In Section 3, we define the train_final_model function to perform the final training, and in Section 4, we define the objective function to perform Bayesian HPO. In both functions (Sections 3 and 4), we define the network architectures (in the case of HPO in Section 4, we do this iteratively). You can evaluate the training and HPO using any metric. In this example, we minimize the 95% quantile of the mean absolute error. You can modify this based on your interests.
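The following is a condensed sketch of how a Hyperopt objective for this metric could be structured; build_model, x_train, y_train, x_val, and y_val are hypothetical placeholders, and the actual notebook defines a richer search space:
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

search_space = {
    'num_layers': hp.choice('num_layers', [3, 4, 5, 6]),
    'num_nodes': hp.choice('num_nodes', [32, 64, 128]),
    'dropout': hp.uniform('dropout', 0.0, 0.5),
}

def objective(params):
    # build_model is a hypothetical helper that assembles the Keras network
    model = build_model(**params)
    model.fit(x_train, y_train, epochs=5, verbose=0)  # few epochs per HPO iteration
    abs_err = np.abs(model.predict(x_val).ravel() - y_val)
    # Minimize the 95% quantile of the absolute error, as described in this example
    return {'loss': np.quantile(abs_err, 0.95), 'status': STATUS_OK}

best = fmin(fn=objective, space=search_space, algo=tpe.suggest,
            max_evals=50, trials=Trials())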
Running this notebook up to Section 9 performs a training or HPO, based on the flag that you set up in the first line of code in Section 5 (currently defaulted to run the Final Training):
final_training = True
Every section in the notebook up to Section 9, except for Sections 5 and 8, is used as is (with no change) in the train script for the Docker container. Sections 5 and 8 have to be prepared differently for the Docker container. In Section 5, we define parameters for the Final Training or HPO. In Section 8, we simply define the directories that contain the training data and the directories that the training or HPO artifacts are saved to. We create an opt/ml folder to mimic what happens in the Docker container, but we keep it outside of our main folder because it isn't required when Dockerizing.
To make the script in this notebook work in a Docker container, we need to modify Sections 5, 8, and 9. You can compare the differences in the train script. We have two new sections in the train script called 5-D and 8-D; D stands for the Docker version of the code (the order of sections has changed). Section 8-D defines directory names for storing the model artifacts, so you can use it with no changes in your future work. Section 5-D (the equivalent of Section 5 in the local notebook) might require modification for other use cases, because this is where we define the hyperparameters that are ingested by our Docker container.
As an example of how to add a hyperparameter in Section 5-D, check the variable nb_epochs, which specifies the number of epochs that each HPO job runs:
nb_epochs = trainingParams.get('nb_epochs', None)
if nb_epochs is not None:
    nb_epochs = int(nb_epochs)
else:
    nb_epochs = 5
For your use case, you might need to process these parameters differently. For instance, the optimizer is passed in as the string representation of a list, so we need an eval function to turn it into a proper format, and we use the default value ['adam'] when it's not provided. See the following code:
optimizer = trainingParams.get('optimizer', None)
if optimizer is not None:
    optimizer = eval(optimizer)
else:
    optimizer = ['adam']
Now let's see how to write the inference code in local and Docker mode, in Sections 10 and 11 of the notebook. This isn't how you would normally write inference code locally, but if you're working with Docker containers, we recommend writing your inference code as shown in Sections 10 and 11 so that you can quickly use it inside a Docker container.
In Section 10, we define the model_path to load the saved model using the loadmodel function. We use ScoringService to keep the local code similar to what we have in predictor.py. You might need to modify this class depending on which framework you're using to create your model. This class has been modified from its original form to work with a Keras model.
Then we define transform_data to prepare the data sent for inference. Here, we load scaler.pkl to normalize our data in the same way we normalized our training data.
In Section 11, we define the transformation function, which performs inference by reading the df_test.csv file. We removed the column names (headers) in this file from the data. Running the transformation function returns an array of predictions.
To use this code in a Docker container, we need to modify the path in Section 10:
prefix = '../opt/ml/'
The code is modified to the following line (line 38) in predictor.py:
prefix = '/opt/ml/'
This is because in local mode, we keep the model artifact outside of the Docker files. We need to include an extra section (Section 10b-D in predictor.py), which wasn't used in the notebook. This section can be used as is for other Docker containers as well. The next section that needs to be included in predictor.py is Section 11-D (a modified version of Section 11 in the notebook).
After making these changes, you can build your Docker container, push it to Amazon ECR, and test whether it can complete a training job and perform inference. You can use the following notebook to test your Docker container.
About the Authors
Mehdi E. Far is a Sr Machine Learning Specialist SA within the Manufacturing and Industrial Global and Strategic Accounts organization. He helps customers build Machine Learning and Cloud solutions for their challenging problems.
Chadchapol Vittavutkarnvej is a Specialist Solutions Architect Builder based in Amsterdam, Netherlands.
How Muthoni Ngatia’s childhood in Kenya led her to a career in economics
The Amazon economist says lessons from her mother taught her a lot about how the world works, and why economics plays such a vital role.Read More