Transfer learning for TensorFlow text classification models in Amazon SageMaker

Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning (ML) practitioners get started training and deploying ML models quickly. You can use these algorithms and models for both supervised and unsupervised learning. They can process various types of input data, including tabular, image, and text.

This post is the third in a series on the new built-in algorithms in SageMaker. In the first post, we showed how SageMaker provides a built-in algorithm for image classification. In the second post, we showed how SageMaker provides a built-in algorithm for object detection. Today, we announce that SageMaker provides a new built-in algorithm for text classification using TensorFlow. This supervised learning algorithm supports transfer learning for many pre-trained models available in TensorFlow Hub. It takes a piece of text as input and outputs the probability for each of the class labels. You can fine-tune these pre-trained models using transfer learning even when a large corpus of text isn’t available. It’s available through the SageMaker built-in algorithms, as well as through the SageMaker JumpStart UI in Amazon SageMaker Studio. For more information, refer to Text Classification and the example notebook Introduction to JumpStart – Text Classification.

Text Classification with TensorFlow in SageMaker provides transfer learning on many pre-trained models available in TensorFlow Hub. A classification layer is attached to the pre-trained TensorFlow Hub model according to the number of class labels in the training data. The classification layer consists of a dropout layer and a dense (fully connected) layer with a 2-norm regularizer, initialized with random weights. Model training exposes hyperparameters for the dropout rate of the dropout layer and the L2 regularization factor for the dense layer. Then either the whole network, including the pre-trained model, or only the top classification layer can be fine-tuned on the new training data. In this transfer learning mode, training can be achieved even with a smaller dataset.
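
To make the model structure concrete, the following is a minimal Keras sketch of this classification head, assuming an illustrative TensorFlow Hub text embedding and illustrative hyperparameter values; it is not the algorithm's actual training script:

import tensorflow as tf
import tensorflow_hub as hub

num_classes = 2              # set from the number of class labels found in the training data
dropout_rate = 0.2           # hyperparameter: dropout rate of the dropout layer (illustrative value)
l2_factor = 0.0001           # hyperparameter: L2 regularization factor for the dense layer (illustrative value)
train_only_top_layer = True  # if True, the pre-trained embedding is frozen and only the head is trained

model = tf.keras.Sequential([
    # Illustrative TF Hub text embedding; the algorithm supports many such pre-trained models
    hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim128/2",
                   input_shape=[], dtype=tf.string,
                   trainable=not train_only_top_layer),
    tf.keras.layers.Dropout(dropout_rate),
    tf.keras.layers.Dense(num_classes,
                          kernel_regularizer=tf.keras.regularizers.l2(l2_factor),
                          activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])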

How to use the new TensorFlow text classification algorithm

This section describes how to use the TensorFlow text classification algorithm with the SageMaker Python SDK. For information on how to use it from the Studio UI, see SageMaker JumpStart.

The algorithm supports transfer learning for the pre-trained models listed in TensorFlow models. Each model is identified by a unique model_id. The following code shows how to fine-tune the BERT base model, identified by model_id tensorflow-tc-bert-en-uncased-L-12-H-768-A-12-2, on a custom training dataset. For each model_id, to launch a SageMaker training job through the Estimator class of the SageMaker Python SDK, you must fetch the Docker image URI, training script URI, and pre-trained model URI through the utility functions provided in SageMaker. The training script URI contains all of the necessary code for data processing, loading the pre-trained model, model training, and saving the trained model for inference. The pre-trained model URI contains the pre-trained model architecture definition and the model parameters, and is specific to the particular model. The pre-trained model tarballs have been pre-downloaded from TensorFlow and saved with the appropriate model signature in Amazon Simple Storage Service (Amazon S3) buckets, so that the training job runs in network isolation. See the following code:

import sagemaker
from sagemaker import image_uris, model_uris, script_uris
from sagemaker.estimator import Estimator

model_id, model_version = "tensorflow-tc-bert-en-uncased-L-12-H-768-A-12-2", "*"
training_instance_type = "ml.p3.2xlarge"
# Retrieve the docker image
train_image_uri = image_uris.retrieve(model_id=model_id, model_version=model_version, image_scope="training", instance_type=training_instance_type, region=None, framework=None)

# Retrieve the training script
train_source_uri = script_uris.retrieve(model_id=model_id, model_version=model_version, script_scope="training")

# Retrieve the pre-trained model tarball for transfer learning
train_model_uri = model_uris.retrieve(model_id=model_id, model_version=model_version, model_scope="training")

sess = sagemaker.Session()                 # default SageMaker session
aws_role = sagemaker.get_execution_role()  # execution role passed to the training job
aws_region = sess.boto_region_name

output_bucket = sess.default_bucket()
output_prefix = "jumpstart-example-tensorflow-tc-training"
s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"

With these model-specific training artifacts, you can construct an object of the Estimator class:

# Create SageMaker Estimator instance
tf_tc_estimator = Estimator(
    role=aws_role,
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    model_uri=train_model_uri,
    entry_point="transfer_learning.py",
    instance_count=1,
    instance_type=training_instance_type,
    max_run=360000,
    hyperparameters=hyperparameters,
    output_path=s3_output_location,
)

Next, for transfer learning on your custom dataset, you might need to change the default values of the training hyperparameters, which are listed in Hyperparameters. You can fetch a Python dictionary of these hyperparameters with their default values by calling hyperparameters.retrieve_default, update them as needed, and then pass them to the Estimator class. Note that the default values of some of the hyperparameters are different for different models. For large models, the default batch size is smaller and the train_only_top_layer hyperparameter is set to True. The hyperparameter train_only_top_layer defines which model parameters change during the fine-tuning process. If train_only_top_layer is True, the parameters of the classification layers change and the rest of the parameters remain constant during the fine-tuning process. On the other hand, if train_only_top_layer is False, all of the parameters of the model are fine-tuned. See the following code:

from sagemaker import hyperparameters

# Retrieve the default hyperparameters for fine-tuning the model
hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)

# [Optional] Override default hyperparameters with custom values
hyperparameters["epochs"] = "5"

We provide SST2 as a default dataset for fine-tuning the models. The dataset contains positive and negative movie reviews, and has been downloaded from TensorFlow under the Apache 2.0 license. The following code specifies the default training dataset hosted in an S3 bucket.

# Sample training data is available in this bucket
training_data_bucket = f"jumpstart-cache-prod-{aws_region}"
training_data_prefix = "training-datasets/SST2/"

training_dataset_s3_path = f"s3://{training_data_bucket}/{training_data_prefix}"

Finally, to launch the SageMaker training job for fine-tuning the model, call .fit on the object of the Estimator class, while passing the Amazon S3 location of the training dataset:

# Launch a SageMaker Training job by passing s3 path of the training data
tf_tc_estimator.fit({"training": training_dataset_s3_path}, logs=True)

For more information about how to use the new SageMaker TensorFlow text classification algorithm for transfer learning on a custom dataset, deploy the fine-tuned model, run inference on the deployed model, and deploy the pre-trained model as is without first fine-tuning on a custom dataset, see the following example notebook: Introduction to JumpStart – Text Classification.

Input/output interface for the TensorFlow text classification algorithm

You can fine-tune each of the pre-trained models listed in TensorFlow Models to any given dataset made up of text sentences with any number of classes. The pre-trained model attaches a classification layer to the Text Embedding model and initializes the layer parameters to random values. The output dimension of the classification layer is determined based on the number of classes detected in the input data. The objective is to minimize classification errors on the input data. The model returned by fine-tuning can be further deployed for inference.

The following instructions describe how the training data should be formatted for input to the model:

  • Input – A directory containing a data.csv file. Each row of the first column should have an integer class label between 0 and one less than the number of classes. Each row of the second column should have the corresponding text data.
  • Output – A fine-tuned model that can be deployed for inference or further trained using incremental training. A file mapping class indexes to class labels is saved along with the models.

The following is an example of an input CSV file. Note that the file should not have any header. The file should be hosted in an S3 bucket with a path similar to the following: s3://bucket_name/input_directory/. Note that the trailing / is required.

|0 |hide new secretions from the parental units|
|0 |contains no wit , only labored gags|
|1 |that loves its characters and communicates something rather beautiful about human nature|
|...|...|
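
The following is a minimal sketch (an assumption, not from the original post) of preparing such a data.csv file and uploading it to Amazon S3; bucket_name and input_directory are placeholders:

import boto3
import pandas as pd

examples = [
    (0, "hide new secretions from the parental units"),
    (0, "contains no wit , only labored gags"),
    (1, "that loves its characters and communicates something rather beautiful about human nature"),
]
# First column: integer class label, second column: text; written without a header row
pd.DataFrame(examples).to_csv("data.csv", header=False, index=False)

# Upload to the input directory; the training channel should point to s3://bucket_name/input_directory/
boto3.client("s3").upload_file("data.csv", "bucket_name", "input_directory/data.csv")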

Inference with the TensorFlow text classification algorithm

The generated models can be hosted for inference and support text as the application/x-text content type. The output contains the probability values, class labels for all of the classes, and the predicted label corresponding to the class index with the highest probability encoded in the JSON format. The model processes a single string per request and outputs only one line. The following is an example of a JSON format response:

accept: application/json;verbose
{"probabilities": [prob_0, prob_1, prob_2, ...],
 "labels": [label_0, label_1, label_2, ...],
 "predicted_label": predicted_label}

If accept is set to application/json, then the model only outputs probabilities. For more details on training and inference, see the sample notebook Introduction to JumpStart – Text Classification.
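
The following is a minimal sketch (an assumption) of querying a deployed endpoint with the application/x-text content type using Boto3; the endpoint name is a placeholder for an endpoint created by deploying the fine-tuned model:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="jumpstart-tensorflow-tc-endpoint",   # placeholder: the endpoint hosting the fine-tuned model
    ContentType="application/x-text",
    Accept="application/json;verbose",                 # use "application/json" to receive probabilities only
    Body="a truly charming movie with a wonderful cast".encode("utf-8"),
)
result = json.loads(response["Body"].read())
print(result["predicted_label"], result["probabilities"])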

Use SageMaker built-in algorithms through the JumpStart UI

You can also use SageMaker TensorFlow text classification and any of the other built-in algorithms with a few clicks via the JumpStart UI. JumpStart is a SageMaker feature that lets you train and deploy built-in algorithms and pre-trained models from various ML frameworks and model hubs through a graphical interface. Furthermore, it lets you deploy fully-fledged ML solutions that string together ML models and various other AWS services to solve a targeted use case.

Following are two videos that show how you can replicate the same fine-tuning and deployment process we just went through with a few clicks via the JumpStart UI.

Fine-tune the pre-trained model

Here is the process to fine-tune the same pre-trained text classification model.

Deploy the fine-tuned model

After model training is finished, you can directly deploy the model to a persistent, real-time endpoint with one click.

Conclusion

In this post, we announced the launch of the SageMaker TensorFlow text classification built-in algorithm. We provided example code showing how to perform transfer learning on a custom dataset with this algorithm, using a pre-trained model from TensorFlow Hub.

For more information, check out the documentation and the example notebook Introduction to JumpStart – Text Classification.


About the authors

Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS and SODA conferences.

João Moura is an AI/ML Specialist Solutions Architect at Amazon Web Services. He is mostly focused on NLP use-cases and helping customers optimize deep learning model training and deployment. He is also an active proponent of low-code ML solutions and ML-specialized hardware.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana Champaign. He is an active researcher in machine learning and statistical inference and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Read More

Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Rafi Nizam

3D artist Rafi Nizam has worn many hats since starting his career as a web designer more than two decades ago, back when “designing for the web was still wild,” as he put it.

He’s now becoming a leader in the next wave of creation — using extended reality and virtual production — with the help of NVIDIA Omniverse, a platform for building and connecting custom 3D pipelines.

The London-based showrunner, creative consultant and entertainment executive previously worked at advertising agencies and led creative teams at Sony Pictures, BBC and NBCUniversal.

In addition to being an award-winning independent animator, director, character designer and storyteller who serves as chief creative officer at Masterpiece Studio, he’s head of story at game developer Opis Group, and showrunner at Lunar-X, a next-gen entertainment company.

Plus, in recent years, he’s taken on what he considers his most important role of all — being a father. And his art is now often inspired by family.

“Being present in the moment with my children and observing the world without preconceptions often sparks ideas for me,” Nizam said.

His animated shorts have so far focused on themes of self care and finding stillness amidst chaos. He’s at work on a new computer-graphics-animated series, ArtSquad, in which fun-loving, vibrant 3D characters form a band, playing instruments made of classroom objects and solving problems through the power of art.

“The myriad of 3D apps in my animation pipeline can sync and come together in Omniverse using the Universal Scene Description framework,” he said. “This interoperability allows me to be 10x more productive when visualizing my show concepts — and I’ve cut my outsourcing costs by 50%, as Omniverse enables me to render, lookdev, lay out scenes and manipulate cameras by myself.”

From Concept to Creation

Nizam said he often starts his projects with “good ol’ pencil and paper on a Post-it note or napkin, whenever inspiration strikes.”

He then takes his ideas to a drawing desk, where he creates a simple sketch before honing in on pre-production using digital content-creation apps like Adobe Illustrator, Adobe Photoshop and Procreate.

Nizam next creates 3D production assets from his 2D sketches, manipulating them in virtual reality using Adobe Substance 3D Modeler software.

“Things start to move pretty rapidly from here,” he said, “because VR is such an intuitive way to make 3D assets. Plus, rigging and texturing in the Masterpiece Studio creative suite and Adobe Substance 3D can be near automatic.”

The artist uses the Omniverse Create XR spatial computing app to lay out his scenes in VR. He blocks out character actions, designs sets and finalizes textures using Unreal Engine 5, Autodesk Maya and Blender software.

Performance capture through Perception Neuron Studio quickly gets Nizam close to final animation. And with the easily extensible USD framework, Nizam brings his 3D assets into the Omniverse Create app for rapid look development. Here he enhances character animation with built-in hyperrealistic physics and renders final shots in real time.

“Omniverse offers me an easy entry point to USD-based workflows, live collaboration across disciplines, rapid visualization, real-time rendering, an accessible physics engine and the easy modification of preset simulations,” Nizam said. “I can’t wait to get back in and try out more ideas.”

At home, Nizam uses an NVIDIA Studio workstation powered by an NVIDIA RTX A6000 GPU. To create on the go, the artist turns to his NVIDIA Studio laptop from ASUS, equipped with a GeForce RTX 3060 GPU.

In addition, his entire workflow is accelerated by NVIDIA Studio, a platform of NVIDIA RTX and AI-accelerated creator apps, Studio Drivers and a suite of exclusive creative tools.

When not creating transmedia projects and franchises for his clients, Nizam can be found mentoring young creators for Sony Talent League, playing make believe with his children or chilling with his two cats, Hamlet and Omelette.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. Connect your workflows to Omniverse with software from Adobe, Autodesk, Epic Games, Maxon, Reallusion and more.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

The post Meet the Omnivore: Indie Showrunner Transforms Napkin Doodles Into Animated Shorts With NVIDIA Omniverse appeared first on NVIDIA Blog.

Read More

Intelligent document processing with AWS AI and Analytics services in the insurance industry: Part 2

In Part 1 of this series, we discussed intelligent document processing (IDP), and how IDP can accelerate claims processing use cases in the insurance industry. We discussed how we can use AWS AI services to accurately categorize claims documents along with supporting documents. We also discussed how to extract data from various types of documents in an insurance claims package, such as forms and tables, or specialized documents such as invoices, receipts, or ID documents. We looked into the challenges of legacy document processes, which are time-consuming, error-prone, expensive, and difficult to run at scale, and how you can use AWS AI services to help implement your IDP pipeline.

In this post, we walk you through advanced IDP features for document extraction, querying, and enrichment. We also look at how to further use the extracted structured information from claims data to get insights using AWS Analytics and visualization services, and we highlight how structured data extracted through IDP can help detect fraudulent claims using AWS Analytics services.

Intelligent document processing with AWS AI and Analytics services in the insurance industry

Solution overview

The following diagram illustrates the phases of IDP using AWS AI services. In Part 1, we discussed the first three phases of the IDP workflow. In this post, we expand on the extraction step and the remaining phases, which include integrating IDP with AWS Analytics services.

The different phases of intelligent document processing in insurance industry

We use these analytics services for further insights and visualizations, and to detect fraudulent claims using structured, normalized data from IDP. The following diagram illustrates the solution architecture.

IDP architecture diagram

The phases we discuss in this post use the following key services:

  • Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning (ML) models that have been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses.
  • AWS Glue is a part of the AWS Analytics services stack, and is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, ML, and application development.
  • Amazon Redshift is another service in the Analytics stack. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud.

Prerequisites

Before you get started, refer to Part 1 for a high-level overview of the insurance use case with IDP and details about the data capture and classification stages.

For more information regarding the code samples, refer to our GitHub repo.

Extraction phase

In Part 1, we saw how to use Amazon Textract APIs to extract information like forms and tables from documents, and how to analyze invoices and identity documents. In this post, we enhance the extraction phase with Amazon Comprehend to extract default and custom entities specific to custom use cases.

Insurance carriers often come across dense text in insurance claims applications, such as a patient’s discharge summary letter (see the following example image). It can be difficult to automatically extract information from such types of documents where there is no definite structure. To address this, we can use the following methods to extract key business information from the document:

Discharge summary sample

Extract default entities with the Amazon Comprehend DetectEntities API

We run the following code on the sample medical transcription document:

comprehend = boto3.client('comprehend') 

response = comprehend.detect_entities( Text=text, LanguageCode='en')

# print entities from the response JSON

for entity in response['Entities']:
    print(f'{entity["Type"]} : {entity["Text"]}')

The following screenshot shows a collection of entities identified in the input text. The output has been shortened for the purposes of this post. Refer to the GitHub repo for a detailed list of entities.

Extract custom entities with Amazon Comprehend custom entity recognition

The response from the DetectEntities API includes the default entities. However, we’re interested in knowing specific entity values, such as the patient’s name (denoted by the default entity PERSON), or the patient’s ID (denoted by the default entity OTHER). To recognize these custom entities, we train an Amazon Comprehend custom entity recognizer model. We recommend following the comprehensive steps on how to train and deploy a custom entity recognition model in the GitHub repo.

After we deploy the custom model, we can use the helper function get_entities() to retrieve custom entities like PATIENT_NAME and PATIENT_ID from the API response:

import pandas as pd

def get_entities(text):
    try:
        # detect custom entities using the custom entity recognizer endpoint
        entities_custom = comprehend.detect_entities(LanguageCode="en",
                          Text=text, EndpointArn=ER_ENDPOINT_ARN)
        df_custom = pd.DataFrame(entities_custom["Entities"],
                                 columns=['Text', 'Type', 'Score'])
        df_custom = df_custom.drop_duplicates(subset=['Text']).reset_index()
        return df_custom
    except Exception as e:
        print(e)

# call the get_entities() function
response = get_entities(text)
# print the response from the get_entities() function
print(response)

The following screenshot shows our results.

Enrichment phase

In the document enrichment phase, we perform enrichment functions on healthcare-related documents to draw valuable insights. We look at the following types of enrichment:

  • Extract domain-specific language – We use Amazon Comprehend Medical to extract medical-specific ontologies like ICD-10-CM, RxNorm, and SNOMED CT
  • Redact sensitive information – We use Amazon Comprehend to redact personally identifiable information (PII), and Amazon Comprehend Medical for protected health information (PHI) redaction

Extract medical information from unstructured medical text

Documents such as medical providers’ notes and clinical trial reports include dense medical text. Insurance claims carriers need to identify the relationships among the extracted health information from this dense text and link them to medical ontologies like ICD-10-CM, RxNorm, and SNOMED CT codes. This is very valuable in automating claim capture, validation, and approval workflows for insurance companies to accelerate and simplify claim processing. Let’s look at how we can use the Amazon Comprehend Medical InferICD10CM API to detect possible medical conditions as entities and link them to their codes:

cm_json_data = comprehend_med.infer_icd10_cm(Text=text)

print("nMedical codingn========")

for entity in cm_json_data["Entities"]:
      for icd in entity["ICD10CMConcepts"]:
           description = icd['Description']
           code = icd["Code"]
           print(f'{description}: {code}')

For the input text, which we can pass in from the Amazon Textract DetectDocumentText API, the InferICD10CM API returns the following output (the output has been abbreviated for brevity).

Extract medical information from unstructured medical text

Similarly, we can use the Amazon Comprehend Medical InferRxNorm API to identify medications and the InferSNOMEDCT API to detect medical entities within healthcare-related insurance documents.
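
For example, a minimal sketch of the InferRxNorm call, following the same pattern as the ICD-10-CM example above (the response parsing assumes the Entities/RxNormConcepts structure returned by the API):

import boto3

comprehend_med = boto3.client("comprehendmedical")
rx_response = comprehend_med.infer_rx_norm(Text=text)

# print each detected medication mention with its linked RxNorm concepts
for entity in rx_response["Entities"]:
    for concept in entity["RxNormConcepts"]:
        print(f'{concept["Description"]}: {concept["Code"]}')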

Perform PII and PHI redaction

Insurance claims packages require a lot of privacy compliance and regulations because they contain both PII and PHI data. Insurance carriers can reduce compliance risk by redacting information like policy numbers or the patient’s name.

Let’s look at an example of a patient’s discharge summary. We use the Amazon Comprehend DetectPiiEntities API to detect PII entities within the document and protect the patient’s privacy by redacting these entities:

resp = call_textract(input_document = f's3://{data_bucket}/idp/textract/dr-note-sample.png')
text = get_string(textract_json=resp, output_type=[Textract_Pretty_Print.LINES])

# call Amazon Comprehend Detect PII Entities API
entity_resp = comprehend.detect_pii_entities(Text=text, LanguageCode="en") 

pii = []
for entity in entity_resp['Entities']:
      pii_entity={}
      pii_entity['Type'] = entity['Type']
      pii_entity['Text'] = text[entity['BeginOffset']:entity['EndOffset']]
      pii.append(pii_entity)
print(pii)

We get the following PII entities in the response from the detect_pii_entities() API:

response from the detect_pii_entities() API

We can then redact the detected PII entities in the documents by using the bounding box geometry of the entities within the document. For that, we use a helper tool called amazon-textract-overlayer. For more information, refer to Textract-Overlayer. The following screenshots compare a document before and after redaction.
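
As a lighter-weight, text-only alternative to the bounding-box redaction described above (an assumption, not part of the original walkthrough), you can mask each detected entity directly in the extracted string using the character offsets returned by detect_pii_entities:

def redact_text(text, entities, mask_char="X"):
    # replace every character inside each entity's offset span with the mask character
    redacted = list(text)
    for entity in entities:
        for i in range(entity["BeginOffset"], entity["EndOffset"]):
            redacted[i] = mask_char
    return "".join(redacted)

redacted_text = redact_text(text, entity_resp["Entities"])
print(redacted_text)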

Similar to the Amazon Comprehend DetectPiiEntities API, we can also use the DetectPHI API to detect PHI data in the clinical text being examined. For more information, refer to Detect PHI.

Review and validation phase

In the document review and validation phase, we can now verify whether the claim package meets the business’s requirements, because we have all the information collected from the documents in the package in the earlier stages. We can do this by introducing a human in the loop to review and validate all the fields, or by using an auto-approval process for low-dollar claims, before sending the package to downstream applications. We can use Amazon Augmented AI (Amazon A2I) to automate the human review process for insurance claims processing.

Now that we have all required data extracted and normalized from claims processing using AI services for IDP, we can extend the solution to integrate with AWS Analytics services such as AWS Glue and Amazon Redshift to solve additional use cases and provide further analytics and visualizations.

Detect fraudulent insurance claims

In this post, we implement a serverless architecture where the extracted and processed data is stored in a data lake and is used to detect fraudulent insurance claims using ML. We use Amazon Simple Storage Service (Amazon S3) to store the processed data. We can then use AWS Glue or Amazon EMR to cleanse the data and add additional fields to make it consumable for reporting and ML. After that, we use Amazon Redshift ML to build a fraud detection ML model. Finally, we build reports using Amazon QuickSight to get insights into the data.

Set up the Amazon Redshift external schema

For the purpose of this example, we have created a sample dataset that emulates the output of an ETL (extract, transform, and load) process, and we use the AWS Glue Data Catalog as the metadata catalog. First, we create a database named idp_demo in the Data Catalog and an external schema in Amazon Redshift called idp_insurance_demo (see the following code). We use an AWS Identity and Access Management (IAM) role to grant permissions to the Amazon Redshift cluster to access Amazon S3 and Amazon SageMaker. For more information about how to set up this IAM role with least privilege, refer to Cluster and configure setup for Amazon Redshift ML administration.

CREATE EXTERNAL SCHEMA idp_insurance_demo
FROM DATA CATALOG
DATABASE 'idp_demo' 
IAM_ROLE '<<<your IAM Role here>>>'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

Create Amazon Redshift external table

The next step is to create an external table in Amazon Redshift referencing the S3 location where the file is located. In this case, our file is a comma-separated text file. We also want to skip the header row from the file, which can be configured in the table properties section. See the following code:

create external table idp_insurance_demo.claims(id INTEGER,
date_of_service date,
patients_address_city VARCHAR,
patients_address_state VARCHAR,
patients_address_zip VARCHAR,
patient_status VARCHAR,
insured_address_state VARCHAR,
insured_address_zip VARCHAR,
insured_date_of_birth date,
insurance_plan_name VARCHAR,
total_charges DECIMAL(14,4),
fraud VARCHAR,
duplicate varchar,
invalid_claim VARCHAR
)
row format delimited
fields terminated by ','
stored as textfile
location '<<<S3 path where file is located>>>'
table properties ( 'skip.header.line.count'='1');

Create training and test datasets

After we create the external table, we prepare our dataset for ML by splitting it into a training set and a test set. We create a new external table called claims_train, which consists of all records with ID <= 850,000 from the claims table. This is the training set that we train our ML model on.

CREATE EXTERNAL TABLE
idp_insurance_demo.claims_train
row format delimited
fields terminated by ','
stored as textfile
location '<<<S3 path where file is located>>>/train'
table properties ( 'skip.header.line.count'='1')
AS select * from idp_insurance_demo.claims where id <= 850000

We create another external table called claims_test, which consists of all records with ID > 850,000, to be the test set that we test the ML model on:

CREATE EXTERNAL TABLE
idp_insurance_demo.claims_test
row format delimited
fields terminated by ','
stored as textfile
location '<<<S3 path where file is located>>>/test'
table properties ( 'skip.header.line.count'='1')
AS select * from idp_insurance_demo.claims where id > 850000

Create an ML model with Amazon Redshift ML

Now we create the model using the CREATE MODEL command (see the following code). We select the relevant columns from the claims_train table that can determine a fraudulent transaction. The goal of this model is to predict the value of the fraud column; therefore, fraud is added as the prediction target. After the model is trained, it creates a function named insurance_fraud_model. This function is used for inference while running SQL statements to predict the value of the fraud column for new records.

CREATE MODEL idp_insurance_demo.insurance_fraud_model
FROM (SELECT 
total_charges ,
fraud ,
duplicate,
invalid_claim
FROM idp_insurance_demo.claims_train
)
TARGET fraud
FUNCTION insurance_fraud_model
IAM_ROLE '<<<your IAM Role here>>>'
SETTINGS (
S3_BUCKET '<<<S3 bucket where model artifacts will be stored>>>'
);

Evaluate ML model metrics

After we create the model, we can run queries to check the accuracy of the model. We use the insurance_fraud_model function to predict the value of the fraud column for new records. Run the following query on the claims_test table to create a confusion matrix:

SELECT 
fraud,
idp_insurance_demo.insurance_fraud_model (total_charges, duplicate, invalid_claim) as fraud_calculated,
count(1)
FROM idp_insurance_demo.claims_test
GROUP BY fraud, fraud_calculated;

Detect fraud using the ML model

After we create the new model, as new claims data is inserted into the data warehouse or data lake, we can use the insurance_fraud_model function to calculate the fraudulent transactions. We do this by first loading the new data into a temporary table. Then we use the insurance_fraud_model function to calculate the fraud flag for each new transaction and insert the data along with the flag into the final table, which in this case is the claims table.
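
As an illustration of that flow, the following is a minimal sketch (an assumption, not from the original post) that scores newly loaded claims with the insurance_fraud_model function from Python using the redshift_connector package; the connection details and the staging_new_claims table are placeholders:

import redshift_connector

# Connection details are placeholders
conn = redshift_connector.connect(
    host="<<<redshift-cluster-endpoint>>>",
    database="dev",
    user="<<<user>>>",
    password="<<<password>>>",
)
cursor = conn.cursor()
# staging_new_claims is a placeholder temporary table holding newly loaded claims
cursor.execute("""
    SELECT id,
           idp_insurance_demo.insurance_fraud_model(total_charges, duplicate, invalid_claim) AS fraud_flag
    FROM staging_new_claims
""")
for claim_id, fraud_flag in cursor.fetchall():
    print(claim_id, fraud_flag)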

Visualize the claims data

When the data is available in Amazon Redshift, we can create visualizations using QuickSight. We can then share the QuickSight dashboards with business users and analysts. To create the QuickSight dashboard, you first need to create an Amazon Redshift dataset in QuickSight. For instructions, refer to Creating a dataset from a database.

After you create the dataset, you can create a new analysis in QuickSight using the dataset. The following are some sample reports we created:

  • Total number of claims by state, grouped by the fraud field – This chart shows us the proportion of fraudulent transactions compared to the total number of transactions in a particular state.
  • Sum of the total dollar value of the claims, grouped by the fraud field – This chart shows us the proportion of dollar amount of fraudulent transactions compared to the total dollar amount of transactions in a particular state.
  • Total number of transactions per insurance company, grouped by the fraud field – This chart shows us how many claims were filed for each insurance company and how many of them are fraudulent.

  • Total sum of fraudulent transactions by state displayed on a US map – This chart just shows the fraudulent transactions and displays the total charges for those transactions by state on the map. The darker shade of blue indicates higher total charges. We can further analyze this by city within that state and zip codes with the city to better understand the trends.

Clean up

To prevent incurring future charges to your AWS account, delete the resources that you provisioned in the setup by following the instructions in the Cleanup section in our repo.

Conclusion

In this two-part series, we saw how to build an end-to-end IDP pipeline with little or no ML experience. We explored a claims processing use case in the insurance industry and how IDP can help automate this use case using services such as Amazon Textract, Amazon Comprehend, Amazon Comprehend Medical, and Amazon A2I. In Part 1, we demonstrated how to use AWS AI services for document extraction. In Part 2, we extended the extraction phase and performed data enrichment. Finally, we extended the structured data extracted from IDP for further analytics, and created visualizations to detect fraudulent claims using AWS Analytics services.

We recommend reviewing the security sections of the Amazon Textract, Amazon Comprehend, and Amazon A2I documentation and following the guidelines provided. To learn more about the pricing of the solution, review the pricing details of Amazon Textract, Amazon Comprehend, and Amazon A2I.


About the Authors

Chinmayee Rane is an AI/ML Specialist Solutions Architect at Amazon Web Services. She is passionate about applied mathematics and machine learning. She focuses on designing intelligent document processing solutions for AWS customers. Outside of work, she enjoys salsa and bachata dancing.


Uday Narayanan is an Analytics Specialist Solutions Architect at AWS. He enjoys helping customers find innovative solutions to complex business challenges. His core areas of focus are data analytics, big data systems, and machine learning. In his spare time, he enjoys playing sports, binge-watching TV shows, and traveling.


Sonali Sahu is leading the Intelligent Document Processing AI/ML Solutions Architect team at Amazon Web Services. She is a passionate technophile and enjoys working with customers to solve complex problems using innovation. Her core area of focus is artificial intelligence and machine learning for intelligent document processing.

Read More

Intelligent document processing with AWS AI services in the insurance industry: Part 1

The goal of intelligent document processing (IDP) is to help your organization make faster and more accurate decisions by applying AI to process your paperwork. This two-part series highlights the AWS AI technologies that insurance companies can use to speed up their business processes. These AI technologies can be used across insurance use cases such as claims, underwriting, customer correspondence, contracts, or handling disputes resolutions. This series focuses on a claims processing use case in the insurance industry; for more information about the fundamental concepts of the AWS IDP solution, refer to the following two-part series.

Claims processing consists of multiple checkpoints in a workflow that is required to review a claim, verify its authenticity, and determine the correct financial responsibility to adjudicate it. Insurance companies go through these checkpoints before adjudicating a claim. If a claim successfully passes all these checkpoints without issues, the insurance company approves it and processes any payment. However, they may require additional supporting information to adjudicate a claim. This process is often manual, making it expensive, error-prone, and time-consuming. Insurance customers can use AWS AI services to automate the document processing pipeline for claims processing.

In this two-part series, we take you through how you can automate and intelligently process documents at scale using AWS AI services for an insurance claims processing use case.

Intelligent document processing with AWS AI and Analytics services in the insurance industry

Solution overview

The following diagram represents each stage that we typically see in an IDP pipeline. We walk through each of these stages and how they connect to the steps involved in a claims application process, starting from when an application is submitted, to investigating and closing the application. In this post, we cover the technical details of the data capture, classification, and extraction stages. In Part 2, we expand the document extraction stage and continue to document enrichment, review and verification, and extend the solution to provide analytics and visualizations for a claims fraud use case.

The different phases of intelligent document processing in insurance industry

The following architecture diagram shows the different AWS services used during the phases of the IDP pipeline according to different stages of a claims processing application.

IDP architecture diagram

The solution uses the following key services:

  • Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. It goes beyond simple optical character recognition (OCR) to identify, understand, and extract data from forms and tables. Amazon Textract uses ML to read and process any type of document, accurately extracting text, handwriting, tables, and other data with no manual effort.
  • Amazon Comprehend is a natural language processing (NLP) service that uses ML to extract insights from text. Amazon Comprehend can detect entities such as person, location, date, quantity, and more. It can also detect the dominant language and personally identifiable information (PII), and classify documents into their relevant class.
  • Amazon Augmented AI (Amazon A2I) is an ML service that makes it easy to build the workflows required for human review. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. Amazon A2I integrates both with Amazon Textract and Amazon Comprehend to provide the ability to introduce human review or validation within the IDP workflow.

Prerequisites

In the following sections, we walk through the different services relating to the first three phases of the architecture, i.e., the data capture, classification and extraction phases.

Refer to our GitHub repository for full code samples along with the document samples in the claims processing packet.

Data capture phase

Claims and its supporting documents can come through various channels, such as fax, email, an admin portal, and more. You can store these documents in a highly scalable and durable storage like Amazon Simple Storage Service (Amazon S3). These documents can be of various types, such as PDF, JPEG, PNG, TIFF, and more. Documents can come in various formats and layouts, and can come from different channels to the data store.
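
For example, a minimal sketch of landing incoming documents in Amazon S3 with Boto3 might look like the following; the bucket, prefix, and file names are placeholders:

import boto3

s3 = boto3.client("s3")
# upload each incoming claim document to the data capture location in S3
for local_file in ["cms1500-claim.pdf", "drivers-license.png", "insurance-invoice.png"]:
    s3.upload_file(local_file, "claims-data-bucket", f"idp/incoming/{local_file}")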

Classification phase

In the document classification stage, we combine Amazon Textract and Amazon Comprehend to classify the documents stored during the data capture stage. We use custom classification in Amazon Comprehend to organize documents into the classes that we defined for the claims processing packet. Custom classification is also helpful for automating the document verification process and identifying any missing documents from the packet. There are two steps in custom classification, as shown in the architecture diagram:

  1. Extract text using Amazon Textract from all the documents in the data storage to prepare training data for the custom classifier.
  2. Train an Amazon Comprehend custom classification model (also called a document classifier) to recognize the classes of interest based on the text content.

Document classification of insurance claims packet

After the Amazon Comprehend custom classification model is trained, we can use its real-time endpoint to classify documents. Amazon Comprehend returns all classes of documents with a confidence score linked to each class in an array of key-value pairs (Doc_name, Confidence_score). We recommend going through the detailed document classification sample code on GitHub.
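
The following is a minimal sketch (an assumption, not from the original post) of calling the classifier's real-time endpoint with the ClassifyDocument API; the endpoint ARN is a placeholder, and document_text stands for the text extracted by Amazon Textract:

import boto3

comprehend = boto3.client("comprehend")
response = comprehend.classify_document(
    Text=document_text,   # text extracted from the document by Amazon Textract
    EndpointArn="arn:aws:comprehend:<<<region>>>:<<<account-id>>>:document-classifier-endpoint/<<<name>>>",
)

# print each predicted class with its confidence score
for doc_class in response["Classes"]:
    print(doc_class["Name"], doc_class["Score"])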

Extraction phase

In the extraction phase, we extract data from documents using Amazon Textract and Amazon Comprehend. For this post, we use the following sample documents in the claims processing packet: a Centers for Medicare & Medicaid Services (CMS)-1500 claim form, a driver’s license and insurance ID, and an invoice.

Extract data from a CMS-1500 claim form

The CMS-1500 form is the standard claim form used by a non-institutional provider or supplier to bill Medicare carriers.

It’s important to process the CMS-1500 form accurately; otherwise, it can slow down the claims process or delay payment by the carrier. With the Amazon Textract AnalyzeDocument API, we can speed up the extraction process with higher accuracy to extract text from documents in order to understand further insights within the claim form. The following is a sample CMS-1500 claim form.

A CMS1500 Claim form

We now use the AnalyzeDocument API to extract two FeatureTypes, FORMS and TABLES, from the document:

from IPython.display import display, JSON
form_resp = textract.analyze_document(Document={'S3Object':{"Bucket": data_bucket, "Name": cms_key}}, FeatureTypes=['FORMS', 'TABLES'])

# print tables
print(get_string(textract_json=form_resp, output_type=[Textract_Pretty_Print.TABLES], table_format=Pretty_Print_Table_Format.fancy_grid))

# using our constructed helper function - values returned as a dictionary

display(JSON(getformkeyvalue(form_resp), root="Claim Form"))

The following results have been shortened for better readability. For more detailed information, see our GitHub repo.

The FORMS extraction is identified as key-value pairs.

The TABLES extraction contains cells, merged cells, and column headers within a detected table in the claim form.

Tables extraction from CMS1500 form

Extract data from ID documents

For identity documents like an insurance ID, which can have different layouts, we can use the Amazon Textract AnalyzeDocument API. We use the FeatureType FORMS as the configuration for the AnalyzeDocument API to extract the key-value pairs from the insurance ID (see the following sample):

Run the following code:

ins_form_resp = textract.analyze_document(Document={'S3Object':{"Bucket": data_bucket, "Name": ins_card_key}}, FeatureTypes=['FORMS'])

# using our constructed helper function - values returned as a dictionary

display(JSON(getformkeyvalue(ins_form_resp), root="Insurance card"))

We get the key-value pairs in the result array, as shown in the following screenshot.

For ID documents like a US driver’s license or US passport, Amazon Textract provides specialized support to automatically extract key terms without the need for templates or formats, unlike what we saw earlier for the insurance ID example. With the AnalyzeID API, businesses can quickly and accurately extract information from ID documents that have different templates or formats. The AnalyzeID API returns two categories of data types:

  • Key-value pairs available on the ID such as date of birth, date of issue, ID number, class, and restrictions
  • Implied fields on the document that may not have explicit keys associated with them, such as name, address, and issuer

We use the following sample US driver’s license from our claims processing packet.

Run the following code:

ID_resp = textract.analyze_id(DocumentPages=[{'S3Object':{"Bucket": data_bucket, "Name": key}}])
# once again using the textract response parser
from trp.trp2_analyzeid import TAnalyzeIdDocument, TAnalyzeIdDocumentSchema

t_doc = TAnalyzeIdDocumentSchema().load(ID_resp)

list_of_results = t_doc.get_values_as_list()
print(tabulate([x[1:3] for x in list_of_results]))

The following screenshot shows our result.

From the results screenshot, you can observe that certain keys are present that were not in the driver’s license itself. For example, Veteran is not a key found on the license; however, it’s a pre-populated key-value pair that AnalyzeID supports, due to the differences found in licenses between states.

Extract data from invoices and receipts

Similar to the AnalyzeID API, the AnalyzeExpense API provides specialized support for invoices and receipts to extract relevant information such as vendor name, subtotal and total amounts, and more from any format of invoice documents. You don’t need any template or configuration for extraction. Amazon Textract uses ML to understand the context of ambiguous invoices as well as receipts.

The following is a sample medical insurance invoice.

A sample of insurance invoice

We use the AnalyzeExpense API to see a list of standardized fields. Fields that aren’t recognized as standard fields are categorized as OTHER:

expense_resp = textract.analyze_expense(Document={'S3Object':{"Bucket": data_bucket, "Name": invc_key}})

# print invoice summary

print(get_expensesummary_string(textract_json=expense_resp, table_format=Pretty_Print_Table_Format.fancy_grid))

# print invoice line items

print(get_expenselineitemgroups_string(textract_json=expense_resp, table_format=Pretty_Print_Table_Format.fancy_grid))

We get the following list of fields as key-value pairs (see screenshot on the left) and the entire row of individual line items purchased (see screenshot on the right) in the results.

Conclusion

In this post, we showcased the common challenges in claims processing, and how we can use AWS AI services to automate an intelligent document processing pipeline to automatically adjudicate a claim. We saw how to classify documents into various document classes using an Amazon Comprehend custom classifier, and how to use Amazon Textract to extract unstructured, semi-structured, structured, and specialized document types.

In Part 2, we expand on the extraction phase with Amazon Textract. We also use Amazon Comprehend pre-defined entities and custom entities to enrich the data, and show how to extend the IDP pipeline to integrate with analytics and visualization services for further processing.

We recommend reviewing the security sections of the Amazon Textract, Amazon Comprehend, and Amazon A2I documentation and following the guidelines provided. To learn more about the pricing of the solution, review the pricing details of Amazon Textract, Amazon Comprehend, and Amazon A2I.


About the Authors

Chinmayee Rane is an AI/ML Specialist Solutions Architect at Amazon Web Services. She is passionate about applied mathematics and machine learning. She focuses on designing intelligent document processing solutions for AWS customers. Outside of work, she enjoys salsa and bachata dancing.


Sonali Sahu is leading the Intelligent Document Processing AI/ML Solutions Architect team at Amazon Web Services. She is a passionate technophile and enjoys working with customers to solve complex problems using innovation. Her core area of focus is artificial intelligence and machine learning for intelligent document processing.


Tim Condello is a Senior AI/ML Specialist Solutions Architect at Amazon Web Services. His focus is natural language processing and computer vision. Tim enjoys taking customer ideas and turning them into scalable solutions.

Read More

DALL·E API Now Available in Public Beta

Starting today, developers can begin building apps with the DALL·E API.

Developers can now integrate DALL·E directly into their apps and products through our API. More than 3 million people are already using DALL·E to extend their creativity and speed up their workflows, generating over 4 million images a day. Developers can start building with this same technology in a matter of minutes.




#generations
curl https://api.openai.com/v1/images/generations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "prompt": "a photo of a happy corgi puppy sitting and facing forward, studio light, longshot",
    "n": 1,
    "size": "1024x1024"
   }'
#edits
curl https://api.openai.com/v1/images/edits \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F image="@/Users/openai/happy_corgi.png" \
  -F mask="@/Users/openai/mask.png" \
  -F prompt="a photo of a happy corgi puppy with fancy sunglasses on sitting and facing forward, studio light, longshot" \
  -F n=1 \
  -F size="1024x1024"
#variations
curl https://api.openai.com/v1/images/variations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F image="@/Users/openai/corgi_with_sunglasses.png" \
  -F n=4 \
  -F size="1024x1024"

State-of-the-art image generation

DALL·E’s flexibility allows users to create and edit original images ranging from the artistic to the photorealistic. DALL·E excels at following natural language descriptions so users can plainly describe what they want to see. As our research evolves, we will continue to bring the state of the art into the API, including advances in image quality, latency, scalability, and usability.

Built-in moderation

Incorporating the trust & safety lessons we’ve learned while deploying DALL·E to 3 million artists and users worldwide, developers can ship with confidence knowing that built-in mitigations—like filters for hate symbols and gore—will handle the challenging aspects of moderation. As a part of OpenAI’s commitment to responsible deployment, we will continue to make trust & safety a top priority so that developers can focus on building.

DALL·E applications

We’ve worked closely with a few early customers who have already built DALL·E into their apps and products across a variety of use cases.

Microsoft Bing

Microsoft is bringing DALL·E to a new graphic design app called Designer, which helps users create professional quality social media posts, invitations, digital postcards, graphics, and more.

Microsoft is also integrating DALL·E in Bing and Microsoft Edge with Image Creator, allowing users to create images if web results don’t return what they’re looking for.

CALA

CALA is the world’s first fashion and lifestyle operating system. CALA unifies the entire design process—from product ideation all the way through e-commerce enablement and order fulfillment—into a single digital platform. Powered by DALL·E, CALA’s new artificial intelligence tools will allow users to generate new design ideas from natural text descriptions or uploaded reference images.

Mixtiles

Mixtiles is a fast-growing photo startup. They use software and an easy hanging experience to help millions of people create beautiful photo walls. Mixtiles uses the DALL·E API to create and frame emotionally resonating artwork, by guiding users through a creative process that captures childhood memories, dream destinations, and more.

We’re excited to see what our customers will do with DALL·E and what creative ideas they’ll come up with.

Build with OpenAI’s powerful models

DALL·E joins GPT-3, Embeddings, and Codex in our API platform, adding a new building block that developers can use to create novel experiences and applications. All API customers can use the DALL·E API today.

OpenAI

Beyond Tabula Rasa: Reincarnating Reinforcement Learning

Reinforcement learning (RL) is an area of machine learning that focuses on training intelligent agents using related experiences so they can learn to solve decision making tasks, such as playing video games, flying stratospheric balloons, and designing hardware chips. Due to the generality of RL, the prevalent trend in RL research is to develop agents that can efficiently learn tabula rasa, that is, from scratch without using previously learned knowledge about the problem. However, in practice, tabula rasa RL systems are typically the exception rather than the norm for solving large-scale RL problems. Large-scale RL systems, such as OpenAI Five, which achieves human-level performance on Dota 2, undergo multiple design changes (e.g., algorithmic or architectural changes) during their developmental cycle. This modification process can last months and necessitates incorporating such changes without re-training from scratch, which would be prohibitively expensive. 

Furthermore, the inefficiency of tabula rasa RL research can exclude many researchers from tackling computationally-demanding problems. For example, the quintessential benchmark of training a deep RL agent on 50+ Atari 2600 games in ALE for 200M frames (the standard protocol) requires 1,000+ GPU days. As deep RL moves towards more complex and challenging problems, the computational barrier to entry in RL research will likely become even higher.

To address the inefficiencies of tabula rasa RL, we present “Reincarnating Reinforcement Learning: Reusing Prior Computation To Accelerate Progress” at NeurIPS 2022. Here, we propose an alternative approach to RL research, where prior computational work, such as learned models, policies, logged data, etc., is reused or transferred between design iterations of an RL agent or from one agent to another. While some sub-areas of RL leverage prior computation, most RL agents are still largely trained from scratch. Until now, there has been no broader effort to leverage prior computational work for the training workflow in RL research. We have also released our code and trained agents to enable researchers to build on this work.

Tabula rasa RL vs. Reincarnating RL (RRL). While tabula rasa RL focuses on learning from scratch, RRL is based on the premise of reusing prior computational work (e.g., prior learned agents) when training new agents or improving existing agents, even in the same environment. In RRL, new agents need not be trained from scratch, except for initial forays into new problems.

Why Reincarnating RL?

Reincarnating RL (RRL) is a more compute and sample-efficient workflow than training from scratch. RRL can democratize research by allowing the broader community to tackle complex RL problems without requiring excessive computational resources. Furthermore, RRL can enable a benchmarking paradigm where researchers continually improve and update existing trained agents, especially on problems where improving performance has real-world impact, such as balloon navigation or chip design. Finally, real-world RL use cases will likely be in scenarios where prior computational work is available (e.g., existing deployed RL policies).

RRL as an alternative research workflow. Imagine a researcher who has trained an agent A1 for some time, but now wants to experiment with better architectures or algorithms. While the tabula rasa workflow requires retraining another agent from scratch, RRL provides the more viable option of transferring the existing agent A1 to another agent and training this agent further, or simply fine-tuning A1.

While there have been some ad hoc large-scale reincarnation efforts with limited applicability, e.g., model surgery in Dota 2, policy distillation in Rubik's cube, population-based training (PBT) in AlphaStar, and RL fine-tuning of a behavior-cloned policy in AlphaGo / Minecraft, RRL has not been studied as a research problem in its own right. To this end, we argue for developing general-purpose RRL approaches as opposed to prior ad hoc solutions.

Case Study: Policy to Value Reincarnating RL

Different RRL problems can be instantiated depending on the kind of prior computational work provided. As a step towards developing broadly applicable RRL approaches, we present a case study on the setting of Policy to Value reincarnating RL (PVRL) for efficiently transferring an existing sub-optimal policy (teacher) to a standalone value-based RL agent (student). While a policy directly maps a given environment state (e.g., a game screen in Atari) to an action, value-based agents estimate the effectiveness of an action at a given state in terms of achievable future rewards, which allows them to learn from previously collected data.

For a PVRL algorithm to be broadly useful, it should satisfy the following requirements:

  • Teacher Agnostic: The student shouldn’t be constrained by the existing teacher policy’s architecture or training algorithm.
  • Weaning off the teacher: It is undesirable to maintain dependency on past suboptimal teachers for successive reincarnations.
  • Compute / Sample Efficient: Reincarnation is only useful if it is cheaper than training from scratch.

Given the PVRL algorithm requirements, we evaluate whether existing approaches, designed with closely related goals, will suffice. We find that such approaches either result in small improvements over tabula rasa RL or degrade in performance when weaning off the teacher.

To address these limitations, we introduce a simple method, QDagger, in which the agent distills knowledge from the suboptimal teacher via an imitation algorithm while simultaneously using its environment interactions for RL. We start with a deep Q-network (DQN) agent trained for 400M environment frames (a week of single-GPU training) and use it as the teacher for reincarnating student agents trained on only 10M frames (a few hours of training), where the teacher is weaned off over the first 6M frames. For benchmark evaluation, we report the interquartile mean (IQM) metric from the RLiable library. As shown below for the PVRL setting on Atari games, we find that the QDagger RRL method outperforms prior approaches.
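To make the weaning mechanism concrete, the following NumPy sketch shows one way such an objective could be written: a standard TD loss on the student's own transitions plus a distillation term toward the teacher's softmax policy, with the distillation weight decayed linearly to zero over the weaning period. This is only an illustrative approximation of the approach described above, not the paper's exact loss; the temperature, schedule, and array layouts are assumptions.

import numpy as np

def qdagger_loss(student_q, target_q, teacher_q, actions, rewards, dones,
                 frame, wean_frames=6_000_000, gamma=0.99, temperature=1.0):
    """Illustrative QDagger-style objective (not the paper's exact formulation).

    student_q: (B, A) Q-values from the student for the current states
    target_q:  (B, A) Q-values from the student's target network for the next states
    teacher_q: (B, A) Q-values from the frozen teacher for the current states
    actions, rewards, dones: (B,) arrays from the student's environment interactions
    """
    batch = np.arange(len(actions))

    # Standard 1-step TD (Q-learning) loss on the student's own interactions.
    td_target = rewards + gamma * (1.0 - dones) * target_q.max(axis=1)
    td_loss = np.mean((student_q[batch, actions] - td_target) ** 2)

    # Distillation loss: cross-entropy between the teacher's and student's softmax policies.
    def softmax(x):
        z = x / temperature
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    teacher_pi = softmax(teacher_q)
    log_student_pi = np.log(softmax(student_q) + 1e-8)
    distill_loss = -np.mean((teacher_pi * log_student_pi).sum(axis=1))

    # Wean off the teacher: the distillation weight decays linearly to zero over wean_frames.
    lam = max(0.0, 1.0 - frame / wean_frames)
    return td_loss + lam * distill_loss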

Benchmarking PVRL algorithms on Atari, with teacher-normalized scores aggregated across 10 games. Tabula rasa DQN (–·–) obtains a normalized score of 0.4. Standard baseline approaches include kickstarting, JSRL, rehearsal, offline RL pre-training and DQfD. Among all methods, only QDagger surpasses teacher performance within 10 million frames and outperforms the teacher in 75% of the games.

Reincarnating RL in Practice

We further examine the RRL approach on the Arcade Learning Environment, a widely used deep RL benchmark. First, we take a Nature DQN agent that uses the RMSProp optimizer and fine-tune it with the Adam optimizer to create a DQN (Adam) agent. While it is possible to train a DQN (Adam) agent from scratch, we demonstrate that fine-tuning Nature DQN with the Adam optimizer matches the from-scratch performance using 40x less data and compute.

Reincarnating DQN (Adam) via Fine-Tuning. The vertical separator corresponds to loading network weights and replay data for fine-tuning. Left: Tabula rasa Nature DQN nearly converges in performance after 200M environment frames. Right: Fine-tuning this Nature DQN agent using a reduced learning rate with the Adam optimizer for 20 million frames obtains similar results to DQN (Adam) trained from scratch for 400M frames.
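As a concrete illustration of this fine-tuning pattern (sketched in PyTorch, which is not the tooling used for the paper's agents, and with an assumed checkpoint layout and learning rate), reincarnation here amounts to loading the existing weights and continuing training with a different optimizer rather than reinitializing:

import torch

def resume_with_adam(q_network: torch.nn.Module, checkpoint_path: str, lr: float = 1e-5):
    """Load weights from an existing DQN checkpoint and return an Adam optimizer
    with a reduced learning rate, so training continues instead of restarting."""
    state = torch.load(checkpoint_path, map_location="cpu")
    q_network.load_state_dict(state["model_state_dict"])  # assumed checkpoint key
    return torch.optim.Adam(q_network.parameters(), lr=lr)

In the paper's setup, the original agent's replay data is reloaded as well, so the fine-tuned agent starts from both the learned weights and the previously collected experience.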

Given the DQN (Adam) agent as a starting point, fine-tuning is restricted to the 3-layer convolutional architecture. So, we consider a more general reincarnation approach that leverages recent architectural and algorithmic advances without training from scratch. Specifically, we use QDagger to reincarnate another RL agent that uses a more advanced RL algorithm (Rainbow) and a better neural network architecture (Impala-CNN ResNet) from the fine-tuned DQN (Adam) agent.

Reincarnating a different architecture / algorithm via QDagger. The vertical separator is the point at which we apply offline pre-training using QDagger for reincarnation. Left: Fine-tuning DQN with Adam. Right: Comparison of a tabula rasa Impala-CNN Rainbow agent (sky blue) to an Impala-CNN Rainbow agent (pink) trained using QDagger RRL from the fine-tuned DQN (Adam). The reincarnated Impala-CNN Rainbow agent consistently outperforms its scratch counterpart. Note that further fine-tuning DQN (Adam) results in diminishing returns (yellow).

Overall, these results indicate that past research could have been accelerated by incorporating an RRL approach to designing agents, instead of re-training agents from scratch. Our paper also contains results on the Balloon Learning Environment, where we demonstrate that RRL allows us to make progress on the problem of navigating stratospheric balloons using only a few hours of TPU compute, by reusing a distributed RL agent trained on TPUs for more than a month.

Discussion

Fairly comparing reincarnation approaches requires using the exact same computational work and workflow. Furthermore, the RRL findings that generalize broadly will likely concern how effective an algorithm is given access to existing computational work; for example, we successfully applied QDagger, developed on Atari, for reincarnation on the Balloon Learning Environment. As such, we speculate that research in reincarnating RL can branch out in two directions:

  • Standardized benchmarks with open-sourced computational work: Akin to NLP and vision, where a small set of pre-trained models is commonly used, research in RRL may also converge to a small set of open-sourced computational work (e.g., pre-trained teacher policies) on a given benchmark.
  • Real-world domains: Since obtaining higher performance has real-world impact in some domains, it incentivizes the community to reuse state-of-the-art agents and try to improve their performance.

See our paper for a broader discussion on scientific comparisons, generalizability and reproducibility in RRL. Overall, we hope that this work motivates researchers to release computational work (e.g., model checkpoints) on which others could directly build. In this regard, we have open-sourced our code and trained agents with their final replay buffers. We believe that reincarnating RL can substantially accelerate research progress by building on prior computational work, as opposed to always starting from scratch.

Acknowledgements

This work was done in collaboration with Pablo Samuel Castro, Aaron Courville and Marc Bellemare. We'd like to thank Tom Small for the animated figure used in this post. We are also grateful for feedback from the anonymous NeurIPS reviewers and several members of the Google Research team, DeepMind and Mila.


Improving stability and flexibility of ML pipelines at Amazon Packaging Innovation with Amazon SageMaker Pipelines

To delight customers and minimize packaging waste, Amazon must select the optimal packaging type for billions of packages shipped every year. If too little protection is used for a fragile item such as a coffee mug, the item may arrive damaged and Amazon risks losing the customer's trust. Using too much protection results in increased costs and overfull recycling bins. With hundreds of millions of products available, a scalable decision mechanism is needed to continuously learn from product testing and customer feedback.

To solve these problems, the Amazon Packaging Innovation team developed machine learning (ML) models that classify whether products are suitable for Amazon packaging types such as mailers, bags, or boxes, or could even be shipped with no additional packaging. Previously, the team developed a custom pipeline based on AWS Step Functions to perform weekly training and daily or monthly inference jobs. However, over time the pipeline didn't provide sufficient flexibility to launch models with new architectures. Developing new pipelines added overhead and required coordination between data scientists and developers. To overcome these difficulties and improve the speed of deploying new models and architectures, the team chose to orchestrate model training and inference with Amazon SageMaker Pipelines.

In this post, we discuss the previous orchestration architecture based on Step Functions, outline training and inference architectures using Pipelines, and highlight the flexibility the Amazon Packaging Innovation team achieved.

Challenges of the former ML pipeline at Amazon Packaging Innovation

To incorporate continuous feedback about package performance, a new model is trained every week using a growing number of labels. Inference for the entire inventory of products is performed monthly, and daily inference delivers just-in-time predictions for newly added inventory.

To automate the process of training multiple models and provide predictions, the team had developed a custom pipeline based on Step Functions to orchestrate the following steps:

  • Data preparation for training and inference jobs and loading of predictions to the database (Amazon Redshift) with AWS Glue.
  • Model training and inference with Amazon SageMaker.
  • Calculation of model performance metrics on the validation set with AWS Batch.
  • Storage of model configurations (such as the data split ratio for training and validation, model artifact location, model type, and number of instances for training and inference), model performance metrics, and the latest successfully trained model version in Amazon DynamoDB.
  • Calculation of the differences in model performance scores, changes in the distribution of the training labels, and changes in the size of the input data between the previous and new model versions with AWS Lambda functions.

Given the large number of steps, the pipeline also required a reliable alarming system at each step to alert the stakeholders of any issues. This was accomplished via a combination of Amazon Simple Queue Service (Amazon SQS) and Amazon Simple Notification Service (Amazon SNS). The alarms were created to notify the business stakeholders, data scientists, and developers about any failed steps and large deviations in the model and data metrics.

After using this solution for nearly 2 years, the team realized that this implementation only worked well for a typical ML workflow where a single model was trained and scored on a validation dataset. However, the solution wasn't sufficiently flexible for complex models and wasn't resilient to failures. For example, the architecture didn't easily accommodate sequential model training. It was difficult to add or remove a step without duplicating the entire pipeline and modifying the infrastructure. Even simple changes in the data processing steps, such as adjusting the data split ratio or selecting a different set of features, required coordination from both a data scientist and a developer.

When the pipeline failed at any step, it had to be restarted from the beginning, which resulted in repeated runs and increased cost. To avoid repeated runs and having to restart from the failed step, the team would create a new copy of an abridged state machine. This troubleshooting led to a proliferation of state machines, each starting from the commonly failing steps. Finally, if a training job encountered a deviation in the distribution of labels, model score, or number of labels, a data scientist had to review the model and its metrics manually. A data scientist would then access a DynamoDB table with the model versions and update the table to ensure that the correct model was used for the next inference job.

The maintenance of this architecture required at least one dedicated resource and an additional full-time resource for development. Given the difficulties of expanding the pipeline to accommodate new use cases, the data scientists had begun developing their own workflows, which in turn led to a growing code base, multiple data tables with similar data schemas, and decentralized model monitoring. The accumulation of these issues resulted in lower team productivity and increased overhead.

To address these challenges, the Amazon Packaging Innovation team evaluated other existing solutions for MLOps, including SageMaker Pipelines (December 2020 release announcement). Pipelines is a capability of SageMaker for building, managing, automating, and scaling end-to-end ML workflows. Pipelines allows you to reduce the number of steps across the entire ML workflow and is flexible enough to allow data scientists to define a custom ML workflow. It takes care of monitoring and logging the steps. It also comes with a model registry that automatically versions new models. The model registry has built-in approval workflows to select models for inference in production. Pipelines also supports caching for steps called with the same arguments: if a matching previous run is found, its cached result is reused instead of recomputing the successfully completed step, which makes restarts straightforward.

In the evaluation process, Pipelines stood out from the other solutions for its flexibility and its features for supporting and expanding current and future workflows. Switching to Pipelines freed up developers' time from platform maintenance and troubleshooting and redirected attention toward adding new features. In this post, we present the design for training and inference workflows at the Amazon Packaging Innovation team using Pipelines. We also discuss the benefits and the reduction in costs the team realized by switching to Pipelines.

Training pipeline

The Amazon Packaging Innovation team trains models for every package type using a growing number of labels. The following diagram outlines the entire process.

Figure: Training pipeline architecture at Amazon Packaging Innovation.

The workflow begins by extracting labels and features from an Amazon Redshift database and unloading the data to Amazon Simple Storage Service (Amazon S3) via a scheduled extract, transform, and load (ETL) job. Along with the input data, a file object with the model type and parameters is placed in the S3 bucket. This file serves as the pipeline trigger via a Lambda function.
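A minimal sketch of what such a Lambda trigger could look like is shown below, assuming an S3 event notification invokes the function when the file object lands in the bucket. The pipeline name and parameter name are placeholders, not the team's actual values:

import json
import boto3

sm = boto3.client("sagemaker")

def lambda_handler(event, context):
    # The S3 event carries the key of the config file object that triggered the run.
    record = event["Records"][0]["s3"]
    config_key = record["object"]["key"]

    response = sm.start_pipeline_execution(
        PipelineName="packaging-training-pipeline",            # placeholder name
        PipelineParameters=[
            {"Name": "InputConfigS3Key", "Value": config_key}  # placeholder parameter
        ],
    )
    return {"statusCode": 200, "body": json.dumps(response["PipelineExecutionArn"])}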

The next steps are completely customizable and defined entirely by a data scientist using the SageMaker Python SDK for Pipelines. In the scenario we present in this post, the input data is split into training and validation sets and saved back in an S3 bucket by launching a SageMaker Processing job.

When the data is ready in Amazon S3, a SageMaker training job starts. After the model is successfully trained and created, the model evaluation step is performed on the validation data via a SageMaker batch transform job. The model metrics are then compared to the previous week's model metrics using a SageMaker Processing job. The team has defined multiple custom criteria for evaluating deviations in model performance. The model is either rejected or approved based on these criteria. If the model is rejected, the previously approved model is used for the next inference jobs. If the model is approved, its version is registered and that model is used for inference jobs. The stakeholders receive a notification about the outcome via Amazon CloudWatch alarms.
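The sketch below shows how such a pipeline could be expressed with the SageMaker Python SDK for Pipelines: a processing step for the split, a training step, an evaluation step that writes a metrics report, and a condition step that registers the model version only when the metric clears a threshold. It is a simplified illustration rather than the team's actual definition; their evaluation uses a batch transform job and multiple custom deviation criteria, and all names, scripts, image URIs, the metric path, and the threshold here are placeholders.

from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.processing import ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role ARN

# Split the extracted features into training and validation sets.
processor = SKLearnProcessor(
    framework_version="0.23-1", role=role,
    instance_type="ml.m5.xlarge", instance_count=1,
)
step_split = ProcessingStep(
    name="SplitTrainValidation",
    processor=processor,
    code="preprocess.py",  # hypothetical preprocessing script
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

# Train a model on the training split produced above.
estimator = Estimator(
    image_uri="<training-image-uri>",  # placeholder training image
    role=role, instance_count=1, instance_type="ml.m5.2xlarge",
    output_path="s3://my-bucket/models/",
)
step_train = TrainingStep(
    name="TrainModel",
    estimator=estimator,
    inputs={"train": TrainingInput(
        s3_data=step_split.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)},
)

# Evaluate the candidate model and write a metrics report the pipeline can read.
evaluation_report = PropertyFile(
    name="EvaluationReport", output_name="evaluation", path="evaluation.json")
step_eval = ProcessingStep(
    name="EvaluateModel",
    processor=processor,
    code="evaluate.py",  # hypothetical evaluation script
    outputs=[ProcessingOutput(output_name="evaluation", source="/opt/ml/processing/evaluation")],
    property_files=[evaluation_report],
)

# Register the model version only if it meets the acceptance criteria.
step_register = RegisterModel(
    name="RegisterModel",
    estimator=estimator,
    model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"], response_types=["text/csv"],
    inference_instances=["ml.m5.xlarge"], transform_instances=["ml.m5.xlarge"],
    model_package_group_name="packaging-models",  # placeholder group name
    approval_status="Approved",
)
condition = ConditionGreaterThanOrEqualTo(
    left=JsonGet(step_name=step_eval.name, property_file=evaluation_report,
                 json_path="metrics.accuracy"),  # hypothetical metric path
    right=0.80,
)
step_cond = ConditionStep(
    name="CheckModelMetrics", conditions=[condition],
    if_steps=[step_register], else_steps=[],
)

pipeline = Pipeline(
    name="packaging-training-pipeline",
    steps=[step_split, step_train, step_eval, step_cond],
)
pipeline.upsert(role_arn=role)  # create or update the definition; run with pipeline.start()

Once the pipeline definition is upserted, the Lambda trigger described earlier simply starts an execution, and data scientists can version and evolve the same definition entirely in code.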

The following screenshot from Amazon SageMaker Studio shows the steps of the training pipeline.

Figure: Steps of the training pipeline as shown in Amazon SageMaker Studio.

Pipelines tracks each pipeline run, which you can monitor in Studio. Alternatively, you can query the progress of the run using Boto3 or the AWS Command Line Interface (AWS CLI). You can visualize the model metrics in Studio and compare different model versions.
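For example, with Boto3 you could list the most recent runs of a pipeline and print the status of each step; the pipeline name below is a placeholder (the AWS CLI equivalents are aws sagemaker list-pipeline-executions and aws sagemaker list-pipeline-execution-steps):

import boto3

sm = boto3.client("sagemaker")

# Find the most recent execution of the pipeline.
executions = sm.list_pipeline_executions(PipelineName="packaging-training-pipeline")
latest_arn = executions["PipelineExecutionSummaries"][0]["PipelineExecutionArn"]

# Inspect the status of every step in that run.
steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=latest_arn)
for step in steps["PipelineExecutionSteps"]:
    print(step["StepName"], step["StepStatus"])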

Inference pipeline

The Amazon Packaging Innovation team refreshes predictions for the entire inventory of products monthly. Daily predictions are generated to provide just-in-time packaging recommendations for newly added inventory using the latest trained model. This requires the inference pipeline to run daily with different volumes of data. The following diagram illustrates this workflow.

Figure: Inference pipeline architecture at Amazon Packaging Innovation.

Similar to the training pipeline, the inference begins with unloading the data from Amazon Redshift to an S3 bucket. A file object placed in Amazon S3 triggers the Lambda function that initiates the inference pipeline. The features are prepared for inference and the data is split into appropriately sized files using a SageMaker Processing job. Next, the pipeline identifies the latest approved model, runs the predictions with it, and loads them to an S3 bucket. Finally, the predictions are loaded back to Amazon Redshift using the Amazon Redshift Data API (via Boto3) within the SageMaker Processing job.
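One way a pipeline can resolve the latest approved model is to query the SageMaker model registry for the most recently approved model package, as in the sketch below; the model package group name is a placeholder:

import boto3

sm = boto3.client("sagemaker")

# Fetch the most recently approved model package from the registry.
packages = sm.list_model_packages(
    ModelPackageGroupName="packaging-models",   # placeholder group name
    ModelApprovalStatus="Approved",
    SortBy="CreationTime",
    SortOrder="Descending",
    MaxResults=1,
)
latest_model_arn = packages["ModelPackageSummaryList"][0]["ModelPackageArn"]
print(latest_model_arn)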

The following screenshot from Studio shows the inference pipeline details.

Benefits of choosing to architect ML workflows with SageMaker Pipelines

In this section, we discuss the gains the Amazon Packaging Innovation team realized by switching to Pipelines for model training and inference.

Out-of-the-box production-level MLOps features

While comparing different internal and external options for the next ML pipeline solution, a single data scientist was able to prototype and develop a full version of an ML workflow with Pipelines in a Studio Jupyter environment in less than 3 weeks. Even at the prototyping stage, it became clear that Pipelines provided all the infrastructure components required for a production-level workflow: model versioning, caching, and alarms. The immediate availability of these features meant that no additional time would be spent developing and customizing them. This was a clear demonstration of value, which convinced the Amazon Packaging Innovation team that Pipelines was the right solution.

Flexibility in developing ML models

The biggest gain for the data scientists on the team was the ability to experiment easily and iterate through different models. Regardless of which framework they preferred for their ML work and the number of steps and features it involved, Pipelines accommodated their needs. The data scientists were empowered to experiment without having to wait for a software development sprint to add a feature or step.

Reduced costs

The Pipelines capability of SageMaker is free: you pay only for the compute resources and the storage associated with training and inference. However, when thinking about cost, you need to account not only for the cost of the services used but also for the developer hours needed to maintain, debug, and patch the workflow. Orchestrating with Pipelines is simpler because it consists of fewer pieces and familiar infrastructure. Previously, adding a new feature required at least two people (a data scientist and a software engineer) on the Amazon Packaging Innovation team to implement it. With the redesigned pipeline, engineering efforts are now directed toward additional custom infrastructure around the pipeline, such as creating a single repository for tracking the machine learning code, simplifying model deployment across AWS accounts, and developing integrated ETL jobs and common reusable functions.

The ability to cache steps called with the same arguments also contributed to the cost reduction, because the team was less likely to rerun the entire pipeline. Instead, a failed run could easily be restarted from the point of failure.
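In the SageMaker Python SDK, step caching is opted into with a CacheConfig object attached to individual steps; the expiration period below is only an example:

from sagemaker.workflow.steps import CacheConfig

# Steps created with this config are skipped on re-run when called with the same
# arguments; "P30D" keeps cached results for 30 days (an ISO 8601 duration).
cache_config = CacheConfig(enable_caching=True, expire_after="P30D")
# Pass cache_config=cache_config when constructing ProcessingStep or TrainingStep objects.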

Conclusion

The Amazon Packaging Innovation team trains ML models on a monthly basis and regularly updates predictions for the recommended product packaging types. These recommendations helped them achieve multiple team- and company-wide goals by reducing waste and delighting customers with each order. The training and inference pipelines must run reliably on a regular basis yet allow for constant improvement of the models.

Transitioning to Pipelines allowed the team to deploy four new multi-modal model architectures to production in under 2 months. Deploying a new model using the previous architecture would have required 5 days (with the same model architecture) to 1 month (with a new model architecture). Deploying the same model using Pipelines reduced the development time to 4 hours with the same model architecture and to 5 days with a new model architecture. That equates to a savings of almost 80% in working hours.

About the Authors

Ankur Shukla is a Principal Data Scientist at AWS Professional Services based in Palo Alto. Ankur has more than 15 years of consulting experience working directly with customers and helping them solve business problems with technology. He leads multiple global applied science and MLOps initiatives within AWS. In his free time, he enjoys reading and spending time with family.

Akash Singla is a Sr. System Dev Engineer with the Amazon Packaging Innovation team. He has more than 17 years of experience solving critical business problems through technology across several business verticals. He currently focuses on upgrading NAWS infrastructure for a variety of packaging-centric applications to help them scale better.

Vitalina Komashko is a Data Scientist with AWS Professional Services. She holds a PhD in Pharmacology and Toxicology but transitioned to data science from experimental work because she wanted “to own data generation and the interpretation of the results.” Earlier in her career, she worked with biotech and pharma companies. At AWS, she enjoys solving problems for customers from a variety of industries and learning about their unique challenges.

Prasanth Meiyappan has been a Sr. Applied Scientist with Amazon Packaging Innovation for more than 4 years. He has 6+ years of industry experience in machine learning and has shipped products that improve the search customer experience and the customer packaging experience. Prasanth is passionate about sustainability and has a PhD in statistical modeling of climate change.

Matthew Bales is a Sr. Research Scientist working to optimize package type selection using customer feedback and machine learning. Prior to Amazon, Matt worked as a postdoc in Germany running particle physics simulations and, in a previous life, as a production manager of radioactive medical implant devices at a startup. He holds a Ph.D. in Physics from the University of Michigan.


Take the Green Train: NVIDIA BlueField DPUs Drive Data Center Efficiency

The numbers are in, and they paint a picture of data centers going a deeper shade of green, thanks to energy-efficient networks accelerated with data processing units (DPUs).

A suite of tests run with help from Ericsson, Red Hat and VMware show power reductions of up to 24% on servers using NVIDIA BlueField-2 DPUs. In one case, they delivered 54x the performance of CPUs.

The work, described in a recent whitepaper, offloaded core networking jobs from power-hungry host processors to DPUs designed to run them more efficiently.

Accelerated computing with DPUs for networking, security and storage jobs is one of the next big steps for making data centers more power efficient. It’s the latest of a handful of optimizations, described in the whitepaper, for data centers moving into the era of green computing.

DPUs Tested on VMware vSphere

Seeing the trend toward energy-efficient networks, VMware enabled DPUs to run its virtualization software, used by thousands of companies worldwide. NVIDIA has run several tests with VMware since its vSphere 8 software release this fall.

For example, on VMware vSphere Distributed Services Engine (software that offloads and accelerates networking and security functions using DPUs), BlueField-2 delivered higher performance while freeing up the 20% of CPU resources that the same workload required without DPUs.

That means users can deploy fewer servers to run the same workload, or run more applications on the same servers.

Power Costs Cut Nearly $2 Million

Few data centers face a more demanding job than those run by telecoms providers. Their networks shuttle every bit of data that smartphone users generate or request between their cellular networks and the internet.

Researchers at Ericsson tested whether operators could reduce their power consumption on this massive workload using SmartNICs, the network interface cards that handle DPU functions. Their test let CPUs slow down or sleep while an NVIDIA ConnectX SmartNIC handled the networking tasks.

The results, detailed in a recent article, were stunning.

Energy consumption of server CPUs fell 24%, from 190 to 145 watts on a fully loaded network. This single DPU application could cut power costs by nearly $2 million over three years for a large data center.

Figure: Ericsson ran the user-plane function for 5G networks on DPUs in three scenarios.

In the article, Ericsson’s CTO, Erik Ekudden, underscored the importance of the work.

“There’s a growing sense of urgency among communication service providers to find and implement innovative solutions that reduce network energy consumption,” he wrote. And the DPU techniques “save energy across a wide range of traffic conditions.”

70% Less Overhead, 54x More Performance

Results were even more dramatic for tests on Red Hat OpenShift, used by half of all Fortune 500 banks, airlines and telcos to manage software containers.

In the tests, BlueField-2 DPUs handled virtualization, encryption and networking jobs needed to manage these portable packages of applications and code.

The DPUs slashed networking demands on CPUs by 70%, freeing them up to run other applications. What’s more, they accelerated networking jobs by a whopping 54x.

A technical blog provides more detail on the tests.

Speeding the Way to Zero Trust

Across every industry, businesses are embracing a philosophy of zero trust to improve network security. So, NVIDIA tested IPsec, one of the most popular data center encryption protocols, on BlueField DPUs.

The test showed data centers could improve performance and cut power consumption 21% for servers and 34% for clients on networks running IPsec on DPUs. For large data centers, that could translate to nearly $9 million in savings on electric bills over three years.

NVIDIA and its partners continue to put DPUs to the test in an expanding portfolio of use cases, but the big picture is clear.

“In a world facing rising energy costs and rising demand for green IT infrastructure, the use of DPUs will become increasingly popular,” the whitepaper concludes.

It’s good to know the numbers, but seeing is believing. So apply to run your own test of DPUs on VMware’s vSphere.

