Eliminating the need for annotation makes bias testing much more practical.
Cost-effective data preparation for machine learning using SageMaker Data Wrangler
Amazon SageMaker Data Wrangler is a capability of Amazon SageMaker that makes it faster for data scientists and engineers to prepare high-quality features for machine learning (ML) applications via a visual interface. Data Wrangler reduces the time it takes to aggregate and prepare data for ML from weeks to minutes. With Data Wrangler, you can simplify the process of data preparation and feature engineering, and complete each step of the data preparation workflow, including data selection, cleansing, exploration, and visualization from a single visual interface.
In this post, we dive into different aspects of data preparation and the associated features of Data Wrangler to understand the cost components of data preparation and how Data Wrangler offers a cost-effective approach to data preparation. We also cover cost optimization best practices to further reduce data preparation costs in Data Wrangler.
Overview of exploratory data analysis (EDA) and data preparation in Data Wrangler
To understand the cost-effectiveness of Data Wrangler, it’s important to look at the different aspects of the EDA and data preparation phase of ML. This post doesn’t compare different platforms or services for EDA; instead, it walks through the different steps in EDA, their cost considerations, and how Data Wrangler facilitates EDA in a cost-effective way.
The typical EDA experience of a data scientist consists of the following steps:
- Launch a Jupyter notebook instance to carry out EDA.
- Import required packages for data analysis and visualization.
- Import the data from multiple sources.
- Carry out transformations such as handling missing values and outliers, one-hot encoding, balancing data, and more to clean the data and make it ready for modeling.
- Visualize the data.
- Create mechanisms to repeat the steps.
- Export processed data for downstream analytics or ML.
These steps are complex and have varying compute and memory requirements, so you need the flexibility to run each step with appropriate resources. You also need an integrated system that can import data from multiple sources, and mechanisms to repeat or reuse your work so that you can apply the same EDA steps you already built to larger, similar, or different datasets as required by your downstream ML pipeline.
EDA cost considerations
The following are some of the cost considerations for EDA:
Compute
- Some EDA environments require data in a certain format. In such cases, you need to process the data to the format accepted by the EDA environment. For example, if the environment accepts only CSV format but you have data in Parquet or another format, you have to convert your dataset to CSV format. Reformatting data requires compute.
- Not all environments have the flexibility to change compute or memory configuration with the click of a button. You may need to provision for the highest compute capacity and memory footprint required by any transformation you perform.
Storage and data transfer
- Data in multiple sources has to be collected. If only selected sources are supported by the EDA environment, you may have to move your data from different sources to that single supported source, which increases both storage and data transfer cost.
Labor cost and expertise
- Managing the EDA platform and the underlying compute infrastructure involves expertise, effort, and cost. When you manage the infrastructure, you carry the operational burden of managing operating systems and applications, such as provisioning, patching, and upgrading. You also need to identify data issues quickly: if you don’t validate the data before building your model, you waste compute resources as well as engineering time.
- Note that EDA requires both data science expertise and experience working with the data.
- Additionally, some EDA environments don’t offer a point-and-click interface and require you to write code to explore, visualize, and transform data, which involves labor cost.
Operations cost
- If you have to move the data from the source to carry out transformations and then move it again to downstream ML pipelines, you may have to repeat the EDA steps, starting from fetching the data, in each phase, which is time consuming and carries a cumulative labor cost. If you can reuse the transformed data from the previous step, the cost doesn’t accumulate.
- Having an easy mechanism to repeat the same set of EDA steps on similar or incremental datasets saves time as well as cost from a people and compute resources perspective.
Let’s see how Data Wrangler facilitates EDA and data preparation in a cost-effective manner across these different areas.
Compute
When you carry out EDA on a notebook, you may not have the flexibility to scale the compute or memory on demand, which may force you to run the transformation and visualizations in an oversized environment. If you have an undersized environment, you may run into out of memory issues. In Data Wrangler, you can choose a smaller instance type for certain transformations or analysis and then upscale the instance to a larger type and carry out complex transformations. When the complex transformation is complete, you can downscale the Data Wrangler instance to a smaller instance type. This gives you the flexibility to scale your compute based on the transformation requirements.
Data Wrangler supports a variety of instance types, and you can choose the right one for your workload, thereby eliminating the costs of oversized or undersized environments.
Storage and data transfer
In this section, we discuss some of the cost considerations for storage and data transfer.
Import
Data for ML is often available from multiple sources and in different formats. With Data Wrangler, you can import data from the following data sources: Amazon Simple Storage Service (Amazon S3), Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon SageMaker Feature Store, and Snowflake. Data can be in any of the following formats: CSV, Parquet, JSON, and Optimized Row Columnar (ORC), and more data formats will be added based on customer demand. Because the important data sources are already supported in Data Wrangler, data can be imported directly from the respective sources, and you only pay for the GB-month of provisioned storage. For more information, refer to Amazon SageMaker Pricing.
All the iterative data exploration, data transformation, and visualization can be carried out within Data Wrangler itself. This eliminates further data movement compared to other environments, where you may have to move the data to different locations for ingestion, transformation, and processing. From a cost perspective, this eliminates duplicate data storage and reduces data movement.
Data quality cost
If you don’t identify bad data and correct it early, it will become a costly problem to solve later. The Data Quality and Insights Report helps you eliminate this problem. You can use the Data Quality and Insights Report to perform an analysis of your data to gain insights into your dataset, such as the number of missing values and the number of outliers. If you have issues with your data, such as target leakage or imbalance, the insights report can bring those issues to your attention. As soon as you import your data, you can run an insights report with a click of a button. This reduces the effort of importing libraries and writing code to get the required insights on the dataset, which reduces the labor cost and expertise required.
When you create the Data Quality and Insights Report, Data Wrangler gives you the option to select a target column (the column that you’re trying to predict). When you choose a target column, Data Wrangler automatically creates a target column analysis. It also ranks the features in the order of their predictive power (see the following screenshot). This contributes to the direct business benefit of high-quality features for the downstream ML process.
Transformation
If your EDA tool supports only certain transformations, you may need to move the data to a different environment to carry out the custom transformations such as Spark jobs. Data Wrangler supports custom transformations, which can be written in PySpark, Pandas, and SQL (see the following screenshot for an example). They’re developer friendly and all seamlessly packaged into one place, reducing data movement and saving cost associated with data transfer and storage.
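As a brief illustration (not taken from the original post), the following is the kind of snippet you might enter in Data Wrangler’s custom transform editor when you select the Pandas framework. The column names are hypothetical, and the incoming dataset is assumed to be exposed as the dataframe df.

```python
# Hypothetical custom transform (Pandas) for Data Wrangler's custom transform editor.
# Data Wrangler makes the current dataset available as the dataframe `df`; the
# column names below are placeholders for illustration only.
import numpy as np

# Cap extreme values in a numeric column at its 99th percentile to tame outliers
cap = df["order_amount"].quantile(0.99)
df["order_amount"] = df["order_amount"].clip(upper=cap)

# Derive a simple log-scaled feature from the same skewed numeric column
df["order_amount_log"] = np.log1p(df["order_amount"])
```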
You may also need to carry out mathematical operations on your datasets, such as taking an absolute value of a column. If your EDA tool doesn’t support mathematical operations, you may have to carry out the operations externally, which requires additional effort and cost. Some tools might support mathematical operations on the dataset but require you to import libraries, which involves additional effort. In Data Wrangler, you can also use a custom formula to define a new column using a Spark SQL expression to query data in the current data frame without incurring any additional cost for custom transformations or custom queries.
Labor cost and expertise
Managing the EDA platform and the underlying compute infrastructure involves expertise, effort, and cost. Data Wrangler offers a selection of over 300 preconfigured data transformations written in PySpark, so you can process datasets up to hundreds of gigabytes efficiently without having to worry about writing code to transform the data. You can use transformations such as convert column type, one-hot encoding, impute missing data with mean or median, rescale columns, and date/time embeddings to transform your data into formats that models can use without writing a single line of code. This reduces time and effort, thereby reducing labor cost.
Data Wrangler offers a point-and-click interface to visualize and validate data (see the following screenshot). No expertise is required on data engineering or analytics because all the data preparation can be done via simple point and click.
Visualization
Data Wrangler helps you understand your data and identify potential errors and extreme values with a set of robust preconfigured visualization templates. You don’t need to be familiar with, or spend additional time importing, external libraries or dependencies to carry out the visualizations. Histograms, scatter plots, box and whisker plots, line plots, and bar charts are all available (see the following screenshots for some examples). Templates such as histograms make it simple to create and edit your own visualizations without writing code.
Validation
Data Wrangler enables you to quickly identify inconsistencies in your data preparation workflow and diagnose issues before models are deployed into production (see the following screenshot). You can quickly identify if your prepared data will result in an accurate model so you can determine if additional feature engineering is needed to improve performance. All of this occurs before the model building phase, so there is no additional labor cost for building a model that’s not performing as expected (low performance metrics) that would result in additional transformations after the model build. The validation also results in the business benefit of better quality features.
Build scalable data preparation pipelines
When you carry out EDA, you have to build data preparation pipelines that can scale with your datasets (see the following screenshot). This is important for repetition as well as downstream ML processes. Typically, customers use Spark for its distributed, scalable, and in-memory processing nature; however, this requires a lot of Spark expertise. Setting up a Spark environment is time consuming and requires expertise for optimal configuration. With Data Wrangler, you can create data processing jobs and export to Amazon S3 and Amazon SageMaker Feature Store purely via the visual interface without having to generate, run, or manage Jupyter notebooks, which facilitates scalable data preparation pipelines without any Spark expertise. For more information, refer to Launch processing jobs with a few clicks using Amazon SageMaker Data Wrangler.
Operations cost
Integration may not be a direct cost benefit; however, there are indirect cost benefits when you work in an integrated environment such as SageMaker. Because Data Wrangler is integrated with AWS services, you can export your data preparation workflow to a Data Wrangler job notebook, an Amazon SageMaker Autopilot training experiment, an Amazon SageMaker Pipelines notebook, or a code script. You can also create a Data Wrangler processing job with one click without needing to set up and manage infrastructure to carry out repetitive steps or automation in an ML workflow.
In your Data Wrangler flow, you can export some or all of the transformations that you made to your data processing pipelines. When you export your data flow, you’re charged for the AWS resources that you use. From a cost perspective, exporting the transformation gives you the ability to repeat the transformation on additional datasets with no incremental effort.
With Data Wrangler, you can export all the transformations that you made to a dataset to a destination node with just a few clicks. This allows you to create data processing jobs and export to Amazon S3 purely via the visual interface without having to generate, run, or manage Jupyter notebooks, thereby enhancing the low-code experience.
Data Wrangler allows you to export your data preparation steps or data flow into different environments. Data Wrangler has seamless integration with other AWS services and features, such as the following:
- SageMaker Feature Store – You can engineer your model features using Data Wrangler and then ingest into your feature store, which is a centralized store for features and their associated metadata
- SageMaker Pipelines – You can use the data flow exported from Data Wrangler in SageMaker pipelines, which are used to build and deploy large-scale ML workflows
- Amazon S3 – You can export the data to Amazon S3 and use it to create Data Wrangler jobs
- Python – Finally, you can export all the steps in your data flow to a Python file, which you can manually integrate into any data processing workflow.
Such tight integration helps reduce effort, time, expertise, and cost.
Cost optimization best practices
In this section, we discuss best practices to further optimize cost in Data Wrangler.
Update Data Wrangler to the latest release
When you update Data Wrangler to the latest release, you get all the latest features, security, and overall optimizations made to Data Wrangler, which may improve its cost-effectiveness.
Use built-in Data Wrangler transformers
Use the built-in Data Wrangler transformers over custom Pandas transforms when processing larger and wider datasets.
Choose the right instance type for your Data Wrangler flow
Data Wrangler supports two families of ML instance types: m5 and r5. m5 instances are general purpose instances that provide a balance between compute and memory, whereas r5 instances are designed to deliver fast performance for processing large datasets in memory.
We recommend choosing an instance that is best optimized around your workloads. For example, the r5.8xlarge might have a higher price than the m5.4xlarge, but the r5.8xlarge might be better optimized for your workloads. With better optimized instances, you can run your data flows in less time at lower cost.
Process larger and wider datasets
For datasets larger than tens of gigabytes, we recommend using built-in transforms, or sampling data on import to run custom Pandas transforms interactively. In the post Process larger and wider datasets with Amazon SageMaker Data Wrangler, we share findings from two benchmark tests that demonstrate how to do this.
Shut down unused instances
You are charged for all running instances. To avoid incurring additional charges, shut down the instances that you aren’t using manually. To shut down an instance that is running, complete the following steps:
- On your data flow page, choose the instance icon in the navigation pane under Running instances.
- Choose Shut down.
If you shut down an instance used to run a flow, you temporarily can’t access the flow. If you get an error when opening a flow whose instance you previously shut down, wait approximately 5 minutes and try opening it again.
When you’re not using Data Wrangler, it’s important to shut down the instance on which it runs to avoid incurring additional fees. For more information, refer to Shut Down Data Wrangler.
For information about shutting down Data Wrangler resources automatically, refer to Save costs by automatically shutting down idle resources within Amazon SageMaker Studio.
Export
When you export your Data Wrangler flow or transformations, you can use cost allocation tags to organize and manage the costs of those resources. You create these tags for your user profile and Data Wrangler automatically applies them to the resources used to export the data flow. For more information, see Using Cost Allocation Tags.
Pricing
Data Wrangler pricing has three components: Data Wrangler instances, Data Wrangler jobs, and ML storage. You can perform all the steps for EDA or data preparation within Data Wrangler and you pay for the instance, jobs, and storage pricing based on usage or consumption, with no upfront or licensing fees. For more information, refer to On-Demand Pricing.
Conclusion
In this post, we reviewed different cost aspects of EDA and data preparation to discover how feature-rich and integrated Data Wrangler reduces the time it takes to aggregate and prepare data for ML use cases from weeks to minutes, thereby facilitating cost-effective data preparation for ML. We also inspected the pricing components of Data Wrangler and best practices for cost optimization when using Data Wrangler for your ML data preparation requirements.
For more information, see the following resources:
- Prepare ML Data with Amazon SageMaker Data Wrangler
- Accelerate data preparation with data quality and insights in Amazon SageMaker Data Wrangler
- Amazon SageMaker Studio
- Save costs by automatically shutting down idle resources within Amazon SageMaker Studio
- Process larger and wider datasets with Amazon SageMaker Data Wrangler
- Using Cost Allocation Tags
About the Authors
Rajakumar Sampathkumar is a Principal Technical Account Manager at AWS, providing customer guidance on business-technology alignment and supporting the reinvention of their cloud operation models and processes. He is passionate about cloud and machine learning. Raj is also a machine learning specialist and works with AWS customers to design, deploy, and manage their AWS workloads and architectures.
Rahul Nabera is a Data Analytics Consultant in AWS Professional Services. His current work focuses on enabling customers to build their data and machine learning workloads on AWS. In his spare time, he enjoys playing cricket and volleyball.
Generate images from text with the stable diffusion model on Amazon SageMaker JumpStart
In December 2020, AWS announced the general availability of Amazon SageMaker JumpStart, a capability of Amazon SageMaker that helps you quickly and easily get started with machine learning (ML). JumpStart provides one-click fine-tuning and deployment of a wide variety of pre-trained models across popular ML tasks, as well as a selection of end-to-end solutions that solve common business problems. These features remove the heavy lifting from each step of the ML process, making it easier to develop high-quality models and reducing time to deployment.
This post is the fifth in a series on using JumpStart for specific ML tasks. In the first post, we showed how you can run image classification use cases on JumpStart. In the second post, we demonstrated how to run text classification use cases. In the third post, we discussed image segmentation use cases. In the fourth post, we showed how you can run text generation use cases.
In this post, we provide a step-by-step walkthrough on how to deploy pre-trained Stable Diffusion models for generating images from text. We explore two ways of obtaining the same result: via JumpStart’s graphical interface on Amazon SageMaker Studio, and programmatically through JumpStart APIs.
If you want to jump straight into the JumpStart API code we go through in this post, you can refer to the following sample Jupyter notebook: Introduction to JumpStart – Text to Image.
JumpStart overview
JumpStart helps you get started with ML models for a variety of tasks without writing a single line of code. Currently, JumpStart enables you to do the following:
- Deploy pre-trained models for common ML tasks – JumpStart enables you to address common ML tasks with no development effort by providing easy deployment of models pre-trained on large, publicly available datasets. The ML research community has put a large amount of effort into making a majority of recently developed models publicly available for use. JumpStart hosts a collection of over 300 models, spanning the 15 most popular ML tasks such as object detection, text classification, and text generation, making it easy for beginners to use them. These models are drawn from popular model hubs such as TensorFlow, PyTorch, Hugging Face, and MXNet.
- Fine-tune pre-trained models – JumpStart allows you to fine-tune pre-trained models without needing to write your own training algorithm. In ML, the ability to transfer the knowledge learned in one domain to another domain is called transfer learning. You can use transfer learning to produce accurate models on your smaller datasets, with much lower training costs than the ones involved in training the original model. JumpStart also includes popular training algorithms based on LightGBM, CatBoost, XGBoost, and Scikit-learn, which you can train from scratch for tabular regression and classification.
- Use pre-built solutions – JumpStart provides a set of 17 solutions for common ML use cases, such as demand forecasting and industrial and financial applications, which you can deploy with just a few clicks. Solutions are end-to-end ML applications that string together various AWS services to solve a particular business use case. They use AWS CloudFormation templates and reference architectures for quick deployment, which means they’re fully customizable.
- Refer to notebook examples for SageMaker algorithms – SageMaker provides a suite of built-in algorithms to help data scientists and ML practitioners get started with training and deploying ML models quickly. JumpStart provides sample notebooks that you can use to quickly use these algorithms.
- Review training videos and blogs – JumpStart also provides numerous blog posts and videos that teach you how to use different functionalities within SageMaker.
JumpStart accepts custom VPC settings and AWS Key Management Service (AWS KMS) encryption keys, so you can use the available models and solutions securely within your enterprise environment. You can pass your security settings to JumpStart within Studio or through the SageMaker Python SDK.
Text-to-image and stable diffusion models
Text-to-image is the task of generating realistic images from a raw input text description. Stable Diffusion is a popular model for this task. It’s a latent diffusion model in which the input text is first embedded into a latent space via a language model. Then, a series of noise addition and removal operations are performed in the latent space with a U-Net architecture. Finally, the denoised output is decoded into the pixel space.
For example, with the input “Garden painted in impressionist style,” we get the following output image.
Although primarily used to generate images conditioned on text, Stable Diffusion models can also be used for other tasks such as inpainting, outpainting, and text-guided image-to-image translation.
Solution overview
The following sections provide a step-by-step demo to perform inference, both via the Studio UI and via JumpStart APIs. We walk through the following steps:
- Access JumpStart through the Studio UI to deploy and run inference on the pre-trained model.
- Use JumpStart programmatically with the SageMaker Python SDK to deploy the pre-trained model and run inference.
Access JumpStart through the Studio UI and run inference with a pre-trained model
In this section, we demonstrate how to train and deploy JumpStart models through the Studio UI.
The following video shows you how to find a pre-trained text-to-image model on JumpStart and deploy it. The model page contains valuable information about the model and how to use it. You can deploy any of the pre-trained models available in JumpStart. For inference, we pick the ml.p3.2xlarge instance type, because it provides the GPU acceleration needed for low inference latency at a low price point. After you configure the SageMaker hosting instance, choose Deploy. It may take 5–10 minutes until your persistent endpoint is up and running.
After a few minutes, your endpoint is operational and ready to respond to inference requests!
To accelerate your time to inference, JumpStart provides a sample notebook that shows you how to run inference on your freshly deployed endpoint. Choose Open Notebook under Use Endpoint from Studio.
Use JumpStart programmatically with the SageMaker SDK
In the preceding section, we showed how you can use the JumpStart UI to deploy a pre-trained model interactively, in a matter of a few clicks. However, you can also use JumpStart’s models programmatically by using APIs that are integrated into the SageMaker SDK.
In this section, we go over a quick example of how you can replicate the preceding process with the SageMaker SDK. We choose an appropriate pre-trained model in JumpStart, deploy this model to a SageMaker endpoint, and run inference on the deployed endpoint. All the steps in this demo are available in the accompanying notebook Introduction to JumpStart – Text to Image.
Deploy the pre-trained model
SageMaker is a platform that makes extensive use of Docker containers for build and runtime tasks. JumpStart uses the available framework-specific SageMaker Deep Learning Containers (DLCs). We first fetch any additional packages, as well as scripts to handle training and inference for the selected task. Finally, the pre-trained model artifacts are separately fetched with model_uris, which provides flexibility to the platform. You can use any number of models pre-trained on the same task with a single inference script. See the following code:
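The following is a hedged sketch of this retrieval step with the SageMaker Python SDK. The model_id value is an assumption; check the JumpStart model table for the exact identifier of the Stable Diffusion model you want.

```python
# Hedged sketch: fetch the deploy image, inference script, and pre-trained model
# artifacts for a JumpStart text-to-image model. The model_id is an assumption.
from sagemaker import image_uris, model_uris, script_uris

model_id, model_version = "model-txt2img-stabilityai-stable-diffusion-v1-4", "*"
inference_instance_type = "ml.p3.2xlarge"

# Retrieve the inference Docker container URI
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,  # inferred automatically from model_id
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)

# Retrieve the inference script URI
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

# Retrieve the pre-trained model artifacts
model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)
```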
Next, we feed the resources into a SageMaker model instance and deploy an endpoint:
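The following hedged sketch continues from the previous snippet: it wraps the fetched artifacts in a SageMaker Model and deploys a real-time endpoint. The entry point file name is an assumption based on the standard JumpStart inference scripts.

```python
# Hedged sketch: create a SageMaker Model from the fetched artifacts and deploy it.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()
endpoint_name = name_from_base(f"jumpstart-example-{model_id}")

model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=model_uri,
    entry_point="inference.py",  # assumption: entry point used by the JumpStart script
    role=role,
    predictor_cls=Predictor,
    name=endpoint_name,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    endpoint_name=endpoint_name,
)
```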
After our model is deployed, we can get predictions from it in real time!
Run inference
The following code snippet gives you a glimpse of what the outputs look like. To send requests to a deployed model, input text needs to be supplied in a utf-8 encoded format.
The endpoint response is a JSON object containing the generated image and the prompt:
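As a hedged sketch (the accompanying notebook has the authoritative code), the following shows one way to query the endpoint and parse the response; the response keys ("generated_image", "prompt") and the content type are assumptions based on the description above.

```python
# Hedged sketch: send a utf-8 encoded prompt to the endpoint and display the result.
import json

import boto3
import matplotlib.pyplot as plt
import numpy as np

sm_runtime = boto3.client("runtime.sagemaker")

text = "Garden painted in impressionist style"
response = sm_runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/x-text",  # assumption: content type expected by the script
    Body=text.encode("utf-8"),
)
payload = json.loads(response["Body"].read())

generated_image = np.array(payload["generated_image"], dtype=np.uint8)
plt.imshow(generated_image)
plt.axis("off")
plt.show()
print("Prompt:", payload["prompt"])
```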
We get the following image as our output.
Conclusion
In this post, we showed how to deploy a pre-trained text-to-image model using JumpStart. You can accomplish this without needing to write code. Try out the solution on your own and send us your comments. To learn more about JumpStart and how you can use open-source pre-trained models for a variety of other ML tasks, check out the following AWS re:Invent 2020 video.
About the Authors
Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS, and SODA conferences.
Santosh Kulkarni is an Enterprise Solutions Architect at Amazon Web Services who works with sports customers in Australia. He is passionate about building large-scale distributed applications to solve business problems using his knowledge in AI/ML, big data, and software development.
Leonardo Bachega is a Senior Scientist and Manager in the Amazon SageMaker JumpStart team. He’s passionate about building AI services for computer vision.
Run text generation with GPT and Bloom models on Amazon SageMaker JumpStart
In December 2020, AWS announced the general availability of Amazon SageMaker JumpStart, a capability of Amazon SageMaker that helps you quickly and easily get started with machine learning (ML). JumpStart provides one-click fine-tuning and deployment of a wide variety of pre-trained models across popular ML tasks, as well as a selection of end-to-end solutions that solve common business problems. These features remove the heavy lifting from each step of the ML process, making it easier to develop high-quality models and reducing time to deployment.
This post is the fourth in a series on using JumpStart for specific ML tasks. In the first post, we showed how to run image classification use cases on JumpStart. In the second post, we demonstrated how to run text classification use cases. In the third post, we ran image segmentation use cases.
In this post, we provide a step-by-step walkthrough on how to deploy pre-trained text generation models. We explore two ways of obtaining the same result: via JumpStart’s graphical interface on Amazon SageMaker Studio, and programmatically through JumpStart APIs.
If you want to jump straight into the JumpStart API code we go through in this post, you can refer to the following sample Jupyter notebook: Introduction to JumpStart – Text Generation.
JumpStart overview
JumpStart helps you get started with ML models for a variety of tasks without writing a single line of code. Currently, JumpStart enables you to do the following:
- Deploy pre-trained models for common ML tasks – JumpStart enables you to address common ML tasks with no development effort by providing easy deployment of models pre-trained on large, publicly available datasets. The ML research community has put a large amount of effort into making a majority of recently developed models publicly available for use. JumpStart hosts a collection of over 300 models, spanning the 15 most popular ML tasks such as object detection, text classification, and text generation, making it easy for beginners to use them. These models are drawn from popular model hubs such as TensorFlow, PyTorch, Hugging Face, and MXNet.
- Fine-tune pre-trained models – JumpStart allows you to fine-tune pre-trained models without needing to write your own training algorithm. In ML, the ability to transfer the knowledge learned in one domain to another domain is called transfer learning. You can use transfer learning to produce accurate models on your smaller datasets, with much lower training costs than the ones involved in training the original model. JumpStart also includes popular training algorithms based on LightGBM, CatBoost, XGBoost, and Scikit-learn, which you can train from scratch for tabular regression and classification.
- Use pre-built solutions – JumpStart provides a set of 17 solutions for common ML use cases, such as demand forecasting and industrial and financial applications, which you can deploy with just a few clicks. Solutions are end-to-end ML applications that string together various AWS services to solve a particular business use case. They use AWS CloudFormation templates and reference architectures for quick deployment, which means they’re fully customizable.
- Refer to notebook examples for SageMaker algorithms – SageMaker provides a suite of built-in algorithms to help data scientists and ML practitioners get started with training and deploying ML models quickly. JumpStart provides sample notebooks that you can use to quickly use these algorithms.
- Review training videos and blogs – JumpStart also provides numerous blog posts and videos that teach you how to use different functionalities within SageMaker.
JumpStart accepts custom VPC settings and AWS Key Management Service (AWS KMS) encryption keys, so you can use the available models and solutions securely within your enterprise environment. You can pass your security settings to JumpStart within Studio or through the SageMaker Python SDK.
Text generation, GPT-2, and Bloom
Text generation is the task of generating text that is fluent and appears indistinguishable from human-written text. It is also known as natural language generation.
GPT-2 is a popular transformer-based text generation model. It’s pre-trained on a large corpus of raw English text with no human labeling. It’s trained on the task where, given a partial sequence (sentence or piece of text), the model needs to predict the next word or token in the sequence.
Bloom is also a transformer-based text generation model and trained similarly to GPT-2. However, Bloom is pre-trained on 46 different languages and 13 programming languages. The following is an example of running text generation with the Bloom model:
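Because the original example isn’t reproduced here, the following hedged sketch shows one way to run text generation with a small BLOOM checkpoint locally using the Hugging Face transformers library; this is illustrative only and is separate from the JumpStart deployment covered in the rest of this post.

```python
# Minimal local illustration of text generation with a small BLOOM checkpoint.
# Not part of the original post, which deploys BLOOM through JumpStart.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")
result = generator("A beginner's guide to machine learning:", max_new_tokens=50)
print(result[0]["generated_text"])
```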
Solution overview
The following sections provide a step-by-step demo to perform inference, both via the Studio UI and via JumpStart APIs. We walk through the following steps:
- Access JumpStart through the Studio UI to deploy and run inference on the pre-trained model.
- Use JumpStart programmatically with the SageMaker Python SDK to deploy the pre-trained model and run inference.
Access JumpStart through the Studio UI and run inference with a pre-trained model
In this section, we demonstrate how to train and deploy JumpStart models through the Studio UI.
The following video shows you how to find a pre-trained text generation model on JumpStart and deploy it. The model page contains valuable information about the model and how to use it. You can deploy any of the pre-trained models available in JumpStart. For inference, we pick the ml.p3.2xlarge instance type, because it provides the GPU acceleration needed for low inference latency at a low price point. After you configure the SageMaker hosting instance, choose Deploy. It may take 20–25 minutes until your persistent endpoint is up and running.
Once your endpoint is operational, it’s ready to respond to inference requests!
To accelerate your time to inference, JumpStart provides a sample notebook that shows you how to run inference on your freshly deployed endpoint. Choose Open Notebook under Use Endpoint from Studio.
Use JumpStart programmatically with the SageMaker SDK
In the preceding section, we showed how you can use the JumpStart UI to deploy a pre-trained model interactively, in a matter of a few clicks. However, you can also use JumpStart’s models programmatically by using APIs that are integrated into the SageMaker SDK.
In this section, we go over a quick example of how you can replicate the preceding process with the SageMaker SDK. We choose an appropriate pre-trained model in JumpStart, deploy this model to a SageMaker endpoint, and run inference on the deployed endpoint. All the steps in this demo are available in the accompanying notebook Introduction to JumpStart – Text Generation.
Deploy the pre-trained model
SageMaker is a platform that makes extensive use of Docker containers for build and runtime tasks. JumpStart uses the available framework-specific SageMaker Deep Learning Containers (DLCs). We first fetch any additional packages, as well as scripts to handle training and inference for the selected task. Finally, the pre-trained model artifacts are separately fetched with model_uris, which provides flexibility to the platform. You can use any number of models pre-trained on the same task with a single inference script. See the following code:
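The following hedged sketch uses the GPT-2 model ID mentioned later in this section; for BLOOM, substitute the corresponding ID from the JumpStart Available Model Table.

```python
# Hedged sketch: fetch the deploy image, inference script, and model artifacts
# for a JumpStart text generation model.
from sagemaker import image_uris, model_uris, script_uris

model_id, model_version = "huggingface-textgeneration-gpt2", "*"
inference_instance_type = "ml.p3.2xlarge"

deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,  # inferred automatically from model_id
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)

deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)
```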
Bloom is a very large model and can take up to 20–25 minutes to deploy. You can also use a smaller model such as GPT-2. To deploy a pre-trained GPT-2 model, you can set model_id = huggingface-textgeneration-gpt2. For a list of other available models in JumpStart, refer to the JumpStart Available Model Table.
Next, we feed the resources into a SageMaker model instance and deploy an endpoint:
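The following hedged sketch mirrors the text-to-image example earlier in this compilation: it wraps the artifacts in a SageMaker Model and deploys an endpoint. The entry point file name is an assumption based on the standard JumpStart inference scripts.

```python
# Hedged sketch: create a SageMaker Model from the fetched artifacts and deploy it.
import sagemaker
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()
endpoint_name = name_from_base(f"jumpstart-example-{model_id}")

model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=model_uri,
    entry_point="inference.py",  # assumption: entry point used by the JumpStart script
    role=role,
    predictor_cls=Predictor,
    name=endpoint_name,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    endpoint_name=endpoint_name,
)
```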
After our model is deployed, we can get predictions from it in real time!
Run inference
The following code snippet gives you a glimpse of what the outputs look like. To send requests to a deployed model, input text needs to be supplied in a utf-8 encoded format.
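As a hedged sketch, the following shows one way to query the deployed endpoint; the response key ("generated_text"), the content type, and the prompt are assumptions for illustration.

```python
# Hedged sketch: send a utf-8 encoded prompt to the text generation endpoint.
import json

import boto3

sm_runtime = boto3.client("runtime.sagemaker")

text = "My favorite part about machine learning is"
response = sm_runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/x-text",  # assumption: content type expected by the script
    Body=text.encode("utf-8"),
)
payload = json.loads(response["Body"].read())
print(payload["generated_text"])
```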
The endpoint response is a JSON object containing the input text followed by the generated text.
Conclusion
In this post, we showed how to deploy a pre-trained text generation model using JumpStart. You can accomplish this without needing to write code. Try out the solution on your own and send us your comments. To learn more about JumpStart and how you can use open-source pre-trained models for a variety of other ML tasks, check out the following AWS re:Invent 2020 video.
About the Authors
Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS, and SODA conferences.
Santosh Kulkarni is an Enterprise Solutions Architect at Amazon Web Services who works with sports customers in Australia. He is passionate about building large-scale distributed applications to solve business problems using his knowledge in AI/ML, big data, and software development.
Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois at Urbana-Champaign. He is an active researcher in machine learning and statistical inference and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.
Deploy BLOOM-176B and OPT-30B on Amazon SageMaker with large model inference Deep Learning Containers and DeepSpeed
The last few years have seen rapid development in the field of deep learning. Although hardware has improved, such as with the latest generation of accelerators from NVIDIA and Amazon, advanced machine learning (ML) practitioners still regularly encounter issues deploying their large deep learning models for applications such as natural language processing (NLP).
In an earlier post, we discussed capabilities and configurable settings in Amazon SageMaker model deployment that can make inference with these large models easier. Today, we announce a new Amazon SageMaker Deep Learning Container (DLC) that you can use to get started with large model inference in a matter of minutes. This DLC packages some of the most popular open-source libraries for model parallel inference, such as DeepSpeed and Hugging Face Accelerate.
In this post, we use a new SageMaker large model inference DLC to deploy two of the most popular large NLP models: BigScience’s BLOOM-176B and Meta’s OPT-30B from the Hugging Face repository. In particular, we use Deep Java Library (DJL) serving and tensor parallelism techniques from DeepSpeed to achieve 0.1 second latency per token in a text generation use case.
You can find our complete example notebooks in our GitHub repository.
Large model inference techniques
Language models have recently exploded in both size and popularity. With easy access from model zoos such as Hugging Face and improved accuracy and performance in NLP tasks such as classification and text generation, practitioners are increasingly reaching for these large models. However, large models are often too big to fit within the memory of a single accelerator. For example, the BLOOM-176B model can require more than 350 gigabytes of accelerator memory, which far exceeds the capacity of hardware accelerators available today. This necessitates the use of model parallel techniques from libraries like DeepSpeed and Hugging Face Accelerate to distribute a model across multiple accelerators for inference. In this post, we use the SageMaker large model inference container to generate and compare latency and throughput performance using these two open-source libraries.
DeepSpeed and Accelerate use different techniques to optimize large language models for inference. The key difference is DeepSpeed’s use of optimized kernels. These kernels can dramatically improve inference latency by reducing bottlenecks in the computation graph of the model. Optimized kernels can be difficult to develop and are typically specific to a particular model architecture; DeepSpeed supports popular large models such as OPT and BLOOM with these optimized kernels. In contrast, Hugging Face’s Accelerate library doesn’t include optimized kernels at the time of writing. As we discuss in our results section, this difference is responsible for much of the performance edge that DeepSpeed has over Accelerate.
A second difference between DeepSpeed and Accelerate is the type of model parallelism. Accelerate uses pipeline parallelism to partition a model between the hidden layers of a model, whereas DeepSpeed uses tensor parallelism to partition the layers themselves. Pipeline parallelism is a flexible approach that supports more model types and can improve throughput when larger batch sizes are used. Tensor parallelism requires more communication between GPUs because model layers can be spread across multiple devices, but can improve inference latency by engaging multiple GPUs simultaneously. You can learn more about parallelism techniques in Introduction to Model Parallelism and Model Parallelism.
Solution overview
To effectively host large language models, we need features and support in the following key areas:
- Building and testing solutions – Given the iterative nature of ML development, we need the ability to build, rapidly iterate, and test how the inference endpoint will behave when these models are hosted, including the ability to fail fast. These models can typically be hosted only on larger instances like p4dn or g5, and given the size of the models, it can take a while to spin up an inference instance and run any test iteration. Local testing usually has constraints because you need a similar instance in size to test, and these models aren’t easy to obtain.
- Deploying and running at scale – The model files need to be loaded onto the inference instances, which presents a challenge in itself given the size. As an example, tarring and untarring BLOOM-176B takes about 1 hour to create the archive and another hour to load it. We need an alternate mechanism to allow easy access to the model files.
- Loading the model as singleton – For a multi-worker process, we need to ensure the model gets loaded only once so we don’t run into race conditions and further spend unnecessary resources. In this post, we show a way to load directly from Amazon Simple Storage Service (Amazon S3). However, this only works if we use the default settings of the DJL. Furthermore, any scaling of the endpoints needs to be able to spin up in a few minutes, which calls for reconsidering how the models might be loaded and distributed.
- Sharding frameworks – These models typically need to be sharded, usually by a tensor parallelism mechanism or by pipeline sharding as the typical sharding techniques, and we have advanced concepts like ZeRO sharding built on top of tensor sharding. For more information about sharding techniques, refer to Model Parallelism. To achieve this, we can have various combinations and use frameworks from NVIDIA, DeepSpeed, and others. This requires the ability to test BYOC or use 1P containers, iterate over solutions, and run benchmarking tests. You might also want to test various hosting options like asynchronous, serverless, and others.
- Hardware selection – Your choice in hardware is determined by all the aforementioned points and further traffic patterns, use case needs, and model sizes.
In this post, we use DeepSpeed’s optimized kernels and tensor parallelism techniques to host BLOOM-176B and OPT-30B on SageMaker. We also compare results from Accelerate to demonstrate the performance benefits of optimized kernels and tensor parallelism. For more information on DeepSpeed and Accelerate, refer to DeepSpeed Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale and Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate.
We use DJLServing as the model serving solution in this example. DJLServing is a high-performance universal model serving solution powered by the Deep Java Library (DJL) that is programming language agnostic. To learn more about the DJL and DJLServing, refer to Deploy large models on Amazon SageMaker using DJLServing and DeepSpeed model parallel inference.
It’s worth noting that optimized kernels can result in precision changes and a modified computation graph, which could theoretically result in changed model behavior. Although this could occasionally change the inference outcome, we do not expect these differences to materially impact the basic evaluation metrics of a model. Nevertheless, practitioners are advised to confirm the model outputs are as expected when using these kernels.
The following steps demonstrate how to deploy a BLOOM-176B model in SageMaker using DJLServing and a SageMaker large model inference container. The complete example is also available in our GitHub repository.
Using the DJLServing SageMaker DLC image
Use the following code to resolve the DJLServing SageMaker DLC image, replacing the Region with the specific Region in which you’re running the notebook:
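The following is a hedged sketch; the framework name and version passed to image_uris.retrieve are assumptions, so check the available large model inference DLC releases for the exact values.

```python
# Hedged sketch: resolve the DJLServing (DeepSpeed) DLC image URI for your Region.
import boto3
from sagemaker import image_uris

region = boto3.Session().region_name
inference_image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=region, version="0.20.0"  # version is an assumption
)
print(inference_image_uri)
```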
Create our model file
First, we create a file called serving.properties that contains only one line of code. This tells the DJL model server to use the DeepSpeed engine. The file contains the following code:
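Based on the description above, the single line selects the DeepSpeed engine:

```
engine=DeepSpeed
```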
serving.properties is a file defined by DJLServing that is used to set per-model configuration.
Next, we create our model.py file, which defines the code needed to load and then serve the model. In our code, we read in the TENSOR_PARALLEL_DEGREE environment variable (the default value is 1). This sets the number of devices over which the tensor parallel modules are distributed. Note that DeepSpeed provides a few built-in partition definitions, including one for BLOOM models. We use it by specifying replace_method and replace_with_kernel_inject. If you have a customized model and need DeepSpeed to partition effectively, you need to change replace_with_kernel_inject to false and add injection_policy to make the runtime partition work. For more information, refer to Initializing for Inference. For our example, we used the pre-partitioned BLOOM model on DeepSpeed.
Second, in the model.py file, we also load the model from Amazon S3 after the endpoint has been spun up. The model is loaded into the /tmp space on the container because SageMaker maps /tmp to the Amazon Elastic Block Store (Amazon EBS) volume that is mounted when we specify the endpoint creation parameter VolumeSizeInGB. For instances like p4dn, which come with a local instance volume, we can continue to use /tmp on the container. See the following code:
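The following is a heavily simplified sketch of what model.py can look like; the authoritative version is in the GitHub repository. It assumes the model artifacts have already been staged under /tmp and omits the S3 download and the pre-partitioned checkpoint handling that the real BLOOM-176B example uses.

```python
# Simplified sketch of model.py for DJLServing with the DeepSpeed engine.
# Assumptions: model files already sit under /tmp/model, and the model is small
# enough to load with from_pretrained.
import os

import deepspeed
import torch
from djl_python import Input, Output
from transformers import AutoModelForCausalLM, AutoTokenizer

model = None
tokenizer = None


def load_model(properties):
    tensor_parallel_degree = int(os.getenv("TENSOR_PARALLEL_DEGREE", "1"))
    model_location = "/tmp/model"  # assumption: artifacts were staged here
    tok = AutoTokenizer.from_pretrained(model_location)
    mdl = AutoModelForCausalLM.from_pretrained(model_location, torch_dtype=torch.float16)
    # Let DeepSpeed inject its optimized kernels and shard the layers (tensor parallelism)
    mdl = deepspeed.init_inference(
        mdl,
        mp_size=tensor_parallel_degree,
        dtype=torch.float16,
        replace_method="auto",
        replace_with_kernel_inject=True,
    )
    return mdl, tok


def handle(inputs: Input) -> Output:
    global model, tokenizer
    if model is None:
        model, tokenizer = load_model(inputs.get_properties())
    if inputs.is_empty():
        return None  # warm-up / ping request
    data = inputs.get_as_json()
    tokens = tokenizer(data["text"], return_tensors="pt").to("cuda")
    generated = model.module.generate(**tokens, max_new_tokens=data.get("max_new_tokens", 64))
    return Output().add_as_json(
        {"generated_text": tokenizer.decode(generated[0], skip_special_tokens=True)}
    )
```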
DJLServing manages the runtime installation of any pip packages defined in requirements.txt. This file lists the additional Python packages that the inference code needs.
We have created a directory called code, and the model.py, serving.properties, and requirements.txt files are already created in this directory. To view the files, you can list the contents of the code directory from the terminal.
The following figure shows the structure of the model.tar.gz file.
Lastly, we create the model file and upload it to Amazon S3:
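The following is a hedged sketch of this packaging step; the bucket and prefix names are illustrative, and the notebook in the GitHub repository has the authoritative version.

```python
# Compress the code directory into model.tar.gz and upload it to Amazon S3.
import tarfile

import sagemaker

sess = sagemaker.Session()
bucket = sess.default_bucket()

with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("code", arcname=".")

s3_code_artifact = sess.upload_data("model.tar.gz", bucket, "bloom-176b/code")
print(f"S3 code artifact: {s3_code_artifact}")
```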
Download and store the model from Hugging Face (Optional)
We have provided the steps in this section in case you want to download the model to Amazon S3 and use it from there. The steps are provided in the Jupyter file on GitHub. The following screenshot shows a snapshot of the steps.
Create a SageMaker model
We now create a SageMaker model. We use the Amazon Elastic Container Registry (Amazon ECR) image resolved earlier and the model artifact from the previous step to create the SageMaker model. In the model setup, we configure TENSOR_PARALLEL_DEGREE=8, which means the model is partitioned across 8 GPUs. See the following code:
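The following hedged sketch creates the model with boto3, reusing the image URI and S3 code artifact from the previous steps; the model name is illustrative.

```python
# Hedged sketch: create the SageMaker model with boto3.
import boto3
import sagemaker
from sagemaker.utils import name_from_base

role = sagemaker.get_execution_role()
sm_client = boto3.client("sagemaker")
model_name = name_from_base("bloom-176b-djl-ds")

create_model_response = sm_client.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": inference_image_uri,
        "ModelDataUrl": s3_code_artifact,
        "Environment": {"TENSOR_PARALLEL_DEGREE": "8"},
    },
)
print("Model ARN:", create_model_response["ModelArn"])
```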
After you run the preceding cell in the Jupyter file, you see output similar to the following:
Create a SageMaker endpoint
You can use any instance with multiple GPUs for testing. In this demo, we use a p4d.24xlarge instance. In the following code, note how we set the ModelDataDownloadTimeoutInSeconds, ContainerStartupHealthCheckTimeoutInSeconds, and VolumeSizeInGB parameters to accommodate the large model size. The VolumeSizeInGB parameter is applicable to GPU instances that support Amazon EBS volume attachment.
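The following hedged sketch shows the endpoint configuration; the timeout and volume values are illustrative, and VolumeSizeInGB applies only to instance types that support EBS volume attachment (p4d instances use local NVMe storage instead).

```python
# Hedged sketch: create the endpoint configuration for the large model.
endpoint_config_name = name_from_base("bloom-176b-config")

endpoint_config_response = sm_client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",
            "ModelName": model_name,
            "InstanceType": "ml.p4d.24xlarge",
            "InitialInstanceCount": 1,
            "ModelDataDownloadTimeoutInSeconds": 3600,
            "ContainerStartupHealthCheckTimeoutInSeconds": 3600,
            # "VolumeSizeInGB": 512,  # only for EBS-backed instance types such as g5
        }
    ],
)
```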
Lastly, we create a SageMaker endpoint:
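A hedged sketch of the final call, continuing from the configuration above:

```python
# Hedged sketch: create the endpoint and print its ARN.
endpoint_name = name_from_base("bloom-176b-endpoint")

create_endpoint_response = sm_client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])
```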
When the call completes, the response prints the Amazon Resource Name (ARN) of the new endpoint.
Starting the endpoint might take a while. You can try a few more times if you run into the InsufficientInstanceCapacity error, or you can raise a request to AWS to increase the instance limit in your account.
Performance tuning
If you intend to use this post and accompanying notebook with a different model, you may want to explore some of the tunable parameters that SageMaker, DeepSpeed, and the DJL offer. Iteratively experimenting with these parameters can have a material impact on the latency, throughput, and cost of your hosted large model. To learn more about tuning parameters such as number of workers, degree of tensor parallelism, job queue size, and others, refer to DJL Serving configurations and Deploy large models on Amazon SageMaker using DJLServing and DeepSpeed model parallel inference.
Results
In this post, we used DeepSpeed to host BLOOM-176B and OPT-30B on SageMaker ML instances. The following table summarizes our performance results, including a comparison with Hugging Face’s Accelerate. Latency reflects the number of milliseconds it takes to produce a 256-token string four times (batch_size=4) from the model. Throughput reflects the number of tokens produced per second for each test. For Hugging Face Accelerate, we used the library’s default loading with GPU memory mapping. For DeepSpeed, we used its faster checkpoint loading mechanism.
| Model | Library | Model Precision | Batch Size | Parallel Degree | Instance | Time to Load (s) | P50 Latency (ms) | P90 Latency (ms) | P99 Latency (ms) | Throughput (tokens/sec) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BLOOM-176B | DeepSpeed | INT8 | 4 | 8 | p4d.24xlarge | 74.9 | 27,564 | 27,580 | 32,179 | 37.1 |
| BLOOM-176B | Accelerate | INT8 | 4 | 8 | p4d.24xlarge | 669.4 | 92,694 | 92,735 | 103,292 | 11.0 |
| OPT-30B | DeepSpeed | FP16 | 4 | 4 | g5.24xlarge | 239.4 | 11,299 | 11,302 | 11,576 | 90.6 |
| OPT-30B | Accelerate | FP16 | 4 | 4 | g5.24xlarge | 533.8 | 63,734 | 63,737 | 67,605 | 16.1 |
From a latency perspective, DeepSpeed is about 3.4 times faster for BLOOM-176B and 5.6 times faster for OPT-30B than Accelerate. DeepSpeed’s optimized kernels are responsible for much of this difference in latency. Given these results, we recommend using DeepSpeed over Accelerate if your model of choice is supported.
It’s also worth noting that model loading times with DeepSpeed were much shorter, making it a better option if you anticipate needing to quickly scale up your number of endpoints. Accelerate’s more flexible pipeline parallelism technique may be a better option if you have models or model precisions that aren’t supported by DeepSpeed.
These results also demonstrate the difference in latency and throughput across model sizes. In our tests, OPT-30B generates 2.4 times as many tokens per unit time as BLOOM-176B, on an instance type that is more than three times cheaper. On a price per unit throughput basis, OPT-30B on a g5.24xlarge instance is 8.9 times better than BLOOM-176B on a p4d.24xlarge instance. If you have strict latency, throughput, or cost limitations, consider using the smallest model that still achieves your functional requirements.
Clean up
As part of best practices, it’s always recommended to delete idle resources. The following code shows how to delete the endpoint, endpoint configuration, and model.
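A hedged sketch of the cleanup step, reusing the names created earlier:

```python
# Hedged sketch: delete the endpoint, endpoint configuration, and model.
sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
sm_client.delete_model(ModelName=model_name)
```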
Optionally, delete the model checkpoint from your S3 bucket.
Conclusion
In this post, we demonstrated how to use SageMaker large model inference containers to host two large language models, BLOOM-176B and OPT-30B. We used DeepSpeed’s model parallel techniques with multiple GPUs on a single SageMaker ML instance.
For more details about Amazon SageMaker and its large model inference capabilities, refer to Amazon SageMaker now supports deploying large models through configurable volume size and timeout quotas and Real-time inference.
About the authors
Simon Zamarin is an AI/ML Solutions Architect whose main focus is helping customers extract value from their data assets. In his spare time, Simon enjoys spending time with family, reading sci-fi, and working on various DIY house projects.
Rupinder Grewal is a Sr. AI/ML Specialist Solutions Architect with AWS. He currently focuses on serving of models and MLOps on SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work, he enjoys playing tennis and biking on mountain trails.
Frank Liu is a Software Engineer for AWS Deep Learning. He focuses on building innovative deep learning tools for software engineers and scientists. In his spare time, he enjoys hiking with friends and family.
Alan Tan is a Senior Product Manager with SageMaker leading efforts on large model inference. He’s passionate about applying Machine Learning to the area of Analytics. Outside of work, he enjoys the outdoors.
Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing, and Artificial Intelligence. He focuses on Deep learning including NLP and Computer Vision domains. He helps customers achieve high performance model inference on SageMaker.
Qing Lan is a Software Development Engineer in AWS. He has been working on several challenging products in Amazon, including high performance ML inference solutions and high performance logging system. Qing’s team successfully launched the first Billion-parameter model in Amazon Advertising with very low latency required. Qing has in-depth knowledge on the infrastructure optimization and Deep Learning acceleration.
Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor’s research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial service and insurance industry build machine learning solutions on AWS. In his spare time, he likes reading and teaching.
Robert Van Dusen is a Senior Product Manager with Amazon SageMaker. He leads deep learning model optimization for applications such as large model inference.
Siddharth Venkatesan is a Software Engineer in AWS Deep Learning. He currently focuses on building solutions for large model inference. Prior to AWS, he worked in the Amazon Grocery org building new payment features for customers worldwide. Outside of work, he enjoys skiing, the outdoors, and watching sports.
Use GitHub Samples with Amazon SageMaker Data Wrangler
Amazon SageMaker Data Wrangler is a UI-based data preparation tool that helps perform data analysis, preprocessing, and visualization with features to clean, transform, and prepare data faster. Data Wrangler pre-built flow templates help make data preparation quicker for data scientists and machine learning (ML) practitioners by helping you accelerate and understand best practice patterns for data flows using common datasets.
You can use Data Wrangler flows to perform the following tasks:
- Data visualization – Examining statistical properties for each column in the dataset, building histograms, studying outliers
- Data cleaning – Removing duplicates, dropping or filling entries with missing values, removing outliers
- Data enrichment and feature engineering – Processing columns to build more expressive features, selecting a subset of features for training
This post helps you understand Data Wrangler using the following sample pre-built flows on GitHub. The repository showcases tabular data transformation, time series data transformations, and joined dataset transforms. Each requires a different type of transformation because of its nature. Standard tabular or cross-sectional data is collected at a specific point in time. In contrast, time series data is captured repeatedly over time, with each successive data point dependent on its past values.
Let’s look at an example of how we can use the sample data flow for tabular data.
Prerequisites
Data Wrangler is an Amazon SageMaker feature available within Amazon SageMaker Studio, so we need to follow the Studio onboarding process to spin up the Studio environment and notebooks. Although you can choose from a few authentication methods, the simplest way to create a Studio domain is to follow the Quick start instructions. The Quick start uses the same default settings as the standard Studio setup. You can also choose to onboard using AWS IAM Identity Center (successor to AWS Single Sign-On) for authentication (see Onboard to Amazon SageMaker Domain Using IAM Identity Center).
Import the dataset and flow files into Data Wrangler using Studio
The following steps outline how to import data into SageMaker to be consumed by Data Wrangler:
Initialize Data Wrangler via the Studio UI by choosing New data flow.
Clone the GitHub repo to download the flow files into your Studio environment.
When the clone is complete, you should be able to see the repository content in the left pane.
Choose the file Hotel-Bookings-Classification.flow to import the flow file into Data Wrangler.
If you use the time series or joined data flow, the flow will appear under a different name. After the flow is imported, you should see the following screenshot. It shows errors because we need to make sure that the flow file points to the correct data source in Amazon Simple Storage Service (Amazon S3).
Choose Edit dataset to bring up all your S3 buckets. Next, choose the dataset hotel_bookings.csv from your S3 bucket for running through the tabular data flow. Note that if you're using the joined data flow, you may have to import multiple datasets into Data Wrangler.
In the right pane, make sure COMMA is chosen as the delimiter and Sampling is set to First K. Our dataset is small enough to run Data Wrangler transformations on the full dataset, but we wanted to highlight how you can import the dataset. If you have a large dataset, consider using sampling. Choose Import to import this dataset to Data Wrangler.
After the dataset is imported, Data Wrangler automatically validates the dataset and detects the data types. You can see that the errors have gone away because we’re pointing to the correct dataset. The flow editor now shows two blocks showcasing that the data was imported from a source and data types recognized. You can also edit the data types if needed.
The following screenshot shows our data types.
Let’s look at some of the transforms done as a part of this tabular flow. If you’re using the time series or joined data flows, check out some common transforms on the GitHub repo. We performed some basic exploratory data analysis using data insights reports that studied the target leakage and feature collinearity in the dataset, table summary analyses, and quick modeling capability. Explore the steps on the GitHub repo.
Now we drop columns based on the recommendations provided by the Data Insights and Quality Report.
- For target leakage, drop reservation_status.
- For redundant columns, drop days_in_waiting_list, hotel, reserved_room_type, arrival_date_month, reservation_status_date, babies, and arrival_date_day_of_month.
- Based on linear correlation results, drop columns arrival_date_week_number and arrival_date_year because the correlation values for these feature (column) pairs are greater than the recommended threshold of 0.90.
- Based on non-linear correlation results, drop reservation_status. This column was already marked to be dropped based on the target leakage analysis.
- Process numeric values (min-max scaling) for lead_time, stays_in_weekend_nights, stays_in_weekday_nights, is_repeated_guest, prev_cancellations, prev_bookings_not_canceled, booking_changes, adr, total_of_specical_requests, and required_car_parking_spaces.
- One-hot encode categorical variables like meal, is_repeated_guest, market_segment, assigned_room_type, deposit_type, and customer_type.
- Balance the target variable using random oversampling to address class imbalance.
- Use the quick modeling capability to handle outliers and missing values.
Export to Amazon S3
Now we have gone through the different transforms and are ready to export the data to Amazon S3. This option creates a SageMaker processing job, which runs the Data Wrangler processing flow and saves the resulting dataset to a specified S3 bucket. Follow the next steps to set up the export to Amazon S3:
Choose the plus sign next to a collection of transformation elements and choose Add destination, then Amazon S3.
- For Dataset name, enter a name for the new dataset, for example NYC_export.
- For File type, choose CSV.
- For Delimiter, choose Comma.
- For Compression, choose None.
- For Amazon S3 location, use the same bucket name that we created earlier.
- Choose Add destination.
Choose Create job.
For Job name, enter a name or keep the autogenerated option and choose the destination. We have only one destination, S3:testingtabulardata, but you might have multiple destinations from different steps in your workflow. Leave the KMS key ARN field empty and choose Next.
Now you have to configure the compute capacity for a job. You can keep all default values for this example.
- For Instance type, use ml.m5.4xlarge.
- For Instance count, use 2.
- You can explore Additional configuration, but keep the default settings.
- Choose Run.
Now your job has started, and it takes some time to process 6 GB of data according to our Data Wrangler processing flow. The cost for this job will be around $2 USD, because ml.m5.4xlarge costs $0.922 USD per hour and we’re using two of them.
If you choose the job name, you’re redirected to a new window with the job details.
On the job details page, you can see all the parameters from the previous steps.
When the job status changes to Completed, you can also check the Processing time (seconds) value. This processing job takes around 5–10 minutes to complete.
When the job is complete, the train and test output files are available in the corresponding S3 output folders. You can find the output location from the processing job configurations.
After the Data Wrangler processing job is complete, we can check the results saved in our S3 bucket. Don't forget to update the job_name variable with your job name.
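As a minimal sketch of that check (the bucket name, job name, and output prefix layout below are assumptions; substitute the values from your own export step), you can list the objects the processing job wrote:

```python
import boto3

# Placeholder values: replace with your bucket and your Data Wrangler processing job name.
bucket = "testingtabulardata"
job_name = "data-wrangler-flow-processing-2023-01-01T00-00-00"

s3 = boto3.client("s3")
# Assumption: the output prefix follows the destination configured in the export step.
response = s3.list_objects_v2(Bucket=bucket, Prefix=f"export-{job_name}/output/")
for obj in response.get("Contents", []):
    print(obj["Key"])
```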
You can now use this exported data for running ML models.
Clean up
Delete your S3 buckets and your Data Wrangler flow in order to delete the underlying resources and prevent unwanted costs after you finish the experiment.
Conclusion
In this post, we showed how you can import the tabular pre-built data flow into Data Wrangler, point it at our dataset, and export the results to Amazon S3. If your use cases require you to manipulate time series data or join multiple datasets, you can go through the other pre-built sample flows in the GitHub repo.
After you have imported a pre-built data prep workflow, you can integrate it with Amazon SageMaker Processing, Amazon SageMaker Pipelines, and Amazon SageMaker Feature Store to simplify the task of processing, sharing, and storing ML training data. You can also export this sample data flow to a Python script and create a custom ML data prep pipeline, thereby accelerating your release velocity.
We encourage you to check out our GitHub repository to get hands-on practice and find new ways to improve model accuracy! To learn more about SageMaker, visit the Amazon SageMaker Developer Guide.
About the Authors
Isha Dua is a Senior Solutions Architect based in the San Francisco Bay Area. She helps AWS Enterprise customers grow by understanding their goals and challenges, and guides them on how they can architect their applications in a cloud-native manner while making sure they are resilient and scalable. She’s passionate about machine learning technologies and environmental sustainability.
Transfer learning for TensorFlow object detection models in Amazon SageMaker
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning (ML) practitioners get started on training and deploying ML models quickly. You can use these algorithms and models for both supervised and unsupervised learning. They can process various types of input data, including tabular, image, and text.
This post is the second in a series on the new built-in algorithms in SageMaker. In the first post, we showed how SageMaker provides a built-in algorithm for image classification. Today, we announce that SageMaker provides a new built-in algorithm for object detection using TensorFlow. This supervised learning algorithm supports transfer learning for many pre-trained models available in TensorFlow. It takes an image as input and outputs the objects present in the image along with the bounding boxes. You can fine-tune these pre-trained models using transfer learning even when a large number of training images aren’t available. It’s available through the SageMaker built-in algorithms as well as through the SageMaker JumpStart UI in Amazon SageMaker Studio. For more information, refer to Object Detection Tensorflow and the example notebook Introduction to SageMaker Tensorflow – Object Detection.
Object detection with TensorFlow in SageMaker provides transfer learning on many pre-trained models available in TensorFlow Hub. According to the number of class labels in the training data, a new randomly initialized object detection head replaces the existing head of the TensorFlow model. Either the whole network, including the pre-trained model, or only the top layer (object detection head) can be fine-tuned on the new training data. In this transfer learning mode, you can achieve training even with a smaller dataset.
How to use the new TensorFlow object detection algorithm
This section describes how to use the TensorFlow object detection algorithm with the SageMaker Python SDK. For information on how to use it from the Studio UI, see SageMaker JumpStart.
The algorithm supports transfer learning for the pre-trained models listed in TensorFlow models. Each model is identified by a unique model_id. The following code shows how to fine-tune a ResNet50 V1 FPN model, identified by the model_id tensorflow-od1-ssd-resnet50-v1-fpn-640x640-coco17-tpu-8, on a custom training dataset. For each model_id, in order to launch a SageMaker training job through the Estimator class of the SageMaker Python SDK, you need to fetch the Docker image URI, training script URI, and pre-trained model URI through the utility functions provided in SageMaker. The training script URI contains all the necessary code for data processing, loading the pre-trained model, model training, and saving the trained model for inference. The pre-trained model URI contains the pre-trained model architecture definition and the model parameters. Note that the Docker image URI and the training script URI are the same for all the TensorFlow object detection models, whereas the pre-trained model URI is specific to the particular model. The pre-trained model tarballs have been pre-downloaded from TensorFlow and saved with the appropriate model signature in Amazon Simple Storage Service (Amazon S3) buckets, such that the training job runs in network isolation. See the following code:
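The following is a sketch of that retrieval using the JumpStart utility functions in the SageMaker Python SDK; the version wildcard and training instance type are assumptions, so adjust them for your environment.

```python
from sagemaker import image_uris, model_uris, script_uris

# The model_id comes from the post; "*" picks the latest model version.
model_id, model_version = (
    "tensorflow-od1-ssd-resnet50-v1-fpn-640x640-coco17-tpu-8",
    "*",
)
training_instance_type = "ml.p3.2xlarge"  # assumption: any supported GPU instance type works

# Docker image URI for training.
train_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    model_id=model_id,
    model_version=model_version,
    image_scope="training",
    instance_type=training_instance_type,
)

# Training script and pre-trained model artifacts.
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)
train_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="training"
)
```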
With these model-specific training artifacts, you can construct an object of the Estimator class:
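Continuing from the previous snippet, a minimal construction might look like the following; the output bucket, entry point name, and instance settings are assumptions based on the JumpStart example notebooks rather than required values.

```python
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
from sagemaker.session import Session

aws_role = get_execution_role()
output_bucket = Session().default_bucket()  # assumption: use the default SageMaker bucket
s3_output_location = f"s3://{output_bucket}/jumpstart-od-training/output"

od_estimator = Estimator(
    role=aws_role,
    image_uri=train_image_uri,            # from the previous snippet
    source_dir=train_source_uri,
    model_uri=train_model_uri,
    entry_point="transfer_learning.py",   # assumption: entry point used by the JumpStart scripts
    instance_count=1,
    instance_type=training_instance_type,
    max_run=360000,
    output_path=s3_output_location,
)
```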
Next, for transfer learning on your custom dataset, you might need to change the default values of the training hyperparameters, which are listed in Hyperparameters. You can fetch a Python dictionary of these hyperparameters with their default values by calling hyperparameters.retrieve_default, update them as needed, and then pass them to the Estimator class. Note that the default values of some of the hyperparameters are different for different models. For large models, the default batch size is smaller and the train_only_top_layer hyperparameter is set to True. The hyperparameter train_only_top_layer defines which model parameters change during the fine-tuning process. If train_only_top_layer is True, parameters of the classification layers change and the rest of the parameters remain constant during the fine-tuning process. On the other hand, if train_only_top_layer is False, all parameters of the model are fine-tuned. See the following code:
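A short sketch of that flow follows; the overrides shown are illustrative, not recommended values.

```python
from sagemaker import hyperparameters

# Fetch the defaults for this model, then override a few of them.
hyperparams = hyperparameters.retrieve_default(
    model_id=model_id, model_version=model_version
)
hyperparams["epochs"] = "5"                    # illustrative override
hyperparams["train_only_top_layer"] = "True"   # fine-tune only the detection head
```

Pass this dictionary to the Estimator through its hyperparameters argument when you construct it.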
We provide the PennFudanPed dataset as a default dataset for fine-tuning the models. The dataset comprises images of pedestrians. The following code provides the default training dataset hosted in S3 buckets:
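As a hedged sketch, the default dataset location can be assembled as follows; the bucket naming pattern and prefix are assumptions that may differ by Region, so prefer the exact path shown in the example notebook.

```python
import boto3

# Assumption: the PennFudanPed training data is hosted in the Regional JumpStart bucket
# under an illustrative prefix; verify the exact location in the example notebook.
aws_region = boto3.Session().region_name
training_dataset_s3_path = (
    f"s3://jumpstart-cache-prod-{aws_region}/training-datasets/PennFudanPed_COCO_format/"
)
```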
Finally, to launch the SageMaker training job for fine-tuning the model, call .fit on the object of the Estimator class, while passing the S3 location of the training dataset:
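Continuing from the earlier snippets, the call is a one-liner; the "training" channel name is an assumption based on the JumpStart example notebooks.

```python
# Launch the fine-tuning job on the dataset defined above.
od_estimator.fit({"training": training_dataset_s3_path}, logs=True)
```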
For more information about how to use the new SageMaker TensorFlow object detection algorithm for transfer learning on a custom dataset, deploy the fine-tuned model, run inference on the deployed model, and deploy the pre-trained model as is without first fine-tuning on a custom dataset, see the following example notebook: Introduction to SageMaker TensorFlow – Object Detection.
Input/output interface for the TensorFlow object detection algorithm
You can fine-tune each of the pre-trained models listed in TensorFlow Models to any given dataset comprising images belonging to any number of classes. The objective is to minimize prediction error on the input data. The model returned by fine-tuning can be further deployed for inference. The following are the instructions for how the training data should be formatted for input to the model:
- Input – A directory with sub-directory images and a file annotations.json.
- Output – There are two outputs. The first is a fine-tuned model, which can be deployed for inference or further trained using incremental training. The second is a file that maps class indexes to class labels; it is saved along with the model.
The input directory should look like the following example:
The annotations.json file should have information for bounding_boxes and their class labels. It should have a dictionary with the keys "images" and "annotations". The value for the "images" key should be a list of entries, one for each image, of the form {"file_name": image_name, "height": height, "width": width, "id": image_id}. The value of the "annotations" key should be a list of entries, one for each bounding box, of the form {"image_id": image_id, "bbox": [xmin, ymin, xmax, ymax], "category_id": bbox_label}.
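The following sketch writes an annotations.json in that shape; the file names, image sizes, and box coordinates are made-up values for illustration only.

```python
import json

# Illustrative annotations.json content; replace with entries for your own images.
annotations = {
    "images": [
        {"file_name": "pedestrian_0001.png", "height": 536, "width": 559, "id": 1},
        {"file_name": "pedestrian_0002.png", "height": 512, "width": 499, "id": 2},
    ],
    "annotations": [
        {"image_id": 1, "bbox": [159.0, 181.0, 301.0, 430.0], "category_id": 0},
        {"image_id": 1, "bbox": [419.0, 170.0, 534.0, 485.0], "category_id": 0},
        {"image_id": 2, "bbox": [187.0, 101.0, 310.0, 410.0], "category_id": 1},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(annotations, f)
```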
Inference with the TensorFlow object detection algorithm
The generated models can be hosted for inference and support encoded .jpg, .jpeg, and .png image formats as the application/x-image content type. The input image is resized automatically. The output contains the boxes, predicted classes, and scores for each prediction. The TensorFlow object detection model processes a single image per request and outputs only one line in the JSON. The following is an example of a response in JSON:
If accept is set to application/json, then the model only outputs predicted boxes, classes, and scores. For more details on training and inference, see the sample notebook Introduction to SageMaker TensorFlow – Object Detection.
Use SageMaker built-in algorithms through the JumpStart UI
You can also use SageMaker TensorFlow object detection and any of the other built-in algorithms with a few clicks via the JumpStart UI. JumpStart is a SageMaker feature that allows you to train and deploy built-in algorithms and pre-trained models from various ML frameworks and model hubs through a graphical interface. It also allows you to deploy fully fledged ML solutions that string together ML models and various other AWS services to solve a targeted use case.
Following are two videos that show how you can replicate the same fine-tuning and deployment process we just went through with a few clicks via the JumpStart UI.
Fine-tune the pre-trained model
Here is the process to fine-tune the same pre-trained object detection model.
Deploy the fine-tuned model
After model training is finished, you can directly deploy the model to a persistent, real-time endpoint with one click.
Conclusion
In this post, we announced the launch of the SageMaker TensorFlow object detection built-in algorithm. We provided example code on how to do transfer learning on a custom dataset using a pre-trained model from TensorFlow using this algorithm.
For more information, check out the documentation and the example notebook.
About the authors
Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS, and SODA conferences.
João Moura is an AI/ML Specialist Solutions Architect at Amazon Web Services. He is mostly focused on NLP use cases and helping customers optimize deep learning model training and deployment. He is also an active proponent of low-code ML solutions and ML-specialized hardware.
Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana Champaign. He is an active researcher in machine learning and statistical inference and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.
Transfer learning for TensorFlow text classification models in Amazon SageMaker
Amazon SageMaker provides a suite of built-in algorithms, pre-trained models, and pre-built solution templates to help data scientists and machine learning (ML) practitioners get started training and deploying ML models quickly. You can use these algorithms and models for both supervised and unsupervised learning. They can process various types of input data, including tabular, image, and text.
This post is the third in a series on the new built-in algorithms in SageMaker. In the first post, we showed how SageMaker provides a built-in algorithm for image classification. In the second post, we showed how SageMaker provides a built-in algorithm for object detection. Today, we announce that SageMaker provides a new built-in algorithm for text classification using TensorFlow. This supervised learning algorithm supports transfer learning for many pre-trained models available in TensorFlow hub. It takes a piece of text as input and outputs the probability for each of the class labels. You can fine-tune these pre-trained models using transfer learning even when a large corpus of text isn’t available. It’s available through the SageMaker built-in algorithms, as well as through the SageMaker JumpStart UI in Amazon SageMaker Studio. For more information, refer to Text Classification and the example notebook Introduction to JumpStart – Text Classification.
Text classification with TensorFlow in SageMaker provides transfer learning on many pre-trained models available in TensorFlow Hub. According to the number of class labels in the training data, a classification layer is attached to the pre-trained TensorFlow Hub model. The classification layer consists of a dropout layer and a dense (fully connected) layer with an L2 regularizer, and is initialized with random weights. Model training has hyperparameters for the dropout rate of the dropout layer and the L2 regularization factor for the dense layer. Then, either the whole network, including the pre-trained model, or only the top classification layer can be fine-tuned on the new training data. In this transfer learning mode, training can be achieved even with a smaller dataset.
How to use the new TensorFlow text classification algorithm
This section describes how to use the TensorFlow text classification algorithm with the SageMaker Python SDK. For information on how to use it from the Studio UI, see SageMaker JumpStart.
The algorithm supports transfer learning for the pre-trained models listed in TensorFlow models. Each model is identified by a unique model_id. The following code shows how to fine-tune the BERT base model, identified by the model_id tensorflow-tc-bert-en-uncased-L-12-H-768-A-12-2, on a custom training dataset. For each model_id, to launch a SageMaker training job through the Estimator class of the SageMaker Python SDK, you must fetch the Docker image URI, training script URI, and pre-trained model URI through the utility functions provided in SageMaker. The training script URI contains all of the necessary code for data processing, loading the pre-trained model, model training, and saving the trained model for inference. The pre-trained model URI contains the pre-trained model architecture definition and the model parameters, and is specific to the particular model. The pre-trained model tarballs have been pre-downloaded from TensorFlow and saved with the appropriate model signature in Amazon Simple Storage Service (Amazon S3) buckets, so that the training job runs in network isolation. See the following code:
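A sketch of the artifact retrieval, mirroring the object detection example earlier in this document; the version wildcard and instance type are assumptions.

```python
from sagemaker import image_uris, model_uris, script_uris

model_id, model_version = "tensorflow-tc-bert-en-uncased-L-12-H-768-A-12-2", "*"
training_instance_type = "ml.p3.2xlarge"  # assumption: any supported GPU instance type works

train_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    model_id=model_id,
    model_version=model_version,
    image_scope="training",
    instance_type=training_instance_type,
)
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)
train_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="training"
)
```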
With these model-specific training artifacts, you can construct an object of the Estimator class:
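Continuing from the previous snippet, a minimal sketch follows; the output bucket, entry point name, and instance settings are assumptions based on the JumpStart example notebooks.

```python
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
from sagemaker.session import Session

output_bucket = Session().default_bucket()  # assumption: use the default SageMaker bucket
s3_output_location = f"s3://{output_bucket}/jumpstart-tc-training/output"

tc_estimator = Estimator(
    role=get_execution_role(),
    image_uri=train_image_uri,            # from the previous snippet
    source_dir=train_source_uri,
    model_uri=train_model_uri,
    entry_point="transfer_learning.py",   # assumption: entry point used by the JumpStart scripts
    instance_count=1,
    instance_type=training_instance_type,
    max_run=360000,
    output_path=s3_output_location,
)
```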
Next, for transfer learning on your custom dataset, you might need to change the default values of the training hyperparameters, which are listed in Hyperparameters. You can fetch a Python dictionary of these hyperparameters with their default values by calling hyperparameters.retrieve_default, update them as needed, and then pass them to the Estimator class. Note that the default values of some of the hyperparameters are different for different models. For large models, the default batch size is smaller and the train_only_top_layer hyperparameter is set to True. The hyperparameter train_only_top_layer defines which model parameters change during the fine-tuning process. If train_only_top_layer is True, then parameters of the classification layers change and the rest of the parameters remain constant during the fine-tuning process. On the other hand, if train_only_top_layer is False, then all of the parameters of the model are fine-tuned. See the following code:
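A compact sketch, with illustrative overrides only:

```python
from sagemaker import hyperparameters

hyperparams = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
hyperparams["epochs"] = "3"                    # illustrative override
hyperparams["train_only_top_layer"] = "True"   # fine-tune only the classification layer
```

Pass this dictionary to the Estimator through its hyperparameters argument.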
We provide SST2 as the default dataset for fine-tuning the models. The dataset contains positive and negative movie reviews. It has been downloaded from TensorFlow under the Apache 2.0 License. The following code provides the default training dataset hosted in S3 buckets:
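As a hedged sketch, the default dataset location can be assembled as follows; the bucket naming pattern and prefix are assumptions that may differ by Region, so prefer the exact path shown in the example notebook.

```python
import boto3

# Assumption: the SST2 training data is hosted in the Regional JumpStart bucket under an
# illustrative prefix; verify the exact location in the example notebook.
aws_region = boto3.Session().region_name
training_dataset_s3_path = f"s3://jumpstart-cache-prod-{aws_region}/training-datasets/SST2/"
```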
Finally, to launch the SageMaker training job for fine-tuning the model, call .fit on the object of the Estimator class, while passing the Amazon S3 location of the training dataset:
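Continuing from the earlier snippets; the "training" channel name is an assumption based on the JumpStart example notebooks.

```python
# Launch the fine-tuning job on the SST2 dataset defined above.
tc_estimator.fit({"training": training_dataset_s3_path}, logs=True)
```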
For more information about how to use the new SageMaker TensorFlow text classification algorithm for transfer learning on a custom dataset, deploy the fine-tuned model, run inference on the deployed model, and deploy the pre-trained model as is without first fine-tuning on a custom dataset, see the following example notebook: Introduction to JumpStart – Text Classification.
Input/output interface for the TensorFlow text classification algorithm
You can fine-tune each of the pre-trained models listed in TensorFlow Models to any given dataset made up of text sentences with any number of classes. The pre-trained model attaches a classification layer to the Text Embedding model and initializes the layer parameters to random values. The output dimension of the classification layer is determined based on the number of classes detected in the input data. The objective is to minimize classification errors on the input data. The model returned by fine-tuning can be further deployed for inference.
The following instructions describe how the training data should be formatted for input to the model:
- Input – A directory containing a data.csv file. Each row of the first column should have integer class labels between 0 and the number of classes. Each row of the second column should have the corresponding text data.
- Output – A fine-tuned model that can be deployed for inference or further trained using incremental training. A file mapping class indexes to class labels is saved along with the models.
The following is an example of an input CSV file. Note that the file should not have any header. The file should be hosted in an S3 bucket with a path similar to the following: s3://bucket_name/input_directory/. Note that the trailing / is required.
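As an illustration of the expected two-column layout, the following sketch writes a small data.csv; the sentences and labels are made up and do not come from the SST2 dataset.

```python
import csv

# Column 1: integer class label, column 2: the text. No header row.
rows = [
    [0, "this movie was painfully dull and far too long"],
    [1, "a sharp, funny script with terrific performances"],
    [0, "the plot never comes together"],
    [1, "one of the most satisfying films of the year"],
]

with open("data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```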
Inference with the TensorFlow text classification algorithm
The generated models can be hosted for inference and support text as the application/x-text content type. The output contains the probability values, class labels for all of the classes, and the predicted label corresponding to the class index with the highest probability, encoded in JSON format. The model processes a single string per request and outputs only one line. The following is an example of a JSON format response:
If accept is set to application/json, then the model only outputs probabilities. For more details on training and inference, see the sample notebook Introduction to JumpStart – Text Classification.
Use SageMaker built-in algorithms through the JumpStart UI
You can also use SageMaker TensorFlow text classification and any of the other built-in algorithms with a few clicks via the JumpStart UI. JumpStart is a SageMaker feature that lets you train and deploy built-in algorithms and pre-trained models from various ML frameworks and model hubs through a graphical interface. Furthermore, it lets you deploy fully-fledged ML solutions that string together ML models and various other AWS services to solve a targeted use case.
Following are two videos that show how you can replicate the same fine-tuning and deployment process we just went through with a few clicks via the JumpStart UI.
Fine-tune the pre-trained model
Here is the process to fine-tune the same pre-trained text classification model.
Deploy the fine-tuned model
After model training is finished, you can directly deploy the model to a persistent, real-time endpoint with one click.
Conclusion
In this post, we announced the launch of the SageMaker TensorFlow text classification built-in algorithm. We provided example code showing how to do transfer learning on a custom dataset with this algorithm, using a pre-trained model from TensorFlow Hub.
For more information, check out the documentation and the example notebook Introduction to JumpStart – Text Classification.
About the authors
Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS and SODA conferences.
João Moura is an AI/ML Specialist Solutions Architect at Amazon Web Services. He is mostly focused on NLP use-cases and helping customers optimize deep learning model training and deployment. He is also an active proponent of low-code ML solutions and ML-specialized hardware.
Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana Champaign. He is an active researcher in machine learning and statistical inference and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.
Jens Lehmann receives Semantic Web journal 10-year award
The work of Lehmann and three co-authors helped demonstrate the feasibility of large-scale virtual knowledge graphs.
Intelligent document processing with AWS AI and Analytics services in the insurance industry: Part 2
In Part 1 of this series, we discussed intelligent document processing (IDP), and how IDP can accelerate claims processing use cases in the insurance industry. We discussed how we can use AWS AI services to accurately categorize claims documents along with supporting documents. We also discussed how to extract various types of documents in an insurance claims package, such as forms, tables, or specialized documents such as invoices, receipts, or ID documents. We looked into the challenges of legacy document processing, which is time-consuming, error-prone, expensive, and difficult to process at scale, and how you can use AWS AI services to help implement your IDP pipeline.
In this post, we walk you through advanced IDP features for document extraction, querying, and enrichment. We also look into how to further use the extracted structured information from claims data to get insights using AWS Analytics and visualization services. We highlight how structured data extracted through IDP can help detect fraudulent claims using AWS Analytics services.
Solution overview
The following diagram illustrates the phases of IDP using AWS AI services. In Part 1, we discussed the first three phases of the IDP workflow. In this post, we expand on the extraction step and the remaining phases, which include integrating IDP with AWS Analytics services.
We use these analytics services for further insights and visualizations, and to detect fraudulent claims using structured, normalized data from IDP. The following diagram illustrates the solution architecture.
The phases we discuss in this post use the following key services:
- Amazon Comprehend Medical is a HIPAA-eligible natural language processing (NLP) service that uses machine learning (ML) models that have been pre-trained to understand and extract health data from medical text, such as prescriptions, procedures, or diagnoses.
- AWS Glue is a part of the AWS Analytics services stack, and is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, ML, and application development.
- Amazon Redshift is another service in the Analytics stack. Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud.
Prerequisites
Before you get started, refer to Part 1 for a high-level overview of the insurance use case with IDP and details about the data capture and classification stages.
For more information regarding the code samples, refer to our GitHub repo.
Extraction phase
In Part 1, we saw how to use Amazon Textract APIs to extract information like forms and tables from documents, and how to analyze invoices and identity documents. In this post, we enhance the extraction phase with Amazon Comprehend to extract default and custom entities specific to custom use cases.
Insurance carriers often come across dense text in insurance claims applications, such as a patient’s discharge summary letter (see the following example image). It can be difficult to automatically extract information from these types of documents where there is no definite structure. To address this, we can use the following methods to extract key business information from the document:
- Amazon Comprehend default entity recognition
- Amazon Comprehend custom entity recognizer
Extract default entities with the Amazon Comprehend DetectEntities API
We run the following code on the sample medical transcription document:
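A minimal sketch of that call with the boto3 Comprehend client; the sample text is a made-up placeholder for the transcription text extracted earlier in the pipeline.

```python
import boto3

comprehend = boto3.client("comprehend")

# Placeholder text: in the pipeline this comes from the document text extracted by Amazon Textract.
sample_text = "Patient John Doe was admitted on 2022-05-01 with a fracture of the left wrist."

response = comprehend.detect_entities(Text=sample_text, LanguageCode="en")
for entity in response["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 3))
```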
The following screenshot shows a collection of entities identified in the input text. The output has been shortened for the purposes of this post. Refer to the GitHub repo for a detailed list of entities.
Extract custom entities with Amazon Comprehend custom entity recognition
The response from the DetectEntities API includes the default entities. However, we’re interested in knowing specific entity values, such as the patient’s name (denoted by the default entity PERSON), or the patient’s ID (denoted by the default entity OTHER). To recognize these custom entities, we train an Amazon Comprehend custom entity recognizer model. We recommend following the comprehensive steps on how to train and deploy a custom entity recognition model in the GitHub repo.
After we deploy the custom model, we can use the helper function get_entities() to retrieve custom entities like PATIENT_NAME and PATIENT_D from the API response:
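The helper in the repo wraps the same DetectEntities API, this time pointed at the custom recognizer’s real-time endpoint. The following is a sketch of that idea rather than the repo’s exact code; the endpoint ARN and sample text are placeholders.

```python
import boto3

comprehend = boto3.client("comprehend")

def get_entities(text, endpoint_arn):
    """Sketch of the repo helper: return custom entities detected by the custom recognizer."""
    response = comprehend.detect_entities(Text=text, EndpointArn=endpoint_arn)
    return [(e["Type"], e["Text"], round(e["Score"], 3)) for e in response["Entities"]]

# Placeholder ARN: replace with your deployed custom entity recognizer endpoint.
endpoint_arn = "arn:aws:comprehend:us-east-1:111122223333:entity-recognizer-endpoint/idp-demo"
sample_text = "Patient John Doe (ID 12345) was discharged in stable condition."  # illustrative
print(get_entities(sample_text, endpoint_arn))
```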
The following screenshot shows our results.
Enrichment phase
In the document enrichment phase, we perform enrichment functions on healthcare-related documents to draw valuable insights. We look at the following types of enrichment:
- Extract domain-specific language – We use Amazon Comprehend Medical to extract medical-specific ontologies like ICD-10-CM, RxNorm, and SNOMED CT
- Redact sensitive information – We use Amazon Comprehend to redact personally identifiable information (PII), and Amazon Comprehend Medical for protected health information (PHI) redaction
Extract medical information from unstructured medical text
Documents such as medical providers’ notes and clinical trial reports include dense medical text. Insurance claims carriers need to identify the relationships among the extracted health information from this dense text and link them to medical ontologies like ICD-10-CM, RxNorm, and SNOMED CT codes. This is very valuable in automating claim capture, validation, and approval workflows for insurance companies to accelerate and simplify claim processing. Let’s look at how we can use the Amazon Comprehend Medical InferICD10CM API to detect possible medical conditions as entities and link them to their codes:
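A sketch with the boto3 Comprehend Medical client; the clinical text is a made-up placeholder.

```python
import boto3

comprehend_medical = boto3.client("comprehendmedical")

# Placeholder text: in the pipeline this comes from the Amazon Textract DetectDocumentText output.
clinical_text = "The patient presents with type 2 diabetes mellitus and essential hypertension."

icd10_response = comprehend_medical.infer_icd10_cm(Text=clinical_text)
for entity in icd10_response["Entities"]:
    concepts = entity.get("ICD10CMConcepts", [])
    if concepts:
        # Print the highest-confidence ICD-10-CM concept linked to each detected condition.
        print(entity["Text"], "->", concepts[0]["Code"], concepts[0]["Description"])
```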
For the input text, which we can pass in from the Amazon Textract DetectDocumentText API, the InferICD10CM API returns the following output (the output has been abbreviated for brevity).
Similarly, we can use the Amazon Comprehend Medical InferRxNorm API to identify medications and the InferSNOMEDCT API to detect medical entities within healthcare-related insurance documents.
Perform PII and PHI redaction
Insurance claims packages require a lot of privacy compliance and regulations because they contain both PII and PHI data. Insurance carriers can reduce compliance risk by redacting information like policy numbers or the patient’s name.
Let’s look at an example of a patient’s discharge summary. We use the Amazon Comprehend DetectPiiEntities API to detect PII entities within the document and protect the patient’s privacy by redacting these entities:
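A sketch of the call; the discharge summary text is a made-up placeholder.

```python
import boto3

comprehend = boto3.client("comprehend")

# Placeholder text: in the pipeline this comes from the extracted discharge summary.
discharge_text = "Patient Jane Roe, policy number 123456789, was discharged on 06/15/2022."

pii_response = comprehend.detect_pii_entities(Text=discharge_text, LanguageCode="en")
for entity in pii_response["Entities"]:
    begin, end = entity["BeginOffset"], entity["EndOffset"]
    print(entity["Type"], discharge_text[begin:end])
```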
We get the following PII entities in the response from the detect_pii_entities() API:
We can then redact the PII entities that were detected by using the bounding box geometry of the entities from the document. For that, we use a helper tool called amazon-textract-overlayer. For more information, refer to Textract-Overlayer. The following screenshots compare a document before and after redaction.
Similar to the Amazon Comprehend DetectPiiEntities API, we can also use the Amazon Comprehend Medical DetectPHI API to detect PHI data in the clinical text being examined. For more information, refer to Detect PHI.
Review and validation phase
In the document review and validation phase, we can now verify if the claim package meets the business’s requirements, because we have all the information collected from the documents in the package from earlier stages. We can do this by introducing a human in the loop to review and validate all the fields, or by using an auto-approval process for low-dollar claims, before sending the package to downstream applications. We can use Amazon Augmented AI (Amazon A2I) to automate the human review process for insurance claims processing.
Now that we have all required data extracted and normalized from claims processing using AI services for IDP, we can extend the solution to integrate with AWS Analytics services such as AWS Glue and Amazon Redshift to solve additional use cases and provide further analytics and visualizations.
Detect fraudulent insurance claims
In this post, we implement a serverless architecture where the extracted and processed data is stored in a data lake and is used to detect fraudulent insurance claims using ML. We use Amazon Simple Storage Service (Amazon S3) to store the processed data. We can then use AWS Glue or Amazon EMR to cleanse the data and add additional fields to make it consumable for reporting and ML. After that, we use Amazon Redshift ML to build a fraud detection ML model. Finally, we build reports using Amazon QuickSight to get insights into the data.
Set up the Amazon Redshift external schema
For the purpose of this example, we have created a sample dataset that emulates the output of an ETL (extract, transform, and load) process, and we use the AWS Glue Data Catalog as the metadata catalog. First, we create a database named idp_demo in the Data Catalog and an external schema in Amazon Redshift called idp_insurance_demo (see the following code). We use an AWS Identity and Access Management (IAM) role to grant permissions to the Amazon Redshift cluster to access Amazon S3 and Amazon SageMaker. For more information about how to set up this IAM role with least privilege, refer to Cluster and configure setup for Amazon Redshift ML administration.
Create Amazon Redshift external table
The next step is to create an external table in Amazon Redshift referencing the S3 location where the file is located. In this case, our file is a comma-separated text file. We also want to skip the header row from the file, which can be configured in the table properties section. See the following code:
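The following sketch runs both statements (the external schema from the previous section and the external table) through the Amazon Redshift Data API with boto3; the cluster identifier, database, secret, IAM role, S3 location, and the claims column list are placeholders rather than the repo’s exact values.

```python
import boto3

redshift_data = boto3.client("redshift-data")

def run_sql(sql):
    # Placeholder connection details: replace with your cluster, database, and secret.
    return redshift_data.execute_statement(
        ClusterIdentifier="idp-demo-cluster",
        Database="dev",
        SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-admin",
        Sql=sql,
    )

# External schema pointing at the idp_demo database in the AWS Glue Data Catalog.
run_sql("""
CREATE EXTERNAL SCHEMA idp_insurance_demo
FROM DATA CATALOG
DATABASE 'idp_demo'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")

# External table over the CSV output of the ETL process, skipping the header row.
run_sql("""
CREATE EXTERNAL TABLE idp_insurance_demo.claims (
    id            INTEGER,
    claim_state   VARCHAR(32),
    insurance_co  VARCHAR(64),
    total_charge  DECIMAL(12,2),
    fraud         INTEGER
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION 's3://idp-demo-bucket/claims/'
TABLE PROPERTIES ('skip.header.line.count'='1');
""")
```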
Create training and test datasets
After we create the external table, we prepare our dataset for ML by splitting it into a training set and a test set. We create a new external table called claim_train, which consists of all records with ID <= 85000 from the claims table. This is the training set that we train our ML model on. We create another external table called claim_test, which consists of all records with ID > 85000, to be the test set that we test the ML model on:
Create an ML model with Amazon Redshift ML
Now we create the model using the CREATE MODEL command (see the following code). We select the relevant columns from the claim_train table that can determine a fraudulent transaction. The goal of this model is to predict the value of the fraud column; therefore, fraud is added as the prediction target. After the model is trained, it creates a function named insurance_fraud_model. This function is used for inference while running SQL statements to predict the value of the fraud column for new records.
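A sketch of the CREATE MODEL statement, again run through the Redshift Data API; the feature columns, connection details, IAM role, and S3 bucket are placeholders, so use the columns from your own claims schema.

```python
import boto3

create_model_sql = """
CREATE MODEL insurance_fraud_model
FROM (
    SELECT claim_state, insurance_co, total_charge, fraud   -- illustrative feature columns
    FROM idp_insurance_demo.claim_train
)
TARGET fraud
FUNCTION insurance_fraud_model
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'idp-demo-bucket');
"""

boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="idp-demo-cluster",   # placeholder cluster
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-admin",
    Sql=create_model_sql,
)
```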
Evaluate ML model metrics
After we create the model, we can run queries to check its accuracy. We use the insurance_fraud_model function to predict the value of the fraud column for new records. Run the following query on the claim_test table to create a confusion matrix:
Detect fraud using the ML model
After we create the new model, as new claims data is inserted into the data warehouse or data lake, we can use the insurance_fraud_model function to identify fraudulent transactions. We do this by first loading the new data into a temporary table. Then we use the insurance_fraud_model function to calculate the fraud flag for each new transaction and insert the data along with the flag into the final table, which in this case is the claims table.
Visualize the claims data
When the data is available in Amazon Redshift, we can create visualizations using QuickSight. We can then share the QuickSight dashboards with business users and analysts. To create the QuickSight dashboard, you first need to create an Amazon Redshift dataset in QuickSight. For instructions, refer to Creating a dataset from a database.
After you create the dataset, you can create a new analysis in QuickSight using the dataset. The following are some sample reports we created:
- Total number of claims by state, grouped by the fraud field – This chart shows us the proportion of fraudulent transactions compared to the total number of transactions in a particular state.
- Sum of the total dollar value of the claims, grouped by the fraud field – This chart shows us the proportion of the dollar amount of fraudulent transactions compared to the total dollar amount of transactions in a particular state.
- Total number of transactions per insurance company, grouped by the fraud field – This chart shows us how many claims were filed for each insurance company and how many of them are fraudulent.
- Total sum of fraudulent transactions by state displayed on a US map – This chart shows only the fraudulent transactions and displays the total charges for those transactions by state on the map. A darker shade of blue indicates higher total charges. We can further analyze this by city within that state and ZIP codes within the city to better understand the trends.
Clean up
To prevent incurring future charges to your AWS account, delete the resources that you provisioned in the setup by following the instructions in the Cleanup section in our repo.
Conclusion
In this two-part series, we saw how to build an end-to-end IDP pipeline with little or no ML experience. We explored a claims processing use case in the insurance industry and how IDP can help automate this use case using services such as Amazon Textract, Amazon Comprehend, Amazon Comprehend Medical, and Amazon A2I. In Part 1, we demonstrated how to use AWS AI services for document extraction. In Part 2, we extended the extraction phase and performed data enrichment. Finally, we extended the structured data extracted from IDP for further analytics, and created visualizations to detect fraudulent claims using AWS Analytics services.
We recommend reviewing the security sections of the Amazon Textract, Amazon Comprehend, and Amazon A2I documentation and following the guidelines provided. To learn more about the pricing of the solution, review the pricing details of Amazon Textract, Amazon Comprehend, and Amazon A2I.
About the Authors
Chinmayee Rane is an AI/ML Specialist Solutions Architect at Amazon Web Services. She is passionate about applied mathematics and machine learning. She focuses on designing intelligent document processing solutions for AWS customers. Outside of work, she enjoys salsa and bachata dancing.
Uday Narayanan is an Analytics Specialist Solutions Architect at AWS. He enjoys helping customers find innovative solutions to complex business challenges. His core areas of focus are data analytics, big data systems, and machine learning. In his spare time, he enjoys playing sports, binge-watching TV shows, and traveling.
Sonali Sahu is leading the Intelligent Document Processing AI/ML Solutions Architect team at Amazon Web Services. She is a passionate technophile and enjoys working with customers to solve complex problems using innovation. Her core area of focus is artificial intelligence and machine learning for intelligent document processing.