Papers focus on learning previously unseen intents and personalization, both generally and in the specific case of recipe recommendation.
From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine
NVIDIA is collaborating with clinical organizations across Europe to bring AI to the point of care, bolstering clinical pathways with efficiency gains and new data dimensions that can be included in medical decision-making processes.
The University Hospital Essen, in northwestern Germany, is one such organization taking machine learning from the bits to the bedside — using NVIDIA technology and AI to build smart hospitals of the future.
Jens Kleesiek and Felix Nensa, professors at the School of Medicine of the University of Duisburg-Essen, are part of a four-person team leading the research groups that established the Institute for Artificial Intelligence in Medicine (IKIM). The technology developed by IKIM is integrated with the IT infrastructure of University Hospital Essen.
IKIM hosts a data annotation lab, overseen by a team of board-certified radiologists, that accelerates the labeling of anatomic structures in medical images using MONAI, an open-source, PyTorch-based framework for building, training, labeling and deploying AI models for healthcare imaging.
MONAI was created by NVIDIA in collaboration with over a dozen leading clinical and research organizations, including King’s College London.
IKIM researchers also use self-supervised learning to pretrain AI models that generate high-quality labels for the hospital’s CT scans, MRIs and more.
Additionally, the IKIM team has developed a smart hospital information platform, or SHIP, an AI-based central healthcare data integration platform and deployment engine. The platform is used by researchers and clinicians to conduct real-time analysis of the slew of data in university hospitals — including medical imaging, radiology reports, clinic notes and patient interviews.
SHIP can, for example, flag an abnormality on a radiology report and notify physicians via real-time push notifications, enabling quicker diagnoses and treatments for patients. The AI can also pinpoint data-driven associations between healthcare metrics like genetic traits and patient outcomes.
“We want to solve real-world problems and bring the solutions right into the clinics,” Kleesiek said. “The SHIP framework is capable of delivering deep learning algorithms that analyze data straight to the clinicians who are at the point of care.”
Plus, increased workflow efficiency — enabled by AI — means increased sustainability within hospitals.
Making Hospitals Smarter
Nensa says his hospital currently has close to 500 IT systems, including those for hospital information, laboratories and radiology. Each holds critical patient information that’s interrelated — but data from disparate systems can be difficult to connect or draw machine learning-based insights from.
SHIP connects the data from all these systems by automatically translating it into Fast Healthcare Interoperability Resources, or FHIR, a standard commonly used in medicine to exchange electronic health records. SHIP currently encompasses more than 1.2 billion FHIR resources.
Once converted to FHIR, the information can be easily accessed by data scientists, researchers and clinicians for real-time AI training and analysis based on NVIDIA GPUs and DGX A100 systems. This makes it possible for labor-intensive tasks, such as liver volumetry prior to living donor liver transplantation or bone age estimation in children, to be performed fully automatically in the background, instead of requiring a half-hour of manual work by a radiologist.
“The more artificial intelligence is at work in a hospital, the more patients can enjoy human intelligence,” Nensa said. “As AI provides doctors and nurses relief from repetitive tasks like data retrieval and annotation, the medical professionals can focus on what they really want to do, which is to be there and care for their patients.”
NVIDIA DGX A100 systems power IKIM’s AI training and inference. NVIDIA Triton Inference Server enables fast and scalable concurrent serving of AI models within the clinic.
The IKIM team also uses NVIDIA FLARE, an open-source platform for federated learning, which allows data scientists to develop generalizable and robust AI models while maintaining patient privacy.
Smarter Equals Greener
In addition to reducing physician workload and increasing time for patient care, AI in hospitals boosts sustainability efforts.
As a highly specialized medical center, the University Hospital Essen must operate around the clock, year-round, to provide reliable patient treatment. Such patient-oriented, cutting-edge medicine has traditionally come with high energy consumption.
SHIP helps hospitals increase efficiency, automating tasks and optimizing processes to reduce friction in the workflow — which saves energy. According to Kleesiek, IKIM reuses the energy emitted by GPUs in the data center, which also helps to make the University Hospital Essen greener.
“NVIDIA is providing all of the layers for us to get the most out of the technology, from software and hardware to training led by expert engineers,” Nensa said.
In April, NVIDIA experts hosted a workshop at IKIM, featuring lectures and hands-on training on GPU-accelerated deep learning, data science and AI in medicine. The workshop led IKIM to kickstart additional projects using AI for medicine — including a research contribution to MONAI.
In addition, IKIM is building SmartWard technology to provide an end-to-end AI-powered patient experience in hospitals, from service robots in waiting areas to automated discharge reports.
For the SmartWard project, the IKIM team is considering integrating the NVIDIA Clara Holoscan platform for medical device AI computing.
Subscribe to NVIDIA healthcare news and watch IKIM’s NVIDIA GTC session on demand.
Feature image courtesy of University of Duisburg-Essen.
Research highlights from the Core Data Science team at Meta
Core Data Science (CDS) is a central science organization that drives impact for Meta and the world through use-inspired advancements to our fundamental understanding of the intern…
Critical Regularizations for Neural Surface Reconstruction in the Wild
Neural implicit functions have recently shown promising results on surface reconstructions from multiple views. However, current methods still suffer from excessive time complexity and poor robustness when reconstructing unbounded or complex scenes. In this paper, we present RegSDF, which shows that proper point cloud supervisions and geometry regularizations are sufficient to produce high-quality and robust reconstruction results. Specifically, RegSDF takes an additional oriented point cloud as input, and optimizes a signed distance field and a surface light field within a differentiable…
How service providers can use natural language processing to gain insights from customer tickets with Amazon Comprehend
Today, customers can raise support tickets through multiple channels, such as web, mobile, chatbots, email, or phone calls. When a customer raises a support ticket, it is processed and assigned to a category based on the information provided in the ticket, then routed to the appropriate support group for resolution. In practice, many support tickets are not routed to the right group because of incorrect categorization. Incorrectly assigned tickets delay overall resolution time, often resulting in severe customer dissatisfaction, and can have wider financial, operational, or other business repercussions. Accurate ticket classification is therefore essential for every organization. Although you can classify tickets manually, doing so is error-prone, not cost-effective, and does not scale.
AWS Managed Services (AMS) uses Amazon Comprehend custom classification to categorize inbound requests by resource and operation type based on how the customer described their issue. Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to uncover valuable insights and connections in text. AMS uses custom classifiers to label customer requests with the appropriate issue type, resource type, and resource action, and to route tickets to the right subject matter experts (SMEs). The classification results also reveal opportunities for new internal automation tools that AMS engineers can use to fulfill customer requirements with less manual effort and fewer errors. The classification data is stored in an Amazon Redshift cluster and used to analyze customer requests and identify new automation tool candidates. This automation increases operational efficiency and reduces cost.
In this post, we show how managed service providers can use Amazon Comprehend to classify and route the tickets, provide suggestions based on the classification, and utilize the classification data.
Solution overview
The following diagram shows the solution architecture.
The workflow is as follows:
- A customer submits the ticket.
- The ticket system receives the ticket from the customer, and invokes the ticket classifier AWS Lambda function with the ticket details. Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Lambda is chosen for the solution to reduce cost and maintenance effort.
- The ticket classifier Lambda function classifies the ticket with Amazon Comprehend using the ticket title and description. With Amazon Comprehend, you can train the NLP model and provide both batch and real-time classifiers without provisioning and maintaining infrastructure. (A minimal code sketch of this step and the next appears after this list.)
- The ticket classifier Lambda function pushes the ticket classification data to the Amazon Redshift cluster via Amazon Kinesis Data Firehose. Kinesis Data Firehose is an extract, transform, and load (ETL) service that captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. Amazon Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and ML to deliver the best price performance at any scale. Kinesis Data Firehose delivers data to an Amazon Simple Storage Service (Amazon S3) bucket first and then issues an Amazon Redshift COPY command to load the data into an Amazon Redshift cluster.
- The ticket classifier Lambda function invokes the ticket handler Lambda function.
- The ticket handler Lambda function runs code that supports handling the ticket. In this example, it returns recommended materials for handling the ticket based on its classification.
- Ticket analysis can be done with Amazon QuickSight. From ticket analysis, you can find out the top requested ticket type. Based on the analysis, you can discover ticket trends and opportunities to automate top ticket types. QuickSight is a cloud-scale business intelligence (BI) service that you can use to deliver easy-to-understand insights to the people who you work with, wherever they are.
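For orientation, here is a minimal sketch of what the classification and streaming steps above could look like inside the ticket classifier Lambda function. This is not the code distributed in lambda_code.zip; the endpoint ARNs, delivery stream name, and record field names are placeholders.

```python
import json
import boto3

comprehend = boto3.client("comprehend")
firehose = boto3.client("firehose")

# Placeholder ARNs and stream name; substitute the resources created in this post.
OPERATION_ENDPOINT_ARN = "arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/ticket-classification-operation"
RESOURCE_ENDPOINT_ARN = "arn:aws:comprehend:us-east-1:111122223333:document-classifier-endpoint/ticket-classification-resource"
DELIVERY_STREAM_NAME = "classification-delivery-stream"

def classify_ticket(title, description):
    """Classify a ticket's operation and resource type, then stream the result toward Amazon Redshift via Kinesis Data Firehose."""
    text = f"{title}. {description}"

    operation = comprehend.classify_document(Text=text, EndpointArn=OPERATION_ENDPOINT_ARN)
    resource = comprehend.classify_document(Text=text, EndpointArn=RESOURCE_ENDPOINT_ARN)

    # Keep the highest-confidence class from each classifier
    top_operation = max(operation["Classes"], key=lambda c: c["Score"])
    top_resource = max(resource["Classes"], key=lambda c: c["Score"])

    record = {
        "title": title,
        "operation": top_operation["Name"],
        "resource": top_resource["Name"],
    }
    firehose.put_record(
        DeliveryStreamName=DELIVERY_STREAM_NAME,
        Record={"Data": json.dumps(record) + "\n"},
    )
    return record
```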
In the following sections, we walk you through the steps to implement the solution, integrate the ticket classification infrastructure with your ticketing system, and use the classification data with QuickSight.
Implement the solution
In this section, we walk through the steps to provision your solution resources and create the necessary infrastructure.
Configure Amazon Comprehend
In this step, we train two new Amazon Comprehend custom classification models: Operation and Resource, and create a real-time analysis endpoint for each model.
Upload the training data
To upload the training data, complete the following steps:
- Download ticket_training_data.zip and unzip the file. This folder contains the following two files:
  - training_data_operations.csv – A two-column CSV file that we use to train the Operation classification model. The first column contains class, and the second column contains document.
  - training_data_resources.csv – A two-column CSV file that we use to train the Resource classification model. Like the training_data_operations.csv file, the first column contains class, and the second column contains document.
- On the Amazon S3 console, create a new bucket for Amazon Comprehend. Because S3 bucket names are globally unique, you need to create a unique name for the bucket. For this post, we call it comprehend-ticket-training-data. Enable server-side encryption and block public access when creating the bucket.
- Upload training_data_operations.csv and training_data_resources.csv to the new S3 bucket.
Create two new models
To create your models, complete the following steps:
- On the Amazon Comprehend console, choose Custom classification in the navigation pane.
- Choose Create new model.
- Provide the following information:
- For Model name, enter ticket-classification-operation.
- For Language, choose English.
- For Classifier mode, select Using Single-label mode.
- For Data format, select CSV file.
- For Training dataset, enter the S3 path for training_data_operations.csv.
- For Test data source, select Autosplit. Autosplit automatically selects 10% of your provided training data to use as testing data.
- For IAM Role, select Create an IAM role.
- For Permissions to access, choose the training, test, and output data (if specified) in your S3 buckets.
- For Name suffix, enter ticket-classification.
- Choose Create.
- Choose Create new model again to create your resource classification model.
- Provide the following information:
- For Model name, enter ticket-classification-resource.
- For Language, choose English.
- For Classifier mode, select Using Single-label mode.
- For Data format, select CSV file.
- For Training dataset, enter the S3 path for training_data_resources.csv.
- For Test data source, select Autosplit.
- For IAM Role, select Use an existing IAM role.
- For Role name, choose AmazonComprehendServiceRole-ticket-classification.
- Choose Create.
Amazon Comprehend is now processing the CSV files and using them to train custom classifiers. We then use these to help classify customer tickets. The larger and more accurate our training data is, the more accurate the classifier will be.
Wait for the version status to show as Trained. This may take up to 1 hour, depending on the size of the training data.
Create Amazon Comprehend endpoints
Amazon Comprehend endpoints are billed in 1-second increments, with a minimum of 60 seconds. Charges continue to incur from the time you start the endpoint until it’s deleted, even if no documents are analyzed. For more information, see Amazon Comprehend Pricing. To create your endpoints, complete the following steps:
- On the Amazon Comprehend console, choose Endpoints in the navigation pane.
- Choose Create endpoint to create your operation classification endpoint.
- Provide the following information:
- For Endpoint name, enter ticket-classification-operation.
- For Custom model type, select Custom classification.
- For Classifier model, choose ticket-classification-operation.
- For Version, choose No Version Name.
- For Number of inference units (IUs), enter 1.
- Choose Create endpoint.
- Choose Create endpoint again to create the resource classification endpoint.
- Provide the following information:
- For Endpoint name, enter ticket-classification-resource.
- For Custom model type, select Custom classification.
- For Classifier model, choose ticket-classification-resource.
- For Version, choose No Version Name.
- For Number of inference units (IUs), enter 1.
- Choose Create endpoint.
After you create both endpoints, wait until the status for both shows as Active.
Test the Amazon Comprehend endpoints with real-time analysis
To test your endpoints, complete the following steps:
- On the Amazon Comprehend console, choose Real-time analysis in the navigation pane.
- For Analysis type, select Custom.
- For Endpoint, choose ticket-classification-operation.
- For Input text, enter the following:
- Choose Analyze.
The results show that the Update class has the highest confidence score.
- Change Endpoint to ticket-classification-resource and choose Analyze again.
The results show that the EC2 class has the highest confidence score.
Create a secret for the Amazon Redshift cluster password
In this step, we create an AWS Secrets Manager secret for your Amazon Redshift cluster password. Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. In this post, we store the Amazon Redshift cluster password in a Secrets Manager secret.
- On the Secrets Manager console, choose Secrets in the navigation pane.
- Choose Store a new secret.
- For Secret type, select Other type of secret.
- Under Key/value pairs, set your key as password and the value as your Amazon Redshift cluster password. The password must be 8–64 characters in length and contain at least one uppercase letter, one lowercase letter, and one number. It can contain any printable ASCII character except ' (single quote), " (double quote), \ (backslash), /, @, or space.
- Choose Next.
- For Secret name, enter ClassificationRedshiftClusterPassword.
- Choose Next.
- In the Secret rotation section, choose Next.
- Review your secret configuration and choose Store.
Provision your infrastructure with AWS CloudFormation
In this step, we provision the infrastructure for the solution using an AWS CloudFormation stack.
Upload the Lambda function code
Before launching the CloudFormation stack, upload your Lambda function code:
- Download lambda_code.zip.
- On the Amazon S3 console, open the bucket that you created.
- Upload lambda_code.zip.
Create your CloudFormation stack
To provision resources with AWS CloudFormation, complete the following steps:
- Download cloudformation_template.json.
- On the AWS CloudFormation console, choose Create stack.
- Select With new resources (standard).
- For Template source, choose Upload a template file.
- Choose the downloaded CloudFormation template.
- Choose Next.
- For Stack name, enter Ticket-Classification-Infrastructure.
- In the Parameters section, enter the following values:
- For ClassificationRedshiftClusterNodeType, enter the Amazon Redshift cluster node type. dc2.large is the default.
- For ClassificationRedshiftClusterPasswordSecretName, enter the Secrets Manager secret name that stores the Amazon Redshift cluster password.
- For ClassificationRedshiftClusterSubnetId, enter the subnet ID where the Amazon Redshift cluster is hosted. The subnet must be within the VPC that you specify in the ClassificationRedshiftClusterVpcId parameter.
- For ClassificationRedshiftClusterUsername, enter the Amazon Redshift cluster user name.
- For ClassificationRedshiftClusterVpcId, enter the VPC ID where the Amazon Redshift cluster is hosted.
- For LambdaCodeS3Bucket, enter the S3 bucket name where you uploaded the Lambda code.
- For LambdaCodeS3Key, enter the Amazon S3 key of the deployment package.
- For QuickSightRegion, enter the Region for QuickSight. The Region for QuickSight should be consistent with the Region you’re using for Amazon Comprehend and the S3 bucket.
- Choose Next.
- In the Configure stack options section, choose Next.
- In the Review section, select I acknowledge that AWS CloudFormation might create IAM resources.
- Choose Create stack.
Configure your Amazon Redshift cluster
In this step, you enable audit logging and add the new table to the Amazon Redshift cluster created through the CloudFormation template.
Audit logging is not turned on by default in Amazon Redshift. When you turn on logging on your cluster, Amazon Redshift exports logs to Amazon CloudWatch, which capture data from the time audit logging is enabled to the present time. Each logging update is a continuation of the previous logs.
Enable audit logging
You can skip this step if you don’t need audit logging for your Amazon Redshift cluster.
- On the Amazon Redshift console, choose Clusters in the navigation pane.
- Choose the Amazon Redshift cluster starting with classificationredshiftcluster-.
- On the Properties tab, choose Edit.
- Choose Edit audit logging.
- For Configure audit logging, choose Turn on.
- For Log export type, choose CloudWatch.
- Select all log types.
- Choose Save changes.
Create new table
To create a new table, complete the following steps:
- On the Amazon Redshift console, choose Query data.
- Choose Query in query editor v2.
- On the Database page, choose your cluster.
- For Database, enter ticketclassification.
- Enter the user name and password you configured in the CloudFormation stack parameters.
- Choose Create connection.
- When the connection is made, choose the plus sign and open a new query window.
- Enter the following query:
- Choose Run.
Test the classification infrastructure
Now the infrastructure for ticket classification is ready. Before integrating with your ticket system, let’s test the classification infrastructure.
Run the test
To run the test, complete the following steps:
- On the Lambda console, choose Functions in the navigation pane.
- Choose the function that starts with Ticket-Classification-Inf-TicketClassifier.
- On the Test tab, choose Test event.
- For Name, enter TestTicket.
- Enter the following test data:
- Choose Test.
The ticket is classified, and the classification data is stored in the Amazon Redshift cluster. After the classification, the ticket handler Lambda function runs, which handles the ticket based on the classification, including recommending materials to support engineers.
Check the ticket classifier test log
To check the test log, complete the following steps:
- In the result section of the test, choose Logs, or choose View logs in CloudWatch on the Monitor tab.
- Choose the log stream.
You can view the logs in the following screenshot, which shows the output from Amazon Comprehend and the final top classification of the ticket. In this example, the test ticket is classified as Resource=EC2, Operation=Update.
Check the ticket classification output in the Amazon Redshift cluster
To validate the output in your cluster, complete the following steps:
- On the Amazon Redshift query editor v2 console, choose the plus sign to open a new query window.
- Enter the following query:
- Choose Run.
The following screenshot shows the ticket classification. If it’s not available yet, wait for a few minutes and retry (Kinesis Data Firehose needs some time to push the data). We can now use this data in QuickSight.
Check the ticket handler test log
After the ticket classifier pushes the classification data in the Amazon Redshift cluster, the ticket handler Lambda function runs, which handles the ticket based on the classification, including recommending materials to support engineers. In this example, the ticket handler returns recommended materials including the runbook, AWS documentation, and SSM documents so support can refer to them when handling the ticket. You can integrate the output with your ticket handling system, and you can customize the handling processes in the Lambda function code. In this step, we check what recommendations were made.
- On the Lambda console, choose Functions in the navigation pane.
- Choose the Lambda function that starts with Ticket-Classification-Inf-TicketHandlerLambdaFunct.
- On the Monitor tab, choose View logs in CloudWatch.
- Choose the log stream.
The following screenshot shows the logs. You can see the output from Amazon Comprehend and the list of recommended AWS documents and SSM documents for the ticket classified as Update EC2. You can add your own runbooks, documents, SSM documents, or any other materials in the Lambda function code.
Integrate the ticket classification infrastructure with your ticketing system
In this section, we walk through the steps to integrate your ticketing classification infrastructure with your ticketing system and customize your configuration.
Most ticketing systems have a trigger feature, which allows you to run code when the ticket is submitted. Set up your ticketing system to invoke the ticket classifier Lambda function with the following formatted input:
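The exact input schema isn't reproduced in this excerpt, so the payload fields below (ticket_id, title, description) are assumptions for illustration; align them with whatever the ticket classifier Lambda function actually parses. A ticketing-system trigger might invoke the function like this:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical payload fields; adjust to match the classifier's expected input.
payload = {
    "ticket_id": "12345",
    "title": "Upgrade EC2 instance",
    "description": "Please change instance i-0abcd1234 from t2.micro to t2.large.",
}

response = lambda_client.invoke(
    FunctionName="Ticket-Classification-Inf-TicketClassifier",  # use the full generated function name
    InvocationType="Event",  # asynchronous invocation from the ticketing system trigger
    Payload=json.dumps(payload),
)
```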
If you want to customize the input, modify the ticket classifier Lambda function code. You need to add or remove parameters (lines 90–105) and customize the input for Amazon Comprehend (lines 15–17).
You can customize the ticket handler Lambda function to run automation or edit the recommendations. For example, you can add the internal comment to the ticket with the recommendations. To customize, open the ticket handler Lambda code, and edit lines 68–70 and 75–81.
Use classification data with QuickSight
After you integrate the ticket classification infrastructure with your ticket system, the ticket classification data is stored in the Amazon Redshift cluster. You can use QuickSight to check this data and generate reports. In this example, we generate a QuickSight analysis with the classification data.
Sign up for QuickSight
If you don’t already have QuickSight, sign up with the following steps:
- On the QuickSight console, choose Sign up for QuickSight.
- Choose Standard.
- Under QuickSight region, choose the Region you configured in the CloudFormation parameter QuickSightRegion.
- Under Account info, enter your QuickSight account name and notification email address.
- Under QuickSight access to AWS services, select Amazon Redshift.
- If you want to allow access and autodiscovery for other resources, select them as well.
- Choose Finish.
- Choose Go to Amazon QuickSight after you’re signed up.
Connect your Amazon Redshift cluster to QuickSight
To connect your cluster to QuickSight as a data source, complete the following steps:
- On the QuickSight console, choose Datasets in the navigation pane.
- Choose New dataset.
- Choose Redshift Auto-discovered.
- Provide the following information:
- For Data source name, enter ticketclassification.
- For Instance ID, choose the Amazon Redshift cluster starting with classificationredshiftcluster-.
- For Connection type, choose Public network.
- For Database name, enter ticketclassification.
- Enter the Amazon Redshift cluster user name and password you configured in the CloudFormation stack parameters.
- Choose Validate connection to see if the connection works. If it doesn’t work, the likely causes are an incorrect user name or password, or a QuickSight Region that differs from the one you specified in the CloudFormation stack.
- Choose Create data source.
- In the Choose your table section, select the tickets table.
- Choose Select.
- Select Import to SPICE for quicker analytics.
SPICE is the QuickSight Super-fast, Parallel, In-memory Calculation Engine. It’s engineered to rapidly perform advanced calculations and serve data. Importing (also called ingesting) your data into SPICE can save time and money. For more information on SPICE, refer to Importing Data into SPICE. If you get the error “Not enough SPICE capacity,” purchase more SPICE capacity. For more information, refer to Purchasing SPICE capacity in an AWS Region.
- Choose Visualize.
Create a ticket classification analysis report
Once you finish dataset creation, you can see the new QuickSight analysis. In this section, we walk through the steps to create a ticket classification analysis report, including a pivot table, pie charts, and line charts.
- Choose AutoGraph.
- Under Visual types, choose the pivot table.
- Drag operation from Fields list to Rows.
- Drag resource from Fields list to Columns.
- On the Add menu, choose Add visual.
- Under Visual types, choose the pie chart.
- Drag operation from Fields list to Group/Color.
- On the Add menu, choose Add visual again.
- Under Visual types, choose the pie chart again.
- Drag resource from Fields list to Group/Color.
- On the Add menu, choose Add visual again.
- Under Visual types, choose the line chart.
- Drag creation_time from Fields list to X axis.
- Drag operation from Fields list to Color.
- On the Add menu, choose Add visual again.
- Under Visual types, choose the line chart again.
- Drag creation_time from Fields list to X axis.
- Drag operation from Fields list to Color.
- Resize and reorder the charts as needed.
- Choose Save as.
- Enter a name for your analysis and choose Save.
Congratulations! Your first ticket analysis is ready. Once you have more data, the analysis will look like the following screenshot.
Clean up
In this step, we clean up the resources we created with various services.
Amazon Comprehend
To delete your endpoints, complete the following steps:
- On the Amazon Comprehend console, choose Endpoints in the navigation pane.
- Select the ticket-classification-operation endpoint.
- Choose Delete and follow the prompts.
- Repeat these steps to delete the ticket-classification-resource endpoint.
Next, delete the custom classifications you created.
- Choose Custom classification in the navigation pane.
- Select the ticket-classification-operation classification.
- Select No Version Name.
- Choose Delete and follow the prompts.
- Repeat these steps to delete the ticket-classification-resource classification.
Amazon S3
Next, clean up the S3 bucket you created.
- On the Amazon S3 console, select the bucket you created.
- Delete all the objects in the bucket.
- Delete the bucket.
Amazon QuickSight
Delete the QuickSight analyses and dataset you created.
- On the QuickSight console, choose Analyses in the navigation pane.
- Choose the options icon (three dots) on the analysis you created.
- Choose Delete and follow the prompts.
- Choose Datasets in the navigation pane.
- Choose the tickets dataset.
- Choose Delete dataset and follow the prompts.
AWS CloudFormation
Clean up the resources you created as part of the CloudFormation stack.
- On the AWS CloudFormation console, choose Stacks in the navigation pane.
- Choose the Ticket-Classification-Infrastructure stack.
- On the Resources tab, choose the physical ID of ClassificationDeliveryStreamS3Bucket.
The Amazon S3 console opens.
- Delete any objects in this bucket.
- Return to the AWS CloudFormation console, choose Delete, and follow the prompts.
AWS Secrets Manager
Lastly, delete the Secrets Manager secret.
- On the Secrets Manager console, select the secret ClassificationRedshiftClusterPassword.
- On the Actions menu, choose Delete secret.
- Set the waiting period as 7 days and choose Schedule Delete.
Your secret will be automatically deleted after 7 days.
Conclusion
In this post, you learned how to utilize AWS services to create an automatic classification and recommendation system. This solution will help your organizations build the following workflow:
- Classify customer requests.
- Recommend automated solutions.
- Analyze customer request classifications and discover top customer requests.
- Release a new automated solution and increase the automation rate.
For more information about Amazon Comprehend, see Amazon Comprehend Documentation. You can also discover other Amazon Comprehend features and get inspiration from other AWS blog posts about using Amazon Comprehend beyond classification.
About the Authors
Seongyeol Jerry Cho is a Senior Systems Development Engineer at AWS Managed Services based in Sydney, Australia. He focuses on building highly scalable and automated cloud operations software using a variety of technologies, including machine learning. Outside of work, he enjoys travel, camping, reading, cooking, and running.
Manu Sasikumar is a Sr. Systems Engineer Manager with AWS Managed Services. Manu and his team focus on building powerful and easy-to-use automations to reduce manual effort, and build AI and ML-based solutions for managing customer requests. Outside of work, he loves spending his spare time with his family, as well as being part of various humanitarian and volunteer activities.
Incremental training with Amazon SageMaker JumpStart
In December 2020, AWS announced the general availability of Amazon SageMaker JumpStart, a capability of Amazon SageMaker that helps you quickly and easily get started with machine learning (ML). SageMaker JumpStart provides one-click fine-tuning and deployment of a wide variety of pre-trained models across popular ML tasks, as well as a selection of end-to-end solutions that solve common business problems. These features remove the heavy lifting from each step of the ML process, making it easier to develop high-quality models and reducing time to deployment.
All JumpStart content was previously available only through Amazon SageMaker Studio, which provides a user-friendly graphical interface to interact with the feature. Recently, we also announced the launch of easy-to-use JumpStart APIs as an extension of the SageMaker Python SDK, allowing you to programmatically deploy and fine-tune a vast selection of JumpStart-supported pre-trained models on your own datasets. This launch unlocks the usage of JumpStart capabilities in your code workflows, MLOps pipelines, and anywhere else you’re interacting with SageMaker via SDK.
In this post, we’re excited to announce that all trainable JumpStart models now support incremental training. Incremental training allows you to train a model you have already fine-tuned using an expanded dataset that contains an underlying pattern not accounted for in previous fine-tuning runs, which resulted in poor model performance. Incremental training saves both time and resources because you don’t need to retrain the model from scratch. If you want to jump straight into the JumpStart API code we explain in this post, you can refer to the sample notebook.
JumpStart overview
JumpStart is a multi-faceted product that includes different capabilities to help get you quickly started with ML on SageMaker. At the time of writing, JumpStart enables you to do the following:
- Deploy pre-trained models for common ML tasks – JumpStart enables you to address common ML tasks with no development effort by providing easy deployment of models pre-trained on large, publicly available datasets. The ML research community has put a large amount of effort into making a majority of recently developed models publicly available for use; JumpStart hosts a collection of over 300 models, spanning the 15 most popular ML tasks such as object detection, text classification, and text generation, making it easy for beginners to use them. These models are drawn from popular model hubs, such as TensorFlow, PyTorch, Hugging Face, and MXNet Hub.
- Fine-tune pre-trained models – JumpStart allows you to fine-tune pre-trained models with no need to write your own training algorithm. In ML, the ability to transfer the knowledge learned in one domain to another is called transfer learning. You can use transfer learning to produce accurate models on your smaller datasets, with much lower training costs than the ones involved in training the original model. JumpStart also includes popular training algorithms based on LightGBM, CatBoost, XGBoost, and Scikit-learn that you can train from scratch for tabular regression and classification.
- Use pre-built solutions – JumpStart provides a set of 17 solutions for common ML use cases such as demand forecasting and industrial and financial applications, which you can deploy with just a few clicks. Solutions are end-to-end ML applications that string together various AWS services to solve a particular business use case. They use AWS CloudFormation templates and reference architectures for quick deployment, which means they are fully customizable.
- Use notebook examples for SageMaker algorithms – SageMaker provides a suite of built-in algorithms to help data scientists and ML practitioners get started with training and deploying ML models quickly. JumpStart provides sample notebooks that you can use to quickly apply these algorithms.
- Review training videos and blogs – JumpStart also provides numerous blog posts and videos that teach you how to use different functionalities within SageMaker.
JumpStart accepts custom VPC settings and AWS Key Management Service (AWS KMS) encryption keys, so you can use the available models and solutions securely within your enterprise environment. You can pass your security settings to JumpStart within Studio or through the SageMaker Python SDK.
Image classification
Image classification refers to classifying an image into one of the class labels in the training dataset. You can fine-tune the model to any given dataset comprising images belonging to any number of classes. The model available for fine-tuning on JumpStart attaches a classification layer to the corresponding feature extractor model and initializes the layer parameters to random values. The output dimension of the classification layer is determined based on the number of classes in the input data. The fine-tuning step tunes the classification layer parameters, while keeping the parameters of the feature extractor model frozen, and returns the fine-tuned model. The objective is to minimize prediction error on the input data.
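JumpStart's own training script isn't shown in this post, but the fine-tuning strategy it describes, a frozen feature extractor plus a freshly initialized classification head, can be sketched in a few lines of Keras. MobileNetV2, the optimizer, and the epoch count here are illustrative assumptions, not JumpStart internals.

```python
import tensorflow as tf

num_classes = 5  # e.g., the five flower types in tf_flowers

# Pre-trained feature extractor with its parameters kept frozen
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg", weights="imagenet")
base.trainable = False

# Randomly initialized classification head; output dimension matches the number of classes
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# model.fit(train_ds, epochs=5)  # train_ds: a tf.data.Dataset of (image, label) batches
```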
For our dataset, the input is a directory with as many sub-directories as the number of classes. Each sub-directory should have images belonging to that class in .jpg format. The input directory should look like the following hierarchy if the training data contains images from two classes: roses and dandelion:
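The original post renders the hierarchy as a small code block; a sketch of one possible layout follows (folder and file names are arbitrary, as noted below):

```
input_directory
├── roses
│   ├── 1.jpg
│   └── 2.jpg
└── dandelion
    ├── 3.jpg
    └── 4.jpg
```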
The names of the folders, classes, and .jpg file names can be anything.
We provide the tf_flowers dataset [1] as a default dataset for fine-tuning the model. This dataset comprises images of five types of flowers. The dataset has been downloaded from TensorFlow.
Walkthrough overview
The following sections provide a step-by-step demo to perform image classification with JumpStart, both via the Studio UI and JumpStart APIs.
We walk through the following steps:
- Access JumpStart through the Studio UI:
- Fine-tune the pre-trained model.
- Deploy the fine-tuned model.
- Incrementally train the fine-tuned model and redeploy.
- Use JumpStart programmatically with the SageMaker Python SDK:
- Fine-tune the pre-trained model.
- Deploy the fine-tuned model.
- Incrementally train the fine-tuned model and redeploy.
Access JumpStart through the Studio UI
In this section, we demonstrate how to fine-tune and deploy JumpStart models through the Studio UI. Additionally, we show how to incrementally train a model that you have previously fine-tuned.
Fine-tune the pre-trained model
The following video shows you how to find a pre-trained image classification model on JumpStart and fine-tune it. The model page contains valuable information about the model, how to use it, expected data format, and some fine-tuning details.
For demonstration purposes, we fine-tune the model using the dataset provided by default, which is the tf_flowers dataset, composed of different varieties of flowers. Fine-tuning on your own dataset involves formatting the data correctly (as explained on the model page), uploading it to Amazon Simple Storage Service (Amazon S3), and specifying its location in the data source configuration.
We use the same hyperparameter values set by default (number of epochs, learning rate, and batch size). We also use a GPU-backed ml.p3.2xlarge instance as our SageMaker training instance.
You can monitor your training job directly on the Studio console, and are notified upon its completion.
Deploy the fine-tuned model
After training is complete, you can deploy the fine-tuned model from the same page that holds the training job details. To deploy our model, we pick a different instance type, ml.p2.xlarge. It still provides the GPU acceleration needed for low inference latency, but at a lower price point. After you configure the SageMaker hosting instance, choose Deploy. It may take 5–10 minutes until your persistent endpoint is up and running.
Then your endpoint is operational and ready to respond to inference requests!
To accelerate your time to inference, JumpStart provides a sample notebook that shows you how to run inference on your freshly deployed endpoint. Choose Open Notebook under Use Endpoint from Studio.
Incrementally train the fine-tuned model and deploy
When fine-tuning is complete, you can further train the model to boost performance. This step is very similar to the initial fine-tuning process, except that we use the already fine-tuned model as the starting point. You may use new data, but the dataset format must be the same (same set of classes).
Use JumpStart programmatically with the SageMaker SDK
In the preceding sections, we showed how you can use the JumpStart UI to fine-tune, deploy, and incrementally train a model interactively in a matter of a few clicks. You can also use JumpStart’s models and easy fine-tuning programmatically by using APIs that are integrated into the SageMaker SDK. We now go over a quick example of how you can replicate the preceding process. All the steps in this demo are available in the accompanying notebook, Introduction to JumpStart – Image Classification.
Fine-tune the pre-trained model
To fine-tune a selected model, we need to get that model’s URI, as well as that of the training script and the container image used for training. Thankfully, these three inputs depend solely on the model name, version (for a list of available models, see JumpStart Available Model Table), and type of instance you want to train on. This is demonstrated in the following code snippet:
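The snippet itself is not reproduced in this excerpt, so the following is a minimal sketch based on the publicly documented JumpStart utilities in the SageMaker Python SDK; the model ID and instance type are illustrative choices.

```python
from sagemaker import image_uris, model_uris, script_uris

# Illustrative model ID; "ic" marks an image classification model
model_id, model_version = "pytorch-ic-mobilenet-v2", "*"
training_instance_type = "ml.p3.2xlarge"

# Docker image used for training
train_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    model_id=model_id,
    model_version=model_version,
    image_scope="training",
    instance_type=training_instance_type,
)

# Training script and pre-trained model artifacts
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)
train_model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="training"
)
```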
We retrieve the model_id corresponding to the same model we used previously. The ic in the identifier corresponds to image classification.
You can now fine-tune this JumpStart model on your own custom dataset using the SageMaker SDK. We use the same tf_flowers dataset that is publicly hosted on Amazon S3. Your dataset should be structured for fine-tuning, as explained in the previous section. See the following example code:
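The original code block is not included in this excerpt; the sketch below follows the pattern the next paragraph describes, reusing the variables from the previous sketch. The S3 path, role lookup, and epoch override are placeholders.

```python
from sagemaker import hyperparameters
from sagemaker.estimator import Estimator
from sagemaker.session import Session

aws_role = Session().get_caller_identity_arn()
training_dataset_s3_path = "s3://your-bucket/tf_flowers/"  # placeholder path

# Default hyperparameters for this model; override as needed
hps = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)
hps["epochs"] = "5"

ic_estimator = Estimator(
    role=aws_role,
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    model_uri=train_model_uri,
    entry_point="transfer_learning.py",
    instance_count=1,
    instance_type=training_instance_type,
    hyperparameters=hps,
)

# The input data channel must be named "training"
ic_estimator.fit({"training": training_dataset_s3_path}, logs=True)
```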
We obtain the same default hyperparameters for our selected model as the ones we saw in the previous section, using sagemaker.hyperparameters.retrieve_default(). We then instantiate a SageMaker estimator and call the .fit method to start fine-tuning our model, passing it the Amazon S3 URI for our training data. As you can see, the entry_point script provided is named transfer_learning.py (the same for other tasks and models), and the input data channel passed to .fit must be named training.
Deploy the fine-tuned model
When training is complete, you can deploy your fine-tuned model. To do so, all we need to obtain is the inference script URI (the code that determines how the model is used for inference once deployed) and the inference container image URI, which includes an appropriate model server to host the model we chose. See the following code:
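Again, the original snippet is omitted from this excerpt; this is a hedged sketch of the deployment pattern using the JumpStart utilities, continuing from the previous sketches. The endpoint name and inference script name are assumptions.

```python
from sagemaker import image_uris, script_uris
from sagemaker.utils import name_from_base

inference_instance_type = "ml.p2.xlarge"

# Inference container image and inference script for the same model
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    model_id=model_id,
    model_version=model_version,
    image_scope="inference",
    instance_type=inference_instance_type,
)
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

# Deploy the fine-tuned model behind a real-time endpoint
predictor = ic_estimator.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    entry_point="inference.py",
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    endpoint_name=name_from_base("jumpstart-ic-fine-tuned"),
)
```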
After a few minutes, our model is deployed and we can get predictions from it in real time!
Next, we invoke the endpoint to predict what type of flowers exist in the example image. We use the query_endpoint
and parse_response
helper functions, which are defined in the accompanying notebook.
Incrementally train the fine-tuned model and redeploy
We can increase the performance of a fine-tuned model by further training it on new images. You may use any number of new or old images for this; however, the dataset format must remain the same (same set of classes). The incremental training step is similar to the fine-tuning process, with an important difference: in the initial fine-tuning we start with a pre-trained model, whereas in incremental training we start with an existing fine-tuned model. See the following code:
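As above, the original code is not included in this excerpt; a minimal sketch of the idea, reusing the estimator settings from the fine-tuning sketch and treating the new dataset path as a placeholder, might look like this:

```python
# Start incremental training from the fine-tuned artifacts rather than the
# original pre-trained model: pass the previous job's model data as model_uri.
last_model_uri = ic_estimator.model_data  # S3 path of the fine-tuned model.tar.gz

incremental_estimator = Estimator(
    role=aws_role,
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    model_uri=last_model_uri,
    entry_point="transfer_learning.py",
    instance_count=1,
    instance_type=training_instance_type,
    hyperparameters=hps,
)

# New or expanded dataset with the same class structure (placeholder path)
incremental_estimator.fit({"training": "s3://your-bucket/tf_flowers_more/"}, logs=True)
```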
When training is complete, we can use the same steps as the ones described in the preceding section to deploy the model.
Conclusion
JumpStart is a capability in SageMaker that allows you to quickly get started with ML. JumpStart uses open-source pre-trained models to solve common ML problems like image classification, object detection, text classification, sentence pair classification, and question answering.
In this post, we showed you how to fine-tune and deploy a pre-trained image classification model. We also showed how to incrementally train a fine-tuned model for image classification. With JumpStart, you can easily perform this process with no need to code. Try out the solution on your own and let us know how it goes in the comments. To learn more about JumpStart, check out the AWS re:Invent 2020 video Get started with ML in minutes with Amazon SageMaker JumpStart.
References
1. The TensorFlow Team, 2019.
About the Authors
Dr. Vivek Madan is an Applied Scientist with the Amazon SageMaker JumpStart team. He got his PhD. from University of Illinois at Urbana-Champaign and was a Post Doctoral Researcher at Georgia Tech. He is an active researcher in machine learning and algorithm design and has published papers in EMNLP, ICLR, COLT, FOCS, and SODA conferences.
João Moura is an AI/ML Specialist Solutions Architect at Amazon Web Services. He is mostly focused on NLP use cases and helping customers optimize deep learning model training and deployment. He is also an active proponent of low-code ML solutions and ML-specialized hardware.
Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He is an active researcher in machine learning and statistical inference and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.
How eMagazines utilizes Amazon Polly to voice articles for school-aged kids
This is a guest post by Andrew Degenholtz, CEO and Founder of eMagazines, the parent company of ReadAlong.ai. eMagazines’ technology seamlessly transforms print products into premium digital and audio experiences. Leveraging Amazon technology, ReadAlong.ai offers a simple, turn-key way for publishers to add audio to their websites with a single line of code.
eMagazines supports publishers in bringing high-quality journalism content to readers across digital platforms. Our ReadAlong.ai brand allows our customers to deepen their connection to readers by adding audio to traditional text-first publishing formats. In March 2020, we helped TIME for Kids launch a digital version of its popular magazine for school-aged kids. This premium subscription product helped their users transition to digital when the pandemic forced schools to close and families needed high-quality educational tools to supplement classroom learning materials.
In this post, we share how we created an automated way for TIME for Kids to seamlessly add audio for early readers and pre-readers through ReadAlong.ai, which uses Amazon Polly technology.
Why did TIME for Kids decide to start creating audio narration of their articles?
The addition of audio with auto scrolling and highlighting of text supports pre-readers and those students still learning to read. Listening while reading supports vocabulary development and reading comprehension, and new words are more likely to be learned when both their oral and written forms are provided. A report from the National Center on Early Childhood Development, Teaching, and Learning states that developing brains need to hear language even before learning to talk, and that even infants’ brains are preparing to speak months before they say their first words. Not only that, but the report also revealed that listening to stories read aloud helps expand both the volume and variety of words entering young vocabularies and fields of comprehension. Experts at Scholastic report that being read to also helps early readers “focus on the sounds of words read without interruption and provides a model of fluent reading,” and also noted that resources like audio help children learn how to listen, a prerequisite to learning to read.
What was the business challenge we addressed?
TIME for Kids originally addressed pre-reader accessibility by hiring voice actors to record their stories. The earlier iteration of their audio play button used an HTML audio player without speed variation or the option to scroll the page or highlight the text. The experience was expensive and time-consuming, and the user experience wasn’t as engaging as it could be. TIME for Kids was also unable to see even basic data around play or completion rates.
Why Amazon Polly?
We chose Amazon Polly because its APIs and web services support our goal of automating processes and making things easy for our clients.
Amazon Polly’s neural text-to-speech synthesis does the best job of voicing words within the context of a sentence, and the consistency in speech quality allows for the automation of article rendering.
Additionally, Amazon Polly offers a responsive API and powerful SSML support. This offers support for those cases where more control is needed to change inflection, and in the event that text contains challenging names (people, brands, companies) or word and phrase replacements (reading out abbreviations or acronyms in a particular way).
Amazon Polly also supports speech marks, which are crucial for highlighting the text that is currently being read out.
For TIME for Kids, the Kevin voice was a clear winner. TIME for Kids loved the approachable sound of the Kevin voice; they wanted a voice that sounded like a child’s in order to help establish a sense of connection with young readers. Hear an example of a TIME for Kids article using the Kevin voice.
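ReadAlong.ai's production pipeline isn't shown in this post; the following boto3 sketch only illustrates the two Amazon Polly capabilities discussed above, neural synthesis with the Kevin voice and word- and sentence-level speech marks for highlighting. The sample text and output file names are placeholders.

```python
import boto3

polly = boto3.client("polly")
text = "Honeybees talk to each other by dancing."  # placeholder article text

# Audio stream with the neural Kevin voice
audio = polly.synthesize_speech(
    Text=text,
    VoiceId="Kevin",
    Engine="neural",
    OutputFormat="mp3",
)
with open("article.mp3", "wb") as f:
    f.write(audio["AudioStream"].read())

# Word- and sentence-level speech marks (JSON lines) used to highlight and scroll the text
marks = polly.synthesize_speech(
    Text=text,
    VoiceId="Kevin",
    Engine="neural",
    OutputFormat="json",
    SpeechMarkTypes=["word", "sentence"],
)
with open("article_marks.jsonl", "wb") as f:
    f.write(marks["AudioStream"].read())
```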
The technical challenge
TIME for Kids needed an educational audio solution for their website. It needed to be a one-time setup that was highly automated and very low friction. The solution also needed to process new articles as they were added dynamically, on a daily basis. And when a user listens to the audio, the page needed to scroll along with the text and highlight the sentence currently being read out loud.
Part of our challenge was to reliably and programmatically identify which content should be read aloud. In a typical publishing context, the audio player needs to read the article title and content, but avoid reading the header and footer text, navigation bars, and certain kinds of ads or captions. Our page analysis solution combines positive and negative query selectors. For each configuration, defined by a set of articles that share the same structure and layout, the http://readalong.ai solution supports a set of allow list selectors and a set of deny list selectors that together capture the appropriate content for synthesizing speech.
Furthermore, the TIME for Kids website posed many technical challenges because some pages are available only for paying subscribers, whereas some are open to the public. TIME for Kids offers four grade-specific editions, teaching materials, curriculum guides, and weekly virtual learning plans for each issue, as well as worksheets and quizzes. Therefore, each article has multiple versions for different reading levels in both English and Spanish—some with as many as seven different reading levels in both languages.
Our solution
We created a simple drop-in script that allowed TIME for Kids to only add one line of code to the header of any page where they wanted to offer audio. The script automated everything from page content delivery to audio-synthesis to webpage integration. Since the start of the school year, we’ve added the Kevin and Lupe voices (for English and Spanish content, respectively) to thousands of articles on timeforkids.com.
Our solution allowed for automated content delivery and audio synthesizing, which meant no need to sign into a dashboard, FTP, Dropbox, or otherwise send new article content to ReadAlong.ai each time a new page was added. The user-friendly backend of the solution also allows TIME for Kids to easily make word replacements, including global rules, to give the audio synthesizer engine lexicon hints for context-based pronunciations and difficult names, brands, or acronyms.
In addition to positioning and styling the launcher and player to match the TIME for Kids site design, as part of the customization, we added functionality to highlight and scroll the text as the article is read aloud, which is another helpful tool to support children in learning to recognize words and connect them to sounds. We customized this feature to be visible but not distracting, so the audio and visual elements could work in tandem to aid young readers. To support this enhanced feature, we implemented the detailed word- and sentence-level metadata available in Amazon Polly to provide a fluid highlighting experience that helps readers follow along as they encounter new words and concepts. This allows the listener to identify what they’re hearing as they view the content as it’s highlighted on the browser.
We also created a default for the Amazon Polly Kevin and Lupe voices to start at a slower speed, so the default pacing is at .9x, rather than at 1x, as another way to help early readers and pre-readers better access the content. Listeners have the ability to lower the default voice speed to .75x or increase to 1.5x, in order to accommodate more reading levels.
Business benefits for the customer
With our product in place on their site, TIME for Kids was able to voice their content in a scalable way. They deliver content on an article-by-article basis in two different languages (English and Spanish) and in seven different reading levels.
They’re also now able to easily collect and analyze data in real time, including both play and completion rates, and view most popular articles as well as articles with the most audio engagement.
We now know that 55% of kids that click to listen to an article complete 100% of the article, and 66% of kids that listen to an article complete more than half of the article. These significant completion rates reinforce the benefit and confirm that listeners are comfortable with the technology and the voice is relatable. The ReadAlong.ai audio also helped TIME for Kids promote its advanced accessibility features, including key articles with Spanish translation and read-aloud functionality, because the presence of the audio is featured prominently on the preview of each article along with other benefits (such as Spanish translation).
Stacy Bien, Director of Curriculum for TIME for Kids, was impressed with both the solution and the engagement data, saying,
“This is really a thing of beauty. This solution will help so many early readers develop their reading skills and easily consume more content. For us, we’ve seen a huge lift in engagement. That, coupled with the ease of use and cost-effectiveness, makes this a slam dunk.”
Conclusion
ReadAlong.ai used Amazon Polly to help TIME for Kids streamline the process of adding high-quality audio voiceover content to its premium subscription product. Our solution enabled the customer to significantly improve production time, precision, and cost. For example, a voiceover artist typically spends 1 hour or more to record an article, edit the audio, and master the final audio output. Now, once the ReadAlong.ai script has been added to the site, when new articles are created, the content is automatically processed without any time spent by a voiceover artist, audio editor, or administrator. The audio reads articles precisely and rarely requires adjustments, creating substantial savings of both time and cost.
Collected KPIs tell us that not only did this become an easy way for the TIME for Kids team to manage audio functionality, but that the end-users—children early in the development of their reading abilities—take to the functionality as another tool on their reading path.
About the Author
Andrew Degenholtz is CEO and Founder of eMagazines and ReadAlong.ai, and is President of ValueMags, which he founded in 1999. Degenholtz holds a master’s in marketing from Northwestern University and a B.A. from Muhlenberg College. Previously, he was a member of the Alliance for Audited Media digital edition task force, created to develop best practices for acquisition of digital magazine subscribers.
LIMoE: Learning Multiple Modalities with One Sparse Mixture of Experts Model
Sparse models stand out among the most promising approaches for the future of deep learning. Instead of every part of a model processing every input (“dense” modeling), sparse models employing conditional computation learn to route individual inputs to different “experts” in a potentially huge network. This has many benefits. First, model size can increase while keeping computational cost constant — an effective and environmentally friendlier way to scale models, which is often key to high performance. Sparsity also naturally compartmentalizes neural networks. Dense models that learn many different tasks simultaneously (multitask) or sequentially (continual learning) often suffer negative interference, where too much task variety means it is better to just train one model per task, or catastrophic forgetting, where the model becomes worse at earlier tasks as new ones are added. Sparse models help avoid both these phenomena — by not applying the whole model to all inputs, “experts” in the model can specialize on different tasks or data types while still taking advantage of shared parts of the model.
Research on sparsity has long been pursued at Google Research. Pathways summarizes the research vision of building one single large model that diligently handles thousands of tasks and numerous data modalities. So far there has been considerable progress in sparse unimodal models for language (Switch, Task-MoE, GLaM) and computer vision (Vision MoE). Today, we take another important step towards the Pathways vision by studying large sparse models that simultaneously handle images and text with modality-agnostic routing. A relevant approach is multimodal contrastive learning, which requires a solid understanding of both images and text in order to align pictures with their correct text description. The strongest models that tackle this task to date rely on independent networks for each modality (a “two-tower” approach).
In “Multimodal Contrastive Learning with LIMoE: the Language Image Mixture of Experts”, we present the first large-scale multimodal architecture using a sparse mixture of experts. It simultaneously processes both images and text, but uses sparsely activated experts that naturally specialize. On zero-shot image classification, LIMoE outperforms both comparable dense multimodal models and two-tower approaches. The largest LIMoE achieves 84.1% zero-shot ImageNet accuracy, comparable to more expensive state-of-the-art models. Sparsity enables LIMoE to scale up gracefully and learn to handle very different inputs, addressing the tension between being a jack-of-all-trades generalist and a master-of-one specialist.
Sparse Mixture of Expert Models
Transformers represent data as a sequence of vectors (or tokens). Though originally developed for text, they can be applied to most things that are representable as a sequence of tokens, e.g., images, videos, and audio. Recent large-scale MoE models add expert layers to the Transformer architecture (e.g., gShard and ST-MoE in natural language processing, and Vision MoE for vision tasks).
A standard Transformer consists of many “blocks”, each containing various different layers. One of these layers is a feed-forward network (FFN). For LIMoE and the works cited above, this single FFN is replaced by an expert layer that contains many parallel FFNs, each of which is an expert. Given a sequence of tokens to process, a simple router learns to predict which experts should handle which tokens. Only a small number of experts are activated per token, meaning although the model capacity is significantly increased by virtue of having so many experts, the actual computational cost is controlled by using them sparsely. If only one expert is activated, the model’s cost is roughly equivalent to the standard Transformer model.
LIMoE does precisely that, activating one expert per example, thereby matching the computational cost of the dense baselines. What’s different is that the LIMoE router might see tokens of either image or text data.
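The post describes the expert layer only at a high level; the following is a minimal PyTorch-style sketch, not LIMoE's actual implementation, of the general pattern: a learned router scores each token and dispatches it to its top-1 expert FFN. All dimensions and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top1MoELayer(nn.Module):
    """Minimal sparse MoE layer: each token is routed to its single best expert."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (num_tokens, d_model) -- image patches and text tokens alike.
        gate_probs = F.softmax(self.router(tokens), dim=-1)   # (tokens, experts)
        top_prob, top_expert = gate_probs.max(dim=-1)         # top-1 routing

        output = torch.zeros_like(tokens)
        for e, expert in enumerate(self.experts):
            mask = top_expert == e
            if mask.any():
                # Only the chosen expert runs on each token, so per-token compute
                # stays roughly constant as the number of experts grows.
                output[mask] = top_prob[mask].unsqueeze(-1) * expert(tokens[mask])
        return output


# Example: 6 tokens of width 8 routed across 4 experts.
layer = Top1MoELayer(d_model=8, d_hidden=32, num_experts=4)
print(layer(torch.randn(6, 8)).shape)  # torch.Size([6, 8])
```

Because only the selected expert runs on each token, adding experts grows model capacity without growing per-token compute, which is the property the post highlights.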
A unique failure mode of MoE models occurs when they try to send all tokens to the same expert. Typically this is addressed with auxiliary losses, extra training objectives that encourage balanced expert usage. We found that dealing with multiple modalities interacted with sparsity to cause new failure modes that existing auxiliary losses could not address. To overcome this, we developed new auxiliary losses (more details in the paper) and used routing prioritization (BPR) during training, two innovations that resulted in stable and high performance multimodal models.
The new auxiliary losses (LIMoE aux) and routing prioritization (BPR) stabilized and improved overall performance (left) and increased the success rate of routing behavior (middle and right). A low success rate means the router does not use all the experts available and drops many tokens due to individual expert capacity being reached, which usually indicates the sparse model is not learning well. The combination introduced for LIMoE ensures high routing success rates for both images and text and consequently leads to significantly better performance.
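The new LIMoE auxiliary losses are detailed only in the paper; as background, here is a short sketch of the standard load-balancing auxiliary loss used by earlier sparse models such as the Switch Transformer, which encourages tokens to spread evenly across experts. Shapes and the scaling convention are illustrative.

```python
import torch
import torch.nn.functional as F


def load_balancing_loss(router_logits: torch.Tensor) -> torch.Tensor:
    """Auxiliary loss that encourages tokens to spread evenly over experts.

    router_logits: (num_tokens, num_experts) raw router scores.
    The returned scalar is minimized (equal to 1.0, before any weighting)
    when both the routing probabilities and the top-1 assignments are uniform.
    """
    num_experts = router_logits.shape[-1]
    probs = F.softmax(router_logits, dim=-1)                  # (tokens, experts)

    # Fraction of tokens whose top-1 choice is each expert.
    top1 = probs.argmax(dim=-1)
    dispatch_frac = F.one_hot(top1, num_experts).float().mean(dim=0)

    # Mean routing probability assigned to each expert.
    prob_frac = probs.mean(dim=0)

    # Scaled dot product between the two per-expert distributions.
    return num_experts * torch.sum(dispatch_frac * prob_frac)


logits = torch.randn(16, 4)
print(load_balancing_loss(logits))
```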
Contrastive Learning with LIMoE
In multimodal contrastive learning, models are trained on paired image-text data (e.g., a photo and its caption). Typically, an image model extracts a representation of images, and a different text model extracts a representation of text. The contrastive learning objective encourages the image and text representations to be close for the same image-text pair and far away for content from different pairs. Such models with aligned representations can be adapted to new tasks without extra training data (“zero-shot”), e.g., an image will be classified as a dog if its representation is closer to the representation of the word “dog” than the word “cat”. This idea scales to thousands of classes and is referred to as zero-shot image classification.
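As a concrete illustration of the objective described above, here is a compact sketch of a CLIP-style symmetric contrastive loss over a batch of aligned image-text embedding pairs; the encoder outputs and temperature are placeholders.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(image_emb: torch.Tensor,
                     text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric image-text contrastive loss over a batch of aligned pairs.

    image_emb, text_emb: (batch, dim), where row i of each tensor comes from
    the same image-caption pair.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Cosine similarity of every image against every caption in the batch.
    logits = image_emb @ text_emb.t() / temperature          # (batch, batch)

    # Matching pairs lie on the diagonal.
    targets = torch.arange(logits.shape[0])
    loss_i2t = F.cross_entropy(logits, targets)              # image -> correct text
    loss_t2i = F.cross_entropy(logits.t(), targets)          # text -> correct image
    return (loss_i2t + loss_t2i) / 2


print(contrastive_loss(torch.randn(8, 64), torch.randn(8, 64)))
```

Zero-shot classification then reduces to comparing an image embedding against the text embeddings of candidate class names and picking the closest one.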
CLIP and ALIGN (both two-tower models) scaled this process to achieve 76.2% and 76.4% zero-shot classification accuracy on the popular ImageNet dataset. We study one-tower models which compute both image and text representations. We find this reduces performance for dense models, likely due to negative interference or insufficient capacity. However, a compute-matched LIMoE not only improves over the one-tower dense model, but also outperforms two-tower dense models. We trained a series of models in a comparable training regimen to CLIP. Our dense L/16 model achieves 73.5% zero-shot accuracy, whereas LIMoE-L/16 gets to 78.6%, even outperforming CLIP’s more expensive, two-tower L/14 model (76.2%). As shown below, LIMoE’s use of sparsity provides a remarkable performance boost over dense models with equivalent cost.
For a given compute cost (x-axis), LIMoE models (circles, solid line) are significantly better than their dense baselines (triangles, dashed line). The architecture indicates the size of the underlying transformer, increasing from left (S/32) to right (L/16). Following standard convention, S (small), B (base), and L (large) refer to model scale. The number refers to the patch size, where smaller patches imply a larger architecture.
LiT and BASIC pushed zero-shot accuracy for dense two-tower models to 84.5% and 85.6% respectively. In addition to scaling, these approaches made use of specialized pre-training methods, repurposing image models that were already of exceptionally high quality. LIMoE-H/14 does not benefit from any pre-training or modality-specific components, but still achieved a comparable 84.1% zero-shot accuracy training from scratch. The scale of these models is also interesting to compare: LiT and BASIC are 2.1B and 3B parameter models. LIMoE-H/14 has 5.6B parameters in total, but via sparsity it only applies 675M parameters per token making it significantly more lightweight.
| Model | Pre-training data seen | Image-text data seen | Total data seen | Parameters per token | ImageNet accuracy |
| --- | --- | --- | --- | --- | --- |
| CLIP | – | 12.8B | 12.8B | ~200M | 76.2% |
| ALIGN | – | 19.8B | 19.8B | ~410M | 76.4% |
| LiT | 25.8B | 18.2B | 44.0B | 1.1B | 84.5% |
| BASIC | 19.7B | 32.8B | 52.5B | 1.5B | 85.6% |
| LIMoE H/14 | – | 23.3B | 23.3B | 675M | 84.1% |
Understanding LIMoE’s Behavior
LIMoE was motivated by the intuition that sparse conditional computation enables a generalist multimodal model to still develop the specialization needed to excel at understanding each modality. We analyzed LIMoE’s expert layers and uncovered a few interesting phenomena.
First, we see the emergence of modality-specialized experts. In our training setup there are many more image tokens than text tokens, so all experts tend to process at least some images, but some experts process mostly images, others mostly text, and others a mix of both.
There are also some clear qualitative patterns among the image experts — e.g., in most LIMoE models, there is an expert that processes all image patches that contain text. In the example below, one expert processes fauna and greenery, and another processes human hands.
Moving Forward
Multimodal models that handle many tasks are a promising route forward, and there are two key ingredients for success: scale, and the ability to avoid interference between distinct tasks and modalities while taking advantage of synergies. Sparse conditional computation is an excellent way of doing both. It enables performant and efficient generalist models that also have the capacity and flexibility for the specialization necessary to excel at individual tasks, as demonstrated by LIMoE’s solid performance with less compute.
Acknowledgements
We thank our co-authors on this work: Joan Puigcerver, Rodolphe Jenatton and Neil Houlsby. We also thank Andreas Steiner, Xiao Wang and Xiaohua Zhai, who led early explorations into dense single-tower models for contrastive multimodal learning, and also were instrumental in providing data access. We enjoyed useful discussions with André Susano Pinto, Maxim Neumann, Barret Zoph, Liam Fedus, Wei Han and Josip Djolonga. Finally, we would also like to thank and acknowledge Tom Small for the awesome animated figure used in this post.
Weekly forecasts can now start on Sunday with Amazon Forecast
We are excited to announce that in Amazon Forecast, you can now start your forecast horizon at custom starting points, including on Sundays for weekly forecasts. This allows you to more closely align demand planning forecasts to local business practices and operational requirements.
Forecast is a fully managed service that uses statistical and machine learning (ML) algorithms to deliver highly accurate time series forecasts. It uses state-of-the-art algorithms to predict future time series data based on historical data, and requires no ML experience. Typical Forecast applications include resource planning for inventory, workforce staffing, and web traffic. In this post, we review a new option that allows you to align forecasts with business and demand cycles, while reducing operational cost by offloading aggregation workflows.
To optimize demand planning, forecasts need to align with business operations. Previously, starting points for forecasts were fixed: daily forecasts assumed demand starting at midnight each day, weekly predictions assumed Monday as the first day of the week, and monthly predictions started on the first day of each month. These predefined starting points presented two challenges. First, if your business cycle began at a different point than the fixed value, you had to manually aggregate forecasts to your required starting point. For example, if your business week began on a Sunday and you wanted to produce weekly forecasts, you had to manually aggregate daily forecasts to a Sunday–Saturday week. This additional work added cost and compute time, and presented opportunities for errors. Second, the training data and forecast periods weren’t consistent; if your data reflects a demand cycle that begins on Sundays, the predictor and forecast should also use Sunday as the starting point.
Custom forecast horizon starting points now align business operations and forecasts, eliminating the need for manual aggregation work and saving cost and compute. If you have a business week starting on Sundays, you can automatically aggregate daily data to generate weekly forecasts that begin on Sundays. Or you can begin daily forecasts starting at 9:00 AM. Predictors can now be aligned with your ground truth data, providing consistency between inputs and forecasts. Forecast horizon starting points are easily defined when training new predictors via the Forecast console or using Forecast APIs.
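As a rough illustration of the API route, here is a minimal boto3 sketch of training an AutoPredictor with a Sunday alignment boundary; the predictor name and dataset group ARN are placeholders, and the exact parameters should be checked against the CreateAutoPredictor reference.

```python
import boto3

forecast = boto3.client("forecast")

# Train an AutoPredictor that produces weekly forecasts for a 1-week horizon,
# with the week (and therefore the forecast) starting on Sundays.
response = forecast.create_auto_predictor(
    PredictorName="weekly_sunday_predictor",
    ForecastFrequency="W",
    ForecastHorizon=1,
    ForecastTypes=["0.5"],
    DataConfig={
        # Placeholder dataset group ARN.
        "DatasetGroupArn": "arn:aws:forecast:us-east-1:123456789012:dataset-group/demo"
    },
    # The new setting: align the weekly boundary to Sunday.
    TimeAlignmentBoundary={"DayOfWeek": "SUNDAY"},
)

print(response["PredictorArn"])
```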
Define custom forecast horizon starting periods
The forecast horizon is the length of time for which a forecast is made, expressed in units of the forecast frequency, and is bounded by a starting and an ending point. In Forecast, you can now select specific starting points for daily, weekly, monthly, and yearly forecast frequencies when training new predictors. These starting points, also called boundary values, are specified at one time unit finer than the forecast frequency, as shown in the following table.
| Forecast frequency unit | Boundary unit | Boundary values |
| --- | --- | --- |
| Daily | Hour | 0–23 |
| Weekly | Day of week | Monday through Sunday |
| Monthly | Day of month | 1 through 28 |
| Yearly | Month | January through December |
With custom starting points, you can align forecasts to start at specific points in time that match your business processes and ground truth data, for example, the month of May, the 15th of the month, Sundays, or 15:00 hours. For forecast frequencies coarser than the provided time series frequency, Forecast aggregates the time series data based on the custom starting point. For example (a brief pandas illustration follows this list):
- When generating daily forecasts from hourly data with a 9:00 AM starting period, forecasts are aggregated with hourly data each day from 9:00 AM to 8:00 AM the following day
- When generating weekly forecasts from daily data with a Sunday starting period, forecasts are aggregated with daily data each week from Sunday to the following Saturday
- When generating monthly forecasts from daily data with a starting day of the 15th of the month, forecasts are aggregated with daily data from the 15th of the current month to the 14th of the next month
- When generating yearly forecasts from monthly data with a starting month of May, forecasts are aggregated with monthly data from May of the current year to April of next year
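Forecast handles this aggregation for you; purely as an illustration of the second example above, here is how a Sunday-to-Saturday weekly rollup of daily data looks in pandas, with made-up column names and dates.

```python
import pandas as pd

# Daily demand for two weeks; values and column name are illustrative.
daily = pd.DataFrame(
    {"demand": range(14)},
    index=pd.date_range("2022-06-05", periods=14, freq="D"),  # 2022-06-05 is a Sunday
)

# Weeks that run Sunday through Saturday are weeks *ending* on Saturday,
# which pandas labels "W-SAT".
weekly = daily["demand"].resample("W-SAT").sum()
print(weekly)
```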
Available forecast frequencies
The following screenshots show examples of custom daily, weekly, monthly, and yearly forecast frequencies and starting points (the Time alignment boundary field on the Forecast console).
Specify custom forecast horizon starting points
You can define custom forecast horizon starting points when creating a new predictor. The following steps demonstrate how to do this using the Forecast console, and a short boto3 sketch follows the steps. We also offer a sample notebook that provides an example of how to integrate this new setting into your workflows.
- On the Forecast console, choose View dataset groups, and then Create dataset group.
- Create your dataset group, a target time series dataset, and load your data.
You’re redirected to the Forecast console as your data is loaded.
- After your target time series dataset is loaded into your dataset group and active, choose Start under Train a predictor.
- In the Train predictor section, provide values for the Name, Forecast frequency, and Forecast horizon fields.
- In the optional Time alignment boundary field, specify the starting point the predictor uses for the forecast.
The values in this list depend on the Forecast frequency value you choose. In this example, we create weekly forecasts with a 1-week horizon, with Sunday as the starting day of the week and of the forecast.
- Provide other optional configurations as needed and choose Create.
After you create the predictor, you can create your forecast.
- In the navigation pane, under your dataset group, choose Predictors.
- Select your new predictor.
- Choose Create forecast.
- Provide the necessary details and choose Start to create your forecast.
- When the forecast is complete, choose Create forecast export to export the results.
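The same forecast and export steps can also be scripted against the Forecast APIs. The sketch below uses placeholder ARNs, bucket, and role names; the sample notebook mentioned earlier remains the authoritative reference.

```python
import boto3

forecast = boto3.client("forecast")

# Generate a forecast from the trained predictor (placeholder ARN).
create_resp = forecast.create_forecast(
    ForecastName="weekly_sunday_forecast",
    PredictorArn="arn:aws:forecast:us-east-1:123456789012:predictor/weekly_sunday_predictor",
)

# Export the results to S3 once the forecast is active (placeholder bucket and role).
forecast.create_forecast_export_job(
    ForecastExportJobName="weekly_sunday_export",
    ForecastArn=create_resp["ForecastArn"],
    Destination={
        "S3Config": {
            "Path": "s3://my-forecast-bucket/exports/",
            "RoleArn": "arn:aws:iam::123456789012:role/ForecastExportRole",
        }
    },
)
```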
The following screenshots are samples of the original input file (left) and the exported forecast results (right). The input file is at an hourly frequency, whereas the forecast is produced at a weekly frequency, beginning with Sunday as the first day of the week. This is an example of Forecast automatically aggregating across two levels of forecast frequency (from hours to days, and from days to weeks).
Conclusion
Custom forecast horizon starting points in Forecast allow you to produce forecasts that align with your specific operational requirements. Work weeks start on different days in different regions, requiring forecasts that begin on days other than Monday and that stay aligned with ground truth training and ongoing data. Or you may want to generate daily forecasts that reflect a demand cycle beginning at 7:00 AM each day.
Forecast also automatically aggregates fine-grained forecasts to higher-level frequencies (such as days into weeks). This allows you to produce forecasts aligned with your operations, and saves you costs by removing the need to stand up and manage aggregation workflows.
Custom starting points are optional. If you don’t provide specific starting points, forecasts start at default times. Specific forecast horizon starting points are only available with AutoPredictor. For more information, refer to New Amazon Forecast API that creates up to 40% more accurate forecasts and provides explainability and CreateAutoPredictor.
To learn more about forecast frequencies, refer to Data aggregation for different forecast frequencies. All these new capabilities are available in all Regions where Forecast is publicly available. For more information about Region availability, see AWS Regional Services.
About the Authors
Dan Sinnreich is a Sr. Product Manager for Amazon Forecast. He is focused on democratizing low-code/no-code machine learning and applying it to improve business outcomes. Outside of work, he can be found playing hockey, trying to improve his tennis serve, scuba diving, and reading science fiction.
Paras Arora is a Software Development Engineer in the Amazon Forecast Team. He is passionate about building cutting edge AI/ML solutions in the cloud. In his spare time, he enjoys hiking and traveling.
Chetan Surana is a Software Development Engineer in the Amazon Forecast team. His interests lie at the intersection of machine learning and software development, applying thoughtful design and engineering skills to solve problems. Outside of work, he enjoys photography, hiking, and cooking.
Student-powered machine learning
From their early days at MIT, and even before, Emma Liu ’22, MNG ’22, Yo-whan “John” Kim ’22, MNG ’22, and Clemente Ocejo ’21, MNG ’22 knew they wanted to perform computational research and explore artificial intelligence and machine learning. “Since high school, I’ve been into deep learning and was involved in projects,” says Kim, who participated in a Research Science Institute (RSI) summer program at MIT and Harvard University and went on to work on action recognition in videos using Microsoft’s Kinect.
As students in the Department of Electrical Engineering and Computer Science who recently graduated from the Master of Engineering (MEng) Thesis Program, Liu, Kim, and Ocejo have developed the skills to help guide application-focused projects. Working with the MIT-IBM Watson AI Lab, they have improved text classification with limited labeled data and designed machine-learning models for better long-term forecasting for product purchases. For Kim, “it was a very smooth transition and … a great opportunity for me to continue working in the field of deep learning and computer vision in the MIT-IBM Watson AI Lab.”
Modeling video
Collaborating with researchers from academia and industry, Kim designed, trained, and tested a deep learning model for recognizing actions across domains — in this case, video. His team specifically targeted the use of synthetic data from generated videos for training and ran prediction and inference tasks on real data, which is composed of different action classes. They wanted to see how models pre-trained on synthetic videos, particularly game engine-generated simulations of humans or humanoid actions, stacked up against models trained on real data: publicly available videos scraped from the internet.
The reason for this research, Kim says, is that real videos can have issues, including representation bias, copyright, and ethical or personal sensitivity; for example, videos of a car hitting people would be difficult to collect, and people’s faces, real addresses, or license plates can’t be used without consent. Kim is running experiments with 2D, 2.5D, and 3D video models, with the goal of creating domain-specific synthetic video datasets, or even a large, general one, that can be used for transfer domains where data are lacking. For instance, for applications in the construction industry, this could include running action recognition on a building site. “I didn’t expect synthetically generated videos to perform on par with real videos,” he says. “I think that opens up a lot of different roles [for the work] in the future.”
Despite a rocky start to the project gathering and generating data and running many models, Kim says he wouldn’t have done it any other way. “It was amazing how the lab members encouraged me: ‘It’s OK. You’ll have all the experiments and the fun part coming. Don’t stress too much.’” It was this structure that helped Kim take ownership of the work. “At the end, they gave me so much support and amazing ideas that help me carry out this project.”
Data labeling
Data scarcity was also a theme of Emma Liu’s work. “The overarching problem is that there’s all this data out there in the world, and for a lot of machine learning problems, you need that data to be labeled,” says Liu, “but then you have all this unlabeled data that’s available that you’re not really leveraging.”
Liu, with direction from her MIT and IBM group, worked to put that data to use, training text classification semi-supervised models (and combining aspects of them) to add pseudo labels to the unlabeled data, based on predictions and probabilities about which categories each piece of previously unlabeled data fits into. “Then the problem is that there’s been prior work that’s shown that you can’t always trust the probabilities; specifically, neural networks have been shown to be overconfident a lot of the time,” Liu points out.
Liu and her team addressed this by evaluating the accuracy and uncertainty of the models and recalibrating them to improve her self-training framework. The self-training and calibration step gave her better confidence in the predictions. This pseudo-labeled data, she says, could then be added to the pool of real data, expanding the dataset; the process could be repeated in a series of iterations.
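The article does not include the underlying code; the following is a minimal sketch of the general self-training loop it describes, pseudo-labeling unlabeled examples above a confidence threshold and folding them back into the training pool. The model, data, and threshold are placeholders, and in practice the predicted probabilities would first be calibrated as described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder features: a small labeled pool and a larger unlabeled pool.
X_labeled = rng.normal(size=(100, 20))
y_labeled = rng.integers(0, 2, size=100)
X_unlabeled = rng.normal(size=(1000, 20))

model = LogisticRegression(max_iter=1000)

for iteration in range(3):
    model.fit(X_labeled, y_labeled)

    # Predict pseudo labels and keep only high-confidence ones.
    probs = model.predict_proba(X_unlabeled)
    confident = probs.max(axis=1) > 0.9   # illustrative threshold; calibrate first
    if not confident.any():
        break

    # Fold the confidently pseudo-labeled examples back into the training pool.
    X_labeled = np.vstack([X_labeled, X_unlabeled[confident]])
    y_labeled = np.concatenate([y_labeled, probs[confident].argmax(axis=1)])
    X_unlabeled = X_unlabeled[~confident]

print(f"Final labeled pool size: {len(y_labeled)}")
```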
For Liu, her biggest takeaway wasn’t the product, but the process. “I learned a lot about being an independent researcher,” she says. As an undergraduate, Liu worked with IBM to develop machine learning methods to repurpose drugs already on the market and honed her decision-making ability. After collaborating with academic and industry researchers to acquire skills to ask pointed questions, seek out experts, digest and present scientific papers for relevant content, and test ideas, Liu and her cohort of MEng students working with the MIT-IBM Watson AI Lab felt they had confidence in their knowledge, freedom, and flexibility to dictate their own research’s direction. Taking on this key role, Liu says, “I feel like I had ownership over my project.”
Demand forecasting
After his time at MIT and with the MIT-IBM Watson AI Lab, Clemente Ocejo also came away with a sense of mastery, having built a strong foundation in AI techniques and time series methods beginning with his MIT Undergraduate Research Opportunities Program (UROP), where he met his MEng advisor. “You really have to be proactive in decision-making,” says Ocejo, “vocalizing it [your choices] as the researcher and letting people know that this is what you’re doing.”
Ocejo used his background in traditional time series methods for a collaboration with the lab, applying deep learning to improve product demand forecasting in the medical field. Here, he designed, wrote, and trained a transformer, a specific machine learning model, which is typically used in natural-language processing and has the ability to learn very long-term dependencies. Ocejo and his team compared target forecast demands between months, learning dynamic connections and attention weights between product sales within a product family. They looked at identifier features concerning the price and amount, as well as account features about who is purchasing the items or services.
“One product does not necessarily impact the prediction made for another product in the moment of prediction. It just impacts the parameters during training that lead to that prediction,” says Ocejo. “Instead, we wanted to make it have a little more of a direct impact, so we added this layer that makes this connection and learns attention between all of the products in our dataset.”
In the long run, over a one-year prediction, the MIT-IBM Watson AI Lab group was able to outperform the current model; more impressively, it did so in the short run (close to a fiscal quarter). Ocejo attributes this to the dynamic of his interdisciplinary team. “A lot of the people in my group were not necessarily very experienced in the deep learning aspect of things, but they had a lot of experience in the supply chain management, operations research, and optimization side, which is something that I don’t have that much experience in,” says Ocejo. “They were giving a lot of good high-level feedback of what to tackle next and … and knowing what the field of industry wanted to see or was looking to improve, so it was very helpful in streamlining my focus.”
For this work, a deluge of data didn’t make the difference for Ocejo and his team, but rather its structure and presentation. Oftentimes, large deep learning models require millions and millions of data points in order to make meaningful inferences; however, the MIT-IBM Watson AI Lab group demonstrated that outcomes and technique improvements can be application-specific. “It just shows that these models can learn something useful, in the right setting, with the right architecture, without needing an excess amount of data,” says Ocejo. “And then with an excess amount of data, it’ll only get better.”