Achieve effective business outcomes with no-code machine learning using Amazon SageMaker Canvas

On November 30, 2021, we announced the general availability of Amazon SageMaker Canvas, a visual point-and-click interface that enables business analysts to generate highly accurate machine learning (ML) predictions without having to write a single line of code. With Canvas, you can take ML mainstream throughout your organization so business analysts without data science or ML experience can use accurate ML predictions to make data-driven decisions.

ML is becoming ubiquitous in organizations across industries to gather valuable business insights using predictions from existing data quickly and accurately. The key to scaling the use of ML is making it more accessible. This means empowering business analysts to use ML on their own, without depending on data science teams. Canvas helps business analysts apply ML to common business problems without having to know the details such as algorithm types, training parameters, or ensemble logic. Today, customers are using Canvas to address a wide range of use cases across verticals including churn detection, sales conversion, and time series forecasting.

In this post, we discuss key Canvas capabilities.

Get started with Canvas

Canvas offers an interactive tour to help you navigate through the visual interface, starting with importing data from the cloud or on-premises sources. Getting started with Canvas is quick; we offer sample datasets for multiple use cases, including predicting customer churn, estimating loan default probabilities, forecasting demand, and predicting supply chain delivery times. These datasets cover all the use cases currently supported by Canvas, including binary classification, multi-class classification, regression, and time series forecasting. To learn more about navigating Canvas and using the sample datasets, see Amazon SageMaker Canvas accelerates onboarding with new interactive product tours and sample datasets.

Exploratory data analysis

After you import your data, Canvas allows you to explore and analyze it, before building predictive models. You can preview your imported data and visualize the distribution of different features. You can then choose to transform your data to make it suitable to address your problem. For example, you may choose to drop columns, extract date and time, impute missing values, or replace outliers with standard or custom values. These activities are recorded in a model recipe, which is a series of steps towards data preparation. This recipe is maintained throughout the lifecycle of a particular ML model from data preparation to generating predictions. See Amazon SageMaker Canvas expands capabilities to better prepare and analyze data for machine learning to learn more about preparing and analyzing data within Canvas.

Visualize your data

Canvas also offers the ability to define and create new features in your data through mathematical operators and logical functions. You can visualize and explore your data through box plots, bar graphs, and scatterplots by dragging and dropping features directly on charts. In addition, Canvas provides correlation matrices for numerical and categorical variables to understand the relationships between features in your data. This information can be used to refine your input data and drive more accurate models. For more details on data analysis capabilities in Canvas, see Use Amazon SageMaker Canvas for exploratory data analysis. To learn more about mathematical functions and operators in Canvas, see Amazon SageMaker Canvas supports mathematical functions and operators for richer data exploration.

After you prepare and explore your data, Canvas gives you an option to validate your datasets so you can proactively check for data quality issues. Canvas validates the data on your behalf and surfaces issues such as missing values in any row or column and too many unique labels in the target column compared to the number of rows. In addition, Canvas provides you with options to fix these issues before you build your ML model. For a deeper dive into data validation capabilities, refer to Identifying and avoiding common data issues while building no code ML models with Amazon SageMaker Canvas.

Build ML models

The first step towards building ML models in Canvas is to define the target column for the problem, which is the column whose values you want to predict. For example, in a housing model you could choose the sale price as the target column to predict home prices. Alternatively, you could use churn as the target column to determine the probability of losing customers under different conditions. After you select the target column, Canvas automatically determines the type of problem for the model to be built.

Prior to building an ML model, you can get directional insights into the model’s estimated accuracy and how each feature influenced results by running a preview analysis. Based on these insights, you can further prepare, analyze, or explore your data to get the desired accuracy for model predictions.

Canvas offers two methods to train ML models: Quick build and Standard build. Both methods deliver a fully trained ML model with complete transparency to understand the importance of each feature towards the model outcome. Quick build focuses on speed and experimentation, whereas Standard build focuses on the highest levels of accuracy by going through multiple iterations of data preprocessing, choosing the right algorithm, exploring the hyperparameter space, and generating multiple candidate models before selecting the best-performing model. This process is done behind the scenes by Canvas without the need to write code.

New performance improvements deliver up to three times faster ML model training time, enabling rapid prototyping and faster time-to-value for business outcomes. To learn more, see Amazon SageMaker Canvas announces up to 3x faster ML model training time.

Model analysis

After you build the model, Canvas provides detailed model accuracy metrics and feature explainability.

Canvas also presents a Sankey chart depicting the flow of the data from one value into the other, including false positives and false negatives.

For users interested in analyzing more advanced metrics, Canvas provides F1 scores that combine precision and recall, an accuracy metric quantifying how many times the model made a correct prediction across the entire dataset, and the Area Under the Curve (AUC), which measures how well the model separates the categories in the dataset.
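
Canvas surfaces these metrics without requiring any code. For readers who want a concrete point of reference, the following sketch shows how the same metrics are conventionally computed with scikit-learn; the labels and scores are made-up placeholders, and this illustrates only the standard metric definitions, not how Canvas computes them internally.

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# made-up ground-truth labels and predicted probabilities, for illustration only
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1])
y_pred = (y_score >= 0.5).astype(int)  # predicted classes at a 0.5 threshold

print("F1:", f1_score(y_true, y_pred))              # combines precision and recall
print("Accuracy:", accuracy_score(y_true, y_pred))  # share of correct predictions
print("AUC:", roc_auc_score(y_true, y_score))       # how well the classes are separated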

Model predictions

With Canvas, you can run real-time predictions on the trained model and perform interactive what-if analyses by changing feature values and observing the impact on predictions.

Furthermore, you can run batch predictions on any validation dataset as a whole. These predictions can be previewed and downloaded for use with downstream applications.

Sharing and collaboration

Canvas allows you to continue the ML journey by sharing your models with your data science teams for review, feedback, and updates. You can share your models with other users using Amazon SageMaker Studio, a fully integrated development environment (IDE) for ML. Studio users can review the model and, if needed, update data transformations, retrain the model, and share back the updated version of the model with Canvas users who can then use it to generate predictions.

In addition, data scientists can share models built outside of Amazon SageMaker with Canvas users, removing the heavy lifting to build a separate tool or user interface to share models between different teams. With the bring your own model (BYOM) approach, you can now use ML models built by your data science teams in other environments and generate predictions within minutes directly in Canvas. This seamless collaboration between business and technical teams helps democratize ML across the organization by bringing transparency to ML models and accelerating ML deployments. To learn more about sharing and collaboration between business and technical teams using Canvas, see New – Bring ML Models Built Anywhere into Amazon SageMaker Canvas and Generate Predictions.

Conclusion

Get started today with Canvas and take advantage of ML to achieve your business outcomes without writing a line of code. Learn more from the interactive tutorial or MOOC course on Coursera. Happy innovating!


About the author

Shyam Srinivasan is on the AWS low-code/no-code ML product team. He cares about making the world a better place through technology and loves being part of this journey. In his spare time, Shyam likes to run long distances, travel around the world, and experience new cultures with family and friends.


How the UNDP Independent Evaluation Office is using AWS AI/ML services to enhance the use of evaluation to support progress toward the Sustainable Development Goals

The United Nations (UN) was founded in 1945 by 51 original Member States committed to maintaining international peace and security, developing friendly relations among nations, and promoting social progress, better living standards, and human rights. The UN is currently made up of 193 Member States and has evolved over the years to keep pace with a rapidly changing world. The United Nations Development Programme (UNDP) is the UN’s development agency and operates in over 170 countries and territories. It plays a critical role in helping countries achieve the Sustainable Development Goals (SDGs), which are a global call to action to end poverty, protect the planet, and ensure all people enjoy peace and prosperity.

As a learning organization, the UNDP highly values the evaluation function. Each UNDP program unit commissions evaluations to assess the performance of their projects and programs. The Independent Evaluation Office (IEO) is a functionally independent office within the UNDP that supports the oversight and accountability functions of the Executive Board and management of the UNDP, UNCDF, and UNV. The core functions of the IEO are to conduct independent programmatic and thematic evaluations that are of strategic importance to the organization—like its support for the COVID-19 pandemic recovery.

In this post, we discuss how the IEO developed UNDP’s artificial intelligence and machine learning (ML) platform—named Artificial Intelligence for Development Analytics (AIDA)—in collaboration with AWS, UNDP’s Information and Technology Management Team (UNDP ITM), and the United Nations International Computing Centre (UNICC). AIDA is a web-based platform that allows program managers and evaluators to expand their evidence base by searching existing data in a smarter, more efficient, and innovative way to produce insights and lessons learned. By searching at the granular level of paragraphs, AIDA finds pieces of evidence that would not be found using conventional searches. The creation of AIDA aligns with the UNDP Strategic Plan 2022–2025 to use digitization and innovation for greater development impact.

The challenge

The IEO is the custodian of the UNDP Evaluation Resource Center (ERC). The ERC is a repository of over 6,000 evaluation reports that cover every aspect of the organization’s work, everywhere it has worked, since 1997. The findings and recommendations of the evaluation reports inform UNDP management, donor, and program staff to better design future interventions, take course-correction measures in their current programs, and make funding and policy decisions at every level.

Before AIDA, the process to extract evaluative evidence and generate lessons and insights was manual, resource-intensive, and time-consuming. Moreover, traditional search methods didn’t work well with unstructured data, so the evidence base was limited. To address this challenge, the IEO decided to use AI and ML to better mine the evaluation database for lessons and knowledge.

The AIDA team was mindful of the challenging task of extracting evidence from unstructured data such as evaluation reports. Usually, evaluation reports are 80–100 pages, are in multiple languages, and contain findings, conclusions, and recommendations. Even though evaluations are guided by the UNDP Evaluation Guideline, there is no standard written format for these evaluations, and the aforementioned sections may occur at different locations in the document, or not all of them may exist. Therefore, accurately extracting evaluative evidence at the paragraph level and applying appropriate labels was a significant ML challenge.

Solution overview

The AIDA technical solution was developed by AWS Professional Services and the UNICC. The core technology platform was designed and developed by the AWS ProServe team. The UNICC was responsible for developing the AIDA web portal and human-in-the-loop interface. The AIDA platform was envisioned to provide a simple and highly accurate mechanism to search UNDP evaluation reports across various themes and export them for further analysis. AIDA’s architecture needed to address several requirements:

  • Automate the extraction and labeling of evaluation data
  • Process thousands of reports
  • Allow the IEO to add new labels without calling on the expertise of data scientists and ML experts

To deliver the requirements, the components were designed with these tenets in mind:

  • Technically and environmentally sustainable
  • Cost conscious
  • Extensible to allow for future expansion

The resulting solution can be broken down to three components, as shown in the following architecture diagram:

  • Data ingestion and extraction
  • Data classification
  • Intelligent search

The following sections describe these components in detail.

Data ingestion and extraction

Evaluation reports are prepared and submitted by UNDP program units across the globe—there is no standard report layout template or format. The data ingestion and extraction component ingests and extracts content from these unstructured documents.

Amazon Textract is used to extract data from PDF documents. This solution uses the asynchronous StartDocumentTextDetection API to build the document processing workflow that handles Amazon Textract asynchronous invocation, raw response extraction, and persistence in Amazon Simple Storage Service (Amazon S3). This solution adds an Amazon Textract postprocessing component to handle paragraph-based text extraction. The postprocessing component uses bounding box metadata from Amazon Textract for intelligent data extraction. The postprocessing component is capable of extracting data from complex, multi-format, multi-page PDF files with varying headers, footers, footnotes, and multi-column data. The Apache Tika open-source Python library is used for data extraction from Word documents.
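
For reference, the following minimal sketch shows how a document could be submitted to the asynchronous StartDocumentTextDetection API with boto3; the bucket, key, SNS topic, and IAM role values are placeholders, and in AIDA this call is wrapped by the Step Functions tasks described next.

import boto3

textract = boto3.client("textract")

# placeholders: replace with your bucket, document key, SNS topic, and IAM role
response = textract.start_document_text_detection(
    DocumentLocation={
        "S3Object": {"Bucket": "my-reports-bucket", "Name": "evaluations/report-001.pdf"}
    },
    NotificationChannel={
        "SNSTopicArn": "arn:aws:sns:us-east-1:111122223333:textract-job-complete",
        "RoleArn": "arn:aws:iam::111122223333:role/TextractSNSPublishRole",
    },
)
job_id = response["JobId"]

# after the SNS notification reports completion, fetch the (paginated) JSON results
result = textract.get_document_text_detection(JobId=job_id)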

The following diagram illustrates this workflow, orchestrated with AWS Step Functions.

This workflow has the following steps:

  1. TextractCompleted is the first step, which ensures documents are not processed by Amazon Textract more than once, avoiding unnecessary processing time and cost from duplicate processing.
  2. TextractAsyncCallTask submits the documents to be processed by Amazon Textract using the asynchronous StartDocumentTextDetection API. This API processes the documents and stores the JSON output files in Amazon S3 for postprocessing.
  3. TextractAsyncSNSListener is an AWS Lambda function that handles the Amazon Textract job completion event, and returns the metadata back to the workflow for further processing.
  4. TextractPostProcessorTask is an AWS Lambda function that uses the metadata and processes the JSON output files produced by Amazon Textract to extract meaningful paragraphs.
  5. TextractQAValidationTask is an AWS Lambda function that performs some simple text validations on the extracted paragraphs and collects metrics like number of complete or incomplete paragraphs. These metrics are used to measure the quality of text extractions.

Please refer to TextractAsync, an IDP CDK construct that abstracts the invocation of the Amazon Textract Async API, handling Amazon Simple Notification Service (Amazon SNS) messages and workflow processing to accelerate your development.

Data classification

The data classification component identifies the critical parts of the evaluation reports, and further classifies them into a taxonomy of categories organized around the various themes of the Sustainable Development Goals. We have built one multi-class and two multi-label classification models with Amazon Comprehend.

Extracted paragraphs are processed using Step Functions, which integrates with Amazon Comprehend to perform classification in batch mode. Paragraphs are classified into findings, recommendations, and conclusions (FRCs) using a custom multi-class model, which helps identify the critical sections of the evaluation reports. For the identified critical sections, we identify the categories (thematic and non-thematic) using a custom multi-label classification model. Thematic and non-thematic classification is used to identify and align the evaluation reports with Sustainable Development Goals like no poverty (SDG-1), gender equality (SDG-5), clean water and sanitation (SDG-6), and affordable and clean energy (SDG-7).
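
As an illustration of this batch-mode integration, the following sketch starts an Amazon Comprehend custom classification job with boto3; the classifier ARN, S3 locations, and IAM role are placeholders, and in AIDA this call is orchestrated by the Step Functions workflow shown next.

import boto3

comprehend = boto3.client("comprehend")

# placeholders: replace with your custom classifier ARN, S3 paths, and IAM role
response = comprehend.start_document_classification_job(
    JobName="frc-classification-batch",
    DocumentClassifierArn="arn:aws:comprehend:us-east-1:111122223333:document-classifier/frc-classifier",
    InputDataConfig={
        "S3Uri": "s3://my-aida-bucket/extracted-paragraphs/",
        "InputFormat": "ONE_DOC_PER_LINE",
    },
    OutputDataConfig={"S3Uri": "s3://my-aida-bucket/classification-output/"},
    DataAccessRoleArn="arn:aws:iam::111122223333:role/ComprehendDataAccessRole",
)
job_id = response["JobId"]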

The following figure depicts the Step Functions workflow to process text classification.

To reduce the cost of classification, we created the workflow to submit Amazon Comprehend jobs in batch mode. The workflow waits for all the Amazon Comprehend jobs to complete, then refines the data by combining the text extraction and Amazon Comprehend results: it filters out paragraphs that aren’t identified as FRC and aggregates the thematic and non-thematic classification categories by paragraph.

Extracted paragraphs with their classification categories are stored in Amazon RDS for PostgreSQL. This is a staging database to preserve all the extraction and classification results. We also use this database to further enrich the results to aggregate the themes of the paragraphs, and filter paragraphs that are not FRC. Enriched content is fed to Amazon Kendra.

For the first release, we had over 2 million paragraphs extracted and classified. With the help of FRC custom classification, we were able to accurately narrow down the paragraphs to over 700,000 from 2 million. The Amazon Comprehend custom classification model helped accurately present the relevant content and substantially reduced the cost on Amazon Kendra indexes.

Amazon DynamoDB is used for storing document metadata and keeping track of the document processing status across all key components. Metadata tracking is particularly useful to handle errors and retries.
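
A minimal sketch of this kind of status tracking is shown below, assuming a hypothetical DocumentTracking table keyed by document ID; the table and attribute names are illustrative and not the actual AIDA schema.

from datetime import datetime, timezone

import boto3

# hypothetical tracking table keyed by document_id
table = boto3.resource("dynamodb").Table("DocumentTracking")

table.update_item(
    Key={"document_id": "evaluations/report-001.pdf"},
    UpdateExpression="SET #s = :status, updated_at = :ts",
    ExpressionAttributeNames={"#s": "status"},  # 'status' is a reserved word in DynamoDB expressions
    ExpressionAttributeValues={
        ":status": "TEXTRACT_COMPLETE",
        ":ts": datetime.now(timezone.utc).isoformat(),
    },
)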

Intelligent search

The intelligent search capability allows the users of the AIDA platform to intuitively search for evaluative evidence on UNDP program interventions contained within all the evaluation reports. The following diagram illustrates this architecture.

Amazon Kendra is used for intelligent searches. Enriched content from Amazon RDS for PostgreSQL is ingested into Amazon Kendra for indexing. The web portal layer uses the intelligent search capability of Amazon Kendra to intuitively search the indexed content. Labelers use the human-in-the-loop user interface to update the text classification generated by Amazon Comprehend for any extracted paragraphs. Changes to the classification are immediately reflected in the web portal, and human-updated feedback is extracted and used for Amazon Comprehend model training to continuously improve the custom classification model.
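
For example, a basic query against the index could look like the following sketch; the index ID and query text are placeholders, and the AIDA web portal adds filtering and result rendering on top of this call.

import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",  # placeholder index ID
    QueryText="lessons learned on gender equality programmes",
    PageSize=10,
)

for item in response["ResultItems"]:
    print(item["DocumentTitle"]["Text"])
    print(item["DocumentExcerpt"]["Text"])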

AIDA incorporates human-in-the-loop functionality, which boosts AIDA’s capacity to correct classification (FRC, thematic, non-thematic) and data extraction errors. Labels updated by the humans performing the human-in-the-loop function are added to the training dataset and used to retrain the Amazon Comprehend models to continuously improve classification accuracy.

Conclusion

In this post, we discussed how evaluators, through the IEO’s AIDA platform, are using Amazon AI and ML services like Amazon Textract, Amazon Comprehend, and Amazon Kendra to build a custom document processing system that identifies, extracts, and classifies data from unstructured documents. Using Amazon Textract for PDF text extraction improved paragraph-level evidence extraction from under 60% to over 80% accuracy. Additionally, multi-label classification improved from under 30% to 90% accuracy by retraining models in Amazon Comprehend with improved training datasets.

This platform enabled evaluators to intuitively search relevant content quickly and accurately. Transforming unstructured data to semi-structured data empowers the UNDP and other UN entities to make informed decisions based on a corpus of hundreds or thousands of data points about what works, what doesn’t work, and how to improve the impact of UNDP operations for the people it serves.

For more information about the intelligent document processing reference architecture, refer to Intelligent Document Processing. Please share your thoughts with us in the comments section.


About the Authors

Oscar A. Garcia is the Director of the Independent Evaluation Office (IEO) of the United Nations Development Program (UNDP). As Director, he provides strategic direction, thought leadership, and credible evaluations to advance UNDP work in helping countries progress towards national SDG achievement. Oscar also currently serves as the Chairperson of the United Nations Evaluation Group (UNEG). He has more than 25 years of experience in areas of strategic planning, evaluation, and results-based management for sustainable development. Prior to joining the IEO as Director in 2020, he served as Director of IFAD’s Independent Office of Evaluation (IOE), and Head of Advisory Services for Green Economy, UNEP. Oscar has authored books and articles on development evaluation, including one on information and communication technology for evaluation. He is an economist with a master’s degree in Organizational Change Management, New School University (NY), and an MBA from Bolivian Catholic University, in association with the Harvard Institute for International Development.

Sathya Balakrishnan is a Sr. Customer Delivery Architect in the Professional Services team at AWS, specializing in data and ML solutions. He works with US federal financial clients. He is passionate about building pragmatic solutions to solve customers’ business problems. In his spare time, he enjoys watching movies and hiking with his family.

Thuan Tran is a Senior Solutions Architect in the World Wide Public Sector supporting the United Nations. He is passionate about using AWS technology to help customers conceptualize the art of the possible. In his spare time, he enjoys surfing, mountain biking, axe throwing, and spending time with family and friends.

Prince Mallari is an NLP Data Scientist in the Professional Services team at AWS, specializing in applications of NLP for public sector customers. He is passionate about using ML as a tool to allow customers to be more productive. In his spare time, he enjoys playing video games and developing one with his friends.


Enable predictive maintenance for line of business users with Amazon Lookout for Equipment

Predictive maintenance is a data-driven maintenance strategy for monitoring industrial assets in order to detect anomalies in equipment operations and health that could lead to equipment failures. Through proactive monitoring of an asset’s condition, maintenance personnel can be alerted before issues occur, thereby avoiding costly unplanned downtime, which in turn leads to an increase in Overall Equipment Effectiveness (OEE).

However, building the necessary machine learning (ML) models for predictive maintenance is complex and time consuming. It requires several steps, including preprocessing of the data, building, training, evaluating, and then fine-tuning multiple ML models that can reliably predict anomalies in your asset’s data. The finished ML models then need to be deployed and provided with live data for online predictions (inferencing). Scaling this process to multiple assets of various types and operating profiles is often too resource intensive to make broader adoption of predictive maintenance viable.

With Amazon Lookout for Equipment, you can seamlessly analyze sensor data for your industrial equipment to detect abnormal machine behavior—with no ML experience required.

When customers implement predictive maintenance use cases with Lookout for Equipment, they typically choose between three options to deliver the project: build it themselves, work with an AWS Partner, or use AWS Professional Services. Before committing to such projects, decision-makers such as plant managers, reliability or maintenance managers, and line leaders want to see evidence of the potential value that predictive maintenance can uncover in their lines of business. Such an evaluation is usually performed as part of a proof of concept (POC) and is the basis for a business case.

This post is directed to both technical and non-technical users: it provides an effective approach for evaluating Lookout for Equipment with your own data, allowing you to gauge the business value it provides your predictive maintenance activities.

Solution overview

In this post, we guide you through the steps to ingest a dataset in Lookout for Equipment, review the quality of the sensor data, train a model, and evaluate the model. Completing these steps will help derive insights into the health of your equipment.

Prerequisites

All you need to get started is an AWS account and a history of sensor data for assets that can benefit from a predictive maintenance approach. The sensor data should be stored as CSV files in an Amazon Simple Storage Service (Amazon S3) bucket from your account. Your IT team should be able to meet these prerequisites by referring to Formatting your data. To keep things simple, it’s best to store all the sensor data in one CSV file where the rows are timestamps and the columns are individual sensors (up to 300).
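
As an illustration of that layout, the first few rows of such a file might look like the following; the sensor names and values are made up, and you should confirm the accepted timestamp formats in Formatting your data.

Timestamp,Sensor1,Sensor2,Sensor3
2023-01-01T00:00:00,0.52,101.3,1450.0
2023-01-01T00:01:00,0.54,101.1,1448.5
2023-01-01T00:02:00,0.51,101.6,1451.2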

Once you have your dataset available on Amazon S3, you can follow along with the rest of this post.

Add a dataset

Lookout for Equipment uses projects to organize the resources for evaluating pieces of industrial equipment. To create a new project, complete the following steps:

  1. On the Lookout for Equipment console, choose Create project.

Click the Create Project button on the home page of the service

  2. Enter a project name and choose Create project.

After the project is created, you can ingest a dataset that will be used to train and evaluate a model for anomaly detection.

  3. On the project page, choose Add dataset.

Click Add dataset on the project dashboard

  4. For S3 location, enter the S3 location (excluding the file name) of your data.
  5. For Schema detection method, select By filename, which assumes that all sensor data for an asset is contained in a single CSV file at the specified S3 location.
  6. Keep the other settings as default and choose Start ingestion to start the ingestion process.

Configure your data source details and click Start ingestion

Ingestion may take around 10–20 minutes to complete. In the background, Lookout for Equipment performs the following tasks:

  • It detects the structure of the data, such as sensor names and data types.
  • The timestamps between sensors are aligned and missing values are filled (using the latest known value).
  • Duplicate timestamps are removed (only the last value for each timestamp is kept).
  • Lookout for Equipment uses multiple types of algorithms for building the ML anomaly detection model. During the ingestion phase, it prepares the data so it can be used for training those different algorithms.
  • It analyzes the measurement values and grades each sensor as high, medium, or low quality.
  7. When the dataset ingestion is complete, inspect it by choosing View dataset under Step 2 of the project page.

Click View dataset on the project dashboard

When creating an anomaly detection model, selecting the best sensors (the ones containing the highest data quality) is often critical to training models that deliver actionable insights. The Dataset details section shows the distribution of sensor gradings (between high, medium, and low), while the table displays information on each sensor separately (including the sensor name, date range, and grading for the sensor data). With this detailed report, you can make an informed decision about which sensors you will use to train your models. If a large proportion of sensors in your dataset are graded as medium or low, there might be a data issue needing investigation. If necessary, you can reupload the data file to Amazon S3 and ingest the data again by choosing Replace dataset.

Sensor grade dashboard overview

By choosing the sensor grade entry in the details table, you can review details on the validation errors resulting in a given grade. Displaying and addressing these details will help ensure the information provided to the model is high quality. For example, you might see a signal with unexpectedly large chunks of missing values. Is this a data transfer issue, or was the sensor malfunctioning? Time to dive deeper into your data!

Individual sensor grade overview

To learn more about the different types of sensor issues Lookout for Equipment addresses when grading your sensors, refer to Evaluating sensor grades. Developers can also extract these insights using the ListSensorStatistics API.
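
For example, a developer could pull these statistics with a sketch like the following; the dataset name is a placeholder, and the exact fields returned in each summary should be checked against the API reference.

import boto3

l4e = boto3.client("lookoutequipment")

response = l4e.list_sensor_statistics(
    DatasetName="my-pump-dataset",  # placeholder dataset name
    MaxResults=50,
)

# each summary describes one sensor's data quality (missing values, duplicates, and so on)
for summary in response.get("SensorStatisticsSummaries", []):
    print(summary["SensorName"], summary)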

When you’re happy with your dataset, you can move to the next step of training a model for predicting anomalies.

Train a model

Lookout for Equipment allows the training of models for specific sensors. This gives you the flexibility to experiment with different sensor combinations or exclude sensors with a low grading. Complete the following steps:

  1. In the Details by sensor section on the dataset page, select the sensors to include in your model and choose Create model.

Selecting sensors for training a model

  2. For Model name, enter a model name, then choose Next.

Give a model name

  3. In the Training and evaluation settings section, configure the model input data.

To effectively train models, the data needs to be split into separate training and evaluation sets. You can define date ranges for this split in this section, along with a sampling rate for the sensors. How do you choose this split? Consider the following:

  • Lookout for Equipment expects at least 3 months of data in the training range, but the optimal amount of data is driven by your use case. More data may be necessary to account for any type of seasonality or operational cycles your production goes through.
  • There are no constraints on the evaluation range. However, we recommend setting up an evaluation range that includes known anomalies. This way, you can test if Lookout for Equipment is able to capture any events of interest leading to these anomalies.

By specifying the sampling rate, Lookout for Equipment effectively downsamples the sensor data, which can significantly reduce training time. The ideal sampling rate depends on the types of anomalies you suspect in your data: for slow-trending anomalies, a sampling rate between 1–10 minutes is usually a good starting point. Shorter sampling intervals (higher-frequency data) result in longer training times, whereas longer intervals shorten training time at the risk of cutting out leading indicators in your data that are relevant to predicting the anomalies.
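
Lookout for Equipment applies this downsampling for you when you pick a sampling rate. If you want a rough preview of what a 5-minute downsample does to your raw signals, you could inspect it locally with pandas, as in the following sketch (the file and column names are placeholders).

import pandas as pd

# placeholder file with a Timestamp column and one column per sensor
df = pd.read_csv("sensor_data.csv", parse_dates=["Timestamp"], index_col="Timestamp")

# average each sensor over 5-minute windows, mirroring a 5-minute sampling rate
df_5min = df.resample("5min").mean()
print(df_5min.head())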

Configure input data for model training

To train only on relevant portions of your data where the industrial equipment was in operation, you can perform off-time detection by selecting a sensor and defining a threshold that indicates whether the equipment is in an on or off state. This is important because it allows Lookout for Equipment to filter out the time periods when the machine is off, so the model learns only from relevant operational states.

  4. Specify your off-time detection, then choose Next.

Specify off time detection

Optionally, you can provide data labels, which indicate maintenance periods or known equipment failure times. If you have such data available, you can create a CSV file with the data in a documented format, upload it to Amazon S3, and use it for model training. Providing labels can improve the accuracy of the trained model by telling Lookout for Equipment where it should expect to find known anomalies.
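
Such a label file is typically a small CSV of start and end timestamps for known maintenance or anomaly periods, along the lines of the following sketch; the timestamps here are made up, so check the documented format for the exact expected layout before uploading.

2022-03-01T00:00:00.000000,2022-03-03T00:00:00.000000
2022-07-15T06:00:00.000000,2022-07-16T12:00:00.000000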

  5. Specify any data labels, then choose Next.

Optionally, specify data labels

  6. Review your settings in the final step. If everything looks fine, you can start the training.

Depending on the size of your dataset, the number of sensors, and the sampling rate, training the model may take a few moments or up to a few hours. For example, if you use 1 year of data at a 5-minute sampling rate with 100 sensors and no labels, training a model will take less than 15 minutes. On the other hand, if your data contains a large number of labels, training time could increase significantly. In such a situation, you can decrease training time by merging adjacent label periods to decrease their number.

You have just trained your first anomaly detection model without any ML knowledge! Now let’s look at the insights you can get from a trained model.

Evaluate a trained model

When model training has finished, you can view the model’s details by choosing View models on the project page, and then choosing the model’s name.

In addition to general information like name, status, and training time, the model page summarizes model performance data like the number of labeled events detected (assuming you provided labels), the average forewarning time, and the number of anomalous equipment events detected outside of the label ranges. The following screenshot shows an example. For better visibility, the detected events are visualized (the red bars on the top of the ribbon) along with the labeled events (the blue bars at the bottom of the ribbon).

Evaluating a model

You can select detected events by choosing the red areas representing anomalies in the timeline view to get additional information. This includes:

  • The event start and end times along with its duration.
  • A bar chart with the sensors the model believes are most relevant to why an anomaly occurred. The percentage scores represent the calculated overall contribution.

Signal contribution bar charts on a selected event

These insights allow you to work with your process or reliability engineers to do further root cause evaluation of events and ultimately optimize maintenance activities, reduce unplanned downtimes, and identify suboptimal operating conditions.

To support predictive maintenance with real-time insights (inference), Lookout for Equipment supports live evaluation of online data via inference schedules. This requires that sensor data is uploaded to Amazon S3 periodically, and then Lookout for Equipment performs inference on the data with the trained model, providing real-time anomaly scoring. The inference results, including a history of detected anomalous events, can be viewed on the Lookout for Equipment console.
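
As a hedged sketch, creating such a scheduler programmatically could look like the following; the model, scheduler, bucket, and role names are placeholders, and the upload frequency and input/output configuration should be adapted to your setup.

import boto3

l4e = boto3.client("lookoutequipment")

l4e.create_inference_scheduler(
    ModelName="my-pump-model",                   # placeholder model name
    InferenceSchedulerName="my-pump-scheduler",  # placeholder scheduler name
    DataUploadFrequency="PT5M",                  # new sensor data is expected every 5 minutes
    DataInputConfiguration={
        "S3InputConfiguration": {"Bucket": "my-l4e-bucket", "Prefix": "inference-input/"}
    },
    DataOutputConfiguration={
        "S3OutputConfiguration": {"Bucket": "my-l4e-bucket", "Prefix": "inference-output/"}
    },
    RoleArn="arn:aws:iam::111122223333:role/L4EInferenceRole",
)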

7-day inference result dashboard

The results are also written to files in Amazon S3, allowing integration with other systems, for example a computerized maintenance management system (CMMS), or to notify operations and maintenance personnel in real time.

As you increase your Lookout for Equipment adoption, you’ll need to manage a larger number of models and inference schedules. To make this process easier, the Inference schedules page lists all schedulers currently configured for a project in a single view.

Inference scheduler list

Clean up

When you’re finished evaluating Lookout for Equipment, we recommend cleaning up any resources. You can delete the Lookout for Equipment project along with the dataset and any models created by selecting the project, choosing Delete, and confirming the action.

Summary

In this post, we walked through the steps of ingesting a dataset in Lookout for Equipment, training a model on it, and evaluating its performance to understand the value it can uncover for individual assets. Specifically, we explored how Lookout for Equipment can inform predictive maintenance processes that result in reduced unplanned downtime and higher OEE.

If you followed along with your own data and are excited about the prospects of using Lookout for Equipment, the next step is to start a pilot project, with the support of your IT organization, your key partners, or our AWS Professional Services teams. This pilot should target a limited number of industrial equipment and then scale up to eventually include all assets in scope for predictive maintenance.


About the authors

 Johann Füchsl is a Solutions Architect with Amazon Web Services. He guides enterprise customers in the manufacturing industry in implementing AI/ML use cases, designing modern data architectures, and building cloud-native solutions that deliver tangible business value. Johann has a background in mathematics and quantitative modeling, which he combines with 10 years of experience in IT. Outside of work, he enjoys spending time with his family and being out in nature.

Michaël Hoarau is an industrial AI/ML Specialist Solution Architect at AWS who alternates between data scientist and machine learning architect, depending on the moment. He is passionate about bringing the power of AI/ML to the shop floors of his industrial customers and has worked on a wide range of ML use cases, ranging from anomaly detection to predictive product quality or manufacturing optimization. When not helping customers develop the next best machine learning experiences, he enjoys observing the stars, traveling, or playing the piano.


Enable fully homomorphic encryption with Amazon SageMaker endpoints for secure, real-time inferencing

This is joint post co-written by Leidos and AWS. Leidos is a FORTUNE 500 science and technology solutions leader working to address some of the world’s toughest challenges in the defense, intelligence, homeland security, civil, and healthcare markets.

Leidos has partnered with AWS to develop an approach to privacy-preserving, confidential machine learning (ML) modeling where you build cloud-enabled, encrypted pipelines.

Homomorphic encryption is a new approach to encryption that allows computations and analytical functions to be run on encrypted data, without first having to decrypt it, in order to preserve privacy in cases where you have a policy that states data should never be decrypted. Fully homomorphic encryption (FHE) is the strongest notion of this type of approach, and it allows you to unlock the value of your data where zero-trust is key. The core requirement is that the data needs to be able to be represented with numbers through an encoding technique, which can be applied to numerical, textual, and image-based datasets. Data encrypted with FHE is larger in size, so testing must be done for applications that need the inference to be performed in near-real time or with size limitations. It’s also important to phrase all computations as linear equations.

In this post, we show how to activate privacy-preserving ML predictions for the most highly regulated environments. The predictions (inference) use encrypted data and the results are only decrypted by the end consumer (client side).

To demonstrate this, we show an example of customizing an Amazon SageMaker Scikit-learn, open sourced, deep learning container to enable a deployed endpoint to accept client-side encrypted inference requests. Although this example shows how to perform this for inference operations, you can extend the solution to training and other ML steps.

Endpoints are deployed with a couple of clicks or lines of code using SageMaker, which simplifies the process for developers and ML experts to build and train ML and deep learning models in the cloud. Models built using SageMaker can then be deployed as real-time endpoints, which is critical for inference workloads where you have real-time, steady-state, low-latency requirements. Applications and services can call the deployed endpoint directly or through a deployed serverless Amazon API Gateway architecture. To learn more about real-time endpoint architectural best practices, refer to Creating a machine learning-powered REST API with Amazon API Gateway mapping templates and Amazon SageMaker. The following figure shows both versions of these patterns.

Real-time endpoint architectural best practices

In both of these patterns, encryption in transit provides confidentiality as the data flows through the services to perform the inference operation. When received by the SageMaker endpoint, the data is generally decrypted to perform the inference operation at runtime, and is inaccessible to any external code and processes. To achieve additional levels of protection, FHE enables the inference operation to generate encrypted results for which the results can be decrypted by a trusted application or client.

More on fully homomorphic encryption

FHE enables systems to perform computations on encrypted data. The resulting computations, when decrypted, are controllably close to those produced without the encryption process. FHE can result in a small mathematical imprecision, similar to a floating point error, due to noise injected into the computation. It’s controlled by selecting appropriate FHE encryption parameters, which is a problem-specific, tuned parameter. For more information, check out the video How would you explain homomorphic encryption?

The following diagram provides an example implementation of an FHE system.

example implementation of an FHE system

In this system, you or your trusted client can do the following:

  1. Encrypt the data using a public key FHE scheme. There are a couple of different acceptable schemes; in this example, we’re using the CKKS scheme. To learn more about the FHE public key encryption process we chose, refer to CKKS explained.
  2. Send client-side encrypted data to a provider or server for processing.
  3. Perform model inference on encrypted data; with FHE, no decryption is required.
  4. Encrypted results are returned to the caller and then decrypted to reveal your result using a private key that’s only available to you or your trusted users within the client.

We’ve used the preceding architecture to set up an example using SageMaker endpoints, Pyfhel as an FHE API wrapper simplifying the integration with ML applications, and SEAL as our underlying FHE encryption toolkit.

Solution overview

We’ve built out an example of a scalable FHE pipeline in AWS using an SKLearn logistic regression deep learning container with the Iris dataset. We perform data exploration and feature engineering using a SageMaker notebook, and then perform model training using a SageMaker training job. The resulting model is deployed to a SageMaker real-time endpoint for use by client services, as shown in the following diagram.

example of a scalable FHE pipeline in AWS

In this architecture, only the client application sees unencrypted data. The data processed through the model for inferencing remains encrypted throughout its lifecycle, even at runtime within the processor in the isolated AWS Nitro Enclave. In the following sections, we walk through the code to build this pipeline.

Prerequisites

To follow along, we assume you have launched a SageMaker notebook with an AWS Identity and Access Management (IAM) role with the AmazonSageMakerFullAccess managed policy.

Train the model

The following diagram illustrates the model training workflow.

model training workflow

The following code shows how we first prepare the data for training using SageMaker notebooks by pulling in our training dataset, performing the necessary cleaning operations, and then uploading the data to an Amazon Simple Storage Service (Amazon S3) bucket. At this stage, you may also need to do additional feature engineering of your dataset or integrate with different offline feature stores.

# Setup/Train Logistic Regression Estimator
import boto3
import pandas as pd
from sagemaker.session import Session
from sklearn import datasets

# load and preprocess iris dataset
iris = datasets.load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
target_df = pd.DataFrame(data=iris.target, columns=["species"])
iris_df = (iris_df - iris_df.mean(0)) / (iris_df.std(0))

# save iris dataset
iris_df = pd.concat([iris_df, target_df], axis=1)
fname = "iris.csv"
iris_df.to_csv(fname, index=False)

# upload dataset to S3
bucket = Session().default_bucket()
upload_path = "training_data/fhe_train.csv"
boto3.Session().resource("s3").Bucket(bucket).Object(upload_path).upload_file(fname)

In this example, we’re using script-mode on a natively supported framework within SageMaker (scikit-learn), where we instantiate our default SageMaker SKLearn estimator with a custom training script to handle the encrypted data during inference. To see more information about natively supported frameworks and script mode, refer to Use Machine Learning Frameworks, Python, and R with Amazon SageMaker.

# instantiate estimator
from sagemaker import get_execution_role
from sagemaker.sklearn.estimator import SKLearn

sklearn_estimator = SKLearn(
    role=get_execution_role(),
    entry_point="fhe_train.py",
    source_dir="fhe_files",
    instance_type="ml.c4.xlarge",
    framework_version="0.23-1",
)

Finally, we train our model on the dataset and deploy our trained model to the instance type of our choice.

# fit the model
sklearn_estimator.fit("s3://" + bucket + "/training_data")

# construct predictor from trained model
predictor = sklearn_estimator.deploy(instance_type="ml.c4.xlarge", initial_instance_count=1)

At this point, we’ve trained a custom SKLearn FHE model and deployed it to a SageMaker real-time inference endpoint that’s ready to accept encrypted data.

Encrypt and send client data

The following diagram illustrates the workflow of encrypting and sending client data to the model.

workflow of encrypting and sending client data to the model

In most cases, the payload of the call to the inference endpoint contains the encrypted data rather than storing it in Amazon S3 first. We do this in this example because we’ve batched a large number of records to the inference call together. In practice, this batch size will be smaller or batch transform will be used instead. Using Amazon S3 as an intermediary isn’t required for FHE.

Now that the inference endpoint has been set up, we can start sending data over. We normally use different test and training datasets, but for this example we use the same training dataset.

First, we load the Iris dataset on the client side. Next, we set up the FHE context using Pyfhel. We selected Pyfhel for this process because it’s simple to install and work with, includes popular FHE schemas, and relies upon trusted underlying open-sourced encryption implementation SEAL. In this example, we send the encrypted data, along with public keys information for this FHE scheme, to the server, which enables the endpoint to encrypt the result to send on its side with the necessary FHE parameters, but doesn’t give it the ability to decrypt the incoming data. The private key remains only with the client, which has the ability to decrypt the results.

# Encrypt Data Locally
import numpy as np
import pandas as pd
from Pyfhel import Pyfhel
from sklearn import datasets

# load and preprocess potentially private data
iris = datasets.load_iris()
iris_df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
target_df = pd.DataFrame(data=iris.target, columns=["species"])
iris_df = (iris_df - iris_df.mean(0)) / (iris_df.std(0))

# setup FHE
HE = Pyfhel()
ckks_params = {
    "scheme": "CKKS",
    "n": 2 ** 14,
    "scale": 2 ** 30,
    "qi_sizes": [60, 30, 30, 30, 60],
}
HE.contextGen(**ckks_params)
HE.keyGen()
HE.relinKeyGen()
HE.rotateKeyGen()

# encrypt the data
iris_np = np.append(
    iris_df.to_numpy(), np.ones((len(iris_df), 1)), -1  # append 1 for bias term
)
encrypted_iris = [HE.encryptFrac(row).to_bytes() for row in iris_np]

After we encrypt our data, we put together a complete data dictionary—including the relevant keys and encrypted data—to be stored on Amazon S3. Afterwards, the model makes its predictions over the encrypted data from the client, as shown in the following code. Notice we don’t transmit the private key, so the model host isn’t able to decrypt the data. In this example, we’re passing the data through as an S3 object; alternatively, that data may be sent directly to the SageMaker endpoint. As a real-time endpoint, the payload contains the data parameter in the body of the request, which is mentioned in the SageMaker documentation.

# Send data to get prediction
import pickle

import boto3

# store data and encryption parameters
data_dict = {
    "data": encrypted_iris,
    "public_key": HE.to_bytes_public_key(),
    "relin_key": HE.to_bytes_relin_key(),
    "rotate_key": HE.to_bytes_rotate_key(),
    "context": HE.to_bytes_context(),
}

# upload data and encryption parameters to s3
pickle.dump(data_dict, open("request.pkl", "wb"))
boto3.Session().resource("s3").Bucket(bucket).Object("request.pkl").upload_file("request.pkl")

# get predictions from our instance
response = predictor.predict({"bucket": bucket, "uri": "request.pkl"})
predictions = pickle.loads(response)

The following code shows the central prediction logic within fhe_train.py (the appendix shows the entire training script).

predictions = []
for data in encrypted_data:
	encrypted_prediction = [
		HE.scalar_prod_plain(data, encoded_coef, in_new_ctxt=True).to_bytes()
		for encoded_coef in encoded_coefs
	]
	predictions.append(encrypted_prediction)

We’re computing the results of our encrypted logistic regression. This code computes an encrypted scalar product for each possible class and returns the results to the client. The results are the predicted logits for each class across all examples.
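
For intuition, the plaintext equivalent of those encrypted scalar products is an ordinary matrix of logits. Assuming the model coefficients were available locally (the coefs list built in the appendix script), the unencrypted computation would look like the following sketch.

import numpy as np

# plaintext equivalent, for illustration only: one logit per class for every example;
# iris_np already has a 1 appended to each row for the bias term, and each entry of
# coefs is [coefficients..., intercept] for one class
logits = iris_np @ np.array(coefs).T
plain_preds = np.argmax(logits, axis=1)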

Client returns decrypted results

The following diagram illustrates the workflow of the client retrieving their encrypted result and decrypting it (with the private key that only they have access to) to reveal the inference result.

the workflow of the client retrieving their encrypted result and decrypting it

In this example, results are stored on Amazon S3, but generally this would be returned through the payload of the real-time endpoint. Using Amazon S3 as an intermediary isn’t required for FHE.

The inference result will be controllably close to the result the client would have obtained by computing it locally without FHE.

# Decrypt results locally
import numpy as np
from Pyfhel import PyCtxt

# extract predictions from class probabilities
cl_preds = []
for prediction in predictions:
    logits = [PyCtxt(bytestring=p, scheme="CKKS", pyfhel=HE) for p in prediction]
    cl = np.argmax([HE.decryptFrac(logit)[0] for logit in logits])
    cl_preds.append(cl)

# compute accuracy
np.mean(cl_preds == target_df.to_numpy().squeeze()) * 100

# clean up
predictor.delete_endpoint()

Clean up

We end this process by deleting the endpoint we created, to make sure there isn’t any unused compute after this process.

Results and considerations

One of the common drawbacks of using FHE on top of models is that it adds computational overhead, which—in practice—makes the resulting model too slow for interactive use cases. But in cases where the data is highly sensitive, it might be worthwhile to accept this latency trade-off. For our simple logistic regression, we are able to process 140 input data samples within 60 seconds and see linear performance. The following chart includes the total end-to-end time, including the time taken by the client to encrypt the input and decrypt the results. It also uses Amazon S3, which adds latency and isn’t required for these cases.

linear scaling as we increase the number of examples from 1 to 150

We see linear scaling as we increase the number of examples from 1 to 150. This is expected because each example is encrypted independently of the others, so we expect a linear increase in computation, with a fixed setup cost.

This also means that you can scale your inference fleet horizontally for greater request throughput behind your SageMaker endpoint. You can use Amazon SageMaker Inference Recommender to cost optimize your fleet depending on your business needs.

Conclusion

And there you have it: fully homomorphic encryption ML for a SKLearn logistic regression model that you can set up with a few lines of code. With some customization, you can implement this same encryption process for different model types and frameworks, independent of the training data.

If you’d like to learn more about building an ML solution that uses homomorphic encryption, reach out to your AWS account team or our partner, Leidos.

The content and opinions in this post are those of the third-party authors, and AWS is not responsible for the content or accuracy of this post.

Appendix

The full training script is as follows:

import argparse
import os
import pickle
from io import BytesIO

import boto3
import joblib
import numpy as np
import pandas as pd
from Pyfhel import PyCtxt, Pyfhel
from sklearn.linear_model import LogisticRegression


def model_fn(model_dir):
    clf = joblib.load(os.path.join(model_dir, "model.joblib"))
    return clf


def input_fn(request_body, request_content_type):
    loaded_data = np.load(BytesIO(request_body), allow_pickle=True).item()
    boto3.Session().resource("s3").Bucket(loaded_data["bucket"]).download_file(
        loaded_data["uri"], "request.pkl"
    )
    loaded_data = pickle.load(open("request.pkl", "rb"))
    return loaded_data


def predict_fn(input_data, model):
    HE = Pyfhel()

    data = input_data["data"]
    HE.from_bytes_context(input_data["context"])
    HE.from_bytes_public_key(input_data["public_key"])
    HE.from_bytes_relin_key(input_data["relin_key"])
    HE.from_bytes_rotate_key(input_data["rotate_key"])

    encrypted_data = [PyCtxt(bytestring=row, scheme="CKKS", pyfhel=HE) for row in data]

    coefs = [
        np.append(coef, intercept).astype("float64")
        for coef, intercept in zip(model.coef_, model.intercept_)
    ]
    encoded_coefs = [HE.encodeFrac(coef) for coef in coefs]

    predictions = []
    for data in encrypted_data:
        encrypted_prediction = [
            HE.scalar_prod_plain(data, encoded_coef, in_new_ctxt=True).to_bytes()
            for encoded_coef in encoded_coefs
        ]
        predictions.append(encrypted_prediction)

    encoded_output = pickle.dumps(predictions)
    output = np.frombuffer(encoded_output, dtype="byte")

    return output


if __name__ == "__main__":

    parser = argparse.ArgumentParser()

    # Data and model directories
    parser.add_argument("--model-dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument(
        "--train", type=str, default=os.environ.get("SM_CHANNEL_TRAINING")
    )

    args, _ = parser.parse_known_args()
    train_df = pd.read_csv(args.train + "/" + "fhe_train.csv")

    model = LogisticRegression()
    model.fit(train_df.iloc[:, :-1], train_df.iloc[:, -1])
    acc = np.mean(model.predict(train_df.iloc[:, :-1]) == train_df.iloc[:, -1]) * 100
    print("****Accuracy****", acc)

    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))

About the Authors

Liv d’Aliberti is a researcher within the Leidos AI/ML Accelerator under the Office of Technology. Their research focuses on privacy-preserving machine learning.

Manbir Gulati is a researcher within the Leidos AI/ML Accelerator under the Office of Technology. His research focuses on the intersection of cybersecurity and emerging AI threats.

Joe Kovba is a Cloud Center of Excellence Practice Lead within the Leidos Digital Modernization Accelerator under the Office of Technology. In his free time, he enjoys refereeing football games and playing softball.

Ben Snively is a Public Sector Specialist Solutions Architect. He works with government, non-profit, and education customers on big data and analytical projects, helping them build solutions using AWS. In his spare time, he adds IoT sensors throughout his house and runs analytics on them.

Sami Hoda is a Senior Solutions Architect in the Partners Consulting division covering the Worldwide Public Sector. Sami is passionate about projects where equal parts design thinking, innovation, and emotional intelligence can be used to solve problems for and impact people in need.

Read More

Automate Amazon Rekognition Custom Labels model training and deployment using AWS Step Functions

With Amazon Rekognition Custom Labels, you can have Amazon Rekognition train a custom model for object detection or image classification specific to your business needs. For example, Rekognition Custom Labels can find your logo in social media posts, identify your products on store shelves, classify machine parts in an assembly line, distinguish healthy and infected plants, or detect animated characters in videos.

Developing a Rekognition Custom Labels model to analyze images is a significant undertaking that requires time, expertise, and resources, often taking months to complete. Additionally, it often requires thousands or tens of thousands of hand-labeled images to provide the model with enough data to accurately make decisions. Gathering this data can take months and typically requires large teams of labelers to prepare it for use in machine learning (ML).

With Rekognition Custom Labels, we take care of the heavy lifting for you. Rekognition Custom Labels builds on the existing capabilities of Amazon Rekognition, which is already trained on tens of millions of images across many categories. Instead of thousands of images, you simply need to upload a small set of training images (typically a few hundred images or fewer) that are specific to your use case via our easy-to-use console. If your images are already labeled, Amazon Rekognition can begin training in just a few clicks. If not, you can label them directly within the Amazon Rekognition labeling interface, or use Amazon SageMaker Ground Truth to label them for you. After Amazon Rekognition begins training from your image set, it produces a custom image analysis model for you in just a few hours. Behind the scenes, Rekognition Custom Labels automatically loads and inspects the training data, selects the right ML algorithms, trains a model, and provides model performance metrics. You can then use your custom model via the Rekognition Custom Labels API and integrate it into your applications.

However, building a Rekognition Custom Labels model and hosting it for real-time predictions involves several steps: creating a project, creating the training and validation datasets, training the model, evaluating the model, and then creating an endpoint. After the model is deployed for inference, you might have to retrain the model when new data becomes available or if feedback is received from real-world inference. Automating the whole workflow can help reduce manual work.

In this post, we show how you can use AWS Step Functions to build and automate the workflow. Step Functions is a visual workflow service that helps developers use AWS services to build distributed applications, automate processes, orchestrate microservices, and create data and ML pipelines.

Solution overview

The Step Functions workflow is as follows:

  1. We first create an Amazon Rekognition project.
  2. In parallel, we create the training and validation datasets from existing datasets. We can use any of the following methods to create the existing datasets:
    1. Import a folder structure from Amazon Simple Storage Service (Amazon S3) with the folders representing the labels.
    2. Use a local computer.
    3. Use Ground Truth.
    4. Create a dataset using an existing dataset with the AWS SDK.
    5. Create a dataset with a manifest file with the AWS SDK.
  3. After the datasets are created, we train a Custom Labels model using the CreateProjectVersion API. This could take from minutes to hours to complete.
  4. After the model is trained, we evaluate the model using the F1 score output from the previous step. We use the F1 score as our evaluation metric because it provides a balance between precision and recall. You can also use precision or recall as your model evaluation metrics. For more information on custom label evaluation metrics, refer to Metrics for evaluating your model.
  5. If we are satisfied with the F1 score, we start the model so it can be used for predictions (see the boto3 sketch after the workflow diagram).

The following diagram illustrates the Step Functions workflow.
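
For illustration, the following is a minimal boto3 sketch of the calls behind steps 3 to 5 (training, evaluation, and starting the model). The project ARN, version name, output bucket, and F1 threshold are placeholder assumptions; in the deployed solution these calls run inside Step Functions states rather than a single script.

import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARN for an existing Rekognition Custom Labels project.
project_arn = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/1234567890123"

# Step 3: train a new model version with CreateProjectVersion.
response = rekognition.create_project_version(
    ProjectArn=project_arn,
    VersionName="v1",
    OutputConfig={"S3Bucket": "my-training-output-bucket", "S3KeyPrefix": "output/"},
)
project_version_arn = response["ProjectVersionArn"]

# Training is asynchronous and can take minutes to hours; wait for it to finish.
waiter = rekognition.get_waiter("project_version_training_completed")
waiter.wait(ProjectArn=project_arn, VersionNames=["v1"])

# Step 4: read the F1 score from the evaluation results.
described = rekognition.describe_project_versions(ProjectArn=project_arn, VersionNames=["v1"])
f1_score = described["ProjectVersionDescriptions"][0]["EvaluationResult"]["F1Score"]

# Step 5: if the score meets a (hypothetical) threshold, start the model for inference.
if f1_score >= 0.8:
    rekognition.start_project_version(ProjectVersionArn=project_version_arn, MinInferenceUnits=1)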

Prerequisites

Before deploying the workflow, we need to create the existing training and validation datasets. Complete the following steps:

  1. First, create an Amazon Rekognition project.
  2. Then, create the training and validation datasets (a minimal boto3 sketch of steps 1 and 2 follows this list).
  3. Finally, install the AWS SAM CLI.
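
If you prefer to script the first two prerequisites instead of using the console, the following is a minimal boto3 sketch; the project name, bucket, and manifest keys are placeholder assumptions.

import boto3

rekognition = boto3.client("rekognition")

# Create the Rekognition Custom Labels project (placeholder name).
project_arn = rekognition.create_project(ProjectName="my-custom-labels-project")["ProjectArn"]

# Create the training and validation (TEST) datasets from Ground Truth manifest files in S3.
for dataset_type, manifest_key in [
    ("TRAIN", "manifests/train.manifest"),  # hypothetical S3 keys
    ("TEST", "manifests/validation.manifest"),
]:
    rekognition.create_dataset(
        ProjectArn=project_arn,
        DatasetType=dataset_type,
        DatasetSource={
            "GroundTruthManifest": {
                "S3Object": {"Bucket": "my-dataset-bucket", "Name": manifest_key}
            }
        },
    )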

Deploy the workflow

To deploy the workflow, clone the GitHub repository:

git clone https://github.com/aws-samples/rekognition-customlabels-automation-with-stepfunctions.git
cd rekognition-customlabels-automation-with-stepfunctions
sam build
sam deploy --guided

These commands build, package, and deploy your application to AWS, following a series of prompts as explained in the repository.

Run the workflow

To test the workflow, navigate to the deployed workflow on the Step Functions console, then choose Start execution.

The workflow could take a few minutes to a few hours to complete. If the model passes the evaluation criteria, an endpoint for the model is created in Amazon Rekognition. If the model doesn’t pass the evaluation criteria or training fails, the workflow fails. You can check the status of the workflow on the Step Functions console. For more information, refer to Viewing and debugging executions on the Step Functions console.
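
You can also start the workflow programmatically. The following is a minimal boto3 sketch; the state machine ARN is a placeholder for the one created by the SAM deployment, and the input payload depends on how the workflow is parameterized.

import json

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN of the state machine created by the SAM deployment.
state_machine_arn = "arn:aws:states:us-east-1:123456789012:stateMachine:rekognition-custom-labels-workflow"

execution = sfn.start_execution(
    stateMachineArn=state_machine_arn,
    input=json.dumps({}),  # supply workflow input here if your state machine expects any
)
print(execution["executionArn"])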

Perform model predictions

To perform predictions against the model, you can call the Amazon Rekognition DetectCustomLabels API. To invoke this API, the caller needs to have the necessary AWS Identity and Access Management (IAM) permissions. For more details on performing predictions using this API, refer to Analyzing an image with a trained model.

However, if you need to expose the DetectCustomLabels API publicly, you can front the DetectCustomLabels API with Amazon API Gateway. API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. API Gateway acts as the front door for your DetectCustomLabels API, as shown in the following architecture diagram.

API Gateway forwards the user’s inference request to AWS Lambda. Lambda is a serverless, event-driven compute service that lets you run code for virtually any type of application or backend service without provisioning or managing servers. Lambda receives the API request and calls the Amazon Rekognition DetectCustomLabels API with the necessary IAM permissions. For more information on how to set up API Gateway with Lambda integration, refer to Set up Lambda proxy integrations in API Gateway.

The following is an example Lambda function code to call the DetectCustomLabels API:

import base64
import json
import os

import boto3

client = boto3.client('rekognition', region_name="us-east-1")
REKOGNITION_PROJECT_VERSION_ARN = os.getenv(
    'REKOGNITION_PROJECT_VERSION_ARN', None)


def lambda_handler(event, context):
    image = json.dumps(event['body'])

    # API Gateway base64 encodes the image sent in the request body, so decode it here.
    # The SDK base64 encodes the raw bytes again automatically when calling detect_custom_labels.
    base64_decoded_image = base64.b64decode(image)

    min_confidence = 85

    # Call DetectCustomLabels
    response = client.detect_custom_labels(Image={'Bytes': base64_decoded_image},
                                           MinConfidence=min_confidence,
                                           ProjectVersionArn=REKOGNITION_PROJECT_VERSION_ARN)

    response_body = json.loads(json.dumps(response))

    statusCode = response_body['ResponseMetadata']['HTTPStatusCode']
    predictions = {}
    predictions['Predictions'] = response_body['CustomLabels']

    return {
        "statusCode": statusCode,
        "body": json.dumps(predictions)
    }

Clean up

To delete the workflow, use the AWS SAM CLI:

sam delete --stack-name <your sam project name>

To delete the Rekognition Custom Labels model, you can either use the Amazon Rekognition console or the AWS SDK. For more information, refer to Deleting an Amazon Rekognition Custom Labels model.
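
If you use the AWS SDK, the following is a minimal boto3 sketch of the model cleanup; the ARN is a placeholder, and the model must be fully stopped before its version can be deleted.

import boto3

rekognition = boto3.client("rekognition")

# Placeholder ARN of the trained model version created by the workflow.
project_version_arn = "arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/v1/1234567890123"

# Stop the running model.
rekognition.stop_project_version(ProjectVersionArn=project_version_arn)

# In practice, poll describe_project_versions until the status is STOPPED before deleting.
rekognition.delete_project_version(ProjectVersionArn=project_version_arn)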

Conclusion

In this post, we walked through a Step Functions workflow to create a dataset and then train, evaluate, and use a Rekognition Custom Labels model. The workflow allows application developers and ML engineers to automate the custom label classification steps for any computer vision use case. The code for the workflow is open-sourced.

For more serverless learning resources, visit Serverless Land. To learn more about Rekognition Custom Labels, visit Amazon Rekognition Custom Labels.


About the Author

Veda Raman is a Senior Specialist Solutions Architect for machine learning based in Maryland. Veda works with customers to help them architect efficient, secure, and scalable machine learning applications. Veda is interested in helping customers leverage serverless technologies for machine learning.

Read More