Automate document processing with Amazon Bedrock Prompt Flows (preview)

Enterprises in industries like manufacturing, finance, and healthcare are inundated with a constant flow of documents—from financial reports and contracts to patient records and supply chain documents. Historically, processing and extracting insights from these unstructured data sources has been a manual, time-consuming, and error-prone task. However, the rise of intelligent document processing (IDP), which uses the power of artificial intelligence and machine learning (AI/ML) to automate the extraction, classification, and analysis of data from various document types is transforming the game. For manufacturers, this means streamlining processes like purchase order management, invoice processing, and supply chain documentation. Financial services firms can accelerate workflows around loan applications, account openings, and regulatory reporting. And in healthcare, IDP revolutionizes patient onboarding, claims processing, and medical record keeping.

By integrating IDP into their operations, organizations across these key industries experience transformative benefits: increased efficiency and productivity through the reduction of manual data entry, improved accuracy and compliance by reducing human errors, enhanced customer experiences due to faster document processing, greater scalability to handle growing volumes of documents, and lower operational costs associated with document management.

This post demonstrates how to build an IDP pipeline for automatically extracting and processing data from documents using Amazon Bedrock Prompt Flows, a fully managed service that enables you to build generative AI workflow using Amazon Bedrock and other services in an intuitive visual builder. Amazon Bedrock Prompt Flows allows you to quickly update your pipelines as your business changes, scaling your document processing workflows to help meet evolving demands.

Solution overview

To be scalable and cost-effective, this solution uses serverless technologies and managed services. In addition to Amazon Bedrock Prompt Flows, the solution uses the following services:

  • Amazon Textract – Automatically extracts printed text, handwriting, and data from scanned documents.
  • Amazon Simple Storage Service (Amazon S3) – Object storage built to retrieve any amount of data from anywhere.
  • Amazon Simple Notification Service (Amazon SNS) – A highly available, durable, secure, and fully managed publish-subscribe (pub/sub) messaging service to decouple microservices, distributed systems, and serverless applications.
  • AWS Lambda – A compute service that runs code in response to triggers such as changes in data, changes in application state, or user actions. Because services such as Amazon S3 and Amazon SNS can directly trigger an AWS Lambda function, you can build a variety of real-time serverless data-processing systems.
  • Amazon DynamoDB – A serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale.

Solution architecture

The proposed solution contains the following steps:

  1. Users upload a PDF for analysis to Amazon S3.
  2. The Amazon S3 upload triggers an AWS Lambda function execution.
  3. The function invokes Amazon Textract to extract text from the PDF in batch mode.
  4. Amazon Textract sends an SNS notification when the job is complete.
  5. An AWS Lambda function reads the Amazon Textract response and calls an Amazon Bedrock prompt flow to classify the document.
  6. Results of the classification are stored in Amazon S3 and sent to a destination AWS Lambda function.
  7. The destination AWS Lambda function calls an Amazon Bedrock prompt flow to extract and analyze data based on the document class provided.
  8. Results of the extraction and analysis are stored in Amazon S3.

This workflow is shown in the following diagram.

Architecture

In the following sections, we dive deep into how to build your IDP pipeline with Amazon Bedrock Prompt Flows.
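
To make the workflow concrete, the following is a minimal sketch of what the Lambda function behind steps 2–4 might look like. It is illustrative rather than the repository's actual code; the SNS_TOPIC_ARN and TEXTRACT_ROLE_ARN environment variables are placeholders you would configure on the function.

import os
import urllib.parse
import boto3

textract = boto3.client("textract")

def handler(event, context):
    # Triggered by the S3 upload (step 2); starts an asynchronous Textract job (step 3)
    # that publishes a completion notification to SNS (step 4).
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    response = textract.start_document_text_detection(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
        NotificationChannel={
            "SNSTopicArn": os.environ["SNS_TOPIC_ARN"],     # placeholder environment variable
            "RoleArn": os.environ["TEXTRACT_ROLE_ARN"],     # placeholder environment variable
        },
    )
    return {"JobId": response["JobId"]}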

Prerequisites

To complete the activities described in this post, ensure that you complete the following prerequisites in your local environment:

Implementation time and cost estimation

  • Time to complete: ~60 minutes
  • Cost to run 1,000 pages: under $25
  • Time to clean up: ~20 minutes
  • Learning level: Advanced (300)

Deploy the solution

To deploy the solution, follow these steps:

  1. Clone the GitHub repository.
  2. Use the shell script to build and deploy the solution by running the following commands from your project root directory:
chmod +x deploy.sh
./deploy.sh
  3. This deploys the AWS CloudFormation template in your AWS account.

Test the solution

Once the template is deployed successfully, follow these steps to test the solution:

  1. On the AWS CloudFormation console, select the stack that was deployed
  2. Select the Resources tab
  3. Locate the resources labeled SourceS3Bucket and DestinationS3Bucket, as shown in the following screenshot. Select the link to open the SourceS3Bucket in a new tab

CloudFormation S3 Resources

  4. Select Upload and then Add folder
  5. Under sample_files, select the folder customer123, then choose Upload

Alternatively, you can upload the folder using the following AWS CLI command from the root of the project:

aws s3 sync ./sample_files/customer123 s3://[SourceS3Bucket_NAME]/customer123

After a few minutes the uploaded files will be processed. To view the results, follow these steps:

  1. Open the DestinationS3Bucket
  2. Under customer123, you should see a folder for each processing job. Download and review the files locally using the console or with the following AWS CLI command:
aws s3 sync s3://[DestinationS3Bucket_NAME]/customer123 ./result_files/customer123

Inside the folder for customer123 you will see several subfolders, as shown in the following diagram:

customer123
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── FOR_REVIEW
│       ├── pages_0.txt
│       └── report.txt
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── URLA_1003
│       ├── pages_0.json
│       ├── pages_0.txt
│       └── report.txt
├── [Long Textract Job ID]
│   ├── classify_response.txt
│   ├── input_doc.txt
│   └── BANK_STATEMENT
│       ├── pages_0.json
│       ├── pages_0.txt
│       └── report.txt
└── [Long Textract Job ID]
    ├── classify_response.txt
    ├── input_doc.txt
    └── DRIVERS_LICENSE
        ├── pages_0.json
        ├── pages_0.txt
        └── report.txt

How it works

After the document text is extracted, it is sent to a classify prompt flow along with a list of classes, as shown in the following screenshot:

Classify Flow

The list of classes is generated in the AWS Lambda function by using the API to identify existing prompt flows that contain class definitions in their description. This approach allows us to expand the solution to new document types by adding a new prompt flow supporting the new document class, as shown in the following screenshot:

Prompt flows
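
The following sketch illustrates one way this discovery and classification could be implemented with boto3. The CLASS: description convention, node names, and input field names are assumptions for illustration, not the solution's actual implementation.

import boto3

agent = boto3.client("bedrock-agent")
runtime = boto3.client("bedrock-agent-runtime")

def list_document_classes():
    # Collect class definitions from prompt flow descriptions; the "CLASS:" prefix is a
    # hypothetical convention used here for illustration.
    classes, token = [], None
    while True:
        page = agent.list_flows(**({"nextToken": token} if token else {}))
        for flow in page.get("flowSummaries", []):
            description = flow.get("description") or ""
            if description.startswith("CLASS:"):
                classes.append(description[len("CLASS:"):].strip())
        token = page.get("nextToken")
        if not token:
            return classes

def classify_document(doc_text, classify_flow_id, classify_alias_id):
    # Send the extracted text and the class list to the classify prompt flow.
    response = runtime.invoke_flow(
        flowIdentifier=classify_flow_id,
        flowAliasIdentifier=classify_alias_id,
        inputs=[{
            "nodeName": "FlowInputNode",
            "nodeOutputName": "document",
            "content": {"document": {"doc_text": doc_text, "classes": list_document_classes()}},
        }],
    )
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            return event["flowOutputEvent"]["content"]["document"]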

For each document type, you can implement an extract and analyze flow appropriate to that document type. The following screenshot shows an example from the URLA_1003 flow. In this case, a prompt converts the text to a standardized JSON format, and a second prompt then analyzes that JSON document to generate a report for the processing agent.

URLA Flow

Expand the solution using Amazon Bedrock Prompt Flows

To adapt to new use cases without changing the underlying code, use Amazon Bedrock Prompt Flows as described in the following steps.

Create a new prompt

From the files you downloaded, look for a folder named FOR_REVIEW. This folder contains documents that were processed and did not fit into an existing class. Open report.txt and review the suggested document class and proposed JSON template.

  1. In the navigation pane in Amazon Bedrock, open Prompt management and select Create prompt, as shown in the following screenshot:

Create Prompt

  2. Name the new prompt IDP_PAYSTUB_JSON and then choose Create
  3. In the Prompt box, enter the following text. Replace COPY YOUR JSON HERE with the JSON template from your txt file
Analyze the provided paystub
<PAYSTUB>
{{doc_text}}
</PAYSTUB>

Provide a structured JSON object containing the following information:

[COPY YOUR JSON HERE]

The following screenshot demonstrates this step.

Prompt Builder

  4. Choose Select model and choose Anthropic Claude 3 Sonnet
  5. Save your changes by choosing Save draft
  6. To test your prompt, open the pages_[n].txt file in the FOR_REVIEW folder and copy the content into the doc_text input box. Choose Run and the model should return a response

The following screenshot demonstrates this step.

Prompt test

  7. When you are satisfied with the results, choose Create Version. Note the version number because you will need it in the next section

Create a prompt flow

Now we will create a prompt flow using the prompt you created in the previous section.

  1. In the navigation menu, choose Prompt flows and then choose Create prompt flow, as shown in the following screenshot:

Create flow

  2. Name the new flow IDP_PAYSTUB
  3. Choose Create and use a new service role and then choose Save

Next, create the flow using the following steps. When you are done, the flow should resemble the following screenshot.

Paystub flow

  1. Configure the Flow input node:
    1. Choose the Flow input node and select the Configure tab
    2. Select Object as the Type. This means that flow invocation will expect to receive a JSON object.
  2. Add the S3 Retrieval node:
    1. In the Prompt flow builder navigation pane, select the Nodes tab
    2. Drag an S3 Retrieval node into your flow in the center pane
    3. In the Prompt flow builder pane, select the Configure tab
    4. Enter get_doc_text as the Node name
    5. Expand Inputs and set the input expression for objectKey to $.data.doc_text_s3key
    6. Drag a connection from the output of the Flow input node to the objectKey input of this node
  3. Add the Prompt node:
    1. Drag a Prompt node into your flow in the center pane
    2. In the Prompt flow builder pane, select the Configure tab
    3. Enter map_to_json as the Node name
    4. Choose Use a prompt from your Prompt Management
    5. Select IDP_PAYSTUB_JSON from the dropdown
    6. Choose the version you noted previously
    7. Drag a connection from the output of the get_doc_text node to the doc_text input of this node
  4. Add the S3 Storage node:
    1. In the Prompt flow builder navigation pane, select the Nodes tab
    2. Drag an S3 Storage node into your flow in the center pane
    3. In the Prompt flow builder pane, select the Configure tab
    4. Enter save_json as the Node name
    5. Expand Inputs and set the input expression for objectKey to $.data.JSON_s3key
    6. Drag a connection from the output of the Flow input node to the objectKey input of this node
    7. Drag a connection from the output of the map_to_json node to the content input of this node
  5. Configure the Flow output node:
    1. Drag a connection from the output of the save_json node to the input of this node
  6. Choose Save to save your flow. Your flow should now be prepared for testing
    1. To test your flow, in the Test prompt flow pane on the right, enter the following JSON object. Choose Run and the flow should return a model response:
{
"doc_text_s3key": "[PATH TO YOUR TEXT FILE IN S3].txt",
"JSON_s3key": "[PATH TO YOUR TEXT FILE IN S3].json"
}
    2. When you are satisfied with the result, choose Save and exit

To get the path to your file, follow these steps:

  1. Navigate to FOR_REVIEW in S3 and choose the pages_[n].txt file
  2. Choose the Properties tab
  3. Copy the key path by selecting the copy icon to the left of the key value, as shown in the following screenshot. Be sure to replace .txt with .json in the second line of input as noted previously.

S3 object key

Publish a version and alias

  1. On the flow management screen, choose Publish version. A success banner appears at the top
  2. At the top of the screen, choose Create alias
  3. Enter latest for the Alias name
  4. Choose Use an existing version to associate this alias. From the dropdown menu, choose the version that you just published
  5. Select Create alias. A success banner appears at the top.
  6. Get the FlowId and AliasId to use in the step below
    1. Choose the Alias you just created
    2. From the ARN, copy the FlowId and AliasId

Prompt flow alias
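
If you prefer to script this lookup, the following snippet shows how the two IDs could be parsed from the alias ARN. The ARN layout shown is an assumption based on the flow/alias resource path, so verify it against the ARN you copied.

# Assumed alias ARN layout: arn:aws:bedrock:<region>:<account>:flow/<FlowId>/alias/<AliasId>
alias_arn = "arn:aws:bedrock:us-east-1:111122223333:flow/EXAMPLEFLOWID/alias/EXAMPLEALIASID"  # placeholder
resource = alias_arn.split(":")[-1]            # "flow/<FlowId>/alias/<AliasId>"
_, flow_id, _, alias_id = resource.split("/")
print(flow_id, alias_id)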

Add your new class to DynamoDB

  1. Open the AWS Management Console and navigate to the DynamoDB service.
  2. Select the table document-processing-bedrock-prompt-flows-IDP_CLASS_LIST
  3. Choose Actions then Create item
  4. Choose JSON view for entering the item data.
  5. Paste the following JSON into the editor:
{
    "class_name": {
        "S": "PAYSTUB"
    },
    "expected_inputs": {
        "S": "Should contain Gross Pay, Net Pay, Pay Date "
    },
    "flow_alias_id": {
        "S": "[Your flow Alias ID]"
    },
    "flow_id": {
        "S": "[Your flow ID]"
    },
    "flow_name": {
        "S": "[The name of your flow]"
    }
}
  6. Review the JSON to ensure all details are correct.
  7. Choose Create item to add the new class to your DynamoDB table.
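
If you prefer to script this step instead of using the console, a boto3 equivalent might look like the following. The flow ID and alias ID values remain the placeholders you copied in the previous section.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.put_item(
    TableName="document-processing-bedrock-prompt-flows-IDP_CLASS_LIST",
    Item={
        "class_name": {"S": "PAYSTUB"},
        "expected_inputs": {"S": "Should contain Gross Pay, Net Pay, Pay Date"},
        "flow_alias_id": {"S": "[Your flow Alias ID]"},
        "flow_id": {"S": "[Your flow ID]"},
        "flow_name": {"S": "IDP_PAYSTUB"},
    },
)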

Test by repeating the upload of the test file

Use the console to repeat the upload of the paystub.jpeg file from your customer123 folder into Amazon S3. Alternatively, you can run the following command from the command line:

aws s3 cp ./sample_files/customer123/paystub.jpeg s3://[INPUT_BUCKET_NAME]/customer123/

In a few minutes, check the report in the output location to see that you successfully added support for the new document type.

Clean up

Use these steps to delete the resources you created to avoid incurring charges on your AWS account:

  1. Empty the SourceS3Bucket and DestinationS3Bucket buckets including all versions
  2. Use the following shell script to delete the CloudFormation stack and test resources from your account:
chmod +x cleanup.sh
./cleanup.sh
  3. Return to the Expand the solution using Amazon Bedrock Prompt Flows section and follow these steps:
    1. In the Create a prompt flow section:
      1. Choose the IDP_PAYSTUB flow that you created and choose Delete
      2. Follow the instructions to permanently delete the flow
    2. In the Create a new prompt section:
      1. Choose the IDP_PAYSTUB_JSON prompt that you created and choose Delete
      2. Follow the instructions to permanently delete the prompt

Conclusion

This solution demonstrates how customers can use Amazon Bedrock Prompt Flows to deploy and expand a scalable, low-code IDP pipeline. By taking advantage of the flexibility of Amazon Bedrock Prompt Flows, organizations can rapidly implement and adapt their document processing workflows to help meet evolving business needs. The low-code nature of Amazon Bedrock Prompt Flows makes it possible for business users and developers alike to create, modify, and extend IDP pipelines without extensive programming knowledge. This significantly reduces the time and resources required to deploy new document processing capabilities or adjust existing ones.

By adopting this integrated IDP solution, businesses across industries can accelerate their digital transformation initiatives, improve operational efficiency, and enhance their ability to extract valuable insights from document-based processes, driving significant competitive advantages.

Review your current manual document processing workflows and identify where Amazon Bedrock Prompt Flows can help you automate them for your business.

For further exploration and learning, we recommend checking out the following resources:


About the Authors

Erik Cordsen is a Solutions Architect at AWS serving customers in Georgia. He is passionate about applying cloud technologies and ML to solve real life problems. When he is not designing cloud solutions, Erik enjoys travel, cooking, and cycling.

Vivek Mittal is a Solution Architect at Amazon Web Services. He is passionate about serverless and machine learning technologies. Vivek takes great joy in assisting customers with building innovative solutions on the AWS cloud.

Brijesh Pati is an Enterprise Solutions Architect at AWS. His primary focus is helping enterprise customers adopt cloud technologies for their workloads. He has a background in application development and enterprise architecture and has worked with customers from various industries such as sports, finance, energy, and professional services. His interests include serverless architectures and AI/ML.

Read More

Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch

This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker.

A multi-account strategy is essential not only for improving governance but also for enhancing security and control over the resources that support your organization’s business. This approach enables various teams within your organization to experiment, innovate, and integrate more rapidly while keeping the production environment secure and available for your customers. However, because multiple teams might use your ML platform in the cloud, monitoring large ML workloads across a scaling multi-account environment presents challenges in setting up and monitoring telemetry data that is scattered across multiple accounts. In this post, we dive into setting up observability in a multi-account environment with Amazon SageMaker.

Amazon SageMaker Model Monitor allows you to automatically monitor ML models in production, and alerts you when data and model quality issues appear. SageMaker Model Monitor emits per-feature metrics to Amazon CloudWatch, which you can use to set up dashboards and alerts. You can use cross-account observability in CloudWatch to search, analyze, and correlate cross-account telemetry data stored in CloudWatch such as metrics, logs, and traces from one centralized account. You can now set up a central observability AWS account and connect your other accounts as sources. Then you can search, audit, and analyze logs across your applications to drill down into operational issues in a matter of seconds. You can discover and visualize operational and model metrics from many accounts in a single place and create alarms that evaluate metrics belonging to other accounts.

AWS CloudTrail is also essential for maintaining security and compliance in your AWS environment by providing a comprehensive log of all API calls and actions taken across your AWS account, enabling you to track changes, monitor user activities, and detect suspicious behavior. This post also dives into how you can centralize CloudTrail logging so that you have visibility into user activities within all of your SageMaker environments.

Solution overview

Customers often struggle with monitoring their ML workloads across multiple AWS accounts, because each account manages its own metrics, resulting in data silos and limited visibility. ML models across different accounts need real-time monitoring for performance and drift detection, with key metrics like accuracy, CPU utilization, and AUC scores tracked to maintain model reliability.

To solve this, we implement a solution that uses SageMaker Model Monitor and CloudWatch cross-account observability. This approach enables centralized monitoring and governance, allowing your ML team to gain comprehensive insights into logs and performance metrics across all accounts. With this unified view, your team can effectively monitor and manage their ML workloads, improving operational efficiency.

Implementing the solution consists of the following steps:

  1. Deploy the model and set up SageMaker Model Monitor.
  2. Enable CloudWatch cross-account observability.
  3. Consolidate metrics across source accounts and build unified dashboards.
  4. Configure centralized logging to API calls across multiple accounts using CloudTrail.

The following architecture diagram showcases the centralized observability solution in a multi-account setup. We deploy ML models across two AWS environments, production and test, which serve as our source accounts. We use SageMaker Model Monitor to assess these models’ performance. Additionally, we enhance centralized management and oversight by using cross-account observability in CloudWatch to aggregate metrics from the ML workloads in these source accounts into the observability account.

Deploy the model and set up SageMaker Model Monitor

We deploy an XGBoost classifier model, trained on publicly available banking marketing data, to identify potential customers likely to subscribe to term deposits. This model is deployed in both production and test source accounts, where its real-time performance is continually validated against baseline metrics using SageMaker Model Monitor to detect deviations in model performance. Additionally, we use CloudWatch to centralize and share the data and performance metrics of these ML workloads in the observability account, providing a comprehensive view across different accounts. You can find the full source code for this post in the accompanying GitHub repo.

The first step is to deploy the model to a SageMaker endpoint with data capture enabled:

endpoint_name = f"BankMarketingTarget-endpoint-{datetime.utcnow():%Y-%m-%d-%H%M}"
print("EndpointName =", endpoint_name)

data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=s3_capture_upload_path,
)

model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)

For real-time model performance evaluation, it’s essential to establish a baseline. This baseline is created by invoking the endpoint with validation data. We use SageMaker Model Monitor to perform baseline analysis, compute performance metrics, and propose quality constraints for effective real-time performance evaluation.

Next, we define the model quality monitoring object and run the model quality monitoring baseline job. The model monitor automatically generates baseline statistics and constraints based on the provided validation data. The monitoring job evaluates the model’s predictions against ground truth labels to make sure the model maintains its performance over time.

Banking_Quality_Monitor = ModelQualityMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=1800,
    sagemaker_session=session,
)
job = Banking_Quality_Monitor.suggest_baseline(
    job_name=baseline_job_name,
    baseline_dataset=baseline_dataset_uri,
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=baseline_results_uri,
    problem_type="BinaryClassification",
    inference_attribute="prediction",
    probability_attribute="probability",
    ground_truth_attribute="label",
)
job.wait(logs=False)

In addition to the generated baseline, SageMaker Model Monitor requires two additional inputs: predictions from the deployed model endpoint and ground truth data provided by the model-consuming application. Because data capture is enabled on the endpoint, we first generate traffic to make sure prediction data is captured. When listing the data capture files stored, you should expect to see various files from different time periods, organized based on the hour in which the invocation occurred.

When viewing the contents of a single file, you will notice the following details. The inferenceId attribute is set as part of the invoke_endpoint call. When ingesting ground truth labels and merging them with predictions for performance metrics, SageMaker Model Monitor uses inferenceId, which is included in captured data records. It’s used to merge these captured records with ground truth records, making sure the inferenceId in both datasets matches. If inferenceId is absent, it uses the eventId from captured data to correlate with the ground truth record.

{
    "captureData": {
        "endpointInput": {
            "observedContentType": "text/csv",
            "mode": "INPUT",
            "data": "162,1,0.1,25,1.4,94.465,-41.8,4.961,0.2,0.3,0.4,0.5,0.6,0.7,0.8,1.1,0.9,0.10,0.11,0.12,0.13,0.14,0.15,1.2,0.16,0.17,0.18,0.19,0.20,1.3",
            "encoding": "CSV"
        },
        "endpointOutput": {
            "observedContentType": "text/csv; charset=utf-8",
            "mode": "OUTPUT",
            "data": "0.000508524535689503",
            "encoding": "CSV"
        }
    },
    "eventMetadata": {
        "eventId": "527cfbb1-d945-4de8-8155-a570894493ca",
        "inferenceId": "0",
        "inferenceTime": "2024-08-18T20:25:54Z"
    },
    "eventVersion": "0"
}

SageMaker Model Monitor ingests ground truth data collected periodically and merges it with prediction data to calculate performance metrics. This monitoring process uses baseline constraints from the initial setup to continuously assess the model’s performance. By enabling enable_cloudwatch_metrics=True, SageMaker Model Monitor uses CloudWatch to monitor the quality and performance of our ML models, thereby emitting these performance metrics to CloudWatch for comprehensive tracking.

from sagemaker.model_monitor import CronExpressionGenerator

response = Banking_Quality_Monitor.create_monitoring_schedule(
    monitor_schedule_name=Banking_monitor_schedule_name,
    endpoint_input=endpointInput,
    output_s3_uri=baseline_results_uri,
    problem_type="BinaryClassification",
    ground_truth_input=ground_truth_upload_path,
    # Constraints suggested by the baseline job run earlier with this monitor
    constraints=Banking_Quality_Monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)

Each time the model quality monitoring job runs, it begins with a merge job that combines two datasets: the inference data captured at the endpoint and the ground truth data provided by the application. This is followed by a monitoring job that assesses the data for insights into model performance using the baseline setup.

Waiting for execution to finish......................................................!
groundtruth-merge-202408182100-7460007b77e6223a3f739740 job status: Completed
groundtruth-merge-202408182100-7460007b77e6223a3f739740 job exit message, if any: None
groundtruth-merge-202408182100-7460007b77e6223a3f739740 job failure reason, if any: None
Waiting for execution to finish......................................................!
model-quality-monitoring-202408182100-7460007b77e6223a3f739740 job status: Completed
model-quality-monitoring-202408182100-7460007b77e6223a3f739740 job exit message, if any: CompletedWithViolations: Job completed successfully with 8 violations.
model-quality-monitoring-202408182100-7460007b77e6223a3f739740 job failure reason, if any: None
Execution status is: CompletedWithViolations
{'MonitoringScheduleName': 'BankMarketingTarget-monitoring-schedule-2024-08-18-2029', 'ScheduledTime': datetime.datetime(2024, 8, 18, 21, 0, tzinfo=tzlocal()), 'CreationTime': datetime.datetime(2024, 8, 18, 21, 2, 21, 198000, tzinfo=tzlocal()), 'LastModifiedTime': datetime.datetime(2024, 8, 18, 21, 12, 53, 253000, tzinfo=tzlocal()), 'MonitoringExecutionStatus': 'CompletedWithViolations', 'ProcessingJobArn': 'arn:aws:sagemaker:us-west-2:730335512115:processing-job/model-quality-monitoring-202408182100-7460007b77e6223a3f739740', 'EndpointName': 'BankMarketingTarget-endpoint-2024-08-18-1958'}
====STOP====
No completed executions to inspect further. Please wait till an execution completes or investigate previously reported failures

Check for deviations from the baseline constraints to effectively set appropriate thresholds in your monitoring process. As you can see in the following screenshot, various metrics such as AUC, accuracy, recall, and F2 score are closely monitored, each subject to specific threshold checks like LessThanThreshold or GreaterThanThreshold. By actively monitoring these metrics, you can detect significant deviations and make informed decisions promptly, making sure your ML models perform optimally within established parameters.
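
If you want to inspect these violations programmatically rather than in the console, each monitoring execution writes a constraint_violations.json file under the output S3 URI configured on the schedule. The following is a minimal sketch; the bucket and key prefix are placeholders for your monitoring output location.

import json
import boto3

s3 = boto3.client("s3")

obj = s3.get_object(
    Bucket="<monitoring-output-bucket>",                           # placeholder
    Key="<monitoring-output-prefix>/constraint_violations.json",   # placeholder
)
violations = json.loads(obj["Body"].read())["violations"]
for violation in violations:
    print(violation["metric_name"], violation["constraint_check_type"], violation["description"])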

Enable CloudWatch cross-account observability

With CloudWatch integrated into SageMaker Model Monitor to track the metrics of ML workloads running in the source accounts (production and test), the next step involves enabling CloudWatch cross-account observability. CloudWatch cross-account observability allows you to monitor and troubleshoot applications spanning multiple AWS accounts within an AWS Region. This feature enables seamless searching, visualization, and analysis of metrics, logs, traces, and Application Insights across linked accounts, eliminating account boundaries. You can use this feature to consolidate CloudWatch metrics from these source accounts into the observability account.

To achieve this centralized governance and monitoring, we establish two types of accounts:

  • Observability account – This central AWS account aggregates and interacts with ML workload metrics from the source accounts
  • Source accounts (production and test) – These individual AWS accounts share their ML workload metrics and logging resources with the central observability account, enabling centralized oversight and analysis

Configure the observability account

Complete the following steps to configure the observability account:

  1. On the CloudWatch console of the observability account, choose Settings in the navigation pane.
  2. In the Monitoring account configuration section, choose Configure.

  3. Select which telemetry data can be shared with the observability account.

  4. Under List source accounts, enter the source accounts that will share data with the observability account.

To link the source accounts, you can use account IDs, organization IDs, or organization paths. You can use an organization ID to include all accounts within the organization, or an organization path can target all accounts within a specific department or business unit. In this case, because we have two source accounts to link, we enter the account IDs of those two accounts.

  5. Choose Configure.

After the setup is complete, the message “Monitoring account enabled” appears in the CloudWatch settings.

Additionally, your source accounts are listed on the Configuration policy tab.

Link source accounts

Now that the observability account has been enabled with source accounts, you can link these source accounts within an AWS organization. You can choose from two methods:

  • For organizations using AWS CloudFormation, you can download a CloudFormation template and deploy it in a CloudFormation delegated administration account. This method facilitates the bulk addition of source accounts.
  • For linking individual accounts, two options are available:
    • Download a CloudFormation template that can be deployed directly within each source account.
    • Copy a provided URL, which simplifies the setup process using the AWS Management Console.

Complete the following steps to use the provided URL:

  1. Copy the URL and open it in a new browser window where you’re logged in as the source account.

  2. Configure the telemetry data you want to share. This can include logs, metrics, traces, Application Insights, or Internet Monitor.

During this process, you’ll notice that the Amazon Resource Name (ARN) of the observability account configuration is automatically filled in. This convenience is due to copying and pasting the URL provided in the earlier step. If, however, you choose not to use the URL, you can manually enter the ARN. Copy the ARN from the observability account settings and enter it into the designated field in the source account configuration page.

  3. Define the label that identifies your source accounts. This label is crucial for organizing and distinguishing your accounts within the monitoring system.
  4. Choose Link to finalize the connection between your source accounts and the observability account.

  5. Repeat these steps for both source accounts.

You should see those accounts listed on the Linked source accounts tab within the observability account CloudWatch settings configuration.
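
If you prefer to script the cross-account setup instead of using the console, the CloudWatch Observability Access Manager (OAM) APIs can create the sink in the observability account and the links in the source accounts. The following sketch is illustrative; the sink name, account IDs, and resource types are placeholders, and each create_link call must run with credentials for the corresponding source account.

import json
import boto3

# In the observability (monitoring) account: create the sink and allow source accounts to link to it.
oam_monitoring = boto3.client("oam")
sink = oam_monitoring.create_sink(Name="ml-observability-sink")
oam_monitoring.put_sink_policy(
    SinkIdentifier=sink["Arn"],
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": ["111111111111", "222222222222"]},   # placeholder source account IDs
            "Action": ["oam:CreateLink", "oam:UpdateLink"],
            "Resource": "*",
            "Condition": {"ForAllValues:StringEquals": {
                "oam:ResourceTypes": ["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup"]}},
        }],
    }),
)

# In each source account (production and test): create the link to the sink.
oam_source = boto3.client("oam")   # run with that source account's credentials
oam_source.create_link(
    LabelTemplate="$AccountName",
    ResourceTypes=["AWS::CloudWatch::Metric", "AWS::Logs::LogGroup"],
    SinkIdentifier=sink["Arn"],
)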

Consolidate metrics across source accounts and build unified dashboards

In the observability account, you can access and monitor detailed metrics related to your ML workloads and endpoints deployed across the source accounts. This centralized view allows you to track a variety of metrics, including those from SageMaker endpoints and processing jobs, all within a single interface.

The following screenshot displays CloudWatch model metrics for endpoints in your source accounts. Because you linked the production and test source accounts using the label as the account name, CloudWatch categorizes metrics by account label, effectively distinguishing between the production and test environments. It organizes key details into columns, including account labels, metric names, endpoints, and performance metrics like accuracy and AUC, all captured by scheduled monitoring jobs. These metrics offer valuable insights into the performance of your models across these environments.

The observability account allows you to monitor key metrics of ML workloads and endpoints. The following screenshots display CPU utilization metrics associated with the BankMarketingTarget model and BankMarketing model endpoints you deployed in the source accounts. This view provides detailed insights into critical performance indicators, including:

  • CPU utilization
  • Memory utilization
  • Disk utilization

Furthermore, you can create dashboards that offer a consolidated view of key metrics related to your ML workloads running across the linked source accounts. These centralized dashboards are pivotal for overseeing the performance, reliability, and quality of your ML models on a large scale.

Let’s look at a consolidated view of the ML workload metrics running in our production and test source accounts. This dashboard provides us with immediate access to critical information:

  • AUC scores – Indicating model performance, giving insights into the trade-off between true positives and false positives
  • Accuracy rates – Showing prediction correctness, which helps in assessing the overall reliability of the model
  • F2 scores – Offering a balance between precision and recall, particularly valuable when false negatives are more critical to minimize
  • Total number of violations – Highlighting any breaches in predefined thresholds or constraints, making sure the model adheres to expected behavior
  • CPU usage levels – Helping you manage resource allocation by monitoring the processing power utilized by the ML workloads
  • Disk utilization percentages – Providing efficient storage management by keeping track of how much disk space is being consumed

The following screenshots show CloudWatch dashboards for the models deployed in our production and test source accounts. We track metrics for accuracy, AUC, CPU and disk utilization, and violation counts, providing insights into model performance and resource usage.

You can configure CloudWatch alarms to proactively monitor and receive notifications on critical ML workload metrics from your source accounts. The following screenshot shows an alarm configured to track the accuracy of our bank marketing prediction model in the production account. This alarm is set to trigger if the model’s accuracy falls below a specified threshold, so any significant degradation in performance is promptly detected and addressed. By using such alarms, you can maintain high standards of model performance and quickly respond to potential issues within your ML infrastructure.
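
As a sketch, an equivalent alarm could be created programmatically from the observability account by referencing the source account ID in the metric query. The namespace, metric name, dimensions, threshold, and account ID below are placeholders; copy the exact values from the model quality metrics you see in CloudWatch.

import boto3

cloudwatch = boto3.client("cloudwatch")   # in the observability account

cloudwatch.put_metric_alarm(
    AlarmName="production-bank-marketing-accuracy",
    EvaluationPeriods=1,
    ComparisonOperator="LessThanThreshold",
    Threshold=0.85,                                     # placeholder threshold
    TreatMissingData="notBreaching",
    Metrics=[{
        "Id": "accuracy",
        "ReturnData": True,
        "AccountId": "111111111111",                    # placeholder production account ID
        "MetricStat": {
            "Metric": {
                "Namespace": "aws/sagemaker/Endpoints/model-metrics",   # verify in your account
                "MetricName": "accuracy",
                "Dimensions": [
                    {"Name": "Endpoint", "Value": "<your-endpoint-name>"},
                    {"Name": "MonitoringSchedule", "Value": "<your-monitoring-schedule-name>"},
                ],
            },
            "Period": 3600,
            "Stat": "Average",
        },
    }],
)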

You can also create a comprehensive CloudWatch dashboard for monitoring various aspects of Amazon SageMaker Studio, including the number of domains, apps, and user profiles across different AWS accounts. The following screenshot illustrates a dashboard that centralizes key metrics from the production and test source accounts.

Configure centralized logging of API calls across multiple accounts with CloudTrail

If AWS Control Tower has been configured to automatically create an organization-wide trail, each account will send a copy of its CloudTrail event trail to a centralized Amazon Simple Storage Service (Amazon S3) bucket. This bucket is typically created in the log archive account and is configured with limited access, where it serves as a single source of truth for security personnel. If you want to set up a separate account to allow the ML admin team to have access, you can configure replication from the log archive account. You can create the destination bucket in the observability account.

After you create the bucket for replicated logs, you can configure Amazon S3 replication by defining the source and destination bucket, and attaching the required AWS Identity and Access Management (IAM) permissions. Then you update the destination bucket policy to allow replication.

Complete the following steps:

  1. Create an S3 bucket in the observability account.
  2. Log in to the log archive account.
  3. On the Amazon S3 console, open the Control Tower logs bucket, which will have the format aws-controltower-logs-{ACCOUNT-ID}-{REGION}.

You should see an existing key that corresponds to your organization ID. The trail logs are stored under /{ORG-ID}/AWSLogs/{ACCOUNT-ID}/CloudTrail/{REGION}/YYYY/MM/DD.

  4. On the Management tab, choose Create replication rule.
  5. For Replication rule name, enter a name, such as replicate-ml-workloads-to-observability.
  6. Under Source bucket, select Limit the scope of the rule using one or more filters, and enter a path that corresponds to the account you want to enable querying against.

  7. Select Specify a bucket in another account and enter the observability account ID and the bucket name.
  8. Select Change object ownership to destination bucket owner.
  9. For IAM role, choose Create new role.

After you set the cross-account replication, the logs being stored in the S3 bucket in the log archive account will be replicated in the observability account. You can now use Amazon Athena to query and analyze the data being stored in Amazon S3. If you don’t have Control Tower configured, you have to manually configure CloudTrail in each account to write to the S3 bucket in the centralized observability account for analysis. If your organization has more stringent security and compliance requirements, you can configure replication of just the SageMaker logs from the log archive account to the bucket in the observability account by integrating Amazon S3 Event Notifications with AWS Lambda functions.

The following is a sample query run against the logs stored in the observability account bucket and the associated result in Athena:

SELECT useridentity.arn, useridentity.sessioncontext.sourceidentity, requestparameters
FROM observability_replicated_logs
WHERE eventname = 'CreateEndpoint'
AND eventsource = 'sagemaker.amazonaws.com'
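
You can also run the same query programmatically with the Athena API, which is convenient if the ML admin team wants to automate these audits. The database name and results location below are placeholders.

import time
import boto3

athena = boto3.client("athena")

query = """
SELECT useridentity.arn, useridentity.sessioncontext.sourceidentity, requestparameters
FROM observability_replicated_logs
WHERE eventname = 'CreateEndpoint'
  AND eventsource = 'sagemaker.amazonaws.com'
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "<your-cloudtrail-database>"},          # placeholder
    ResultConfiguration={"OutputLocation": "s3://<athena-results-bucket>/"},   # placeholder
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])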

Conclusion

Centralized observability in a multi-account setup empowers organizations to manage ML workloads at scale. By integrating SageMaker Model Monitor with cross-account observability in CloudWatch, you can build a robust framework for real-time monitoring and governance across multiple environments.

This architecture not only provides continuous oversight of model performance, but also significantly enhances your ability to quickly identify and resolve potential issues, thereby improving governance and security throughout your ML ecosystem.

In this post, we outlined the essential steps for implementing centralized observability within your AWS environment, from setting up SageMaker Model Monitor to using cross-account features in CloudWatch. We also demonstrated centralizing CloudTrail logs by replicating them from the log archive account and querying them using Athena to get insights into user activity within SageMaker environments across the organization.

As you implement this solution, remember that achieving optimal observability is an ongoing process. Continually refining and expanding your monitoring capabilities is crucial to making sure your ML models remain reliable, efficient, and aligned with business objectives. As ML practices evolve, blending cutting-edge technology with sound governance principles is key. Run the code yourself using the following notebook or try out the observability module in the following workshop.


About the Authors

Abhishek Doppalapudi is a Solutions Architect at Amazon Web Services (AWS), where he assists startups in building and scaling their products using AWS services. Currently, he is focused on helping AWS customers adopt Generative AI solutions. In his free time, Abhishek enjoys playing soccer, watching Premier League matches, and reading.

Venu Kanamatareddy is a Startup Solutions Architect at AWS. He brings 16 years of extensive IT experience working with both Fortune 100 companies and startups. Currently, Venu is helping guide and assist Machine Learning and Artificial Intelligence-based startups to innovate, scale, and succeed.

Vivek Gangasani is a Senior GenAI Specialist Solutions Architect at AWS. He helps emerging GenAI companies build innovative solutions using AWS services and accelerated compute. Currently, he is focused on developing strategies for fine-tuning and optimizing the inference performance of Large Language Models. In his free time, Vivek enjoys hiking, watching movies and trying different cuisines.

Ram Vittal is a Principal ML Solutions Architect at AWS. He has over 3 decades of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure, scalable, reliable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he rides motorcycle and walks with his three-year old sheep-a-doodle!

Read More

Import data from Google Cloud Platform BigQuery for no-code machine learning with Amazon SageMaker Canvas

In the modern, cloud-centric business landscape, data is often scattered across numerous clouds and on-site systems. This fragmentation can complicate efforts by organizations to consolidate and analyze data for their machine learning (ML) initiatives.

This post presents an architectural approach to extract data from different cloud environments, such as Google Cloud Platform (GCP) BigQuery, without the need for data movement. This minimizes the complexity and overhead associated with moving data between cloud environments, enabling organizations to access and utilize their disparate data assets for ML projects.

We highlight the process of using Amazon Athena Federated Query to extract data from GCP BigQuery, using Amazon SageMaker Data Wrangler to perform data preparation, and then using the prepared data to build ML models within Amazon SageMaker Canvas, a no-code ML interface.

SageMaker Canvas allows business analysts to access and import data from over 50 sources, prepare data using natural language and over 300 built-in transforms, build and train highly accurate models, generate predictions, and deploy models to production without requiring coding or extensive ML experience.

Solution overview

The solution outlines two main steps:

  • Set up Amazon Athena for federated queries from GCP BigQuery, which enables running live queries in GCP BigQuery directly from Athena
  • Import the data into SageMaker Canvas from BigQuery using Athena as an intermediate

After the data is imported into SageMaker Canvas, you can use the no-code interface to build ML models and generate predictions based on the imported data.

You can use SageMaker Canvas to build the initial data preparation routine and generate accurate predictions without writing code. However, as your ML needs evolve or require more advanced customization, you may want to transition from a no-code environment to a code-first approach. The integration between SageMaker Canvas and Amazon SageMaker Studio allows you to operationalize the data preparation routine for production-scale deployments. For more details, refer to Seamlessly transition between no-code and code-first machine learning with Amazon SageMaker Canvas and Amazon SageMaker Studio.

The overall architecture, as seen below, demonstrates how to use AWS services to seamlessly access and integrate data from a GCP BigQuery data warehouse into SageMaker Canvas for building and deploying ML models.

Solution Architecture Diagram

The workflow includes the following steps:

  1. Within the SageMaker Canvas interface, the user composes a SQL query to run against the GCP BigQuery data warehouse. SageMaker Canvas relays this query to Athena, which acts as an intermediary service, facilitating the communication between SageMaker Canvas and BigQuery.
  2. Athena uses the Athena Google BigQuery connector, which uses a pre-built AWS Lambda function to enable Athena federated query capabilities. This Lambda function retrieves the necessary BigQuery credentials (service account private key) from AWS Secrets Manager for authentication purposes.
  3. After authentication, the Lambda function uses the retrieved credentials to query BigQuery and obtain the desired result set. It parses this result set and sends it back to Athena.
  4. Athena returns the queried data from BigQuery to SageMaker Canvas, where you can use it for ML model training and development purposes within the no-code interface.

This solution offers the following benefits:

  • Seamless integration – SageMaker Canvas empowers you to integrate and use data from various sources, including cloud data warehouses like BigQuery, directly within its no-code ML environment. This integration eliminates the need for additional data movement or complex integrations, enabling you to focus on building and deploying ML models without the overhead of data engineering tasks.
  • Secure access – The use of Secrets Manager makes sure BigQuery credentials are securely stored and accessed, enhancing the overall security of the solution.
  • Scalability – The serverless nature of the Lambda function and the ability in Athena to handle large datasets make this solution scalable and able to accommodate growing data volumes. Additionally, you can use multiple queries to partition the data to source in parallel.

In the next sections, we dive deeper into the technical implementation details and walk through a step-by-step demonstration of this solution.

Dataset

The steps outlined in this post provide an example of how to import data into SageMaker Canvas for no-code ML. In this example, we demonstrate how to import data through Athena from GCP BigQuery.

For our dataset, we use a synthetic dataset from a telecommunications mobile phone carrier. This sample dataset contains 5,000 records, where each record uses 21 attributes to describe the customer profile. The Churn column in the dataset indicates whether the customer left service (true/false). This Churn attribute is the target variable that the ML model should aim to predict.

The following screenshot shows an example of the dataset on the BigQuery console.

Example Dataset in BigQuery Console

Prerequisites

Complete the following prerequisite steps:

  1. Create a service account in GCP and a service account key.
  2. Download the private key JSON file.
  3. Store the JSON file in Secrets Manager (a scripted alternative is shown after this list):
    1. On the Secrets Manager console, choose Secrets in the navigation pane, then choose Store a new secret.
    2. For Secret type, select Other type of secret.
    3. Copy the contents of the JSON file and enter it under Key/value pairs on the Plaintext tab.

AWS Secret Manager Setup

  4. If you don’t have a SageMaker domain already created, create it along with the user profile. For instructions, see Quick setup to Amazon SageMaker.
  5. Make sure the user profile has permission to invoke Athena by confirming that the AWS Identity and Access Management (IAM) role has glue:GetDatabase and athena:GetDataCatalog permission on the resource. See the following example:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "glue:GetDatabase",
                    "athena:GetDataCatalog"
                ],
                "Resource": [
                    "arn:aws:glue:*:<AWS account id>:catalog",
                    "arn:aws:glue:*:<AWS account id>:database/*",
                    "arn:aws:athena:*:<AWS account id>:datacatalog/*"
                ]
            }
        ]
    }
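
As an alternative to the console steps in item 3 above, you can store the service account key with a short script. The secret name and local file name below are placeholders; whichever name you choose must match the SecretNamePrefix you configure on the Athena connector later.

import boto3

secretsmanager = boto3.client("secretsmanager")

with open("bigquery-service-account-key.json") as f:        # placeholder local file name
    secretsmanager.create_secret(
        Name="bigquery-federation-credentials",             # placeholder secret name
        SecretString=f.read(),
    )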

Register the Athena data source connector

Complete the following steps to set up the Athena data source connector:

  1. On the Athena console, choose Data sources in the navigation pane.
  2. Choose Create data source.
  3. On the Choose a data source page, search for and select Google BigQuery, then choose Next.

Select BigQuery as Datasource on Amazon Athena

  4. On the Enter data source details page, provide the following information:
    1. For Data source name, enter a name.
    2. For Description, enter an optional description.
    3. For Lambda function, choose Create Lambda function to configure the connection.

Provide Data Source Details

  5. Under Application settings, enter the following details:
    1. For SpillBucket, enter the name of the bucket where the function can spill data.
    2. For GCPProjectID, enter the project ID within GCP.
    3. For LambdaFunctionName, enter the name of the Lambda function that you’re creating.
    4. For SecretNamePrefix, enter the secret name stored in Secrets Manager that contains the GCP credentials.

Application settings for data source connector

Application settings for data source connector

  6. Choose Deploy.

You’re returned to the Enter data source details page.

  7. In the Connection details section, choose the refresh icon under Lambda function.
  8. Choose the Lambda function you just created. The ARN of the Lambda function is displayed.
  9. Optionally, for Tags, add key-value pairs to associate with this data source.

For more information about tags, see Tagging Athena resources.

Lambda function connection details

  10. Choose Next.
  11. On the Review and create page, review the data source details, then choose Create data source.

The Data source details section of the page for your data source shows information about your new connector. You can now use the connector in your Athena queries. For information about using data connectors in queries, see Running federated queries.

To query from Athena, launch the Athena SQL editor and choose the data source you created. You should be able to run live queries against the BigQuery database.

Athena Query Editor

Connect to SageMaker Canvas with Athena as a data source

To import data from Athena, complete the following steps:

  1. On the SageMaker Canvas console, choose Data Wrangler in the navigation pane.
  2. Choose Import data and prepare.
  3. Select the Tabular option
  4. Choose Athena as the data source.

SageMaker Data Wrangler in SageMaker Canvas allows you to prepare, featurize, and analyze your data. You can integrate a SageMaker Data Wrangler data preparation flow into your ML workflows to simplify and streamline data preprocessing and feature engineering using little to no coding.

  5. Choose an Athena table in the left pane from AwsDataCatalog and drag and drop the table into the right pane.

SageMaker Data Wrangler Select Athena Table

  6. Choose Edit in SQL and enter the following SQL query:
SELECT
    state,
    account_length,
    area_code,
    phone,
    intl_plan,
    vmail_plan,
    vmail_message,
    day_mins,
    day_calls,
    day_charge,
    eve_mins,
    eve_calls,
    eve_charge,
    night_mins,
    night_calls,
    night_charge,
    intl_mins,
    intl_calls,
    intl_charge,
    custserv_calls,
    churn
FROM "bigquery"."athenabigquery"."customer_churn"
ORDER BY random()
LIMIT 50;

In the preceding query, bigquery is the data source name created in Athena, athenabigquery is the database name, and customer_churn is the table name.

  7. Choose Run SQL to preview the dataset, and when you’re satisfied with the data, choose Import.

Run SQL to preview the dataset

When working with ML, it’s crucial to randomize or shuffle the dataset. This step is essential because you may have access to millions or billions of data points, but you don’t necessarily need to use the entire dataset for training the model. Instead, you can limit the data to a smaller subset specifically for training purposes. After you’ve shuffled and prepared the data, you can begin the iterative process of data preparation, feature evaluation, model training, and ultimately hosting the trained model.

  8. You can process or export your data to a location that is suitable for your ML workflows. For example, you can export the transformed data as a SageMaker Canvas dataset and create an ML model from it.
  9. After you export your data, choose Create model to create an ML model from your data.

Create Model Option

The data is imported into SageMaker Canvas as a dataset from the specific table in Athena. You can now use this dataset to create a model.

Train a model

After your data is imported, it shows up on the Datasets page in SageMaker Canvas. At this stage, you can build a model. To do so, complete the following steps:

  1. Select your dataset and choose Create a model.

Create model from SageMaker Datasets menu option

  2. For Model name, enter your model name (for this post, my_first_model).

SageMaker Canvas enables you to create models for predictive analysis, image analysis, and text analysis.

  3. Because we want to categorize customers, select Predictive analysis for Problem type.
  4. Choose Create.

Create predictive analysis model

On the Build page, you can see statistics about your dataset, such as the percentage of missing values and mode of the data.

  5. For Target column, choose a column that you want to predict (for this post, churn).

SageMaker Canvas offers two types of models that can generate predictions. Quick build prioritizes speed over accuracy, providing a model in 2–15 minutes. Standard build prioritizes accuracy over speed, providing a model in 30 minutes–2 hours.

  6. For this example, choose Quick build.

Model quick build

After the model is trained, you can analyze the model accuracy.

The Overview tab shows us the column impact, or the estimated importance of each column in predicting the target column. In this example, the Night_calls column has the most significant impact in predicting if a customer will churn. This information can help the marketing team gain insights that lead to taking actions to reduce customer churn. For example, we can see that both low and high CustServ_Calls increase the likelihood of churn. The marketing team can take actions to help prevent customer churn based on these learnings. Examples include creating a detailed FAQ on websites to reduce customer service calls, and running education campaigns with customers on the FAQ that can keep engagement up.

Model outcome & results

Generate predictions

On the Predict tab, you can generate both batch predictions and single predictions. Complete the following steps to generate a batch prediction:

  1. Download the following sample inference dataset for generating predictions.
  2. To test batch predictions, choose Batch prediction.

SageMaker Canvas allows you to generate batch predictions either manually or automatically on a schedule. To learn how to automate batch predictions on a schedule, refer to Manage automations.

  3. For this post, choose Manual.
  4. Upload the file you downloaded.
  5. Choose Generate predictions.

After a few seconds, the prediction is complete, and you can choose View to see the prediction.

View generated predictions

Optionally, choose Download to download a CSV file containing the full output. SageMaker Canvas will return a prediction for each row of data and the probability of the prediction being correct.

Download CSV Output

Optionally, you can deploy your models to an endpoint to make predictions. For more information, refer to Deploy your models to an endpoint.

Clean up

To avoid future charges, log out of SageMaker Canvas.

Conclusion

In this post, we showcased a solution to extract the data from BigQuery using Athena federated queries and a sample dataset. We then used the extracted data to build an ML model using SageMaker Canvas to predict customers at risk of churning—without writing code. SageMaker Canvas enables business analysts to build and deploy ML models effortlessly through its no-code interface, democratizing ML across the organization. This enables you to harness the power of advanced analytics and ML to drive business insights and innovation, without the need for specialized technical skills.

For more information, see Query any data source with Amazon Athena’s new federated query and Import data from over 40 data sources for no-code machine learning with Amazon SageMaker Canvas. If you’re new to SageMaker Canvas, refer to Build, Share, Deploy: how business analysts and data scientists achieve faster time-to-market using no-code ML and Amazon SageMaker Canvas.


About the authors

Amit Gautam is an AWS senior solutions architect supporting enterprise customers in the UK on their cloud journeys, providing them with architectural advice and guidance that helps them achieve their business outcomes.

Sujata Singh is an AWS senior solutions architect supporting enterprise customers in the UK on their cloud journeys, providing them with architectural advice and guidance that helps them achieve their business outcomes.

Read More

Customized model monitoring for near real-time batch inference with Amazon SageMaker

Customized model monitoring for near real-time batch inference with Amazon SageMaker

Real-world applications vary in inference requirements for their artificial intelligence and machine learning (AI/ML) solutions to optimize performance and reduce costs. Examples include financial systems processing transaction data streams, recommendation engines processing user activity data, and computer vision models processing video frames. In these scenarios, customized model monitoring for near real-time batch inference with Amazon SageMaker is essential, making sure the quality of predictions is continuously monitored and any deviations are promptly detected.

In this post, we present a framework to customize the use of Amazon SageMaker Model Monitor for handling multi-payload inference requests for near real-time inference scenarios. SageMaker Model Monitor monitors the quality of SageMaker ML models in production. Early and proactive detection of deviations in model quality enables you to take corrective actions, such as retraining models, auditing upstream systems, or fixing quality issues without having to monitor models manually or build additional tooling. SageMaker Model Monitor provides monitoring capabilities for data quality, model quality, bias drift in a model’s predictions, and drift in feature attribution. SageMaker Model Monitor adapts well to common AI/ML use cases and provides advanced capabilities given edge case requirements such as monitoring custom metrics, handling ground truth data, or processing inference data capture.

You can deploy your ML model to SageMaker hosting services and get a SageMaker endpoint for real-time inference. Your client applications invoke this endpoint to get inferences from the model. To reduce the number of invocations and meet custom business objectives, AI/ML developers can customize inference code to send multiple inference records in one payload to the endpoint for near real-time model predictions. Rather than using a SageMaker Model Monitoring schedule with native configurations, a SageMaker Model Monitor Bring Your Own Container (BYOC) approach meets these custom requirements. Although this advanced BYOC topic can appear overwhelming to AI/ML developers, with the right framework, there is opportunity to accelerate SageMaker Model Monitor BYOC development for customized model monitoring requirements.

In this post, we provide a BYOC framework with SageMaker Model Monitor to enable customized payload handling (such as multi-payload requests) from SageMaker endpoint data capture, use ground truth data, and output custom business metrics for model quality.

Overview of solution

SageMaker Model Monitor uses a SageMaker pre-built image using Spark Deequ, which accelerates the usage of model monitoring. Using this pre-built image occasionally becomes problematic when customization is required. For example, the pre-built image requires one inference payload per inference invocation (request to a SageMaker endpoint). However, if you’re sending multiple payloads in one invocation to reduce the number of invocations and setting up model monitoring with SageMaker Model Monitor, then you will need to explore additional capabilities within SageMaker Model Monitor.

A preprocessor script is a capability of SageMaker Model Monitor to preprocess SageMaker endpoint data capture before creating metrics for model quality. However, even with a preprocessor script, you still face a mismatch in the designed behavior of SageMaker Model Monitor, which expects one inference payload per request.

Given these requirements, we create the BYOC framework shown in the following diagram. In this example, we demonstrate setting up a SageMaker Model Monitor job for monitoring model quality.

The workflow includes the following steps:

  1. Before and after training an AI/ML model, an AI/ML developer creates baseline and validation data that is used downstream for monitoring model quality. For example, users can save the accuracy score of a model, or create custom metrics, to validate model quality.
  2. An AI/ML developer creates a SageMaker endpoint including custom inference scripts. Data capture must be enabled for the SageMaker endpoint to save real-time inference data to Amazon Simple Storage Service (Amazon S3) and support downstream SageMaker Model Monitor.
  3. A user or application sends a request including multiple inference payloads. If you have a large volume of inference records, SageMaker batch transform may be a suitable option for your use case.
  4. The SageMaker endpoint (which includes the custom inference code to preprocess the multi-payload request) passes the inference data to the ML model, postprocesses the predictions, and sends a response to the user or application. The information pertaining to the request and response is stored in Amazon S3.
  5. Independent of calling the SageMaker endpoint, the user or application generates ground truth for the predictions returned by the SageMaker endpoint.
  6. A custom image (BYOC) is pushed to Amazon Elastic Container Registry (Amazon ECR) that contains code to perform the following actions:
    • Read input and output contracts required for SageMaker Model Monitor.
    • Read ground truth data.
    • Optionally, read any baseline constraint or validation data (such as accuracy score threshold).
    • Process data capture stored in Amazon S3 from the SageMaker endpoint.
    • Compare real-time data with ground truth and create model quality metrics.
    • Publish metrics to Amazon CloudWatch Logs and output a model quality report.
  7. The AI/ML developer creates a SageMaker Model Monitor schedule that references the custom image (BYOC) as its image URI.

This post uses code provided in the following GitHub repo to demonstrate the solution. The process includes the following steps:

  1. Train a multi-classification XGBoost model using the public forest coverage dataset.
  2. Create an inference script for the SageMaker endpoint for custom inference logic.
  3. Create a SageMaker endpoint with data capture enabled.
  4. Create a constraint file that contains metrics used to determine if model quality alerts should be generated.
  5. Create a custom Docker image for SageMaker Model Monitor by using the SageMaker Docker Build CLI and push it to Amazon ECR.
  6. Create a SageMaker Model Monitor schedule with the BYOC image.
  7. View the custom model quality report generated by the SageMaker Model Monitor job.

Prerequisites

To follow along with this walkthrough, make sure you have the following prerequisites:

Train the model

In the SageMaker Studio environment, launch a SageMaker training job to train a multi-classification model and output model artifacts to Amazon S3:


from sagemaker.xgboost.estimator import XGBoost
from sagemaker.estimator import Estimator

hyperparameters = {
    "max_depth": 5,
    "eta": 0.36,
    "gamma": 2.88,
    "min_child_weight": 9.89,
    "subsample": 0.77,
    "objective": "multi:softprob",
    "num_class": 7,
    "num_round": 50
}

xgb_estimator = XGBoost(
    entry_point="./src/train.py",
    hyperparameters=hyperparameters,
    role=role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    framework_version="1.5-1",
    output_path=f's3://{bucket}/{prefix_name}/models'
)

xgb_estimator.fit(
    {
        "train": train_data_path,
        "validation": validation_data_path
    },
    wait=True,
    logs=True
)

Create inference code

Before you deploy the SageMaker endpoint, create an inference script (inference.py) that contains a function to preprocess the request with multiple payloads, invoke the model, and postprocess results.

For output_fn, a payload index is created for each inference record found in the request. This enables you to merge ground truth records with data capture within the SageMaker Model Monitor job.

See the following code:

import json

import numpy as np
import pandas as pd
import xgboost as xgb

# worker and encoders come with the SageMaker inference toolkit that is
# typically available inside the SageMaker XGBoost serving container
from sagemaker_containers.beta.framework import encoders, worker


def input_fn(input_data, content_type):
    """Take request data and de-serializes the data into an object for prediction.
        When an InvokeEndpoint operation is made against an Endpoint running SageMaker model server,
        the model server receives two pieces of information:
            - The request Content-Type, for example "application/json"
            - The request data, which is at most 5 MB (5 * 1024 * 1024 bytes) in size.
    Args:
        input_data (obj): the request data.
        content_type (str): the request Content-Type.
    Returns:
        (obj): data ready for prediction. For XGBoost, this defaults to DMatrix.
    """
    
    if content_type == "application/json":
        request_json = json.loads(input_data)
        prediction_df = pd.DataFrame.from_dict(request_json)
        return xgb.DMatrix(prediction_df)
    else:
        raise ValueError


def predict_fn(input_data, model):
    """A predict_fn for XGBooost Framework. Calls a model on data deserialized in input_fn.
    Args:
        input_data: input data (DMatrix) for prediction deserialized by input_fn
        model: XGBoost model loaded in memory by model_fn
    Returns: a prediction
    """
    output = model.predict(input_data, validate_features=True)
    return output


def output_fn(prediction, accept):
    """Function responsible to serialize the prediction for the response.
    Args:
        prediction (obj): prediction returned by predict_fn .
        accept (str): accept content-type expected by the client.
    Returns: JSON output
    """
    
    if accept == "application/json":
        prediction_labels = np.argmax(prediction, axis=1)
        prediction_scores = np.max(prediction, axis=1)
        output_returns = [
            {
                "payload_index": int(index), 
                "label": int(label), 
                "score": float(score)} for label, score, index in zip(
                prediction_labels, prediction_scores, range(len(prediction_labels))
            )
        ]
        return worker.Response(encoders.encode(output_returns, accept), mimetype=accept)
    
    else:
        raise ValueError

Deploy the SageMaker endpoint

Now that you have created the inference script, you can create the SageMaker endpoint:


from sagemaker.model_monitor import DataCaptureConfig

predictor = xgb_estimator.deploy(
    instance_type="ml.m5.large",
    initial_instance_count=1,
    wait=True,
    data_capture_config=DataCaptureConfig(
        enable_capture=True,
        sampling_percentage=100,
        destination_s3_uri=f"s3://{bucket}/{prefix_name}/model-monitor/data-capture"
    ),
    source_dir="./src",
    entry_point="inference.py"
)

Create constraints for model quality monitoring

In model quality monitoring, you need to compare your metric generated from ground truth and data capture with a pre-specified threshold. In this example, we use the accuracy value of the trained model on the test set as a threshold. If the newly computed accuracy metric (generated using ground truth and data capture) is lower than this threshold, a violation report will be generated and the metrics will be published to CloudWatch.

See the following code:

constraints_dict = {
    "accuracy":{
        "threshold": accuracy_value
    }
}
    

# Serialize the constraints to JSON
json_object = json.dumps(constraints_dict, indent=4)

# Write the constraints to constraints.json
with open("constraints.json", "w") as outfile:
    outfile.write(json_object)

This constraints.json file is written to Amazon S3 and serves as an input to the downstream SageMaker Model Monitor processing job.

Publish the BYOC image to Amazon ECR

Create a script named model_quality_monitoring.py to perform the following functions:

  • Read environment variables and any arguments passed to the SageMaker Model Monitor job
  • Read SageMaker endpoint data capture and constraint metadata configured with the SageMaker Model Monitor job
  • Read ground truth data from Amazon S3 using the AWS SDK for pandas
  • Create accuracy metrics with data capture and ground truth
  • Create metrics and violation reports given constraint violations
  • Publish metrics to CloudWatch if violations are present
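
The following is a minimal sketch of the core model quality check such a script might contain. It is not the full script from the repository; the constraints input path, the CloudWatch namespace, and the merged_df DataFrame (data capture joined with ground truth) are assumptions for illustration:

import json

import boto3
from sklearn.metrics import accuracy_score

# Load the accuracy threshold saved earlier in constraints.json
# (the exact input path is an assumption for this sketch)
with open("/opt/ml/processing/baseline/constraints/constraints.json") as f:
    threshold = json.load(f)["accuracy"]["threshold"]

# merged_df is assumed to hold data capture joined with ground truth,
# with predicted "label" and "groundTruthLabel" columns
live_accuracy = accuracy_score(merged_df["groundTruthLabel"], merged_df["label"])

violations = []
if live_accuracy < threshold:
    violations.append({
        "metric_name": "accuracy",
        "constraint_check_type": "LessThanThreshold",
        "description": f"accuracy {live_accuracy:.3f} is below the threshold {threshold:.3f}",
    })
    # Publish the metric to CloudWatch so alarms can be configured on it
    boto3.client("cloudwatch").put_metric_data(
        Namespace="ModelMonitor/CustomMetrics",  # example namespace, not prescriptive
        MetricData=[{"MetricName": "accuracy", "Value": live_accuracy}],
    )

# Write the model quality report to the processing output location,
# which the monitoring schedule maps to Amazon S3
with open("/opt/ml/processing/output/constraint_violations.json", "w") as f:
    json.dump({"violations": violations}, f, indent=4)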

This script serves as the entry point for the SageMaker Model Monitor job. With a custom image, the entry point script needs to be specified in the Docker image, as shown in the following code. This way, when the SageMaker Model Monitor job initiates, the specified script is run. The sm-mm-mqm-byoc:1.0 image URI is passed to the image_uri argument when you define the SageMaker Model Monitor job downstream.

FROM 683313688378.dkr.ecr.us-east-1.amazonaws.com/sagemaker-scikit-learn:1.2-1-cpu-py3

RUN python3 -m pip install awswrangler

ENV PYTHONUNBUFFERED=TRUE

ADD ./src/model_quality_monitoring.py /

ENTRYPOINT ["python3", "/model_quality_monitoring.py"]

The custom BYOC image is pushed to Amazon ECR using the SageMaker Docker Build CLI:

sm-docker build . --file ./docker/Dockerfile --repository sm-mm-mqm-byoc:1.0

Create a SageMaker Model Monitor schedule

Next, you use the Amazon SageMaker Python SDK to create a model monitoring schedule. You can define the BYOC ECR image created in the previous section as the image_uri parameter.

You can customize the environment variables and arguments passed to the SageMaker Processing job when SageMaker Model Monitor runs the model quality monitoring job. In this example, the ground truth Amazon S3 URI path is passed as an environment variable and is used within the SageMaker Processing job:


from sagemaker.model_monitor import ModelMonitor

sm_mm_mqm = ModelMonitor(
    role=role, 
    image_uri=f"{account_id}.dkr.ecr.us-east-1.amazonaws.com/sm-mm-mqm-byoc:1.0", 
    instance_count=1, 
    instance_type='ml.m5.xlarge', 
    base_job_name="sm-mm-mqm-byoc",
    sagemaker_session=sess,
    env={
        "ground_truth_s3_uri_path": f"s3://{bucket}/{prefix_name}/model-monitor/mqm/ground_truth/{predictor.endpoint_name}"
    }
)

Before you create the schedule, specify the endpoint name, the Amazon S3 URI output location you want to send violation reports to, the statistics and constraints metadata files (if applicable), and any custom arguments you want to pass to your entry script within your BYOC SageMaker Processing job. In this example, the argument --create-violation-tests is passed, which creates a mock violation for demonstration purposes. SageMaker Model Monitor accepts the rest of the parameters and translates them into environment variables, which you can use within your custom monitoring job.

from sagemaker.model_monitor import CronExpressionGenerator, MonitoringOutput

sm_mm_mqm.create_monitoring_schedule(
    endpoint_input=predictor.endpoint_name,
    output=MonitoringOutput(
        source="/opt/ml/processing/output",
        destination=f"s3://{bucket}/{prefix_name}/model-monitor/mqm/reports"
    ),
    statistics=f"s3://{bucket}/{prefix_name}/model-monitor/mqm/baseline-data/statistics.json",
    constraints=f"s3://{bucket}/{prefix_name}/model-monitor/mqm/baseline-data/constraints.json",
    monitor_schedule_name="sm-mm-byoc-batch-inf-schedule",
    schedule_cron_expression=CronExpressionGenerator().hourly(),
    arguments=[
        "--create-violation-tests"
    ]
)

Review the entry point script model_quality_monitoring.py to better understand how to use custom arguments and environment variables provided by the SageMaker Model Monitor job.

Observe the SageMaker Model Monitor job output

Now that the SageMaker Model Monitor resources are created, you can invoke the SageMaker endpoint.

In this example, we send a request that includes a list of two payloads for which we want to collect predictions:

sm_runtime = boto3.client("sagemaker-runtime")

response = sm_runtime.invoke_endpoint(
    EndpointName=predictor.endpoint_name,
    ContentType="application/json",
    Accept="application/json",
    Body=test_records,
    InferenceId="0"
)

InferenceId is passed as an argument to the invoke_endpoint method. This ID is used downstream when merging the ground truth data to the real-time SageMaker endpoint data capture. In this example, we want to collect ground truth with the following structure.

InferenceId    payload_index    groundTruthLabel
0              0                1
0              1                0

This makes it simpler when merging the ground truth data with real-time data within the SageMaker Model Monitor custom job.
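
As an illustration, the merge inside the custom monitoring job could look like the following sketch, assuming the ground truth was written to Amazon S3 as JSON Lines and the data capture has already been flattened into one record per payload (predictions_df is an assumed name):

import awswrangler as wr
import pandas as pd

# Ground truth: one record per (InferenceId, payload_index) pair, read with the
# AWS SDK for pandas from the path passed to the job as an environment variable
ground_truth_df = wr.s3.read_json(path=ground_truth_s3_uri_path, lines=True)

# predictions_df is assumed to hold the flattened data capture, including the
# InferenceId of each request and the payload_index emitted by output_fn
merged_df = pd.merge(
    predictions_df,
    ground_truth_df,
    on=["InferenceId", "payload_index"],
    how="inner",
)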

Because we set the cron schedule for the SageMaker Model Monitor job to run hourly, we can view the results at the end of the hour. In SageMaker Studio Classic, by navigating to the SageMaker endpoint details page, you can choose the Monitoring job history tab to view status reports of the SageMaker Model Monitor job.


If an issue is found, you can choose the monitoring job name to review the report.

In this example, the custom model monitoring metric created in the BYOC flagged an accuracy score violation of -1 (this was done purposely for demonstration with the argument --create-violation-tests).

This gives you the ability to monitor model quality violations for your custom SageMaker Model Monitor job within the SageMaker Studio console. If you want to invoke CloudWatch alarms based on published CloudWatch metrics, you must create these CloudWatch metrics within your BYOC job. You can review how this is done within the model_quality_monitoring.py script. For automated model monitoring alerts, we recommend creating an Amazon Simple Notification Service (Amazon SNS) topic that email user groups can subscribe to, so they are notified when a given CloudWatch metric alarm fires.
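
For example, a one-time setup along the following lines could wire a CloudWatch alarm to an existing SNS topic. The namespace and metric name must match what your BYOC job publishes; the names and values used here are illustrative only:

import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="sm-mm-byoc-accuracy-violation",  # illustrative name
    Namespace="ModelMonitor/CustomMetrics",     # must match what the BYOC job publishes
    MetricName="accuracy",
    Statistic="Minimum",
    Period=3600,                                # hourly, matching the monitoring schedule
    EvaluationPeriods=1,
    Threshold=accuracy_value,                   # the baseline accuracy threshold
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[sns_topic_arn],               # SNS topic with email subscribers
)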

Clean up

To avoid incurring future charges, delete all resources related to the SageMaker Model Monitor schedule by completing the following steps:

  1. Delete data capture and any ground truth data:
    ! aws s3 rm s3://{bucket}/{prefix_name}/model-monitor/data-capture/{predictor.endpoint_name} --recursive
    ! aws s3 rm s3://{bucket}/{prefix_name}/model-monitor/mqm/ground_truth/{predictor.endpoint_name} --recursive

  2. Delete the monitoring schedule:
    sm_mm_mqm.delete_monitoring_schedule()

  3. Delete the SageMaker model and SageMaker endpoint:
    predictor.delete_model()
    predictor.delete_endpoint()

Conclusion

Custom business or technical requirements for a SageMaker endpoint frequently have an impact on downstream efforts in model monitoring. In this post, we provided a framework that enables you to customize SageMaker Model Monitor jobs (in this case, for monitoring model quality) to handle the use case of passing multiple inference payloads to a SageMaker endpoint.

Explore the provided GitHub repository to implement this customized model monitoring framework with SageMaker Model Monitor. You can use this framework as a starting point to monitor your custom metrics or handle other unique requirements for model quality monitoring in your AI/ML applications.


About the Authors

Joe King is a Sr. Data Scientist at AWS, bringing a breadth of data science, ML engineering, MLOps, and AI/ML architecting to help businesses create scalable solutions on AWS.

Ajay Raghunathan is a Machine Learning Engineer at AWS. His current work focuses on architecting and implementing ML solutions at scale. He is a technology enthusiast and a builder with a core area of interest in AI/ML, data analytics, serverless, and DevOps. Outside of work, he enjoys spending time with family, traveling, and playing football.

Raju Patil is a Sr. Data Scientist with AWS Professional Services. He architects, builds, and deploys AI/ML solutions to help AWS customers across different verticals overcome business challenges in a variety of AI/ML use cases.

Read More

How Planview built a scalable AI Assistant for portfolio and project management using Amazon Bedrock

How Planview built a scalable AI Assistant for portfolio and project management using Amazon Bedrock

This post is co-written with Lee Rehwinkel from Planview.

Businesses today face numerous challenges in managing intricate projects and programs, deriving valuable insights from massive data volumes, and making timely decisions. These hurdles frequently lead to productivity bottlenecks for program managers and executives, hindering their ability to drive organizational success efficiently.

Planview, a leading provider of connected work management solutions, embarked on an ambitious plan in 2023 to revolutionize how 3 million global users interact with their project management applications. To realize this vision, Planview developed an AI assistant called Planview Copilot, using a multi-agent system powered by Amazon Bedrock.

Developing this multi-agent system posed several challenges:

  • Reliably routing tasks to appropriate AI agents
  • Accessing data from various sources and formats
  • Interacting with multiple application APIs
  • Enabling the self-serve creation of new AI skills by different product teams

To overcome these challenges, Planview developed a multi-agent architecture built using Amazon Bedrock. Amazon Bedrock is a fully managed service that provides API access to foundation models (FMs) from Amazon and other leading AI startups. This allows developers to choose the FM that is best suited for their use case. This approach is both architecturally and organizationally scalable, enabling Planview to rapidly develop and deploy new AI skills to meet the evolving needs of their customers.

This post focuses primarily on the first challenge: routing tasks and managing multiple agents in a generative AI architecture. We explore Planview’s approach to this challenge during the development of Planview Copilot, sharing insights into the design decisions that provide efficient and reliable task routing.

We describe customized home-grown agents in this post because this project was implemented before Amazon Bedrock Agents was generally available. However, Amazon Bedrock Agents is now the recommended solution for organizations looking to use AI-powered agents in their operations. Amazon Bedrock Agents can retain memory across interactions, offering more personalized and seamless user experiences. You can benefit from improved recommendations and recall of prior context where required, enjoying a more cohesive and efficient interaction with the agent. We share the learnings from our solution to help you understand how to use AWS technology to build solutions that meet your goals.

Solution overview

Planview’s multi-agent architecture consists of multiple generative AI components collaborating as a single system. At its core, an orchestrator is responsible for routing questions to various agents, collecting the learned information, and providing users with a synthesized response. The orchestrator is managed by a central development team, and the agents are managed by each application team.

The orchestrator comprises two main components called the router and responder, which are powered by a large language model (LLM). The router uses AI to intelligently route user questions to various application agents with specialized capabilities. The agents can be categorized into three main types:

  • Help agent – Uses Retrieval Augmented Generation (RAG) to provide application help
  • Data agent – Dynamically accesses and analyzes customer data
  • Action agent – Runs actions within the application on the user’s behalf

After the agents have processed the questions and provided their responses, the responder, also powered by an LLM, synthesizes the learned information and formulates a coherent response to the user. This architecture allows for seamless collaboration between the centralized orchestrator and the specialized agents, which provides users with accurate and comprehensive answers to their questions. The following diagram illustrates the end-to-end workflow.

End-to-end workflow showing responder and router components

Technical overview

Planview used key AWS services to build its multi-agent architecture. The central Copilot service, powered by Amazon Elastic Kubernetes Service (Amazon EKS), is responsible for coordinating activities among the various services. Its responsibilities include:

  • Managing user session chat history using Amazon Relational Database Service (Amazon RDS)
  • Coordinating traffic between the router, application agents, and responder
  • Handling logging, monitoring, and collecting user-submitted feedback

The router and responder are AWS Lambda functions that interact with Amazon Bedrock. The router considers the user’s question and chat history from the central Copilot service, and the responder considers the user’s question, chat history, and responses from each agent.
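
As an illustration only (this is not Planview's production code), a router Lambda function could call a Claude model on Amazon Bedrock along these lines, assuming the event carries the assembled system prompt and the user's question:

import json

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")


def lambda_handler(event, context):
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 1024,
            "system": event["system_prompt_router"],  # router prompt with chat history and tools
            "messages": [{"role": "user", "content": event["question"]}],
        }),
    )
    # Return the model's XML-tagged routing decision to the orchestrator
    completion = json.loads(response["body"].read())["content"][0]["text"]
    return {"routerOutput": completion}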

Application teams manage their agents using Lambda functions that interact with Amazon Bedrock. For improved visibility, evaluation, and monitoring, Planview has adopted a centralized prompt repository service to store LLM prompts.

Agents can interact with applications using various methods depending on the use case and data availability:

  • Existing application APIs – Agents can communicate with applications through their existing API endpoints
  • Amazon Athena or traditional SQL data stores – Agents can retrieve data from Amazon Athena or other SQL-based data stores to provide relevant information
  • Amazon Neptune for graph data – Agents can access graph data stored in Amazon Neptune to support complex dependency analysis
  • Amazon OpenSearch Service for document RAG – Agents can use Amazon OpenSearch Service to perform RAG on documents

The following diagram illustrates the generative AI assistant architecture on AWS.

AWS services and data flow in Generative AI chatbot

Router and responder sample prompts

The router and responder components work together to process user queries and generate appropriate responses. The following prompts provide illustrative router and responder prompt templates. Additional prompt engineering would be required to improve reliability for a production implementation.

First, the available tools are described, including their purpose and sample questions that can be asked of each tool. The example questions help guide the natural language interactions between the orchestrator and the available agents, as represented by tools.

tools = '''
<tool>
<toolName>applicationHelp</toolName>
<toolDescription>
Use this tool to answer application help related questions.
Example questions:
How do I reset my password?
How do I add a new user?
How do I create a task?
</toolDescription>
</tool>
<tool>
<toolName>dataQuery</toolName>
<toolDescription>
Use this tool to answer questions using application data.
Example questions:
Which tasks are assigned to me?
How many tasks are due next week?
Which task is most at risk?
</toolDescription>
</tool>
'''

Next, the router prompt outlines the guidelines for the agent to either respond directly to user queries or request information through specific tools before formulating a response:

system_prompt_router = f'''
<role>
Your job is to decide if you need additional information to fully answer the User's 
questions.
You achieve your goal by choosing either 'respond' or 'callTool'.
You have access to your chat history in <chatHistory></chatHistory> tags.
You also have a list of available tools to assist you in <tools></tools> tags.
</role>
<chatHistory>
{chatHistory}
</chatHistory>
<tools>
{tools}
</tools>
<rules>
- If the chat history contains sufficient information to answer the User's questions, 
choose the 'respond' action.
- To gather more information before responding, choose the 'callTool' action.
- You may only choose from the tools in the <tools></tools> tags.
- If no tool can assist with the question, choose the 'respond' action.
- Place your chosen action within <action></action> tags.
- When you choose the 'callTool' action, provide the <toolName> and the <toolQuestion> you
would like to ask.
- Your <toolQuestion> should be verbose and avoid using pronouns.
- Start by providing your step-by-step thinking in <thinking></thinking> tags.
- Then you will give your answer in <answer></answer> tags.
- Your answer should follow the format of one of these three examples:
When choosing the 'respond' action, your answer should follow the below example EXACTLY:
<example>
<answer>
<action>respond</action>
</answer>
</example>
When choosing the 'callTool' action for a single Tool:
<example>
<answer>
<action>callTool</action>
<tools>
<tool>
<toolName>applicationHelp</toolName>
<toolQuestion>How do I reset my password?</toolQuestion>
</tool>
</tools>
</answer>
</example>
You can also call multiple Tools using this format:
<example>
<answer>
<action>callTool</action>
<tools>
<tool>
<toolName>dataQuery</toolName>
<toolQuestion>How many tasks are assigned to me?</toolQuestion>
</tool>
<tool>
<toolName>applicationHelp</toolName>
<toolQuestion>How do I add a new task?</toolQuestion>
</tool>
</tools>
</answer>
</example>
</rules>
'''

The following is a sample response from the router component that initiates the dataQuery tool to retrieve and analyze task assignments for each user:

<thinking>
To determine who has the most tasks assigned, I will need to query the application data. The "dataQuery" tool seems most appropriate for this question.
</thinking>

<answer>
<action>callTool</action>
<tools>
    <tool>
        <toolName>dataQuery</toolName>
        <toolQuestion>Which user has the most tasks currently assigned to them?</toolQuestion>
    </tool>
</tools>
</answer>
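
The orchestrator then parses this XML-tagged answer to decide which agents to call. A minimal parsing sketch (again illustrative, not Planview's implementation) might look like the following:

import re


def parse_router_answer(answer_text: str):
    """Extract the chosen action and any requested tool calls from the router output."""
    action = re.search(r"<action>(.*?)</action>", answer_text, re.DOTALL).group(1).strip()
    tool_calls = [
        {"toolName": name.strip(), "toolQuestion": question.strip()}
        for name, question in re.findall(
            r"<toolName>(.*?)</toolName>\s*<toolQuestion>(.*?)</toolQuestion>",
            answer_text,
            re.DOTALL,
        )
    ]
    return action, tool_calls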

The following is a sample response from the responder component that uses the dataQuery tool to fetch information about the user’s assigned tasks. It reports that the user has five tasks assigned to them.

<thinking>
Based on the chat history, I previously called the dataQuery tool to ask "How many tasks are currently assigned to the user?". The tool responded that the user has 5 tasks assigned to them.
</thinking>

<answer>
According to the data I queried previously, you have 5 tasks assigned to you.
</answer>

Model evaluation and selection

Evaluating and monitoring generative AI model performance is crucial in any AI system. Planview’s multi-agent architecture enables assessment at various component levels, providing comprehensive quality control despite the system’s complexity. Planview evaluates components at three levels:

  • Prompts – Assessing LLM prompts for effectiveness and accuracy
  • AI agents – Evaluating complete prompt chains to maintain optimal task handling and response relevance
  • AI system – Testing user-facing interactions to verify seamless integration of all components

The following figure illustrates the evaluation framework for prompts and scoring.

Evaluation framework for prompts scoring

To conduct these evaluations, Planview uses a set of carefully crafted test questions that cover typical user queries and edge cases. These evaluations are performed during the development phase and continue in production to track the quality of responses over time. Currently, human evaluators play a crucial role in scoring responses. To aid in the evaluation, Planview has developed an internal evaluation tool to store the library of questions and track the responses over time.

To assess each component and determine the most suitable Amazon Bedrock model for a given task, Planview established the following prioritized evaluation criteria:

  • Quality of response – Assuring accuracy, relevance, and helpfulness of system responses
  • Time of response – Minimizing latency between user queries and system responses
  • Scale – Making sure the system can scale to thousands of concurrent users
  • Cost of response – Optimizing operational costs, including AWS services and generative AI models, to maintain economic viability

Based on these criteria and the current use case, Planview selected Anthropic’s Claude 3 Sonnet on Amazon Bedrock for the router and responder components.

Results and impact

Over the past year, Planview Copilot’s performance has significantly improved through the implementation of a multi-agent architecture, development of a robust evaluation framework, and adoption of the latest FMs available through Amazon Bedrock. Planview saw the following results between the first generation of Planview Copilot developed mid-2023 and the latest version:

  • Accuracy – Human-evaluated accuracy has improved from 50% answer acceptance to now exceeding 95%
  • Response time – Average response times have been reduced from over 1 minute to 20 seconds
  • Load testing – The AI assistant has successfully passed load tests, where 1,000 questions were submitted simultaneously with no noticeable impact on response time or quality
  • Cost-efficiency – The cost per customer interaction has been slashed to one tenth of the initial expense
  • Time-to-market – New agent development and deployment time has been reduced from months to weeks

Conclusion

In this post, we explored how Planview was able to develop a generative AI assistant to address complex work management processes by adopting the following strategies:

  • Modular development – Planview built a multi-agent architecture with a centralized orchestrator. The solution enables efficient task handling and system scalability, while allowing different product teams to rapidly develop and deploy new AI skills through specialized agents.
  • Evaluation framework – Planview implemented a robust evaluation process at multiple levels, which was crucial for maintaining and improving performance.
  • Amazon Bedrock integration – Planview used Amazon Bedrock to innovate faster with broad model choice and access to various FMs, allowing for flexible model selection based on specific task requirements.

Planview is migrating to Amazon Bedrock Agents, which enables the integration of intelligent autonomous agents within their application ecosystem. Amazon Bedrock Agents automate processes by orchestrating interactions between foundation models, data sources, applications, and user conversations.

As next steps, you can explore Planview’s AI assistant feature built on Amazon Bedrock and stay updated with new Amazon Bedrock features and releases to advance your AI journey on AWS.


About Authors

Sunil Ramachandra is a Senior Solutions Architect enabling hyper-growth Independent Software Vendors (ISVs) to innovate and accelerate on AWS. He partners with customers to build highly scalable and resilient cloud architectures. When not collaborating with customers, Sunil enjoys spending time with family, running, meditating, and watching movies on Prime Video.

Benedict Augustine is a thought leader in Generative AI and Machine Learning, serving as a Senior Specialist at AWS. He advises customer CxOs on AI strategy, to build long-term visions while delivering immediate ROI. As VP of Machine Learning, Benedict spent the last decade building seven AI-first SaaS products, now used by Fortune 100 companies, driving significant business impact. His work has earned him 5 patents.

Lee Rehwinkel is a Principal Data Scientist at Planview with 20 years of experience in incorporating AI & ML into Enterprise software. He holds advanced degrees from both Carnegie Mellon University and Columbia University. Lee spearheads Planview’s R&D efforts on AI capabilities within Planview Copilot. Outside of work, he enjoys rowing on Austin’s Lady Bird Lake.

Read More

Super charge your LLMs with RAG at scale using AWS Glue for Apache Spark

Super charge your LLMs with RAG at scale using AWS Glue for Apache Spark

Large language models (LLMs) are very large deep-learning models that are pre-trained on vast amounts of data. LLMs are incredibly flexible. One model can perform completely different tasks such as answering questions, summarizing documents, translating languages, and completing sentences. LLMs have the potential to revolutionize content creation and the way people use search engines and virtual assistants.

Retrieval Augmented Generation (RAG) is the process of optimizing the output of an LLM, so it references an authoritative knowledge base outside of its training data sources before generating a response. While LLMs are trained on vast volumes of data and use billions of parameters to generate original output, RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base—without having to retrain the LLMs. RAG is a fast and cost-effective approach to improve LLM output so that it remains relevant, accurate, and useful in a specific context.

RAG introduces an information retrieval component that uses the user input to first pull information from a new data source. This new data from outside of the LLM’s original training data set is called external data. The data might exist in various formats such as files, database records, or long-form text. An AI technique called embedding language models converts this external data into numerical representations and stores it in a vector database. This process creates a knowledge library that generative AI models can understand.

RAG introduces additional data engineering requirements:

  • Scalable retrieval indexes must ingest massive text corpora covering requisite knowledge domains.
  • Data must be preprocessed to enable semantic search during inference. This includes normalization, vectorization, and index optimization.
  • These indexes continuously accumulate documents. Data pipelines must seamlessly integrate new data at scale.
  • Diverse data amplifies the need for customizable cleaning and transformation logic to handle the quirks of different sources.

In this post, we will explore building a reusable RAG data pipeline on LangChain—an open source framework for building applications based on LLMs—and integrating it with AWS Glue and Amazon OpenSearch Serverless. The end solution is a reference architecture for scalable RAG indexing and deployment. We provide sample notebooks covering ingestion, transformation, vectorization, and index management, enabling teams to consume disparate data into high-performing RAG applications.

Data preprocessing for RAG

Data pre-processing is crucial for responsible retrieval from your external data with RAG. Clean, high-quality data leads to more accurate results with RAG, while privacy and ethics considerations necessitate careful data filtering. This lays the foundation for LLMs with RAG to reach their full potential in downstream applications.

To facilitate effective retrieval from external data, a common practice is to first clean up and sanitize the documents. You can use Amazon Comprehend or AWS Glue sensitive data detection capability to identify sensitive data and then use Spark to clean up and sanitize the data. The next step is to split the documents into manageable chunks. The chunks are then converted to embeddings and written to a vector index, while maintaining a mapping to the original document. This process is shown in the figure that follows. These embeddings are used to determine semantic similarity between queries and text from the data sources

Solution overview

In this solution, we use LangChain integrated with AWS Glue for Apache Spark and Amazon OpenSearch Serverless. To make this solution scalable and customizable, we use Apache Spark’s distributed capabilities and PySpark’s flexible scripting capabilities. We use OpenSearch Serverless as a sample vector store and use the Llama 3.1 model.

The benefits of this solution are:

  • You can flexibly achieve data cleaning, sanitizing, and data quality management in addition to chunking and embedding.
  • You can build and manage an incremental data pipeline to update embeddings on Vectorstore at scale.
  • You can choose a wide variety of embedding models.
  • You can choose a wide variety of data sources including databases, data warehouses, and SaaS applications supported in AWS Glue.

This solution covers the following areas:

  • Processing unstructured data such as HTML, Markdown, and text files using Apache Spark. This includes distributed data cleaning, sanitizing, chunking, and embedding vectors for downstream consumption.
  • Bringing it all together into a Spark pipeline that incrementally processes sources and publishes vectors to an OpenSearch Serverless collection.
  • Querying the indexed content using the LLM model of your choice to provide natural language answers.

Prerequisites

To continue this tutorial, you must create the following AWS resources in advance:

Complete the following steps to launch an AWS Glue Studio notebook:

  1. Download the Jupyter Notebook file.
  2. On the AWS Glue console, choose Notebooks in the navigation pane.
  3. Under Create job, select Notebook.
  4. For Options, choose Upload Notebook.
  5. Choose Create notebook. The notebook will start up in a minute.

  6. Run the first two cells to configure an AWS Glue interactive session.


Now you have configured the required settings for your AWS Glue notebook.

Vector store setup

First, create a vector store. A vector store provides efficient vector similarity search by providing specialized indexes. RAG complements LLMs with an external knowledge base that’s typically built using a vector database hydrated with vector-encoded knowledge articles.

In this example, you will use Amazon OpenSearch Serverless for its simplicity and scalability to support a vector search at low latency and up to billions of vectors. Learn more in Amazon OpenSearch Service’s vector database capabilities explained.

Complete the following steps to set up OpenSearch Serverless:

  1. For the cell under Vectorestore Setup, replace <your-iam-role-arn> with your IAM role Amazon Resource Name (ARN), replace <region> with your AWS Region, and run the cell.
  2. Run the next cell to create the OpenSearch Serverless collection, security policies, and access policies.

You have provisioned OpenSearch Serverless successfully. Now you’re ready to inject documents into the vector store.

Document preparation

In this example, you will use a sample HTML file as the HTML input. It’s an article with specialized content that LLMs cannot answer without using RAG.

  1. Run the cell under Sample document download to download the HTML file, create a new S3 bucket, and upload the HTML file to the bucket.

  2. Run the cell under Document preparation. It loads the HTML file into Spark DataFrame df_html.

  3. Run the two cells under Parse and clean up HTML to define the functions parse_html and format_md. We use Beautiful Soup to parse HTML, and convert it to Markdown using markdownify in order to use MarkdownTextSplitter for chunking. These functions will be used inside a Spark Python user-defined function (UDF) in later cells.

  4. Run the cell under Chunking HTML. The example uses LangChain’s MarkdownTextSplitter to split the text along markdown-formatted headings into manageable chunks. Adjusting chunk size and overlap is crucial to help prevent the interruption of contextual meaning, which can affect the accuracy of subsequent vector store searches. The example uses a chunk size of 1,000 and a chunk overlap of 100 to preserve information continuity, but these settings can be fine-tuned to suit different use cases (see the sketch after this list).

  5. Run the three cells under Embedding. The first two cells configure LLMs and deploy them through Amazon SageMaker JumpStart. In the third cell, the function process_batch injects the documents into the vector store through the OpenSearch implementation inside LangChain, which inputs the embeddings model and the documents to create the entire vector store.

  6. Run the two cells under Pre-process HTML document. The first cell defines the Spark UDF, and the second cell triggers the Spark action to run the UDF per record containing the entire HTML content.
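
For reference, the chunking step (step 4) boils down to a few lines like the following sketch, assuming md_text holds the Markdown produced by format_md; the notebook’s actual cell may differ slightly:

from langchain.text_splitter import MarkdownTextSplitter

# Split along Markdown headings into overlapping chunks
splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_text(md_text)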

You have successfully ingested an embedding into the OpenSearch Serverless collection.

Question answering

In this section, we are going to demonstrate the question-answering capability using the embedding ingested in the previous section.

  1. Run the two cells under Question Answering to create the OpenSearchVectorSearch client, the LLM using Llama 3.1, and define RetrievalQA, where you can customize how the fetched documents should be added to the prompt using the chain_type parameter. Optionally, you can choose other foundation models (FMs). For such cases, refer to the model card to adjust the chunking length.

  2. Run the next cell to do a similarity search using the query “What is Task Decomposition?” against the vector store providing the most relevant information. It takes a few seconds to make documents available in the index. If you get an empty output in the next cell, wait 1-3 minutes and retry.

Now that you have the relevant documents, it’s time to use the LLM to generate an answer based on the embeddings.

  3. Run the next cell to invoke the LLM to generate an answer based on the embeddings.
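
Taken together, the question answering cells might resemble the following sketch, assuming the embeddings model, the Llama 3.1 LLM, the OpenSearch Serverless endpoint, its authentication object, and the index name were defined in earlier cells; the notebook’s actual code may differ:

from langchain.chains import RetrievalQA
from langchain_community.vectorstores import OpenSearchVectorSearch

# Connect to the same index used at ingestion time, with the same embedding model
vectorstore = OpenSearchVectorSearch(
    opensearch_url=opensearch_endpoint,
    index_name=index_name,
    embedding_function=embeddings,
    http_auth=awsauth,
)

# "stuff" places the retrieved chunks directly into the prompt
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3}),
)

print(qa.invoke({"query": "What is Task Decomposition?"}))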

As expected, the LLM answered with a detailed explanation about task decomposition. For production workloads, balancing latency and cost efficiency is crucial in semantic searches through vector stores. It’s important to select the most suitable k-NN algorithm and parameters for your specific needs, as detailed in this post. Additionally, consider using product quantization (PQ) to reduce the dimensionality of embeddings stored in the vector database. This approach can be advantageous for latency-sensitive tasks, though it might involve some trade-offs in accuracy. For additional details, see Choose the k-NN algorithm for your billion-scale use case with OpenSearch.

Clean up

Now to the final step, cleaning up the resources:

  1. Run the cell under Clean up to delete S3, OpenSearch Serverless, and SageMaker resources.

  2. Delete the AWS Glue notebook job.

Conclusion

This post explored a reusable RAG data pipeline using LangChain, AWS Glue, Apache Spark, Amazon SageMaker JumpStart, and Amazon OpenSearch Serverless. The solution provides a reference architecture for ingesting, transforming, vectorizing, and managing indexes for RAG at scale by using Apache Spark’s distributed capabilities and PySpark’s flexible scripting capabilities. This enables you to preprocess your external data in phases that include cleaning, sanitization, document chunking, generating vector embeddings for each chunk, and loading the results into a vector store.


About the Authors

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling with his road bike.

Akito Takeki is a Cloud Support Engineer at Amazon Web Services. He specializes in Amazon Bedrock and Amazon SageMaker. In his spare time, he enjoys travelling and spending time with his family.

Ray Wang is a Senior Solutions Architect at Amazon Web Services. Ray is dedicated to building modern solutions on the Cloud, especially in NoSQL, big data, and machine learning. As a hungry go-getter, he passed all 12 AWS certificates to make his technical field not only deep but wide. He loves to read and watch sci-fi movies in his spare time.

Vishal Kajjam is a Software Development Engineer on the AWS Glue team. He is passionate about distributed computing and using ML/AI for designing and building end-to-end solutions to address customers’ Data Integration needs. In his spare time, he enjoys spending time with family and friends.

Savio Dsouza is a Software Development Manager on the AWS Glue team. His team works on generative AI applications for the Data Integration domain and distributed systems for efficiently managing data lakes on AWS and optimizing Apache Spark for performance and reliability.

Kinshuk Pahare is a Principal Product Manager on AWS Glue. He leads a team of Product Managers who focus on AWS Glue platform, developer experience, data processing engines, and generative AI. He had been with AWS for 4.5 years. Before that he did product management at Proofpoint and Cisco.

Read More

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 1

From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 1

The AWS Generative AI Innovation Center (GenAIIC) is a team of AWS science and strategy experts who have deep knowledge of generative AI. They help AWS customers jumpstart their generative AI journey by building proofs of concept that use generative AI to bring business value. Since the inception of AWS GenAIIC in May 2023, we have witnessed high customer demand for chatbots that can extract information and generate insights from massive and often heterogeneous knowledge bases. Such use cases, which augment a large language model’s (LLM) knowledge with external data sources, are known as Retrieval-Augmented Generation (RAG).

This two-part series shares the insights gained by AWS GenAIIC from direct experience building RAG solutions across a wide range of industries. You can use this as a practical guide to building better RAG solutions.

In this first post, we focus on the basics of RAG architecture and how to optimize text-only RAG. The second post outlines how to work with multiple data formats such as structured data (tables, databases) and images.

Anatomy of RAG

RAG is an efficient way to provide an FM with additional knowledge by using external data sources and is depicted in the following diagram:

  • Retrieval: Based on a user’s question (1), relevant information is retrieved from a knowledge base (2) (for example, an OpenSearch index).
  • Augmentation: The retrieved information is added to the FM prompt (3.a) to augment its knowledge, along with the user query (3.b).
  • Generation: The FM generates an answer (4) by using the information provided in the prompt.

The following is a general diagram of a RAG workflow. From left to right are the retrieval, the augmentation, and the generation. In practice, the knowledge base is often a vector store.

Diagram of end-to-end RAG solution.

A deeper dive in the retriever

In a RAG architecture, the FM will base its answer on the information provided by the retriever. Therefore, a RAG is only as good as its retriever, and many of the tips that we share in our practical guide are about how to optimize the retriever. But what is a retriever exactly? Broadly speaking, a retriever is a module that takes a query as input and outputs relevant documents from one or more knowledge sources relevant to that query.

Document ingestion

In a RAG architecture, documents are often stored in a vector store. As shown in the following diagram, vector stores are populated by chunking the documents into manageable pieces (1) (if a document is short enough, chunking might not be required) and transforming each chunk of the document into a high-dimensional vector using a vector embedding (2), such as the Amazon Titan embeddings model. These embeddings have the characteristic that two chunks of texts that are semantically close have vector representations that are also close in that embedding (in the sense of the cosine or Euclidean distance).

The following diagram illustrates the ingestion of text documents in the vector store using an embedding model. Note that the vectors are stored alongside the corresponding text chunk (3), so that at retrieval time, when you identify the chunks closest to the query, you can return the text chunk to be passed to the FM prompt.

Diagram of the ingestion process.
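
To make the ingestion step concrete, the following minimal sketch vectorizes chunks with the Amazon Titan embeddings model on Amazon Bedrock and stores each vector alongside its text in an OpenSearch index. It assumes the list of text chunks and the OpenSearch connection details are already available, and the index and field names are illustrative, not prescriptive:

import json

import boto3
from opensearchpy import OpenSearch

bedrock_runtime = boto3.client("bedrock-runtime")


def embed(text):
    """Return the Titan embedding vector for a chunk of text."""
    response = bedrock_runtime.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]


# Connection details (host, auth) are assumed to be configured elsewhere
os_client = OpenSearch(hosts=[{"host": opensearch_host, "port": 443}], http_auth=auth, use_ssl=True)

for i, chunk in enumerate(chunks):
    os_client.index(
        index="rag-chunks",
        id=str(i),
        body={"chunk_text": chunk, "chunk_vector": embed(chunk)},  # text stored next to its vector
    )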

Semantic search

Vector stores allow for efficient semantic search: as shown in the following diagram, given a user query (1), we vectorize it (2) (using the same embedding as the one that was used to build the vector store) and then look for the nearest vectors in the vector store (3), which will correspond to the document chunks that are semantically closest to the initial query (4). Although vector stores and semantic search have become the default in RAG architectures, more traditional keyword-based search is still valuable, especially when searching for domain-specific words (such as technical jargon) or names. Hybrid search is a way to use both semantic search and keywords to rank a document, and we will give more details on this technique in the section on advanced RAG techniques.

The following diagram illustrates the retrieval of text documents that are semantically close to the user query. You must use the same embedding model at ingestion time and at search time.

Diagram of the retrieval process.
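
Continuing the illustrative sketch from the ingestion section (reusing its embed helper and os_client), the semantic search step embeds the query with the same model and runs a k-NN query against the index:

# Vectorize the user query with the same embedding model used at ingestion time
query_vector = embed(user_query)

# Retrieve the k nearest chunks from the (illustrative) k-NN enabled index
results = os_client.search(
    index="rag-chunks",
    body={
        "size": 4,
        "query": {"knn": {"chunk_vector": {"vector": query_vector, "k": 4}}},
    },
)
closest_chunks = [hit["_source"]["chunk_text"] for hit in results["hits"]["hits"]]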

Implementation on AWS

A RAG chatbot can be set up in a matter of minutes using Amazon Bedrock Knowledge Bases. The knowledge base can be linked to an Amazon Simple Storage Service (Amazon S3) bucket and will automatically chunk and index the documents it contains in an OpenSearch index, which will act as the vector store. The retrieve_and_generate API does both the retrieval and a call to an FM (Amazon Titan or Anthropic’s Claude family of models on Amazon Bedrock), for a fully managed solution. The retrieve API only implements the retrieval component and allows for a more custom approach downstream, such as document post processing before calling the FM separately.
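
As a minimal sketch of the fully managed path, a retrieve_and_generate call looks like the following; the knowledge base ID and model ARN are placeholders, and user_question is assumed to hold the user's query:

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": user_question},  # the user's question
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<your-knowledge-base-id>",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])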

In this blog post, we will provide tips and code to optimize a fully custom RAG solution with the following components:

  • An OpenSearch Serverless vector search collection as the vector store
  • Custom chunking and ingestion functions to ingest the documents in the OpenSearch index
  • A custom retrieval function that takes a user query as an input and outputs the relevant documents from the OpenSearch index
  • FM calls to your model of choice on Amazon Bedrock to generate the final answer.

In this post, we focus on a custom solution to help readers understand the inner workings of RAG. Most of the tips we provide can be adapted to work with Amazon Bedrock Knowledge Bases, and we will point this out in the relevant sections.

Overview of RAG use cases

While working with customers on their generative AI journey, we encountered a variety of use cases that fit within the RAG paradigm. In traditional RAG use cases, the chatbot relies on a database of text documents (.doc, .pdf, or .txt). In part 2 of this post, we will discuss how to extend this capability to images and structured data. For now, we’ll focus on a typical RAG workflow: the input is a user question, and the output is the answer to that question, derived from the relevant text chunks or documents retrieved from the database. Use cases include the following:

  • Customer service – This can include the following:
    • Internal – Live agents use an internal chatbot to help them answer customer questions.
    • External – Customers directly chat with a generative AI chatbot.
    • Hybrid – The model generates smart replies for live agents that they can edit before sending to customers.
  • Employee training and resources – In this use case, chatbots can use employee training manuals, HR resources, and IT service documents to help employees onboard faster or find the information they need to troubleshoot internal issues.
  • Industrial maintenance – Maintenance manuals for complex machines can have several hundred pages. Building a RAG solution around these manuals helps maintenance technicians find relevant information faster. Note that maintenance manuals often have images and schemas, which could put them in a multimodal bucket.
  • Product information search – Field specialists need to identify relevant products for a given use case, or conversely find the right technical information about a given product.
  • Retrieving and summarizing financial news – Analysts need the most up-to-date information on markets and the economy and rely on large databases of news or commentary articles. A RAG solution is a way to efficiently retrieve and summarize the relevant information on a given topic.

In the following sections, we will give tips that you can use to optimize each aspect of the RAG pipeline (ingestion, retrieval, and answer generation) depending on the underlying use case and data format. To verify that the modifications improve the solution, you first need to be able to assess the performance of the RAG solution.

Evaluating a RAG solution

Contrary to traditional machine learning (ML) models, for which evaluation metrics are well defined and straightforward to compute, evaluating a RAG framework is still an open problem. First, collecting ground truth (information known to be correct) for the retrieval component and the generation component is time consuming and requires human intervention. Secondly, even with several question-and-answer pairs available, it’s difficult to automatically evaluate if the RAG answer is close enough to the human answer.

In our experience, when a RAG system performs poorly, the retrieval component is almost always the culprit. Large pre-trained models such as Anthropic’s Claude will generate high-quality answers if provided with the right information, and we observe two main failure modes:

  • The relevant information isn’t present in the retrieved documents: In this case, the FM can try to make up an answer or use its own knowledge to answer. Adding guardrails against such behavior is essential.
  • Relevant information is buried within an excessive amount of irrelevant data: When the scope of the retriever is too broad, the FM can get confused and start mixing up multiple data sources, resulting in a wrong answer. More advanced models such as Anthropic’s Claude 3.5 Sonnet and Claude 3 Opus are reported to be more robust against such behavior, but this is still a risk to be aware of.

To evaluate the quality of the retriever, you can use the following traditional retrieval metrics:

  • Top-k accuracy– This metric measures whether at least one relevant document is found within the top k retrieved documents.
  • Mean Reciprocal Rank (MRR)– This metric considers the ranking of the retrieved documents. It’s calculated as the average of the reciprocal ranks (RR) for each query. The RR is the inverse of the rank position of the first relevant document. For example, if the first relevant document is in third position, the RR is 1/3. A higher MRR indicates that the retriever can rank the most relevant documents higher.
  • Recall– This metric measures the ability of the retriever to retrieve relevant documents from the corpus. It’s calculated as the number of relevant documents that are successfully retrieved over the total number of relevant documents. Higher recall indicates that the retriever can find most of the relevant information.
  • Precision– This metric measures the ability of the retriever to retrieve only relevant documents and avoid irrelevant ones. It’s calculated by the number of relevant documents successfully retrieved over the total number of documents retrieved. Higher precision indicates that the retriever isn’t retrieving too many irrelevant documents.

Note that if the documents are chunked, the metrics must be computed at the chunk level. This means the ground truth used to evaluate a retriever consists of pairs of questions and lists of relevant document chunks. In many cases, only one chunk contains the answer to the question, so the ground truth becomes a pair of a question and its relevant document chunk.
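
The following is a minimal sketch of how these metrics can be computed, assuming that for each query you have the ranked list of retrieved chunk IDs and the ground truth list of relevant chunk IDs (the evaluate_retriever helper and its input structure are illustrative, not part of an existing library):

def evaluate_retriever(results, k=5):
    """results: list of dicts with 'retrieved' (ranked chunk IDs) and 'relevant' (ground truth chunk IDs)."""
    top_k_hits, reciprocal_ranks, recalls, precisions = [], [], [], []
    for res in results:
        retrieved, relevant = res["retrieved"], set(res["relevant"])
        # Top-k accuracy: at least one relevant chunk among the top k retrieved chunks
        top_k_hits.append(any(chunk_id in relevant for chunk_id in retrieved[:k]))
        # Reciprocal rank of the first relevant chunk (0 if no relevant chunk was retrieved)
        rr = 0.0
        for rank, chunk_id in enumerate(retrieved, start=1):
            if chunk_id in relevant:
                rr = 1.0 / rank
                break
        reciprocal_ranks.append(rr)
        # Recall and precision over the full retrieved list
        n_relevant_retrieved = sum(1 for c in retrieved if c in relevant)
        recalls.append(n_relevant_retrieved / len(relevant))
        precisions.append(n_relevant_retrieved / len(retrieved))
    n = len(results)
    return {
        "top_k_accuracy": sum(top_k_hits) / n,
        "mrr": sum(reciprocal_ranks) / n,
        "recall": sum(recalls) / n,
        "precision": sum(precisions) / n,
    }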

To evaluate the quality of the generated response, two main options are:

  • Evaluation by subject matter experts: This provides the highest reliability in terms of evaluation but can’t scale to a large number of questions and slows down iterations on the RAG solution.
  • Evaluation by FM (also called LLM-as-a-judge):
    • With a human-created starting point: Provide the FM with a set of ground truth question-and-answer pairs and ask the FM to evaluate the quality of the generated answer by comparing it to the ground truth one.
    • With an FM-generated ground truth: Use an FM to generate question-and-answer pairs for given chunks, and then use this as a ground truth, before resorting to an FM to compare RAG answers to that ground truth.

We recommend that you use an FM for evaluations to iterate faster on improving the RAG solution, but to use subject-matter experts (or at least human evaluation) to provide a final assessment of the generated answers before deploying the solution.
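
As an illustration, the following is a minimal sketch of LLM-as-a-judge with a human-created ground truth; the prompt wording and the call_FM helper (the same hypothetical wrapper around an Amazon Bedrock model invocation used in the query rewriting example later in this post) are assumptions, not a prescribed implementation:

import json

judge_prompt = """
You are evaluating the answer produced by a RAG system.

<question>
{question}
</question>

<ground_truth_answer>
{ground_truth}
</ground_truth_answer>

<generated_answer>
{generated}
</generated_answer>

Compare the generated answer to the ground truth answer.
Output a json with two keys:
- "score": an integer from 1 (completely wrong) to 5 (fully correct and complete)
- "justification": one sentence explaining the score
Only output the json, nothing else.
"""

def judge_answer(question, ground_truth, generated):
    # call_FM is the same helper used for query rewriting later in this post
    response = call_FM(judge_prompt.format(
        question=question, ground_truth=ground_truth, generated=generated))
    return json.loads(response)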

A growing number of libraries offer automated evaluation frameworks that rely on additional FMs to create a ground truth and evaluate the relevance of the retrieved documents as well as the quality of the response:

  • Ragas– This framework offers FM-based metrics previously described, such as context recall, context precision, answer faithfulness, and answer relevancy (see the sketch after this list). It needs to be adapted to Anthropic’s Claude models because of its heavy dependence on specific prompts.
  • LlamaIndex– This framework provides multiple modules to independently evaluate the retrieval and generation components of a RAG system. It also integrates with other tools such as Ragas and DeepEval. It contains modules to create ground truth (query-and-context pairs and question-and-answer pairs) using an FM, which alleviates the use of time-consuming human collection of ground truth.
  • RefChecker– This is an Amazon Science library focused on fine-grained hallucination detection.
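
As an example, the following is a minimal sketch of a Ragas evaluation run, assuming a Ragas 0.1-style API; the sample question, answer, contexts, and ground truth are placeholders, and by default Ragas calls OpenAI models unless you pass Bedrock-backed llm and embeddings wrappers to evaluate:

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness

# one row per evaluated question; "contexts" holds the chunks retrieved for that question
eval_dataset = Dataset.from_dict({
    "question": ["What is the shelf life of product XYZ?"],
    "answer": ["XYZ has a shelf life of 24 months when unopened."],
    "contexts": [["When stored in its original, unopened container at temperatures between 15°C and 25°C, the product has a shelf life of 24 months from the date of manufacture."]],
    "ground_truth": ["The shelf life of XYZ is 24 months from the date of manufacture when unopened."],
})

# pass llm= and embeddings= arguments (for example LangChain Bedrock wrappers)
# to run the evaluation against models on Amazon Bedrock instead of the defaults
results = evaluate(
    eval_dataset,
    metrics=[context_precision, context_recall, faithfulness, answer_relevancy],
)
print(results)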

Troubleshooting RAG

Evaluation metrics give an overall picture of the performance of retrieval and generation, but they don’t help diagnose issues. Diving deeper into poor responses can help you understand what’s causing them and what you can do to alleviate the issue. You can diagnose the issue by looking at evaluation metrics and also by having a human evaluator take a closer look at both the LLM answer and the retrieved documents.

The following is a brief overview of issues and potential fixes. We will describe each of the techniques in more detail, including real-world use cases and code examples, in the next section.

  • The relevant chunk wasn’t retrieved (the retriever has low top-k accuracy and low recall, or the issue is spotted by human evaluation):
    • Try increasing the number of documents retrieved by the nearest neighbor search and re-ranking the results to cut back on the number of chunks after retrieval.
    • Try hybrid search. Using keywords in combination with semantic search (known as hybrid search) might help, especially if the queries contain names or domain-specific jargon.
    • Try query rewriting. Having an FM detect the intent or rewrite the query can help create a query that’s better suited for the retriever. For instance, a user query such as “What information do you have in the knowledge base about the economic outlook in China?” contains a lot of context that isn’t relevant to the search and would be more efficient if rewritten as “economic outlook in China” for search purposes.
  • Too many chunks were retrieved (the retriever has low precision, or the issue is spotted by human evaluation):
    • Try using keyword matching to restrict the search results. For example, if you’re looking for information about a specific entity or property in your knowledge base, only retrieve documents that explicitly mention them.
    • Try metadata filtering in your OpenSearch index. For example, if you’re looking for information in news articles, try using the date field to filter only the most recent results.
    • Try using query rewriting to get the right metadata filtering. This advanced technique uses the FM to rewrite the user query as a more structured query, allowing you to make the most of OpenSearch filters. For example, if you’re looking for the specifications of a specific product in your database, the FM can extract the product name from the query, and you can then filter on the product name field.
    • Try using reranking to cut down on the number of chunks passed to the FM.
  • A relevant chunk was retrieved, but it’s missing some context (can only be assessed by human evaluation):
    • Try changing the chunking strategy. Keep in mind that small chunks are good for precise questions, while large chunks are better for questions that require a broad context:
      • Try increasing the chunk size and overlap as a first step.
      • Try using section-based chunking. If you have structured documents, use section delimiters to split your documents into chunks, which produces more coherent chunks. Be aware that you might lose some of the more fine-grained context if your chunks are larger.
    • Try small-to-large retrievers. If you want to keep the fine-grained details of small chunks but make sure you retrieve all the relevant context, small-to-large retrievers will retrieve your chunk along with the previous and next ones.
  • If none of the above help:
    • Consider training a custom embedding.
  • The retriever isn’t at fault, the problem is with FM generation (evaluated by a human or LLM):
    • Try prompt engineering to mitigate hallucinations.
    • Try prompting the FM to use quotes in its answers, to allow for manual fact checking.
    • Try using another FM to evaluate or correct the answer.

A practical guide to improving the retriever

Note that not all the techniques that follow need to be implemented together to optimize your retriever—some might even have opposite effects. Use the preceding troubleshooting guide to get a shortlist of what might work, then look at the examples in the corresponding sections that follow to assess if the method can be beneficial to your retriever.

Hybrid search

Example use case: A large manufacturer built a RAG chatbot to retrieve product specifications. These documents contain technical terms and product names. Consider the following example queries:

query_1 = "What is the viscosity of product XYZ?"
query_2 = "How viscous is XYZ?"

The queries are equivalent and need to be answered with the same document. The keyword component will make sure that you’re boosting documents mentioning the name of the product, XYZ, while the semantic component will make sure that documents containing viscosity get a high score, even when the query contains the word viscous.

Combining vector search with keyword search can effectively handle domain-specific terms, abbreviations, and product names that embedding models might struggle with. Practically, this can be achieved in OpenSearch by combining a k-nearest neighbors (k-NN) query with keyword matching. The weights for the semantic search compared to keyword search can be adjusted. See the following example code:

vector_embedding = compute_embedding(query)
size = 10
semantic_weight = 10
keyword_weight = 1
search_query = {"size":size, "query": { "bool": { "should":[] , "must":[] } } }
    # semantic search
    search_query['query']['bool']['should'].append(
            {"function_score": 
             { "query": 
              {"knn": 
               {"vector_field": 
                {"vector": vector_embedding, 
                "k": 10 # The number of nearest neighbors to retrieve
                }}}, 
              "weight": semantic_weight } })
              
    # keyword search
    search_query['query']['bool']['should'].append({
             "function_score": 
            { "query": 
             {"match": 
             # This will increase the score of chunks that match the words in the query
              {"chunk_text":  query} 
              },
             "weight": keyword_weight } })

Amazon Bedrock Knowledge Bases also supports hybrid search, but you can’t adjust the weights for semantic compared to keyword search.

Adding metadata information to text chunks

Example use case: Using the same example of a RAG chatbot for product specifications, consider product specifications that are several pages long and where the product name is only present in the header of the document. When ingesting the document into the knowledge base, it’s chunked into smaller pieces for the embedding model, and the product name only appears in the first chunk, which contains the header. See the following example:

# Note: the following document was generated by Anthropic’s Claude Sonnet 
# and does not contain information about a real product

document_name = "Chemical Properties for Product XYZ"

chunk_1 = """
Product Description:
XYZ is a multi-purpose cleaning solution designed for industrial and commercial use. 
It is a concentrated liquid formulation containing anionic and non-ionic surfactants, 
solvents, and alkaline builders.

Chemical Composition:
- Water (CAS No. 7732-18-5): 60-80%
- 2-Butoxyethanol (CAS No. 111-76-2): 5-10%
- Sodium Hydroxide (CAS No. 1310-73-2): 2-5%
- Ethoxylated Alcohols (CAS No. 68439-46-3): 1-3%
- Sodium Metasilicate (CAS No. 6834-92-0): 1-3%
- Fragrance (Proprietary Mixture): <1%
"""

# chunk 2 below doesn't contain any mention of "XYZ"
chunk_2 = """
Physical Properties:
- Appearance: Clear, yellow liquid
- Odor: Mild, citrus fragrance
- pH (concentrate): 12.5 - 13.5
- Specific Gravity: 1.05 - 1.10
- Solubility in Water: Complete
- VOC Content: <10%

Shelf-life:
When stored in its original, unopened container at temperatures between 15°C and 25°C,
 the product has a shelf life of 24 months from the date of manufacture.
Once opened, the shelf life is reduced due to potential contamination and exposure to
 air. It is recommended to use the product within 6 months after opening the container.
"""

The chunk containing information about the shelf life of XYZ doesn’t contain any mention of the product name, so retrieving the right chunk when searching for shelf life of XYZ among dozens of other documents mentioning the shelf life of various products isn’t possible. A solution is to prepend the document name or title to each chunk. This way, when performing a hybrid search about the shelf life of product XYZ, the relevant chunk is more likely to be retrieved.

# prepend the document name to the chunks to improve context;
# now chunk_2 will also contain the product name

chunk_1 = document_name + chunk_1
chunk_2 = document_name + chunk_2

This is one way to use document metadata to improve search results, which can be sufficient in some cases. Later, we discuss how you can use metadata to filter the OpenSearch index.

Small-to-large chunk retrieval

Example use case: A customer built a chatbot to help their agents better serve customers. When the agent tries to help a customer troubleshoot their internet access, they might search for How to troubleshoot internet access? You can see a document where the instructions are split between two chunks in the following example. The retriever will most likely return the first chunk but might miss the second chunk when using hybrid search. Prepending the document title might not help in this example.

document_title = "Resolving network issues"

chunk_1 = """
[....]

# Troubleshooting internet access:

1. Check your physical connections:
   - Ensure that the Ethernet cable (if using a wired connection) is securely 
   plugged into both your computer and the modem/router.
   - If using a wireless connection, check that your device's Wi-Fi is turned 
   on and connected to the correct network.

2. Restart your devices:
   - Reboot your computer, laptop, or mobile device.
   - Power cycle your modem and router by unplugging them from the power source, 
   waiting for a minute, and then plugging them back in.

"""

chunk_2 = """
3. Check for network outages:
   - Contact your internet service provider (ISP) to inquire about any known 
   outages or service disruptions in your area.
   - Visit your ISP's website or check their social media channels for updates on 
   service status.
  
4. Check for interference:
   - If using a wireless connection, try moving your device closer to the router or access point.
   - Identify and eliminate potential sources of interference, such as microwaves, cordless phones, or other wireless devices operating on the same frequency.

# Router configuration

[....]
"""

To mitigate this issue, the first thing to try is to slightly increase the chunk size and overlap, reducing the likelihood of improper segmentation, but this requires trial and error to find the right parameters. A more effective solution is to employ a small-to-large chunk retrieval strategy. After retrieving the most relevant chunks through semantic or hybrid search (chunk_1 in the preceding example), adjacent chunks (chunk_2) are retrieved, merged with the initial chunks and provided to the FM for a broader context. You can even pass the full document text if the size is reasonable.

This method requires an additional OpenSearch field in the index to keep track of the chunk number and document name at ingest time, so that you can use those to retrieve the neighboring chunks after retrieving the most relevant chunk. See the following code example.

document_name = doc['document_name'] 
current_chunk = doc['current_chunk']

query = {
    "query": {
        "bool": {
            "must": [
                {
                    "match": {
                        "document_name": document_name
                    }
                }
            ],
            "should": [
                {"term": {"chunk_number": current_chunk - 1}},
                {"term": {"chunk_number": current_chunk + 1}}
            ],
            "minimum_should_match": 1
        }
    }
}

A more general approach is to do hierarchical chunking, in which each small (child) chunk is linked to a larger (parent) chunk. At retrieval time, you retrieve the child chunks, but then replace them with the parent chunks before sending the chunks to the FM.
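
The following is a minimal sketch of that replacement step, assuming each indexed child chunk stores a parent_id field pointing to its parent chunk (a hypothetical ingestion convention, not an existing API):

def replace_with_parents(child_hits, parent_store):
    """child_hits: child chunks returned by OpenSearch, each storing a 'parent_id' field.
    parent_store: dict mapping parent_id to the text of the larger parent chunk."""
    seen, parent_chunks = set(), []
    for hit in child_hits:
        parent_id = hit["_source"]["parent_id"]
        if parent_id not in seen:  # deduplicate children that share the same parent
            seen.add(parent_id)
            parent_chunks.append(parent_store[parent_id])
    return parent_chunks  # pass these to the FM instead of the child chunks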

Amazon Bedrock Knowledge Bases can perform hierarchical chunking.

Section-based chunking

Example use case: A financial news provider wants to build a chatbot to retrieve and summarize commentary articles about certain geographic regions, industries, or financial products. The questions require a broad context, such as What is the outlook for electric vehicles in China? Answering that question requires access to the entire section on electric vehicles in the “Chinese Auto Industry Outlook” commentary article. Compare that to other question and answer use cases that require small chunks to answer a question (such as our example about searching for product specifications).

Example use case: Section-based chunking also works well for how-to guides (such as the preceding internet troubleshooting example) or industrial maintenance use cases where the user needs to follow step-by-step instructions and truncated content would have a negative impact.

Using the structure of the text document to determine where to split it is an efficient way to create chunks that are coherent and contain all relevant context. If the document is in HTML or Markdown format, you can use the section delimiters to determine the chunks (see Langchain Markdown Splitter or HTML Splitter). If the documents are in PDF format, the Textractor library provides a wrapper around Amazon Textract that uses the Layout feature to convert a PDF document to Markdown or HTML.
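
For example, the following is a minimal sketch of section-based chunking on a Markdown document with LangChain’s MarkdownHeaderTextSplitter; the header levels to split on and the sample document are assumptions to adapt to your own content:

from langchain_text_splitters import MarkdownHeaderTextSplitter

# markdown_text could come from the Textractor Layout conversion mentioned above
markdown_text = """
# Troubleshooting internet access
1. Check your physical connections ...
2. Restart your devices ...

# Router configuration
...
"""

headers_to_split_on = [
    ("#", "section"),      # split on top-level sections
    ("##", "subsection"),  # and on subsections
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
chunks = splitter.split_text(markdown_text)

for chunk in chunks:
    print(chunk.metadata, chunk.page_content)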

Note that section-based chunking will create chunks with varying size, and they might not fit the context window of Cohere Embed, which is limited to 500 tokens. Amazon Titan Text Embeddings are better suited to section-based chunking because of their context window of 8,192 tokens.

To implement section-based chunking in Amazon Bedrock Knowledge Bases, you can use an AWS Lambda function to run a custom transformation. Amazon Bedrock Knowledge Bases also has a feature to create semantically coherent chunks, called semantic chunking. Instead of using the sections of the documents to determine the chunks, it uses embedding distance to create meaningful clusters of sentences.

Rewriting the user query

Query rewriting is a powerful technique that can benefit a variety of use cases.

Example use case: A RAG chatbot that’s built for a food manufacturer allows customers to ask questions about products, such as ingredients, shelf-life, and allergens. Consider the following example query:

query = """
Can you list all the ingredients in the nuts and seeds granola?
Put the allergens in bold and all caps. 
"""

Query rewriting can help with the following:

  • It can rewrite the query just for search purposes, without information about formatting that might distract the retriever.
  • It can extract a list of keywords to use for hybrid search.
  • It can extract the product name, which can be used as a filter in the OpenSearch index to refine search results (more details in the next section).

In the following code, we prompt the FM to rewrite the query and extract keywords and the product name. To avoid introducing too much latency with query rewriting, we suggest using a smaller model such as Anthropic’s Claude Haiku and providing an example of a reformatted query in the prompt to boost performance.

import json

query_rewriting_prompt = """
Rewrite the query as a json with the following keys:
- rewritten_query: a better version of the user's query that will be used to compute 
an embedding and do semantic search
- keywords: a list of keywords that correspond to the query, to be used in a 
search engine, it should not contain the product name.
- product_name: if the query is about a specific product, give the name here,
 otherwise say None.

<example>
H: what are the ingredients in the savory trail mix?
A: {{
  "rewritten_query": "ingredients savory trail mix",
  "keywords": ["ingredients"],
  "product_name": "savory trail mix"
}}
</example>

<query>
{query}
</query>

Only output the json, nothing else.
"""

def rewrite_query(query):
    response = call_FM(query_rewriting_prompt.format(query=query))
    print(response)
    json_query = json.loads(response)
    return json_query
    
rewrite_query(query)

The code output will be the following JSON:

{ 
"rewritten_query":"ingredients nuts and seeds granola allergens",
"keywords": ["ingredients", "allergens"], 
"product_name": "nuts and seeds granola" 
}

Amazon Bedrock Knowledge Bases now supports query rewriting. See this tutorial.

Metadata filtering

Example use case: Let’s continue with the previous example, where a customer asks “Can you list all the ingredients in the nuts and seeds granola? Put the allergens in bold and all caps.” Rewriting the query allowed you to remove superfluous information about the formatting and improve the results of hybrid search. However, there might be dozens of products that are either granola, or nuts, or granola with nuts.

If you enforce an OpenSearch filter to exactly match the product name, the retriever will return only the product information for nuts and seeds granola instead of the k-nearest documents across all products when using hybrid search. This reduces the number of tokens in the prompt, which both improves the latency of the RAG chatbot and diminishes the risk of hallucinations caused by information overload.

This scenario requires setting up the OpenSearch index with metadata. Note that if your documents don’t come with metadata attached, you can use an FM at ingest time to extract metadata from the documents (for example, title, date, and author).

oss = get_opensearch_serverless_client()
request = {
    "product_info": product_info,  # full text for the product information
    "vector_field_product": embed_query_titan(product_info),  # embedding for the product information
    "product_name": product_name,
    "date": date,  # optional field, can be used to sort by most recent
    "_op_type": "index",
    "source": file_key  # this is the S3 location, you can replace this with a URL
}
oss.index(index=index_name, body=request)

The following is an example of combining hybrid search, query rewriting, and filtering on the product_name field. Note that for the product name, we use a match_phrase clause to make sure that if the product name contains several words, the product name is matched in full; that is, if the product you’re looking for is “nuts and seeds granola”, you don’t want to match all product names that contain “nuts”, “seeds”, or “granola”.

query = """
Can you list all the ingredients in the nuts and seeds granola?
Put the allergens in bold and all caps.
"""
# using the rewrite_query function from the previous section
json_query = rewrite_query(query) 

# get the product name and keywords from the json query
product_name = json_query["product_name"] 
keywords = json_query["keywords"]

# compute the vector embedding of the rewritten query
vector_embedding = compute_embedding(json_query["rewritten_query"])

# initialize the search query dictionary
search_query = {"size": 10, "query": {"bool": {"should": [], "must": []}}}

# add a must clause with match_phrase to filter on the product name
search_query['query']['bool']['must'].append(
    {"match_phrase": {
        "product_name": product_name  # Extracted product name must match the product name field
    }})

# semantic search (semantic_weight and keyword_weight as defined in the hybrid search section)
search_query['query']['bool']['should'].append(
    {"function_score":
        {"query":
            {"knn":
                {"vector_field_product":
                    {"vector": vector_embedding,
                     "k": 10  # The number of nearest neighbors to retrieve
                    }}},
         "weight": semantic_weight}})

# keyword search
search_query['query']['bool']['should'].append(
    {"function_score":
        {"query":
            {"match":
                # This will increase the score of chunks that match the words in the query
                {"product_info": query}
            },
         "weight": keyword_weight}})

Amazon Bedrock Knowledge Bases recently introduced the ability to use metadata. See Amazon Bedrock Knowledge Bases now supports metadata filtering to improve retrieval accuracy for details on the implementation.

Training custom embeddings

Training custom embeddings is a more expensive and time-consuming way to improve a retriever, so it shouldn’t be the first thing you try to improve your RAG solution. However, if the retriever’s performance is still not satisfactory after you’ve tried the tips already mentioned, training a custom embedding model can boost its performance. Amazon Titan Text Embeddings models aren’t currently available for fine-tuning, but the FlagEmbedding library on Hugging Face provides a way to fine-tune BAAI embeddings, which are available in several sizes and rank highly on the Hugging Face embedding leaderboard. Fine-tuning requires the following steps:

  • Gather positive question-and-document pairs. You can do this manually or by using an FM prompted to generate questions based on the document.
  • Gather negative question-and-document pairs. It’s important to focus on documents that might be considered relevant by the pre-trained model but are not. This process is called hard negative mining.
  • Feed those pairs to the FlagEmbedding training module for fine-tuning, where each training example is a JSON object of the form (see the sketch after this list):
    {"query": str, "pos": List[str], "neg":List[str]}
    where query is the query, pos is a list of positive texts, and neg is a list of negative texts.
  • Combine the fine-tuned model with the original pre-trained model to avoid over-fitting on the fine-tuning dataset.
  • Deploy the final model for inference, for example on Amazon SageMaker, and evaluate it on sample questions.
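
The following is a minimal sketch of writing the fine-tuning data in the line-delimited format described above; the example query, positive text, and hard negative are placeholders:

import json

# each training example pairs a query with positive passages and hard negative passages
training_examples = [
    {
        "query": "shelf life of product XYZ",
        "pos": ["When stored in its original, unopened container, the product has a shelf life of 24 months from the date of manufacture."],
        "neg": ["Shelf life information for a different, unrelated product that the pre-trained model might consider relevant."],
    },
]

# write one JSON object per line, the format expected by the FlagEmbedding fine-tuning module
with open("finetune_data.jsonl", "w") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")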

Improving reliability of generated responses

Even with an optimized retriever, hallucinations can still occur. Prompt engineering is the best way to help prevent hallucinations in RAG. Additionally, asking the FM to generate quotations used in the answer can further reduce hallucinations and empower the user to verify the information sources.

Prompt engineering guardrails

Example use case: We built a chatbot that analyzes scouting reports for a professional sports franchise. The user might input What are the strengths of Player X? Without guardrails in the prompt, the FM might try to fill the gaps in the provided documents by using its own knowledge of Player X (if he’s a well-known player) or worse, make up information by combining knowledge it has about other players.

The FM’s training knowledge can sometimes get in the way of RAG answers. Basic prompting techniques can help mitigate hallucinations:

  • Instruct the FM to only use information available in the documents to answer the question.
    • Only use the information available in the documents to answer the question
  • Give the FM the option to say when it doesn’t have the answer.
    • If you can’t answer the question based on the documents provided, say you don’t know.
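
Putting these together, a RAG prompt with basic guardrails might look like the following sketch; the tag names and wording are illustrative, not a prescribed template:

guardrail_prompt = """
You are an assistant that answers questions about your team's scouting reports.

Here are the retrieved documents:
<documents>
{documents}
</documents>

<instructions>
- Only use the information available in the documents above to answer the question.
- If you can't answer the question based on the documents provided, say you don't know.
</instructions>

<question>
{question}
</question>
"""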

Asking the FM to output quotes

Another approach to make answers more reliable is to output supporting quotations. This has two benefits:

  • It allows the FM to generate its response by first outputting the relevant quotations, and then using them to generate its answer.
  • The presence of the quotation in the cited document can be checked programmatically, and the user can be warned if the quotation wasn’t found in the text. They can also look in the referenced document to get more context about the quotation.

In the following example, we prompt the FM to output quotations in <quote> tags. The quotations are nicely formatted as a JSON, with the source document name. Note how we put each document in its own <doc_i> tag in the prompt, so that the FM can reference it.

# Note: The scouting reports for Player A and Player B used in the example below
# were generated by Anthropic’s Claude Sonnet 3.
quotes_tags_prompt = """
You are an AI assistant chatbot designed to answer questions about your team's baseball scouting reports.

Here are some reports about players that you can use to answer the question:

<doc_0>
{doc_0}
</doc_0>

<doc_1>
{doc_1}
</doc_1>

<instructions>
- In <scratchpad> tags, you should put the document names and quotes from these documents 
 that help you answer this question. 
 You must format this as one or more JSON objects with 'document_name' and 'quote' as keys. 
- Then put your answer in <answer> tags. 
 If you refer to documents in your answer, make sure you are using the corresponding tag 
 e.g. in doc_0, in doc_3 etc.
- If you can't answer the question because there is not enough information in the reports say 
 "I can't answer this based on the provided reports." 
</instructions>

<question>
{question}
</question>
"""

Below is the FM response to the question "What are the weaknesses of each player?":

"""
<scratchpad>
[
{
"document_name": "doc_0",
"quote": "The main concern with Player A is his approach at the plate. He can be overly aggressive at times and will chase pitches out of the zone. Improving his plate discipline and pitch recognition will be key to him reaching his full offensive potential."
},
{
"document_name": "doc_1",
"quote": "The main area of focus for Player B's development will be improving his plate discipline and pitch recognition. He can be overly aggressive at times, leading to a high strikeout rate and a lower on-base percentage than his hit tool would suggest."
}
]
</scratchpad>
<answer>
The main weaknesses of the players are:
For Player A:
- Overly aggressive approach at the plate, chasing pitches out of the zone
- Needs to improve plate discipline and pitch recognition to reach full offensive potential
For Player B:
- Can be overly aggressive at the plate, leading to high strikeout rate and lower on-base percentage
- Needs to improve plate discipline and pitch recognition
</answer>
"""

One drawback of generating quotations outside of the answer is that the FM tends to select only one or two short quotations per document. This makes it a poor choice for complex questions that need to pull a lot of information from diverse documents. It also creates more output tokens that are eventually hidden from the user, which can significantly degrade the latency of the RAG solution if the quotations are too long.

Generating an answer with inline quotations

Continuing the example of the chatbot to analyze scouting reports, the user might want to summarize reports about a player from several scouts. A single player can have over a dozen scouting reports that the FM could use in its summary, and the previous method outputs only a small number of quotations as justification. One alternative is to ask the FM to embed quotations directly in the answer, by using quotation marks and inline citations.

# Note: The scouting reports for Player A used in the example below
# were generated by Anthropic’s Claude Sonnet 3.

quotes_in_text_prompt = """
You are an AI assistant chatbot designed to answer questions about your team's baseball scouting reports.

Here are some reports about players that you can use to answer the question:

<doc_0>
{doc_0}
</doc_0>

...

<doc_10>
{doc_10}
</doc_10>

<instructions>
- Put your answer in <answer> tags.
- Use as much information from different reports as possible.
- You should only use information in the documents to answer. If you don't have enough information in the reports to answer, say you can't answer based on the reports.
- You should ground your answer by quoting the relevant documents by using quotation marks.
- After the quotes, put an inline citation <example>Player A is "very athletic" (doc_2)</example>
</instructions>

<question>
{question}
</question>
"""

Verifying quotes

You can use a Python script to check if a quotation is present in the referenced text, thanks to the doc_i tag. However, although this checking mechanism guarantees no false positives, there can be false negatives. When the quotation-checking function fails to find a quotation in the documents, it means only that the quotation isn’t present verbatim in the text. The information might still be factually correct but formatted differently. The FM might remove punctuation or correct misspellings from the original document, or the original document might contain Unicode characters that the FM can’t reproduce, causing the quotation-checking function to fail.
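
The following is a minimal sketch of such a quotation check, with a light normalization step (lowercasing, removing punctuation, and collapsing whitespace) to reduce false negatives; the normalization choices are assumptions you can adapt:

import re

def normalize(text):
    # lowercase, drop punctuation, and collapse whitespace to make matching more forgiving
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()

def verify_quote(quote, documents, document_name):
    """documents: dict mapping document names (for example 'doc_0') to their full text."""
    source_text = documents.get(document_name, "")
    return normalize(quote) in normalize(source_text)

# example usage with one item from the scratchpad output of the previous section
# verified = verify_quote(item["quote"], documents, item["document_name"])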

To improve the user experience, you can display in the UI if the quotation was found, in which case the user can fully trust the response, and if the quotation wasn’t found, the UI can display a warning and suggest that the user check the cited source. Another benefit of prompting the FM to provide the associated source in the response is that it allows you to display only the sources in the UI to avoid information overload but still provide the user with a way to look for additional information if needed.

An additional FM call, potentially with another model, can be used to assess the response instead of using the more rigid approach of the Python script. However, using an FM to grade another FM answer has some uncertainty and it cannot match the reliability provided by using a script to check the quotation or, in the case of a suspect quotation, by using human verification.

Conclusion

Building effective text-only RAG solutions requires carefully optimizing the retrieval component to surface the most relevant information to the language model. Although FMs are highly capable, their performance is heavily dependent on the quality of the retrieved context.

As the adoption of generative AI continues to accelerate, building trustworthy and reliable RAG solutions will become increasingly crucial across industries to facilitate their broad adoption. We hope the lessons learned from our experiences at AWS GenAIIC provide a solid foundation for organizations embarking on their own generative AI journeys.

In this part of this series, we covered the core concepts behind RAG architectures and discussed strategies for evaluating RAG performance, both quantitatively through metrics and qualitatively by analyzing individual outputs. We outlined several practical tips for improving text retrieval, including using hybrid search techniques, enhancing context through data preprocessing, and rewriting queries for better relevance. We also explored methods for increasing reliability, such as prompting the language model to provide supporting quotations from the source material and programmatically verifying their presence.

In the second post in this series, we will discuss RAG beyond text. We will present techniques to work with multiple data formats, including structured data (tables and databases) and multimodal RAG, which mixes text and images.


About the Author

Aude Genevay is a Senior Applied Scientist at the Generative AI Innovation Center, where she helps customers tackle critical business challenges and create value using generative AI. She holds a PhD in theoretical machine learning and enjoys turning cutting-edge research into real-world solutions.

Read More

Enhance your Amazon Redshift cloud data warehouse with easier, simpler, and faster machine learning using Amazon SageMaker Canvas

Enhance your Amazon Redshift cloud data warehouse with easier, simpler, and faster machine learning using Amazon SageMaker Canvas

Machine learning (ML) helps organizations to increase revenue, drive business growth, and reduce costs by optimizing core business functions such as supply and demand forecasting, customer churn prediction, credit risk scoring, pricing, predicting late shipments, and many others.

Conventional ML development cycles take weeks to many months and require data science understanding and ML development skills that are often in short supply. Business analysts’ ideas for using ML models often sit in prolonged backlogs because of data engineering and data science teams’ bandwidth constraints and data preparation activities.

In this post, we dive into a business use case for a banking institution. We show how a financial or business analyst at a bank can easily predict whether a customer’s loan will be fully paid, charged off, or current using a machine learning model suited to the business problem at hand. The analyst can easily pull in the data they need, use natural language to clean up and fill any missing data, and finally build and deploy a machine learning model that accurately predicts the loan status, all without needing to become a machine learning expert. The analyst can also quickly create a business intelligence (BI) dashboard using the results from the ML model within minutes of receiving the predictions. Let’s learn about the services we will use to make this happen.

Amazon SageMaker Canvas is a web-based visual interface for building, testing, and deploying machine learning workflows. It allows data scientists and machine learning engineers to interact with their data and models and to visualize and share their work with others with just a few clicks.

SageMaker Canvas also integrates with Data Wrangler, which helps with creating data flows and preparing and analyzing your data. Built into Data Wrangler is the Chat for data prep option, which allows you to use natural language to explore, visualize, and transform your data in a conversational interface.

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it cost-effective to efficiently analyze all your data using your existing business intelligence tools.

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. With QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries.

Solution overview

The solution architecture that follows illustrates:

  1. A business analyst signing in to SageMaker Canvas.
  2. The business analyst connects to the Amazon Redshift data warehouse and pulls the desired data into SageMaker Canvas to use.
  3. The business analyst tells SageMaker Canvas to build a predictive analysis ML model.
  4. After the model has been built, the analyst gets batch prediction results.
  5. The analyst sends the results to QuickSight for users to further analyze.

Prerequisites

Before you begin, make sure you have the following prerequisites in place:

  • An AWS account and role with the AWS Identity and Access Management (IAM) privileges to deploy the following resources:
    • IAM roles.
    • A provisioned or serverless Amazon Redshift data warehouse. For this post we’ll use a provisioned Amazon Redshift cluster.
    • A SageMaker domain.
    • A QuickSight account (optional).
  • Basic knowledge of a SQL query editor.

Set up the Amazon Redshift cluster

We’ve created a CloudFormation template to set up the Amazon Redshift cluster.

  1. Deploy the CloudFormation template to your account.
  2. Enter a stack name, then choose Next twice and keep the rest of the parameters at their defaults.
  3. On the review page, scroll down to the Capabilities section and select I acknowledge that AWS CloudFormation might create IAM resources.
  4. Choose Create stack.

The stack will run for 10–15 minutes. After it’s finished, you can view the outputs of the parent and nested stacks as shown in the following figures:

Parent stack

Nested stack 

Sample data

You will use a publicly available dataset that AWS hosts and maintains in our own S3 bucket for workshop use. It covers bank customers and their loans, and includes customer demographic data and loan terms.

Implementation steps

Load data to the Amazon Redshift cluster

  1. Connect to your Amazon Redshift cluster using Query Editor v2. To navigate to the Amazon Redshift Query Editor v2, follow the steps in Opening query editor v2.
  2. Create a table in your Amazon Redshift cluster using the following SQL command:
    DROP table IF EXISTS public.loan_cust;
    
    CREATE TABLE public.loan_cust (
        loan_id bigint,
        cust_id bigint,
        loan_status character varying(256),
        loan_amount bigint,
        funded_amount_by_investors double precision,
        loan_term bigint,
        interest_rate double precision,
        installment double precision,
        grade character varying(256),
        sub_grade character varying(256),
        verification_status character varying(256),
        issued_on character varying(256),
        purpose character varying(256),
        dti double precision,
        inquiries_last_6_months bigint,
        open_credit_lines bigint,
        derogatory_public_records bigint,
        revolving_line_utilization_rate double precision,
        total_credit_lines bigint,
        city character varying(256),
        state character varying(256),
        gender character varying(256),
        ssn character varying(256),
        employment_length bigint,
        employer_title character varying(256),
        home_ownership character varying(256),
        annual_income double precision,
        age integer
    ) DISTSTYLE AUTO;

  3. Load data into the loan_cust table using the following COPY command:
    COPY loan_cust  FROM 's3://redshift-demos/bootcampml/loan_cust.csv'
    iam_role default
    region 'us-east-1' 
    delimiter '|'
    csv
    IGNOREHEADER 1;

  4. Query the table to see what the data looks like:
    SELECT * FROM loan_cust LIMIT 100;

Set up chat for data

  1. To use the Chat for data prep option in SageMaker Canvas, you must enable access to the underlying model in Amazon Bedrock.
    1. Open the AWS Management Console, go to Amazon Bedrock, and choose Model access in the navigation pane.
    2. Choose Enable specific models, then under Anthropic, select Claude and choose Next.
    3. Review the selection and choose Submit.
  2. Navigate to the Amazon SageMaker service from the AWS Management Console, select Canvas, and choose Open Canvas.
  3. Choose Datasets from the navigation pane, then choose the Import data dropdown, and select Tabular.
  4. For Dataset name, enter redshift_loandata and choose Create.
  5. On the next page, choose Data Source and select Redshift as the source. Under Redshift, select + Add Connection.
  6. Enter the following details to establish your Amazon Redshift connection:
    1. Cluster Identifier: Copy the ProducerClusterName from the CloudFormation nested stack outputs. You can reference the preceding screenshot of the nested stack, where you will find the cluster identifier output.
    2. Database name: Enter dev.
    3. Database user: Enter awsuser.
    4. Unload IAM Role ARN: Copy the RedshiftDataSharingRoleName from the nested stack outputs.
    5. Connection Name: Enter MyRedshiftCluster.
    6. Choose Add connection.

  7. After the connection is created, expand the public schema, drag the loan_cust table into the editor, and choose Create dataset.
  8. Choose the redshift_loandata dataset and choose Create a data flow.
  9. Enter redshift_flow for the name and choose Create.
  10. After the flow is created, choose Chat for data prep.
  11. In the text box, enter summarize my data and choose the run arrow.
  12. The output should look something like the following:
  13. Now you can use natural language to prep the dataset. Enter Drop ssn and filter for ages over 17 and choose the run arrow. You will see it was able to handle both steps. You can also view the PySpark code that it ran. To add these steps as dataset transforms, choose Add to steps.
  14. Rename the step to drop ssn and filter age > 17, choose Update, and then choose Create model.
  15. Export the data and create a model: Enter loan_data_forecast_dataset for Dataset name, enter loan_data_forecast for Model name, select Predictive analysis for Problem type, select loan_status for Target column, and then choose Export and create model.
  16. Verify that the correct Target column and Model type are selected and choose Quick build.
  17. The model is now being created. It usually takes 14–20 minutes, depending on the size of your dataset.
  18. After the model has completed training, you will be routed to the Analyze tab. There, you can see the average prediction accuracy and the column impact on the prediction outcome. Note that your numbers might differ from the ones you see in the following figure, because of the stochastic nature of the ML process.

Use the model to make predictions

  1. Now let’s use the model to make predictions for the future status of loans. Choose Predict.
  2. Under Choose the prediction type, select Batch prediction, then select Manual.
  3. Then select loan_data_forecast_dataset from the dataset list, and click Generate predictions.
  4. You’ll see the following after the batch prediction is complete. Click on the breadcrumb menu next to the Ready status and click on Preview to view the results.
  5. You can now view the predictions and download them as CSV.
  6. You can also generate single predictions for one row of data at a time. Under Choose the prediction type, select Single Prediction and then change the values for any of the input fields that you’d like, and choose Update.

Analyze the predictions

We will now show you how to use QuickSight to visualize the prediction data from SageMaker Canvas to gain further insights from your data. SageMaker Canvas has direct integration with QuickSight, which is a cloud-powered business analytics service that helps employees within an organization build visualizations, perform ad hoc analysis, and quickly get business insights from their data, anytime, on any device.

  1. With the preview page up, choose Send to Amazon QuickSight.
  2. Enter a QuickSight user name you want to share the results to.
  3. Choose Send and you should see confirmation saying the results were sent successfully.
  4. Now, you can create a QuickSight dashboard for predictions.
    1. Go to the QuickSight console by entering QuickSight in your console services search bar and choose QuickSight.
    2. Under Datasets, select the SageMaker Canvas dataset that was just created.
    3. Choose Edit Dataset.
    4. Under the State field, change the data type to State.
    5. Choose Create with Interactive sheet selected.
    6. Under visual types, choose the Filled map.
    7. Select the State and Probability fields.
    8. Under Field wells, choose Probability and change the Aggregate to Average and Show as to Percent.
    9. Choose Filter and add a filter for loan_status to include fully paid loans only. Choose Apply.
    10. At the top right in the blue banner, choose Share and Publish Dashboard.
    11. We use the name Average probability for fully paid loan by state, but feel free to use your own.
    12. Choose Publish dashboard and you’re done. You can now share this dashboard and its predictions with other analysts and consumers of this data.

Clean up

Use the following steps to avoid any extra cost to your account:

  1. Sign out of SageMaker Canvas
  2. In the AWS console, delete the CloudFormation stack you launched earlier in the post.

Conclusion

We believe integrating your cloud data warehouse (Amazon Redshift) with SageMaker Canvas opens the door to producing many more robust ML solutions for your business faster, without needing to move data, and with no ML experience required.

You now have business analysts producing valuable business insights, while letting data scientists and ML engineers help refine, tune, and extend models as needed. SageMaker Canvas integration with Amazon Redshift provides a unified environment for building and deploying machine learning models, allowing you to focus on creating value with your data rather than focusing on the technical details of building data pipelines or ML algorithms.

Additional reading:

  1. SageMaker Canvas Workshop
  2. re:Invent 2022 – SageMaker Canvas
  3. Hands-On Course for Business Analysts – Practical Decision Making using No-Code ML on AWS

About the Authors

Suresh Patnam is a Principal Sales Specialist, AI/ML and Generative AI, at AWS. He is passionate about helping businesses of all sizes transform into fast-moving digital organizations focusing on data, AI/ML, and generative AI.

Sohaib Katariwala is a Sr. Specialist Solutions Architect at AWS focused on Amazon OpenSearch Service. His interests are in all things data and analytics. More specifically he loves to help customers use AI in their data strategy to solve modern day challenges.

Michael Hamilton is an Analytics & AI Specialist Solutions Architect at AWS. He enjoys all things data related and helping customers find solutions for their complex use cases.

Nabil Ezzarhouni is an AI/ML and Generative AI Solutions Architect at AWS. He is based in Austin, TX, and is passionate about Cloud, AI/ML technologies, and Product Management. When he is not working, he spends time with his family, looking for the best taco in Texas. Because… why not?

Read More

Create a generative AI-based application builder assistant using Amazon Bedrock Agents

Create a generative AI-based application builder assistant using Amazon Bedrock Agents

In this post, we set up an agent using Amazon Bedrock Agents to act as a software application builder assistant.

Agentic workflows are a fresh new perspective on building dynamic and complex business use case–based workflows with the help of large language models (LLMs) as their reasoning engine, or brain. These agentic workflows decompose natural language query-based tasks into multiple actionable steps with iterative feedback loops and self-reflection to produce the final result using tools and APIs.

Amazon Bedrock Agents helps you accelerate generative AI application development by orchestrating multistep tasks. Amazon Bedrock Agents uses the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. They use the developer-provided instruction to create an orchestration plan and then carry out the plan by invoking company APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG) to provide a final response to the end user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customization and tailoring apps to help meet specific project requirements while protecting private data and securing their applications. These agents work with AWS managed infrastructure capabilities and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce cost.

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Solution overview

Typically, a three-tier software application has a UI interface tier, a middle tier (the backend) for business APIs, and a database tier. The generative AI–based application builder assistant from this post will help you accomplish tasks through all three tiers. It can generate and explain code snippets for UI and backend tiers in the language of your choice to improve developer productivity and facilitate rapid development of use cases. The agent can recommend software and architecture design best practices using the AWS Well-Architected Framework for the overall system design.

The agent can generate SQL queries using natural language questions using a database schema DDL (data definition language for SQL) and execute them against a database instance for the database tier.

We use Amazon Bedrock Agents with two knowledge bases for this assistant. Amazon Bedrock Knowledge Bases inherently uses the Retrieval Augmented Generation (RAG) technique. A typical RAG implementation consists of two parts:

  • A data pipeline that ingests data from documents typically stored in Amazon Simple Storage Service (Amazon S3) into a knowledge base, namely a vector database such as Amazon OpenSearch Serverless, so that it’s available for lookup when a question is received
  • An application that receives a question from the user, looks up the knowledge base for relevant pieces of information (context), creates a prompt that includes the question and the context, and provides it to an LLM for generating a response

The following diagram illustrates how our application builder assistant acts as a coding assistant, recommends AWS design best practices, and aids in SQL code generation.

The architecture diagram for this notebook demonstrates the conditional workflow for LLMs and the three workflows possible with this application builder assistant: 1) text to SQL, which generates SQL statements from natural language and executes them against a local database; 2) a web-scraped knowledge base on the AWS Well-Architected Framework that the user can ask questions against; and 3) writing and explaining code with the Claude LLM. A user can ask any of these three types of questions, making it an application builder assistant.

Based on the three workflows in the preceding figure, let’s explore the type of task you need for different use cases:

  • Use case 1 – If you want to write and validate a SQL query against a database, use the existing DDL schemas set up as knowledge base 1 to come up with the SQL query. The following are sample user queries:
    • What are the total sales amounts by year?
    • What are the top five most expensive products?
    • What is the total revenue for each employee?
  • Use case 2 – If you want recommendations on design best practices, look up the AWS Well-Architected Framework knowledge base (knowledge base 2). The following are sample user queries:
    • How can I design secure VPCs?
    • What are some S3 best practices?
  • Use case 3 – You might want to author some code, such as helper functions like validate email, or use existing code. In this case, use prompt engineering techniques to call the default agent LLM and generate the email validation code. The following are sample user queries:
    • Write a Python function to validate email address syntax.
    • Explain the following code in lucid, natural language to me. $code_to_explain (this variable is populated using code contents from any code file of your choice. More details can be found in the notebook).

Prerequisites

To run this solution in your AWS account, complete the following prerequisites:

  1. Clone the GitHub repository and follow the steps explained in the README.
  2. Set up an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance. For this post, we provide an AWS CloudFormation template, available in the GitHub repository. The CloudFormation template also provides the required AWS Identity and Access Management (IAM) access to set up the vector database, SageMaker resources, and AWS Lambda functions.
  3. Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane of the Amazon Bedrock console and choose from the list of available options. We use Anthropic's Claude 3 Sonnet and Amazon Titan Text Embeddings V2 on Amazon Bedrock for this post.

Implement the solution

In the GitHub repository notebook, we cover the following learning objectives:

  1. Choose the underlying FM for your agent.
  2. Write a clear and concise agent instruction to use one of the two knowledge bases or the base agent LLM (examples are given later in the post).
  3. Create and associate an action group with an API schema and a Lambda function.
  4. Create, associate, and ingest data into the two knowledge bases.
  5. Create, invoke, test, and deploy the agent (a minimal boto3 sketch of agent creation follows this list).
  6. Generate UI and backend code with LLMs.
  7. Recommend AWS best practices for system design with the AWS Well-Architected Framework guidelines.
  8. Generate, run, and validate the SQL from natural language understanding using LLMs, few-shot examples, and a database schema as a knowledge base.
  9. Clean up agent resources and their dependencies using a script.
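The following is a minimal boto3 sketch of steps 3–5 under stated assumptions: the IAM role ARN, Lambda function ARN, OpenAPI schema file, agent instruction, and knowledge base ID are placeholders, and the notebook in the GitHub repository remains the authoritative implementation.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholders -- replace with resources created by the CloudFormation template / notebook
AGENT_ROLE_ARN = "arn:aws:iam::123456789012:role/BedrockAgentRole"
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:app-builder-actions"
KB_ID = "REPLACE_WITH_KB_ID"

# 1. Create the agent with an instruction and an underlying FM
agent = bedrock_agent.create_agent(
    agentName="application-builder-assistant",
    foundationModel="anthropic.claude-3-sonnet-20240229-v1:0",
    instruction="You are an application builder assistant...",  # abbreviated instruction
    agentResourceRoleArn=AGENT_ROLE_ARN,
)
agent_id = agent["agent"]["agentId"]

# 2. Attach an action group backed by a Lambda function and an OpenAPI schema
with open("openapi_schema.json") as f:  # hypothetical OpenAPI schema for the action group
    api_schema_payload = f.read()

bedrock_agent.create_agent_action_group(
    agentId=agent_id,
    agentVersion="DRAFT",
    actionGroupName="sql-execution",
    actionGroupExecutor={"lambda": LAMBDA_ARN},
    apiSchema={"payload": api_schema_payload},
)

# 3. Associate a knowledge base, then prepare the agent for testing
bedrock_agent.associate_agent_knowledge_base(
    agentId=agent_id,
    agentVersion="DRAFT",
    knowledgeBaseId=KB_ID,
    description="Northwind DB schema knowledge base",
)
bedrock_agent.prepare_agent(agentId=agent_id)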

Agent instructions and user prompts

The application builder assistant agent instruction looks like the following.

Hello, I am AI Application Builder Assistant. I am capable of answering the following three categories of questions:

- Best practices for design of software applications using the content inside the AWS best practices 
and AWS well-architected framework Knowledge Base. I help customers understand AWS best practices for 
building applications with AWS services.

- Generate a valid SQLite query for the customer using the database schema inside the Northwind DB knowledge base 
and then execute the query that answers the question based on the [Northwind] dataset. If the Northwind DB Knowledge Base search 
function result did not contain enough information to construct a full query try to construct a query to the best of your ability 
based on the Northwind database schema.

- Generate and Explain code for the customer following standard programming language syntax.

Feel free to ask any questions along those lines!

Each user question to the agent by default includes the following system prompt.

Note: The following system prompt remains the same for each agent invocation; only {user_question_to_agent} is replaced with the user query. A minimal invocation sketch follows the prompt.

Question: {user_question_to_agent} 

Given an input question, you will use the existing Knowledge Bases on AWS 
Well-Architected Framework and Northwind DB Knowledge Base.

- For building and designing software applications, you will use the existing Knowledge Base on AWS well-architected framework 
to generate a response containing the most relevant design principles and links to any documents. This Knowledge Base response can then be passed 
to the functions available to answer the user question. The final response should be the direct answer to the user question. 
It has to be in markdown format highlighting any text of interest. Remove any backticks in the final response.

- To generate code for a given user question, you can use the default Large Language model to come up with the response. 
This response can be in code markdown format. You can optionally provide an explanation for the code.

- To explain code for a given user question, you can use the default Large Language model to come up with the response.

- For SQL query generation you will ONLY use the existing database schemas in the Northwind DB Knowledge Base to create a syntactically 
correct SQLite query and then you will EXECUTE the SQL Query using the functions and API provided to answer the question.

Make sure to use ONLY existing columns and tables based on the Northwind DB database schema. Make sure to wrap table names with 
square brackets. Do not use underscore for table names unless that is part of the database schema. Make sure to add a semicolon after 
the end of the SQL statement generated.

Remove any backticks and any html tags like <table><th><tr> in the final response.

Here are a few examples of questions I can help answer by generating and then executing a SQLite query:

- What are the total sales amounts by year?
- What are the top 5 most expensive products?
- What is the total revenue for each employee?
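The following is a minimal sketch of wrapping a user question into that template and sending it to the deployed agent with the InvokeAgent API. The agent ID, alias ID, and the system_prompt.txt file holding the template above are placeholders.

import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

# Placeholders -- use the IDs returned when you created and aliased the agent
AGENT_ID = "REPLACE_WITH_AGENT_ID"
AGENT_ALIAS_ID = "REPLACE_WITH_AGENT_ALIAS_ID"

# Hypothetical file holding the system prompt template shown above
with open("system_prompt.txt") as f:
    SYSTEM_PROMPT_TEMPLATE = f.read()

def ask_agent(user_question: str) -> str:
    # Substitute the user query into the template, then invoke the agent
    prompt = SYSTEM_PROMPT_TEMPLATE.replace("{user_question_to_agent}", user_question)
    response = agent_runtime.invoke_agent(
        agentId=AGENT_ID,
        agentAliasId=AGENT_ALIAS_ID,
        sessionId=str(uuid.uuid4()),
        inputText=prompt,
    )
    # The response is an event stream; concatenate the returned text chunks
    return "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in response["completion"]
        if "chunk" in event
    )

print(ask_agent("What are the top 5 most expensive products?"))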

Cost considerations

The following are important cost considerations:

  • There are no separate charges in this implementation for building resources using Amazon Bedrock Knowledge Bases or Amazon Bedrock Agents.
  • You will incur charges for embedding model and text model invocation on Amazon Bedrock. For more details, refer to Amazon Bedrock pricing.
  • You will incur charges for Amazon S3 and vector DB usage. For more details, see Amazon S3 pricing and Amazon OpenSearch Service Pricing, respectively.

Clean up

To avoid incurring unnecessary costs, the implementation automatically cleans up resources after a complete run of the notebook. Check the instructions in the Clean-up Resources section of the notebook to learn how to skip the automatic cleanup and experiment with different prompts.

The order of resource cleanup is as follows (a partial boto3 sketch of the agent-related steps follows the list):

  1. Disable the action group.
  2. Delete the action group.
  3. Delete the alias.
  4. Delete the agent.
  5. Delete the Lambda function.
  6. Empty the S3 bucket.
  7. Delete the S3 bucket.
  8. Delete IAM roles and policies.
  9. Delete the vector DB collection policies.
  10. Delete the knowledge bases.
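The following is a partial sketch of the agent-related cleanup (steps 1–4) using the bedrock-agent API. The IDs and action group name are placeholders, and the cleanup script in the notebook remains the reference implementation.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholders for the resources created earlier
AGENT_ID = "REPLACE_WITH_AGENT_ID"
AGENT_ALIAS_ID = "REPLACE_WITH_AGENT_ALIAS_ID"
ACTION_GROUP_ID = "REPLACE_WITH_ACTION_GROUP_ID"

# 1-2. Disable, then delete the action group
bedrock_agent.update_agent_action_group(
    agentId=AGENT_ID,
    agentVersion="DRAFT",
    actionGroupId=ACTION_GROUP_ID,
    actionGroupName="sql-execution",   # hypothetical action group name
    actionGroupState="DISABLED",
)
bedrock_agent.delete_agent_action_group(
    agentId=AGENT_ID,
    agentVersion="DRAFT",
    actionGroupId=ACTION_GROUP_ID,
)

# 3. Delete the alias, then 4. delete the agent itself
bedrock_agent.delete_agent_alias(agentId=AGENT_ID, agentAliasId=AGENT_ALIAS_ID)
bedrock_agent.delete_agent(agentId=AGENT_ID, skipResourceInUseCheck=True)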

Conclusion

This post demonstrated how to query and integrate workflows with Amazon Bedrock Agents using multiple knowledge bases to create a generative AI–based software application builder assistant that can author and explain code, generate SQL using DDL schemas, and recommend design suggestions using the AWS Well-Architected Framework.

Beyond the code generation and code explanation demonstrated in this post, to run and troubleshoot application code in a secure test environment, refer to Code Interpreter setup with Amazon Bedrock Agents.

For more information on creating agents to orchestrate workflows, see Amazon Bedrock Agents.

Acknowledgements

The author thanks all the reviewers for their valuable feedback.


About the Author

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (like NLP, NLU, NLG). His work has been focused on conversational AI, task-oriented dialogue systems and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.

Read More

Transitioning from Amazon Rekognition people pathing: Exploring other alternatives

Transitioning from Amazon Rekognition people pathing: Exploring other alternatives

Amazon Rekognition people pathing is a machine learning (ML)–based capability of Amazon Rekognition Video that users can use to understand where, when, and how each person is moving in a video. This capability can be used for multiple use cases, such as for understanding:

  1. Retail analytics – Customer flow in the store and identifying high-traffic areas
  2. Sports analytics – Players’ movements across the field or court
  3. Industrial safety – Workers’ movement in work environments to promote compliance with safety protocols

After careful consideration, we made the decision to discontinue Rekognition people pathing on October 31, 2025. New customers will not be able to access the capability effective October 24, 2024, but existing customers will be able to use the capability as normal until October 31, 2025.

This post discusses an alternative solution to Rekognition people pathing and how you can implement this solution in your applications.

Alternatives to Rekognition people pathing

One alternative to Amazon Rekognition people pathing combines the open source ML model YOLOv9, which is used for object detection, and the open source ByteTrack algorithm, which is used for multi-object tracking.

Overview of YOLOv9 and ByteTrack

YOLOv9 is the latest in the YOLO object detection model series. It uses a specialized architecture called Generalized Efficient Layer Aggregation Network (GELAN) to analyze images efficiently. The model divides an image into a grid, quickly identifying and locating objects in each section in a single pass. It then refines its results using a technique called programmable gradient information (PGI) to improve accuracy, especially for easily missed objects. This combination of speed and accuracy makes YOLOv9 ideal for applications that need fast and reliable object detection.
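As a minimal illustration of this single-pass detection with the Ultralytics package (the image path is a placeholder):

from ultralytics import YOLO

# Load a pretrained YOLOv9 model (same model family used in the full script later in this post)
model = YOLO("yolov9e.pt")

# Run single-pass detection on one image; 'frame.jpg' is a placeholder path
results = model("frame.jpg", conf=0.25)

# Print the class name and confidence for each detected object
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))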

ByteTrack is an algorithm for tracking multiple moving objects in videos, such as people walking through a store. What makes it special is how it handles objects that are both straightforward and difficult to detect. Even when someone is partially hidden or in a crowd, ByteTrack can often still follow them. It’s designed to be fast and accurate, working well even when there are many people to track simultaneously.

When you combine YOLOv9 and ByteTrack for people pathing, you can review people’s movements across video frames. YOLOv9 provides person detections in each video frame. ByteTrack takes these detections and associates them across frames, creating consistent tracks for each individual, showing how people move through the video over time.

Example code

The following code example is a Python script that can be used as an AWS Lambda function or as part of your processing pipeline. You can also deploy YOLOv9 and ByteTrack for inference using Amazon SageMaker. SageMaker provides several options for model deployment, such as real-time inference, asynchronous inference, serverless inference, and batch inference. You can choose the most suitable option based on your business requirements.
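If you prefer SageMaker hosting over running the script locally or in Lambda, the following is a sketch of deploying a packaged model to a real-time endpoint with the SageMaker Python SDK. The S3 artifact, inference.py handler, framework version, and instance type are assumptions you would adapt, with the YOLOv9 and ByteTrack logic implemented inside the handler.

import sagemaker
from sagemaker.pytorch import PyTorchModel

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes this runs from a SageMaker notebook

# Placeholder artifact: a model.tar.gz containing yolov9e-seg.pt and inference.py
model = PyTorchModel(
    model_data="s3://your-bucket/yolov9-bytetrack/model.tar.gz",
    role=role,
    entry_point="inference.py",   # your custom handler implementing the tracking logic
    framework_version="2.1",
    py_version="py310",
    sagemaker_session=session,
)

# Deploy a real-time inference endpoint (choose an instance type for your workload)
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

# Later: predictor.predict(payload) runs inference on a frame or batch of frames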

Here’s a high-level breakdown of how the Python script is executed:

  1. Load the YOLOv9 model – This model is used for detecting objects in each frame.
  2. Start the ByteTrack tracker – This tracker assigns unique IDs to objects and tracks them across frames.
  3. Iterate through the video frame by frame – For each frame, the script detects objects, tracks their paths, and draws bounding boxes and labels around them. The tracking results are saved to a JSON file.
  4. Output the processed video – The final video is saved with all the detected and tracked objects, annotated on each frame.
# install and import necessary packages
!pip install opencv-python ultralytics
!pip install imageio[ffmpeg]

import cv2
import imageio
import json
from ultralytics import YOLO
from pathlib import Path

# Load an official Segment model from YOLOv9
model = YOLO('yolov9e-seg.pt') 

# define the function that changes YOLOV9 output to Person pathing API output format
def change_format(results, ts, person_only):
    #set person_only to True if you only want to track persons, not other objects.
    object_json = []

    for i, obj in enumerate(results.boxes):
        x_center, y_center, width, height = obj.xywhn[0]
        # Calculate Left and Top from center
        left = x_center - (width / 2)
        top = y_center - (height / 2)
        obj_name = results.names[int(obj.cls)]
        # Create dictionary for each object detected
        if (person_only and obj_name == "person") or not person_only:
            obj_data = {
                obj_name: {
                    "BoundingBox": {
                        "Height": float(height),
                        "Left": float(left),
                        "Top": float(top),
                        "Width": float(width)
                    },
                    "Index": int(obj.id)  # Object track index assigned by the tracker
                },
                "Timestamp": ts  # timestamp (frame index) of the detected object
            }
            # Append inside the filter so skipped objects are neither re-added nor undefined
            object_json.append(obj_data)

    return object_json

#  Function for person tracking with json outputs and optional videos with annotation 
def person_tracking(video_path, person_only=True, save_video=True):
    # open the video file
    reader = imageio.get_reader(video_path)
    frames = []
    i = 0
    all_object_data = []
    file_name = Path(video_path).stem

    for frame in reader:
        # Convert frame from RGB (imageio's default) to BGR (OpenCV's default)
        frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        try:
            # Run YOLOv9 tracking on the frame, persisting tracks between frames with bytetrack
            conf = 0.2
            iou = 0.5
            results = model.track(frame_bgr, persist=True, conf=conf, iou=iou, show=False, tracker="bytetrack.yaml")

            # change detection results to Person pathing API output formats.
            object_json = change_format(results[0], i, person_only)
            all_object_data.append(object_json)

            # Append the annotated frame to the frames list (for mp4 creation)
            annotated_frame = results[0].plot()
            frames.append(annotated_frame)
            i += 1

        except Exception as e:
            print(f"Error processing frame: {e}")
            break

    # save the object tracking array to json file
    with open(f'{file_name}_output.json', 'w') as file:
        json.dump(all_object_data, file, indent=4)
   
    # save annotated video
    if save_video:
        # Create a VideoWriter object of mp4
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        output_path = f"{file_name}_annotated.mp4"
        fps = reader.get_meta_data()['fps']
        frame_size = reader.get_meta_data()['size']
        video_writer = cv2.VideoWriter(output_path, fourcc, fps, frame_size)

        # Write each frame to the video and release the video writer object when done
        for frame in frames:
            video_writer.write(frame)
        video_writer.release()
        print(f"Video saved to {output_path}")

    return all_object_data
    
        
#main function to call 
video_path = './MOT17-09-FRCNN-raw.webm'
all_object_data = person_tracking(video_path, person_only=True, save_video=True)

Validation

We use the following video to showcase this integration. The video shows a football practice session, where the quarterback is starting a play.

The following table shows an example of the content from the JSON file with person tracking outputs by timestamp.

Timestamp   Person index   BBox Height   BBox Left   BBox Top   BBox Width
0           42             0.51017       0.67687     0.44032    0.17873
0           63             0.41175       0.05670     0.3148     0.07048
1           42             0.49158       0.69260     0.44224    0.16388
1           65             0.35100       0.06183     0.57447    0.06801
4           42             0.49799       0.70451     0.428963   0.13996
4           63             0.33107       0.05155     0.59550    0.09304
4           65             0.78138       0.49435     0.20948    0.24886
7           42             0.42591       0.65892     0.44306    0.0951
7           63             0.28395       0.06604     0.58020    0.13908
7           65             0.68804       0.43296     0.30451    0.18394

The following video shows the results with the people tracking output.

Other open source solutions for people pathing

Although YOLOv9 and ByteTrack offer a powerful combination for people pathing, several other open source alternatives are worth considering:

  1. DeepSORT – A popular algorithm that combines deep learning features with traditional tracking methods
  2. FairMOT – Integrates object detection and reidentification in a single network, offering users the ability to track objects in crowded scenes

These solutions can be effectively deployed using Amazon SageMaker for inference.

Conclusion

In this post, we outlined how you can test and implement YOLOv9 and ByteTrack as an alternative to Rekognition people pathing. Combined with AWS offerings such as AWS Lambda and Amazon SageMaker, you can implement these open source tools in your applications.


About the Authors

Fangzhou Cheng is a Senior Applied Scientist at AWS. He builds science solutions for Amazon Rekognition and Amazon Monitron to provide customers with state-of-the-art models. His areas of focus include generative AI, computer vision, and time-series data analysis.

Marcel Pividal is a Senior AI Services SA in the Worldwide Specialist Organization, bringing over 22 years of expertise in transforming complex business challenges into innovative technological solutions. As a thought leader in generative AI implementation, he specializes in developing secure, compliant AI architectures for enterprise-scale deployments across multiple industries.

Read More