Model customization, RAG, or both: A case study with Amazon Nova

As businesses and developers increasingly seek to optimize their language models for specific tasks, the decision between model customization and Retrieval Augmented Generation (RAG) becomes critical. In this post, we seek to address this growing need by offering clear, actionable guidelines and best practices on when to use each approach, helping you make informed decisions that align with your unique requirements and objectives.

The introduction of Amazon Nova models represents a significant advancement in the field of AI, offering new opportunities for large language model (LLM) optimization. In this post, we demonstrate how to effectively perform model customization and RAG with Amazon Nova models as a baseline. We conducted a comprehensive comparison study between model customization and RAG using the latest Amazon Nova models, and we share the resulting insights.

Approach and base model overview

In this section, we discuss the differences between a fine-tuning and RAG approach, present common use cases for each approach, and provide an overview of the base model used for experiments.

Demystifying RAG and model customization

RAG is a technique to enhance the capability of pre-trained models by allowing the model access to external domain-specific data sources. It combines two components: retrieval of external knowledge and generation of responses. It allows pre-trained language models to dynamically incorporate external data during the response-generation process, enabling more contextually accurate and updated outputs. Unlike fine-tuning, in RAG, the model doesn’t undergo any training and the model weights aren’t updated to learn the domain knowledge. Although fine-tuning implicitly uses domain-specific information by embedding the required knowledge directly into the model, RAG explicitly uses the domain-specific information through external retrieval.

Model customization refers to adapting a pre-trained language model to better fit specific tasks, domains, or datasets. Fine-tuning is one such technique, which helps in injecting task-specific or domain-specific knowledge for improving model performance. It adjusts the model’s parameters to better align with the nuances of the target task while using its general knowledge.

Common use cases for each approach

RAG is optimal for use cases requiring dynamic or frequently updated data (such as customer support FAQs and ecommerce catalogs), domain-specific insights (such as legal or medical Q&A), scalable solutions for broad applications (such as software as a service (SaaS) platforms), multimodal data retrieval (such as document summarization), and strict compliance with secure or sensitive data (such as financial and regulatory systems).

Conversely, fine-tuning thrives in scenarios demanding precise customization (such as personalized chatbots or creative writing), high accuracy for narrow tasks (such as code generation or specialized summarization), ultra-low latency (such as real-time customer interactions), stability with static datasets (such as domain-specific glossaries), and cost-efficient scaling for high-volume tasks (such as call center automation).

Although RAG excels at real-time grounding in external data and fine-tuning specializes in static, structured, and personalized workflows, choosing between them often depends on nuanced factors. This post offers a comprehensive comparison of RAG and fine-tuning, clarifying their strengths, limitations, and contexts where each approach delivers the best performance.

Introduction to Amazon Nova models

Amazon Nova is a new generation of foundation models (FMs) offering frontier intelligence and industry-leading price-performance. Amazon Nova Pro and Amazon Nova Lite are multimodal models excelling in accuracy and speed, with Amazon Nova Lite optimized for low-cost, fast processing. Amazon Nova Micro focuses on text tasks with ultra-low latency. They offer fast inference, support agentic workflows with Amazon Bedrock Knowledge Bases and RAG, and allow fine-tuning for text and multimodal data. Optimized for cost-effective performance, they are trained on data in over 200 languages.

Solution overview

To evaluate the effectiveness of RAG compared to model customization, we designed a comprehensive testing framework using a set of AWS-specific questions. Our study used Amazon Nova Micro and Amazon Nova Lite as baseline FMs and tested their performance across different configurations.

We structured our evaluation as follows:

  • Base model:
    • Used out-of-box Amazon Nova Micro and Amazon Nova Lite
    • Generated responses to AWS-specific questions without additional context
  • Base model with RAG:
    • Connected the base models to Amazon Bedrock Knowledge Bases
    • Provided access to relevant AWS documentation and blogs
  • Model customization:
    • Fine-tuned both Amazon Nova models using 1,000 AWS-specific question-answer pairs generated from the same set of AWS articles
    • Deployed the customized models through provisioned throughput
    • Generated responses to AWS-specific questions with fine-tuned models
  • Model customization and RAG combined approach:
    • Connected the fine-tuned models to Amazon Bedrock Knowledge Bases
    • Provided fine-tuned models access to relevant AWS articles at inference time

In the following sections, we walk through how to set up the second and third approaches (base model with RAG and model customization with fine-tuning) in Amazon Bedrock.

Prerequisites

To follow along with this post, you need the following prerequisites:

  • An AWS account and appropriate permissions
  • An Amazon Simple Storage Service (Amazon S3) bucket with two folders: one containing your training data, and one for your model output and training metrics

Implement RAG with the baseline Amazon Nova model

In this section, we walk through the steps to implement RAG with the baseline model. To do so, we create a knowledge base. Complete the following steps:

  1. On the Amazon Bedrock console, choose Knowledge Bases in the navigation pane.
  2. Under Knowledge Bases, choose Create.


  3. On the Configure data source page, provide the following information:
    1. Specify the Amazon S3 location of the documents.
    2. Specify a chunking strategy.
  4. Choose Next.


  5. On the Select embeddings model and configure vector store page, provide the following information:
    1. In the Embeddings model section, choose your embeddings model, which is used for embedding the chunks.
    2. In the Vector database section, create a new vector store or use an existing one where the embeddings will be stored for retrieval.
  6. Choose Next.


  7. On the Review and create page, review the settings and choose Create Knowledge Base.
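
After the knowledge base is ready, you can also query it programmatically. The following is a minimal sketch using the RetrieveAndGenerate API of the bedrock-agent-runtime client; the knowledge base ID, Region, and model ARN are placeholders that you would replace with your own values.

import boto3

# Runtime client for querying Amazon Bedrock Knowledge Bases
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "How do I enable cross-Region replication for an S3 bucket?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-lite-v1:0",
        },
    },
)

# The generated answer, grounded in the retrieved document chunks
print(response["output"]["text"])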


Fine-tune an Amazon Nova model using the Amazon Bedrock API

In this section, we provide detailed walkthroughs on fine-tuning and hosting customized Amazon Nova models using Amazon Bedrock. The following diagram illustrates the solution architecture.


Create a fine-tuning job

Fine-tuning Amazon Nova models through the Amazon Bedrock API is a streamlined process:

  1. On the Amazon Bedrock console, choose us-east-1 as your AWS Region.

At the time of writing, Amazon Nova model fine-tuning is exclusively available in us-east-1.

  2. Choose Custom models under Foundation models in the navigation pane.
  3. Under Customization methods, choose Create Fine-tuning job.


  4. For Source model, choose Select model.
  5. Choose Amazon as the provider and the Amazon Nova model of your choice.
  6. Choose Apply.


  7. For Fine-tuned model name, enter a unique name for the fine-tuned model.
  8. For Job name, enter a name for the fine-tuning job.
  9. Under Input data, enter the location of the source S3 bucket (training data) and target S3 bucket (model outputs and training metrics), and optionally the location of your validation dataset.


Configure hyperparameters

For Amazon Nova models, the following hyperparameters can be customized:

Parameter                     Range/Constraints
Epochs                        1–5
Batch Size                    Fixed at 1
Learning Rate                 0.000001–0.0001
Learning Rate Warmup Steps    0–100
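
If you create the fine-tuning job programmatically instead of through the console, these hyperparameters are passed as strings in the CreateModelCustomizationJob API call. The following boto3 sketch is illustrative only: the job and model names, role ARN, S3 URIs, and especially the hyperparameter key names (epochCount, batchSize, learningRate, learningRateWarmupSteps) are assumptions to verify against the Amazon Nova customization documentation.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="nova-micro-ft-job",                                     # illustrative job name
    customModelName="nova-micro-aws-qa",                             # illustrative model name
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuningRole",  # placeholder IAM role
    baseModelIdentifier="amazon.nova-micro-v1:0",                    # base model ID or ARN
    trainingDataConfig={"s3Uri": "s3://your-bucket/train/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://your-bucket/output/"},
    hyperParameters={                                                # assumed key names
        "epochCount": "2",
        "batchSize": "1",
        "learningRate": "0.00001",
        "learningRateWarmupSteps": "10",
    },
)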

Prepare the dataset for compatibility with Amazon Nova models

Similar to other LLMs, Amazon Nova requires prompt-completion pairs, also known as question and answer (Q&A) pairs, for supervised fine-tuning (SFT). This dataset should contain the ideal outputs you want the language model to produce for specific tasks or prompts. Refer to Guidelines for preparing your data for Amazon Nova for best practices and example formats when preparing datasets for fine-tuning Amazon Nova models.
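
For illustration only, a single SFT record written to a JSON Lines training file might look like the following; the exact field names and schema required by Amazon Nova should be taken from the guidelines linked above.

import json

# Illustrative prompt-completion record -- the field names below are assumptions,
# so check the Amazon Nova data preparation guidelines for the exact schema.
record = {
    "system": [{"text": "You are an AWS expert assistant."}],
    "messages": [
        {"role": "user", "content": [{"text": "What is Amazon S3 Cross-Region Replication?"}]},
        {"role": "assistant", "content": [{"text": "Cross-Region Replication automatically copies objects from a source S3 bucket to a destination bucket in another AWS Region."}]},
    ],
}

# Training data is supplied as JSON Lines: one record per line
with open("train.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")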

Examine fine-tuning job status and training artifacts

After you create your fine-tuning job, choose Custom models under Foundation models in the navigation pane. You will find the current fine-tuning job listed under Jobs. You can use this page to monitor your fine-tuning job status.


When your fine-tuning job status changes to Complete, you can choose the job name and navigate to the Training job overview page. You will find the following information:

  • Training job specifications
  • Amazon S3 location for input data used for fine-tuning
  • Hyperparameters used during fine-tuning
  • Amazon S3 location for training output


Host the fine-tuned model with provisioned throughput

After your fine-tuning job completes successfully, you can access your customized model through the following steps:

  1. On the Amazon Bedrock console, choose Custom models under Foundation models in the navigation pane.
  2. Under Models, choose your custom model.


The model details page shows the following information:

  • Fine-tuned model details
  • Amazon S3 location for input data used for fine-tuning
  • Hyperparameters used during fine-tuning
  • Amazon S3 location for training output


  3. To make your fine-tuned model available for inference, choose Purchase provisioned throughput.
  4. Choose a commitment term (no commitment, 1 month, or 6 months) and review the associated cost for hosting the fine-tuned models.

After the customized model is hosted through provisioned throughput, a model ID will be assigned and can be used for inference.

The aforementioned fine-tuning and inference steps can also be done programmatically. For more information, refer to the following GitHub repo, which contains sample code.
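
As a rough illustration of those programmatic steps, the following boto3 sketch purchases no-commitment provisioned throughput for a custom model and then invokes it with the Converse API; the names and ARNs are placeholders, and the repository above contains the complete, tested version.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Purchase provisioned throughput for the fine-tuned model (no-commitment term)
pt = bedrock.create_provisioned_model_throughput(
    provisionedModelName="nova-micro-aws-qa-pt",  # illustrative name
    modelId="arn:aws:bedrock:us-east-1:111122223333:custom-model/your-custom-model",  # placeholder
    modelUnits=1,
)

# After the provisioned model reaches the InService status, invoke it like any other model
response = bedrock_runtime.converse(
    modelId=pt["provisionedModelArn"],
    messages=[{"role": "user", "content": [{"text": "What is AWS Fargate?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])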

Evaluation framework and results

In this section, we first introduce our multi-LLM-judge evaluation framework, which is set up to mitigate an individual LLM judge’s bias. We then compare RAG vs. fine-tuning results in terms of response quality as well as latency and token implications.

Multiple LLMs as judges to mitigate bias

The following diagram illustrates our workflow using multiple LLMs as judges.


Using LLMs as judges has become an increasingly popular approach to evaluate tasks that are challenging to assess through traditional methods or human evaluation. For our evaluation framework, we constructed 10 domain-specific test questions covering key aspects of AWS services and features, designed to test both factual accuracy and depth of understanding. Each model-generated response was evaluated using a standardized scoring system on a scale of 0–10, where 0–3 indicates incorrect or misleading information, 4–6 represents partially correct but incomplete answers, 7–8 signifies mostly correct with minor inaccuracies, and 9–10 denotes completely accurate with comprehensive explanation.

We use the following LLM judge evaluation prompt:

{
    "system_prompt": "You are a helpful assistant.",
    "prompt_template": "[Instruction] Please act as an impartial judge and evaluate the quality of the response provided by an AI assistant to the user question displayed below. Your evaluation should consider factors such as the helpfulness, relevance, accuracy, depth, creativity, and level of detail of the response. Begin your evaluation by providing a short explanation. Be as objective as possible. After providing your explanation, you must rate the response on a scale of 1 to 10 by strictly following this format: \"[[rating]]\", for example: \"Rating: [[5]]\".\n\n[Question]\n{question}\n\n[The Start of Assistant's Answer]\n{answer}\n[The End of Assistant's Answer]",
    "description": "Prompt for general questions",
    "category": "general",
    "output_format": "[[rating]]"
}

We use the following sample evaluation question and ground truth:


{
    "question_id": 9161,
    "category": "AWS",
    "turns": [
        " "What specific details are collected and sent to AWS when anonymous operational metrics are enabled for an Amazon EFS file system?",
        "What's required for a successful AWS CloudFormation launch?"
    ],
    "reference": [
        "When anonymous operational metrics are enabled for an Amazon EFS file system, the following specific details are collected and sent to AWS: Solution ID, Unique ID, Timestamp, Backup ID, Backup Start Time, Backup Stop Time, Backup Window, Source EFS Size, Destination EFS Size, Instance Type, Retain, S3 Bucket Size, Source Burst Credit Balance, Source Burst Credit Balance Post Backup, Source Performance Mode, Destination Performance Mode, Number of Files, Number of Files Transferred, Total File Size, Total Transferred File Size, Region, Create Hard Links Start Time, Create Hard Links Stop Time, Remove Snapshot Start Time, Remove Snapshot Stop Time, Rsync Delete Start Time, Rsync Delete Stop Time.",
        "For a successful AWS CloudFormation launch, you need to sign in to the AWS Management Console, choose the correct AWS Region, use the button to launch the template, verify the correct template URL, assign a name to your solution stack, review and modify the parameters as necessary, review and confirm the settings, check the boxes acknowledging that the template creates AWS Identity and Access Management resources and may require an AWS CloudFormation capability, and choose Create stack to deploy the stack. You should receive a CREATE_COMPLETE status in approximately 15 minutes."
    ]
}

To mitigate potential intrinsic biases among different LLM judges, we adopted two LLM judges to evaluate the model-generated responses: Anthropic’s Claude 3.5 Sonnet and Meta’s Llama 3.1 70B. Each judge was provided with the original test question, the model-generated response, and specific scoring criteria focusing on factual accuracy, completeness, relevance, and clarity. Overall, we observed a high level of rank correlation among the LLM judges in assessing different approaches, with consistent evaluation patterns across all test cases.
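
To aggregate scores, each judge’s output is parsed for the [[rating]] pattern defined in the evaluation prompt and the ratings are averaged across judges. The following minimal sketch shows only that post-processing step; the judge responses shown are hypothetical, and in practice they come from separate invocations of the two judge models.

import re
import statistics

RATING_PATTERN = re.compile(r"\[\[(\d+(?:\.\d+)?)\]\]")

def extract_rating(judge_response: str) -> float:
    """Pull the [[rating]] value emitted by an LLM judge, per the prompt's output format."""
    match = RATING_PATTERN.search(judge_response)
    if match is None:
        raise ValueError("No [[rating]] found in judge response")
    return float(match.group(1))

# Hypothetical judge outputs for one model-generated response
claude_judgement = "The answer is accurate and complete. Rating: [[9]]"
llama_judgement = "Mostly correct with minor omissions. Rating: [[8]]"

scores = [extract_rating(r) for r in (claude_judgement, llama_judgement)]
print(statistics.mean(scores))  # average score across judges for this response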

Response quality comparison

Both fine-tuning and RAG significantly improve the quality of generated responses on AWS-specific questions over the base model. Using Amazon Nova Lite as the base model, we observed that both fine-tuning and RAG improved the average LLM judge score on response quality by 30%, whereas combining fine-tuning with RAG enhanced the response quality by a total of 83%, as shown in the following figure.


Notably, our evaluation revealed an interesting finding (as shown in the following figure): when combining fine-tuning and RAG approaches, smaller models like Amazon Nova Micro showed significant performance improvements in domain-specific tasks, nearly matching the performance of bigger models. This suggests that for specialized use cases with well-defined scope, using smaller models with both fine-tuning and RAG could be a more cost-effective solution compared to deploying larger models.


Latency and token implications

In addition to enhancing the response quality, both fine-tuning and RAG help reduce the response generation latency compared to the base model. For both Amazon Nova Micro and Amazon Nova Lite, fine-tuning reduced the base model latency by approximately 50%, whereas RAG reduced it by about 30%, as shown in the following figure.


Fine-tuning also presented the unique advantage of improving the tone and style of the generated answers to align more closely with the training data. In our experiments, the average total tokens (input and output tokens) dropped by more than 60% with both fine-tuned models. However, the average total tokens more than doubled with the RAG approach because of the retrieved context passed in the prompt, as shown in the following figure. This finding suggests that for latency-sensitive use cases or when the objective is to align the model’s responses to a specific tone, style, or brand voice, model customization might offer more business value.


Conclusion

In this post, we compared model customization (fine-tuning) and RAG for domain-specific tasks with Amazon Nova. We first provided a detailed walkthrough on how to fine-tune, host, and conduct inference with customized Amazon Nova through the Amazon Bedrock API. We then adopted an LLM-as-a-judge approach to evaluate response quality from different approaches. In addition, we examined the latency and token implications of different setups.

Both fine-tuning and RAG improved the model performance. Depending on the task and evaluation criteria, model customization showed similar, or sometimes better, performance compared to RAG. Model customization can also help improve the style and tone of a generated answer. In this experiment, the customized model’s responses follow the succinct answer style of the given training data, which resulted in lower latency compared to the baseline counterpart. Model customization can also be used for many use cases where RAG isn’t as straightforward to use, such as tool calling, sentiment analysis, entity extraction, and more. Overall, we recommend combining model customization and RAG for question answering or similar tasks to maximize performance.

For more information on Amazon Bedrock and the latest Amazon Nova models, refer to the Amazon Bedrock User Guide and Amazon Nova User Guide. The AWS Generative AI Innovation Center has a group of AWS science and strategy experts with comprehensive expertise spanning the generative AI journey, helping customers prioritize use cases, build a roadmap, and move solutions into production. Check out the Generative AI Innovation Center for our latest work and customer success stories.


About the Authors

Mengdie (Flora) Wang is a Data Scientist at AWS Generative AI Innovation Center, where she works with customers to architect and implement scalable Generative AI solutions that address their unique business challenges. She specializes in model customization techniques and agent-based AI systems, helping organizations harness the full potential of generative AI technology. Prior to AWS, Flora earned her Master’s degree in Computer Science from the University of Minnesota, where she developed her expertise in machine learning and artificial intelligence.

Sungmin Hong is a Senior Applied Scientist at the Amazon Generative AI Innovation Center, where he helps expedite the variety of use cases of AWS customers. Before joining Amazon, Sungmin was a postdoctoral research fellow at Harvard Medical School. He holds a Ph.D. in Computer Science from New York University. Outside of work, he prides himself on keeping his indoor plants alive for 3+ years.

Jae Oh Woo is a Senior Applied Scientist at the AWS Generative AI Innovation Center, where he specializes in developing custom solutions and model customization for a diverse range of use cases. He has a strong passion for interdisciplinary research that connects theoretical foundations with practical applications in the rapidly evolving field of generative AI. Prior to joining Amazon, Jae Oh was a Simons Postdoctoral Fellow at the University of Texas at Austin, where he conducted research across the Mathematics and Electrical and Computer Engineering departments. He holds a Ph.D. in Applied Mathematics from Yale University.

Rahul Ghosh is an Applied Scientist at Amazon’s Generative AI Innovation Center, where he works with AWS customers across different verticals to expedite their use of Generative AI. Rahul holds a Ph.D. in Computer Science from the University of Minnesota.

Baishali Chaudhury is an Applied Scientist at the Generative AI Innovation Center at AWS, where she focuses on advancing generative AI solutions for real-world applications. She has a strong background in computer vision, machine learning, and AI for healthcare. Baishali holds a PhD in Computer Science from the University of South Florida and completed a postdoc at Moffitt Cancer Center.

Anila Joshi has more than a decade of experience building AI solutions. As an AWSI Geo Leader at the AWS Generative AI Innovation Center, Anila pioneers innovative applications of AI that push the boundaries of possibility and accelerates the adoption of AWS services by helping customers ideate, identify, and implement secure generative AI solutions.


Generate user-personalized communication with Amazon Personalize and Amazon Bedrock

Today, businesses are using AI and generative models to improve productivity in their teams and provide better experiences to their customers. Personalized outbound communication can be a powerful tool to increase user engagement and conversion.

For instance, as a marketing manager for a video-on-demand company, you might want to send personalized email messages tailored to each individual user—taking into account their demographic information, such as gender and age, and their viewing preferences. You want the messaging and movie recommendations to be both engaging and applicable to the customer. To achieve this, you can use Amazon Personalize to generate user-personalized recommendations and Amazon Bedrock to generate the text of the email.

Amazon Personalize enables your business to improve customer engagement by creating personalized product and content recommendations in websites, applications, and targeted marketing campaigns. You can get started without any prior machine learning (ML) experience, and Amazon Personalize allows you to use APIs to build sophisticated personalization capabilities. Using this service, all your data is encrypted to be private and secure, and is only used to create recommendations for your users.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can experiment with and evaluate top FMs for your use case, customize a model using fine-tuning, ground the model output in your data using Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources.

In this post, we demonstrate how to use Amazon Personalize and Amazon Bedrock to generate personalized outreach emails for individual users using a video-on-demand use case. This concept can be applied to other domains, such as compelling customer experiences for ecommerce and digital marketing use cases.

Solution overview

The following diagram shows how you can use Amazon Personalize and Amazon Bedrock to generate user-personalized outreach messages for each user.

Workflow Diagram: 1. Import your user, item, and interaction data into Amazon Personalize. 2. Train an Amazon Personalize “Top picks for you” recommender. 3. Get the top recommended movies for each user. 4. Use a prompt template, the recommended movies, and the user demographics to generate the model prompt. 5. Use Amazon Bedrock LLMs to generate personalized outbound communication with the prompt. 6. Share the personalized outbound communication with each of your users.

The workflow consists of the following steps:

  1. Import your user, item, and interaction data into Amazon Personalize. The user and item datasets are not required for Amazon Personalize to generate recommendations, but providing good item and user metadata provides the best results in your trained models.
  2. Train an Amazon Personalize “Top picks for you” recommender. Amazon Personalize recommenders are domain-specific resources that generate recommendations. When you create an Amazon Personalize recommender, Amazon Personalize trains the models backing the recommender with the best configurations for the use case. In our example, we use the “Top picks for you” recommender. This recommender generates personalized content recommendations for a user that you specify. With this use case, Amazon Personalize automatically filters videos the user watched.
  3. After the model is trained, you can get the top recommended movies for each user by querying the recommender with each user ID through the Amazon Personalize Runtime API.
  4. Combine a predefined prompt template with the top recommendations and user demographic information to generate an enhanced prompt.
  5. Use the enhanced prompt in Amazon Bedrock through its API to generate your personalized outbound communication.
  6. Amazon Bedrock returns the personalized outbound communication that you can email to your users.

We go deeper into each of these steps in the following sections. A code sample for this use case is available on AWS Samples on GitHub.

Prerequisites

To generate personalized recommendations, you must first set up Amazon Personalize resources. You start by creating your dataset group, loading your data, and then training a recommender. For full instructions, see Getting started tutorials.

    1. Create a dataset group.
    2. Create an Interactions dataset using the following schema:
      {
          "type": "record"
          "name": "Interactions",
          "namespace": "com.amazonaws.personalize.schema",
          "fields": [
              {
                  "name": "USER_ID",
                  "type": "string"
              },
              {
                  "name": "ITEM_ID",
                  "type": "string"
              },
              {
                  "name": "TIMESTAMP",
                  "type": "long"
              },
              {
                  "name": "EVENT_TYPE",
                  "type": "string"
              }
          ],
          "version": "1.0"
      }

      Interaction data consists of information about the user interactions with the content in your application. This usually comes from analytics tools or a customer data platform (CDP). The best interaction data to use in Amazon Personalize includes the sequential order of user behavior and the content the user watched or clicked on. For this example, we use the ml-latest-small dataset from the MovieLens dataset to simulate user-item interactions.

    3. Import the interaction data to Amazon Personalize from Amazon Simple Storage Service (Amazon S3). For this example, we convert the data to the appropriate format following the steps in the notebook 01_Introduction_and_Data_Preparation.
    4. Item data consists of information about the content that is being interacted with, which generally comes from a content management system (CMS) in video-on-demand use cases. This can be information like the title, description, or movie genre. To provide additional metadata, and also to provide a consistent experience for our users, we use a subset of the IMDb Essential Metadata for Movies/TV/OTT (Bulk data) dataset, one of several IMDb datasets available in AWS Data Exchange. For this post, we have extracted and prepared that subset for use. With this data, create an Items dataset using the following schema:
      items_schema = {
          "type": "record",
          "name": "Items",
          "namespace": "com.amazonaws.personalize.schema",
          "fields": [
              {
                  "name": "ITEM_ID",
                  "type": "string"
              },
              {
                  "name": "TITLE",
                  "type": "string"
              },
              {
                  "name": "YEAR",
                  "type": "int"
              },
              {
                  "name": "IMDB_RATING",
                  "type": "int"
              },
              {
                  "name": "IMDB_NUMBEROFVOTES",
                  "type": "int"
              },
              {
                  "name": "PLOT",
                  "type": "string",
                  "textual": True
              },
              {
                  "name": "US_MATURITY_RATING_STRING",
                  "type": "string"
              },
              {
                  "name": "US_MATURITY_RATING",
                  "type": "int"
              },
              {
                  "name": "GENRES",
                  "type": "string",
                  "categorical": True
              },
              {
                  "name": "CREATION_TIMESTAMP",
                  "type": "long"
              },
              {
                  "name": "PROMOTION",
                  "type": "string"
              }
          ],
          "version": "1.0
      }

    5. Import the item data to Amazon Personalize from Amazon S3. For this example, we convert the data to the appropriate format following the steps in the notebook 01_Introduction_and_Data_Preparation.
      For more information on formatting and importing your interactions and items data from Amazon S3, see Importing bulk records.
    6. Create a recommender. In this example, we create a “Top picks for you” recommender.

Get personalized recommendations using Amazon Personalize

Now that we have trained the “Top picks for you” recommender, we can generate recommendations for our users. For more details and ways to use Amazon Personalize to get recommendations, see Getting recommendations from Amazon Personalize. We include the item metadata in the response so we can use this information in our outbound communication in the next step. You can use the following code to get recommended movies for each user:

import boto3

# Runtime client for querying the trained Amazon Personalize recommender
personalize_runtime = boto3.client('personalize-runtime')

get_recommendations_response = personalize_runtime.get_recommendations(
    recommenderArn = workshop_recommender_top_picks_arn,
    userId = str(user_id),
    numResults = number_of_movies_to_recommend,
    metadataColumns = {
        "ITEMS": [
            'TITLE', 'PLOT', 'GENRES']
        }
)

In the request, we can specify the metadata columns from the items dataset that we want the recommender to return. In this case, we request the Title, Plot, and Genres of the recommended movies. You can request metadata columns only if this feature was enabled when the recommender was created.

For an example user_Id, the following movies are recommended:

Title: There's Something About Mary
Genres: Comedy and Romance
Plot: A man gets a chance to meet up with his dream girl from high school, even though his date with her back then was a complete disaster.

Title: Shakespeare in Love
Genres: Comedy and Drama and History and Romance
Plot: The world's greatest ever playwright, William Shakespeare, is young, out of ideas and short of cash, but meets his ideal woman and is inspired to write one of his most famous plays.

Title: The Birdcage
Genres: Comedy
Plot: A gay cabaret owner and his drag queen companion agree to put up a false straight front so that their son can introduce them to his fiancée's right-wing moralistic parents.

Get the user’s favorite movie genre

To provide a better personalized outbound communication experience, we determine the user’s favorite movie genre based on the genres of all the movies they have interacted with in the past. There are a number of ways to do this, such as counting the number of interactions per genre for our user. In this example, our sample user’s favorite genre is Comedy.
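
As a simple example, the favorite genre can be approximated by counting genre occurrences across the user’s past interactions. The sketch below assumes you have already joined the interactions data with the item metadata into a pandas DataFrame that has USER_ID and GENRES columns, with genres pipe-separated as in the MovieLens data.

from collections import Counter

import pandas as pd

def favorite_genre(user_interactions: pd.DataFrame) -> str:
    """Return the most frequent genre among the movies a user interacted with."""
    counts = Counter(
        genre
        for genres in user_interactions["GENRES"].dropna()  # e.g. "Comedy|Romance"
        for genre in genres.split("|")
    )
    return counts.most_common(1)[0][0]

# Usage: favorite_genre(interactions_with_items[interactions_with_items["USER_ID"] == user_id])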

Generate personalized marketing emails with recommended movies

To generate personalized marketing emails, we use Amazon Bedrock. Amazon Bedrock users must request access to models before they are available for use. Amazon Bedrock is a fully managed service that makes base models from Amazon and third-party model providers accessible through an API.

To request access, choose Model access in the navigation pane on the Amazon Bedrock console. For more information, see Access Amazon Bedrock foundation models.

In this example, we use Anthropic’s Claude 3.7 Sonnet on Amazon Bedrock and have defined the following configuration parameters:

# The LLM we will be using
model_id = 'us.anthropic.claude-3-7-sonnet-20250219-v1:0'

# The maximum number of tokens to use in the generated response
max_tokens_to_sample = 1000

Let’s generate a simple outreach email using the recommended movies and the following prompt template:

prompt_template = f'''Write a marketing email advertising several movies available in a video-on-demand streaming platform next week, given the movie and user information below. The movies to recommend and their information is contained in the <movie> tag. Put the email between <email> tags.

<movie>
{movie_list}
</movie>

Assistant: Email body:
<email>
'''

Using the recommended movies, the full prompt is as follows:

"Write a marketing email advertising several movies available in a video-on-demand streaming platform next week, given the movie and user information below. The movies to recommend and their information is contained in the <movie> tag. Put the email between <email> tags.
n
n
<movie>
n
[
{
'title': "There's Something About Mary",
'genres': 'Comedy and Romance',
'plot': 'A man gets a chance to meet up with his dream girl from high school, even though his date with her back then was a complete disaster.'
},
{
'title': 'Shakespeare in Love',
'genres': 'Comedy and Drama and History and Romance',
'plot': "The world's greatest ever playwright, William Shakespeare, is young, out of ideas and short of cash, but meets his ideal woman and is inspired to write one of his most famous plays."
},
{
'title': 'The Birdcage',
'genres': 'Comedy',
'plot': "A gay cabaret owner and his drag queen companion agree to put up a false straight front so that their son can introduce them to his fiancu00e9e's right-wing moralistic parents."
}
]
n
</movie>
n
n
Assistant: Email body:
n
<email>.
"

We then use an Amazon Bedrock API call to generate the personalized email. For more information, see Amazon Bedrock API Reference.

import json

import boto3

# Runtime client for invoking models on Amazon Bedrock
bedrock_client = boto3.client('bedrock-runtime')

request_body = json.dumps({
    "max_tokens": max_tokens_to_sample,
    "messages": [{"role": "user", "content": prompt}],
    "anthropic_version": "bedrock-2023-05-31"
})

personalized_email_response = bedrock_client.invoke_model(
    body = request_body,
    modelId = model_id  # the Claude model ID defined earlier
)
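
The response body returned by invoke_model for Anthropic models follows the Messages API format, so the generated email text can be read roughly as follows (assuming the variables from the preceding snippet):

# Parse the streaming response body and pull out the generated email text
response_body = json.loads(personalized_email_response["body"].read())
personalized_email = response_body["content"][0]["text"]
print(personalized_email)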

Amazon Bedrock returns a personalized email for the user:

Subject: Your Weekend Movie Escape Awaits! Three Laugh-Out-Loud Comedies Coming Next Week

Hi there,

Need a break from reality? We’ve got you covered with three fantastic comedies hitting our streaming platform next week!

## This Week’s Spotlight: Comedy Gems That Will Make Your Day

**There’s Something About Mary**
This hilarious romantic comedy follows a man who finally gets a second chance with his high school dream girl—after their first date went hilariously wrong. With unforgettable laughs and heartwarming moments, it’s the perfect weekend watch!

**Shakespeare in Love**
When the greatest playwright of all time faces writer’s block and money troubles, an unexpected romance changes everything! This award-winning comedy-drama blends history, romance, and witty humor as Shakespeare finds his muse and creates one of his most beloved plays. A delightful escape for literature lovers and romantics alike!

**The Birdcage**
Prepare for non-stop laughter in this comedy classic! When a gay cabaret owner and his drag queen partner pretend to be straight to impress their future in-laws (who happen to be ultra-conservative), chaos and hilarity ensue. A perfect blend of humor and heart that still resonates today.

So grab your popcorn, get comfortable on the couch, and enjoy these comedy classics starting next week!

Happy streaming!

The Movies-On-Demand Team

P.S. Don’t forget to check out our complete catalog for more great films in every genre!

Although this is already a good outreach email because the recommendations are personalized to the user, we can personalize it further by adding more information about the user.

Generate personalized communication with recommended movies, user demographic information, and favorite genre

We will generate emails by assuming two different demographics for the users as well as their favorite genre.

The version of the ml-latest-small dataset from the MovieLens dataset we used in this example doesn’t contain demographic data; therefore, we will try out multiple options. In a real-world scenario, you might know the demographics of your audience.

To experiment, let’s use the following example demographic:

# Sample user demographics
user_demographic_1 = f'The user is a 50 year old adult called Otto.'

We also add the user’s favorite genre to the prompt as follows:

prompt_template = f'''You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week,
given the movie and user information below. Do not add additional information. Your email will leverage the power of storytelling and persuasive language.
You want the email to impress the user, so make it appealing to them based on the information contained in the <user> tags,
and take into account the user's favorite genre in the <genre> tags.
The movies to recommend and their information is contained in the <movie> tag.
All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them.
Put the email between <email> tags.

<user>
{user_demographic}
</user>

<genre>
{favorite_genre}
</genre>

<movie>
{movie_list}
</movie>

Assistant:

<email>
'''

After adding the information, the new prompt is as follows:

"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, given the movie and user information below. Do not add additional information. Your email will leverage the power of storytelling and persuasive language. You want the email to impress the user, so make it appealing to them based on the information contained in the <user> tags, and take into account the user's favorite genre in the <genre> tags. The movies to recommend and their information is contained in the <movie> tag. All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. Put the email between <email> tags.
n
n
<user>
n
The user is a 50 year old adult called Otto.
n
</user>
n
n
<genre>
n
Comedy
n
</genre>
n
n
<movie>
n
[
{
'title': "There's Something About Mary",
'genres': 'Comedy and Romance',
'plot': 'A man gets a chance to meet up with his dream girl from high school, even though his date with her back then was a complete disaster.'
},
{
'title': 'Shakespeare in Love',
'genres': 'Comedy and Drama and History and Romance',
'plot': "The world's greatest ever playwright, William Shakespeare, is young, out of ideas and short of cash, but meets his ideal woman and is inspired to write one of his most famous plays."
},
{
'title': 'The Birdcage',
'genres': 'Comedy',
'plot': "A gay cabaret owner and his drag queen companion agree to put up a false straight front so that their son can introduce them to his fiancu00e9e's right-wing moralistic parents."
}
]
n
</movie>
n
n
Assistant:
n
<email>
n    "

Amazon Bedrock returns a personalized email for the user:

Subject: Otto, Get Ready for a Comedy Extravaganza on Your Screen Next Week!

Dear Otto,

We’re thrilled to bring you an exclusive lineup of comedy classics hitting our streaming platform next week! As someone who appreciates a good laugh, you’re in for a treat with these award-winning comedies that will brighten your evenings.

## “There’s Something About Mary”
This hilarious romantic comedy follows the misadventures of a man who finally gets a second chance with his high school dream girl. After a first date that was nothing short of catastrophic, he’s determined to make things right years later. With its perfect blend of outrageous humor and heartwarming moments, this comedy classic delivers laughs that have stood the test of time.

## “Shakespeare in Love”
Experience the witty and charming story of a young, broke William Shakespeare who finds his muse in the most unexpected place. This brilliant comedy-drama offers a fictional account of how the greatest playwright found inspiration through love. With its clever dialogue, historical setting, and romantic storyline, this Academy Award-winning film combines your love of comedy with rich storytelling that will keep you engaged from beginning to end.

## “The Birdcage”
A comedy masterpiece that delivers non-stop laughs! When a gay cabaret owner and his flamboyant partner must pretend to be straight to impress their future in-laws (who happen to be ultra-conservative), chaos ensues. The brilliant performances and hilarious situations make this one of the most beloved comedies of its era. It’s the perfect film for when you need genuine belly laughs and brilliant comedic timing.

Otto, these comedies are among the best in their genre and will be available for your enjoyment starting next week. Whether you’re in the mood for slapstick humor, clever wit, or situational comedy, this collection has something perfect for your evening entertainment.

Grab your favorite snack, get comfortable on the couch, and prepare for an unforgettable comedy marathon!

Happy streaming!

The VOD Team

The email now contains information about the user’s favorite genre and is personalized to the user, using their name and recommending the movies the user is most likely to be interested in.

Clean up

Make sure you clean up any unused resources you created in your account while following the steps outlined in this post. You can delete filters, recommenders, datasets, and dataset groups using the AWS Management Console or the Python SDK.
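
If you prefer to clean up programmatically, the following boto3 sketch shows the corresponding delete calls; the ARNs are placeholders, and resources must be deleted in dependency order (filters and recommenders before datasets, and datasets before the dataset group).

import boto3

personalize = boto3.client("personalize")

# Placeholder ARNs -- replace with the resources you created
personalize.delete_recommender(recommenderArn="arn:aws:personalize:region:account:recommender/your-recommender")
personalize.delete_dataset(datasetArn="arn:aws:personalize:region:account:dataset/your-dataset-group/INTERACTIONS")
personalize.delete_dataset(datasetArn="arn:aws:personalize:region:account:dataset/your-dataset-group/ITEMS")
personalize.delete_dataset_group(datasetGroupArn="arn:aws:personalize:region:account:dataset-group/your-dataset-group")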

Conclusion

Traditional AI and generative AI allow you to build hyper-personalized experiences for your users. In this post, we showed how to generate personalized outbound communication by getting personalized recommendations for each user using Amazon Personalize and then using user preferences and demographic information to write a personalized email communication using Amazon Bedrock. By using AWS managed services, such as Amazon Personalize and Amazon Bedrock, you can create this content with only a few API calls—no ML experience required.

For more information about Amazon Personalize, see the Amazon Personalize Developer Guide. For more information on working with generative AI on AWS, see Announcing New Tools for Building with Generative AI on AWS.


About the Author

Anna Grüebler Clark is a Specialist Solutions Architect at AWS focusing on artificial intelligence. She has more than 16 years of experience helping customers develop and deploy machine learning applications. Her passion is taking new technologies and putting them in the hands of everyone, and solving difficult problems by taking advantage of both traditional and generative AI in the cloud.


Automating regulatory compliance: A multi-agent solution using Amazon Bedrock and CrewAI

Financial institutions today face an increasingly complex regulatory world that demands robust, efficient compliance mechanisms. Although organizations traditionally invest countless hours reviewing regulations such as the Anti-Money Laundering (AML) rules and the Bank Secrecy Act (BSA), modern AI solutions offer a transformative approach to this challenge. By using Amazon Bedrock Knowledge Bases alongside CrewAI, an open source multi-agent orchestration framework, organizations can now deploy intelligent systems where multiple AI agents work together to automate and streamline specific compliance processes. This powerful combination enables financial institutions to move from manual, time-intensive compliance reviews to a streamlined, assisted compliance management approach that adapts to evolving regulatory requirements.

In this post, we explore how AI agents can streamline compliance and fulfill regulatory requirements for financial institutions using Amazon Bedrock and CrewAI. We demonstrate how to build a multi-agent system that can automatically summarize new regulations, assess their impact on operations, and provide prescriptive technical guidance. You’ll learn how to use Amazon Bedrock Knowledge Bases and Amazon Bedrock Agents with CrewAI to create a comprehensive, automated compliance solution.

This solution’s architecture can be adapted to help healthcare systems, enable manufacturers to maintain ISO safety documentation, and assist retailers in monitoring Federal Trade Commission (FTC) advertising regulations. It can also assist in other segments such as legal, finance, or human resources, offering wide-ranging potential for process automation and efficiency gains across various industries. The code used for this post is available on GitHub.

Solution overview

Traditional large language model (LLM) applications excel at following predefined instructions, but solving complex challenges such as compliance automation requires an autonomous network of specialized agents that mirror the structure of a comprehensive compliance department. Our system employs three key agents:

  1. Compliance analyst agent that continuously monitors and analyzes regulatory changes
  2. Compliance specialist agent that transforms requirements into organizational policies
  3. Enterprise architect agent that designs and implements the necessary security controls

In this multi-agent approach, specialized AI agents work together seamlessly to streamline the compliance lifecycle. The compliance analyst agent collects the latest regulatory changes and helps the organization stay ahead of their potential impact, while the compliance specialist agent translates these regulatory requirements into actionable organizational procedures. Meanwhile, the enterprise architect agent makes sure that the technical controls align with the organizational controls. CrewAI provides an open source framework to orchestrate this collaborative system, enabling these agents to work in concert while maintaining clear handoffs and accountability. Next, we explore how to create this multi-agent compliance automation system using CrewAI.

Although this solution demonstrates CrewAI’s capabilities, it’s important to note that Amazon Bedrock Agents has built-in support for multi-agent collaboration, and organizations could implement their agent workflows entirely within Amazon Bedrock Agents. However, we’ve chosen CrewAI for this demonstration to showcase how open source frameworks can extend Amazon Bedrock capabilities while maintaining enterprise-grade security through Bedrock Guardrails.

Solution components

This solution combines multiple capabilities. It shows how to:

  1. Develop a multi-agent solution using a CrewAI framework
  2. Enrich the solution using domain-specific data using Amazon Bedrock Knowledge Bases
  3. Safeguard your generative AI application using Amazon Bedrock Guardrails
  4. Bring everything together using CrewAI and Amazon Bedrock Agents

You can use CrewAI to develop AI agents and coordinate tasks among those agents. This structure enables systematic management of complex AI workflows while maintaining oversight of agent interactions and outcomes. The framework has the following components, which are shown in the following figure:


  • Agents in CrewAI are autonomous components designed to perform specific tasks or roles within a multi-agent system. They have specific roles (such as researcher or writer) and make autonomous decisions with or without using external tools. LLMs are the core intelligence behind CrewAI agents. LLMs enable agents to understand context, make decisions, and generate human-like responses.
  • Tasks are defined jobs assigned to agents with clear objectives, including execution details and required resources.
  • Crews are coordinated teams of agents working together on a shared goal. Crews require defining agent roles, task assignments, and execution order.
  • Tools refer to the skills or functions that agents can use to carry out various actions.
  • Processes are responsible for orchestrating how tasks are executed by agents, similar to project management in human teams. These processes make sure that tasks are allocated and completed efficiently, in accordance with a predefined strategy.
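
To make these components concrete, the following minimal Python sketch shows how agents, tasks, and a crew fit together in CrewAI; the solution in this post defines the same pieces in YAML, as shown later, and the roles, goals, and task text below are illustrative only. The LLM is resolved from environment configuration (for example, the MODEL variable pointing at Amazon Bedrock shown later in the .env file).

from crewai import Agent, Crew, Process, Task

analyst = Agent(
    role="Senior Compliance Analyst",
    goal="Review and understand new regulatory requirements",
    backstory="Seasoned analyst tracking PCI DSS, HIPAA, NIST, and ISO changes.",
)

specialist = Agent(
    role="Compliance Specialist",
    goal="Turn regulatory findings into organizational policies",
    backstory="Writes standards and policies for a financial services organization.",
)

analysis = Task(
    description="Summarize the most relevant recent changes for the given regulation.",
    expected_output="A bullet-point list of key regulatory changes.",
    agent=analyst,
)

reporting = Task(
    description="Expand the analysis into a policy report for the organization.",
    expected_output="A structured report with one section per finding.",
    agent=specialist,
)

# Sequential process: the analyst's output feeds the specialist's reporting task
crew = Crew(agents=[analyst, specialist], tasks=[analysis, reporting], process=Process.sequential)
result = crew.kickoff()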

Prerequisites

Before getting started with the solution, you need to get access to Amazon Bedrock models:

  1. Sign in to the Amazon Bedrock console and in the navigation pane under Bedrock configurations, select Model access to request access to Amazon Bedrock models. This step is shown in the following screenshots.

In this example, we use Amazon Nova Pro through Amazon Bedrock as our LLM. CrewAI provides built-in integration with Amazon Bedrock.

  2. Clone the GitHub repo into a local folder:
git clone https://github.com/aws-samples/sample-compliance-assistant-with-agents.git

  3. Use the following command to install the dependencies for running CrewAI in your Python environment:

pip install crewai uv

Your compliance agents

In this step, you will define your agents:

  1. Define compliance agents in the agents.yaml file. Each agent has a specific role to play:
    compliance_analyst:
      role: {topic} Senior Compliance Analyst
      goal: Review and understand regulatory and compliance requirements around {topic}
      backstory: You're a seasoned Compliance analyst with deep expertise in areas such as PCI DSS, HIPAA, NIST, and ISO, and a knack for uncovering the latest regulations and requirements in {topic}.
    compliance_specialist:
      role: {topic} Compliance Specialist
      goal: Create detailed reports based on {topic} compliance analysis and research findings
      backstory: You're a meticulous compliance specialist with deep understanding of compliance and regulatory landscape for Financial services and Technology Industry. You create standards and policies for the organization to meet regulations and compliance needs.

  2. Define tasks for the agents:
    compliance_analysis_task:
      description: Conduct a thorough analysis about {topic}. Make sure you find relevant information given the current year is {current_year}.
      expected_output: A list with 10 bullet points of the most relevant information about {topic}
      agent: compliance_analyst
    compliance_reporting_task:
      description: Review the context you got and expand each topic into a full section for a report.
        Make sure the report is detailed and contains any and all relevant information for Financial Services Organization
      expected_output: A fully fledged report with the main topics, each with a full section of information.
      agent: compliance_specialist

  3. The execution and process steps are defined in crew.py:
    def crew(self) -> Crew:
        """Creates the Compliance Automation crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True)

  4. Define your LLM, topic, and runtime parameters in the .env file:
    MODEL=bedrock/us.amazon.nova-pro-v1:0
    AWS_REGION_NAME=us-west-2
    TOPIC='GDPR requirements for Data Privacy'

  5. Run the crew as follows:
    crewai run

  6. The following demo shows the output of the crew. You can see the agents collaborating to generate a detailed solution.

In the output, notice that the compliance analyst and the compliance specialist are working together to solve multiple aspects of General Data Protection Regulation (GDPR) requirements for trading services. Note the synergistic collaboration between agents as they refine their approach and develop a comprehensive compliance management response through iterative problem-solving.

Addressing LLM challenges with domain-specific knowledge

LLMs, although impressive in their broad knowledge, face two key limitations when dealing with specialized domains or recent information. First, they struggle with specialized information specific to your organization. Second, because their knowledge is limited to their training data, they might not reflect the latest updates in rapidly changing fields. This limitation becomes particularly important when dealing with evolving compliance requirements, such as Payment Card Industry Data Security Standard (PCI DSS), GDPR, AML rules, and Know Your Customer (KYC) regulations. Additionally, organizations need solutions that are customized to their specific compliance requirements and internal standards, rather than generic responses from LLMs.

Retrieval Augmented Generation (RAG) is a technique that enables generative AI models to retrieve and incorporate current organizational and domain-specific information from external databases. Amazon Bedrock Knowledge Base is a managed capability that helps you implement the entire RAG technique without having to build custom integrations to data sources and manage data flows. By incorporating a knowledge base containing the latest publications, regulatory updates, and compliance guidelines from authoritative sources such as NIST, ISO, PCI, and regulatory bodies, Amazon Bedrock Knowledge Bases helps make sure that your AI system stays current with the latest compliance requirements. During prompt generation, RAG first retrieves relevant data from this continually updated knowledge base, then uses it to create informed responses. This helps provide more relevant, accurate, and customized responses aligned with current regulatory and organizational standards. For example, when querying about PCI DSS v4.0 requirements or recent GDPR amendments, the system can pull the most up-to-date information directly from authoritative sources rather than relying on potentially outdated training data.

Create an Amazon Bedrock knowledge base with contextual information from your data sources

  1. From the Amazon Bedrock navigation pane, select Knowledge Bases under Builder tools and choose a Knowledge Base with vector store.
  2. Provide the Knowledge Base name and Data source details. You’ll use the web crawler for ingesting data.
  3. The web crawler provided by Amazon Bedrock connects to and crawls URLs you selected for use in your Amazon Bedrock knowledge base. Add the URLs as data sources under Source URLs.
  4. Select the model for embeddings. We have selected Amazon Titan Text Embeddings v2, as shown in the following screenshot.

After a few minutes, the knowledge base will be ready. After it has synced, Amazon Bedrock Knowledge Bases handles generating, running, and formatting the result of the query, simplifying building natural language interfaces to structured data.
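
After the sync completes, you can sanity-check retrieval directly with the Retrieve API before wiring the knowledge base into an agent. The following is a minimal sketch; the knowledge base ID is a placeholder.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

results = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="YOUR_KB_ID",  # placeholder
    retrievalQuery={"text": "What are the GDPR requirements for data retention?"},
    retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
)

for item in results["retrievalResults"]:
    print(item["content"]["text"][:200])  # preview of each retrieved chunk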

Amazon Bedrock Agents

Amazon Bedrock Agents is a comprehensive environment for building sophisticated AI agent systems. At its core, it enables seamless multi-agent collaboration and maintains conversation context through native memory retention across interactions. Amazon Bedrock Agents integrates naturally with knowledge bases and enforces security through built-in guardrails. For this solution, we focus on two key capabilities: the RAG feature, which allows agents to access and utilize information from knowledge bases, and the security features provided through Amazon Bedrock Guardrails. These guardrails serve as an essential safeguard for your generative AI applications, promoting responsible and secure AI interactions.

  1. To create an agent, from the Amazon Bedrock navigation pane under Builder tools, select Agents and choose Create Agent.
  2. Under Agent details, choose the model. We use Amazon Nova Pro for our use case, as shown in the following screenshot.
  3. Under Knowledge Bases, add knowledge bases to your agent.
  4. Choose the knowledge base name from the dropdown list, as shown in the following screenshot.

Amazon Bedrock Guardrails

Amazon Bedrock Guardrails provides safety controls that help maintain responsible AI use by providing a layer of security. Guardrails provide content filtering to monitor and filter AI model outputs to help prevent harmful, inappropriate, or biased content. You can set up filters for things such as hate speech, explicit content, or personally identifiable information (PII). You can also apply customizable rules and input/output validation.

  1. You can find Guardrails in the Amazon Bedrock navigation pane under Safeguards. Choose Create guardrail and provide a guardrail name.
  2. As shown in the following screenshot, select the content filters you want to implement for your Amazon Bedrock based application.
  3. Add denied topics with specific examples.
  4. After you’ve created your guardrail, attach it to the agent. You can also exercise the guardrail programmatically, as shown in the sketch after these steps.
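
The following is a minimal sketch of testing a guardrail against a sample input with the ApplyGuardrail API in boto3, which can be useful before wiring it into the agent. The guardrail identifier and version are placeholders, and the test phrase is only illustrative.

```python
# Minimal sketch: evaluate a sample user input against the guardrail with the
# ApplyGuardrail API. The guardrail ID and version are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

result = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="<your-guardrail-id>",
    guardrailVersion="DRAFT",
    source="INPUT",  # evaluate the text as if it were a user prompt
    content=[{"text": {"text": "How can I bypass PCI DSS logging requirements?"}}],
)

# "GUARDRAIL_INTERVENED" indicates a content filter or denied topic matched
print(result["action"])
```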

Putting it all together: Integrating Amazon Bedrock Agents with CrewAI

CrewAI provides seamless integration with Amazon Bedrock features, including Amazon Bedrock Knowledge Bases and Amazon Bedrock Agents, through its tools functionality. When these tools are triggered from CrewAI agents, they process your query, retrieve the relevant information from the Amazon Bedrock knowledge base, and return responses back to the CrewAI agent.
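
To illustrate the shape of such a tool, the following is a minimal sketch of a custom CrewAI tool that wraps the Amazon Bedrock InvokeAgent API with boto3. The class name, environment variable names, and the BaseTool import path are assumptions; adapt them to your CrewAI version and to the sample code in the repository.

```python
# Minimal sketch: a custom CrewAI tool that wraps the Amazon Bedrock InvokeAgent API.
# Environment variable names and class names are illustrative, not prescriptive.
import os
import uuid

import boto3
from crewai.tools import BaseTool  # adjust the import to your CrewAI version


class BedrockAgentTool(BaseTool):
    name: str = "Bedrock Compliance Agent"
    description: str = "Queries the Amazon Bedrock agent backed by the compliance knowledge base."

    def _run(self, query: str) -> str:
        client = boto3.client("bedrock-agent-runtime")
        response = client.invoke_agent(
            agentId=os.environ["BEDROCK_AGENT_ID"],          # hypothetical .env keys
            agentAliasId=os.environ["BEDROCK_AGENT_ALIAS_ID"],
            sessionId=str(uuid.uuid4()),
            inputText=query,
        )
        # invoke_agent returns an event stream; concatenate the text chunks
        chunks = []
        for event in response["completion"]:
            if "chunk" in event:
                chunks.append(event["chunk"]["bytes"].decode("utf-8"))
        return "".join(chunks)
```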

  1. Refer to the sample code demonstrating CrewAI tools for Amazon Bedrock Agents. You need to define your Amazon Bedrock AgentId and Alias as parameters in the .env file.
  2. Execute the crew again with Amazon Bedrock Agents:
    crewai run

  3. You can find the generated output below:
    ComplianceAgents-Topic_PCI

When you execute the crew, the compliance analyst agent initiates the process by invoking the CrewAI Bedrock tool to extract regulatory requirements from Amazon Bedrock Knowledge Bases, which is then seamlessly transformed into technical requirements by the compliance specialist agent. Through iterative collaboration, these specialized agents work together to fill information gaps, and the enterprise architect agent synthesizes the gathered insights to develop a robust implementation strategy and execution plan. This streamlined process demonstrates how multiple AI agents can effectively coordinate to transform compliance requirements into actionable technical solutions.

Clean up

To avoid ongoing charges, follow these steps to clean up resources:

  1. Delete the Amazon Bedrock knowledge base that you created:
aws bedrock-agent delete-knowledge-base --knowledge-base-id <your-kb-id>
  2. Delete the Amazon Bedrock agents that you created:
aws bedrock-agent delete-agent --agent-id <your-agent-id>

Conclusion

In this post, we demonstrated how to:

  • Build a multi-agent AI system using CrewAI that mimics the structure of a comprehensive compliance department with specialized agents for different functions
  • Enhance AI responses with domain-specific knowledge by implementing RAG using Amazon Bedrock Knowledge Bases
  • Safeguard your generative AI applications with Amazon Bedrock Guardrails to help prevent harmful, inappropriate, or biased content
  • Create custom tools in CrewAI to integrate with Amazon Bedrock Agents for more powerful and context-aware compliance solutions
  • Automate the entire compliance lifecycle from monitoring regulatory changes to implementing technical controls without extensive manual effort
  • Deploy a production-ready solution that continually adapts to evolving regulatory requirements in financial services and other highly regulated industries

This solution combines Amazon Bedrock Knowledge Bases and CrewAI to create smart, multi-agent AI systems that help streamline regulatory compliance tasks. With simplified RAG implementation, sophisticated workflows that mirror human teams, and faster adaptation to new regulations, this approach shows how AI can assist organizations with specific aspects of complex regulatory requirements.

This solution serves as a practical starting point for organizations looking to enhance their compliance processes with AI capabilities, demonstrating how intelligent systems could complement and streamline existing compliance workflows. The complete source code for this project is available on the GitHub repository. Feel free to explore, fork, or contribute!


About the Authors

Balu Mathew is a Senior Solutions Architect at AWS, based in Raleigh, NC. He collaborates with Global Financial Services customers to design and implement secure, scalable and resilient solutions on AWS. With deep expertise in security, machine learning, and the financial services industry, he helps organizations build, protect, and scale large-scale distributed systems efficiently. Outside of work, he enjoys spending time with his kids and exploring the mountains and the outdoors.

Read More

Pixtral Large is now available in Amazon Bedrock

Pixtral Large is now available in Amazon Bedrock

Today, we are excited to announce that Mistral AI’s Pixtral Large foundation model (FM) is generally available in Amazon Bedrock. With this launch, you can now access Mistral’s frontier-class multimodal model to build, experiment, and responsibly scale your generative AI ideas on AWS. AWS is the first major cloud provider to deliver Pixtral Large as a fully managed, serverless model.

In this post, we discuss the features of Pixtral Large and its possible use cases.

Overview of Pixtral Large

Pixtral Large is an advanced multimodal model developed by Mistral AI, featuring 124 billion parameters. This model combines a powerful 123-billion-parameter multimodal decoder with a specialized 1-billion-parameter vision encoder. It can seamlessly handle complex visual and textual tasks while retaining the exceptional language-processing capabilities of its predecessor, Mistral Large 2.

A distinguishing feature of Pixtral Large is its expansive context window of 128,000 tokens, enabling it to simultaneously process multiple images alongside extensive textual data. This capability makes it particularly effective in analyzing documents, detailed charts, graphs, and natural images, accommodating a broad range of practical applications.

The following are key capabilities of Pixtral Large:

  • Multilingual text analysis – Pixtral Large accurately interprets and extracts written information across multiple languages from images and documents. This is particularly beneficial for tasks like automatically processing receipts or invoices, where it can perform calculations and context-aware evaluations, streamlining processes such as expense tracking or financial analysis.
  • Chart and data visualization interpretation – The model demonstrates exceptional proficiency in understanding complex visual data representations. It can effortlessly identify trends, anomalies, and key data points within graphical visualizations. For instance, Pixtral Large is highly effective at spotting irregularities or insightful trends within training loss curves or performance metrics, enhancing the accuracy of data-driven decision-making.
  • General visual analysis and contextual understanding – Pixtral Large is adept at analyzing general visual data, including screenshots and photographs, extracting nuanced insights, and responding effectively to queries based on image content. This capability significantly broadens its usability, allowing it to support varied scenarios—from explaining visual contexts in presentations to automating content moderation and contextual image retrieval.

Additional model details include:

  • Pixtral Large is available in the eu-north-1 and us-west-2 AWS Regions
  • Cross-Region inference is available for the following Regions:
    • us-east-2
    • us-west-2
    • us-east-1
    • eu-west-1
    • eu-west-3
    • eu-north-1
    • eu-central-1
  • Model ID: mistral.pixtral-large-2502-v1:0
  • Context window: 128,000

Get started with Pixtral Large in Amazon Bedrock

If you’re new to using Mistral AI models, you can request model access on the Amazon Bedrock console. For more information, see Access Amazon Bedrock foundation models.

To test Pixtral Large on the Amazon Bedrock console, choose Text or Chat under Playgrounds in the navigation pane. Then, choose Select model and choose Mistral as the category and Pixtral Large as the model.

By choosing View API, you can also access the model using code examples in the AWS Command Line Interface (AWS CLI) and AWS SDKs. Use the model ID mistral.pixtral-large-2502-v1:0, as shown in the following code:

$ aws bedrock-runtime invoke-model \
    --model-id mistral.pixtral-large-2502-v1:0 \
    --body '{"prompt":"<s>[INST] this is where you place your input text [/INST]", "max_tokens":200, "temperature":0.5, "top_p":0.9, "top_k":50}' \
    --cli-binary-format raw-in-base64-out \
    --region eu-north-1 \
    invoke-model-output.txt

In the following sections, we dive into the capabilities of Pixtral Large.

Pixtral Large use cases

In this section, we provide example use cases of Pixtral Large using sample prompts. Because Pixtral Large is built on Mistral Large 2, it includes a native JSON output mode. This feature enables developers to receive the model’s responses in a structured and simple-to-read format, which can be seamlessly integrated into various applications and systems. Because JSON is a widely used data exchange standard, this functionality streamlines the process of working with the model’s outputs, making it more accessible and practical for developers across different industries and use cases. For more information on generating JSON using the Converse API, refer to Generating JSON with the Amazon Bedrock Converse API.

Additionally, Pixtral Large supports the Converse API and tool usage. By using the Amazon Bedrock API, you can grant the model access to tools that assist in generating responses to the messages you send.
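
For reference, the following is a minimal sketch of calling Pixtral Large through the Converse API with boto3, sending an image alongside a text prompt. The Region, file name, and prompt are placeholders, and you can pass a toolConfig to the same call to enable tool use.

```python
# Minimal sketch: send an image plus a text prompt to Pixtral Large through
# the Amazon Bedrock Converse API. File name, Region, and prompt are placeholders.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-west-2")

with open("er_diagram.png", "rb") as f:
    image_bytes = f.read()

response = client.converse(
    modelId="mistral.pixtral-large-2502-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Describe the entities and relationships in this diagram."},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.5},
)

print(response["output"]["message"]["content"][0]["text"])
```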

Generate SQL code from a database entity-relationship diagram

An entity-relationship (ER) diagram is a visual representation used in database design to illustrate the relationships between entities and their attributes. It is a crucial tool for conceptual modeling, helping developers and analysts understand and communicate the structure of data within a database.

This example tests the model’s ability to generate PostgreSQL-compatible SQL CREATE TABLE statements for creating entities and their relationships.

We use the following prompt:

prompt = """
You are a database designer. You have created an ER diagram. You need to generate PostgreSQL compatible SQL statements to create the tables
"""

We input the following ER diagram.

The model response is as follows:

Sure! Below are the SQL statements to create the tables based on the provided 
ER diagram for a PostgreSQL database.

```sql
-- Create the Airport table
CREATE TABLE Airport (
    Airport_ID SERIAL PRIMARY KEY,
    AirportCity VARCHAR(255),
    AirportCountry VARCHAR(255)
);

-- Create the Calendar table
CREATE TABLE Calendar (
    Day_Date DATE PRIMARY KEY,
    Business_Day_YN BOOLEAN
);

-- Create the Flight_Details table
CREATE TABLE Flight_Details (
    Flight_ID SERIAL PRIMARY KEY,
    Source_Airport_ID INT REFERENCES Airport(Airport_ID),
    Destination_Airport_ID INT REFERENCES Airport(Airport_ID),
    Departure_Date_Time TIMESTAMP,
    Arrival_Date_Time TIMESTAMP,
    Airplane_Type VARCHAR(255)
);

-- Create the Travel_Class table
CREATE TABLE Travel_Class (
    Travel_Class_ID SERIAL PRIMARY KEY,
    Travel_Class_Name VARCHAR(255),
    Travel_Class_Capacity INT
);

-- Create the Seat_Details table
CREATE TABLE Seat_Details (
    Seat_ID SERIAL PRIMARY KEY,
    Travel_Class_ID INT REFERENCES Travel_Class(Travel_Class_ID),
    Flight_ID INT REFERENCES Flight_Details(Flight_ID)
);

-- Create the Flight_Cost table
CREATE TABLE Flight_Cost (
    Seat_ID INT REFERENCES Seat_Details(Seat_ID),
    Valid_From_Date DATE,
    Cost DECIMAL(10, 2),
    Valid_To_Date DATE,
    PRIMARY KEY (Seat_ID, Valid_From_Date)
);

-- Create the Flight_Service table
CREATE TABLE Flight_Service (
    Service_ID SERIAL PRIMARY KEY,
    Service_Name VARCHAR(255)
);

-- Create the Service_Offering table
CREATE TABLE Service_Offering (
    Travel_Class_ID INT REFERENCES Travel_Class(Travel_Class_ID),
    Service_ID INT REFERENCES Flight_Service(Service_ID),
    Offered_YN BOOLEAN,
    From_Date DATE,
    To_Date DATE,
    PRIMARY KEY (Travel_Class_ID, Service_ID)
);

-- Create the Passenger table
CREATE TABLE Passenger (
    Passenger_ID SERIAL PRIMARY KEY,
    P_FirstName VARCHAR(255),
    P_LastName VARCHAR(255),
    P_Email VARCHAR(255),
    P_PhoneNumber VARCHAR(255),
    P_Address VARCHAR(255),
    P_City VARCHAR(255),
    P_State VARCHAR(255),
    P_Zipcode VARCHAR(255),
    P_Country VARCHAR(255)
);

-- Create the Reservation table
CREATE TABLE Reservation (
    Reservation_ID SERIAL PRIMARY KEY,
    Passenger_ID INT REFERENCES Passenger(Passenger_ID),
    Seat_ID INT REFERENCES Seat_Details(Seat_ID),
    Date_Of_Reservation DATE
);

-- Create the Payment_Status table
CREATE TABLE Payment_Status (
    Payment_ID SERIAL PRIMARY KEY,
    Payment_Status_YN BOOLEAN,
    Payment_Due_Date DATE,
    Payment_Amount DECIMAL(10, 2),
    Reservation_ID INT REFERENCES Reservation(Reservation_ID)
);
```

These SQL statements will create the tables as per the ER diagram you provided. Make sure to run these statements in the order they are listed to ensure that foreign key constraints are satisfied.

Convert organization hierarchy to structured text

Pixtral Large has the capability to understand organization structures and generate structured output. Let’s test it with an organization structure.

We use the following prompt:

prompt = """
Extract organization hierarchy from the given org structure. provide response in a structured json format with below:
- role
- name
- reporting_manager
"""

We input the following organization structure image.

The model response is as follows:

Here is the organization hierarchy extracted from the given org structure in a 
structured JSON format:

```json
[
    {
        "role": "President",
        "name": "John Smith",
        "reporting_manager": null
    },
    {
        "role": "VP Marketing",
        "name": "Susan Jones",
        "reporting_manager": "John Smith"
    },
    {
        "role": "VP Sales",
        "name": "Rachel Parker",
        "reporting_manager": "John Smith"
    },
    {
        "role": "VP Production",
        "name": "Tom Allen",
        "reporting_manager": "John Smith"
    },
    {
        "role": "Manager",
        "name": "Alice Johnson",
        "reporting_manager": "Susan Jones"
    },
    {
        "role": "Manager",
        "name": "Tim Moore",
        "reporting_manager": "Susan Jones"
    },
    {
        "role": "Manager",
        "name": "Michael Gross",
        "reporting_manager": "Rachel Parker"
    },
    {
        "role": "Manager",
        "name": "Kim Dole",
        "reporting_manager": "Rachel Parker"
    },
    {
        "role": "Manager",
        "name": "Kathy Roberts",
        "reporting_manager": "Tom Allen"
    },
    {
        "role": "Manager",
        "name": "Betsy Foster",
        "reporting_manager": "Tom Allen"
    }
]
```

Chart understanding and reasoning

Pixtral Large has the capability to understand and reason over charts and graphs. Let’s test Pixtral Large with a visualization of the proportion of motorcycle ownership per country around the world.

We use the following prompt:

prompt = """Examine this visualization:
1. First, describe what this visualization represents:
   - What information is being shown?
   - How is the data displayed?
   - What do the different components represent?
   - What does the size variation indicate?
2. Analyze specific patterns:
   - Which region shows the highest proportion of the first category?
   - Which region shows the highest proportion of the second category?
   - Where do you observe the largest total values?
   - Which areas show the most balanced distribution?
3. Compare regional trends:
   - How do the proportions differ between continents?
   - What patterns emerge between different hemispheres?
   - Are there clear differences between different economic zones?
4. Identify interesting outliers:
   - Which regions stand out from their neighbors?
   - Can you identify any unexpected patterns?
   - Where do you notice significant data variations?
5. Consider geographical and demographic factors:
   - How might local conditions influence these patterns?
   - What socioeconomic factors might explain the variations?
   - Can you identify any correlation between size and proportions?
6. Make comparisons between:
   - Different geographical zones
   - Various population densities
   - Different economic development levels"""

We input the following image.

By Dennis Bratland – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15186498

The model response is as follows:


Description of the Visualization:
  • Information Shown: The visualization presents data on the population (in millions) and the proportion of vehicles (cars and motorcycles) across various regions worldwide.
  • Display Method: The data is displayed using pie charts overlaid on a world map. The size of each pie chart corresponds to the population of the region, with larger pies indicating larger populations.
  • Components:
    • Pie Charts: Each pie chart is divided into two segments representing the proportion of cars (light blue) and motorcycles (dark blue).
  • Size Variation: The size of the pie charts indicates the total population of the region, with larger pies representing higher populations.
Specific Patterns:
  • Highest Proportion of Cars: Regions like the United States and parts of Europe show a higher proportion of cars.
  • Highest Proportion of Motorcycles: Southeast Asian countries like Indonesia and Vietnam show a higher proportion of motorcycles.
  • Largest Total Values: Large pie charts are observed in highly populated regions like India, China, and parts of Southeast Asia.
  • Most Balanced Distribution: Some regions in Europe and South America show a more balanced distribution of cars and motorcycles.
Regional Trends:
  • Continental Differences:
    • North America: Predominantly cars.
    • Europe: A mix with a slight preference for cars.
    • Africa: Generally lower total values with varied proportions.
    • Asia: High population with a significant proportion of motorcycles in Southeast Asia.
  • Hemispheric Patterns: The Northern Hemisphere has more densely populated regions with varied vehicle proportions, while the Southern Hemisphere shows lower population densities and more balanced distributions.
  • Economic Zones: Developed regions tend to have higher car proportions, while developing regions, especially in Asia, show higher motorcycle usage.
Interesting Outliers:
  • Indonesia: Stands out with a very high proportion of motorcycles compared to neighboring regions.
  • United States: Shows a predominantly high proportion of cars, distinct from other regions.
  • Unexpected Patterns: Some smaller African countries show unexpectedly high proportions of motorcycles despite lower populations.
Geographical and Demographic Factors:
  • Local Conditions: Urbanization and infrastructure development influence vehicle preferences.
  • Socioeconomic Factors: Economic status and affordability play significant roles in vehicle choices.
  • Correlation: Larger populations in developing regions often correlate with higher motorcycle usage due to affordability and convenience.
Comparisons:
  • Geographical Zones: Urban areas in developed countries show higher car usage, while rural and developing regions favor motorcycles.
  • Population Densities: Highly populated areas in Asia show a preference for motorcycles due to traffic congestion and affordability.
  • Economic Development: Developed economies have higher car proportions, while developing economies rely more on motorcycles.

Conclusion

In this post, we demonstrated how to get started with the Pixtral Large model in Amazon Bedrock. The Pixtral Large multimodal model allows you to tackle a variety of use cases, such as document understanding, logical reasoning, handwriting recognition, image comparison, entity extraction, extracting structured data from scanned images, and caption generation. These capabilities can enhance productivity across numerous enterprise applications, including ecommerce (retail), marketing, financial services, and beyond.

Mistral AI’s Pixtral Large FM is now available in Amazon Bedrock. To get started with Pixtral Large in Amazon Bedrock, visit the Amazon Bedrock console.

Curious to explore further? Take a look at the Mistral-on-AWS repo. For more information on Mistral AI models available on Amazon Bedrock, refer to Mistral AI models now available on Amazon Bedrock.


About the Authors

Deepesh Dhapola is a Senior Solutions Architect at AWS India, specializing in helping financial services and fintech clients optimize and scale their applications on the AWS Cloud. With a strong focus on trending AI technologies, including generative AI, AI agents, and the Model Context Protocol (MCP), Deepesh leverages his expertise in machine learning to design innovative, scalable, and secure solutions. Passionate about the transformative potential of AI, he actively explores cutting-edge advancements to drive efficiency and innovation for AWS customers. Outside of work, Deepesh enjoys spending quality time with his family and experimenting with diverse culinary creations.

Andre Boaventura is a Principal AI/ML Solutions Architect at AWS, specializing in generative AI and scalable machine learning solutions. With over 25 years in the high-tech software industry, he has deep expertise in designing and deploying AI applications using AWS services such as Amazon Bedrock, Amazon SageMaker, and Amazon Q. Andre works closely with global system integrators (GSIs) and customers across industries to architect and implement cutting-edge AI/ML solutions to drive business value.

Preston Tuggle is a Sr. Specialist Solutions Architect with the Third-Party Model Provider team at AWS. He focuses on working with model providers across Amazon Bedrock and Amazon SageMaker, helping them accelerate their go-to-market strategies through technical scaling initiatives and customer engagement.

Shane Rai is a Principal GenAI Specialist with the AWS World Wide Specialist Organization (WWSO). He works with customers across industries to solve their most pressing and innovative business needs using AWS’s breadth of cloud-based AI/ML services, including model offerings from top-tier foundation model providers.

Ankit Agarwal is a Senior Technical Product Manager at Amazon Bedrock, where he operates at the intersection of customer needs and foundation model providers. He leads initiatives to onboard cutting-edge models onto Amazon Bedrock Serverless and drives the development of core features that enhance the platform’s capabilities.

Niithiyn Vijeaswaran is a Generative AI Specialist Solutions Architect with the Third-Party Model Science team at AWS. His area of focus is AWS AI accelerators (AWS Neuron). He holds a Bachelor’s in Computer Science and Bioinformatics.

Aris Tsakpinis is a Specialist Solutions Architect for Generative AI focusing on open source models on Amazon Bedrock and the broader generative AI open source ecosystem. Alongside his professional role, he is pursuing a PhD in Machine Learning Engineering at the University of Regensburg, where his research focuses on applied natural language processing in scientific domains.

Read More

Implement human-in-the-loop confirmation with Amazon Bedrock Agents

Implement human-in-the-loop confirmation with Amazon Bedrock Agents

Agents are revolutionizing how businesses automate complex workflows and decision-making processes. Amazon Bedrock Agents helps you accelerate generative AI application development by orchestrating multi-step tasks. Agents use the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. In addition, they use the developer-provided instruction to create an orchestration plan and then carry out the plan by invoking company APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG) to provide an answer to the user’s request.

Building intelligent autonomous agents that effectively handle user queries requires careful planning and robust safeguards. Although FMs continue to improve, they can still produce incorrect outputs, and because agents are complex systems, errors can occur at multiple stages. For example, an agent might select the wrong tool or use correct tools with incorrect parameters. Although Amazon Bedrock agents can self-correct through their reasoning and action (ReAct) strategy, repeated tool execution might be acceptable for non-critical tasks but risky for business-critical operations, such as database modifications.

In these sensitive scenarios, human-in-the-loop (HITL) interaction is essential for successful AI agent deployments, encompassing multiple critical touchpoints between humans and automated systems. HITL can take many forms, from end-users approving actions and providing feedback, to subject matter experts reviewing responses offline and agents working alongside customer service representatives. The common thread is maintaining human oversight and using human intelligence to improve agent performance. This human involvement helps establish ground truth, validates agent responses before they go live, and enables continuous learning through feedback loops.

In this post, we focus specifically on enabling end-users to approve actions and provide feedback using built-in Amazon Bedrock Agents features, namely HITL patterns for safe and effective agent operations. We explore the available patterns using a Human Resources (HR) agent example that helps employees request time off. You can recreate the example manually or with the AWS Cloud Development Kit (AWS CDK) by following our GitHub repository. We show you what these methods look like from an application developer’s perspective while conveying the overall idea behind the concepts. For this post, we apply user confirmation and return of control in Amazon Bedrock Agents to achieve human confirmation.

Amazon Bedrock Agents frameworks for human-in-the-loop confirmation

When implementing human validation in Amazon Bedrock Agents, developers have two primary frameworks at their disposal: user confirmation and return of control (ROC). These mechanisms, though serving similar oversight purposes, address different validation needs and operate at different levels of the agent’s workflow.

User confirmation provides a straightforward way to pause and validate specific actions before execution. With user confirmation, the developer receives information about the function (or API) and parameters values that an agent wants to use to complete a certain task. The developer can then expose this information to the user in the agentic application to collect a confirmation that the function should be executed before continuing the agent’s orchestration process.

With ROC, the agent provides the developer with information about the task that it wants to execute and relies entirely on the developer to execute the task. In this approach, the developer can not only validate the agent’s decision, but also contribute additional context and modify parameters during the agent’s execution process. ROC is configured at the action group level, covering multiple actions.

Let’s explore how each framework can be implemented and their specific use cases.

Autonomous agent execution: No human-in-the-loop

First, let’s demonstrate what a user experience might look like if your application doesn’t have a HITL. For that, let’s consider the following architecture.

Simplified AWS Cloud architecture diagram showing core components of PTO request system including employee interaction, HR Assistant, and Lambda functions.

In the preceding diagram, the employee interacts with the HR Assistant agent, which then invokes actions that can change important details about the employee’s paid time off (PTO). In this scenario, when an employee requests time off, the agent will automatically request the leave after confirming that enough PTO days are still available for the requesting employee.

The following screenshot shows a sample frontend UI for an Amazon Bedrock agent with functions to retrieve PTOs and request new ones.

HR Assistant interface screenshot showing successful submission of time off request with ID 456 and remaining balance of 7.25 days.

In this interaction, the PTO request was submitted with no confirmation from the end-user. What if the user didn’t want to actually submit a request, but only check that it could be done? What if the date they provided was incorrect and had a typo? For any action that changes the state of a user’s PTO, it would provide a better user experience if the system asked for confirmation before actually making those changes.

Simple human validation: User confirmation

When requesting PTO, employees expect to be able to confirm their actions. This minimizes the execution of accidental requests and helps confirm that the agent understood the request and its parameters correctly.

For such scenarios, a Boolean confirmation is already sufficient to continue to execution of the agentic flow. Amazon Bedrock Agents offers an out-of-the-box user confirmation feature that enables developers to incorporate an extra layer of safety and control into their AI-driven workflows. This mechanism strikes a balance between automation and human oversight by making sure that critical actions are validated by users before execution. With user confirmation, developers can decide which tools can be executed automatically and which ones should be first confirmed.

For our example, reading the values for available PTO hours and listing the past PTO requests taken by an employee are non-critical operations that can be executed automatically. However, booking, updating, or canceling a PTO request requires changes on a database and are actions that should be confirmed before execution. Let’s change our agent architecture to include user confirmation, as shown in the following updated diagram.

AWS Cloud architecture diagram showing employee PTO request workflow with confirm action execution flow between components.

In the updated architecture, when the employee interacts with the HR Assistant agent and the create_pto_request() action needs to be invoked, the agent will first request user confirmation before execution.

To enable user confirmation, agent developers can use the AWS Management Console, an SDK such as Boto3, or infrastructure as code (IaC) with AWS CloudFormation (see AWS::Bedrock::Agent Function). The user experience with user confirmation will look like the following screenshot.

HR Assistant interface screenshot showing confirmation dialog with Confirm and Reject buttons for a 3-day time off request.

In this interaction, the agent requests a confirmation from the end-user in order to execute. The user can then choose if they want to proceed with the time off request or not. Choosing Confirm will let the agent execute the action based on the parameter displayed.

HR Assistant interface screenshot showing dialog after user confirmed time off.

The following diagram illustrates the workflow for confirming the action.

Sequence diagram depicting interaction between User, Client Side UI, Agent, Model and API with user confirmation flow for time off requests.

In this scenario, the developer maps the way the confirmation is displayed to the user in the client-side UI and the agent validates the confirmation state before executing the action.
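
To give a sense of what this looks like in application code, the following is a minimal sketch of the confirmation round trip using the InvokeAgent API with boto3. The agent IDs are placeholders, and the event and field names reflect our reading of the Amazon Bedrock Agents runtime API, so verify them against the current documentation and the GitHub repository.

```python
# Minimal sketch: surface a confirmation request returned by the agent and send
# the user's decision back. Field names are assumptions based on the InvokeAgent API.
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime")
session_id = str(uuid.uuid4())

response = client.invoke_agent(
    agentId="<agent-id>", agentAliasId="<alias-id>",
    sessionId=session_id, inputText="I want 3 days off starting 2025-04-14",
)

return_control = None
for event in response["completion"]:
    if "returnControl" in event:            # the agent paused for confirmation
        return_control = event["returnControl"]

if return_control:
    invocation = return_control["invocationInputs"][0]["functionInvocationInput"]
    # ...show invocation["function"] and invocation["parameters"] in the UI...
    user_approved = True                    # collected from the Confirm/Reject buttons

    followup = client.invoke_agent(
        agentId="<agent-id>", agentAliasId="<alias-id>", sessionId=session_id,
        sessionState={
            "invocationId": return_control["invocationId"],
            "returnControlInvocationResults": [{
                "functionResult": {
                    "actionGroup": invocation["actionGroup"],
                    "function": invocation["function"],
                    "confirmationState": "CONFIRM" if user_approved else "DENY",
                }
            }],
        },
    )
    for event in followup["completion"]:
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode("utf-8"), end="")
```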

Customized human input: Return of control

User confirmation provides a simple yes/no validation, but some scenarios require a more nuanced human input. This is where ROC comes into play. ROC allows for a deeper level of human intervention, enabling users to modify parameters or provide additional context before an action is executed.

Let’s consider our HR agent example. When requesting PTO, a common business requirement is for employees to review and potentially edit their requests before submission. This expands upon the simple confirmation use case by allowing users to alter their original input before sending a request to the backend. Amazon Bedrock Agents offers an out-of-the-box solution to effectively parse user input and send it back in a structured format using ROC.

To implement ROC, we need to modify our agent architecture slightly, as shown in the following diagram.

AWS Cloud architecture diagram showing interaction between Employee, HR Assistant, and two Action Groups (Get PTO and Request PTO) with Lambda functions for handling PTO requests.

In this architecture, ROC is implemented at the action group level. When an employee interacts with the HR Assistant agent, the system requires explicit confirmation of all function parameters under the “Request PTO Action Group” before executing actions within the action group.

With ROC, the user experience becomes more interactive and flexible. The following screenshot shows an example with our HR agent application.

Screenshot of HR Assistant interface with a time off request form showing fields for number of days (3.00) and start date (2025/04/14) with a submit button.

Instead of executing the action automatically or just having a confirm/deny option, users are presented with a form to edit their intentions directly before processing. In this case, our user can realize they accidentally started their time off request on a Sunday and can edit this information before submission.

After the user reviews and potentially modifies the request, they can approve the parameters.

Screenshot of HR Assistant interface showing a conversation about requesting 3 days off starting 2025-04-14, with system responses confirming the request steps.

When implementing ROC, it’s crucial to understand that parameter validation occurs at two distinct points. The agent performs initial validation before returning control to the user (for example, checking available PTO balance), and the final execution relies on the application’s API validation layer.

For instance, if a user initially requests 3 days of PTO, the agent validates against their 5-day balance and returns control. However, if the user modifies the request to 100 days during ROC, the final validation and enforcement happen at the API level, not through the agent. This differs from confirmation flows where the agent directly executes API calls. In ROC, the agent’s role is to facilitate the interaction and return API responses, and the application maintains ultimate control over parameter validation and execution.

The core difference in the ROC approach is that the responsibility of processing the time off request is now handled by the application itself instead of being automatically handled by the agent. This allows for more complex workflows and greater human oversight.

To better understand the flow of information in a ROC scenario, let’s examine the following sequence diagram.

Sequence diagram showing interaction between User, Client Side UI, Agent, Model and Function/API for processing a time off request. The flow shows pre-processing, model determination, parameter handling, and result processing.

In this workflow, the agent prepares the action but doesn’t execute it. Instead, it returns control to the application, which then presents the editable information to the user. After the user reviews and potentially modifies the request, the application is responsible for executing the action with the final, user-approved parameters.
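
In code, the ROC handoff follows the same returnControl pattern, except that the application runs the action itself and hands the result back to the agent. The following minimal sketch again uses assumed field names from the InvokeAgent API, and create_pto_request() stands in for your own backend call, so treat it as a starting point rather than a reference implementation.

```python
# Minimal sketch: handle return of control by executing the action in the
# application and passing the result back to the agent. Field names are
# assumptions; create_pto_request() is a placeholder for your own backend call.
import uuid
import boto3

client = boto3.client("bedrock-agent-runtime")
session_id = str(uuid.uuid4())

response = client.invoke_agent(
    agentId="<agent-id>", agentAliasId="<alias-id>",
    sessionId=session_id, inputText="Request 3 days off starting 2025-04-14",
)

return_control = next(e["returnControl"] for e in response["completion"] if "returnControl" in e)
invocation = return_control["invocationInputs"][0]["functionInvocationInput"]
params = {p["name"]: p["value"] for p in invocation.get("parameters", [])}

# The UI lets the user review and edit the parameters before execution
params["start_date"] = "2025-04-15"        # hypothetical parameter name; user moves the start off a Sunday
api_result = create_pto_request(**params)  # your application's own, validated API call

final = client.invoke_agent(
    agentId="<agent-id>", agentAliasId="<alias-id>", sessionId=session_id,
    sessionState={
        "invocationId": return_control["invocationId"],
        "returnControlInvocationResults": [{
            "functionResult": {
                "actionGroup": invocation["actionGroup"],
                "function": invocation["function"],
                "responseBody": {"TEXT": {"body": str(api_result)}},
            }
        }],
    },
)
for event in final["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
```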

This approach provides several benefits:

  • Enhanced accuracy – Users can correct misunderstandings or errors in the agent’s interpretation of their request
  • Flexibility – It allows for last-minute changes or additions to the request
  • User empowerment – It gives users more control over the final action, increasing trust in the system
  • Compliance – In regulated industries, this level of human oversight can be crucial for adhering to legal or policy requirements

Implementing ROC requires more development effort compared to user confirmation, because it involves creating UIs for editing and handling the execution of actions within the application. However, for scenarios where precision and user control are paramount, the additional complexity is often justified.

Conclusion

In this post, we explored two primary frameworks for implementing human validation in Amazon Bedrock Agents: user confirmation and return of control. Although these mechanisms serve similar oversight purposes, they address different validation needs and operate at distinct levels of the agent’s workflow. User confirmation provides a straightforward Boolean validation, allowing users to approve or reject specific actions before execution. This method is ideal for scenarios where a simple yes/no decision is sufficient to promote safety and accuracy.

ROC offers a more nuanced approach, enabling users to modify parameters and provide additional context before action execution. This framework is particularly useful in complex scenarios where modifying the agent’s decisions is necessary.

Both methods contribute to a robust HITL approach, providing an essential layer of human validation to the agentic application.

User confirmation and ROC are just two aspects of the broader HITL paradigm in AI agent deployments. In future posts, we will address other crucial use cases for HITL interactions with agents.

To get started creating your own agentic application with HITL validation, we encourage you to explore the HR example discussed in this post. You can find the complete code and implementation details in our GitHub repository.


About the Authors

Clement Perrot is a Senior Solutions Architect and AI/ML Specialist at AWS, where he helps early-stage startups build and implement AI solutions on the AWS platform. In his role, he architects large-scale GenAI solutions, guides startups in implementing LLM-based applications, and drives the technical adoption of AWS GenAI services globally. He collaborates with field teams on complex customer implementations and authors technical content to enable AWS GenAI adoption. Prior to AWS, Clement founded two successful startups that were acquired, and was recognized with an Inc 30 under 30 award.

Ryan Sachs is a Solutions Architect at AWS, specializing in GenAI application development. Ryan has a background in developing web/mobile applications at companies large and small through REST APIs. Ryan helps early-stage companies solve their business problems by integrating Generative AI technologies into their existing architectures.

Maira Ladeira Tanke is a Tech Lead for Agentic workloads in Amazon Bedrock at AWS, where she enables customers on their journey to develop autonomous AI systems. With over 10 years of experience in AI/ML, Maira partners with enterprise customers at AWS to accelerate the adoption of agentic applications using Amazon Bedrock, helping organizations harness the power of foundation models to drive innovation and business transformation. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.

Mark Roy is a Principal Machine Learning Architect for AWS, helping customers design and build AI/ML solutions. Mark’s work covers a wide range of ML use cases, with a primary interest in computer vision, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, media and entertainment, healthcare, utilities, and manufacturing. Mark holds six AWS Certifications, including the ML Specialty Certification. Prior to joining AWS, Mark was an architect, developer, and technology leader for over 25 years, including 19 years in financial services.

Read More

Boost team productivity with Amazon Q Business Insights

Boost team productivity with Amazon Q Business Insights

Employee productivity is a critical factor in maintaining a competitive advantage. Amazon Q Business offers a unique opportunity to enhance workforce efficiency by providing AI-powered assistance that can significantly reduce the time spent searching for information, generating content, and completing routine tasks. Amazon Q Business is a fully managed, generative AI-powered assistant that lets you build interactive chat applications using your enterprise data, generating answers based on your data or large language model (LLM) knowledge. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple data sources like Salesforce, Jira, and SharePoint into a unified index.

Key benefits for organizations include:

  • Simplified deployment and management – Provides a ready-to-use web experience with no machine learning (ML) infrastructure to maintain or manage
  • Access controls – Makes sure users only access content they have permission to view
  • Accurate query responses – Delivers precise answers with source citations, analyzing enterprise data
  • Privacy and control – Offers comprehensive guardrails and fine-grained access controls
  • Broad connectivity – Supports over 45 native data source connectors (at the time of writing), and provides the ability to create custom connectors

Data privacy and the protection of intellectual property are paramount concerns for most organizations. At Amazon, “Security is Job Zero,” which is why Amazon Q Business is designed with these critical considerations in mind. Your data is not used for training purposes, and the answers provided by Amazon Q Business are based solely on the data users have access to. This makes sure that enterprises can quickly find answers to questions, provide summaries, generate content, and complete tasks across various use cases with complete confidence in data security. Amazon Q Business supports encryption in transit and at rest, allowing end-users to use their own encryption keys for added security. This robust security framework enables end-users to receive immediate, permissions-aware responses from enterprise data sources with citations, helping streamline workplace tasks while maintaining the highest standards of data privacy and protection.

Amazon Q Business Insights provides administrators with details about the utilization and effectiveness of their AI-powered applications. By monitoring utilization metrics, organizations can quantify the actual productivity gains achieved with Amazon Q Business. Understanding how employees interact with and use Amazon Q Business becomes crucial for measuring its return on investment and identifying potential areas for further optimization. Tracking metrics such as time saved and number of queries resolved can provide tangible evidence of the service’s impact on overall workplace productivity. It’s essential for admins to periodically review these metrics to understand how users are engaging with Amazon Q Business and identify potential areas of improvement.

The dashboard enables administrators to track user interactions, including the helpfulness of generated answers through user ratings. By visualizing this feedback, admins can pinpoint instances where users aren’t receiving satisfactory responses. With Amazon Q Business Insights, administrators can diagnose potential issues such as unclear user prompts, misconfigured topics and guardrails, insufficient metadata boosters, or inadequate data source configurations. This comprehensive analytics approach empowers organizations to continuously refine their Amazon Q Business implementation, making sure users receive the most relevant and helpful AI-assisted support.

In this post, we explore Amazon Q Business Insights capabilities and its importance for organizations. We begin with an overview of the available metrics and how they can be used for measuring user engagement and system effectiveness. Then we provide instructions for accessing and navigating this dashboard. Finally, we demonstrate how to integrate Amazon Q Business logs with Amazon CloudWatch, enabling deeper insights into user interaction patterns and identifying areas for improvement. This integration can empower administrators to make data-driven decisions for optimizing their Amazon Q Business implementations and maximizing return on investment (ROI).

Amazon Q Business and Amazon Q Apps analytics dashboards

In this section, we discuss the Amazon Q Business and Amazon Q Apps analytics dashboards.

Overview of key metrics

Amazon Q Business Insights (see the following screenshot) offers a comprehensive set of metrics that provide valuable insights into user engagement and system performance. Key metrics include Total queries and Total conversations, which give an overall picture of system usage. More specific metrics such as Queries per conversation and Queries per user offer deeper insights into user interaction patterns and the complexity of inquiries. The Number of conversations and Number of queries metrics help administrators track adoption and usage trends over time.

Amazon Q Business Insights

The dashboard also provides critical information on system effectiveness through metrics like Unsuccessful query responses and Thumbs down reasons (see the following screenshot), which highlight areas where the AI assistant might be struggling to provide adequate answers. This is complemented by the end-user feedback metric, which includes user ratings and response effectiveness reasons. These metrics are particularly valuable for identifying specific issues users are encountering and areas where the system needs improvement.

Helpfulness cards

Complementing the main dashboard, Amazon Q Business provides a dedicated analytics dashboard for Amazon Q Apps that offers detailed insights into application creation, usage, and adoption patterns. The dashboard tracks user engagement through metrics like:

  • Active users (average unique daily users interacting with Amazon Q Apps)
  • Active creators (average unique daily users creating or updating Amazon Q Apps)

Application metrics include:

  • Total Q Apps (average daily total)
  • Active Q Apps (average number of applications run or updated daily)

These metrics help provide a clear picture of application utilization.

The dashboard also features several trend analyses that help administrators understand usage patterns over time:

  • Q App participants trend shows the relationship between daily active users and creators
  • Q App trend displays the correlation between total applications created and active applications
  • Total Q App runs trend and Published Q App trend track daily execution rates and publication patterns, respectively

These metrics enable administrators to evaluate the performance and adoption of Amazon Q Apps within their organization, helping identify successful implementation patterns and areas needing attention.

Amazon Q Apps insights

These comprehensive metrics are crucial for organizations to optimize their Amazon Q Business implementation and maximize ROI. By analyzing trends in Total queries, Total conversations, and user-specific metrics, administrators can gauge adoption rates and identify potential areas for user training or system improvements. The Unsuccessful query responses and Customer feedback metrics help pinpoint gaps in the knowledge base or areas where the system struggles to provide satisfactory answers. By using these metrics, organizations can make data-driven decisions to enhance the effectiveness of their AI-powered assistant, ultimately leading to improved productivity and user experience across various use cases within the enterprise.

How to access Amazon Q Business Insights dashboards

As an Amazon Q admin, you can view the dashboards on the Amazon Q Business console. You can view the metrics in these dashboards over different pre-selected time intervals. They are available at no additional charge in AWS Regions where the Amazon Q Business service is offered.

To view these dashboards on the Amazon Q Business console, you choose your application environment and navigate to the Insights page. For more details, see Viewing the analytics dashboards.

The following screenshot illustrates how to access the dashboards for Amazon Q Business applications and Amazon Q Apps Insights.

Access to Amazon Q Business Insights

Monitor Amazon Q Business user conversations

In addition to Amazon Q Business and Amazon Q Apps dashboards, you can use Amazon CloudWatch Logs to deliver user conversations and response feedback in Amazon Q Business for you to analyze. These logs can be delivered to multiple destinations, such as CloudWatch, Amazon Simple Storage Service (Amazon S3), or Amazon Data Firehose.

The following diagram depicts the flow of user conversation and feedback responses from Amazon Q Business to Amazon S3. These logs are then queryable using Amazon Athena.

Amazon Q Business logs ingestion

Prerequisites

To set up CloudWatch Logs for Amazon Q Business, make sure you have the appropriate permissions for the intended destination. Refer to Monitoring Amazon Q Business and Q Apps for more details.

Set up log delivery with CloudWatch as a destination

Complete the following steps to set up log delivery with CloudWatch as the destination:

  1. Open the Amazon Q Business console and sign in to your account.
  2. In Applications, choose the name of your application environment.
  3. In the navigation pane, choose Enhancements and choose Admin Controls and Guardrails.
  4. In Log delivery, choose Add and select the option To Amazon CloudWatch Logs.
  5. For Destination log group, enter the log group where the logs will be stored.

Log groups prefixed with /aws/vendedlogs/ will be created automatically. Other log groups must be created prior to setting up a log delivery.

  6. To filter out sensitive or personally identifiable information (PII), choose Additional settings – optional and specify the fields to be logged, output format, and field delimiter.

If you want the users’ email recorded in your logs, it must be added explicitly as a field in Additional settings.

  7. Choose Add.
  8. Choose Enable logging to start streaming conversation and feedback data to your logging destination.

Set up log delivery with Amazon S3 as a destination

To use Amazon S3 as a log destination, you will need an S3 bucket and grant Amazon Q Business the appropriate permissions to write your logs to Amazon S3.

  1. Open the Amazon Q Business console and sign in to your account.
  2. In Applications, choose the name of your application environment.
  3. In the navigation pane, choose Enhancements and choose Admin Controls and Guardrails.
  4. In Log delivery, choose Add and select the option To Amazon S3.
  5. For Destination S3 bucket, enter your bucket.
  6. To filter out sensitive or PII data, choose Additional settings – optional and specify the fields to be logged, output format, and field delimiter.

If you want the users’ email recorded in your logs, it must be added explicitly as a field in Additional settings.

  7. Choose Add.
  8. Choose Enable logging to start streaming conversation and feedback data to your logging destination.

The logs are delivered to your S3 bucket with the following prefix: AWSLogs/<your-aws-account-id>/AmazonQBusinessLogs/<your-aws-region>/<your-q-business-application-id>/year/month/day/hour/. The placeholders are replaced with your AWS account ID, Region, and Amazon Q Business application identifier, respectively.
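
After logs start landing in Amazon S3, you can query them ad hoc. The following is a minimal sketch that runs an Athena query through boto3; it assumes you have already defined a table (here called qbusiness_logs) over the delivered prefix, for example with an AWS Glue crawler, and the database name and results bucket are placeholders.

```python
# Minimal sketch: query delivered Amazon Q Business logs with Athena.
# Assumes a table named qbusiness_logs already exists over the S3 prefix.
import boto3

athena = boto3.client("athena")

query = """
SELECT user_email, user_message, event_timestamp
FROM qbusiness_logs
WHERE log_type = 'Message'
ORDER BY event_timestamp DESC
LIMIT 50
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "your_logs_database"},            # placeholder
    ResultConfiguration={"OutputLocation": "s3://your-athena-results/"},  # placeholder
)
print(execution["QueryExecutionId"])
```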

Set up Data Firehose as a log destination

Amazon Q Business application event logs can also be streamed to Data Firehose as a destination. This can be used for real-time observability. We have excluded setup instructions for brevity.

To use Data Firehose as a log destination, you need to create a Firehose delivery stream (with Direct PUT enabled) and grant Amazon Q Business the appropriate permissions to write your logs to Data Firehose. For example AWS Identity and Access Management (IAM) policies with the required permissions for your specific logging destination, see Enable logging from AWS services.

Protecting sensitive data

You can prevent an AWS console user or group of users from viewing specific CloudWatch log groups, S3 buckets, or Firehose streams by applying specific deny statements in their IAM policies. AWS follows an explicit deny overrides allow model, meaning that if you explicitly deny an action, it will take precedence over allow statements. For more information, see Policy evaluation logic.

Real-world use cases

This section outlines several key use cases for Amazon Q Business Insights, demonstrating how you can use Amazon Q Business operational data to improve your operational posture and make sure Amazon Q Business meets your needs.

Measure ROI using Amazon Q Business Insights

The dashboards offered by Amazon Q Business Insights provide powerful metrics that help organizations quantify their ROI. Consider this common scenario: traditionally, employees spend countless hours searching through siloed documents, knowledge bases, and various repositories to find answers to their questions. This time-consuming process not only impacts productivity but also leads to significant operational costs. With the dashboards provided by Amazon Q Business Insights, administrators can now measure the actual impact of their investment by tracking key metrics such as total questions answered, total conversations, active users, and positive feedback rates. For instance, if an organization knows that it previously took employees an average of 3 minutes to find an answer in their documentation, and with Amazon Q Business this time is reduced to 20 seconds, they can calculate the time savings per query (2 minutes and 40 seconds). When the dashboard shows 1,000 successful queries per week, this translates to approximately 44 hours of productivity gained—time that employees can now dedicate to higher-value tasks. Organizations can then translate these productivity gains into tangible cost savings based on their specific business metrics.
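
The arithmetic behind this estimate is straightforward, and the following short sketch reproduces it so you can plug in your own numbers from the dashboard.

```python
# Reproduce the time-savings estimate from the example above with your own inputs.
baseline_seconds = 3 * 60        # average time to find an answer manually
assisted_seconds = 20            # average time with Amazon Q Business
queries_per_week = 1000          # successful queries reported by the dashboard

saved_seconds = (baseline_seconds - assisted_seconds) * queries_per_week
print(f"Hours saved per week: {saved_seconds / 3600:.1f}")   # ~44.4 hours
```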

Furthermore, the dashboard’s positive feedback rate metric helps validate the quality and accuracy of responses, making sure employees aren’t just getting answers, but reliable ones that help them do their jobs effectively. By analyzing these metrics over time—whether it’s over 24 hours, 7 days, or 30 days—organizations can demonstrate how Amazon Q Business is transforming their knowledge management approach from a fragmented, time-intensive process to an efficient, centralized system. This data-driven approach to measuring ROI not only justifies the investment but also helps identify areas where the service can be optimized for even greater returns.

Organizations looking to quantify financial benefits can develop their own ROI calculators tailored to their specific needs. By combining Amazon Q Business Insights metrics with their internal business variables, teams can create customized ROI models that reflect their unique operational context. Several reference calculators are publicly available online, ranging from basic templates to more sophisticated models, which can serve as a starting point for organizations to build their own ROI analysis tools. This approach enables leadership teams to demonstrate the tangible financial benefits of their Amazon Q Business investment and make data-driven decisions about scaling their implementation, based on their organization’s specific metrics and success criteria.

Enforce financial services compliance with Amazon Q Business analytics

Maintaining regulatory compliance while enabling productivity is a delicate balance. As organizations adopt AI-powered tools like Amazon Q Business, it’s crucial to implement proper controls and monitoring. Let’s explore how a financial services organization can use Amazon Q Business Insights capabilities and logging features to maintain compliance and protect against policy violations.

Consider this scenario: A large investment firm has adopted Amazon Q Business to help their financial advisors quickly access client information, investment policies, and regulatory documentation. However, the compliance team needs to make sure the system isn’t being used to circumvent trading restrictions, particularly around day trading activities that could violate SEC regulations and company policies.

Identify policy violations through Amazon Q Business logs

When the compliance team enables log delivery to CloudWatch with the user_email field selected, Amazon Q Business begins sending detailed event logs to CloudWatch. These logs are separated into two CloudWatch log streams:

  • QBusiness/Chat/Message – Contains user interactions
  • QBusiness/Chat/Feedback – Contains user feedback on responses

For example, the compliance team monitoring the logs might spot this concerning chat from Amazon Q Business:

{ 
    "application_id": "881486e0-c027-40ae-96c2-8bfcf8b99c2a",
    "event_timestamp": "2025-01-30T19:19:23Z",
    "log_type": "Message",
    "conversation_id": "ffd1116d-5a6d-4db0-a00e-331e4eea172f", 
    "user_message": "What are the best strategies for day trading client accounts?", 
    "user_email": "janedoe@example.com"
}

The compliance team can automate this search by creating an alarm on a CloudWatch Metrics Insights query in CloudWatch.
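As one possible way to automate this (a metric filter and alarm, rather than an alarm on a Metrics Insights query), the following boto3 sketch counts chat messages that mention “day trading” and alarms when any appear. The log group name, metric namespace, and alarm settings are placeholders; the filter pattern assumes the JSON log format shown above.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Placeholder: use the vended log group created for your application, e.g.
# /aws/vendedlogs/qbusiness/application/EVENT_LOGS/<your application id>
LOG_GROUP = "/aws/vendedlogs/qbusiness/application/EVENT_LOGS/REPLACE_ME"

# Count chat messages whose user_message mentions "day trading"
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="DayTradingMentions",
    filterPattern='{ ($.log_type = "Message") && ($.user_message = "*day trading*") }',
    metricTransformations=[{
        "metricName": "DayTradingMentions",
        "metricNamespace": "QBusiness/Compliance",
        "metricValue": "1",
        "defaultValue": 0,
    }],
)

# Alarm whenever at least one matching message appears in a 5-minute window
cloudwatch.put_metric_alarm(
    AlarmName="QBusinessDayTradingMentions",
    Namespace="QBusiness/Compliance",
    MetricName="DayTradingMentions",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    # AlarmActions=["arn:aws:sns:..."],  # optionally notify the compliance team
)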

Implement preventative controls

Upon identifying these attempts, the Amazon Q Business admin can implement several immediate controls within Amazon Q Business:

  • Configure blocked phrases to make sure chat responses don’t include these words
  • Configure topic-level controls to define rules that customize how Amazon Q Business responds when a chat message matches a specific topic

The following screenshot depicts configuring topic-level controls for the phrase “day trading.”

Configuring topic-level controls

With the preceding topic-level controls in place, different variations of the phrase “day trading” will be blocked. The following screenshot shows a user entering a variation of the phrase and Amazon Q Business blocking it because of the topic-level control.

Blocking topics

By implementing monitoring and configuring guardrails, the investment firm can maintain its regulatory compliance while still allowing legitimate use of Amazon Q Business for approved activities. The combination of real-time monitoring through logs and preventive guardrails creates a robust defense against potential violations while maintaining detailed audit trails for regulatory requirements.

Analyze user feedback through the Amazon Q Business Insights dashboard

After log delivery has been set up, administrators can use the Amazon Q Business Insights dashboard to get a comprehensive view of user feedback. This dashboard provides valuable data about user experience and areas needing improvement through two key metric cards: Unsuccessful query responses and Thumbs down reasons. The Thumbs down reasons chart offers a detailed breakdown of user feedback, displaying the distribution and frequency of specific reasons why users found responses unhelpful. This granular feedback helps administrators identify patterns in user feedback, whether it’s due to incomplete information, inaccurate responses, or other factors.

Similarly, the Unsuccessful query responses chart distinguishes between queries that failed because answers weren’t found in the knowledge base vs. those blocked by guardrail settings. Both metrics allow administrators to drill down into specific queries through filtering options and detailed views, enabling them to investigate and address issues systematically. This feedback loop is crucial for continuous improvement, helping organizations refine their content, adjust guardrails, and enhance the overall effectiveness of their Amazon Q Business implementation.

To view a breakdown of unsuccessful query responses, follow these steps:

  1. Select your application on the Amazon Q Business console.
  2. Select Amazon Q Business insights under Insights.
  3. Go to the Unsuccessful query responses metrics card and choose View details to resolve issues.

Unsuccessful query responses

A new page will open with two tabs: No answers found and Blocked queries.

  4. You can use these tabs to filter by response type. You can also filter by date using the date filter at the top.

Filtering response queries

  5. Choose any of the queries to view the Query chain.

This will give you more details and context on the conversation the user had when providing their feedback.

Unsuccessful responses details

Analyze user feedback through CloudWatch logs

This use case focuses on identifying and analyzing unsatisfactory feedback from specific users in Amazon Q Business. After log delivery is enabled with the user_email field selected, the Amazon Q Business application sends event logs to the previously created CloudWatch log group. User chat interactions and feedback submissions generate events in the QBusiness/Chat/Message and QBusiness/Chat/Feedback log streams, respectively.

For example, consider if a user asks about their vacation policy and no answer is returned. The user can then choose the thumbs down icon and send feedback to the administrator.

No answer found

The Send your feedback form provides the user the option to categorize the feedback and provide additional details for the administrator to review.

This feedback will be sent to the QBusiness/Chat/Feedback log stream for the administrator to later analyze. See the following example log entry:

{ 
    "application_id": "881486e0-c027-40ae-96c2-8bfcf8b99c2a",
    "event_timestamp": "2025-02-25T18:50:41Z", 
    "log_type": "Feedback", 
    "account_id": "123456789012", 
    "conversation_id": "da2d22bf-86a1-4cc4-a7e2-96663aa05cc2", 
    "system_message_id": "3410aa16-5824-40cf-9d3d-1718cbe5b6bd", 
    "user_message_id": "221f85aa-494b-41e5-940a-034a3d22fba8",
    "user_message": "Can you tell me about my vacation policy?",
    "system_message": "No answer is found.",
    "comment": "There is no response when asking about vacation policies. ", 
    "usefulness_reason": "NOT_HELPFUL", 
    "usefulness": "NOT_USEFUL", 
    "timestamp": "1740509448782",
    "user_email": "jane.doe@example.com" 
}

By analyzing queries that result in unsatisfactory responses (thumbs down), administrators can take actions to improve answer quality, accuracy, and security. This feedback can help identify gaps in data sources. Patterns in feedback can indicate topics where users might benefit from extra training or guidance on effectively using Amazon Q Business.

To address issues identified through feedback analysis, administrators can take several actions:

  • Configure metadata boosting to prioritize more accurate content in responses for queries that consistently receive negative feedback
  • Refine guardrails and chat controls to better align with user expectations and organizational policies
  • Develop targeted training or documentation to help users formulate more effective prompts, including prompt engineering techniques
  • Analyze user prompts to identify potential risks and reinforce proper data handling practices

By monitoring the chat messages and which users are giving “thumbs up” or “thumbs down” responses for the associated prompts, administrators can gain insights into areas where the system might be underperforming, not meeting user expectations, or not complying with your organization’s security policies.

This use case is applicable to the other log delivery options, such as Amazon S3 and Data Firehose.

Group users getting the most unhelpful answers

For administrators seeking more granular insights beyond the standard dashboard, CloudWatch Logs Insights offers a powerful tool for deep-dive analysis of Amazon Q Business usage metrics. By using CloudWatch Log Insights, administrators can create custom queries to extract and analyze detailed performance data. For instance, you can generate a sorted list of users experiencing the most unhelpful interactions, such as identifying which employees are consistently receiving unsatisfactory responses. A typical query might reveal patterns like “User A received 9 unhelpful answers in the last 4 weeks, User B received 5 unhelpful answers, and User C received 3 unhelpful answers.” This level of detailed analysis enables organizations to pinpoint specific user groups or departments that might require additional training, data source configuration, or targeted support to improve their Amazon Q Business experience.

To get these kinds of insights, complete the following steps:

  1. To obtain the Amazon Q Business application ID, open the Amazon Q Business console, open the specific application, and note the application ID on the Application settings page.

This unique identifier will be used to filter log groups in CloudWatch Logs Insights.

  2. On the CloudWatch console, choose Logs Insights under Logs in the navigation pane.

CloudWatch Logs Insights

  3. Under Selection criteria, enter the application ID you previously copied. Choose the log group that follows the pattern /aws/vendedlogs/qbusiness/application/EVENT_LOGS/<your application id>.

Selecting log group

  4. For the data time range, select the range you want to use. In our case, we are using the last 4 weeks, so we choose Custom and specify 4 Weeks.
  5. Replace the default query in the editor with this one:
filter usefulness = "NOT_USEFUL" and ispresent(user_email) 
| stats count(*) as total_unhelpful_answers by user_email

We use the condition NOT_USEFUL because we want to list users getting unhelpful answers. To get a list of users who received helpful answers, change the condition to USEFUL.

  6. Choose Run query.

Query results

With this information, particularly user_email, you can write a new query to analyze the conversation logs where users got unhelpful answers. For example, to list messages where user john_doe gave a thumbs down, replace your query with the following:

filter usefulness = "NOT_USEFUL" and user_email = "john_doe@anycompany.com"

Alternatively, to list unhelpful answers across all users, you could use the following query:

filter usefulness = "NOT_USEFUL"

The results of these queries can help you better understand the context of the feedback users are providing. As mentioned earlier, it might be that your guardrails are too restrictive, your application is missing a data source, or your users’ prompts are not clear enough.
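If you prefer to run these CloudWatch Logs Insights queries programmatically, for example to produce a recurring report, the following boto3 sketch shows one possible approach. The log group name is a placeholder that you would replace with your application’s EVENT_LOGS log group.

import time
import boto3

logs = boto3.client("logs")

# Placeholder: the EVENT_LOGS log group for your application ID
LOG_GROUP = "/aws/vendedlogs/qbusiness/application/EVENT_LOGS/REPLACE_ME"

QUERY = """
filter usefulness = "NOT_USEFUL" and ispresent(user_email)
| stats count(*) as total_unhelpful_answers by user_email
| sort total_unhelpful_answers desc
"""

now = int(time.time())
query_id = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=now - 28 * 24 * 3600,  # last 4 weeks
    endTime=now,
    queryString=QUERY,
)["queryId"]

# Poll until the query completes, then print one row per user
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})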

Clean up

To make sure you don’t incur ongoing costs, clean up resources by removing log delivery configurations, deleting CloudWatch resources, removing the Amazon Q Business application, and deleting any additional AWS resources created after you’re done experimenting with this functionality.

Conclusion

In this post, we explored several ways to improve your operational posture with Amazon Q Business Insights dashboards, the Amazon Q Apps analytics dashboard, and logging with CloudWatch Logs. By using these tools, organizations can gain valuable insights into user engagement patterns, identify areas for improvement, and make sure their Amazon Q Business implementation aligns with security and compliance requirements.

To learn more about Amazon Q Business key usage metrics, refer to Viewing Amazon Q Business and Q App metrics in analytics dashboards. For a comprehensive review of Amazon Q Business CloudWatch logs, including log query examples, refer to Monitoring Amazon Q Business user conversations with Amazon CloudWatch Logs.


About the Authors

Guillermo Mansilla is a Senior Solutions Architect based in Orlando, Florida. Guillermo has developed a keen interest in serverless architectures and generative AI applications. Prior to his current role, he gained over a decade of experience working as a software developer. Away from work, Guillermo enjoys participating in chess tournaments at his local chess club, a pursuit that allows him to exercise his analytical skills in a different context.

Amit Gupta is a Senior Q Business Solutions Architect at AWS. He is passionate about enabling customers with well-architected generative AI solutions at scale.

Jed Lechner is a Specialist Solutions Architect at Amazon Web Services specializing in generative AI solutions with Amazon Q Business and Amazon Q Apps. Prior to his current role, he worked as a Software Engineer at AWS and other companies, focusing on sustainability technology, big data analytics, and cloud computing. Outside of work, he enjoys hiking and photography, and capturing nature’s moments through his lens.

Leo Mentis Raj Selvaraj is a Sr. Specialist Solutions Architect – GenAI at AWS with 4.5 years of experience, currently guiding customers through their GenAI implementation journeys. Previously, he architected data platform and analytics solutions for strategic customers using a comprehensive range of AWS services including storage, compute, databases, serverless, analytics, and ML technologies. Leo also collaborates with internal AWS teams to drive product feature development based on customer feedback, contributing to the evolution of AWS offerings.


Multi-LLM routing strategies for generative AI applications on AWS

Multi-LLM routing strategies for generative AI applications on AWS

Organizations are increasingly using multiple large language models (LLMs) when building generative AI applications. Although an individual LLM can be highly capable, it might not optimally address a wide range of use cases or meet diverse performance requirements. The multi-LLM approach enables organizations to effectively choose the right model for each task, adapt to different domains, and optimize for specific cost, latency, or quality needs. This strategy results in more robust, versatile, and efficient applications that better serve diverse user needs and business objectives.

Deploying a multi-LLM application comes with the challenge of routing each user prompt to an appropriate LLM for the intended task. The routing logic must accurately interpret and map the prompt into one of the pre-defined tasks, and then direct it to the assigned LLM for that task. In this post, we provide an overview of common multi-LLM applications. We then explore strategies for implementing effective multi-LLM routing in these applications, discussing the key factors that influence the selection and implementation of such strategies. Finally, we provide sample implementations that you can use as a starting point for your own multi-LLM routing deployments.

Overview of common multi-LLM applications

The following are some of the common scenarios where you might choose to use a multi-LLM approach in your applications:

  • Multiple task types – Many use cases need to handle different task types within the same application. For example, a marketing content creation application might need to perform task types such as text generation, text summarization, sentiment analysis, and information extraction as part of producing high-quality, personalized content. Each distinct task type will likely require a separate LLM, which might also be fine-tuned with custom data.
  • Multiple task complexity levels – Some applications are designed to handle a single task type, such as text summarization or question answering. However, they must be able to respond to user queries with varying levels of complexity within the same task type. For example, consider a text summarization AI assistant intended for academic research and literature review. Some user queries might be relatively straightforward, simply asking the application to summarize the core ideas and conclusions from a short article. Such queries could be effectively handled by a simple, lower-cost model. In contrast, more complex questions might require the application to summarize a lengthy dissertation by performing deeper analysis, comparison, and evaluation of the research results. These types of queries would be better addressed by more advanced models with greater reasoning capabilities.
  • Multiple task domains – Certain applications need to serve users across multiple domains of expertise. An example is a virtual assistant for enterprise business operations. Such a virtual assistant should support users across various business functions, such as finance, legal, human resources, and operations. To handle this breadth of expertise, the virtual assistant needs to use different LLMs that have been fine-tuned on datasets specific to each respective domain.
  • Software-as-a-service (SaaS) applications with tenant tiering – SaaS applications are often architected to provide different pricing and experiences to a spectrum of customer profiles, referred to as tiers. Through the use of different LLMs tailored to each tier, SaaS applications can offer capabilities that align with the varying needs and budgets of their diverse customer base. For instance, consider an AI-driven legal document analysis system designed for businesses of varying sizes, offering two primary subscription tiers: Basic and Pro. The Basic tier would use a smaller, more lightweight LLM well-suited for straightforward tasks, such as performing simple document searches or generating summaries of uncomplicated legal documents. The Pro tier, however, would require a highly customized LLM that has been trained on specific data and terminology, enabling it to assist with intricate tasks like drafting complex legal documents.

Multi-LLM routing strategies

In this section, we explore two main approaches to routing requests to different LLMs: static routing and dynamic routing.

Static routing

One effective strategy for directing user prompts to appropriate LLMs is to implement distinct UI components within the same interface or separate interfaces tailored to specific tasks. For example, an AI-powered productivity tool for an ecommerce company might feature dedicated interfaces for different roles, such as content marketers and business analysts. The content marketing interface incorporates two main UI components: a text generation module for creating social media posts, emails, and blogs, and an insight extraction module that identifies the most relevant keywords and phrases from customer reviews to improve content strategy. Meanwhile, the business analysis interface would focus on text summarization for analyzing various business documents. This is illustrated in the following figure.

Multi-LLM static prompt routing

This approach works well for applications where the user experience supports having a distinct UI component for each task. It also allows for a flexible and modular design, where new LLMs can be quickly plugged into or swapped out from a UI component without disrupting the overall system. However, the static nature of this approach implies that the application might not be easily adaptable to evolving user requirements. Adding a new task would necessitate the development of a new UI component in addition to the selection and integration of a new model.

Dynamic routing

In some use cases, such as virtual assistants and multi-purpose chatbots, user prompts usually enter the application through a single UI component. For instance, consider a customer service AI assistant that handles three types of tasks: technical support, billing support, and pre-sale support. Each of these tasks requires its own custom LLM to provide appropriate responses. In this scenario, you need to implement a dynamic routing layer to intercept each incoming request and direct it to the downstream LLM, which is best suited to handle the intended task within that prompt. This is illustrated in the following figure.

Multi-LLM dynamic prompt routing

In this section, we discuss common approaches for implementing this dynamic routing layer: LLM-assisted routing, semantic routing, and a hybrid approach.

LLM-assisted routing

This approach employs a classifier LLM at the application’s entry point to make routing decisions. The LLM’s ability to comprehend complex patterns and contextual subtleties makes this approach well-suited for applications requiring fine-grained classifications across task types, complexity levels, or domains. However, this method presents trade-offs. Although it offers sophisticated routing capabilities, it introduces additional costs and latency. Furthermore, maintaining the classifier LLM’s relevance as the application evolves can be demanding. Careful model selection, fine-tuning, configuration, and testing might be necessary to balance the impact of latency and cost with the desired classification accuracy.

Semantic routing

This approach uses semantic search as an alternative to using a classifier LLM for prompt classification and routing in multi-LLM systems. Semantic search uses embeddings to represent prompts as numerical vectors. The system then makes routing decisions by measuring the similarity between the user’s prompt embedding and the embeddings for a set of reference prompts, each representing a different task category. The user prompt is then routed to the LLM associated with the task category of the reference prompt that has the closest match.

Although semantic search doesn’t provide explicit classifications like a classifier LLM, it succeeds at identifying broad similarities and can effectively handle variations in a prompt’s wording. This makes it particularly well-suited for applications where routing can be based on coarse-grained classification of prompts, such as task domain classification. It also excels in scenarios with a large number of task categories or when new domains are frequently introduced, because it can quickly accommodate updates by simply adding new prompts to the reference prompt set.

Semantic routing offers several advantages, such as efficiency gained through fast similarity search in vector databases, and scalability to accommodate a large number of task categories and downstream LLMs. However, it also presents some trade-offs. Having adequate coverage for all possible task categories in your reference prompt set is crucial for accurate routing. Additionally, the increased system complexity due to the additional components, such as the vector database and embedding LLM, might impact overall performance and maintainability. Careful design and ongoing maintenance are necessary to address these challenges and fully realize the benefits of the semantic routing approach.

Hybrid approach

In certain scenarios, a hybrid approach combining both techniques might also prove highly effective. For instance, in applications with a large number of task categories or domains, you can use semantic search for initial broad categorization or domain matching, followed by classifier LLMs for more fine-grained classification within those broad categories. This initial filtering allows you to use a simpler, more focused classifier LLM for the final routing decision.

Consider, for instance, a customer service AI assistant handling a diverse range of inquiries. In this context, semantic routing could initially route the user’s prompt to the appropriate department—be it billing, technical support, or sales. After the broad category is established, a dedicated classifier LLM for that specific department takes over. This specialized LLM, which can be trained on nuanced distinctions within its domain, can then determine crucial factors such as task complexity or urgency. Based on this fine-grained analysis, the prompt is then routed to the most appropriate LLM or, when necessary, escalated to a human agent.

This hybrid approach combines the scalability and flexibility of semantic search with the precision and context-awareness of classifier LLMs. The result is a robust, efficient, and highly accurate routing mechanism capable of adapting to the complexities and diverse needs of modern multi-LLM applications.
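To make the chaining concrete, the following minimal Python sketch wires a broad first-stage router to per-department classifiers and a final model table. The stage implementations here are toy stand-ins (keyword matching and a length check); in practice, the first stage would be the semantic search and the second stage the classifier LLMs described above, and the model IDs are placeholders.

from typing import Callable, Dict, Tuple

def make_hybrid_router(
    semantic_stage: Callable[[str], str],
    classifiers: Dict[str, Callable[[str], str]],
    model_table: Dict[Tuple[str, str], str],
    escalation_label: str = "escalate",
) -> Callable[[str], str]:
    """Chain a broad semantic stage with a per-department classifier."""
    def route(prompt: str) -> str:
        department = semantic_stage(prompt)       # stage 1: broad category
        label = classifiers[department](prompt)   # stage 2: fine-grained label
        if label == escalation_label:
            return "human-agent"                  # hand off instead of an LLM
        return model_table[(department, label)]
    return route

# Usage with toy stand-ins for both stages
router = make_hybrid_router(
    semantic_stage=lambda p: "billing" if "invoice" in p.lower() else "technical",
    classifiers={
        "billing": lambda p: "simple",
        "technical": lambda p: "complex" if len(p.split()) > 20 else "simple",
    },
    model_table={
        ("billing", "simple"): "small-model-id",
        ("billing", "complex"): "large-model-id",
        ("technical", "simple"): "small-model-id",
        ("technical", "complex"): "large-model-id",
    },
)
print(router("I have a question about my last invoice"))  # -> small-model-id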

Implementation of dynamic routing

In this section, we explore different approaches to implementing dynamic routing on AWS, covering both built-in routing features and custom solutions that you can use as a starting point to build your own.

Intelligent prompt routing with Amazon Bedrock

Amazon Bedrock is a fully managed service that makes high-performing LLMs and other foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage infrastructure.

If you’re building applications with Amazon Bedrock LLMs and need a fully managed solution with straightforward routing capabilities, Amazon Bedrock Intelligent Prompt Routing offers an efficient way to implement dynamic routing. This feature of Amazon Bedrock provides a single serverless endpoint for efficiently routing requests between different LLMs within the same model family. It uses advanced prompt matching and model understanding techniques to predict the performance of each model for every request. Amazon Bedrock then dynamically routes each request to the model that it predicts is most likely to give the desired response at the lowest cost. Intelligent Prompt Routing can reduce costs by up to 30% without compromising on accuracy. As of this writing, Amazon Bedrock supports routing within the Anthropic’s Claude and Meta’s Llama model families. For example, Amazon Bedrock can intelligently route requests between Anthropic’s Claude 3.5 Sonnet and Claude 3 Haiku depending on the complexity of the prompt, as illustrated in the following figure. Similarly, Amazon Bedrock can route requests between Meta’s Llama 3.1 70B and 8B.

Amazon Bedrock Intelligent Prompt Routing

This architecture workflow includes the following steps:

  1. A user submits a question through a web or mobile application.
  2. The Amazon Bedrock intelligent prompt router (in this example, the router for the Anthropic model family) predicts the performance of each downstream LLM, selecting the model that it predicts will offer the best combination of response quality and cost.
  3. Amazon Bedrock routes the request to the selected LLM, and returns the response along with information about the model.

For detailed implementation guidelines and examples of Intelligent Prompt Routing on Amazon Bedrock, see Reduce costs and latency with Amazon Bedrock Intelligent Prompt Routing and prompt caching.
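Based on the Amazon Bedrock documentation, using a prompt router at the API level looks much like calling a single model: you pass the prompt router’s ARN in place of a model ID when calling the Converse (or InvokeModel) API. The following is a hedged sketch with a placeholder ARN.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder: replace with a prompt router ARN from your account, for example
# one of the default routers for the Anthropic model family.
PROMPT_ROUTER_ARN = "arn:aws:bedrock:us-east-1:111122223333:default-prompt-router/REPLACE_ME"

response = bedrock.converse(
    modelId=PROMPT_ROUTER_ARN,  # the router selects the model that serves this request
    messages=[{"role": "user", "content": [{"text": "Summarize the key points of this document."}]}],
)

print(response["output"]["message"]["content"][0]["text"])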

Custom prompt routing

If your LLMs are hosted outside Amazon Bedrock, such as on Amazon SageMaker or Amazon Elastic Kubernetes Service (Amazon EKS), or you require routing customization, you will need to develop a custom routing solution.

This section provides sample implementations for both LLM-assisted and semantic routing. We discuss the solution’s mechanics, key design decisions, and how to use it as a foundation for developing your own custom routing solutions. For detailed deployment instructions for each routing solution, refer to the GitHub repo. The provided code in this repo is meant to be used in a development environment. Before migrating any of the provided solutions to production, we recommend following the AWS Well-Architected Framework.

LLM-assisted routing

In this solution, we demonstrate an educational tutor assistant that helps students in two domains of history and math. To implement the routing layer, the application uses the Amazon Titan Text G1 – Express model on Amazon Bedrock to classify the questions based on their topic to either history or math. History questions are routed to a more cost-effective and faster LLM such as Anthropic’s Claude 3 Haiku on Amazon Bedrock. Math questions are handled by a more powerful LLM, such as Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock, which is better suited for complex problem-solving, in-depth explanations, and multi-step reasoning. If the classifier LLM is unsure whether a question belongs to the history or math category, it defaults to classifying it as math.

The architecture of this system is illustrated in the following figure. The use of Amazon Titan and Anthropic models on Amazon Bedrock in this demonstration is optional. You can substitute them with other models deployed on or outside of Amazon Bedrock.

LLM-assisted prompt routing

This architecture workflow includes the following steps:

  1. A user submits a question through a web or mobile application, which forwards the query to Amazon API Gateway.
  2. When API Gateway receives the request, it triggers an AWS Lambda function.
  3. The Lambda function sends the question to the classifier LLM to determine whether it is a history or math question.
  4. Based on the classifier LLM’s decision, the Lambda function routes the question to the appropriate downstream LLM, which will generate an answer and return it to the user.
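The following is a minimal sketch of what the routing logic in steps 3 and 4 could look like inside the Lambda function, using the Amazon Bedrock Converse API. The classification prompt wording and the default-to-math fallback are illustrative; this is not the code from the GitHub repo.

import boto3

bedrock = boto3.client("bedrock-runtime")

CLASSIFIER_MODEL_ID = "amazon.titan-text-express-v1"
ROUTES = {
    "history": "anthropic.claude-3-haiku-20240307-v1:0",
    "math": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def classify(question: str) -> str:
    """Ask the classifier LLM for a one-word topic label."""
    prompt = (
        "Classify the following question as either 'history' or 'math'. "
        "Respond with exactly one word.\n\nQuestion: " + question
    )
    response = bedrock.converse(
        modelId=CLASSIFIER_MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 5, "temperature": 0},
    )
    label = response["output"]["message"]["content"][0]["text"].strip().lower()
    return label if label in ROUTES else "math"  # default to math when unsure

def handle_question(question: str) -> dict:
    """Route the question to the downstream LLM and return the answer."""
    category = classify(question)
    response = bedrock.converse(
        modelId=ROUTES[category],
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return {
        "question_classification": category,
        "answer": response["output"]["message"]["content"][0]["text"],
    }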

Follow the deployment steps in the GitHub repo to create the necessary infrastructure for LLM-assisted routing and run tests to generate responses. The following output shows the response to the question “What year did World War II end?”

{
  "answer": "World War II ended in 1945.",
  "question_classification": "history",
  "classifier_LLM": "amazon.titan-text-express-v1",
  "classification_time": 0.5374360084533691,
  "answerer_LLM": "anthropic.claude-3-haiku-20240307-v1:0",
  "answer_generation_time": 0.2473313808441162,
  "total_response_time": 0.7847845554351807
}

The question was correctly classified as a history question, with the classification process taking approximately 0.53 seconds. The question was then routed to and answered by Anthropic’s Claude 3 Haiku, which took around 0.25 seconds. In total, it took about 0.78 seconds to receive the response.

Next, we will ask a math question. The following output shows the response to the question “Solve the quadratic equation: 2x^2 – 5x + 3 = 0.”

{
  "answer": "To solve this quadratic equation, we'll use the quadratic formula: x = [-b ± √(b² - 4ac)] / 2a\n\nWhere a = 2, b = -5, and c = 3\n\nSteps:\n1. Substitute values into the formula\n2. Simplify under the square root\n3. Calculate the two solutions\n\nx = [5 ± √(25 - 24)] / 4\nx = (5 ± √1) / 4\nx = (5 ± 1) / 4",
  "question_classification": "math",
  "classifier_LLM": "amazon.titan-text-express-v1",
  "classification_time": 0.5975513458251953,
  "answerer_LLM": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "answer_generation_time": 2.3191726207733154,
  "total_response_time": 2.9167449474334717
}

The question was correctly classified as a math question, with the classification process taking approximately 0.59 seconds. The question was then correctly routed to and answered by Anthropic’s Claude 3.5 Sonnet, which took around 2.3 seconds. In total, it took about 2.9 seconds to receive the response.

Semantic routing

In this solution, we focus on the same educational tutor assistant use case as in LLM-assisted routing. To implement the routing layer, you first need to create a set of reference prompts that represents the full spectrum of history and math topics you intend to cover. This reference set serves as the foundation for the semantic matching process, enabling the application to correctly categorize incoming queries. As an illustrative example, we’ve provided a sample reference set with five questions for each of the history and math topics. In a real-world implementation, you would likely need a much larger and more diverse set of reference questions to have robust routing performance.

History:
    - What were the main causes of World War I?
    - What region of the United States saw the largest economic growth as a result of the Industrial Revolution?
    - Who was the first man on the moon?
    - What country gifted the United States with the Statue of Liberty?
    - What major event sparked the beginning of the Great Depression in 1929?
Math:
    - Solve the quadratic equation: 2x^2 + 5x - 12 = 0.
    - Find the derivative of f(x) = 3x^4 - 2x^3 + 5x - 7.
    - In a right triangle, if one angle is 30° and the hypotenuse is 10 cm, find the lengths of the other two sides.
    - Determine the area of the region bounded by y = x^2, y = 4, and the y-axis.
    - If log_2(x) + log_2(y) = 5 and xy = 64, find the values of x and y.

You can use the Amazon Titan Text Embeddings V2 model on Amazon Bedrock to convert the questions in the reference set into embeddings. You can find the code for this conversion in the GitHub repo. These embeddings are then saved as a reference index inside an in-memory FAISS vector store, which is deployed as a Lambda layer.
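A hedged sketch of that embedding and index-building step follows. It assumes the invoke_model request and response shape for amazon.titan-embed-text-v2:0 (an inputText field in, an embedding array out), uses an abbreviated reference set, and is not the exact code from the GitHub repo.

import json
import boto3
import faiss  # pip install faiss-cpu
import numpy as np

bedrock = boto3.client("bedrock-runtime")
EMBEDDING_MODEL_ID = "amazon.titan-embed-text-v2:0"

def embed(text: str) -> np.ndarray:
    """Convert a question into an embedding with Amazon Titan Text Embeddings V2."""
    response = bedrock.invoke_model(
        modelId=EMBEDDING_MODEL_ID,
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"], dtype="float32")

# Reference prompts and their category labels (abbreviated from the set above)
reference_questions = [
    ("What were the main causes of World War I?", "history"),
    ("Who was the first man on the moon?", "history"),
    ("Solve the quadratic equation: 2x^2 + 5x - 12 = 0.", "math"),
    ("Find the derivative of f(x) = 3x^4 - 2x^3 + 5x - 7.", "math"),
]

labels = [label for _, label in reference_questions]
vectors = np.stack([embed(q) for q, _ in reference_questions])

# Build the in-memory FAISS index over the reference embeddings and persist it
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)
faiss.write_index(index, "reference.index")   # packaged as a Lambda layer in the solution
with open("labels.json", "w") as f:
    json.dump(labels, f)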

The architecture of this system is illustrated in the following figure. The use of Amazon Titan and Anthropic models on Amazon Bedrock in this demonstration is optional. You can substitute them with other models deployed on or outside of Amazon Bedrock.

Semantic prompt routing

This architecture workflow includes the following steps:

  1. A user submits a question through a web or mobile application, which forwards the query to API Gateway.
  2. When API Gateway receives the request, it triggers a Lambda function.
  3. The Lambda function sends the question to the Amazon Titan Text Embeddings V2 model to convert it to an embedding. It then performs a similarity search on the FAISS index to find the closest matching question in the reference index, and returns the corresponding category label.
  4. Based on the retrieved category, the Lambda function routes the question to the appropriate downstream LLM, which will generate an answer and return it to the user.
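At query time, steps 3 and 4 amount to embedding the incoming question, finding the nearest reference prompt, and calling the model mapped to its category. The following minimal sketch reuses the embed helper, FAISS index, labels, and bedrock client from the previous snippet, with the same illustrative model IDs as in the LLM-assisted example.

# Reuses embed(), index, labels, and the bedrock client defined earlier.
ROUTES = {
    "history": "anthropic.claude-3-haiku-20240307-v1:0",
    "math": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def semantic_classify(question: str) -> str:
    """Return the category of the closest reference prompt."""
    query_vector = embed(question).reshape(1, -1)
    _, nearest = index.search(query_vector, 1)   # nearest neighbor in the FAISS index
    return labels[int(nearest[0][0])]

def handle_question(question: str) -> dict:
    category = semantic_classify(question)
    response = bedrock.converse(
        modelId=ROUTES[category],
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return {
        "question_classification": category,
        "answer": response["output"]["message"]["content"][0]["text"],
    }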

Follow the deployment steps in the GitHub repo to create the necessary infrastructure for semantic routing and run tests to generate responses. The following output shows the response to the question “What year did World War II end?”

{
  "answer": "World War II ended in 1945.",
  "question_classification": "history",
  "embedding_LLM": "amazon.titan-embed-text-v2:0",
  "classification_time": 0.1058051586151123,
  "answerer_LLM": "anthropic.claude-3-haiku-20240307-v1:0",
  "answer_generation_time": 0.25673604011535645,
  "total_response_time": 0.36255788803100586
}

The question was correctly classified as a history question and the classification took about 0.1 seconds. The question was then routed and answered by Anthropic’s Claude 3 Haiku, which took about 0.25 seconds, resulting in a total of about 0.36 seconds to get the response back.

Next, we ask a math question. The following output shows the response to the question “Solve the quadratic equation: 2x^2 – 5x + 3 = 0.”

{
  "answer": "To solve this quadratic equation, we'll use the quadratic formula: x = [-b ± √(b² - 4ac)] / 2a\n\nWhere a = 2, b = -5, and c = 3\n\nSteps:\n1. Substitute the values into the formula\n2. Simplify inside the square root\n3. Calculate the two solutions\n\nx = [5 ± √(25 - 24)] / 4\nx = (5 ± √1) / 4\nx = (5 ± 1) / 4",
  "question_classification": "math",
  "embedding_LLM": "amazon.titan-embed-text-v2:0",
  "classification_time": 0.09248232841491699,
  "answerer_LLM": "anthropic.claude-3-5-sonnet-20240620-v1:0",
  "answer_generation_time": 2.6957757472991943,
  "total_response_time": 2.7882847785949707
}

The question was correctly classified as a math question and the classification took about 0.1 seconds. The question was then correctly routed to and answered by Anthropic’s Claude 3.5 Sonnet, which took about 2.7 seconds, resulting in a total of about 2.8 seconds to get the response back.

Additional considerations for custom prompt routing

The provided solutions use example LLMs for classification in LLM-assisted routing and for text embedding in semantic routing. However, you will likely need to evaluate multiple LLMs to select the one best suited for your specific use case. Using these LLMs incurs additional cost and latency, so it’s critical that the benefits of dynamically routing queries to the appropriate LLM justify the overhead introduced by implementing the custom prompt routing system.

For some use cases, especially those that require specialized domain knowledge, consider fine-tuning the classifier LLM in LLM-assisted routing and the embedding LLM in semantic routing with your own proprietary data. This can increase the quality and accuracy of the classification, leading to better routing decisions.

Additionally, the semantic routing solution used FAISS as an in-memory vector database for similarity search. However, you might need to evaluate alternative vector databases on AWS that better fit your use case in terms of scale, latency, and cost requirements. It will also be important to continuously gather prompts from your users and iterate on the reference prompt set. This will help make sure that it reflects the actual types of questions your users are asking, thereby increasing the accuracy of your similarity search classification over time.

Clean up

To avoid incurring additional costs, clean up the resources you created for LLM-assisted routing and semantic routing by running the following command for each of the respective created stacks:

cdk destroy

Cost analysis for custom prompt routing

This section analyzes the implementation cost and potential savings for the two custom prompt routing solutions, using an example traffic scenario for our educational tutor assistant application.

Our calculations are based on the following assumptions:

  • The application is deployed in the US East (N. Virginia) AWS Region and receives 50,000 history questions and 50,000 math questions per day.
  • For LLM-assisted routing, the classifier LLM processes 150 input tokens and generates 1 output token per question.
  • For semantic routing, the embedding LLM processes 150 input tokens per question.
  • The answerer LLM processes 150 input tokens and generates 300 output tokens per question.
  • Amazon Titan Text G1 – Express model performs question classification in LLM-assisted routing at $0.0002 per 1,000 input tokens, with negligible output costs (1 token per question).
  • Amazon Titan Text Embeddings V2 model generates question embedding in semantic routing at $0.00002 per 1,000 input tokens.
  • Anthropic’s Claude 3 Haiku handles history questions at $0.00025 per 1,000 input tokens and $0.00125 per 1,000 output tokens.
  • Anthropic’s Claude 3.5 Sonnet answers math questions at $0.003 per 1,000 input tokens and $0.015 per 1,000 output tokens.
  • The Lambda runtime is 3 seconds per math question and 1 second per history question.
  • Lambda uses 1024 MB of memory and 512 MB of ephemeral storage, with API Gateway configured as a REST API.

The following table summarizes the cost of answer generation by LLM for both routing strategies.

Question Type Total Input Tokens/Month Total Output Tokens/Month Answer Generation Cost/Month
History 225,000,000 450,000,000 $618.75
Math 225,000,000 450,000,000 $7,425

The following table summarizes the cost of dynamic routing implementation for both routing strategies.

  LLM-Assisted Routing Semantic Routing
Question Type Total Input Tokens/Month Classifier LLM Cost/Month Lambda + API Gateway Cost/Month Embedding LLM Cost/Month Lambda + API Gateway Cost/Month
History + Math 450,000,000 $90 $98.9 $9 $98.9

The first table shows that using Anthropic’s Claude 3 Haiku for history questions costs $618.75 per month, whereas using Anthropic’s Claude 3.5 Sonnet for math questions costs $7,425 per month. This demonstrates that routing questions to the appropriate LLM can achieve significant cost savings compared to using the more expensive model for all of the questions. The second table shows that these savings come with an implementation cost of $188.9/month for LLM-assisted routing and $107.9/month for semantic routing, which are relatively small compared to the potential savings in answer generation costs.
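The following short Python snippet reproduces the arithmetic behind these tables from the stated assumptions, so you can substitute your own traffic volumes and pricing.

# Monthly token volumes from the assumptions: 50,000 questions/day per topic,
# 30 days/month, 150 input and 300 output tokens per answered question.
questions_per_month = 50_000 * 30                 # per topic
input_tokens = questions_per_month * 150          # 225,000,000
output_tokens = questions_per_month * 300         # 450,000,000

def llm_cost(in_tokens, out_tokens, in_price_per_1k, out_price_per_1k):
    return in_tokens / 1000 * in_price_per_1k + out_tokens / 1000 * out_price_per_1k

history_cost = llm_cost(input_tokens, output_tokens, 0.00025, 0.00125)  # Claude 3 Haiku
math_cost = llm_cost(input_tokens, output_tokens, 0.003, 0.015)         # Claude 3.5 Sonnet
classifier_cost = 2 * input_tokens / 1000 * 0.0002                      # Titan Text G1 - Express
embedding_cost = 2 * input_tokens / 1000 * 0.00002                      # Titan Text Embeddings V2

print(f"History answers: ${history_cost:,.2f}/month")    # $618.75
print(f"Math answers: ${math_cost:,.2f}/month")          # $7,425.00
print(f"Classifier LLM: ${classifier_cost:,.2f}/month")  # $90.00
print(f"Embedding LLM: ${embedding_cost:,.2f}/month")    # $9.00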

Selecting the right dynamic routing implementation

The decision on which dynamic routing implementation is best suited for your use case largely depends on three key factors: model hosting requirements, cost and operational overhead, and desired level of control over routing logic. The following table outlines these dimensions for Amazon Bedrock Intelligent Prompt Routing and custom prompt routing.

Design Criteria Amazon Bedrock Intelligent Prompt Routing Custom Prompt Routing
Model Hosting Limited to Amazon Bedrock hosted models within the same model family Flexible: can work with models hosted outside of Amazon Bedrock
Operational Management Fully managed service with built-in optimization Requires custom implementation and optimization
Routing Logic Control Limited customization, predefined optimization for cost and performance Full control over routing logic and optimization criteria

These approaches aren’t mutually exclusive. You can implement hybrid solutions, using Amazon Bedrock Intelligent Prompt Routing for certain workloads while maintaining custom prompt routing for others with LLMs hosted outside Amazon Bedrock or where more control over the routing logic is needed.

Conclusion

This post explored multi-LLM strategies in modern AI applications, demonstrating how using multiple LLMs can enhance organizational capabilities across diverse tasks and domains. We examined two primary routing strategies: static routing through dedicated interfaces and dynamic routing using prompt classification at the application’s point of entry.

For dynamic routing, we covered two custom prompt routing strategies, LLM-assisted and semantic routing, and discussed exemplary implementations for each. These techniques enable customized routing logic for LLMs, regardless of their hosting platform. We also discussed Amazon Bedrock Intelligent Prompt Routing as an alternative implementation for dynamic routing, which optimizes response quality and cost by routing prompts across different LLMs within Amazon Bedrock.

Although these dynamic routing approaches offer powerful capabilities, they require careful consideration of engineering trade-offs, including latency, cost optimization, and system maintenance complexity. By understanding these trade-offs, along with implementation best practices like model evaluation, cost analysis, and domain fine-tuning, you can architect a multi-LLM routing solution optimized for your application’s needs.


About the Authors

Nima Seifi is a Senior Solutions Architect at AWS, based in Southern California, where he specializes in SaaS and GenAIOps. He serves as a technical advisor to startups building on AWS. Prior to AWS, he worked as a DevOps architect in the ecommerce industry for over 5 years, following a decade of R&D work in mobile internet technologies. Nima has authored over 20 technical publications and holds 7 US patents. Outside of work, he enjoys reading, watching documentaries, and taking beach walks.

Manish Chugh is a Principal Solutions Architect at AWS based in San Francisco, CA. He specializes in machine learning and is a generative AI lead for NAMER startups team. His role involves helping AWS customers build scalable, secure, and cost-effective machine learning and generative AI workloads on AWS. He regularly presents at AWS conferences and partner events. Outside of work, he enjoys hiking on East SF Bay trails, road biking, and watching (and playing) cricket.


How iFood built a platform to run hundreds of machine learning models with Amazon SageMaker Inference

How iFood built a platform to run hundreds of machine learning models with Amazon SageMaker Inference

Headquartered in São Paulo, Brazil, iFood is a national private company and the leader in food-tech in Latin America, processing millions of orders monthly. iFood has stood out for its strategy of incorporating cutting-edge technology into its operations. With the support of AWS, iFood has developed a robust machine learning (ML) inference infrastructure, using services such as Amazon SageMaker to efficiently create and deploy ML models. This partnership has allowed iFood not only to optimize its internal processes, but also to offer innovative solutions to its delivery partners and restaurants.

iFood’s ML platform comprises a set of tools, processes, and workflows developed with the following objectives:

  • Accelerate the development and training of AI/ML models, making them more reliable and reproducible
  • Make sure that deploying these models to production is reliable, scalable, and traceable
  • Facilitate the testing, monitoring, and evaluation of models in production in a transparent, accessible, and standardized manner

To achieve these objectives, iFood uses SageMaker, which simplifies the training and deployment of models. Additionally, the integration of SageMaker features in iFood’s infrastructure automates critical processes, such as generating training datasets, training models, deploying models to production, and continuously monitoring their performance.

In this post, we show how iFood uses SageMaker to revolutionize its ML operations. By harnessing the power of SageMaker, iFood streamlines the entire ML lifecycle, from model training to deployment. This integration not only simplifies complex processes but also automates critical tasks.

AI inference at iFood

iFood has harnessed the power of a robust AI/ML platform to elevate the customer experience across its diverse touchpoints. Using the cutting edge of AI/ML capabilities, the company has developed a suite of transformative solutions to address a multitude of customer use cases:

  • Personalized recommendations – At iFood, AI-powered recommendation models analyze a customer’s past order history, preferences, and contextual factors to suggest the most relevant restaurants and menu items. This personalized approach makes sure customers discover new cuisines and dishes tailored to their tastes, improving satisfaction and driving increased order volumes.
  • Intelligent order tracking – iFood’s AI systems track orders in real time, predicting delivery times with a high degree of accuracy. By understanding factors like traffic patterns, restaurant preparation times, and courier locations, the AI can proactively notify customers of their order status and expected arrival, reducing uncertainty and anxiety during the delivery process.
  • Automated customer service – To handle the thousands of daily customer inquiries, iFood has developed an AI-powered chatbot that can quickly resolve common issues and questions. This intelligent virtual agent understands natural language, accesses relevant data, and provides personalized responses, delivering fast and consistent support without overburdening the human customer service team.
  • Grocery shopping assistance – Integrating advanced language models, iFood’s app allows customers to simply speak or type their recipe needs or grocery list, and the AI will automatically generate a detailed shopping list. This voice-enabled grocery planning feature saves customers time and effort, enhancing their overall shopping experience.

Through these diverse AI-powered initiatives, iFood is able to anticipate customer needs, streamline key processes, and deliver a consistently exceptional experience—further strengthening its position as the leading food-tech platform in Latin America.

Solution overview

The following diagram illustrates iFood’s legacy architecture, which had separate workflows for data science and engineering teams, creating challenges in efficiently deploying accurate, real-time machine learning models into production systems.

In the past, the data science and engineering teams at iFood operated independently. Data scientists would build models using notebooks, adjust weights, and publish them onto services. Engineering teams would then struggle to integrate these models into production systems. This disconnection between the two teams made it challenging to deploy accurate real-time ML models.

To overcome this challenge, iFood built an internal ML platform that helped bridge this gap. This platform has streamlined the workflow, providing a seamless experience for creating, training, and delivering models for inference. It provides a centralized environment where data scientists can build, train, and deploy models in a way that fits the teams’ development workflow. Engineering teams can then consume these models and integrate them into applications for both online and offline use, enabling a more efficient and streamlined workflow.

By breaking down the barriers between data science and engineering, AWS AI platforms empowered iFood to use the full potential of their data and accelerate the development of AI applications. The automated deployment and scalable inference capabilities provided by SageMaker made sure that models were readily available to power intelligent applications and provide accurate predictions on demand. This centralization of ML services as a product has been a game changer for iFood, allowing them to focus on building high-performing models rather than the intricate details of inference.

One of the core capabilities of iFood’s ML platform is the ability to provide the infrastructure to serve predictions. Several use cases are supported by the inference made available through ML Go!, which is responsible for deploying SageMaker pipelines and endpoints. The former are used to schedule offline prediction jobs, and the latter are employed to create model services to be consumed by the application services. The following diagram illustrates iFood’s updated architecture, which incorporates an internal ML platform built to streamline workflows between data science and engineering teams, enabling efficient deployment of machine learning models into production systems.

Integrating model deployment into the service development process was a key initiative to enable data scientists and ML engineers to deploy and maintain those models. The ML platform empowers the building and evolution of ML systems. Several other integrations with other important platforms, like the feature platform and data platform, were delivered to improve the experience for users as a whole. The process of consuming ML-based decisions was streamlined, but it doesn’t end there. iFood’s ML platform, ML Go!, is now focusing on new inference capabilities, supported by recent features whose ideation and development the iFood team helped drive. The following diagram illustrates the final architecture of iFood’s ML platform, showcasing how model deployment is integrated into the service development process, the platform’s connections with feature and data platforms, and its focus on new inference capabilities.

One of the biggest changes is the creation of a single abstraction for connecting with SageMaker endpoints and jobs, called the ML Go! Gateway, along with a separation of concerns within the endpoints through the use of the Inference Components feature, which makes serving faster and more efficient. In this new inference structure, the endpoints are also managed by the ML Go! CI/CD system, leaving the pipelines to handle only model promotions rather than the infrastructure itself. This reduces the lead time for changes and the change failure rate across deployments.

Using SageMaker Inference model serving containers

One of the key features of modern machine learning platforms is the standardization of machine learning and AI services. By encapsulating models and dependencies as Docker containers, these platforms ensure consistency and portability across different environments and stages of ML. Using SageMaker, data scientists and developers can use pre-built Docker containers, making it straightforward to deploy and manage ML services. As a project progresses, they can spin up new instances and configure them according to their specific requirements. SageMaker provides Docker containers that are designed to work seamlessly with SageMaker. These containers provide a standardized and scalable environment for running ML workloads on SageMaker.

SageMaker provides a set of pre-built containers for popular ML frameworks and algorithms, such as TensorFlow, PyTorch, XGBoost, and many others. These containers are optimized for performance and include all the necessary dependencies and libraries pre-installed, making it straightforward to get started with your ML projects. In addition to the pre-built containers, it provides options to bring your own custom containers to SageMaker, which include your specific ML code, dependencies, and libraries. This can be particularly useful if you’re using a less common framework or have specific requirements that aren’t met by the pre-built containers.

iFood was highly focused on using custom containers for the training and deployment of ML workloads, providing a consistent and reproducible environment for ML experiments and making it effortless to track and replicate results. The first step in this journey was to standardize the custom ML code, which is the piece of code that data scientists should focus on. Moving away from notebooks and using BruceML, the way code is written to train and serve models has changed: it is encapsulated from the start as container images. BruceML was responsible for creating the scaffolding required to seamlessly integrate with the SageMaker platform, allowing the teams to take advantage of its various features, such as hyperparameter tuning, model deployment, and monitoring. By standardizing ML services and using containerization, modern platforms democratize ML, enabling iFood to rapidly build, deploy, and scale intelligent applications.

Automating model deployment and ML system retraining

When running ML models in production, it’s critical to have a robust and automated process for deploying and recalibrating those models across different use cases. This helps make sure the models remain accurate and performant over time. The team at iFood understood this challenge well: deploying the model is only part of the job. They rely on another concept to keep things running well: ML pipelines.

Using Amazon SageMaker Pipelines, they were able to build a CI/CD system for ML, to deliver automated retraining and model deployment. They also integrated this entire system with the company’s existing CI/CD pipeline, making it efficient and also maintaining good DevOps practices used at iFood. It starts with the ML Go! CI/CD pipeline pushing the latest code artifacts containing the model training and deployment logic. It includes the training process, which uses different containers for implementing the entire pipeline. When training is complete, the inference pipeline can be executed to begin the model deployment. It can be an entirely new model, or the promotion of a new version to increase the performance of an existing one. Every model available for deployment is also secured and registered automatically by ML Go! in Amazon SageMaker Model Registry, providing versioning and tracking capabilities.

The final step depends on the intended inference requirements. For batch prediction use cases, the pipeline creates a SageMaker batch transform job to run large-scale predictions. For real-time inference, the pipeline deploys the model to a SageMaker endpoint, carefully selecting the appropriate container variant and instance type to handle the expected production traffic and latency needs. This end-to-end automation has been a game changer for iFood, allowing them to rapidly iterate on their ML models and deploy updates and recalibrations quickly and confidently across their various use cases. SageMaker Pipelines has provided a streamlined way to orchestrate these complex workflows, making sure model operationalization is efficient and reliable.
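As a rough illustration of those two final-step paths (not iFood’s actual pipeline code), the deployment could be expressed with the SageMaker APIs along these lines; the model name, S3 locations, and instance types are placeholders.

import boto3

sm = boto3.client("sagemaker")

MODEL_NAME = "my-model"  # placeholder for a model promoted from the registry

# Real-time path: create an endpoint configuration and endpoint for the model
sm.create_endpoint_config(
    EndpointConfigName="my-model-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.large",      # sized for expected traffic and latency
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(EndpointName="my-model-endpoint", EndpointConfigName="my-model-config")

# Batch path: run a batch transform job for large-scale, asynchronous predictions
sm.create_transform_job(
    TransformJobName="my-model-batch-001",
    ModelName=MODEL_NAME,
    TransformInput={
        "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix", "S3Uri": "s3://my-bucket/input/"}},
        "ContentType": "text/csv",
    },
    TransformOutput={"S3OutputPath": "s3://my-bucket/output/"},
    TransformResources={"InstanceType": "ml.m5.xlarge", "InstanceCount": 1},
)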

Running inference in different SLA formats

iFood uses the inference capabilities of SageMaker to power its intelligent applications and deliver accurate predictions to its customers. By integrating the robust inference options available in SageMaker, iFood has been able to seamlessly deploy ML models and make them available for real-time and batch predictions. For iFood’s online, real-time prediction use cases, the company uses SageMaker hosted endpoints to deploy their models. These endpoints are integrated into iFood’s customer-facing applications, allowing for immediate inference on incoming data from users. SageMaker handles the scaling and management of these endpoints, making sure that iFood’s models are readily available to provide accurate predictions and enhance the user experience.

In addition to real-time predictions, iFood also uses SageMaker batch transform to perform large-scale, asynchronous inference on datasets. This is particularly useful for iFood’s data preprocessing and batch prediction requirements, such as generating recommendations or insights for their restaurant partners. SageMaker batch transform jobs enable iFood to efficiently process vast amounts of data, further enhancing their data-driven decision-making.

Building upon the success of standardization to SageMaker Inference, iFood has been instrumental in partnering with the SageMaker Inference team to build and enhance key AI inference capabilities within the SageMaker platform. Since the early days of ML, iFood has provided the SageMaker Inference team with valuable inputs and expertise, enabling the introduction of several new features and optimizations:

  • Cost and performance optimizations for generative AI inference – iFood helped the SageMaker Inference team develop innovative techniques to optimize the use of accelerators, enabling SageMaker Inference to reduce foundation model (FM) deployment costs by 50% on average and latency by 20% on average with inference components. This breakthrough delivers significant cost savings and performance improvements for customers running generative AI workloads on SageMaker.
  • Scaling improvements for AI inference – iFood’s expertise in distributed systems and auto scaling has also helped the SageMaker team develop advanced capabilities to better handle the scaling requirements of generative AI models. These improvements reduce auto scaling times by up to 40% and auto scaling detection by six times, making sure that customers can rapidly scale their inference workloads on SageMaker to meet spikes in demand without compromising performance.
  • Streamlined generative AI model deployment for inference – Recognizing the need for simplified model deployment, iFood collaborated with AWS to introduce the ability to deploy open source large language models (LLMs) and FMs with just a few clicks. This user-friendly functionality removes the complexity traditionally associated with deploying these advanced models, empowering more customers to harness the power of AI.
  • Scale-to-zero for inference endpoints – iFood played a crucial role in collaborating with SageMaker Inference to develop and launch the scale-to-zero feature for SageMaker inference endpoints. This innovative capability allows inference endpoints to automatically shut down when not in use and rapidly spin up on demand when new requests arrive. This feature is particularly beneficial for dev/test environments, low-traffic applications, and inference use cases with varying inference demands, because it eliminates idle resource costs while maintaining the ability to quickly serve requests when needed. The scale-to-zero functionality represents a major advancement in cost-efficiency for AI inference, making it more accessible and economically viable for a wider range of use cases.
  • Packaging AI model inference more efficiently – To further simplify the AI model lifecycle, iFood worked with AWS to enhance SageMaker’s capabilities for packaging LLMs and models for deployment. These improvements make it straightforward to prepare and deploy these AI models, accelerating their adoption and integration.
  • Multi-model endpoints for GPU – iFood collaborated with the SageMaker Inference team to launch multi-model endpoints for GPU-based instances. This enhancement allows you to deploy multiple AI models on a single GPU-enabled endpoint, significantly improving resource utilization and cost-efficiency. By taking advantage of iFood’s expertise in GPU optimization and model serving, SageMaker now offers a solution that can dynamically load and unload models on GPUs, reducing infrastructure costs by up to 75% for customers with multiple models and varying traffic patterns.
  • Asynchronous inference – Recognizing the need for handling long-running inference requests, the team at iFood worked closely with the SageMaker Inference team to develop and launch Asynchronous Inference in SageMaker. This feature enables you to process large payloads or time-consuming inference requests without the constraints of real-time API calls. iFood’s experience with large-scale distributed systems helped shape this solution, which now allows for better management of resource-intensive inference tasks, and the ability to handle inference requests that might take several minutes to complete. This capability has opened up new use cases for AI inference, particularly in industries dealing with complex data processing tasks such as genomics, video analysis, and financial modeling.

By closely partnering with the SageMaker Inference team, iFood has played a pivotal role in driving the rapid evolution of AI inference and generative AI inference capabilities in SageMaker. The features and optimizations introduced through this collaboration are empowering AWS customers to unlock the transformative potential of inference with greater ease, cost-effectiveness, and performance.

“At iFood, we were at the forefront of adopting transformative machine learning and AI technologies, and our partnership with the SageMaker Inference product team has been instrumental in shaping the future of AI applications. Together, we’ve developed strategies to efficiently manage inference workloads, allowing us to run models with speed and price-performance. The lessons we’ve learned supported us in the creation of our internal platform, which can serve as a blueprint for other organizations looking to harness the power of AI inference. We believe the features we have built in collaboration will broadly help other enterprises who run inference workloads on SageMaker, unlocking new frontiers of innovation and business transformation, by solving recurring and important problems in the universe of machine learning engineering.”

– Daniel Vieira, ML Platform Manager at iFood

Conclusion

Using the capabilities of SageMaker, iFood transformed its approach to ML and AI, unleashing new possibilities for enhancing the customer experience. By building a robust and centralized ML platform, iFood has bridged the gap between its data science and engineering teams, streamlining the model lifecycle from development to deployment. The integration of SageMaker features has enabled iFood to deploy ML models for both real-time and batch-oriented use cases. For real-time, customer-facing applications, iFood uses SageMaker hosted endpoints to provide immediate predictions and enhance the user experience. Additionally, the company uses SageMaker batch transform to efficiently process large datasets and generate insights for its restaurant partners. This flexibility in inference options has been key to iFood’s ability to power a diverse range of intelligent applications.

The automation of deployment and retraining through ML Go!, supported by SageMaker Pipelines and SageMaker Inference, has been a game changer for iFood. This has enabled the company to rapidly iterate on its ML models, deploy updates with confidence, and maintain the ongoing performance and reliability of its intelligent applications. Moreover, iFood’s strategic partnership with the SageMaker Inference team has been instrumental in driving the evolution of AI inference capabilities within the platform. Through this collaboration, iFood has helped shape cost and performance optimizations, scale improvements, and simplify model deployment features—all of which are now benefiting a wider range of AWS customers.

By taking advantage of the capabilities SageMaker offers, iFood has been able to unlock the transformative potential of AI and ML, delivering innovative solutions that enhance the customer experience and strengthen its position as the leading food-tech platform in Latin America. This journey serves as a testament to the power of cloud-based AI infrastructure and the value of strategic partnerships in driving technology-driven business transformation.

By following iFood’s example, you can unlock the full potential of SageMaker for your business, driving innovation and staying ahead in your industry.


About the Authors

Daniel Vieira is a seasoned Machine Learning Engineering Manager at iFood, with a strong academic background in computer science, holding both a bachelor’s and a master’s degree from the Federal University of Minas Gerais (UFMG). With over a decade of experience in software engineering and platform development, Daniel leads iFood’s ML platform, building a robust, scalable ecosystem that drives impactful ML solutions across the company. In his spare time, Daniel Vieira enjoys music, philosophy, and learning about new things while drinking a good cup of coffee.

Debora Fanin serves as a Senior Customer Solutions Manager AWS for the Digital Native Business segment in Brazil. In this role, Debora manages customer transformations, creating cloud adoption strategies to support cost-effective, timely deployments. Her responsibilities include designing change management plans, guiding solution-focused decisions, and addressing potential risks to align with customer objectives. Debora’s academic path includes a Master’s degree in Administration at FEI and certifications such as Amazon Solutions Architect Associate and Agile credentials. Her professional history spans IT and project management roles across diverse sectors, where she developed expertise in cloud technologies, data science, and customer relations.

Saurabh Trikande is a Senior Product Manager for Amazon Bedrock and Amazon SageMaker Inference. He is passionate about working with customers and partners, motivated by the goal of democratizing AI. He focuses on core challenges related to deploying complex AI applications, inference with multi-tenant models, cost optimizations, and making the deployment of generative AI models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.

Gopi Mudiyala is a Senior Technical Account Manager at AWS. He helps customers in the financial services industry with their operations in AWS. As a machine learning enthusiast, Gopi works to help customers succeed in their ML journey. In his spare time, he likes to play badminton, spend time with family, and travel.

Read More

Build an enterprise synthetic data strategy using Amazon Bedrock

Build an enterprise synthetic data strategy using Amazon Bedrock

The AI landscape is rapidly evolving, and more organizations are recognizing the power of synthetic data to drive innovation. However, enterprises looking to use AI face a major roadblock: how to safely use sensitive data. Stringent privacy regulations make it risky to use such data, even with robust anonymization. Advanced analytics can potentially uncover hidden correlations and reveal real data, leading to compliance issues and reputational damage. Additionally, many industries struggle with a scarcity of high-quality, diverse datasets needed for critical processes like software testing, product development, and AI model training. This data shortage can hinder innovation, slowing down development cycles across various business operations.

Organizations need innovative solutions to unlock the potential of data-driven processes without compromising ethics or data privacy. This is where synthetic data comes in—a solution that mimics the statistical properties and patterns of real data while being entirely fictitious. By using synthetic data, enterprises can train AI models, conduct analyses, and develop applications without the risk of exposing sensitive information. Synthetic data effectively bridges the gap between data utility and privacy protection. However, creating high-quality synthetic data comes with significant challenges:

  • Data quality – Making sure synthetic data accurately reflects real-world statistical properties and nuances is difficult. The data might not capture rare edge cases or the full spectrum of human interactions.
  • Bias management – Although synthetic data can help reduce bias, it can also inadvertently amplify existing biases if not carefully managed. The quality of synthetic data heavily depends on the model and data used to generate it.
  • Privacy vs. utility – Balancing privacy preservation with data utility is complex. There’s a risk of reverse engineering or data leakage if not properly implemented.
  • Validation challenges – Verifying the quality and representation of synthetic data often requires comparison with real data, which can be problematic when working with sensitive information.
  • Reality gap – Synthetic data might not fully capture the dynamic nature of the real world, potentially leading to a disconnect between model performance on synthetic data and real-world applications.

In this post, we explore how to use Amazon Bedrock for synthetic data generation, considering these challenges alongside the potential benefits to develop effective strategies for various applications across multiple industries, including AI and machine learning (ML). Amazon Bedrock offers a broad set of capabilities to build generative AI applications with a focus on security, privacy, and responsible AI. Built within the AWS landscape, Amazon Bedrock is designed to help maintain the security and compliance standards required for enterprise use.

Attributes of high-quality synthetic data

To be truly effective, synthetic data must be both realistic and reliable. This means it should accurately reflect the complexities and nuances of real-world data while maintaining complete anonymity. A high-quality synthetic dataset exhibits several key characteristics that facilitate its fidelity to the original data:

  • Data structure – The synthetic data should maintain the same structure as the real data, including the same number of columns, data types, and relationships between different data sources.
  • Statistical properties – The synthetic data should mimic the statistical properties of the real data, such as mean, median, standard deviation, correlation between variables, and distribution patterns.
  • Temporal patterns – If the real data exhibits temporal patterns (for example, diurnal or seasonal patterns), the synthetic data should also reflect these patterns.
  • Anomalies and outliers – Real-world data often contains anomalies and outliers. The synthetic data should also include a similar proportion and distribution of anomalies and outliers to accurately represent the real-world scenario.
  • Referential integrity – If the real data has relationships and dependencies between different data sources, the synthetic data should maintain these relationships to facilitate referential integrity.
  • Consistency – The synthetic data should be consistent across different data sources and maintain the relationships and dependencies between them, facilitating a coherent and unified representation of the dataset.
  • Scalability – The synthetic data generation process should be scalable to handle large volumes of data and support the generation of synthetic data for different scenarios and use cases.
  • Diversity – The synthetic data should capture the diversity present in the real data.

Solution overview

Generating useful synthetic data that protects privacy requires a thoughtful approach. The following figure represents the high-level architecture of the proposed solution. The process involves three key steps:

  1. Identify validation rules that define the structure and statistical properties of the real data.
  2. Use those rules to generate code using Amazon Bedrock that creates synthetic data subsets.
  3. Combine multiple synthetic subsets into full datasets.


Let’s explore these three key steps for creating useful synthetic data in more detail.

Step 1: Define data rules and characteristics

To create synthetic datasets, start by establishing clear rules that capture the essence of your target data:

  1. Use domain-specific knowledge to identify key attributes and relationships.
  2. Study existing public datasets, academic resources, and industry documentation.
  3. Use tools like AWS Glue DataBrew, Amazon Bedrock, or open source alternatives (such as Great Expectations) to analyze data structures and patterns.
  4. Develop a comprehensive rule set covering:
    • Data types and value ranges
    • Inter-field relationships
    • Quality standards
    • Domain-specific patterns and anomalies

This foundational step makes sure your synthetic data accurately reflects real-world scenarios in your industry.
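As a lightweight illustration of this analysis step (the sample file name and its columns are hypothetical), a few lines of pandas can surface candidate data types, ranges, and allowed values to seed your rule set.

import pandas as pd

# Hypothetical sample export of the data you want to mimic
sample = pd.read_csv("sample_findings.csv")

rules = {}
for column in sample.columns:
    series = sample[column]
    rule = {"dtype": str(series.dtype), "null_rate": round(series.isna().mean(), 3)}
    if pd.api.types.is_numeric_dtype(series):
        # Numeric fields: capture value ranges for the rule set
        rule.update({"min": series.min(), "max": series.max(), "mean": round(series.mean(), 2)})
    else:
        # Categorical fields: capture the observed values (capped for readability)
        rule["allowed_values"] = series.dropna().unique().tolist()[:20]
    rules[column] = rule

print(rules)  # starting point for the rules you describe in the Amazon Bedrock prompt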

Step 2: Generate code with Amazon Bedrock

Transform your data rules into functional code using Amazon Bedrock language models:

  1. Choose an appropriate Amazon Bedrock model based on code generation capabilities and domain relevance.
  2. Craft a detailed prompt describing the desired code output, including data structures and generation rules.
  3. Use the Amazon Bedrock API to generate Python code based on your prompts.
  4. Iteratively refine the code by:
    • Reviewing for accuracy and efficiency
    • Adjusting prompts as needed
    • Incorporating developer input for complex scenarios

The result is a tailored script that generates synthetic data entries matching your specific requirements and closely mimicking real-world data in your domain.

Step 3: Assemble and scale the synthetic dataset

Transform your generated data into a comprehensive, real-world representative dataset:

  1. Use the code from Step 2 to create multiple synthetic subsets for various scenarios.
  2. Merge subsets based on domain knowledge, maintaining realistic proportions and relationships.
  3. Align temporal or sequential components and introduce controlled randomness for natural variation.
  4. Scale the dataset to required sizes, reflecting different time periods or populations.
  5. Incorporate rare events and edge cases at appropriate frequencies.
  6. Generate accompanying metadata describing dataset characteristics and the generation process.

The end result is a diverse, realistic synthetic dataset for uses like system testing, ML model training, or data analysis. The metadata provides transparency into the generation process and data characteristics. Together, these measures result in a robust synthetic dataset that closely parallels real-world data while avoiding exposure of direct sensitive information. This generalized approach can be adapted to various types of datasets, from financial transactions to medical records, using the power of Amazon Bedrock for code generation and the expertise of domain knowledge for data validation and structuring.

Importance of differential privacy in synthetic data generation

Although synthetic data offers numerous benefits for analytics and machine learning, it’s essential to recognize that privacy concerns persist even with artificially generated datasets. As we strive to create high-fidelity synthetic data, we must also maintain robust privacy protections for the original data. Although synthetic data mimics patterns in actual data, if created improperly, it risks revealing details about sensitive information in the source dataset. This is where differential privacy enters the picture. Differential privacy is a mathematical framework that provides a way to quantify and control the privacy risks associated with data analysis. It works by injecting calibrated noise into the data generation process, making it virtually impossible to infer anything about a single data point or confidential information in the source dataset.

Differential privacy protects against re-identification exploits by adversaries attempting to extract details about data. The carefully calibrated noise added to synthetic data makes sure that even if an adversary tries, it is computationally infeasible to tie an output back to specific records in the original data, while still maintaining the overall statistical properties of the dataset. This allows the synthetic data to closely reflect real-world characteristics and remain useful for analytics and modeling while protecting privacy. By incorporating differential privacy techniques into the synthetic data generation process, you can create datasets that not only maintain statistical properties of the original data but also offer strong privacy guarantees. It enables organizations to share data more freely, collaborate on sensitive projects, and develop AI models with reduced risk of privacy breaches. For instance, in healthcare, differentially private synthetic patient data can accelerate research without compromising individual patient confidentiality.

As we continue to advance in the field of synthetic data generation, the incorporation of differential privacy is becoming not just a best practice, but a necessary component for responsible data science. This approach paves the way for a future where data utility and privacy protection coexist harmoniously, fostering innovation while safeguarding individual rights. However, although differential privacy offers strong theoretical guarantees, its practical implementation can be challenging. Organizations must carefully balance the trade-off between privacy and utility, because increasing privacy protection often comes at the cost of reduced data utility.
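As a simple numerical illustration of the core idea (a minimal sketch of the Laplace mechanism, not a production-grade implementation; for real workloads a vetted library such as OpenDP is the safer choice), calibrated noise can be added to an aggregate before it is used to drive synthetic data generation.

import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a differentially private count: adding or removing one record changes
    the count by at most `sensitivity`, so Laplace noise with scale sensitivity/epsilon
    satisfies epsilon-differential privacy for this single query."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: a private estimate of how many source volumes are underutilized
print(laplace_count(true_count=1280, epsilon=0.5))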

Build synthetic datasets for Trusted Advisor findings with Amazon Bedrock

In this post, we guide you through the process of creating synthetic datasets for AWS Trusted Advisor findings using Amazon Bedrock. Trusted Advisor provides real-time guidance to optimize your AWS environment, improving performance, security, and cost-efficiency through over 500 checks against AWS best practices. We demonstrate the synthetic data generation approach using the “Underutilized Amazon EBS Volumes” check (check ID: DAvU99Dc4C) as an example.

By following this post, you will gain practical knowledge on:

  • Defining data rules for Trusted Advisor findings
  • Using Amazon Bedrock to generate data creation code
  • Assembling and scaling synthetic datasets

This approach can be applied across over 500 Trusted Advisor checks, enabling you to build comprehensive, privacy-aware datasets for testing, training, and analysis. Whether you’re looking to enhance your understanding of Trusted Advisor recommendations or develop new optimization strategies, synthetic data offers powerful possibilities.

Prerequisites

To implement this approach, you need an AWS account with the appropriate permissions and the following prerequisites:

  1. AWS Account Setup:
    • IAM permissions for:
      • Amazon Bedrock
      • AWS Trusted Advisor
      • Amazon EBS
  2. AWS Service Access:
    • Access enabled for Amazon Bedrock in your Region
    • Access to Anthropic Claude model in Amazon Bedrock
    • Enterprise or Business support plan for full Trusted Advisor access
  3. Development Environment:
    • Python 3.8 or later installed
    • Required Python packages:
      • pandas
      • numpy
      • random
      • boto3
  4. Knowledge Requirements:
    • Basic understanding of:
      •  Python programming
      •  AWS services (especially EBS and Trusted Advisor)
      •  Data analysis concepts
      •  JSON/YAML file format

Define Trusted Advisor findings rules

Begin by examining real Trusted Advisor findings for the “Underutilized Amazon EBS Volumes” check. Analyze the structure and content of these findings to identify key data elements and their relationships. Pay attention to the following:

  • Standard fields – Check ID, volume ID, volume type, snapshot ID, and snapshot age
  • Volume attributes – Size, type, age, and cost
  • Usage metrics – Read and write operations, throughput, and IOPS
  • Temporal patterns – Volume type and size variations
  • Metadata – Tags, creation date, and last attached date

As you study these elements, note the typical ranges, patterns, and distributions for each attribute. For example, observe how volume sizes correlate with volume types, or how usage patterns differ between development and production environments. This analysis will help you create a set of rules that accurately reflect real-world Trusted Advisor findings.
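If you have programmatic access to Trusted Advisor (the AWS Support API requires a Business or Enterprise Support plan), a short sketch like the following can pull real findings so you can study their structure; the check ID is the one referenced earlier in this post.

import boto3

# The AWS Support API is served from the us-east-1 endpoint
support = boto3.client("support", region_name="us-east-1")

result = support.describe_trusted_advisor_check_result(
    checkId="DAvU99Dc4C",  # Underutilized Amazon EBS Volumes
    language="en",
)

# Each flagged resource carries an ordered list of field values
# (for this check, fields such as Region, volume ID, volume type, size, cost, and snapshot details)
for resource in result["result"]["flaggedResources"][:5]:
    print(resource["metadata"])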

After analyzing real Trusted Advisor outputs for the “Underutilized Amazon EBS Volumes” check, we identified the following crucial patterns and rules:

  • Volume type – Consider gp2, gp3, io1, io2, and st1 volume types. Verify the volume sizes are valid for volume types.
  • Criteria – Represent multiple AWS Regions, with appropriate volume types. Correlate snapshot ages with volume ages.
  • Data structure – Each finding should include the same columns.

The following is an example ruleset:

Analysis of the AWS Trusted Advisor finding for "Underutilized Amazon EBS Volumes":
1. Columns in the Trusted Advisor Finding:
- Region
- Volume ID
- Volume Name
- Volume Type
- Volume Size
- Monthly Storage Cost
- Snapshot ID
- Snapshot Name
- Snapshot Age
2. Key Columns and Their Significance:
- Region: AWS region where the EBS volume is located
- Volume ID: Unique identifier for the EBS volume
- Volume Type: Type of EBS volume (e.g., gp2, io1, st1)
- Volume Size: Size of the volume in GB
- Monthly Storage Cost: Estimated cost for storing the volume
- Snapshot ID: Identifier of the most recent snapshot (if any)
- Snapshot Age: Age of the most recent snapshot
3. Relationships and Patterns:
- Volume ID and Snapshot ID relationship: Each volume may have zero or more snapshots
- Region and cost correlation: Storage costs may vary by region
- Volume Type and Size correlation: Certain volume types have size limitations
- Volume Size and Cost correlation: Larger volumes generally cost more
- Snapshot Age and utilization: Older snapshots might indicate less active volumes
4. Data Types and Formats:
- Region: String (e.g., "us-east-1")
- Volume ID: String starting with "vol-"
- Volume Name: String (can be null)
- Volume Type: String (gp2, gp3, io1, io2, st1, sc1, standard)
- Volume Size: Integer (in GB)
- Monthly Storage Cost: Decimal number
- Snapshot ID: String starting with "snap-" (can be null)
- Snapshot Name: String (can be null)

Generate code with Amazon Bedrock

With your rules defined, you can now use Amazon Bedrock to generate Python code for creating synthetic Trusted Advisor findings.

The following is an example prompt for Amazon Bedrock:

Give me python code to create a 100 row pandas df with the following data:
<<Copy paste the ruleset from the above step>>

You can submit this prompt to the Amazon Bedrock chat playground using Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock, and receive generated Python code. Review this code carefully, verifying it meets all specifications and generates realistic data. If necessary, iterate on your prompt or make manual adjustments to the code to address any missing logic or edge cases.

The resulting code will serve as the foundation for creating varied and realistic synthetic Trusted Advisor findings that adhere to the defined parameters. By using Amazon Bedrock in this way, you can quickly develop sophisticated data generation code that would otherwise require significant manual effort and domain expertise to create.
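If you prefer to call the model programmatically instead of using the chat playground, the following is a minimal sketch using the Amazon Bedrock Converse API through boto3; the model ID and Region are assumptions, so confirm that you have access to the model in your account.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

ruleset = "<<Copy paste the ruleset from the above step>>"
prompt = f"Give me python code to create a 100 row pandas df with the following data:\n{ruleset}"

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID; verify availability in your Region
    messages=[{"role": "user", "content": [{"text": prompt}]}],
    inferenceConfig={"maxTokens": 4000, "temperature": 0.2},
)

generated_code = response["output"]["message"]["content"][0]["text"]
print(generated_code)  # review the generated script before running it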

Create data subsets

With the code generated by Amazon Bedrock and refined with your custom functions, you can now create diverse subsets of synthetic Trusted Advisor findings for the “Underutilized Amazon EBS Volumes” check. This approach allows you to simulate a wide range of real-world scenarios. In the following sample code, we have customized the volume_id and snapshot_id format to begin with vol-9999 and snap-9999, respectively:

import pandas as pd
import numpy as np
import random

def generate_volume_id():
    return f"vol-9999{''.join(random.choices('0123456789abcdef', k=17))}"

def generate_snapshot_id():
    return f"snap-9999{''.join(random.choices('0123456789abcdef', k=17))}"

def generate_volume_name():
    prefixes = ['app', 'db', 'web', 'cache', 'log']
    suffixes = ['prod', 'dev', 'test', 'staging']
    return f"{random.choice(prefixes)}-{random.choice(suffixes)}-{random.randint(1, 100)}"

def step3_generate_base_data():
    # Generate synthetic data
    num_records = 1000
    regions = ['us-east-1', 'us-west-2', 'eu-west-1', 'ap-southeast-1']
    volume_types = ['gp2', 'gp3', 'io1', 'io2', 'st1', 'sc1', 'standard']

    data = {
        'Region': np.random.choice(regions, num_records),
        'Volume ID': [generate_volume_id() for _ in range(num_records)],
        'Volume Name': [generate_volume_name() if random.random() > 0.3 else None for _ in range(num_records)],
        'Volume Type': np.random.choice(volume_types, num_records, p=[0.4, 0.2, 0.1, 0.1, 0.1, 0.05, 0.05]),
        'Volume Size': np.random.choice(range(1, 1001), num_records),
        'Monthly Storage Cost': np.random.uniform(0.1, 100, num_records).round(2),
        'Snapshot ID': [generate_snapshot_id() if random.random() > 0.4 else None for _ in range(num_records)],
        'Snapshot Name': [f"snapshot-{i}" if random.random() > 0.6 else None for i in range(num_records)],
        'Snapshot Age': [random.randint(1, 365) if random.random() > 0.4 else None for _ in range(num_records)]
    }

    df = pd.DataFrame(data)

    # Apply some logic and constraints
    df.loc[df['Volume Type'] == 'gp2', 'Volume Size'] = df.loc[df['Volume Type'] == 'gp2', 'Volume Size'].clip(1, 16384)
    df.loc[df['Volume Type'] == 'io1', 'Volume Size'] = df.loc[df['Volume Type'] == 'io1', 'Volume Size'].clip(4, 16384)
    df.loc[df['Volume Type'] == 'st1', 'Volume Size'] = df.loc[df['Volume Type'] == 'st1', 'Volume Size'].clip(500, 16384)
    df.loc[df['Volume Type'] == 'sc1', 'Volume Size'] = df.loc[df['Volume Type'] == 'sc1', 'Volume Size'].clip(500, 16384)

    # Adjust Monthly Storage Cost based on Volume Size and Type
    df['Monthly Storage Cost'] = df.apply(lambda row: row['Volume Size'] * random.uniform(0.05, 0.15) * (1.5 if row['Volume Type'] in ['io1', 'io2'] else 1), axis=1).round(2)

    # Ensure Snapshot ID, Name, and Age are consistent
    df.loc[df['Snapshot ID'].isnull(), 'Snapshot Name'] = None
    df.loc[df['Snapshot ID'].isnull(), 'Snapshot Age'] = None

    # Add some underutilized volumes
    df['Underutilized'] = np.random.choice([True, False], num_records, p=[0.7, 0.3])
    df.loc[df['Underutilized'], 'Monthly Storage Cost'] *= random.uniform(1.2, 2.0)

    return df

This code creates subsets that include:

  • Various volume types and instance types
  • Different levels of utilization
  • Occasional misconfigurations (for example, underutilized volumes)
  • Diverse regional distribution

Combine and scale the dataset

The process of combining and scaling synthetic data involves merging multiple generated datasets while introducing realistic anomalies to create a comprehensive and representative dataset. This step is crucial for making sure that your synthetic data reflects the complexity and variability found in real-world scenarios. Organizations typically introduce controlled anomalies at a specific rate (usually 5–10% of the dataset) to simulate various edge cases and unusual patterns that might occur in production environments. These anomalies help in testing system responses, developing monitoring solutions, and training ML models to identify potential issues.

When generating synthetic data for underutilized EBS volumes, you might introduce anomalies such as oversized volumes (5–10 times larger than needed), volumes with old snapshots (older than 365 days), or high-cost volumes with low utilization. For instance, a synthetic dataset might include a 1 TB gp2 volume that’s only using 100 GB of space, simulating a real-world scenario of overprovisioned resources. See the following code:

import pandas as pd
import numpy as np
import random

def introduce_anomalies(df, anomaly_rate=0.1):
    """
    Introduce various volume-related anomalies into the dataset.

    :param df: The input DataFrame
    :param anomaly_rate: The rate at which to introduce anomalies (default 10%)
    :return: DataFrame with anomalies introduced
    """
    num_anomalies = int(len(df) * anomaly_rate)
    anomaly_indices = np.random.choice(df.index, num_anomalies, replace=False)

    df['Anomaly'] = pd.NA  # Initialize Anomaly column with pandas NA

    for idx in anomaly_indices:
        anomaly_type = random.choice([
            'oversized_volume',
            'old_snapshot',
            'high_cost_low_size',
            'mismatched_type',
            'very_old_volume'
        ])

        if anomaly_type == 'oversized_volume':
            df.at[idx, 'Volume Size'] = int(df.at[idx, 'Volume Size'] * random.uniform(5, 10))
            df.at[idx, 'Monthly Storage Cost'] *= random.uniform(5, 10)

        elif anomaly_type == 'old_snapshot':
            df.at[idx, 'Snapshot Age'] = random.randint(365, 1000)

        elif anomaly_type == 'high_cost_low_size':
            df.at[idx, 'Volume Size'] = random.randint(1, 10)
            df.at[idx, 'Monthly Storage Cost'] *= random.uniform(10, 20)

        elif anomaly_type == 'mismatched_type':
            if df.at[idx, 'Volume Type'] in ['gp2', 'gp3']:
                df.at[idx, 'Volume Type'] = random.choice(['io1', 'io2'])
            else:
                df.at[idx, 'Volume Type'] = random.choice(['gp2', 'gp3'])

        elif anomaly_type == 'very_old_volume':
            df.at[idx, 'Volume Name'] = f"old-volume-{random.randint(1, 100)}"
            if pd.notna(df.at[idx, 'Snapshot Age']):
                df.at[idx, 'Snapshot Age'] = random.randint(1000, 2000)

        df.at[idx, 'Anomaly'] = anomaly_type

    return df
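
To tie the pieces together, a usage sketch like the following (assuming the step3_generate_base_data and introduce_anomalies functions defined above; the output file name is arbitrary) builds several subsets, combines them, and injects anomalies before saving the result:

# Build several subsets, combine them, inject anomalies, and save the result
subsets = [step3_generate_base_data() for _ in range(3)]    # e.g., different scenarios or time periods
combined = pd.concat(subsets, ignore_index=True)

final_df = introduce_anomalies(combined, anomaly_rate=0.1)  # roughly 10% anomalous rows
final_df.to_csv("synthetic_trusted_advisor_findings.csv", index=False)

print(final_df["Anomaly"].value_counts())  # quick look at the injected anomaly types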

The following screenshot shows sample rows from the generated dataset.

Validate the synthetic Trusted Advisor findings

Data validation is a critical step that verifies the quality, reliability, and representativeness of your synthetic data. This process involves performing rigorous statistical analysis to verify that the generated data maintains proper distributions, relationships, and patterns that align with real-world scenarios. Validation should include both quantitative metrics (statistical measures) and qualitative assessments (pattern analysis). Organizations should implement comprehensive validation frameworks that include distribution analysis, correlation checks, pattern verification, and anomaly detection. Regular visualization of the data helps in identifying inconsistencies or unexpected patterns.

For EBS volume data, validation might include analyzing the distribution of volume sizes across different types (gp2, gp3, io1), verifying that cost correlations match expected patterns, and making sure that introduced anomalies (like underutilized volumes) maintain realistic proportions. For instance, validating that the percentage of underutilized volumes aligns with typical enterprise environments (perhaps 15–20% of total volumes) and that the cost-to-size relationships remain realistic across volume types.

The following figures show examples of our validation checks.

  1. The following screenshot shows statistics of the generated synthetic datasets.
  2. The following figure shows the proportion of underutilized volumes in the generated synthetic datasets.
  3. The following figure shows the distribution of volume sizes in the generated synthetic datasets.
  4. The following figure shows the distribution of volume types in the generated synthetic datasets.
  5. The following figure shows the distribution of snapshot ages in the generated synthetic datasets.
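As a lightweight complement to these visual checks, a few pandas calls (assuming the final_df DataFrame from the previous step) can quantify the same properties:

# Summary statistics across all columns
print(final_df.describe(include="all"))

# Distribution of volume types and the proportion of underutilized volumes
print(final_df["Volume Type"].value_counts(normalize=True))
print(f"Underutilized share: {final_df['Underutilized'].mean():.1%}")

# Cost should broadly scale with volume size; a near-zero correlation would be a red flag
print(final_df[["Volume Size", "Monthly Storage Cost"]].corr())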

Enhancing synthetic data with differential privacy

After exploring the steps to create synthetic datasets for the Trusted Advisor “Underutilized Amazon EBS Volumes” check, it’s worth revisiting how differential privacy strengthens this approach. When a cloud consulting firm analyzes aggregated Trusted Advisor data across multiple clients, differential privacy through OpenDP provides the critical privacy-utility balance needed. By applying carefully calibrated noise to computations of underutilized volume statistics, consultants can generate synthetic datasets that preserve essential patterns across Regions and volume types while mathematically guaranteeing individual client confidentiality. This approach verifies that the synthetic data maintains sufficient accuracy for meaningful trend analysis and recommendations, while eliminating the risk of revealing sensitive client-specific infrastructure details or usage patterns—making it an ideal complement to our synthetic data generation pipeline.

Conclusion

In this post, we showed how to use Amazon Bedrock to create synthetic data for enterprise needs. By combining language models available in Amazon Bedrock with industry knowledge, you can build a flexible and secure way to generate test data. This approach helps create realistic datasets without using sensitive information, saving time and money. It also facilitates consistent testing across projects and avoids ethical issues of using real user data. Overall, this strategy offers a solid solution for data challenges, supporting better testing and development practices.

In part 2 of this series, we will demonstrate how to use pattern recognition for different datasets to automate rule-set generation needed for the Amazon Bedrock prompts to generate corresponding synthetic data.


About the authors

Devi Nair is a Technical Account Manager at Amazon Web Services, providing strategic guidance to enterprise customers as they build, operate, and optimize their workloads on AWS. She focuses on aligning cloud solutions with business objectives to drive long-term success and innovation.

Vishal Karlupia is a Senior Technical Account Manager/Lead at Amazon Web Services, Toronto. He specializes in generative AI applications and helps customers build and scale their AI/ML workloads on AWS. Outside of work, he enjoys being outdoors and keeping bonfires alive.

Srinivas Ganapathi is a Principal Technical Account Manager at Amazon Web Services. He is based in Toronto, Canada, and works with games customers to run efficient workloads on AWS.

Nicolas Simard is a Technical Account Manager based in Montreal. He helps organizations accelerate their AI adoption journey through technical expertise and architectural best practices, enabling them to maximize business value from AWS’s generative AI capabilities.

Read More