Improve LLM application robustness with Amazon Bedrock Guardrails and Amazon Bedrock Agents

Agentic workflows offer a fresh perspective on building dynamic, complex, business use case-driven workflows with large language models (LLMs) as their reasoning engine. These agentic workflows decompose natural language query-based tasks into multiple actionable steps, with iterative feedback loops and self-reflection, to produce the final result using tools and APIs. This naturally warrants the need to measure and evaluate the robustness of these workflows, in particular against inputs that are adversarial or harmful in nature.

Amazon Bedrock Agents can break down natural language conversations into a sequence of tasks and API calls using ReAct and chain-of-thought (CoT) prompting techniques with LLMs. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customizing and tailoring apps to help meet specific project requirements while protecting private data and securing your applications. These agents work with AWS managed infrastructure capabilities and Amazon Bedrock, reducing infrastructure management overhead.

Although Amazon Bedrock Agents have built-in mechanisms to help avoid general harmful content, you can incorporate a custom, user-defined fine-grained mechanism with Amazon Bedrock Guardrails. Amazon Bedrock Guardrails provides additional customizable safeguards on top of the built-in protections of foundation models (FMs), delivering safety protections that are among the best in the industry by blocking harmful content and filtering hallucinated responses for Retrieval Augmented Generation (RAG) and summarization workloads. This enables you to customize and apply safety, privacy, and truthfulness protections within a single solution.

In this post, we demonstrate how you can identify and improve the robustness of Amazon Bedrock Agents when integrated with Amazon Bedrock Guardrails for domain-specific use cases.

Solution overview

In this post, we explore a sample use case for an online retail chatbot. The chatbot requires dynamic workflows for use cases like searching for and purchasing shoes based on customer preferences using natural language queries. To implement this, we build an agentic workflow using Amazon Bedrock Agents.

To test its adversarial robustness, we then prompt this bot to give fiduciary advice regarding retirement. We use this example to demonstrate robustness concerns, followed by robustness improvement using the agentic workflow with Amazon Bedrock Guardrails to help prevent the bot from giving fiduciary advice.

In this implementation, the preprocessing stage (the first stage of the agentic workflow, before the LLM is invoked) of the agent is turned off by default. Even with preprocessing turned on, there is usually a need for more fine-grained, use case-specific control over what can be marked as safe and acceptable or not. In this example, a retail shoe agent giving away fiduciary advice is definitely out of scope of the product use case, and the advice may be detrimental, resulting in customers losing trust, among other safety concerns.

Another typical fine-grained robustness control requirement could be to restrict personally identifiable information (PII) from being generated by these agentic workflows. We can configure and set up Amazon Bedrock Guardrails in Amazon Bedrock Agents to deliver improved robustness against such regulatory compliance cases and custom business needs without the need for fine-tuning LLMs.

The following diagram illustrates the solution architecture.

This figure shows the high-level architecture of the solution in its finished state. The user request is captured by Amazon Bedrock Agents to generate a plan, which then calls AWS Lambda to run the API; the Lambda function can call a database, an AWS service (for example, to send email), or other applications. The agent is associated with Amazon Bedrock Guardrails to provide improved adversarial robustness.

We use the following AWS services:

  • Amazon Bedrock to invoke LLMs
  • Amazon Bedrock Agents for the agentic workflows
  • Amazon Bedrock Guardrails to deny adversarial inputs
  • AWS Identity and Access Management (IAM) for permission control across various AWS services
  • AWS Lambda for business API implementation
  • Amazon SageMaker to host Jupyter notebooks and invoke the Amazon Bedrock Agents API

In the following sections, we demonstrate how to use the GitHub repository to run this example using three Jupyter notebooks.

Prerequisites

To run this demo in your AWS account, complete the following prerequisites:

  1. Create an AWS account if you don’t already have one.
  2. Clone the GitHub repository and follow the steps explained in the README.
  3. Set up a SageMaker notebook using an AWS CloudFormation template, available in the GitHub repo. The CloudFormation template also provides the required IAM access to set up SageMaker resources and Lambda functions.
  4. Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane on the Amazon Bedrock console and choose from the list of available options. For this post, we use Anthropic Claude 3 Haiku and Amazon Titan Embeddings Text v1 on Amazon Bedrock.

Create a guardrail

In the Part 1a notebook, complete the following steps to create a guardrail to help prevent the chatbot from providing fiduciary advice:

  1. Create a guardrail with Amazon Bedrock Guardrails using the Boto3 API, with content filters, word and phrase filters, and sensitive information filters (such as PII and regular expressions (regex)) to protect sensitive information from our retail customers.
  2. List and create guardrail versions.
  3. Update the guardrails.
  4. Perform unit testing on the guardrails.
  5. Note the guardrail-id and guardrail-arn values to use in Part 1c:
create_response = client.create_guardrail(
    name=guardrail_name,
    description='Prevents our model from providing fiduciary advice.',
    topicPolicyConfig={
        'topicsConfig': [
            {
                'name': 'Fiduciary Advice',
                'definition': 'Providing personalized advice or recommendations on managing financial assets, investments, or trusts in a fiduciary capacity or assuming related obligations and liabilities.',
                'examples': [
                    'What stocks should I invest in for my retirement?',
                    'Is it a good idea to put my money in a mutual fund?',
                    'How should I allocate my 401(k) investments?',
                    'What type of trust fund should I set up for my children?',
                    'Should I hire a financial advisor to manage my investments?'
                ],
                'type': 'DENY'
            }
        ]
    },
….
)
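
The notebook also creates a guardrail version and unit tests the guardrail before it is associated with the agent (the PII and regex protections are configured through the sensitiveInformationPolicyConfig parameter of the same create_guardrail call). The following is a minimal sketch, assuming the create_response object from the preceding snippet; the test prompt is illustrative only:

import boto3

bedrock_client = boto3.client('bedrock')
bedrock_runtime_client = boto3.client('bedrock-runtime')

guardrail_id = create_response['guardrailId']

# Create an immutable version of the guardrail to reference from the agent
version_response = bedrock_client.create_guardrail_version(
    guardrailIdentifier=guardrail_id,
    description='Version 1: deny fiduciary advice'
)
guardrail_version = version_response['version']

# Unit test the guardrail directly with the ApplyGuardrail API,
# without invoking a model or an agent
test_response = bedrock_runtime_client.apply_guardrail(
    guardrailIdentifier=guardrail_id,
    guardrailVersion=guardrail_version,
    source='INPUT',
    content=[{'text': {'text': 'How should I allocate my 401(k) investments?'}}]
)

# 'GUARDRAIL_INTERVENED' indicates the fiduciary advice topic was blocked
print(test_response['action'])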

Test the use case without guardrails

In the Part 1b notebook, complete the following steps to run the use case with Amazon Bedrock Agents, without Amazon Bedrock Guardrails and without preprocessing, to demonstrate the adversarial robustness problem:

  1. Choose the underlying FM for your agent.
  2. Provide a clear and concise agent instruction.
  3. Create and associate an action group with an API schema and Lambda function.
  4. Create, invoke, test, and deploy the agent.
  5. Demonstrate a chat session with multi-turn conversations.

The agent instruction is as follows:

“You are an agent that helps customers purchase shoes. If the customer does not provide their name in the first input, ask for their name before invoking any functions.
Retrieve customer details like customer ID and preferred activity based on the name.
Then check inventory for shoe best fit activity matching customer preferred activity.
Generate response with shoe ID, style description and colors based on shoe inventory details.
If multiple matches exist, display all of them to the user.
After customer indicates they would like to order the shoe, use the shoe ID corresponding to their choice and
customer ID from initial customer details received, to place order for the shoe.”
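
The action group that backs this instruction is implemented with an API schema and a Lambda function (step 3 above). The following is a hypothetical, minimal sketch of such a Lambda handler, using the event and response format that Amazon Bedrock Agents passes to action group Lambda functions; the API paths and in-memory data are illustrative placeholders, and the actual implementation lives in the GitHub repository:

import json

# Illustrative in-memory data; the sample notebooks use their own data source
CUSTOMERS = {'John Doe': {'customer_id': 1, 'preferred_activity': 'running'}}
INVENTORY = [{'shoe_id': 10, 'best_fit_activity': 'running',
              'style': 'Trail runner', 'colors': ['blue', 'black']}]

def lambda_handler(event, context):
    # Amazon Bedrock Agents passes the API path chosen by the agent's plan
    api_path = event['apiPath']
    params = {p['name']: p['value'] for p in event.get('parameters', [])}

    if api_path == '/customer/{name}':            # hypothetical path
        body = CUSTOMERS.get(params.get('name'), {})
    elif api_path == '/inventory/{activity}':     # hypothetical path
        body = [s for s in INVENTORY if s['best_fit_activity'] == params.get('activity')]
    elif api_path == '/place_order':              # hypothetical path
        body = {'status': 'ORDER_PLACED', 'shoe_id': params.get('shoe_id')}
    else:
        body = {'error': f'Unknown API path {api_path}'}

    # Response shape expected by Amazon Bedrock Agents for API schema-based action groups
    return {
        'messageVersion': '1.0',
        'response': {
            'actionGroup': event['actionGroup'],
            'apiPath': api_path,
            'httpMethod': event['httpMethod'],
            'httpStatusCode': 200,
            'responseBody': {'application/json': {'body': json.dumps(body)}}
        }
    }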

A valid user query would be “Hello, my name is John Doe. I am looking to buy running shoes. Can you elaborate more about Shoe ID 10?” However, by using Amazon Bedrock Agents without Amazon Bedrock Guardrails, the agent allows fiduciary advice for queries like the following:

  • “How should I invest for my retirement? I want to be able to generate $5,000 a month.”
  • “How do I make money to prepare for my retirement?”

Test the use case with guardrails

In the Part 1c notebook, repeat the steps in Part 1b, but this time using Amazon Bedrock Agents with guardrails (and still no preprocessing) to improve and evaluate adversarial robustness by not allowing fiduciary advice. The complete steps are as follows:

  1. Choose the underlying FM for your agent.
  2. Provide a clear and concise agent instruction.
  3. Create and associate an action group with an API schema and Lambda function.
  4. During the configuration setup of Amazon Bedrock Agents in this example, associate the guardrail created previously in Part 1a with this agent.
  5. Create, invoke, test, and deploy the agent.
  6. Demonstrate a chat session with multi-turn conversations.

To associate a guardrail-id with an agent during creation, we can use the following code snippet:

gconfig = { 
      "guardrailIdentifier": 'an9l3icjg3kj',
      "guardrailVersion": 'DRAFT'
}

response = bedrock_agent_client.create_agent(
    agentName=agent_name,
    agentResourceRoleArn=agent_role['Role']['Arn'],
    description="Retail agent for shoe purchase.",
    idleSessionTTLInSeconds=3600,
    foundationModel="anthropic.claude-3-haiku-20240307-v1:0",
    instruction=agent_instruction,
    guardrailConfiguration=gconfig,
)

As we would expect, our retail chatbot should now decline to answer such queries, because they have no relationship to its purpose in our use case.
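
You can verify this behavior by invoking the deployed agent with one of the adversarial prompts and inspecting the streamed response. The following is a minimal sketch, assuming the create_agent response from the preceding snippet and the agent's test alias (assumed here to be TSTALIASID for the DRAFT version):

import uuid
import boto3

bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime')

# agent_id comes from the create_agent response in the notebook;
# the agent must be prepared (prepare_agent) before it can be invoked
agent_id = response['agent']['agentId']

agent_response = bedrock_agent_runtime_client.invoke_agent(
    agentId=agent_id,
    agentAliasId='TSTALIASID',  # assumption: the built-in test alias for the DRAFT version
    sessionId=str(uuid.uuid4()),
    inputText='How should I invest for my retirement? I want to be able to generate $5,000 a month.'
)

# The completion is returned as an event stream of chunks
answer = ''
for event in agent_response['completion']:
    if 'chunk' in event:
        answer += event['chunk']['bytes'].decode('utf-8')

# With the guardrail attached, the agent returns the blocked-input message
# configured on the guardrail instead of fiduciary advice
print(answer)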

Cost considerations

The following are important cost considerations:

Clean up

For the Part 1b and Part 1c notebooks, the implementation automatically cleans up resources after a complete run of the notebook to avoid incurring recurring costs. See the Clean-up Resources section in the notebooks for instructions on how to skip the automatic cleanup so you can experiment with different prompts.

The order of cleanup is as follows (a minimal Boto3 sketch follows the list):

  1. Disable the action group.
  2. Delete the action group.
  3. Delete the alias.
  4. Delete the agent.
  5. Delete the Lambda function.
  6. Empty the S3 bucket.
  7. Delete the S3 bucket.
  8. Delete IAM roles and policies.
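
The following is a minimal Boto3 sketch of these steps; the identifiers (agent ID, alias ID, action group ID, Lambda function name, bucket name, role name, and attached policy ARNs) are placeholders for the values created earlier in the notebooks:

import boto3

bedrock_agent_client = boto3.client('bedrock-agent')
bedrock_client = boto3.client('bedrock')
lambda_client = boto3.client('lambda')
s3_resource = boto3.resource('s3')
iam_client = boto3.client('iam')

# Steps 1-2: disable the action group (the notebooks use update_agent_action_group
# with actionGroupState='DISABLED'), then delete it
bedrock_agent_client.delete_agent_action_group(
    agentId=agent_id, agentVersion='DRAFT',
    actionGroupId=action_group_id, skipResourceInUseCheck=True
)

# Steps 3-4: delete the alias, then the agent
bedrock_agent_client.delete_agent_alias(agentId=agent_id, agentAliasId=agent_alias_id)
bedrock_agent_client.delete_agent(agentId=agent_id, skipResourceInUseCheck=True)

# Step 5: delete the Lambda function backing the action group
lambda_client.delete_function(FunctionName=lambda_function_name)

# Steps 6-7: empty, then delete the S3 bucket
bucket = s3_resource.Bucket(bucket_name)
bucket.objects.all().delete()
bucket.delete()

# Step 8: detach policies and delete the IAM role
for policy_arn in attached_policy_arns:
    iam_client.detach_role_policy(RoleName=agent_role_name, PolicyArn=policy_arn)
iam_client.delete_role(RoleName=agent_role_name)

# Optionally, delete the guardrail created in Part 1a
bedrock_client.delete_guardrail(guardrailIdentifier=guardrail_id)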

You can delete guardrails from the Amazon Bedrock console or API. You will not be charged unless the guardrails are invoked through the agents in this demo. For more details, see Delete a guardrail.

Conclusion

In this post, we demonstrated how Amazon Bedrock Guardrails can improve the robustness of the agent framework. We were able to stop our chatbot from responding to non-relevant queries and protect personal information from our customers, ultimately improving the robustness of our agentic implementation with Amazon Bedrock Agents.

In general, the preprocessing stage of Amazon Bedrock Agents can intercept and reject adversarial inputs, but guardrails can help block prompts that are very specific to the topic or use case (such as PII and HIPAA rules) that the LLM hasn’t seen previously, without having to fine-tune the LLM.

To learn more about creating models with Amazon Bedrock, see Customize your model to improve its performance for your use case. To learn more about using agents to orchestrate workflows, see Automate tasks in your application using conversational agents. For details about using guardrails to safeguard your generative AI applications, refer to Stop harmful content in models using Amazon Bedrock Guardrails.

Acknowledgements

The author thanks all the reviewers for their valuable feedback.


About the Author

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (like NLP, NLU, and NLG). His work has been focused on conversational AI, task-oriented dialogue systems, and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.

Dive deep into vector data stores using Amazon Bedrock Knowledge Bases

Customers across all industries are experimenting with generative AI to accelerate and improve business outcomes. Generative AI is used in various use cases, such as content creation, personalization, intelligent assistants, questions and answers, summarization, automation, cost-efficiencies, productivity improvement assistants, customization, innovation, and more.

Generative AI solutions often use Retrieval Augmented Generation (RAG) architectures, which augment the model with external knowledge sources to improve content quality, context understanding, creativity, domain adaptability, personalization, transparency, and explainability.

This post dives deep into Amazon Bedrock Knowledge Bases, which helps with the storage and retrieval of data in vector databases for RAG-based workflows, with the objective to improve large language model (LLM) responses for inference involving an organization’s datasets.

Benefits of vector data stores

Several challenges arise when handling complex data scenarios, such as large data volumes, multi-dimensionality, multi-modality, and other interfacing complexities. For example:

  • Data such as images, text, and audio need to be represented in a structured and efficient manner
  • Understanding the semantic similarity between data points is essential in generative AI tasks like natural language processing (NLP), image recognition, and recommendation systems
  • As the volume of data continues to grow rapidly, scalability becomes a significant challenge
  •  Traditional databases may struggle to efficiently handle the computational demands of generative AI tasks, such as training complex models or performing inference on large datasets
  •  Generative AI applications frequently require searching and retrieving similar items or patterns within datasets, such as finding similar images or recommending relevant content
  •  Generative AI solutions often involve integrating multiple components and technologies, such as deep learning frameworks, data processing pipelines, and deployment environments

Vector databases serve as a foundation in addressing these data needs for generative AI solutions, enabling efficient representation, semantic understanding, scalability, interoperability, search and retrieval, and model deployment. They contribute to the effectiveness and feasibility of generative AI applications across various domains. Vector databases offer the following capabilities:

  • Provide a means to represent data in a structured and efficient manner, enabling computational processing and manipulation
  • Enable the measurement of semantic similarity by encoding data into vector representations, allowing for comparison and analysis
  • Handle large-scale datasets efficiently, enabling processing and analysis of vast amounts of information in a scalable manner
  •  Provide a common interface for storing and accessing data representations, facilitating interoperability between different components of the AI system
  •  Support efficient search and retrieval operations, enabling quick and accurate exploration of large datasets

To help implement generative AI-based applications securely at scale, AWS provides Amazon Bedrock, a fully managed service that enables deploying generative AI applications that use high-performing LLMs from leading AI startups and Amazon. With the Amazon Bedrock serverless experience, you can experiment with and evaluate top foundation models (FMs) for your use cases, privately customize them with your data using techniques such as fine-tuning and RAG, and build agents that run tasks using enterprise systems and data sources.

In this post, we dive deep into the vector database options available as part of Amazon Bedrock Knowledge Bases and the applicable use cases, and look at working code examples. Amazon Bedrock Knowledge Bases enables faster time to market by abstracting from the heavy lifting of building pipelines and providing you with an out-of-the-box RAG solution to reduce the build time for your application.

Knowledge base systems with RAG

RAG optimizes LLM responses by referencing authoritative knowledge bases outside of the model’s training data sources before generating a response. Out of the box, LLMs are trained on vast volumes of data and use billions of parameters to generate original output for tasks like answering questions, translating languages, and completing sentences. RAG extends the existing powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base, all without the need to retrain the model. It’s a cost-effective approach to improving an LLM’s output so it remains relevant, accurate, and useful in various contexts.

The following diagram depicts the high-level steps of a RAG process to access an organization’s internal or external knowledge stores and pass the data to the LLM.

The workflow consists of the following steps:

  1. Either a user through a chatbot UI or an automated process issues a prompt and requests a response from the LLM-based application.
  2. An LLM-powered agent, which is responsible for orchestrating steps to respond to the request, checks if additional information is needed from knowledge sources.
  3. The agent decides which knowledge source to use.
  4. The agent invokes the process to retrieve information from the knowledge source.
  5. The relevant information (enhanced context) from the knowledge source is returned to the agent.
  6. The agent adds the enhanced context from the knowledge source to the prompt and passes it to the LLM endpoint for the response.
  7. The LLM response is passed back to the agent.
  8. The agent returns the LLM response to the chatbot UI or the automated process.

Use cases for vector databases for RAG

In the context of RAG architectures, the external knowledge can come from relational databases, search and document stores, or other data stores. However, simply storing and searching through this external data using traditional methods (such as keyword search or inverted indexes) can be inefficient and might not capture the true semantic relationships between data points. Vector databases are recommended for RAG use cases because they enable similarity search and dense vector representations.

The following are some scenarios where loading data into a vector database can be advantageous for RAG use cases:

  • Large knowledge bases – When dealing with extensive knowledge bases containing millions or billions of documents or passages, vector databases can provide efficient similarity search capabilities.
  • Unstructured or semi-structured data – Vector databases are particularly well-suited for handling unstructured or semi-structured data, such as text documents, webpages, or natural language content. By converting the textual data into dense vector representations, vector databases can effectively capture the semantic relationships between documents or passages, enabling more accurate retrieval.
  • Multilingual knowledge bases – In RAG systems that need to handle knowledge bases spanning multiple languages, vector databases can be advantageous. By using multilingual language models or cross-lingual embeddings, vector databases can facilitate effective retrieval across different languages, enabling cross-lingual knowledge transfer.
  •  Semantic search and relevance ranking – Vector databases excel at semantic search and relevance ranking tasks. By representing documents or passages as dense vectors, the retrieval component can use vector similarity measures to identify the most semantically relevant content.
  • Personalized and context-aware retrieval – Vector databases can support personalized and context-aware retrieval in RAG systems. By incorporating user profiles, preferences, or contextual information into the vector representations, the retrieval component can prioritize and surface the most relevant content for a specific user or context.

Although vector databases offer advantages in these scenarios, their implementation and effectiveness may depend on factors such as the specific vector embedding techniques used, the quality and representation of the data, and the computational resources available for indexing and retrieval operations. With Amazon Bedrock Knowledge Bases, you can give FMs and agents contextual information from your company’s private data sources for RAG to deliver more relevant, accurate, and customized responses.

Amazon Bedrock Knowledge Bases with RAG

Amazon Bedrock Knowledge Bases is a fully managed capability that helps with the implementation of the entire RAG workflow, from ingestion to retrieval and prompt augmentation, without having to build custom integrations to data sources and manage data flows. Knowledge bases are essential for various use cases, such as customer support, product documentation, internal knowledge sharing, and decision-making systems. A RAG workflow with knowledge bases has two main steps: data preprocessing and runtime execution.

The following diagram illustrates the data preprocessing workflow.

As part of preprocessing, information (structured data, unstructured data, or documents) from data sources is first split into manageable chunks. The chunks are converted to embeddings using embeddings models available in Amazon Bedrock. Lastly, the embeddings are written into a vector database index while maintaining a mapping to the original document. These embeddings are used to determine semantic similarity between queries and text from the data sources. All these steps are managed by Amazon Bedrock.

The following diagram illustrates the workflow for the runtime execution.

During the inference phase of the LLM, when the agent determines that it needs additional information, it reaches out to knowledge bases. The process converts the user query into vector embeddings using an Amazon Bedrock embeddings model, queries the vector database index to find semantically similar chunks to the user’s query, converts the retrieved chunks to text and augments the user query, and then responds back to the agent.

Embeddings models are needed in the preprocessing phase to store data in vector databases and during the runtime execution phase to generate embeddings for the user query to search the vector database index. Embeddings models map high-dimensional and sparse data like text into dense vector representations to be efficiently stored and processed by vector databases, and encode the semantic meaning and relationships of data into the vector space to enable meaningful similarity searches. These models support mapping different data types like text, images, audio, and video into the same vector space to enable multi-modal queries and analysis. Amazon Bedrock Knowledge Bases provides industry-leading embeddings models to enable use cases such as semantic search, RAG, classification, and clustering, to name a few, and provides multilingual support as well.
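
After a knowledge base is created (as shown in the following sections), you can exercise this runtime flow directly through the RetrieveAndGenerate API, which performs the query embedding, vector search, and prompt augmentation on your behalf. The following is a minimal sketch; knowledge_base_id and model_arn are placeholders for a knowledge base ID and a text generation model ARN in your account:

import boto3

bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime')

response = bedrock_agent_runtime_client.retrieve_and_generate(
    input={'text': 'What did Amazon highlight about AWS in the 2022 shareholder letter?'},
    retrieveAndGenerateConfiguration={
        'type': 'KNOWLEDGE_BASE',
        'knowledgeBaseConfiguration': {
            'knowledgeBaseId': knowledge_base_id,  # placeholder: ID of your knowledge base
            'modelArn': model_arn                  # placeholder: ARN of a text model you have access to
        }
    }
)

# Generated answer grounded in the retrieved chunks
print(response['output']['text'])

# Citations point back to the source documents in Amazon S3
for citation in response['citations']:
    for reference in citation['retrievedReferences']:
        print(reference['location'])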

Vector database options with Amazon Bedrock Knowledge Bases

At the time of writing this post, Amazon Bedrock Knowledge Bases provides five integration options: the Vector Engine for Amazon OpenSearch Serverless, Amazon Aurora, MongoDB Atlas, Pinecone, and Redis Enterprise Cloud, with more vector database options to come. In this post, we discuss use cases, features, and steps to set up and retrieve information using these vector databases. Amazon Bedrock makes it straightforward to adopt any of these choices by providing a common set of APIs, industry-leading embedding models, security, governance, and observability.

Role of metadata while indexing data in vector databases

Metadata plays a crucial role when loading documents into a vector data store in Amazon Bedrock. It provides additional context and information about the documents, which can be used for various purposes, such as filtering, sorting, and enhancing search capabilities.

The following are some key uses of metadata when loading documents into a vector data store:

  • Document identification – Metadata can include unique identifiers for each document, such as document IDs, URLs, or file names. These identifiers can be used to uniquely reference and retrieve specific documents from the vector data store.
  • Content categorization – Metadata can provide information about the content or category of a document, such as the subject matter, domain, or topic. This information can be used to organize and filter documents based on specific categories or domains.
  • Document attributes – Metadata can store additional attributes related to the document, such as the author, publication date, language, or other relevant information. These attributes can be used for filtering, sorting, or faceted search within the vector data store.
  • Access control – Metadata can include information about access permissions or security levels associated with a document. This information can be used to control access to sensitive or restricted documents within the vector data store.
  • Relevance scoring – Metadata can be used to enhance the relevance scoring of search results. For example, if a user searches for documents within a specific date range or authored by a particular individual, the metadata can be used to prioritize and rank the most relevant documents.
  • Data enrichment – Metadata can be used to enrich the vector representations of documents by incorporating additional contextual information. This can potentially improve the accuracy and quality of search results.
  • Data lineage and auditing – Metadata can provide information about the provenance and lineage of documents, such as the source system, data ingestion pipeline, or other transformations applied to the data. This information can be valuable for data governance, auditing, and compliance purposes.
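
At retrieval time, this metadata can also be used to narrow the vector search. The following is a minimal sketch using the Retrieve API with a metadata filter on the year attribute defined for the shareholder letters later in this post; knowledge_base_id is a placeholder, and the filter syntax assumes the Amazon Bedrock Knowledge Bases vector search filter format:

import boto3

bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime')

# Retrieve only chunks whose metadata attribute 'year' equals 2022
response = bedrock_agent_runtime_client.retrieve(
    knowledgeBaseId=knowledge_base_id,  # placeholder: ID of your knowledge base
    retrievalQuery={'text': 'What were the key announcements about AWS?'},
    retrievalConfiguration={
        'vectorSearchConfiguration': {
            'numberOfResults': 5,
            'filter': {'equals': {'key': 'year', 'value': 2022}}
        }
    }
)

for result in response['retrievalResults']:
    print(result['score'], result.get('metadata', {}).get('year'))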

Prerequisites

Complete the steps in this section to set up the prerequisite resources and configurations.

Configure Amazon SageMaker Studio

The first step is to set up an Amazon SageMaker Studio notebook to run the code for this post. You can set up the notebook in any AWS Region where Amazon Bedrock Knowledge Bases is available.

  1. Complete the prerequisites to set up Amazon SageMaker.
  2. Complete the quick setup or custom setup to enable your SageMaker Studio domain and user profile.
    You also need an AWS Identity and Access Management (IAM) role assigned to the SageMaker Studio domain. You can identify the role on the SageMaker console. On the Domains page, open your domain. The IAM role ARN is listed on the Domain settings tab.

    The role needs permissions for IAM, Amazon Relational Database Service (Amazon RDS), Amazon Bedrock, AWS Secrets Manager, Amazon Simple Storage Service (Amazon S3), and Amazon OpenSearch Serverless.
  3. Modify the role permissions to add the following policies:
    1. IAMFullAccess
    2. AmazonRDSFullAccess
    3. AmazonBedrockFullAccess
    4. SecretsManagerReadWrite
    5. AmazonRDSDataFullAccess
    6. AmazonS3FullAccess
    7. The following inline policy:
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "OpenSearchServeless",
                  "Effect": "Allow",
                  "Action": "aoss:*",
                  "Resource": "*"
              }
          ]
      }

  4. On the SageMaker console, choose Studio in the navigation pane.
  5. Choose your user profile and choose Open Studio.

    This will open a new browser tab for SageMaker Studio Classic.
  6. Run the SageMaker Studio application.
  7. When the application is running, choose Open.

    JupyterLab will open in a new tab.
  8. Download the notebook file to use in this post.
  9. Choose the file icon in the navigation pane, then choose the upload icon, and upload the notebook file.
  10. Leave the image, kernel, and instance type as default and choose Select.

Request Amazon Bedrock model access

Complete the following steps to request access to the embeddings model in Amazon Bedrock:

  1.  On the Amazon Bedrock console, choose Model access in the navigation pane.
  2. Choose Enable specific models.
  3. Select the Titan Text Embeddings V2 model.
  4. Choose Next and complete the access request.

Import dependencies

Open the notebook file Bedrock_Knowledgebases_VectorDB.ipynb and run Step 1 to import dependencies for this post and create Boto3 clients:

!pip install opensearch-py
!pip install retrying

from urllib.request import urlretrieve
import json
import os
import boto3
import random
import time
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth, RequestError
credentials = boto3.Session().get_credentials()
service = 'aoss'
suffix = random.randrange(200, 900)
boto3_session = boto3.session.Session()
region_name = boto3_session.region_name
iam_client = boto3_session.client('iam')
account_number = boto3.client('sts').get_caller_identity().get('Account')
identity = boto3.client('sts').get_caller_identity()['Arn']
s3_client = boto3.client("s3", region_name=region_name)
aoss_client = boto3_session.client('opensearchserverless')
bedrock_agent_client = boto3_session.client('bedrock-agent', region_name=region_name)
bedrock_agent_runtime_client = boto3.client('bedrock-agent-runtime', region_name=region_name)
rds = boto3.client('rds', region_name=region_name)
# Create Secret Manager Client to retrieve secret values
secrets_manager = boto3.client('secretsmanager', region_name=region_name)
# Create RDS Data Client to run queries against Aurora PostgreSQL Database
rds_data_client = boto3.client('rds-data', region_name=region_name)
awsauth = auth = AWSV4SignerAuth(credentials, region_name, service)

Create an S3 bucket

You can use the following code to create an S3 bucket to store the source data for your vector database, or use an existing bucket. If you create a new bucket, make sure to follow your organization’s best practices and guidelines.

# Set the bucket name
bucket_name = "<PROVIDE AMAZON S3 BUCKET NAME>"

if region_name in ('af-south-1','ap-east-1','ap-northeast-1','ap-northeast-2','ap-northeast-3','ap-south-1','ap-south-2','ap-southeast-1','ap-southeast-2','ap-southeast-3','ca-central-1','cn-north-1','cn-northwest-1','EU','eu-central-1','eu-north-1','eu-south-1','eu-south-2','eu-west-1','eu-west-2','eu-west-3','me-south-1','sa-east-1','us-east-2','us-gov-east-1','us-gov-west-1','us-west-1','us-west-2'):
    # Create the bucket
    response = s3_client.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={
                'LocationConstraint': region_name
            }
    )
    # Print the response and validate that value for HTTPStatusCode is 200
    print(response)
else:
    
    # Create the bucket
    response = s3_client.create_bucket(
        Bucket=bucket_name
    )
    # Print the response and validate that value for HTTPStatusCode is 200
    print(response)

Set up sample data

Use the following code to set up the sample data for this post, which will be the input for the vector database:

# Leverage Amazon Shareholder news letter as datasets for loading into vector databases for this Blogpost 
urls = [
    'https://s2.q4cdn.com/299287126/files/doc_financials/2023/ar/2022-Shareholder-Letter.pdf',
    'https://s2.q4cdn.com/299287126/files/doc_financials/2022/ar/2021-Shareholder-Letter.pdf',
    'https://s2.q4cdn.com/299287126/files/doc_financials/2021/ar/Amazon-2020-Shareholder-Letter-and-1997-Shareholder-Letter.pdf',
    'https://s2.q4cdn.com/299287126/files/doc_financials/2020/ar/2019-Shareholder-Letter.pdf'
]

# Define standard file names which will be leveraged while loading data to Amazon S3
filenames = [
    'AMZN-2022-Shareholder-Letter.pdf',
    'AMZN-2021-Shareholder-Letter.pdf',
    'AMZN-2020-Shareholder-Letter.pdf',
    'AMZN-2019-Shareholder-Letter.pdf'
]

# Create local temporary directory to download files, before uploading to Amazon S3
!mkdir -p ./data

# Assign local directory path to a python variable
local_data_path = "./data/"

# Assign S3 bucket name to a python variable. This was created in Step-2 above.
# This bucket will be used as source for vector databases and uploading source files.
data_s3_bucket = bucket_name

# Define S3 Prefix with in the bucket to upload files
data_s3_prefix = 'shareholder_newsletter'

# Download file to local_data_path
for idx, url in enumerate(urls):
    file_path = local_data_path + filenames[idx]
    urlretrieve(url, file_path)

# define metadata corresponding to Shareholder letters
metadata_2022 = {
    "metadataAttributes": {
        "company": "Amazon",
        "document_type": "Shareholder Letter",
        "year": 2022
    }
}

metadata_2021 = {
    "metadataAttributes": {
        "company": "Amazon",
        "document_type": "Shareholder Letter",
        "year": 2021
    }
}

metadata_2020 = {
    "metadataAttributes": {
        "company": "Amazon",
        "document_type": "Shareholder Letter",
        "year": 2020
    }
}

metadata_2019 = {
    "metadataAttributes": {
        "company": "Amazon",
        "document_type": "Shareholder Letter",
        "year": 2019
    }
}

# Create metadata files in local_data_path which will be uploaded to Amazon S3

# Create metadata file for 2022
metadata_2022_json = json.dumps(metadata_2022)

with open(f"{local_data_path}AMZN-2022-Shareholder-Letter.pdf.metadata.json", "w") as f:
    f.write(str(metadata_2022_json))

f.close()

# Create metadata file for 2021
metadata_2021_json = json.dumps(metadata_2021)

with open(f"{local_data_path}AMZN-2021-Shareholder-Letter.pdf.metadata.json", "w") as f:
    f.write(str(metadata_2021_json))

f.close()

# Create metadata file for 2020
metadata_2020_json = json.dumps(metadata_2020)

with open(f"{local_data_path}AMZN-2020-Shareholder-Letter.pdf.metadata.json", "w") as f:
    f.write(str(metadata_2020_json))

f.close()

# Create metadata file for 2019
metadata_2019_json = json.dumps(metadata_2019)

with open(f"{local_data_path}AMZN-2019-Shareholder-Letter.pdf.metadata.json", "w") as f:
    f.write(str(metadata_2019_json))

f.close()
    
# Upload files to Amazon S3
def uploadDirectory(path,bucket_name):
        for root,dirs,files in os.walk(path):
            for file in files:
                key = data_s3_prefix + '/' + file
                s3_client.upload_file(os.path.join(root,file),bucket_name,key)

uploadDirectory(local_data_path, data_s3_bucket)

# Delete files from local directory
!rm -r ./data/

Configure the IAM role for Amazon Bedrock

Use the following code to define the function to create the IAM role for Amazon Bedrock, and the functions to attach policies related to Amazon OpenSearch Serverless and Aurora:

encryption_policy_name = f"bedrock-sample-rag-sp-{suffix}"
network_policy_name = f"bedrock-sample-rag-np-{suffix}"
access_policy_name = f'bedrock-sample-rag-ap-{suffix}'
bedrock_execution_role_name = f'AmazonBedrockExecutionRoleForKnowledgeBase_{suffix}'
fm_policy_name = f'AmazonBedrockFoundationModelPolicyForKnowledgeBase_{suffix}'
s3_policy_name = f'AmazonBedrockS3PolicyForKnowledgeBase_{suffix}'
oss_policy_name = f'AmazonBedrockOSSPolicyForKnowledgeBase_{suffix}'
rds_policy_name = f'AmazonBedrockRDSPolicyForKnowledgeBase_{suffix}'
aurora_policy_name = f'AmazonBedrockAuroraPolicyForKnowledgeBase_{suffix}'
oss_vector_store_name = f'os-shareholder-letter-{suffix}'
oss_index_name = "os_shareholder_letter"
aurora_vector_db_cluster = f'aurora-shareholder-letter-{suffix}'
aurora_vector_db_instance = f'aurora-shareholder-letter-instance-{suffix}'
aurora_database_name = 'vectordb'
aurora_schema_name = 'bedrock_kb'
aurora_table_name = 'aurora_shareholder_letter'

def create_bedrock_execution_role(bucket_name):
    foundation_model_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "bedrock:InvokeModel",
                ],
                "Resource": [
                    f"arn:aws:bedrock:{region_name}::foundation-model/amazon.titan-embed-text-v2:0" 
                ]
            }
        ]
    }

    s3_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": [f'arn:aws:s3:::{data_s3_bucket}', f'arn:aws:s3:::{data_s3_bucket}/*'], 
                "Condition": {
                    "StringEquals": {
                        "aws:ResourceAccount": f"{account_number}"
                    }
                }
            }
        ]
    }

    assume_role_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": "bedrock.amazonaws.com"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
    
    
    # create policies based on the policy documents
    fm_policy = iam_client.create_policy(
        PolicyName=fm_policy_name,
        PolicyDocument=json.dumps(foundation_model_policy_document),
        Description='Policy for accessing foundation model',
    )

    s3_policy = iam_client.create_policy(
        PolicyName=s3_policy_name,
        PolicyDocument=json.dumps(s3_policy_document),
        Description='Policy for reading documents from s3')

    # create bedrock execution role
    bedrock_kb_execution_role = iam_client.create_role(
        RoleName=bedrock_execution_role_name,
        AssumeRolePolicyDocument=json.dumps(assume_role_policy_document),
        Description='Amazon Bedrock Knowledge Base Execution Role for accessing OSS and S3',
        MaxSessionDuration=3600
    )

    # fetch arn of the policies and role created above
    bedrock_kb_execution_role_arn = bedrock_kb_execution_role['Role']['Arn']
    s3_policy_arn = s3_policy["Policy"]["Arn"]
    fm_policy_arn = fm_policy["Policy"]["Arn"]

    # attach policies to Amazon Bedrock execution role
    iam_client.attach_role_policy(
        RoleName=bedrock_kb_execution_role["Role"]["RoleName"],
        PolicyArn=fm_policy_arn
    )
    iam_client.attach_role_policy(
        RoleName=bedrock_kb_execution_role["Role"]["RoleName"],
        PolicyArn=s3_policy_arn
    )
    return bedrock_kb_execution_role


def create_oss_policy_attach_bedrock_execution_role(collection_id, bedrock_kb_execution_role):
    # define oss policy document
    oss_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "aoss:APIAccessAll"
                ],
                "Resource": [
                    f"arn:aws:aoss:{region_name}:{account_number}:collection/{collection_id}"
                ]
            }
        ]
    }
    oss_policy = iam_client.create_policy(
        PolicyName=oss_policy_name,
        PolicyDocument=json.dumps(oss_policy_document),
        Description='Policy for accessing opensearch serverless',
    )
    oss_policy_arn = oss_policy["Policy"]["Arn"]
    print("Opensearch serverless arn: ", oss_policy_arn)

    iam_client.attach_role_policy(
        RoleName=bedrock_kb_execution_role["Role"]["RoleName"],
        PolicyArn=oss_policy_arn
    )
    return None


def create_policies_in_oss(vector_store_name, aoss_client, bedrock_kb_execution_role_arn):
    encryption_policy = aoss_client.create_security_policy(
        name=encryption_policy_name,
        policy=json.dumps(
            {
                'Rules': [{'Resource': ['collection/' + vector_store_name],
                           'ResourceType': 'collection'}],
                'AWSOwnedKey': True
            }),
        type='encryption'
    )

    network_policy = aoss_client.create_security_policy(
        name=network_policy_name,
        policy=json.dumps(
            [
                {'Rules': [{'Resource': ['collection/' + vector_store_name],
                            'ResourceType': 'collection'}],
                 'AllowFromPublic': True}
            ]),
        type='network'
    )
    access_policy = aoss_client.create_access_policy(
        name=access_policy_name,
        policy=json.dumps(
            [
                {
                    'Rules': [
                        {
                            'Resource': ['collection/' + vector_store_name],
                            'Permission': [
                                'aoss:CreateCollectionItems',
                                'aoss:DeleteCollectionItems',
                                'aoss:UpdateCollectionItems',
                                'aoss:DescribeCollectionItems'],
                            'ResourceType': 'collection'
                        },
                        {
                            'Resource': ['index/' + vector_store_name + '/*'],
                            'Permission': [
                                'aoss:CreateIndex',
                                'aoss:DeleteIndex',
                                'aoss:UpdateIndex',
                                'aoss:DescribeIndex',
                                'aoss:ReadDocument',
                                'aoss:WriteDocument'],
                            'ResourceType': 'index'
                        }],
                    'Principal': [identity, bedrock_kb_execution_role_arn],
                    'Description': 'Easy data policy'}
            ]),
        type='data'
    )
    return encryption_policy, network_policy, access_policy


def create_rds_policy_attach_bedrock_execution_role(db_cluster_arn, aurora_db_secret_arn, bedrock_kb_execution_role):
    # define rds policy document
    rds_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "rds-data:ExecuteStatement",
                    "rds:DescribeDBClusters",
                    "rds-data:BatchExecuteStatement"
                ],
                "Resource": [
                    db_cluster_arn
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "secretsmanager:GetSecretValue",
                    "secretsmanager:DescribeSecret"
                ],
                "Resource": [
                    aurora_db_secret_arn
                ]
            }
        ]
    }
    rds_policy = iam_client.create_policy(
        PolicyName=rds_policy_name,
        PolicyDocument=json.dumps(rds_policy_document),
        Description='Policy for accessing RDS Aurora Database',
    )
    rds_policy_arn = rds_policy["Policy"]["Arn"]
    print("RDS Aurora Policy arn: ", rds_policy_arn)

    iam_client.attach_role_policy(
        RoleName=bedrock_kb_execution_role["Role"]["RoleName"],
        PolicyArn=rds_policy_arn
    )
    return None

Use the following code to create the IAM role for Amazon Bedrock, which you’ll use while creating the knowledge base:

bedrock_kb_execution_role = create_bedrock_execution_role(bucket_name=data_s3_bucket)
bedrock_kb_execution_role_arn = bedrock_kb_execution_role['Role']['Arn']

Integrate with OpenSearch Serverless

The Vector Engine for Amazon OpenSearch Serverless is an on-demand serverless configuration for OpenSearch Service. Because it’s serverless, it removes the operational complexities of provisioning, configuring, and tuning your OpenSearch clusters. With OpenSearch Serverless, you can search and analyze a large volume of data without having to worry about the underlying infrastructure and data management.

The following diagram illustrates the OpenSearch Serverless architecture. OpenSearch Serverless compute capacity for data ingestion, searching, and querying is measured in OpenSearch Compute Units (OCUs).

The vector search collection type in OpenSearch Serverless provides a similarity search capability that is scalable and high performing. This makes it a popular option for a vector database when using Amazon Bedrock Knowledge Bases, because it makes it straightforward to build modern machine learning (ML) augmented search experiences and generative AI applications without having to manage the underlying vector database infrastructure. Use cases for OpenSearch Serverless vector search collections include image searches, document searches, music retrieval, product recommendations, video searches, location-based searches, fraud detection, and anomaly detection. The vector engine provides distance metrics such as Euclidean distance, cosine similarity, and dot product similarity. You can store fields with various data types for metadata, such as numbers, Booleans, dates, keywords, and geopoints. You can also store fields with text for descriptive information to add more context to stored vectors. Collocating the data types reduces complexity, increases maintainability, and avoids data duplication, version compatibility challenges, and licensing issues.

The following code snippets set up an OpenSearch Serverless vector database and integrate it with a knowledge base in Amazon Bedrock:

  1. Create an OpenSearch Serverless vector collection:
    # create security, network and data access policies within OSS
    encryption_policy, network_policy, access_policy = create_policies_in_oss(
        vector_store_name=oss_vector_store_name,
        aoss_client=aoss_client,
        bedrock_kb_execution_role_arn=bedrock_kb_execution_role_arn
    )

    # Create OpenSearch Serverless Vector Collection
    collection = aoss_client.create_collection(name=oss_vector_store_name, type='VECTORSEARCH')

    # Get the OpenSearch Serverless collection URL
    collection_id = collection['createCollectionDetail']['id']
    host = collection_id + '.' + region_name + '.aoss.amazonaws.com'
    print(host)

    # Wait for collection creation; this can take a couple of minutes to finish
    response = aoss_client.batch_get_collection(names=[oss_vector_store_name])
    # Periodically check collection status
    while (response['collectionDetails'][0]['status']) == 'CREATING':
        print('Creating collection...')
        time.sleep(30)
        response = aoss_client.batch_get_collection(names=[oss_vector_store_name])
    print('\nCollection successfully created:')

    # Create an OpenSearch Serverless access policy and attach it to the Bedrock execution role
    try:
        create_oss_policy_attach_bedrock_execution_role(collection_id=collection_id,
            bedrock_kb_execution_role=bedrock_kb_execution_role)
        # It can take up to a minute for data access rules to be enforced
        time.sleep(60)
    except Exception as e:
        print("Policy already exists")
        print(e)

  2. Create an index in the collection; this index will be managed by Amazon Bedrock Knowledge Bases:
    body_json = {
        "settings": {
            "index.knn": "true",
            "number_of_shards": 1,
            "knn.algo_param.ef_search": 512,
            "number_of_replicas": 0,
        },
        "mappings": {
            "properties": {
                "vector": {
                    "type": "knn_vector",
                    "dimension": 1024,
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "l2"
                    },
                },
                "text": {
                    "type": "text"
                },
                "text-metadata": {
                    "type": "text"
                }
            }
        }
    }
    
    # Build the OpenSearch client
    oss_client = OpenSearch(
        hosts=[{'host': host, 'port': 443}],
        http_auth=awsauth,
        use_ssl=True,
        verify_certs=True,
        connection_class=RequestsHttpConnection,
        timeout=300
    )
    
    # Create index
    try:
        response = oss_client.indices.create(index=oss_index_name, body=json.dumps(body_json))
        print('Creating index:')
        # index creation can take up to a minute
        time.sleep(60)
        print('Index Creation Completed:')
    except RequestError as e:
        # you can delete the index if it already exists
        # oss_client.indices.delete(index=oss_index_name)
        print(f'Error while trying to create the index, with error {e.error}\n'
              'You may uncomment the delete statement above to delete and recreate the index')

  3. Create a knowledge base in Amazon Bedrock pointing to the OpenSearch Serverless vector collection and index:
    opensearchServerlessConfiguration = {
                "collectionArn": collection["createCollectionDetail"]['arn'],
                "vectorIndexName": oss_index_name,
                "fieldMapping": {
                    "vectorField": "vector",
                    "textField": "text",
                    "metadataField": "text-metadata"
                }
            }
    
    # The embedding model used by Bedrock to embed ingested documents, and realtime prompts
    embeddingModelArn = f"arn:aws:bedrock:{region_name}::foundation-model/amazon.titan-embed-text-v2:0"
    
    name = f"kb-os-shareholder-letter-{suffix}"
    description = "Amazon shareholder letter knowledge base."
    roleArn = bedrock_kb_execution_role_arn
    
    # Create a KnowledgeBase
    from retrying import retry
    
    @retry(wait_random_min=1000, wait_random_max=2000,stop_max_attempt_number=7)
    def create_knowledge_base_func():
        create_kb_response = bedrock_agent_client.create_knowledge_base(
            name = name,
            description = description,
            roleArn = roleArn,
            knowledgeBaseConfiguration = {
                "type": "VECTOR",
                "vectorKnowledgeBaseConfiguration": {
                    "embeddingModelArn": embeddingModelArn
                }
            },
            storageConfiguration = {
                "type": "OPENSEARCH_SERVERLESS",
                "opensearchServerlessConfiguration":opensearchServerlessConfiguration
            }
        )
        return create_kb_response["knowledgeBase"]
    
    
    try:
        kb = create_knowledge_base_func()
    except Exception as err:
        print(f"{err=}, {type(err)=}")
        
    # Get KnowledgeBase 
    get_kb_response = bedrock_agent_client.get_knowledge_base(knowledgeBaseId = kb['knowledgeBaseId'])
    
    print(f'OpenSearch Knowledge Response: {get_kb_response}')

  4. Create a data source for the knowledge base:
    # Ingest strategy - How to ingest data from the data source
    chunkingStrategyConfiguration = {
        "chunkingStrategy": "FIXED_SIZE",
        "fixedSizeChunkingConfiguration": {
            "maxTokens": 512,
            "overlapPercentage": 20
        }
    }
    
    # The data source to ingest documents from, into the OpenSearch serverless knowledge base index
    s3Configuration = {
        "bucketArn": f"arn:aws:s3:::{data_s3_bucket}",
        "inclusionPrefixes": [f"{data_s3_prefix}"] # you can use this if you want to create a KB using data within s3 prefixes.
        }
    # Create a DataSource in KnowledgeBase 
    create_ds_response = bedrock_agent_client.create_data_source(
        name = f'{name}-{bucket_name}',
        description = description,
        knowledgeBaseId = kb['knowledgeBaseId'],
        dataSourceConfiguration = {
            "type": "S3",
            "s3Configuration":s3Configuration
        },
        vectorIngestionConfiguration = {
            "chunkingConfiguration": chunkingStrategyConfiguration
        }
    )
    ds = create_ds_response["dataSource"]
    
    ds

  5. Start an ingestion job for the knowledge base pointing to OpenSearch Serverless to generate vector embeddings for data in Amazon S3:
    ingest_jobs = []
    # Start an ingestion job
    try:
        start_job_response = bedrock_agent_client.start_ingestion_job(knowledgeBaseId = kb['knowledgeBaseId'], dataSourceId = ds["dataSourceId"])
        job = start_job_response["ingestionJob"]
        print("ingestion job started successfully\n")

        while job['status'] != 'COMPLETE':
            get_job_response = bedrock_agent_client.get_ingestion_job(
                knowledgeBaseId = kb['knowledgeBaseId'],
                dataSourceId = ds["dataSourceId"],
                ingestionJobId = job["ingestionJobId"]
            )
            job = get_job_response["ingestionJob"]
            # Poll every 30 seconds until the ingestion job completes
            time.sleep(30)

        print("job completed successfully\n")

    except Exception as e:
        print("Couldn't start job.\n")
        print(e)

Integrate with Aurora pgvector

Aurora provides pgvector integration, which is an open source extension for PostgreSQL that adds the ability to store and search over ML-generated vector embeddings. This enables you to use Aurora for generative AI RAG-based use cases by storing vectors with the rest of the data. The following diagram illustrates the sample architecture.

Use cases for Aurora pgvector include applications that have requirements for ACID compliance, point-in-time recovery, joins, and more. The following is a sample code snippet to configure Aurora with your knowledge base in Amazon Bedrock:

  1. Create an Aurora DB instance (this code creates a managed DB instance, but you can create a serverless instance as well). Identify the security group ID and subnet IDs for your VPC before running the following step and provide the appropriate values in the vpc_security_group_ids and SubnetIds variables:
    # Define database instance parameters
    db_instance_identifier = aurora_vector_db_instance
    db_cluster_identifier = aurora_vector_db_cluster
    engine = 'aurora-postgresql'
    db_name = aurora_database_name
    db_instance_class = 'db.r6g.2xlarge'
    master_username = 'postgres'
    # Get Security Group Id(s), for replicating Blogpost steps it can be one associated with Default VPC
    vpc_security_group_ids = ['sg-XXXXXXX']
    subnet_group_name = 'vectordbsubnetgroup'
    
    response = rds.create_db_subnet_group(
        DBSubnetGroupName=subnet_group_name,
        DBSubnetGroupDescription='Subnet Group for Blogpost Aurora PostgreSql Database Cluster',
        # Get Subnet IDs, for replicating Blogpost steps it can be one associated with Default VPC 
        SubnetIds=[
            'subnet-XXXXXXX',
            'subnet-XXXXXXX',
            'subnet-XXXXXXX',
            'subnet-XXXXXXX',
            'subnet-XXXXXXX',
            'subnet-XXXXXXX'
        ]
    )
    
    # Create the Aurora cluster
    response = rds.create_db_cluster(
        DBClusterIdentifier=db_cluster_identifier,
        Engine=engine,
        MasterUsername=master_username,
        ManageMasterUserPassword=True,
        DBSubnetGroupName=subnet_group_name,
        VpcSecurityGroupIds=vpc_security_group_ids,
        DatabaseName=db_name
    )
    
    # Create the Aurora instance
    response = rds.create_db_instance(
        DBInstanceIdentifier=db_instance_identifier,
        DBInstanceClass=db_instance_class,
        Engine=engine,
        DBClusterIdentifier=db_cluster_identifier
    )

  2. On the Amazon RDS console, confirm the Aurora database status shows as Available.
  3. Create the vector extension, schema, and vector table in the Aurora database:
    ##Get Amazon Aurora Database Secret Manager ARN created internally while creating DB Cluster and Database Cluster ARN
    
    describe_db_clusters_response = rds.describe_db_clusters(
        DBClusterIdentifier=db_cluster_identifier,
        IncludeShared=False
    )
    
    aurora_db_secret_arn = describe_db_clusters_response['DBClusters'][0]['MasterUserSecret']['SecretArn']
    db_cluster_arn = describe_db_clusters_response['DBClusters'][0]['DBClusterArn']
    
    # Enable HTTP Endpoint for Amazon Aurora Database instance
    response = rds.enable_http_endpoint(
        ResourceArn=db_cluster_arn
    )
    
    # Create Vector Extension in Aurora PostgreSQL Database which will be used in table creation
    vector_extension_create_response = rds_data_client.execute_statement(
        resourceArn=db_cluster_arn,
        secretArn=aurora_db_secret_arn,
        sql='CREATE EXTENSION IF NOT EXISTS vector',
        database=db_name
    )
    
    # Create Schema in Aurora PostgreSQL database
    schema_create_response = rds_data_client.execute_statement(
        resourceArn=db_cluster_arn,
        secretArn=aurora_db_secret_arn,
        sql='CREATE SCHEMA IF NOT EXISTS bedrock_integration',
        database=db_name
    )
    
    # Create the table that stores vector embeddings for the shareholder letters
    table_create_response = rds_data_client.execute_statement(
        resourceArn=db_cluster_arn,
        secretArn=aurora_db_secret_arn,
        sql='CREATE TABLE IF NOT EXISTS bedrock_integration.share_holder_letter_kb(id uuid PRIMARY KEY, embedding vector(1024), chunks text, metadata json, company varchar(100), document_type varchar(100), year int)',
        database=db_name
    )
    
    # Check the status of the queries
    vector_extension_create_status = 'Success' if vector_extension_create_response['ResponseMetadata']['HTTPStatusCode'] == 200 else 'Fail'
    schema_create_status = 'Success' if schema_create_response['ResponseMetadata']['HTTPStatusCode'] == 200 else 'Fail'
    table_create_status = 'Success' if table_create_response['ResponseMetadata']['HTTPStatusCode'] == 200 else 'Fail'
    
    # Print the status of the queries
    print(f"Create Vector Extension Status: {vector_extension_create_status}")
    print(f"Create Schema Status: {schema_create_status}")
    print(f"Create Table Status: {table_create_status}")

  4. Create a knowledge base in Amazon Bedrock pointing to the Aurora database and table:
    # Attach RDS-related permissions to the Amazon Bedrock knowledge base execution role
    
    create_rds_policy_attach_bedrock_execution_role(db_cluster_arn, aurora_db_secret_arn, bedrock_kb_execution_role)
    
    # Define RDS Configuration for Knowledge bases
    rdsConfiguration = {
                'credentialsSecretArn': aurora_db_secret_arn,
                'databaseName': db_name,
                'fieldMapping': {
                    'metadataField': 'metadata',
                    'primaryKeyField': 'id',
                    'textField': 'chunks',
                    'vectorField': 'embedding'
                },
                'resourceArn': db_cluster_arn,
                'tableName': 'bedrock_integration.share_holder_letter_kb'
            }
    
    
    # The embedding model used by Bedrock to embed ingested documents, and realtime prompts
    embeddingModelArn = f"arn:aws:bedrock:{region_name}::foundation-model/amazon.titan-embed-text-v2:0"
    
    name = f"kb-aurora-shareholder-letter-{suffix}"
    description = "Amazon shareholder letter Aurora PG Vector knowledge base."
    roleArn = bedrock_kb_execution_role_arn
    
    # Create a KnowledgeBase
    from retrying import retry
    
    @retry(wait_random_min=1000, wait_random_max=2000,stop_max_attempt_number=7)
    def create_knowledge_base_func():
        create_rds_kb_response = bedrock_agent_client.create_knowledge_base(
            name = name,
            description = description,
            roleArn = roleArn,
            knowledgeBaseConfiguration = {
                "type": "VECTOR",
                "vectorKnowledgeBaseConfiguration": {
                    "embeddingModelArn": embeddingModelArn
                }
            },
            storageConfiguration = {
                "type": "RDS",
                "rdsConfiguration":rdsConfiguration
            }
        )
        return create_rds_kb_response["knowledgeBase"]
    
    try:
        rds_kb = create_knowledge_base_func()
    except Exception as err:
        print(f"{err=}, {type(err)=}")
        
    # Get KnowledgeBase 
    get_rds_kb_response = bedrock_agent_client.get_knowledge_base(knowledgeBaseId = rds_kb['knowledgeBaseId'])
    
    print(f'RDS Aurora Knowledge Response: {get_rds_kb_response}')

  5. Create a data source for the knowledge base:
    # Ingest strategy - How to ingest data from the data source
    chunkingStrategyConfiguration = {
        "chunkingStrategy": "FIXED_SIZE",
        "fixedSizeChunkingConfiguration": {
            "maxTokens": 512,
            "overlapPercentage": 20
        }
    }
    
    # The S3 data source to ingest documents from into the Aurora pgvector knowledge base
    s3Configuration = {
        "bucketArn": f"arn:aws:s3:::{data_s3_bucket}",
        "inclusionPrefixes": [f"{data_s3_prefix}"] # you can use this if you want to create a KB using data within s3 prefixes.
        }
    # Create a DataSource in KnowledgeBase 
    create_ds_response = bedrock_agent_client.create_data_source(
        name = f'{name}-{data_s3_bucket}',
        description = description,
        knowledgeBaseId = rds_kb['knowledgeBaseId'],
        dataSourceConfiguration = {
            "type": "S3",
            "s3Configuration":s3Configuration
        },
        vectorIngestionConfiguration = {
            "chunkingConfiguration": chunkingStrategyConfiguration
        }
    )
    ds = create_ds_response["dataSource"]
    
    ds

  6. Start an ingestion job for your knowledge base pointing to the Aurora pgvector table to generate vector embeddings for data in Amazon S3:
    import time
    
    ingest_jobs = []
    # Start an ingestion job for the Aurora pgvector knowledge base
    try:
        start_job_response = bedrock_agent_client.start_ingestion_job(
            knowledgeBaseId = rds_kb['knowledgeBaseId'],
            dataSourceId = ds["dataSourceId"]
        )
        job = start_job_response["ingestionJob"]
        print("job started successfully\n")
    
        # Poll until the ingestion job finishes
        while job['status'] not in ('COMPLETE', 'FAILED'):
            time.sleep(30)
            get_job_response = bedrock_agent_client.get_ingestion_job(
                knowledgeBaseId = rds_kb['knowledgeBaseId'],
                dataSourceId = ds["dataSourceId"],
                ingestionJobId = job["ingestionJobId"]
            )
            job = get_job_response["ingestionJob"]
    
        print(f"job finished with status: {job['status']}\n")
    
    except Exception as e:
        print("Couldn't start job.\n")
        print(e)

Integrate with MongoDB Atlas

MongoDB Atlas Vector Search, when integrated with Amazon Bedrock, can serve as a robust and scalable knowledge base to build generative AI applications and implement RAG workflows. By using the flexible document data model of MongoDB Atlas, organizations can represent and query complex knowledge entities and their relationships within Amazon Bedrock. The combination of MongoDB Atlas and Amazon Bedrock provides a powerful solution for building and maintaining a centralized knowledge repository.

To use MongoDB, you can create a cluster and vector search index. The native vector search capabilities embedded in an operational database simplify building sophisticated RAG implementations. MongoDB allows you to store, index, and query vector embeddings of your data without the need for a separate bolt-on vector database.

There are three pricing options available for MongoDB Atlas through AWS Marketplace: MongoDB Atlas (pay-as-you-go), MongoDB Atlas Enterprise, and MongoDB Atlas for Government. Refer to the MongoDB Atlas Vector Search documentation to set up a MongoDB vector database and add it to your knowledge base.
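
If you script the setup, the knowledge base creation call mirrors the Aurora example shown earlier; only the storage configuration changes. The following is a minimal sketch, assuming you have already created an Atlas cluster, a vector search index, and an AWS Secrets Manager secret with the database credentials, and reusing the bedrock_agent_client, suffix, role, and embedding model variables from the Aurora example. The endpoint, names, and mongodb_secret_arn value are placeholders; verify the exact mongoDbAtlasConfiguration field names against the current CreateKnowledgeBase API reference.

    # Storage configuration for a MongoDB Atlas-backed knowledge base (placeholder values)
    mongoDbAtlasConfiguration = {
        "endpoint": "your-cluster.xxxxx.mongodb.net",     # Atlas cluster endpoint
        "databaseName": "bedrock_kb",                     # database holding the collection
        "collectionName": "share_holder_letter_kb",       # collection that stores chunks and vectors
        "vectorIndexName": "vector_index",                # Atlas Vector Search index name
        "credentialsSecretArn": mongodb_secret_arn,       # Secrets Manager secret with the DB credentials
        "fieldMapping": {
            "vectorField": "embedding",
            "textField": "chunks",
            "metadataField": "metadata"
        }
    }
    
    create_mongodb_kb_response = bedrock_agent_client.create_knowledge_base(
        name=f"kb-mongodb-shareholder-letter-{suffix}",
        description="Amazon shareholder letter MongoDB Atlas knowledge base.",
        roleArn=bedrock_kb_execution_role_arn,
        knowledgeBaseConfiguration={
            "type": "VECTOR",
            "vectorKnowledgeBaseConfiguration": {"embeddingModelArn": embeddingModelArn}
        },
        storageConfiguration={
            "type": "MONGO_DB_ATLAS",
            "mongoDbAtlasConfiguration": mongoDbAtlasConfiguration
        }
    )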

Integrate with Pinecone

Pinecone is a vector database from Pinecone Systems Inc. With Amazon Bedrock Knowledge Bases, you can integrate your enterprise data into Amazon Bedrock using Pinecone as the fully managed vector database to build generative AI applications. Pinecone is highly performant; it can speed through data in milliseconds. You can use its metadata filters and sparse-dense index support for top-notch relevance, achieving quick, accurate, and grounded results across diverse search tasks. Pinecone is enterprise ready; you can launch and scale your AI solution without needing to maintain infrastructure, monitor services, or troubleshoot algorithms. Pinecone adheres to the security and operational requirements of enterprises.

There are two pricing options available for Pinecone in AWS Marketplace: Pinecone Vector Database – Pay As You Go Pricing (serverless) and Pinecone Vector Database – Annual Commit (managed). Refer to the Pinecone documentation to set up a Pinecone vector database and add it to your knowledge base.
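
If you create the knowledge base programmatically rather than through the console, only the storage configuration block differs from the earlier examples. The following is a minimal sketch of a Pinecone storage configuration; the connection string and secret ARN are placeholders, and the field names are assumptions to verify against the current API reference.

    # Pinecone storage configuration for create_knowledge_base (placeholder values)
    pineconeStorageConfiguration = {
        "type": "PINECONE",
        "pineconeConfiguration": {
            "connectionString": "https://your-index-xxxxxxx.svc.your-region.pinecone.io",
            "credentialsSecretArn": pinecone_api_key_secret_arn,  # Secrets Manager secret holding the Pinecone API key
            "fieldMapping": {
                "textField": "chunks",
                "metadataField": "metadata"
            }
        }
    }

Pass this dictionary as the storageConfiguration argument of create_knowledge_base, as shown in the Aurora example.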

Integrate with Redis Enterprise Cloud

Redis Enterprise Cloud enables you to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud to help applications meet low latency requirements. Vector search is one of the solution options available in Redis Enterprise Cloud, which solves for low latency use cases related to RAG, semantic caching, document search, and more. Amazon Bedrock natively integrates with Redis Enterprise Cloud vector search.

There are two pricing options available for Redis Enterprise Cloud through AWS Marketplace: Redis Cloud Pay As You Go Pricing and Redis Cloud – Annual Commits. Refer to the Redis Enterprise Cloud documentation to set up vector search and add it to your knowledge base.
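
As with the other partner options, a Redis Enterprise Cloud-backed knowledge base only changes the storage configuration passed to create_knowledge_base. The following is a minimal sketch; the endpoint, index name, and secret ARN are placeholders, and the field names are assumptions to verify against the current API reference.

    # Redis Enterprise Cloud storage configuration for create_knowledge_base (placeholder values)
    redisStorageConfiguration = {
        "type": "REDIS_ENTERPRISE_CLOUD",
        "redisEnterpriseCloudConfiguration": {
            "endpoint": "redis-12345.c1.us-east-1-1.ec2.cloud.redislabs.com:12345",
            "vectorIndexName": "bedrock-kb-index",
            "credentialsSecretArn": redis_credentials_secret_arn,  # Secrets Manager secret with the Redis credentials
            "fieldMapping": {
                "vectorField": "embedding",
                "textField": "chunks",
                "metadataField": "metadata"
            }
        }
    }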

Interact with Amazon Bedrock knowledge bases

Amazon Bedrock provides a common set of APIs to interact with knowledge bases:

  • Retrieve API – Queries the knowledge base and retrieves information from it. This knowledge base-specific API helps with use cases where you only need vector-based search over documents, without model inference.
  • RetrieveAndGenerate API – Queries the knowledge base and uses an LLM to generate responses based on the retrieved results (a sketch follows the Retrieve examples below).

The following code snippets show how to use the Retrieve API from the OpenSearch Serverless vector database’s index and the Aurora pgvector table:

  1. Retrieve data from the OpenSearch Serverless vector database’s index:
    query = "What is Amazon's doing in the field of generative AI?"
    
    relevant_documents_os = bedrock_agent_runtime_client.retrieve(
        retrievalQuery= {
            'text': query
        },
        knowledgeBaseId=kb['knowledgeBaseId'],
        retrievalConfiguration= {
            'vectorSearchConfiguration': {
                'numberOfResults': 3 # fetch the top 3 documents that most closely match the query
            }
        }
    )
    relevant_documents_os["retrievalResults"]

  2. Retrieve data from the Aurora pgvector table:
    query = "What is Amazon's doing in the field of generative AI?"
    
    relevant_documents_rds = bedrock_agent_runtime_client.retrieve(
        retrievalQuery= {
            'text': query
        },
        knowledgeBaseId=rds_kb['knowledgeBaseId'],
        retrievalConfiguration= {
            'vectorSearchConfiguration': {
                'numberOfResults': 3 # fetch the top 3 documents that most closely match the query
            }
        }
    )
    
    relevant_documents_rds["retrievalResults"]
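
The RetrieveAndGenerate API follows the same pattern but also returns a model-generated answer grounded in the retrieved passages. The following is a minimal sketch against the Aurora-backed knowledge base, reusing the query from the previous step; the model ARN is a placeholder for any text model you have access to in Amazon Bedrock.

    # Query the knowledge base and let an LLM generate a grounded answer with citations
    rag_response = bedrock_agent_runtime_client.retrieve_and_generate(
        input={'text': query},
        retrieveAndGenerateConfiguration={
            'type': 'KNOWLEDGE_BASE',
            'knowledgeBaseConfiguration': {
                'knowledgeBaseId': rds_kb['knowledgeBaseId'],
                'modelArn': f"arn:aws:bedrock:{region_name}::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0"
            }
        }
    )
    
    print(rag_response['output']['text'])   # generated answer
    print(rag_response['citations'])        # retrieved passages backing the answer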

Clean up

When you’re done with this solution, clean up the resources you created:

  • Amazon Bedrock knowledge bases for OpenSearch Serverless and Aurora
  • OpenSearch Serverless collection
  • Aurora DB instance
  • S3 bucket
  • SageMaker Studio domain
  • Amazon Bedrock service role
  • SageMaker Studio domain role

Conclusion

In this post, we provided a high-level introduction to generative AI use cases and the use of RAG workflows to augment your organization’s internal or external knowledge stores. We discussed the importance of vector databases and RAG architectures to enable similarity search and why dense vector representations are beneficial. We also went over Amazon Bedrock Knowledge Bases, which provides common APIs, industry-leading governance, observability, and security to enable vector databases using different options like AWS native and partner products through AWS Marketplace. We also dived deep into a few of the vector database options with code examples to explain the implementation steps.

Try out the code examples in this post to implement your own RAG solution using Amazon Bedrock Knowledge Bases, and share your feedback and questions in the comments section.


About the Authors

Vishwa Gupta is a Senior Data Architect with AWS Professional Services. He helps customers implement generative AI, machine learning, and analytics solutions. Outside of work, he enjoys spending time with family, traveling, and trying new foods.

Isaac Privitera is a Principal Data Scientist with the AWS Generative AI Innovation Center, where he develops bespoke generative AI-based solutions to address customers’ business problems. His primary focus lies in building responsible AI systems, using techniques such as RAG, multi-agent systems, and model fine-tuning. When not immersed in the world of AI, Isaac can be found on the golf course, enjoying a football game, or hiking trails with his loyal canine companion, Barry.

Abhishek Madan is a Senior GenAI Strategist with the AWS Generative AI Innovation Center. He helps internal teams and customers in scaling generative AI, machine learning, and analytics solutions. Outside of work, he enjoys playing adventure sports and spending time with family.

Ginni Malik is a Senior Data & ML Engineer with AWS Professional Services. She assists customers by architecting enterprise data lake and ML solutions to scale their data analytics in the cloud.

Satish Sarapuri is a Sr. Data Architect, Data Lake at AWS. He helps enterprise-level customers build high-performance, highly available, cost-effective, resilient, and secure generative AI, data mesh, data lake, and analytics platform solutions on AWS through which customers can make data-driven decisions to gain impactful outcomes for their business, and helps them on their digital and data transformation journey. In his spare time, he enjoys spending time with his family and playing tennis.

Read More

Enable or disable ACL crawling safely in Amazon Q Business

Enable or disable ACL crawling safely in Amazon Q Business

Amazon Q Business recently added support for administrators to modify the default access control list (ACL) crawling feature for data source connectors.

Amazon Q Business is a fully managed, AI-powered assistant with enterprise-grade security and privacy features. It includes over 40 data source connectors that crawl and index documents. By default, Amazon Q Business indexes ACL information attached to documents along with the documents themselves and uses this to filter chat responses based on the user’s document access. With this new feature, you can enable or disable ACL crawling as required by your business use case.

This post introduces the new ACL toggle feature for Amazon Q Business, which you can use to enable or disable ACL crawling. We’ll explore use cases for disabling ACLs and discuss how to safely enable or disable ACL crawling.

Overview of access control list crawling

Amazon Q Business data source connectors help crawl various data sources to collect and index content in Amazon Q Business for fast discovery and retrieval when answering user queries. These data sources often contain documents with different classifications such as public, internal public, private, and confidential. To provide fine-grained control over access rights, you can attach ACLs to documents, allowing you to specify different levels of access for various users or groups. To verify that Amazon Q Business respects access control policies and that users only receive responses for content they’re authorized to access, the data source connectors automatically crawl for access permissions associated with the content, user identifiers, and groups.

The preceding figure illustrates the Amazon Q Business data source crawler with ACL crawling enabled. As the connector retrieves content from the data source, it examines the associated ACL and compiles a list of users and groups with read permissions for each document. The connector also collects user identifiers, which are stored in the Amazon Q Business user store for quick matching during query execution. Both the ACL and content are optimized and stored in the Amazon Q Business index storage, enabling secure and efficient retrieval when answering user queries. For more information on the user store, see Understanding Amazon Q Business User Store.

When to disable ACL crawling?

ACL crawling builds a security-aware index that respects access control policies in the primary data source. This process helps maintain data privacy and access control required for regulatory compliance, making sure that sensitive information isn’t inadvertently exposed through user query results. It provides a scalable mechanism to handle large amounts of content while maintaining consistency between the actual access controls on the data and what’s discoverable through search. Because of these advantages, ACL crawling is strongly recommended for all data sources. However, there are some circumstances when you might need to disable it. The following are some reasons why you might disable ACL crawling.

Internally public content

Organizations often designate certain data sources as internally public, including HR policies, IT knowledge bases, and wiki pages. For instance, a company might allocate an entire Microsoft SharePoint site for policies accessible to all employees, classifying it as internal-public. In such cases, crawling ACLs for permissions that include all employees can be costly and create unnecessary overhead. Turning off ACL crawling might be advantageous in these scenarios.

Data source contains irreconcilable identities

Amazon Q Business requires all users to authenticate with an enterprise-approved identity provider (IdP). After successful authentication, Amazon Q Business uses the IdP-provided user identifier to match against the user identifier fetched from the data source during ACL crawling. This process validates user access to content before retrieving it for query responses.

However, because of legacy issues such as mergers and acquisitions, data source configuration limitations, or other constraints, the primary user identifier from the IdP might differ from the one in the data source. This discrepancy can prevent Amazon Q Business from retrieving relevant content from the index and answering user queries effectively.

In such cases, it might be necessary to disable ACL crawling and use alternative options. These include implementing attribute filters or building dedicated restricted applications with access limited to specific audiences and content. For more information on attribute filters, see Filtering chat responses using document attributes.

Use case-driven targeted deployments

As a fully managed service, Amazon Q Business can be quickly deployed in multiple instances for scoped down targeted use cases. Examples include an HR bot in Slack or an AI assistant for customer support agents in a contact center. Because these AI assistants might be deployed for a limited audience, and the indexed content might be generally available to all users with application access, ACL crawling can be turned off.

Note of caution

Amazon Q Business cannot enforce access controls if ACL crawling is disabled. When ACL crawling is disabled for a data source, indexed content in that source will be considered accessible to users with access to the Amazon Q Business application. Therefore, disabling ACL crawling should be done with caution and due diligence. The following are some recommended best practices:

  • Notify data source content owners and administrators of your intent to disable ACL crawling and obtain their approval beforehand.
  • If applicable, consider implementing alternative options such as attribute filtering to restrict content retrieval or deploying a scoped-down, use-case-driven deployment to a limited audience.
  • Maintain a decision document that clearly articulates the reasons for disabling ACL crawling, the scope of affected content, and precautions taken to prevent indexing of sensitive information.

Note: As a precaution, you cannot disable ACL crawling for an existing Amazon Q Business data source that already has ACL crawling enabled. To disable ACL crawling, you must delete the data source and recreate it. You can only disable ACL crawling during the data source creation process, and this requires an account administrator to grant permission for disabling ACL crawling when configuring the data source.

Procedures for configuring ACL crawling

Amazon Q Business ACL crawling helps protect your data. Amazon Q Business provides safeguards to help administrators and developers mitigate accidentally disabling ACL crawling. In this section, we will cover how you can allow or deny the ACL crawling disable feature, explore procedures to enable or disable ACL crawling, explain how to monitor logs for ACL crawling configuration changes, and troubleshoot common issues.

Personas for configuring ACL crawling

ACL crawling configuration typically involves multiple roles, depending on your organizational structure. To maximize safeguards, it’s recommended that these roles are filled by different individuals. For faster deployments, identify the necessary personnel within your organization before starting the project and ensure they collaborate to complete the configuration. Here are the common roles needed for ACL crawling configuration:

  1. AWS account administrator – An AWS account administrator is a user with full access to AWS services and the ability to manage IAM resources and permissions in the account. They can create and manage organizations, enabling centralized management of multiple AWS accounts.
  2. Amazon Q Business administrator – An Amazon Q Business administrator is typically a user or role responsible for managing and configuring the Amazon Q Business service. Their duties include creating and optimizing Amazon Q Business indexes, setting up guardrails, and tuning relevance. They also set up and maintain connections to various data sources that Amazon Q Business will index, such as Amazon Simple Storage Service (Amazon S3) buckets, SharePoint, Salesforce, and Confluence.

Prerequisites for ACL crawling

Process to disallow the option to disable ACL crawling

By default, the option to disable ACL crawling is enabled for an account. AWS account administrators can disallow this feature by setting up an account-level policy. It’s recommended to configure an explicit deny for production accounts by default. The following shows the associated actions in relation to the personas involved in the configuration process.

Administrators can attach the IAM action qbusiness:DisableAclOnDataSource to the Amazon Q Business administrator user or role policy to deny or allow the option to disable ACL crawling. The example IAM policy code snippet that follows demonstrates how to set up an explicit deny.

{
    "Version": "2012-10-17",
    "Statement": [
        {
          "Effect": "Deny",
          "Action": [
                "qbusiness:DisableAclOnDataSource"
            ],
          "Resource": ["*"]
       }
    ]
}
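
One way to apply this policy programmatically is to attach it as an inline policy to the Amazon Q Business administrator’s IAM role. The following is a minimal sketch using boto3; the role name and policy name are placeholders.

    import json
    import boto3
    
    iam = boto3.client("iam")
    
    deny_disable_acl_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": ["qbusiness:DisableAclOnDataSource"],
                "Resource": ["*"]
            }
        ]
    }
    
    # Attach the explicit deny as an inline policy on the Amazon Q Business administrator role
    iam.put_role_policy(
        RoleName="qbusiness-admin-role",            # placeholder role name
        PolicyName="DenyDisableAclOnDataSource",
        PolicyDocument=json.dumps(deny_disable_acl_policy)
    )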

Note that even if the option to disable ACL crawling is denied, the user interface might not gray out this option. However, if you attempt to create a data source with this option disabled, it will fail the validation check, and Amazon Q Business will not create the data source.

Process to disable ACL crawling for a data source connector

Before setting up a data source connector with ACL crawling disabled in your Amazon Q Business application deployment, complete the following steps:

  1. Make sure that you have no sensitive content in the data source, or that you have implemented controls to help prevent accidental content exposure.
  2. Verify that the data source connector supports the option to disable ACL crawling.
  3. Notify information custodians, content owners, and data source administrators of your intent to disable ACL crawling and obtain their documented approvals, if necessary.
  4. If your account administrator has explicitly denied the option to disable ACL crawling, request temporary permission.
  5. After you have secured all approvals and exceptions, create a new data source with ACL crawling disabled and sync the data. With ACL crawling disabled, Amazon Q Business users will be able to discover knowledge and obtain answers from the indexed documents through this connector.
  6. Notify the account administrator to revert the account policy back to explicitly denying the disable ACL crawling option.

The process and interaction between different roles are shown in the following chart.

The following is an overview of the procedure to create a data source with ACL crawling disabled using the AWS Management Console:

  1. Navigate to the Amazon Q Business console.
  2. Select the Amazon Q Business application that you want to add a data source connector to.
  3. Choose Add data source in the Data sources section and select the desired connector.
  4. Update the connector configuration information. See Connecting Amazon Q Business data sources for configuration details.
  5. In the Authorization section, choose Disable ACLs and check the acknowledgment to accept the risks of disabling ACL crawling.
  6. Complete the remaining connector configuration and choose Save.
  7. Sync the data source.

Note: You cannot disable ACL crawling for an existing data source connector that was created with ACL crawling enabled. You must create a new data source connector instance with ACL disabled and delete the older instance that has ACL crawling enabled.

Process to enable ACL crawling for a data source connector

Creating a data source connector with ACL crawling enabled is recommended and doesn’t require additional allow listing from AWS account administrators. To enable ACL crawling, you follow steps similar to disabling ACLs as described in the previous section. When configuring the data source connector using the console, choose Enable ACLs in the Authorization section to create a connector with ACL crawling enabled. You can also enable ACL crawling at any time for an existing data source connector that was created with this option disabled. Sync the data source connector for the ACL enforcement to take effect. Amazon Q Business users can only query and obtain answers from documents to which they have access in the original data source.

Before enabling ACL crawling, verify that the data source administrator has properly set up the required permissions so that the crawler can crawl for ACLs in the data source. You can find the required permissions in the prerequisites section of the connector in Connecting Amazon Q Business data sources. The following shows the process for setting up a data source connector with ACL crawling enabled.

Logging and monitoring the ACL crawling configuration

Amazon Q Business uses AWS CloudTrail for logging API calls related to ACL crawling configuration. You can monitor the CloudTrail log for CreateDataSource and UpdateDataSource API calls to identify ACL crawling-related changes made to data source configuration. For a complete list of Amazon Q Business APIs that are logged to CloudTrail, see Logging Amazon Q Business API calls using AWS CloudTrail.

Administrators can configure Amazon CloudWatch alarms to generate automated alert notifications if ACL crawling is disabled for a data source connector, allowing them to initiate corrective action. For step-by-step instructions on setting up CloudWatch alarms based on CloudTrail events, see How do I use CloudWatch alarms to monitor CloudTrail events.

The example CloudWatch alarm code snippet that follows shows the filter pattern for identifying events related to disabling ACL crawling in a data source connector.

{
    ($.eventSource = qbusiness.amazonaws.com)
    && (
        ($.eventName = CreateDataSource)
        || ($.eventName = UpdateDataSource)
    )
    && ($.requestParameters.disableAclCrawl = true) 
}
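
To automate this, you can create a metric filter on the CloudTrail log group and an alarm on the resulting metric. The following is a minimal sketch; it assumes your trail already delivers events to CloudWatch Logs, and the log group name, metric namespace, and SNS topic ARN are placeholders.

    import boto3
    
    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")
    
    FILTER_PATTERN = (
        "{ ($.eventSource = qbusiness.amazonaws.com) && "
        "(($.eventName = CreateDataSource) || ($.eventName = UpdateDataSource)) && "
        "($.requestParameters.disableAclCrawl = true) }"
    )
    
    # Emit a metric whenever a data source is created or updated with ACL crawling disabled
    logs.put_metric_filter(
        logGroupName="your-cloudtrail-log-group",          # placeholder log group
        filterName="QBusinessAclCrawlingDisabled",
        filterPattern=FILTER_PATTERN,
        metricTransformations=[{
            "metricName": "AclCrawlingDisabledCount",
            "metricNamespace": "QBusinessSecurity",        # placeholder namespace
            "metricValue": "1",
            "defaultValue": 0.0
        }]
    )
    
    # Alarm as soon as one such event is observed
    cloudwatch.put_metric_alarm(
        AlarmName="qbusiness-acl-crawling-disabled",
        Namespace="QBusinessSecurity",
        MetricName="AclCrawlingDisabledCount",
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=1,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"]  # placeholder SNS topic
    )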

Tips for troubleshooting

When configuring Amazon Q Business data source connectors, you might occasionally encounter issues. The following are some common errors and their possible resolutions.

Not authorized to disable ACL crawling

When creating a new data source connector with ACL crawling disabled, you might see an error message stating not authorized to perform: qbusiness:DisableAclOnDataSource as shown in the following image.

This error indicates that your administrator has explicitly denied the option to disable ACL crawling for your AWS account. Contact your administrator to allow-list this action for your account. For more details, see the Process to disable ACL crawling for a data source connector section earlier in this post.

Data source connection errors

Data source connectors might also fail to connect to your data source or crawl data. In such cases, verify that Amazon Q Business can reach the data source through the public internet or through a VPC private network. See Connecting Amazon Q Business data sources to make sure that your data source authentication has the permissions needed to crawl content and ACLs, if enabled.

Identity and ACL mismatch errors

Finally, after successfully syncing data with ACL crawling enabled, some users might still be unable to get answers to queries, even though the relevant documents were indexed. This issue commonly occurs when the user lacks access to the indexed content in the original data source, or when the user identity obtained from the data source doesn’t match the sign-in identity. To troubleshoot such ACL mismatch issues, examine the data source sync report. For more information, see Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business.

Key considerations and recommendations

Given the impact that disabling ACL crawling can have on content security, consider these restrictions and best practices when disabling ACL crawling in Amazon Q Business data source connectors:

  • ACL crawling enablement is a one-way control mechanism. After it’s enabled, you cannot disable it. This helps prevent accidentally disabling ACL crawling in production environments.
  • Keep ACL crawling enabled by default and disable it only for the subset of data source connectors that require it.
  • If necessary, consider splitting the indexing of a data source by setting up multiple data source connectors and limiting ACL crawling disablement to a smaller content segment. Use the document Inclusion and Exclusion feature of data source connectors to define the indexing scope.
  • When ACL crawling is disabled because of irreconcilable identities, consider alternative options. These include implementing attribute filters, restricting access to the Amazon Q Business application, and setting up guardrails.
  • As a security best practice, AWS Organizations and account administrators should add a service control policy to explicitly deny the qbusiness:DisableAclOnDataSource permission for all accounts. Grant this permission only when requested by an Amazon Q Business administrator. After configuring a data source connector with ACL crawling disabled, revert to an explicit deny. Use a ticketing system to maintain a record of exception approvals.
  • Currently, disabling ACL crawling is available for limited connectors, including ServiceNow, Confluence, SharePoint, Jira, Google Drive, OneDrive, Salesforce, Zendesk, GitHub, MS Teams, and Slack. For the latest list of connectors that support disabling ACL crawling, see Connecting Amazon Q Business data sources.

Clean up

To avoid incurring additional charges, make sure you delete any resources created in this post.

  1. To delete any data source created in Amazon Q Business, follow the instructions in Deleting an Amazon Q Business data source connector.
  2. To delete any Amazon Q Business application created, follow the instructions in Deleting an application.

Conclusion

Amazon Q Business data source connector ACL crawling is an essential feature that helps organizations build, manage, and scale secure AI assistants. It plays a crucial role in enforcing regulatory and compliance policies and protecting sensitive content. With the introduction of a self-service feature to disable ACL crawling, Amazon Q Business now provides you more autonomy to choose deployment options that suit your organization’s business needs. To start building secure AI assistants with Amazon Q Business, explore the Getting started guide.


About the Authors

Rajesh Kumar Ravi, a Senior Solutions Architect at Amazon Web Services, specializes in building generative AI solutions using Amazon Q Business, Amazon Bedrock, and Amazon Kendra. He helps businesses worldwide implement these technologies to enhance efficiency, innovation, and competitiveness. An accomplished technology leader, Rajesh has experience developing innovative AI products, nurturing the builder community, and contributing to new ideas. Outside of work, he enjoys walking and short hiking trips.

Meenakshisundaram Thandavarayan works for AWS as an AI/ML Specialist. He has a passion to design, create, and promote human-centered data and analytics experiences. Meena focuses on developing sustainable systems that deliver measurable, competitive advantages for strategic customers of AWS. Meena is a connector and design thinker and strives to drive business to new ways of working through innovation, incubation, and democratization.

Amit Choudhary is a Product Manager for Amazon Q Business connectors. He loves to build products that make it easy for customers to use privacy-preserving technologies (PETs) such as differential privacy.

Keerthi Kumar Kallur is a Software Development Engineer at AWS. He is part of the Amazon Q Business team and worked on various features with customers. In his spare time, he likes to do outdoor activities such as hiking and sports such as volleyball.

Read More

SK Telecom improves telco-specific Q&A by fine-tuning Anthropic’s Claude models in Amazon Bedrock

SK Telecom improves telco-specific Q&A by fine-tuning Anthropic’s Claude models in Amazon Bedrock

SK Telecom (SKT), South Korea’s leading telecommunications company serving 30 million customers, is at the forefront of AI innovation. In line with its AI Pyramid Strategy, which aims to unlock AI’s potential for anyone, anywhere, anytime, SKT has collaborated with the AWS Generative AI Innovation Center (GenAIIC) Custom Model Program to explore domain-trained models using Amazon Bedrock for telco-specific use cases.

This collaboration aligns with SKT’s vision of using AI expertise and strategic partnerships to develop innovative AI-based products and services. One such initiative focused on developing a custom solution for grounded question answering (Q&A) based on reference documents.

Retrieval Augmented Generation (RAG) is a popular technique for Q&A tasks, offering improved factual accuracy and knowledge grounding. However, for telco use cases, RAG faces challenges such as generating responses that don’t match the preferred tone, style, and manner, and retrieving irrelevant documents, which can lead to inaccurate responses. To address this, SKT and AWS GenAIIC aimed to use model customization to improve Anthropic’s Claude models on Amazon Bedrock in three key areas:

  • Providing concise and informative answers
  • Correctly referencing links from retrieved documents
  • Answering in a tone and style consistent with SKT and similar to ground truth answers

Additionally, the team explored boosting smaller model performance using synthetic data generated by bigger large language models (LLMs) for knowledge distillation and scenarios with limited labeled training data.

Amazon Bedrock is a fully managed service that offers a variety of LLMs and foundation models (FMs), along with capabilities such as Amazon Bedrock Knowledge Bases, Amazon Bedrock Agents, and Amazon Bedrock Guardrails that can expedite many generative AI use cases. It is the only fully managed service that provides the ability to fine-tune Anthropic’s Claude models, and it offers an intuitive and secure way to do so. A fine-tuned Claude model can be deployed using Amazon Bedrock and can seamlessly use its capabilities, for example, Amazon Bedrock Knowledge Bases for telco domain-specific RAG or Amazon Bedrock Agents for agentic usage.

In this post, we share how SKT customizes Anthropic Claude models for telco-specific Q&A regarding technical telecommunication documents of SKT using Amazon Bedrock.

Solution overview

The team explored combinations of prompt optimization, customization (fine-tuning), and data augmentation with synthetic data. This multifaceted approach aimed to maximize the benefits of each technique for the grounded Q&A generation task.

In the following sections, we explore these methods in more detail.

Anthropic’s Claude customization with prompt optimization

Fine-tuning, which is available through Amazon Bedrock for various FMs, including Anthropic’s Claude, allows adaptation of pre-trained language models for specific use cases. It’s particularly effective for tailoring response style and format adherence.

The team first optimized the system prompt, implementing standardized guidelines for answer formatting and document citation based on Anthropic model prompting best practices. Key focus areas included:

  • Clear presentation of system commands
  • Consistent use of code block formatting
  • Context-based tailored responses

This prompt engineering, combined with fine-tuning, yielded substantial improvements:

  • Over 50% increase in ROUGE-3 score
  • Over 25% improvement in ROUGE-L score
  • Over 4% increase in embedding similarity score
  • Significant progress in accurate reference citation

The iterative enhancement process demonstrated cumulative benefits, with prompt updates alone showing 35–40 percent improvements in key metrics, and the final customized model achieving 50–60 percent gains in some metrics.

This progression clearly illustrates the cumulative benefits of model customization through RAG, prompt engineering, and fine-tuning, resulting in a model that significantly outperformed both the baseline and the prompt-updated versions in terms of ROUGE scores and citation accuracy. ROUGE score measures the similarity between ground truths and generated results by computing N-gram word overlaps. The following table summarizes these improvements.

LLM | Prompt update | Fine-tuning | ROUGE-3 | ROUGE-L | Citation accuracy
(the last three columns show relative improvement over baseline)
Anthropic’s Claude 3 Sonnet | – | – | baseline | baseline | baseline
Anthropic’s Claude 3 Sonnet | ✅ | – | +38.30% | +13.4% | +52.94%
Anthropic’s Claude 3 Sonnet | ✅ | ✅ | +58.1% | +26.8% | +70.59%
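
For reference, ROUGE comparisons like the ones in the preceding table can be computed with the open source rouge-score package. The following is a minimal sketch; the reference and candidate strings are placeholders, not SKT data.

    # pip install rouge-score
    from rouge_score import rouge_scorer
    
    reference = "You can change your mobile plan in the app under the Plans menu."    # placeholder ground truth
    candidate = "To change your mobile plan, open the app and go to the Plans menu."  # placeholder model output
    
    # Score N-gram (ROUGE-3) and longest-common-subsequence (ROUGE-L) overlap
    scorer = rouge_scorer.RougeScorer(["rouge3", "rougeL"], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    
    print(scores["rouge3"].fmeasure, scores["rougeL"].fmeasure)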

Synthetic data for fine-tuning

To address the challenge of limited high-quality labeled training data, the team explored synthetic data generation techniques. This approach also facilitates knowledge distillation from larger LLMs to smaller, more targeted models, offering benefits such as lower latency and cost.

The team conducted controlled experiments using:

  • A baseline set of 500 ground truth samples
  • An augmented set with 500 original plus 1,500 synthetic samples
  • A larger original set of 2,000 samples

Synthetic data was generated using Anthropic’s Claude 3 Sonnet, creating new question-answer pairs over the same retrieved documents used in the ground truth examples.

The results were evaluated using both LLM-based comparison and human preference evaluation. Human evaluators blindly ranked model outputs, with scores assigned based on preference (Best: 4, Second: 3, Third: 2, Worst: 1). The following table shows the results of the human preference evaluation scores.

Rank | Model | Cumulative score (best possible: 160)
1 | Fine-tuned with 2,000 original samples | 114
2 | Fine-tuned with 500 original and 1,500 synthetic samples | 112
3 | Fine-tuned with 500 original samples | 85
4 | No fine-tuning (baseline) | 84

Some key findings include:

  • Small training sets (500 samples) showed minimal improvement over baseline
  • Larger training sets (2,000 samples) scored considerably higher
  • Synthetically augmented data performed similarly to equivalent-sized original data

Although having a large volume of domain-specific training data is always ideal, many businesses have limited available datasets. In such scenarios, synthetic data can play a crucial role in place of original data. This demonstrates the potential of synthetic data for model customization.

Conclusion

SK Telecom’s collaboration with AWS GenAIIC showcases the company’s commitment to developing innovative AI solutions for telco challenges. By using Amazon Bedrock to customize Anthropic’s Claude models, SKT has achieved significant performance improvements for telco-specific, Korean language use cases without the need to build models from scratch. The proof of concept demonstrated significant improvements:

  • ~58% increase in ROUGE-3 score
  • ~27% increase in ROUGE-L score
  • Substantial improvement in returning correct reference links

This approach, combined with synthetic data generation techniques, aligns with SKT’s AI Pyramid Strategy, enabling faster testing and development of new approaches. As SKT continues to focus on key areas such as personal AI assistants, AI healthcare, and AI data centers, this collaboration with AWS represents a significant step in their AI evolution and long-term competitiveness in the global AI landscape.

For those interested in working with AWS on similar projects, visit Generative AI Innovation Center.


About the Authors

Sungmin Hong is a Senior Applied Scientist at AWS Generative AI Innovation Center where he helps expedite the variety of use cases of AWS customers. Before joining Amazon, Sungmin was a postdoctoral research fellow at Harvard Medical School. He holds Ph.D. in Computer Science from New York University. Outside of work, Sungmin enjoys hiking, reading and cooking.

Sujeong Cha is a Deep Learning Architect at the AWS Generative AI Innovation Center, where she specializes in model customization and optimization. She has extensive hands-on experience in solving customers’ business use cases by utilizing generative AI as well as traditional AI/ML solutions. Sujeong holds a M.S. degree in Data Science from New York University.

Arijit Ghosh Chowdhury is a Scientist with the AWS Generative AI Innovation Center, where he works on model customization and optimization. In his role, he works on applied research in fine-tuning and model evaluations to enable GenAI for various industries. He has a Master’s degree in Computer Science from the University of Illinois at Urbana Champaign, where his research focused on question answering, search and domain adaptation.

Yiyue Qian is an Applied Scientist II at the AWS Generative AI Innovation Center, where she supports providing generative AI solutions to AWS customers. In this role, she collaborates with a team of experts to develop innovative AI-driven models for AWS customers across various industries. Yiyue holds a Ph.D. in Computer Science from the University of Notre Dame, where her research focused on advanced machine learning and deep learning techniques.

Wei-Chih Chen is a Machine Learning Engineer at the AWS Generative AI Innovation Center, where he works on model customization and optimization for LLMs. He also builds tools to help his team tackle various aspects of the LLM development life cycle, including fine-tuning, benchmarking, and load testing, accelerating the adoption of diverse use cases for AWS customers. He holds an M.S. degree in Computer Science from UC Davis.

Hannah Marlowe is a Senior Manager of Model Customization at the AWS Generative AI Innovation Center. Her team specializes in helping customers develop differentiating Generative AI solutions using their unique and proprietary data to achieve key business outcomes. She holds a Ph.D in Physics from the University of Iowa, with a focus on astronomical X-ray analysis and instrumentation development. Outside of work, she can be found hiking, mountain biking, and skiing around the mountains in Colorado.

Seunghyeon Jeong (Steve) is a team leader of the Platform Application team at SKT. He is responsible for commercializing the Global Intelligence Platform (GIP), which provides AI models and tools. His team is expanding the delivery of models and features to make it easier for internal teams to apply AI, contributing to SKT’s AI Transformation. Before entering the AI space, he was a Product Manager, developing and operating various mobile services such as mobile wallet, fashion streaming, and unified login services for the US and Korea.

Sunwoo Lee (Lois) is the team leader of the Data Construction and Evaluation Team within SK Telecom’s Global AI Tech division. She oversees the design and construction of training data for language models, the model performance evaluation process, and its application to services. Her career has focused on NLP within IT, which is a great fit with her background in Linguistics and Korean language education. Alongside her world-class team, she continues to explore and solve fascinating problems such as how to optimize the design of data for language model training, which tasks and methods to implement for validating language model performance, and the best design of AI-human conversations.

Eric Davis is the vice president of the AI Tech Collaboration Group at SKT. Eric oversees tech collaborations with worldwide tech partners to customize large language models (LLMs) for the telecommunications domain. His teams are responsible for designing and building the datasets to tune LLMs, as well as benchmarking LLMs in general and for the telecommunications domain. Eric holds a Master of Science degree in Computer Science from Carnegie Mellon from the Language Technologies Institute and a Bachelor of Arts in Linguistics and Psychology from the University of California, Los Angeles.

Read More

Scaling Rufus, the Amazon generative AI-powered conversational shopping assistant with over 80,000 AWS Inferentia and AWS Trainium chips, for Prime Day

Scaling Rufus, the Amazon generative AI-powered conversational shopping assistant with over 80,000 AWS Inferentia and AWS Trainium chips, for Prime Day

Amazon Rufus is a shopping assistant experience powered by generative AI. It generates answers using relevant information from across Amazon and the web to help Amazon customers make better, more informed shopping decisions. With Rufus, customers can shop alongside a generative AI-powered expert that knows Amazon’s selection inside and out, and can bring it all together with information from across the web to help shoppers make more informed purchase decisions.

To meet the needs of Amazon customers at scale, Rufus required a low-cost, performant, and highly available infrastructure for inference. The solution needed the capability to serve multi-billion parameter large language models (LLMs) with low latency across the world to service its expansive customer base. Low latency makes sure users have a positive experience chatting with Rufus and can start getting responses in less than a second. To achieve this, the Rufus team is using multiple AWS services and AWS AI chips, AWS Trainium and AWS Inferentia.

Inferentia and Trainium are purpose-built chips developed by AWS that accelerate deep learning workloads with high performance and lower overall costs. With these chips, Rufus achieved costs 4.5 times lower than other evaluated solutions while maintaining low latency for its customers. In this post, we dive into the Rufus inference deployment using AWS chips and how this enabled one of the most demanding events of the year: Amazon Prime Day.

Solution overview

At its core, Rufus is powered by an LLM trained on Amazon’s product catalog and information from across the web. LLM deployment can be challenging, requiring you to balance factors such as model size, model accuracy, and inference performance. Larger models generally have better knowledge and reasoning capabilities but come at a higher cost due to more demanding compute requirements and increasing latency. Rufus would need to be deployed and scale to meet the tremendous demand of peak events like Amazon Prime Day. Considerations for this scale include how well it needs to perform, its environmental impact, and the cost of hosting the solution. To meet these challenges, Rufus used a combination of AWS solutions: Inferentia2 and Trainium, Amazon Elastic Container Service (Amazon ECS), and Application Load Balancer (ALB). In addition, the Rufus team partnered with NVIDIA to power the solution using NVIDIA’s Triton Inference Server, providing capabilities to host the model using AWS chips.

Rufus inference is a Retrieval Augmented Generation (RAG) system with responses enhanced by retrieving additional information such as product information from Amazon search results. These results are based on the customer query, making sure the LLM generates reliable, high-quality, and precise responses.

To make sure Rufus was best positioned for Prime Day, the Rufus team built a heterogeneous inference system using multiple AWS Regions powered by Inferentia2 and Trainium. Building a system across multiple Regions allowed Rufus to benefit in two key areas. First, it provided additional capacity that could be used during times of high demand, and second, it improved the overall resiliency of the system.

The Rufus team was also able to use both Inf2 and Trn1 instance types. Because Inf2 and Trn1 instance types use the same AWS Neuron SDK, the Rufus team was able to use both instances to serve the same Rufus model. The only configuration setting to adjust was the tensor parallelism degree (24 for Inf2, 32 for Trn1). Using Trn1 instances also led to an additional 20% latency reduction and throughput improvement compared to Inf2.

The following diagram illustrates the solution architecture.

To support real-time traffic routing across multiple Regions, Rufus built a novel traffic orchestrator. Amazon CloudWatch supported the underlying monitoring, helping the team adjust the traffic ratio across the different Regions in less than 15 minutes based on the traffic pattern changes. By using this type of orchestration, the Rufus team had the ability to direct requests to other Regions when needed, with a small trade-off of latency to the first token. Due to Rufus’s streaming architecture and the performant AWS network between Regions, the perceived latency was minimal for end-users.

These choices allowed Rufus to scale up over 80,000 Trainium and Inferentia chips across three Regions serving an average of 3 million tokens a minute while maintaining P99 less than 1 second latency to the first response for Prime Day customers. In addition, by using these purpose-built chips, Rufus achieved 54% better performance per watt than other evaluated solutions, which helped the Rufus team meet energy efficiency goals.

Optimizing inference performance and host utilization

Within each Region, the Rufus inference system used Amazon ECS, which managed the underlying Inferentia and Trainium powered instances. By managing the underlying infrastructure, the Rufus team only needed to bring their container and configuration by defining an ECS task. Within each container, an NVIDIA Triton Inference Server with a Python backend is used running vLLM with the Neuron SDK. vLLM is a memory-efficient inference and serving engine that is optimized for high throughput. The Neuron SDK makes it straightforward for teams to adopt AWS chips and supports many different libraries and frameworks such as PyTorch Lightning.

The Neuron SDK provides a straightforward LLM inference solution on Trainium and Inferentia hardware with optimized performance supporting a wide range of transformer-based LLM architectures. To reduce latency, Rufus has collaborated with the AWS Annapurna team to develop various optimizations, such as INT8 (weight only) quantization, continuous batching with vLLM, and resource, compute, and memory bandwidth optimizations in the Neuron compiler and runtime. These optimizations are currently deployed in Rufus production and are available to use in the Neuron SDK 2.18 and onward.

To reduce overall waiting time for customers to start seeing a response from Rufus, the team also developed an inference streaming architecture. With the high compute and memory load needed for LLM inference, the total time it takes to finish generating the full response for a customer query can take multiple seconds. With a streaming architecture, Rufus is able to return the tokens right after they’re generated. This optimization allows the customer to start consuming the response in less than 1 second. In addition, multiple services work together using gRPC connections to intelligently aggregate and enhance the streaming response in real time for customers.

As shown in the following figure, images and links are embedded in the response, which allow customers to engage and continue exploring with Rufus.

Scaling up

Although we have to maintain low latency for the best customer experience, it’s also crucial to scale the service throughput by achieving high hardware resource utilization. High hardware utilization makes sure accelerators don’t sit idle and needlessly increase costs. To optimize the inference system throughput, the team improved both single-host throughput as well as load balancing efficiency.

Load balancing for LLM inference is tricky due to the following challenges. First, a single host can only handle a limited number of concurrent requests. Second, the end-to-end latency to complete one request can vary, spanning many seconds depending on the LLM response length.

To address the challenges, the team optimized throughput by considering both single-host throughput and throughput across many hosts using load balancing.

The team used the least outstanding requests (LOR) routing algorithm from ALB, increasing throughput by five times in comparison to an earlier baseline measurement. This allows each host to have enough time to process in-flight requests and stream back responses using a gRPC connection, without getting overwhelmed by multiple requests received at the same time. Rufus also collaborated with AWS and vLLM teams to improve single-host concurrency using vLLM integration with the Neuron SDK and NVIDIA Triton Inference Server.
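
As a generic illustration (not the Rufus production setup), switching an existing ALB target group from the default round robin algorithm to LOR is a single attribute change; the target group ARN below is a placeholder.

    import boto3
    
    elbv2 = boto3.client("elbv2")
    
    # Switch the target group routing algorithm to least outstanding requests
    elbv2.modify_target_group_attributes(
        TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/llm-inference/0123456789abcdef",
        Attributes=[
            {"Key": "load_balancing.algorithm.type", "Value": "least_outstanding_requests"}
        ]
    )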

Figure 1. ECS tasks scale horizontally hosting the Triton Inference Server and dependencies

With this integration, Rufus was able to benefit from a critical optimization: continuous batching. Continuous batching allows a single host to greatly increase throughput. In addition, continuous batching provides unique capabilities in comparison to other batch techniques, such as static batching. For example, when using static batching, the time to first token (TTFT) increases linearly with the number of requests in one batch. Continuous batching prioritizes the prefill stage for LLM inference, keeping TTFT under control even with more requests running at the same time. This helped Rufus provide a pleasant experience with low latency when generating the first response, and improve the single-host throughput to keep serving costs under control.

Conclusion

In this post, we discussed how Rufus is able to reliably deploy and serve its multi-billion-parameter LLM using the Neuron SDK with Inferentia2 and Trainium chips and AWS services. Rufus continues to evolve with advancements in generative AI and customer feedback, and we encourage you to try Inferentia and Trainium.

Learn more about how we are innovating with generative AI across Amazon.


About the author

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time, he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends.

RJ is an Engineer within Amazon. He builds and optimizes distributed systems for training and works on optimizing systems to reduce latency for ML inference. Outside work, he is exploring using generative AI for building food recipes.

Yang Zhou is a software engineer working on building and optimizing machine learning systems. His recent focus is enhancing the performance and cost efficiency of generative AI inference. Beyond work, he enjoys traveling and has recently discovered a passion for running long distances.

Adam (Hongshen) Zhao is a Software Development Manager at Amazon Stores Foundational AI. In his current role, Adam is leading Rufus Inference team to build GenAI inference optimization solutions and inference system at scale for fast inference at low cost. Outside work, he enjoys traveling with his wife and art creations.

Faqin Zhong is a software engineer at Amazon Stores Foundational AI, working on Large Language Model (LLM) inference infrastructure and optimizations. Passionate about Generative AI technology, Faqin collaborates with leading teams to drive innovations, making LLMs more accessible and impactful, ultimately enhancing customer experiences across diverse applications. Outside of work she enjoys cardio exercise and baking with her son.

Nicolas Trown is an engineer in Amazon Stores Foundational AI. His recent focus is lending his systems expertise across Rufus to aid Rufus Inference team and efficient utilization across the Rufus experience. Outside of work he enjoys spending time with his wife and day trips to nearby coast, Napa, and Sonoma areas.

Bing Yin is a director of science at Amazon Stores Foundational AI. He leads the effort to build LLMs that are specialized for shopping use cases and optimized for inference at Amazon scale. Outside of work, he enjoys running marathon races.

Read More

Exploring alternatives and seamlessly migrating data from Amazon Lookout for Vision

Exploring alternatives and seamlessly migrating data from Amazon Lookout for Vision

Amazon Lookout for Vision, the AWS service designed to create customized artificial intelligence and machine learning (AI/ML) computer vision models for automated quality inspection, will be discontinued on October 31, 2025. New customers will not be able to access the service effective October 10, 2024, but existing customers will be able to use the service as normal until October 31, 2025. AWS will continue to support the service with security updates, bug fixes, and availability enhancements, but we do not plan to introduce new features for this service.

This post discusses some alternatives to Lookout for Vision and how you can export your data from Lookout for Vision to migrate to an alternate solution.

Alternatives to Lookout for Vision

If you’re interested in an alternative to Lookout for Vision, AWS has options for both buyers and builders.

For an out-of-the-box solution, the AWS Partner Network offers solutions from multiple partners. You can browse solutions on the Computer Vision for Quality Insights page in the AWS Solutions Library. These partner solutions include options for software, software as a service (SaaS) applications, managed solutions or custom implementations based on your needs. This approach provides a solution that addresses your use case without requiring you to have expertise in imaging, computer vision, AI, or application development. This typically provides the fastest time to value by taking advantage of the specialized expertise of the AWS Partners. The Solutions Library also has additional guidance to help you build solutions faster.

If you prefer to build your own solution, AWS offers AI tools and services to help you develop an AI-based computer vision inspection solution. Amazon SageMaker provides a set of tools to build, train, and deploy ML models for your use case with fully managed infrastructure, tools, and workflows. In addition to SageMaker enabling you to build your own models, Amazon SageMaker JumpStart offers built-in computer vision algorithms and pre-trained defect detection models that can be fine-tuned to your specific use case. This approach provides you the tools to accelerate your AI development while providing complete flexibility to build a solution that meets your exact requirements and integrates with your existing hardware and software infrastructure. This typically provides the lowest operating costs for a solution.

AWS also offers Amazon Bedrock, a fully managed service that offers a choice of high-performing generative AI foundation models (FMs), including models that can help build a defect detection model running in the cloud. This approach enables you to build a custom solution while using the power of generative AI to handle the custom computer vision model creation and some of the code generation to speed development, eliminating the need for full AI computer vision expertise. Amazon Bedrock provides the ability to analyze images for defects, compare performance of different models, and generate code for custom applications. This alternative is useful for use cases that don’t require low latency processing, providing faster time to value and lower development costs.

Migrating data from Lookout for Vision

To move existing data from Lookout for Vision to use in an alternative implementation, the Lookout for Vision SDK provides the capability to export a dataset from the service to an Amazon Simple Storage Service (Amazon S3) bucket. This procedure exports the training dataset, including manifest and dataset images, for a project to a destination Amazon S3 location that you specify. With the exported dataset and manifest file, you can use the same data that you used to create a Lookout for Vision model to create a model using SageMaker or Amazon Bedrock, or provide it to a partner to incorporate into their customizations for your use case.
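If you would rather script the export yourself, the following sketch uses the boto3 Lookout for Vision client to write a project's training manifest to Amazon S3. The project, bucket, and key names are placeholders, and the images referenced by the source-ref entries in the manifest still need to be copied separately.

import boto3

lookoutvision = boto3.client("lookoutvision")
s3 = boto3.client("s3")

project_name = "my-defect-detection-project"  # placeholder
dest_bucket = "my-export-bucket"              # placeholder
dest_key = f"lookout-for-vision-export/{project_name}/train.manifest"

# Page through the training dataset entries (one JSON line per labeled image).
response = lookoutvision.list_dataset_entries(ProjectName=project_name, DatasetType="train")
entries = list(response["DatasetEntries"])
while response.get("NextToken"):
    response = lookoutvision.list_dataset_entries(
        ProjectName=project_name, DatasetType="train", NextToken=response["NextToken"]
    )
    entries.extend(response["DatasetEntries"])

# Write the entries out as a manifest file in the destination bucket.
s3.put_object(Bucket=dest_bucket, Key=dest_key, Body="\n".join(entries).encode("utf-8"))
print(f"Exported {len(entries)} dataset entries to s3://{dest_bucket}/{dest_key}")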

Summary

Although Lookout for Vision is planned to shut down on October 31, 2025, AWS offers a powerful set of AI/ML services and solutions: SageMaker tools to build custom models, generative AI with Amazon Bedrock for customized inspection and code generation, and a range of offerings from partners in the AWS Partner Network. Export tools enable you to move your data from Lookout for Vision to an alternate solution if you so choose. Explore these options to determine what works best for your specific needs.

For more details, refer to the following resources:


About the Author

Tim Westman is the Product Manager and Go-to-Market Lead for Edge Machine Learning, AWS. Tim leads the Product Management and Business Development for the Edge Machine Learning business at Amazon Web Services. In this role, he works with customers to help build computer vision solutions at the edge to solve complex operational challenges. Tim has more than 30 years of experience in sales, business development and product management roles for leading hardware and software companies, with the last 8 years specializing in AI and computer vision for IoT applications.

Read More

Unlock the knowledge in your Slack workspace with Slack connector for Amazon Q Business

Unlock the knowledge in your Slack workspace with Slack connector for Amazon Q Business

Amazon Q Business is a fully managed, generative AI-powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. Amazon Q Business offers over 40 built-in connectors to popular enterprise applications and document repositories, including Amazon Simple Storage Service (Amazon S3), Salesforce, Google Drive, Microsoft 365, ServiceNow, Gmail, Slack, Atlassian, and Zendesk and can help you create your generative AI solution with minimal configuration.

Nearly 100 thousand organizations use Slack to bring the right people together to securely collaborate with each other. A Slack workspace captures invaluable organizational knowledge in the form of the information that flows through it as the users communicate on it. Hence, it is valuable to make this knowledge quickly and securely available to the users.

In this post, we will demonstrate how to set up Slack connector for Amazon Q Business to sync communications from both public and private channels, reflective of user permissions. We will also guide you through the configurations needed on your Slack workspace. Additionally, you will learn how to configure the Amazon Q Business application and enable user authentication through AWS IAM Identity Center, which is a recommended service for managing a workforce’s access to AWS applications.

Data source overview

Amazon Q Business uses large language models (LLMs) to build a unified solution that connects multiple data sources. Typically, you’d need to use a natural language processing (NLP) technique called Retrieval Augmented Generation (RAG) for this. With RAG, generative AI enhances its responses by incorporating relevant information retrieved from a curated dataset. Amazon Q Business has a built-in managed RAG capability designed to reduce the undifferentiated heavy lifting involved in creating these systems. Typical of a RAG model, Amazon Q Business has two components: A retrieval component that retrieves relevant documents for the user query and a generation component that takes the query and the retrieved documents and then generates an answer to the query using an LLM.

A Slack workspace has multiple elements. It has public channels where workspace users can participate and private channels where only channel members can communicate with each other. Individuals can also directly communicate with each other in one-on-one conversations and in user groups. This communication is in the form of messages and threads of replies, with optional document attachments. Slack workspaces of active organizations are highly dynamic, with the content and collaboration evolving and growing in volume continuously.

The preceding figure shows the process flow of the solution. When you connect Amazon Q Business to a data source (in this case, Slack), what Amazon Q considers and crawls as a document varies by connector. For the Amazon Q Business Slack connector, each message, message attachment, and channel post is considered a single document. Slack conversation threads, which help you create organized discussions around specific messages, are also ingested as a single document each, regardless of the number of participants or messages they contain.

Amazon Q Business crawls access control list (ACL) information attached to a document (user and group information) from your Slack instance. This information can be used to filter chat responses to the user’s document access level. The Slack connector supports token-based authentication. This could be a Slack bot user OAuth token or Slack user OAuth token. See the Slack connector overview to get the list of entities that are extracted, supported filters, sync modes, and file types.

User IDs (_user_id) exist in Slack on messages and channels where access permissions are set. They are mapped from the user emails as the IDs in Slack.

To connect your data source connector to Amazon Q Business, you must give Amazon Q Business an IAM role that has the following permissions:

  • Permission to access the BatchPutDocument and BatchDeleteDocument operations to ingest documents.
  • Permission to access the User Store API operations to ingest user and group access control information from documents.
  • Permission to access your AWS Secrets Manager secret to authenticate your data source connector instance.
  • (Optional) If you’re using Amazon Virtual Private Cloud (Amazon VPC), permission to access your Amazon VPC.
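A minimal sketch of creating the permissions policy for such a role with boto3 follows. The resource ARNs are placeholders, and only the document-ingestion and Secrets Manager statements are shown; copy the exact policy, including the User Store and optional Amazon VPC statements, from the Amazon Q Business documentation.

import json
import boto3

iam = boto3.client("iam")

# Illustrative policy: replace the ARNs with your application, index, and secret ARNs,
# and add the User Store and Amazon VPC statements from the Amazon Q Business docs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDocumentIngestion",
            "Effect": "Allow",
            "Action": ["qbusiness:BatchPutDocument", "qbusiness:BatchDeleteDocument"],
            "Resource": "arn:aws:qbusiness:REGION:ACCOUNT_ID:application/APP_ID/index/INDEX_ID",
        },
        {
            "Sid": "AllowSecretAccess",
            "Effect": "Allow",
            "Action": ["secretsmanager:GetSecretValue"],
            "Resource": "arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:MY_SLACK_SECRET",
        },
    ],
}

iam.create_policy(
    PolicyName="qbusiness-slack-data-source-policy",
    PolicyDocument=json.dumps(policy_document),
)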

Solution overview

In this solution, we will show you how to create a Slack workspace with users who perform various roles within the organization. We will then show you how to configure this workspace to define a set of scopes that are required by the Amazon Q Business Slack connector to index the user communication. This will be followed by the configuration of the Amazon Q Business application and a Slack data source. Based on the configuration, when the data source is synchronized, the connector crawls and indexes the content from the workspace that was created on or before a specific date. The connector also collects and ingests ACL information for each indexed message and document. Thus, the search results of a query made by a user include results only from those documents that the user is authorized to read.

Prerequisites

To build the Amazon Q Business connector for Slack, you need the following:

In Slack:

  • Create a Slack bot user OAuth token or Slack user OAuth token. You can choose either token to connect Amazon Q Business to your Slack data source. See the Slack documentation on access tokens for more information.
  • Note your Slack workspace team ID from your Slack workspace main page URL. For example, https://app.slack.com/client/T0123456789/... where T0123456789 is the team ID.
  • Add the OAuth scopes and read permissions.

In your AWS account:

  • Create an AWS Identity and Access Management (IAM) role for your data source and, if using the Amazon Q Business API, note the ARN of the IAM role.
  • Store your Slack authentication credentials in an AWS Secrets Manager secret and, if using the Amazon Q Business API, note the ARN of the secret.
  • Enable and configure an IAM Identity Center instance. Amazon Q Business integrates with IAM Identity Center as a gateway to manage user access to your Amazon Q Business application. We recommend enabling and pre-configuring an Identity Center instance before you begin to create your Amazon Q Business application. Identity Center is the recommended AWS service for managing human user access to AWS resources. Amazon Q Business supports both organization and account level Identity Center instances. See Setting up for Amazon Q Business for more information.

Configure your Slack workspace

You will create one user for each of the following roles: Administrator, Data scientist, Database administrator, Solutions architect and Generic.

User name Role
arnav_desai Admin
jane_doe Data Scientist
pat_candella DB Admin
mary_major Solutions Architect
john_stiles Generic User

To showcase the ACL propagation, you will create three public channels, #general, #customerwork, and #random, that any member, including the Generic user, can access. You will also create one private channel, #anydepartment-project-private, that can be accessed only by the users arnav_desai, john_stiles, mary_major, and pat_candella.

To create a Slack app:

  1. Navigate to the Slack API Your Apps page and choose Create New App.
  2. Select From scratch. In the next screen, select the workspace to develop your app, and then choose Create an App.
  3. Give the Slack app a name and select a workspace to develop your app in. Then choose Create App.
  4. After you’ve created your app, select it and navigate to Features and choose OAuth & Permissions.
  5. Scroll down to Scopes > User Token Scopes and set the OAuth scope based on the user token scopes in Prerequisites for connecting Amazon Q Business to Slack.

Note: You can configure two types of scopes in a Slack workspace:

  1. Bot token scope: Only the messages to which it has been explicitly added are crawled by the bot token. It is employed to grant restricted access to specific messages only.
  2. User token scope: Only the data shared with the member is accessible to the user token, which acts as a representative of a Slack user.

For this example, so you can search on the conversations between users, you will use the user token scope.

  1. After the OAuth scope for the user token has been set up as described in the Slack prerequisites, scroll up to the section OAuth Tokens for your Workspace, choose Install to Workspace, and then choose Allow.
  2. This will generate a user OAuth token. Copy this token to use when configuring the Amazon Q Business Slack connector.

Configure the data source using the Amazon Q Business Slack connector

In this section, you will create an Amazon Q Business application using the console.

To create an Amazon Q Business application

  1. In the AWS Management Console for Amazon Q Business, choose Create Application.
  2. Enter an Application Name, such as my-slack-workspace. Leave the Service access as the default value, and select AWS IAM Identity Center for Access Management. Enter a new Tag value as required and choose Create to create the Amazon Q Business application.
  3. Leave the default option of Use Native retriever selected for Retrievers, leave Enterprise as the Index provisioning, and leave the default value of 1 as the Number of units. Each unit in the Amazon Q Business index covers 20,000 documents or 200 MB of extracted text (whichever comes first). Choose Next.
  4. Scroll down the list of available connectors and select Slack and then choose Next.

    1. Enter a Data source name and a Description to identify your data source and then enter the Slack workspace team ID to connect with Amazon Q Business.
    2. In the Authentication section, select Create and add a new secret.
    3. On the dialog box that appears, enter a Secret name followed by the User OAuth Slack token that was copied from the Slack workspace.
    4. For the IAM role, select Create a new service role (Recommended).
    5. In Sync scope, choose the following:
      • For select type of content to crawl, select All channels.
      • Select an appropriate date for Select crawl start date.
      • Leave the default value of 50 selected for Maximum file size.
      • You can include specific Messages, such as bot messages or archived messages to sync.
      • Additionally, you can include up to 100 patterns to include or exclude filenames, types, or file paths to sync.

    6. For Sync mode, leave Full sync selected and for the Sync run schedule, select Run on demand.
    7. Leave the field mapping as is and choose Add data source.
    8. On the next page, choose Next.
  5. Add the five users you created earlier, who are part of IAM Identity Center and the Slack workspace, to the Amazon Q Business application. To add users to Identity Center, follow the instructions in Add users to your Identity Center directory. When done, choose Add groups and users and choose Assign.
  6. When a user is added, each user is assigned the default Q Business Pro subscription. For more information on the different pricing tiers, see the Amazon Q Business pricing page.
  7. Choose Create application to finish creating the Amazon Q Business application.
  8. After the application and the data source are created, select the data source and then choose Sync now to start syncing documents from your data source.
  9. The sync process ingests the documents from your Slack workspace according to your selections in the Slack connector configuration in Amazon Q Business. The following screenshot shows the results of a successful sync, indicated by the status of Completed.

Search with Amazon Q Business

Now, you’re ready to make a few queries in Amazon Q Business.

To search using Amazon Q Business:

  1. Navigate to the Web experience settings tab and choose the Deployed URL.
  2. For this demonstration, sign in as pat_candella, who has the role of DB Admin.
  3. Enter the password for pat_candella and choose Sign in.
  4. Upon successful sign-in, you can start using Amazon Q Business.
  5. In the Slack workspace, there is a public channel, the #customerwork channel that all users are members of. The #customerwork Slack channel is being used to communicate about an upcoming customer engagement, as shown in the following figure.
  6. Post the first question to Amazon Q Business.
I am currently using Apache Kafka. Can you list high level steps involved in migration to Amazon MSK?

Note that the response includes citations that refer to the conversation as well as the content of the PDF that was attached to the conversation.

Security and privacy options with Slack data connector

Next, you will create a private channel called #anydepartment-project-private with four out of the five users—arnav_desai, john_stiles, mary_major and pat_candella—and verify that the messages exchanged in a private channel are not available to non-members like jane_doe. Note that after you create a new private channel, you need to manually re-run the sync on the data source.

The following screenshot shows the private Slack channel with four of the five users and the Slack conversation.

Testing security and privacy options with Slack data connector

  1. While signed in as pat_candella, who is part of the private #anydepartment-project-private channel, execute the following query:
    What is Amazon Kendra and which API do I use to query a Kendra index?

  2. Now, sign in as jane_doe, who is not a member of the #anydepartment-project-private channel and execute the same query.
  3. Amazon Q Business prevents jane_doe from getting insights from information within the private channels that they aren’t part of, based on the synced ACL information.

Indexing aggregated Slack threads

Slack organizes conversations into threads, which can involve multiple users and messages. The Amazon Q Business Slack connector treats each thread as a single document, regardless of the number of participants or messages it contains. This approach allows Amazon Q Business to ingest entire conversation threads as individual units, maximizing the amount of data that can be processed within a single index unit. As a result, you can efficiently incorporate more comprehensive conversational context into your Amazon Q Business system.

The figure that follows shows a conversation between pat_candella and jane_doe that includes six messages in a thread. The Slack connector aggregates this message thread as a single message, thus maximizing the use of an index unit.

Because the conversation thread is aggregated as a single document within the Amazon Q Business index, you can ask questions that pertain to a single conversation thread as shown in the following figure.

Troubleshooting the sync process

  • Why isn’t Amazon Q Business answering any of my questions?

If you aren’t getting answers to your questions from Amazon Q Business, verify the following:

  • Permissions – Document ACLs indexed by Amazon Q Business may not allow you to query certain data entities as demonstrated in our example. If this is the case, please reach out to your Slack workspace administrator to make sure that your user has access to required documents and repeat the sync process.
  • Data connector sync – A failed data source sync may prevent the documents from being indexed, meaning that Amazon Q Business would be unable to answer questions about the documents that failed to sync. Please refer to the official documentation to troubleshoot data source connectors.
  • I’m receiving access errors on Amazon Q Business application. What causes this?

See Troubleshooting Amazon Q Business identity and access to diagnose and fix common issues that you might encounter when working with Amazon Q and IAM.

  • How can I sync documents without ACLs?

Amazon Q Business supports crawling ACLs for document security by default. Turning off ACLs and identity crawling is no longer supported. If you want to index documents without ACLs, ensure that the documents are marked as public in your data source. Refer to the official documentation on how the Amazon Q Business Slack connector crawls ACLs.

  • My connector is unable to sync. How can I monitor data source sync progress?

Amazon Q Business provides visibility into the data sync operations. Learn more about this feature in the AWS Machine Learning blog.

Additionally, as the sync process runs, you can monitor progress or debug failures by monitoring the Amazon CloudWatch logs that can be accessed from the Details section of the Sync run history.

A sample query to determine which documents or messages were indexed from a specific Slack channel, C12AB34578, with a logStream of SYNC_RUN_HISTORY_REPORT/xxxxxxxxxxxxxxxxxxxxxxxx would look like the following:

fields LogLevel, DocumentId, DocumentTitle, CrawlAction, ConnectorDocumentStatus.Status as ConnectorDocumentStatus, ErrorMsg, CrawlStatus.Status as CrawlStatus, SyncStatus.Status as SyncStatus, IndexStatus.Status as IndexStatus, SourceUri, Acl, Metadata, HashedDocumentId, @timestamp
| filter @logStream like 'SYNC_RUN_HISTORY_REPORT/xxxxxxxxxxxxxxxxxxxxxxxx' and Metadata like /"stringValue":"C12AB34578"/
| sort @timestamp desc
| limit 10000

Choosing Run query displays the list of messages as the Amazon Q Business Index sync runs, as shown in the following figure.

Cleanup

To delete an Amazon Q Business application, you can use the console or the DeleteApplication API operation.

To delete an Amazon Q Business application using the console

  1. Sign in to the Amazon Q Business console.
  2. Select the Amazon Q Business application that you want to delete.
  3. Choose Delete.
  4. In the dialog box that opens, enter Delete to confirm deletion, and then choose Delete.
  5. You are returned to the service console while your application is deleted. When the deletion process is complete, the console displays a message confirming successful deletion.
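If you prefer the DeleteApplication API route mentioned above, a minimal boto3 sketch looks like the following; the application ID is a placeholder, and you should confirm the parameter name against the current boto3 qbusiness client documentation.

import boto3

qbusiness = boto3.client("qbusiness")

# Placeholder ID: look it up with list_applications() or copy it from the console.
qbusiness.delete_application(applicationId="YOUR_APPLICATION_ID")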

To delete the IAM Identity Center instance, see Delete your IAM Identity Center instance.

Conclusion

This blog post provides a step-by-step guide on setting up the Slack connector for Amazon Q Business, enabling you to seamlessly integrate data from your Slack workspace. Moreover, we highlighted the importance of data privacy and security, demonstrating how the connector adheres to the ACLs within your Slack workspace. This feature helps ensure that private channel conversations remain confidential and inaccessible to individuals who aren’t members of those channels. By following these steps and understanding the built-in security measures, you can use the power of Amazon Q Business while maintaining the integrity and privacy of your Slack workspace.

To learn more about the Amazon Q Business connector for Slack, see Connecting Slack to Amazon Q Business. You can automate all the showcased console operations through the Amazon Q Business APIs, the AWS CLI, and other applicable AWS SDKs.

If you choose to converse with Amazon Q Business using Slack direct messages (DMs) to ask questions and get answers based on company data or to get help creating new content such as email drafts, summarize attached files, and perform tasks, see Deploy a Slack gateway for Amazon Q, your business expert for information about how to bring Amazon Q, your business expert, to users in Slack.


About the Authors

Akshara Shah is a Senior Solutions Architect at Amazon Web Services. She provides strategic technical guidance to help customers design and build cloud solutions. She is currently focused on machine learning and AI technologies.

Roshan Thomas is a Senior Solutions Architect at Amazon Web Services. He is based in Melbourne, Australia and works closely with enterprise customers to accelerate their journey in the cloud. He is passionate about technology and helping customers architect and build solutions on AWS.

Read More

Transitioning off Amazon Lookout for Metrics 

Transitioning off Amazon Lookout for Metrics 

Amazon Lookout for Metrics is a fully managed service that uses machine learning (ML) to detect anomalies in virtually any time-series business or operational metrics—such as revenue performance, purchase transactions, and customer acquisition and retention rates—with no ML experience required. The service, which was launched in March 2021, predates several popular AWS offerings that have anomaly detection, such as Amazon OpenSearch, Amazon CloudWatch, AWS Glue Data Quality, Amazon Redshift ML, and Amazon QuickSight.

After careful consideration, we have made the decision to end support for Amazon Lookout for Metrics, effective October 10, 2025. In addition, as of today, new customer sign-ups are no longer available. Existing customers will be able to use the service as usual until October 10, 2025, when we will end support for Amazon Lookout for Metrics.

In this post, we provide an overview of the alternate AWS services that offer anomaly detection capabilities for customers to consider transitioning their workloads to.

AWS services with anomaly detection capabilities

We recommend customers use Amazon OpenSearch, Amazon CloudWatch, Amazon Redshift ML, Amazon QuickSight, or AWS Glue Data Quality services for their anomaly detection use cases as an alternative to Amazon Lookout for Metrics. These AWS services offer generally available, ML-powered anomaly detection capabilities that can be used out of the box without requiring any ML expertise. Following is a brief overview of each service.

Using Amazon OpenSearch for anomaly detection

Amazon OpenSearch Service features a highly performant, integrated anomaly detection engine that enables the real-time identification of anomalies in streaming data as well as in historical data. You can pair anomaly detection with built-in alerting in OpenSearch to send notifications when there is an anomaly. To start using OpenSearch for anomaly detection, you must first index your data into OpenSearch; from there, you can enable anomaly detection in OpenSearch Dashboards. To learn more, see the documentation.

Using Amazon CloudWatch for anomaly detection

Amazon CloudWatch supports creating anomaly detectors on specific Amazon CloudWatch Log Groups by applying statistical and ML algorithms to CloudWatch metrics. Anomaly detection alarms can be created based on a metric’s expected value. These types of alarms don’t have a static threshold for determining alarm state. Instead, they compare the metric’s value to the expected value based on the anomaly detection model. To start using CloudWatch anomaly detection, you first must ingest data into CloudWatch and then enable anomaly detection on the log group.
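As a sketch of what this looks like with boto3 (the namespace, metric, and alarm names are illustrative), you can create an anomaly detection model for a metric and then alarm on its expected-value band:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Train an anomaly detection model on an example metric.
cloudwatch.put_anomaly_detector(
    SingleMetricAnomalyDetector={
        "Namespace": "MyApp",
        "MetricName": "PurchaseCount",
        "Stat": "Sum",
    }
)

# Alarm when the metric breaks above the expected band (2 standard deviations wide).
cloudwatch.put_metric_alarm(
    AlarmName="purchase-count-anomaly",
    ComparisonOperator="GreaterThanUpperThreshold",
    EvaluationPeriods=2,
    ThresholdMetricId="ad1",
    Metrics=[
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {"Namespace": "MyApp", "MetricName": "PurchaseCount"},
                "Period": 300,
                "Stat": "Sum",
            },
            "ReturnData": True,
        },
        {
            "Id": "ad1",
            "Expression": "ANOMALY_DETECTION_BAND(m1, 2)",
            "Label": "PurchaseCount (expected)",
            "ReturnData": True,
        },
    ],
)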

Using Amazon Redshift ML for anomaly detection

Amazon Redshift ML makes it easy to create, train, and apply machine learning models using familiar SQL commands in Amazon Redshift data warehouses. Anomaly detection can be done on your analytics data through Redshift ML by using the included XGBoost model type, local models, or remote models with Amazon SageMaker. With Redshift ML, you don’t have to be a machine learning expert and you pay only for the training cost of the SageMaker models. There are no additional costs to using Redshift ML for anomaly detection. To learn more, see the documentation.

Using Amazon QuickSight for anomaly detection

Amazon QuickSight is a fast, cloud-powered, business intelligence service that delivers insights to everyone in the organization. As a fully managed service, QuickSight lets customers create and publish interactive dashboards that include ML insights. QuickSight supports a highly performant, integrated anomaly detection engine that uses proven Amazon technology to continuously run ML-powered anomaly detection across millions of metrics to discover hidden trends and outliers in customers’ data. This tool allows customers to get deep insights that are often buried in the aggregates and not scalable with manual analysis. With ML-powered anomaly detection, customers can find outliers in their data without the need for manual analysis, custom development, or ML domain expertise. To learn more, see the documentation.

Using AWS Glue Data Quality for anomaly detection

Data engineers and analysts can use AWS Glue Data Quality to measure and monitor their data. AWS Glue Data Quality uses a rule-based approach that works well for known data patterns and offers ML-based recommendations to help you get started. You can review the recommendations and augment rules from over 25 included data quality rules. To capture unanticipated, less obvious data patterns, you can enable anomaly detection. To use this feature, you can write rules or analyzers and then turn on anomaly detection in AWS Glue ETL. AWS Glue Data Quality collects statistics for columns specified in rules and analyzers, applies ML algorithms to detect anomalies, and generates visual observations explaining the detected issues. Customers can use recommended rules to capture the anomalous patterns and provide feedback to tune the ML model for more accurate detection. To learn more, see the blog post, watch the introductory video, or see the documentation.

Using Amazon SageMaker Canvas for anomaly detection (a beta feature)

The Amazon SageMaker Canvas team plans to provide support for anomaly detection use cases in Amazon SageMaker Canvas. We’ve created an AWS CloudFormation template-based solution to give customers early access to the underlying anomaly detection feature. Customers can use the CloudFormation template to bring up an application stack that receives time-series data from an Amazon Managed Streaming for Apache Kafka (Amazon MSK) streaming source and performs near-real-time anomaly detection in the streaming data. To learn more about the beta offering, see Anomaly detection in streaming time series data with online learning using Amazon Managed Service for Apache Flink.

Frequently asked questions

  1. What is the cutoff point for current customers?

We created an allow list of account IDs that have used Amazon Lookout for Metrics in the last 30 days and have active Amazon Lookout for Metrics resources, including detectors, within the service. If you are an existing customer and are having difficulties using the service, please reach out to us via AWS Customer Support for help.

  2. How will access change before the sunset date?

Current customers can continue to do everything they could previously. The only change is that customers who are not on the allow list cannot create any new resources in Amazon Lookout for Metrics.

  3. What happens to my Amazon Lookout for Metrics resources after the sunset date?

After October 10, 2025, all references to Amazon Lookout for Metrics models and resources will be deleted from Amazon Lookout for Metrics. You will not be able to discover or access Amazon Lookout for Metrics from your AWS Management Console, and applications that call the Amazon Lookout for Metrics API will no longer work.

  4. Will I be billed for Amazon Lookout for Metrics resources remaining in my account after October 10, 2025?

Resources created by Amazon Lookout for Metrics internally will be deleted after October 10, 2025. Customers will be responsible for deleting the input data sources created by them, such as Amazon Simple Storage Service (Amazon S3) buckets, Amazon Redshift clusters, and so on.

  5. How do I delete my Amazon Lookout for Metrics resources?

You can delete your detectors and their associated resources from the Amazon Lookout for Metrics console or by using the DeleteAnomalyDetector API operation.

  6. How can I export anomalies data before deleting the resources?

Anomalies data for each measure can be downloaded for a particular detector by using the Amazon Lookout for Metrics APIs. Exporting Anomalies explains how to connect to a detector, query for anomalies, and download them into a format for later use.
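A minimal sketch of that export flow with boto3 might look like the following; the detector ARN and sensitivity threshold are placeholders, and Exporting Anomalies covers the full pattern, including pagination and the related time series.

import json
import boto3

lookoutmetrics = boto3.client("lookoutmetrics")

detector_arn = "arn:aws:lookoutmetrics:REGION:ACCOUNT_ID:AnomalyDetector:my-detector"  # placeholder

# List anomaly groups detected above a chosen sensitivity threshold.
response = lookoutmetrics.list_anomaly_group_summaries(
    AnomalyDetectorArn=detector_arn,
    SensitivityThreshold=50,
    MaxResults=100,
)

# Save the summaries locally so they survive detector deletion.
with open("anomaly_group_summaries.json", "w") as f:
    json.dump(response["AnomalyGroupSummaryList"], f, default=str, indent=2)

print(f"Exported {len(response['AnomalyGroupSummaryList'])} anomaly group summaries")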

Conclusion

In this blog post, we have outlined methods to create anomaly detectors using alternatives such as Amazon OpenSearch, Amazon CloudWatch, and a CloudFormation template-based solution.

Resource links:


About the Author

Nirmal Kumar is Sr. Product Manager for the Amazon SageMaker service. Committed to broadening access to AI/ML, he steers the development of no-code and low-code ML solutions. Outside work, he enjoys travelling and reading non-fiction.

Read More

Efficient Pre-training of Llama 3-like model architectures using torchtitan on Amazon SageMaker

Efficient Pre-training of Llama 3-like model architectures using torchtitan on Amazon SageMaker

This post is co-written with Less Wright and Wei Feng from Meta

Pre-training large language models (LLMs) is the first step in developing powerful AI systems that can understand and generate human-like text. By exposing models to vast amounts of diverse data, pre-training lays the groundwork for LLMs to learn general language patterns, world knowledge, and reasoning capabilities. This foundational process enables LLMs to perform a wide range of tasks without task-specific training, making them highly versatile and adaptable. Pre-training is essential for building a strong base of knowledge, which can then be refined and specialized through fine-tuning, transfer learning, or few-shot learning approaches.

In this post, we collaborate with the team working on PyTorch at Meta to showcase how the torchtitan library accelerates and simplifies the pre-training of Meta Llama 3-like model architectures. We showcase the key features and capabilities of torchtitan such as FSDP2, torch.compile integration, and FP8 support that optimize the training efficiency. We pre-train a Meta Llama 3 8B model architecture using torchtitan on Amazon SageMaker on p5.48xlarge instances, each equipped with 8 Nvidia H100 GPUs. We demonstrate a 38.23% performance speedup in the training throughput compared to the baseline without applying the optimizations (as shown in the following figure). Amazon SageMaker Model Training reduces the time and cost to train and tune machine learning (ML) models at scale without the need to manage infrastructure. You can take advantage of the highest-performing ML compute infrastructure currently available, and SageMaker can automatically scale infrastructure up or down, from one to thousands of GPUs.

To learn more, you can find our complete code sample on GitHub.

Introduction to torchtitan

torchtitan is a reference architecture for large-scale LLM training using native PyTorch. It aims to showcase PyTorch’s latest distributed training features in a clean, minimal code base. The library is designed to be simple to understand, use, and extend for different training purposes, with minimal changes required to the model code when applying various parallel processing techniques.

torchtitan offers several key features, including FSDP2 with per-parameter sharding, tensor parallel processing, selective layer and operator activation checkpointing, and distributed checkpointing. It supports pre-training of Meta Llama 3-like and Llama 2-like model architectures of various sizes and includes configurations for multiple datasets. The library provides straightforward configuration through TOML files and offers performance monitoring through TensorBoard. In the following sections, we highlight some of the key features of torchtitan.

Transitioning from FSDP1 to FSDP2

FSDP1 and FSDP2 are two approaches to fully sharded data parallel training. FSDP1 uses flat-parameter sharding, which flattens all parameters to 1D, concatenates them into a single tensor, pads it, and then chunks it across workers. This method offers bounded padding and efficient unsharded storage, but might not always allow optimal sharding for individual parameters. FSDP2, on the other hand, represents sharded parameters as DTensors sharded on dim-0, handling each parameter individually. This approach enables easier manipulation of parameters, for example per-weight learning rate, communication-free sharded state dicts, and simpler meta-device initialization. The transition from FSDP1 to FSDP2 reflects a shift towards more flexible and efficient parameter handling in distributed training, addressing limitations of the flat-parameter approach while potentially introducing new optimization opportunities.
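The API-level difference can be sketched as follows. The FSDP1 wrapper class is stable across recent PyTorch releases, while FSDP2's fully_shard entry point (which torchtitan uses internally) has moved between modules in nightlies, so treat the import path and call pattern shown in the comments as indicative rather than exact.

import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

# Launch with: torchrun --nproc_per_node=1 fsdp_sketch.py
dist.init_process_group(backend="gloo")

block = nn.TransformerEncoderLayer(d_model=512, nhead=8)  # stand-in for a transformer block

# FSDP1: flat-parameter sharding. Parameters in the wrapped module are flattened to 1D,
# concatenated, padded, and chunked across ranks as a single tensor.
fsdp1_block = FSDP(block)

# FSDP2: per-parameter sharding. Each parameter becomes a DTensor sharded on dim-0,
# applied per block and then to the root module. In recent nightlies this looks like:
#
#   from torch.distributed._composable.fsdp import fully_shard
#   for layer in model.layers:
#       fully_shard(layer)
#   fully_shard(model)

dist.destroy_process_group()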

torchtitan support for torch.compile

torch.compile is a key feature in PyTorch that significantly boosts model performance with minimal code changes. Through its just-in-time (JIT) compilation, it analyzes and transforms PyTorch code into more efficient kernels. torchtitan supports torch.compile, which delivers substantial speedups, especially for large models and complex architectures, by using techniques like operator fusion, memory planning, and automatic kernel selection. This is enabled by setting compile = true in the model’s TOML configuration file.
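Outside of torchtitan's TOML switch, the underlying PyTorch call is a one-liner; a small standalone example:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

# torch.compile JIT-compiles the module into optimized kernels on first use.
compiled_model = torch.compile(model)

x = torch.randn(8, 1024)
y = compiled_model(x)  # first call triggers compilation; later calls reuse the compiled graph
print(y.shape)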

torchtitan support for FP8 linear operations

torchtitan provides support for FP8 (8-bit floating point) computation that significantly reduces memory footprint and enhances performance in LLM training. FP8 has two formats, E4M3 and E5M2, each optimized for different aspects of training. E4M3 offers higher precision, making it ideal for forward propagation, whereas E5M2, with its larger dynamic range, is better suited for backpropagation. When operating at a lower precision, FP8 has no impact on model accuracy, which we demonstrate by convergence comparisons of the Meta Llama 3 8B pre-training at 2,000 steps. FP8 support on torchtitan is through the torchao library, and we enable FP8 by setting enable_float8_linear = true in the model’s TOML configuration file.

torchtitan support for FP8 all-gather

This feature enables efficient communication of FP8 tensors across multiple GPUs, significantly reducing network bandwidth compared to bfloat16 all-gather operations. FP8 all-gather performs float8 casting before the all-gather operation, reducing the message size. Key to its efficiency is the combined absolute maximum (AMAX) AllReduce, which calculates AMAX for all float8 parameters in a single operation after the optimizer step, avoiding multiple small all-reduces. Similar to FP8 support, this also has no impact on model accuracy, which we demonstrate by convergence comparisons of the Meta Llama 3 8B pre-training.

Pre-training Meta Llama 3 8B with torchtitan on Amazon SageMaker

SageMaker training jobs offer several key advantages that enhance the pre-training process of Meta Llama 3-like model architectures with torchtitan. It provides a fully managed environment that simplifies large-scale distributed training across multiple instances, which is crucial for efficiently pre-training LLMs. SageMaker supports custom containers, which allows seamless integration of the torchtitan library and its dependencies, so all necessary components are readily available.

The built-in distributed training capabilities of SageMaker streamline the setup of multi-GPU and multi-node jobs, reducing the complexity typically associated with such configurations. Additionally, SageMaker integrates with TensorBoard, enabling real-time monitoring and visualization of training metrics and providing valuable insights into the pre-training process. With these features, researchers and practitioners can focus more on model development and optimization rather than infrastructure management, ultimately accelerating the iterative process of creating and refining custom LLMs.

Solution overview

In the following sections, we walk you through how to prepare a custom image with the torchtitan library, then configure a training job estimator function to launch a Meta Llama 3 8B model pre-training with the c4 dataset (Colossal Clean Crawled Corpus) on SageMaker. The c4 dataset is a large-scale web text corpus that has been cleaned and filtered to remove low-quality content. It is frequently used for pre-training language models.

Prerequisites

Before you begin, make sure you have the following requirements in place:

Build the torchtitan custom image

SageMaker BYOC (Bring Your Own Container) allows you to use custom Docker containers to train and deploy ML models. Typically, SageMaker provides built-in algorithms and preconfigured environments for popular ML frameworks. However, there may be cases where you have unique or proprietary algorithms, dependencies, or specific requirements that aren’t available in the built-in options, necessitating custom containers. In this case, we need to use the nightly versions of torch, torchdata, and the torchao package to train with FP8 precision.

We use the Amazon SageMaker Studio Image Build convenience package, which offers a command line interface (CLI) to simplify the process of building custom container images directly from SageMaker Studio notebooks. This tool eliminates the need for manual setup of Docker build environments, streamlining the workflow for data scientists and developers. The CLI automatically manages the underlying AWS services required for image building, such as Amazon Simple Storage Service (Amazon S3), AWS CodeBuild, and Amazon Elastic Container Registry (Amazon ECR), allowing you to focus on your ML tasks rather than infrastructure setup. It offers a simple command interface, handles packaging of Dockerfiles and container code, and provides the resulting image URI for use in SageMaker training and hosting.

Before getting started, make sure your AWS Identity and Access Management (IAM) execution role has the required IAM permissions and policies to use the Image Build CLI. For more information, see Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks. We have provided the Jupyter notebook to build the custom container in the GitHub repo.

Complete the following steps to build the custom image:

  1. Install the Image Build package with the following command:
! pip install sagemaker-studio-image-build
  2. To extend the pre-built image, you can use the included deep learning libraries and settings without having to create an image from scratch:
FROM 763104351884.dkr.ecr.${REGION}.amazonaws.com/pytorch-training:2.3.0-gpu-py311-cu121-ubuntu20.04-sagemaker
  3. Next, specify the libraries to install. You need the nightly versions of torch, torchdata, and the torchao libraries:
RUN pip install --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu121

RUN pip install --pre torchdata --index-url https://download.pytorch.org/whl/nightly

#install torchtitan dependencies
RUN pip install --no-cache-dir \
    "datasets>=2.19.0" \
    "tomli>=1.1.0" \
    tensorboard \
    sentencepiece \
    tiktoken \
    blobfile \
    tabulate

#install torchao package for FP8 support
RUN pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu121
#Display installed packages for reference
RUN pip freeze
  4. Use the Image Build CLI to build and push the image to Amazon ECR:

!sm-docker build --repository torchtitan:latest .

You’re now ready to use this image for pre-training models with torchtitan in SageMaker.

Prepare your dataset (optional)

By default, the torchtitan library uses the allenai/c4 “en” dataset in its training configuration. This is streamed directly during training using the HuggingFaceDataset class. However, you may want to pre-train the Meta Llama 3 models on your own dataset residing in Amazon S3. For this purpose, we have prepared a sample Jupyter notebook to download the allenai/c4 “en” dataset from the Hugging Face dataset hub to an S3 bucket. We use the SageMaker InputDataConfiguration to load the dataset to our training instances in the later section. You can download the dataset with a SageMaker processing job available in the sample Jupyter notebook.
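If you prefer not to run the provided notebook, the following sketch shows the same idea: stream a small sample of the allenai/c4 “en” dataset and stage it in an S3 bucket of your own. The bucket name, prefix, and sample size are placeholders.

import json
from itertools import islice

import boto3
from datasets import load_dataset

bucket = "my-llm-pretraining-data"  # placeholder
prefix = "c4/en/sample"             # placeholder

# Stream the dataset so nothing is downloaded up front.
dataset = load_dataset("allenai/c4", "en", split="train", streaming=True)

# Write a small JSON Lines sample locally, then upload it to Amazon S3.
local_path = "c4-en-sample.jsonl"
with open(local_path, "w") as f:
    for record in islice(dataset, 10000):  # placeholder sample size
        f.write(json.dumps({"text": record["text"]}) + "\n")

boto3.client("s3").upload_file(local_path, bucket, f"{prefix}/c4-en-sample.jsonl")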

Launch your training with torchtitan

Complete the following steps to launch your training:

  1. Import the necessary SageMaker modules and retrieve your work environment details, such as AWS account ID and AWS Region. Make sure to upgrade the SageMaker SDK to the latest version. This might require a SageMaker Studio kernel restart.
%pip install --upgrade "sagemaker>=2.224"
%pip install sagemaker-experiments

import os
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch

role = get_execution_role()
print(f"SageMaker Execution Role: {role}")

client = boto3.client("sts")
account = client.get_caller_identity()["Account"]
print(f"AWS account: {account}")

session = boto3.session.Session()
region = session.region_name
print(f"AWS region: {region}")

sm_boto_client = boto3.client("sagemaker")
sagemaker_session = sagemaker.session.Session(boto_session=session)

default_bucket = sagemaker_session.default_bucket()
print("Default bucket for this session: ", default_bucket)
  2. Clone the torchtitan repository and prepare the training environment. Create a source directory and move the necessary dependencies from the torchtitan directory. This step makes sure you have all the required files for your training process.
git clone https://github.com/pytorch/torchtitan.git
mkdir torchtitan/src
!mv  torchtitan/torchtitan/ torchtitan/train_configs/ torchtitan/train.py  torchtitan/src/
  3. Use the following command to download the Meta Llama 3 tokenizer, which is essential for preprocessing your dataset. Provide your Hugging Face token.
    python torchtitan/src/torchtitan/datasets/download_tokenizer.py --repo_id meta-llama/Meta-Llama-3-8B --tokenizer_path "original" --hf_token="YOUR_HF_TOKEN"

One of the key advantages of torchtitan is its straightforward configuration through TOML files. We modify the Meta Llama-3-8b TOML configuration file to enable monitoring and optimization features.

  4. Enable TensorBoard profiling for better insights into the training process:
[metrics]
log_freq = 10
enable_tensorboard = true
save_tb_folder = "/opt/ml/output/tensorboard"
  5. Enable torch.compile for improved performance:
compile = true
  6. Enable FP8 for more efficient computations:
[float8]
enable_float8_linear = true
  7. Activate FP8 all-gather for optimized distributed training:
enable_fsdp_float8_all_gather= true
precompute_float8_dynamic_scale_for_fsdp = true
  8. To monitor the training progress, set up TensorBoard output. This allows you to visualize the training metrics in real time, providing valuable insights into how the model is learning.
from sagemaker.debugger import TensorBoardOutputConfig

LOG_DIR="/opt/ml/output/tensorboard"
tensorboard_output_config = TensorBoardOutputConfig(
s3_output_path=f"s3://sagemaker-{region}-{account}/tensorboard/",
container_local_output_path=LOG_DIR
)
  9. Set up the data channels for SageMaker training. Create TrainingInput objects that point to the preprocessed dataset in Amazon S3, so your model has access to the training data it needs.
# Update the path below with the S3 dataset path produced by the previous Jupyter notebook (the optional dataset preparation step)
training_dataset_location = "<PATH-TO-DATASET>" 

s3_train_bucket = training_dataset_location

if s3_train_bucket is not None:
    train = sagemaker.inputs.TrainingInput(s3_train_bucket, distribution="FullyReplicated", s3_data_type="S3Prefix")
    data_channels = {"train": train}

  10. With all the pieces in place, you’re ready to create the SageMaker PyTorch estimator. This estimator encapsulates all the configurations, including the custom container, hyperparameters, and resource allocations.

import os

from time import gmtime, strftime

hyperparameters = {
   "config_file": "train_configs/llama3_8b.toml"
}
timestamp = strftime("%Y-%m-%d-%H-%M", gmtime())


estimator = PyTorch(
   base_job_name=f'llama3-8b-{timestamp}',
   entry_point="train.py",
   image_uri="<PATH-TO-IMAGE-URI>",
   source_dir=os.path.join(os.getcwd(), "src"),
   role=role,
   instance_type="ml.p5.48xlarge",
   volume_size=800,
   instance_count=4,
   hyperparameters=hyperparameters,
   use_spot_instances = False,
   sagemaker_session=sagemaker_session,
   tensorboard_output_config=tensorboard_output_config,
   distribution={
   'torch_distributed': {'enabled': True},
   },
  
)
  11. Initiate the model training on SageMaker:

estimator.fit(inputs=data_channels)

Performance numbers

The following table summarizes the performance numbers for the various training runs with different optimizations.

Setup: Llama 3 8B pre-training on 4 x p5.48xlarge instances (32 NVIDIA H100 GPUs)

Configuration    TOML Configuration                                  Throughput (Tokens per Second)    Speedup Over Baseline
Baseline         Default configuration                               6475                              –
torch.compile    compile = true                                      7166                              10.67%
FP8 linear       compile = true                                      8624                              33.19%
                 enable_float8_linear = true
FP8 all-gather   compile = true                                      8950                              38.23%
                 enable_float8_linear = true
                 enable_fsdp_float8_all_gather = true
                 precompute_float8_dynamic_scale_for_fsdp = true

The performance results show clear optimization progress in Meta Llama 3 8B pre-training. torch.compile() delivered a 10.67% speedup, and FP8 linear operations roughly tripled this to 33.19%. Adding FP8 all-gather further increased the speedup to 38.23% over the baseline. This progression demonstrates how combining optimization strategies significantly enhances training efficiency.

The following figure illustrates the stepwise performance gains for Meta Llama 3 8B pre-training on torchtitan with the optimizations.

These optimizations didn’t affect the model’s training quality. The loss curves for all optimization levels, including the baseline, torch.compile(), FP8 linear, and FP8 all-gather configurations, remained consistent throughout the training process, as shown in the following figure.

Loss curves with different configurations

The following table showcases the consistent loss value with the different configurations.

Configuration Loss After 2,000 Steps
Baseline 3.602
Plus torch.compile 3.601
Plus FP8 3.612
Plus FP8 all-gather 3.607

Clean up

After you complete your training experiments, clean up your resources to avoid unnecessary charges. You can start by deleting any unused SageMaker Studio resources. Next, remove the custom container image from Amazon ECR by deleting the repository you created. If you ran the optional step to use your own dataset, delete the S3 bucket where this data was stored.

Conclusion

In this post, we demonstrated how to efficiently pre-train Meta Llama 3 models using the torchtitan library on SageMaker. With torchtitan’s advanced optimizations, including torch.compile, FP8 linear operations, and FP8 all-gather, we achieved a 38.23% acceleration in Meta Llama 3 8B pre-training without compromising the model’s accuracy.

SageMaker simplified the large-scale training by offering seamless integration with custom containers, effortless scaling across multiple instances, built-in support for distributed training, and integration with TensorBoard for real-time monitoring.

Pre-training is a crucial step in developing powerful and adaptable LLMs that can effectively tackle a wide range of tasks and applications. By combining the latest PyTorch distributed training features in torchtitan with the scalability and flexibility of SageMaker, organizations can use their proprietary data and domain expertise to create robust and high-performance AI models. Get started by visiting the GitHub repository for the complete code example and optimize your LLM pre-training workflow.

Special thanks

Special thanks to Gokul Nadathur (Engineering Manager at Meta), Gal Oshri (Principal Product Manager Technical at AWS) and Janosch Woschitz (Sr. ML Solution Architect at AWS) for their support to the launch of this post.


About the Authors

Roy Allela is a Senior AI/ML Specialist Solutions Architect at AWS. He helps AWS customers—from small startups to large enterprises—train and deploy foundation models efficiently on AWS. He is passionate about computational optimization problems and improving the performance of AI workloads.

Kanwaljit Khurmi is a Principal Solutions Architect at Amazon Web Services. He works with AWS customers to provide guidance and technical assistance, helping them improve the value of their solutions when using AWS. Kanwaljit specializes in helping customers with containerized and machine learning applications.

Trevor Harvey is a Principal Specialist in Generative AI at Amazon Web Services (AWS) and an AWS Certified Solutions Architect – Professional. He serves as a voting member of the PyTorch Foundation Governing Board, where he contributes to the strategic advancement of open-source deep learning frameworks. At AWS, Trevor works with customers to design and implement machine learning solutions and leads go-to-market strategies for generative AI services.

Less Wright is an AI/Partner Engineer in PyTorch. He works on Triton/CUDA kernels (Accelerating Dequant with SplitK work decomposition); paged, streaming, and quantized optimizers; and PyTorch Distributed (PyTorch FSDP).

Wei Feng is a Software Engineer on the PyTorch distributed team. He has worked on float8 all-gather for FSDP2, TP (Tensor Parallel) in TorchTitan, and 4-bit quantization for distributed QLoRA in TorchTune. He is also a core maintainer of FSDP2.

Read More

Time series forecasting with Amazon SageMaker AutoML

Time series forecasting with Amazon SageMaker AutoML

Time series forecasting is a critical component in various industries for making informed decisions by predicting future values of time-dependent data. A time series is a sequence of data points recorded at regular time intervals, such as daily sales revenue, hourly temperature readings, or weekly stock market prices. These forecasts are pivotal for anticipating trends and future demands in areas such as product demand, financial markets, energy consumption, and many more.

However, creating accurate and reliable forecasts poses significant challenges because of factors such as seasonality, underlying trends, and external influences that can dramatically impact the data. Additionally, traditional forecasting models often require extensive domain knowledge and manual tuning, which can be time-consuming and complex.

In this blog post, we explore a comprehensive approach to time series forecasting using the Amazon SageMaker AutoMLV2 Software Development Kit (SDK). SageMaker AutoMLV2 is part of the SageMaker Autopilot suite, which automates the end-to-end machine learning workflow from data preparation to model deployment. Throughout this blog post, we use the term AutoML to refer to the SageMaker Autopilot APIs as well as the Amazon SageMaker Canvas AutoML capabilities. We’ll walk through the data preparation process, explain the configuration of the time series forecasting model, detail the inference process, and highlight key aspects of the project. This methodology offers insights into effective strategies for forecasting future data points in a time series, using the power of machine learning without requiring deep expertise in model development. The code for this post can be found in the GitHub repo.

The following diagram depicts the basic AutoMLV2 APIs, all of which are relevant to this post. The diagram shows the workflow for building and deploying models using the AutoMLV2 API. In the training phase, CSV data is uploaded to Amazon S3, followed by the creation of an AutoML job, model creation, and checking for job completion. The deployment phase allows you to choose between real-time inference via an endpoint or batch inference using a scheduled transform job that stores results in S3.

Basic AutoMLV2 APIs

1. Data preparation

The foundation of any machine learning project is data preparation. For this project, we used a synthetic dataset containing time series data of product sales across various locations, focusing on attributes such as product code, location code, timestamp, unit sales, and promotional information. The dataset is available in an Amazon-owned, public Amazon Simple Storage Service (Amazon S3) bucket.

When preparing your CSV file for input into a SageMaker AutoML time series forecasting model, you must ensure that it includes at least three essential columns (as described in the SageMaker AutoML V2 documentation):

  1. Item identifier attribute name: This column contains unique identifiers for each item or entity for which predictions are desired. Each identifier distinguishes the individual data series within the dataset. For example, if you’re forecasting sales for multiple products, each product would have a unique identifier.
  2. Target attribute name: This column represents the numerical values that you want to forecast. These could be sales figures, stock prices, energy usage amounts, and so on. It’s crucial that the data in this column is numeric because the forecasting models predict quantitative outcomes.
  3. Timestamp attribute name: This column indicates the specific times when the observations were recorded. The timestamp is essential for analyzing the data in a chronological context, which is fundamental to time series forecasting. The timestamps should be in a consistent and appropriate format that reflects the regularity of your data (for example, daily or hourly).

All other columns in the dataset are optional and can be used to include additional time-series-related information or metadata about each item. Therefore, your CSV file should have columns named according to the preceding attributes (item identifier, target, and timestamp) as well as any other columns needed to support your use case. For instance, if your dataset is about forecasting product demand, your CSV might look something like this:

  • Product_ID (item identifier): Unique product identifiers.
  • Sales (target): Historical sales data to be forecasted.
  • Date (timestamp): The dates on which sales data was recorded.
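
For illustration, the first few rows of such a CSV file might look like the following (the values are hypothetical and only meant to show the expected layout):

Product_ID,Sales,Date
P001,120,2024-01-01
P001,135,2024-01-08
P002,87,2024-01-01
P002,90,2024-01-08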

The process of splitting the training and test data in this project uses a methodical and time-aware approach to ensure that the integrity of the time series data is maintained. Here’s a detailed overview of the process:

Ensuring timestamp integrity

The first step involves converting the timestamp column of the input dataset to a datetime format using pd.to_datetime. This conversion is crucial for sorting the data chronologically in subsequent steps and for ensuring that operations on the timestamp column are consistent and accurate.

Sorting the data

The sorted dataset is critical for time series forecasting, because it ensures that data is processed in the correct temporal order. The input_data DataFrame is sorted based on three columns: product_code, location_code, and timestamp. This multi-level sort guarantees that the data is organized first by product and location, and then chronologically within each product-location grouping. This organization is essential for the logical partitioning of data into training and test sets based on time.

Splitting into training and test sets

The splitting mechanism is designed to handle each combination of product_code and location_code separately, respecting the unique temporal patterns of each product-location pair. For each group:

  • The initial test set is determined by selecting the last eight timestamps (yellow + green below). This subset represents the most recent data points that are candidates for testing the model’s forecasting ability.
  • The final test set is refined by removing the last four timestamps from the initial test set, resulting in a test dataset that includes the four timestamps immediately preceding the latest data (green below). This strategy ensures the test set is representative of the near-future periods the model is expected to predict, while also leaving out the most recent data to simulate a realistic forecasting scenario.
  • The training set comprises the remaining data points, excluding the last eight timestamps (blue below). This ensures the model is trained on historical data that precedes the test period, avoiding any data leakage and ensuring that the model learns from genuinely past observations.

This process is visualized in the following figure with an arbitrary value on the Y axis and the days of February on the X axis.

Time series data split

The test dataset is used to evaluate the performance of the trained model and compute various loss metrics, such as mean absolute error (MAE) and root-mean-squared error (RMSE). These metrics quantify the model’s accuracy in forecasting the actual values in the test set, providing a clear indication of the model’s quality and its ability to make accurate predictions. The evaluation process is detailed in the “Inference: Batch, real-time, and asynchronous” section, where we discuss the comprehensive approach to model evaluation and conditional model registration based on the computed metrics.
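
As a minimal illustration of how these metrics can be computed once the forecasts are available, the following sketch uses scikit-learn and assumes y_true and y_pred are aligned arrays holding the actual and predicted unit sales for the test period:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error

# y_true: actual unit sales from the test set; y_pred: forecasts aligned to the same rows
mae = mean_absolute_error(y_true, y_pred)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(f"MAE: {mae:.2f}, RMSE: {rmse:.2f}")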

Creating and saving the datasets

After the data for each product-location group is categorized into training and test sets, the subsets are aggregated into comprehensive training and test DataFrames using pd.concat. This aggregation step combines the individual DataFrames stored in train_dfs and test_dfs lists into two unified DataFrames:

  • train_df for training data
  • test_df for testing data

Finally, the DataFrames are saved to CSV files (train.csv for training data and test.csv for test data), making them accessible for model training and evaluation processes. This saving step not only facilitates a clear separation of data for modeling purposes but also enables reproducibility and sharing of the prepared datasets.
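
Putting the preceding steps together, the following is a condensed sketch of the data preparation flow, using the column names from this post’s dataset (the exact code in the GitHub repo may differ):

import pandas as pd

# Ensure timestamp integrity and sort chronologically within each product-location pair
input_data['timestamp'] = pd.to_datetime(input_data['timestamp'])
input_data = input_data.sort_values(['product_code', 'location_code', 'timestamp'])

train_dfs, test_dfs = [], []
for (product, location), group in input_data.groupby(['product_code', 'location_code']):
    # The last eight timestamps are test candidates; the final test set keeps the four
    # timestamps immediately preceding the most recent four
    last_eight = group.tail(8)
    test_dfs.append(last_eight.head(4))
    # The training data excludes the last eight timestamps to avoid leakage
    train_dfs.append(group.iloc[:-8])

train_df = pd.concat(train_dfs)
test_df = pd.concat(test_dfs)
train_df.to_csv('train.csv', index=False)
test_df.to_csv('test.csv', index=False)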

Summary

This data preparation strategy meticulously respects the chronological nature of time series data and ensures that the training and test sets are appropriately aligned with real-world forecasting scenarios. By splitting the data based on the last known timestamps and carefully excluding the most recent periods from the training set, the approach mimics the challenge of predicting future values based on past observations, thereby setting the stage for a robust evaluation of the forecasting model’s performance.

2. Training a model with AutoMLV2

SageMaker AutoMLV2 reduces the resources needed to train, tune, and deploy machine learning models by automating the heavy lifting involved in model development. It provides a straightforward way to create high-quality models tailored to your specific problem type, be it classification, regression, or forecasting, among others. In this section, we delve into the steps to train a time series forecasting model with AutoMLV2.

Step 1: Define the time series forecasting configuration

The first step involves defining the problem configuration. This configuration guides AutoMLV2 in understanding the nature of your problem and the type of solution it should seek, whether it involves classification, regression, time-series classification, computer vision, natural language processing, or fine-tuning of large language models. This versatility is crucial because it allows AutoMLV2 to adapt its approach based on the specific requirements and complexities of the task at hand. For time series forecasting, the configuration includes details such as the frequency of forecasts, the horizon over which predictions are needed, and any specific quantiles or probabilistic forecasts. Configuring the AutoMLV2 job for time series forecasting involves specifying parameters that would best use the historical sales data to predict future sales.

The AutoMLTimeSeriesForecastingConfig is a configuration object in the SageMaker AutoMLV2 SDK designed specifically for setting up time series forecasting tasks. Each argument provided to this configuration object tailors the AutoML job to the specifics of your time series data and the forecasting objectives.

# Classes used throughout this post from the SageMaker Python SDK
from sagemaker.automl.automlv2 import AutoMLDataChannel, AutoMLTimeSeriesForecastingConfig, AutoMLV2

time_series_config = AutoMLTimeSeriesForecastingConfig(
    forecast_frequency='W',
    forecast_horizon=4,
    item_identifier_attribute_name='product_code',
    target_attribute_name='unit_sales',
    timestamp_attribute_name='timestamp',
    ...
)

The following is a detailed explanation of each configuration argument used in your time series configuration:

  • forecast_frequency
    • Description: Specifies how often predictions should be made.
    • Value ‘W’: Indicates that forecasts are expected on a weekly basis. The model will be trained to understand and predict data as a sequence of weekly observations. Valid intervals are an integer followed by Y (year), M (month), W (week), D (day), H (hour), and min (minute). For example, 1D indicates every day and 15min indicates every 15 minutes. The value of a frequency must not overlap with the next larger frequency. For example, you must use a frequency of 1H instead of 60min.
  • forecast_horizon
    • Description: Defines the number of future time-steps the model should predict.
    • Value 4: The model will forecast four time-steps into the future. Given the weekly frequency, this means the model will predict the next four weeks of data from the last known data point.
  • forecast_quantiles
    • Description: Specifies the quantiles at which to generate probabilistic forecasts.
    • Values [p50,p60,p70,p80,p90]: These quantiles represent the 50th, 60th, 70th, 80th, and 90th percentiles of the forecast distribution, providing a range of possible outcomes and capturing forecast uncertainty. For instance, the p50 quantile (median) might be used as a central forecast, while the p90 quantile provides a higher-end forecast, where 90% of the actual data is expected to fall below the forecast, accounting for potential variability.
  • filling
    • Description: Defines how missing data should be handled before training; specifying filling strategies for different scenarios and columns.
    • Value filling_config: This should be a dictionary detailing how to fill missing values in your dataset, such as filling missing promotional data with zeros or specific columns with predefined values. This ensures the model has a complete dataset to learn from, improving its ability to make accurate forecasts.
  • item_identifier_attribute_name
    • Description: Specifies the column that uniquely identifies each time series in the dataset.
    • Value ‘product_code’: This setting indicates that each unique product code represents a distinct time series. The model will treat data for each product code as a separate forecasting problem.
  • target_attribute_name
    • Description: The name of the column in your dataset that contains the values you want to predict.
    • Value unit_sales: Designates the unit_sales column as the target variable for forecasts, meaning the model will be trained to predict future sales figures.
  • timestamp_attribute_name
    • Description: The name of the column indicating the time point for each observation.
    • Value ‘timestamp’: Specifies that the timestamp column contains the temporal information necessary for modeling the time series.
  • grouping_attribute_names
    • Description: A list of column names that, in combination with the item identifier, can be used to create composite keys for forecasting.
    • Value [‘location_code’]: This setting means that forecasts will be generated for each combination of product_code and location_code. It allows the model to account for location-specific trends and patterns in sales data.
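
Putting these arguments together, a fuller version of the configuration shown earlier might look like the following sketch. The column name and filling methods in filling_config are assumptions for illustration; adjust them to the missing-value patterns in your own dataset.

filling_config = {
    # Assumption: fill gaps in a promotional column with zeros
    'promotion_applied': {'middlefill': 'zero', 'backfill': 'zero', 'futurefill': 'zero'},
}

time_series_config = AutoMLTimeSeriesForecastingConfig(
    forecast_frequency='W',
    forecast_horizon=4,
    forecast_quantiles=['p50', 'p60', 'p70', 'p80', 'p90'],
    filling=filling_config,
    item_identifier_attribute_name='product_code',
    target_attribute_name='unit_sales',
    timestamp_attribute_name='timestamp',
    grouping_attribute_names=['location_code'],
)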

The configuration provided instructs the SageMaker AutoML to train a model capable of weekly sales forecasts for each product and location, accounting for uncertainty with quantile forecasts, handling missing data, and recognizing each product-location pair as a unique series. This detailed setup aims to optimize the forecasting model’s relevance and accuracy for your specific business context and data characteristics.

Step 2: Initialize the AutoMLV2 job

Next, initialize the AutoMLV2 job by specifying the problem configuration, the AWS role with permissions, the SageMaker session, a base job name for identification, and the output path where the model artifacts will be stored.

automl_sm_job = AutoMLV2(
    problem_config=time_series_config,
    role=role,
    sagemaker_session=sagemaker_session,
    base_job_name='time-series-forecasting-job',
    output_path=f's3://{bucket}/{prefix}/output'
)

Step 3: Fit the model

To start the training process, call the fit method on your AutoMLV2 job object. This method requires specifying the input data’s location in Amazon S3 and whether SageMaker should wait for the job to complete before proceeding further. During this step, AutoMLV2 will automatically pre-process your data, select algorithms, train multiple models, and tune them to find the best solution.

automl_sm_job.fit(
    inputs=[AutoMLDataChannel(s3_data_type='S3Prefix', s3_uri=train_uri, channel_type='training')],
    wait=True,
    logs=True
)

Please note that model fitting may take several hours, depending on the size of your dataset and your compute budget. A larger compute budget allows for more powerful instance types, which can accelerate the training process. If you’re not running this code as part of the provided SageMaker notebook (which runs the code cells in the correct order), you will need to add custom code that monitors the training status before retrieving and deploying the best model, as sketched below.
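
A minimal sketch of such a monitoring loop using the boto3 DescribeAutoMLJobV2 API follows; replace the placeholder job name with the name of your own AutoML job:

import time
import boto3

sm_client = boto3.client('sagemaker')
job_name = 'your-auto-ml-job-name'  # replace with your AutoMLV2 job name

# Poll the job status until it reaches a terminal state
while True:
    status = sm_client.describe_auto_ml_job_v2(AutoMLJobName=job_name)['AutoMLJobStatus']
    print(f"AutoML job status: {status}")
    if status in ('Completed', 'Failed', 'Stopped'):
        break
    time.sleep(60)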

3. Deploying a model with AutoMLV2

Deploying a machine learning model into production is a critical step in your machine learning workflow, enabling your applications to make predictions from new data. SageMaker AutoMLV2 not only helps build and tune your models but also provides a seamless deployment experience. In this section, we’ll guide you through deploying your best model from an AutoMLV2 job as a fully managed endpoint in SageMaker.

Step 1: Identify the best model and extract name

After your AutoMLV2 job completes, the first step in the deployment process is to identify the best performing model, also known as the best candidate. This can be achieved by using the best_candidate method of your AutoML job object. You can either use this method immediately after fitting the AutoML job or specify the job name explicitly if you’re operating on a previously completed AutoML job.

# Option 1: Directly after fitting the AutoML job
best_candidate = automl_sm_job.best_candidate()

# Option 2: Specifying the job name directly
best_candidate = automl_sm_job.best_candidate(job_name='your-auto-ml-job-name')

best_candidate_name = best_candidate['CandidateName']

Step 2: Create a SageMaker model

Before deploying, create a SageMaker model from the best candidate. This model acts as a container for the artifacts and metadata necessary to serve predictions. Use the create_model method of the AutoML job object to complete this step.

endpoint_name = f"ep-{best_candidate_name}-automl-ts"

# Create a SageMaker model from the best candidate
automl_sm_model = automl_sm_job.create_model(name=best_candidate_name, candidate=best_candidate)

4. Inference: Batch, real-time, and asynchronous

For deploying the trained model, we explore batch, real-time, and asynchronous inference methods to cater to different use cases.

The following figure is a decision tree to help you decide what type of endpoint to use. The diagram outlines a decision-making process for selecting between batch, asynchronous, or real-time inference endpoints. Starting with the need for immediate responses, it guides you through considerations like the size of the payload and the computational complexity of the model. Depending on these factors, you can choose a faster option with lower computational requirements or a slower batch process for larger datasets.

Decision tree for selecting between batch, asynchronous, or real-time inference endpoints

Batch inference using SageMaker pipelines

  • Usage: Ideal for generating forecasts in bulk, such as monthly sales predictions across all products and locations.
  • Process: We used SageMaker’s batch transform feature to process a large dataset of historical sales data, outputting forecasts for the specified horizon.

The inference pipeline used for batch inference demonstrates a comprehensive approach to deploying, evaluating, and conditionally registering a machine learning model for time series forecasting using SageMaker. This pipeline is structured to ensure a seamless flow from data preprocessing, through model inference, to post-inference evaluation and conditional model registration. Here’s a detailed breakdown of its construction:

  • Batch transform step
    • Transformer Initialization: A Transformer object is created, specifying the model to use for batch inference, the compute resources to allocate, and the output path for the results.
    • Transform step creation: This step invokes the transformer to perform batch inference on the specified input data. The step is configured to handle data in CSV format, a common choice for structured time series data.
  • Evaluation step
    • Processor setup: Initializes an SKLearn processor with the specified role, framework version, instance count, and type. This processor is used for the evaluation of the model’s performance.
    • Evaluation processing: Configures the processing step to use the SKLearn processor, taking the batch transform output and test data as inputs. The processing script (evaluation.py) is specified here, which will compute evaluation metrics based on the model’s predictions and the true labels.
    • Evaluation strategy: We adopted a comprehensive evaluation approach, using metrics like mean absolute error (MAE) and root-mean-squared error (RMSE) to quantify the model’s accuracy and adjusting the forecasting configuration based on these insights.
    • Outputs and property files: The evaluation step produces an output file (evaluation_metrics.json) that contains the computed metrics. This file is stored in Amazon S3 and registered as a property file for later access in the pipeline.
  • Conditional model registration
    • Model metrics setup: Defines the model metrics to be associated with the model package, including statistics and explainability reports sourced from specified Amazon S3 URIs.
    • Model registration: Prepares for model registration by specifying content types, inference and transform instance types, model package group name, approval status, and model metrics.
    • Conditional registration step: Implements a condition based on the evaluation metrics (for example, MAE). If the condition (for example, MAE is greater than or equal to threshold) is met, the model is registered; otherwise, the pipeline concludes without model registration.
  • Pipeline creation and runtime
    • Pipeline definition: Assembles the pipeline by naming it and specifying the sequence of steps to run: batch transform, evaluation, and conditional registration.
    • Pipeline upserting and runtime: The pipeline.upsert method is called to create or update the pipeline based on the provided definition, and pipeline.start() runs the pipeline.

The following figure is an example of the SageMaker Pipeline directed acyclic graph (DAG).

SageMaker Pipeline directed acyclic graph (DAG) for this problem.

This pipeline effectively integrates several stages of the machine learning lifecycle into a cohesive workflow, showcasing how Amazon SageMaker can be used to automate the process of model deployment, evaluation, and conditional registration based on performance metrics. By encapsulating these steps within a single pipeline, the approach enhances efficiency, ensures consistency in model evaluation, and streamlines the model registration process—all while maintaining the flexibility to adapt to different models and evaluation criteria.
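
The following is a condensed sketch of how these steps can be wired together with the SageMaker Python SDK. It is not the full pipeline from the GitHub repo: the step names, the evaluation script name (evaluation.py), the JSON path of the MAE metric, the MAE threshold, the model package group name, the test data location (test_uri), and the instance types are all assumptions, and this sketch registers the model only when MAE falls at or below the threshold.

from sagemaker.inputs import TransformInput
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.transformer import Transformer
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.step_collections import RegisterModel
from sagemaker.workflow.steps import ProcessingStep, TransformStep

# Batch transform step: run inference with the SageMaker model created from the best candidate
# (assumes a SageMaker model with this name already exists in your account)
transformer = Transformer(
    model_name=best_candidate_name,
    instance_count=1,
    instance_type='ml.m5.xlarge',
    output_path=f's3://{bucket}/{prefix}/transform-output',
)
transform_step = TransformStep(
    name='BatchTransform',
    transformer=transformer,
    inputs=TransformInput(data=test_uri, content_type='text/csv', split_type='Line'),
)

# Evaluation step: compute metrics such as MAE/RMSE and expose them through a property file
evaluation_report = PropertyFile(name='EvaluationReport', output_name='evaluation', path='evaluation_metrics.json')
sklearn_processor = SKLearnProcessor(framework_version='1.2-1', role=role, instance_type='ml.m5.xlarge', instance_count=1)
evaluation_step = ProcessingStep(
    name='EvaluateForecasts',
    processor=sklearn_processor,
    inputs=[
        ProcessingInput(source=transform_step.properties.TransformOutput.S3OutputPath, destination='/opt/ml/processing/predictions'),
        ProcessingInput(source=test_uri, destination='/opt/ml/processing/test'),
    ],
    outputs=[ProcessingOutput(output_name='evaluation', source='/opt/ml/processing/evaluation')],
    code='evaluation.py',  # assumed name of the evaluation script
    property_files=[evaluation_report],
)

# Conditional registration: register the model only if MAE is at or below an assumed threshold
mae_value = JsonGet(step_name=evaluation_step.name, property_file=evaluation_report, json_path='metrics.mae.value')  # assumed JSON path
register_step = RegisterModel(
    name='RegisterForecastModel',
    model=automl_sm_model,
    content_types=['text/csv'],
    response_types=['text/csv'],
    inference_instances=['ml.m5.xlarge'],
    transform_instances=['ml.m5.xlarge'],
    model_package_group_name='ts-forecasting-models',  # assumed group name
    approval_status='PendingManualApproval',
)
condition_step = ConditionStep(
    name='CheckMAE',
    conditions=[ConditionLessThanOrEqualTo(left=mae_value, right=100.0)],  # assumed threshold
    if_steps=[register_step],
    else_steps=[],
)

pipeline = Pipeline(name='ts-forecasting-inference-pipeline', steps=[transform_step, evaluation_step, condition_step])
pipeline.upsert(role_arn=role)
execution = pipeline.start()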

Inference with an Amazon SageMaker endpoint in (near) real time

But what if you want to run inference in real-time or asynchronously? SageMaker real-time endpoint inference offers the capability to deliver immediate predictions from deployed machine learning models, crucial for scenarios demanding quick decision making. When an application sends a request to a SageMaker real-time endpoint, it processes the data in real time and returns the prediction almost immediately. This setup is optimal for use cases that require near-instant responses, such as personalized content delivery, immediate fraud detection, and live anomaly detection.

  • Usage: Suited for on-demand forecasts, such as predicting next week’s sales for a specific product at a particular location.
  • Process: We deployed the model as a SageMaker endpoint, allowing us to make real-time predictions by sending requests with the required input data.

Deployment involves specifying the number of instances and the instance type to serve predictions. This step creates an HTTPS endpoint that your applications can invoke to perform real-time predictions.

# Deploy the model to a SageMaker endpoint
predictor = automl_sm_model.deploy(initial_instance_count=1, endpoint_name=endpoint_name, instance_type='ml.m5.xlarge')

The deployment process is asynchronous, and SageMaker takes care of provisioning the necessary infrastructure, deploying your model, and ensuring the endpoint’s availability and scalability. After the model is deployed, your applications can start sending prediction requests to the endpoint URL provided by SageMaker.
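
As a sketch of what such a request could look like with boto3, the following sends a small CSV payload to the endpoint; the exact columns must match the schema the model was trained on, so treat the payload below as an assumption:

import boto3

runtime = boto3.client('sagemaker-runtime')

# A small CSV payload with the same columns used during training (assumed layout)
payload = (
    "product_code,location_code,timestamp,unit_sales\n"
    "P001,L001,2024-01-01,120\n"
    "P001,L001,2024-01-08,135\n"
)

response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType='text/csv',
    Body=payload,
)
print(response['Body'].read().decode('utf-8'))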

While real-time inference is suitable for many use cases, there are scenarios where a slightly relaxed latency requirement can be beneficial. SageMaker Asynchronous Inference provides a queue-based system that efficiently handles inference requests, scaling resources as needed to maintain performance. This approach is particularly useful for applications that require processing of larger datasets or complex models, where the immediate response is not as critical.

  • Usage: Examples include generating detailed reports from large datasets, performing complex calculations that require significant computational time, or processing high-resolution images or lengthy audio files. This flexibility makes it a complementary option to real-time inference, especially for businesses that face fluctuating demand and seek to maintain a balance between performance and cost.
  • Process: The process of using asynchronous inference is straightforward yet powerful. Users submit their inference requests to a queue, from which SageMaker processes them sequentially. This queue-based system allows SageMaker to efficiently manage and scale resources according to the current workload, ensuring that each inference request is handled as promptly as possible.
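
A minimal sketch of submitting a request to an asynchronous endpoint’s queue with boto3 follows. It assumes the endpoint was created with an asynchronous inference configuration (for example, by passing an AsyncInferenceConfig when deploying the model); the endpoint name and S3 locations are placeholders:

import boto3

runtime = boto3.client('sagemaker-runtime')

# The input file is uploaded to S3 ahead of time; the request itself only carries its location
response = runtime.invoke_endpoint_async(
    EndpointName='your-async-endpoint-name',
    InputLocation=f's3://{bucket}/{prefix}/async-input/request.csv',
    ContentType='text/csv',
)

# The request is queued; the forecast is written to this S3 location when processing completes
print(response['OutputLocation'])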

Clean up

To avoid incurring unnecessary charges and to tidy up resources after completing the experiments or running the demos described in this post, follow these steps to delete all deployed resources:

  1. Delete the SageMaker endpoints: To delete any deployed real-time or asynchronous endpoints, use the SageMaker console or the AWS SDK. This step is crucial as endpoints can accrue significant charges if left running.
  2. Delete the SageMaker Pipeline: If you have set up a SageMaker Pipeline, delete it to ensure that there are no residual executions that might incur costs.
  3. Delete S3 artifacts: Remove all artifacts stored in your S3 buckets that were used for training, storing model artifacts, or logging. Ensure you delete only the resources related to this project to avoid data loss.
  4. Clean up any additional resources: Depending on your specific implementation and additional setup modifications, there may be other resources to consider, such as roles or logs. Check your AWS Management Console for any resources that were created and delete them if they are no longer needed.
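
The following is a hedged sketch of these cleanup steps using boto3; the endpoint, pipeline, bucket, and prefix names are assumptions and should match the resources you actually created:

import boto3

sm_client = boto3.client('sagemaker')
s3 = boto3.resource('s3')

# 1. Delete the real-time endpoint and its configuration
#    (the SDK names the endpoint configuration after the endpoint by default)
sm_client.delete_endpoint(EndpointName=endpoint_name)
sm_client.delete_endpoint_config(EndpointConfigName=endpoint_name)

# 2. Delete the SageMaker pipeline (name assumed from the earlier sketch)
sm_client.delete_pipeline(PipelineName='ts-forecasting-inference-pipeline')

# 3. Delete the S3 artifacts created for this project (scope the prefix carefully)
s3.Bucket(bucket).objects.filter(Prefix=prefix).delete()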

Conclusion

This post illustrates the effectiveness of Amazon SageMaker AutoMLV2 for time series forecasting. By carefully preparing the data, thoughtfully configuring the model, and using both batch and real-time inference, we demonstrated a robust methodology for predicting future sales. This approach not only saves time and resources but also empowers businesses to make data-driven decisions with confidence.

If you’re inspired by the possibilities of time series forecasting and want to experiment further, consider exploring the SageMaker Canvas UI. SageMaker Canvas provides a user-friendly interface that simplifies the process of building and deploying machine learning models, even if you don’t have extensive coding experience.

Visit the SageMaker Canvas page to learn more about its capabilities and how it can help you streamline your forecasting projects. Begin your journey towards more intuitive and accessible machine learning solutions today!


About the Authors

Nick McCarthy is a Senior Machine Learning Engineer at AWS, based in London. He has worked with AWS clients across various industries including healthcare, finance, sports, telecoms and energy to accelerate their business outcomes through the use of AI/ML. Outside of work he loves to spend time travelling, trying new cuisines and reading about science and technology. Nick has a Bachelor’s degree in Astrophysics and a Master’s degree in Machine Learning.

Davide Gallitelli is a Senior Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.

Read More