Easily deploy and manage hundreds of LoRA adapters with SageMaker efficient multi-adapter inference

The new efficient multi-adapter inference feature of Amazon SageMaker unlocks exciting possibilities for customers using fine-tuned models. This capability integrates with SageMaker inference components to allow you to deploy and manage hundreds of fine-tuned Low-Rank Adaptation (LoRA) adapters through SageMaker APIs. Multi-adapter inference handles the registration of fine-tuned adapters with a base model and dynamically loads them from GPU memory, CPU memory, or local disk in milliseconds, based on the request. This feature provides atomic operations for adding, deleting, or updating individual adapters across a SageMaker endpoint’s running instances without affecting performance or requiring a redeployment of the endpoint.

The efficiency of LoRA adapters allows for a wide range of hyper-personalization and task-based customization that was previously too resource-intensive and costly to be feasible. For example, marketing and software as a service (SaaS) companies can personalize artificial intelligence and machine learning (AI/ML) applications using each customer's images, art style, communication style, and documents to create campaigns and artifacts that represent them. Similarly, enterprises in industries like healthcare or financial services can reuse a common base model with task-based adapters to efficiently tackle a variety of specialized AI tasks. Whether it's diagnosing medical conditions, assessing loan applications, understanding complex documents, or detecting financial fraud, you can simply swap in the appropriate fine-tuned LoRA adapter for each use case at runtime. This flexibility and efficiency unlock new opportunities to deploy powerful, customized AI across your organization. With this new efficient multi-adapter inference capability, SageMaker reduces the complexity of deploying and managing the adapters that power these applications.

In this post, we show how to use the new efficient multi-adapter inference feature in SageMaker.

Problem statement

You can use powerful pre-trained foundation models (FMs) without needing to build your own complex models from scratch. However, these general-purpose models might not always align with your specific needs or your unique data. To make these models work for you, you can use Parameter-Efficient Fine-Tuning (PEFT) techniques like LoRA.

The benefit of PEFT and LoRA is that they let you fine-tune models quickly and cost-effectively. These methods are based on the idea that only a small part of a large FM needs updating to adapt it to new tasks or domains. By freezing the base model and updating only a few extra adapter layers, you can fine-tune models much faster and cheaper while still maintaining high performance. This flexibility means you can quickly customize pre-trained models at low cost to meet different requirements. At inference time, the LoRA adapters can be loaded dynamically to augment the results from the base model. You can create a library of task-specific, customer-specific, or domain-specific adapters that can be swapped in as needed for maximum efficiency. This allows you to build AI tailored exactly to your business.
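To illustrate how lightweight this is, the following is a minimal sketch that attaches LoRA adapters to a frozen base model using the Hugging Face PEFT library. The rank, scaling factor, and target module names are typical values for Llama-style models and are assumptions for illustration, not settings taken from this post.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Only the small low-rank matrices added to the attention projections are trained;
# the base model weights stay frozen.
lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices (assumed value)
    lora_alpha=32,                         # scaling factor applied to the update (assumed value)
    target_modules=["q_proj", "v_proj"],   # assumed attention projection names for Llama-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()    # typically well under 1% of the total parameters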

Although fine-tuned LoRA adapters can effectively address targeted use cases, managing these adapters can be challenging at scale. You can use open-source libraries, or the AWS managed Large Model Inference (LMI) deep learning container (DLC) to dynamically load and unload adapter weights. Current deployment methods use fixed adapters or Amazon Simple Storage Service (Amazon S3) locations, making post-deployment changes impossible without updating the model endpoint and adding unnecessary complexity. This deployment method also makes it impossible to collect per-adapter metrics, making the evaluation of their health and performance a challenge.

Solution overview

In this solution, we show how to use efficient multi-adapter inference in SageMaker to host and manage multiple LoRA adapters with a common base model. The approach is based on an existing SageMaker capability, inference components, where you can have multiple containers or models on the same endpoint and allocate a certain amount of compute to each container. With inference components, you can create and scale multiple copies of the model, each of which retains the compute that you have allocated. With inference components, deploying multiple models that have specific hardware requirements becomes a much simpler process, allowing for the scaling and hosting of multiple FMs. An example deployment would look like the following figure.

This feature extends inference components to a new type of component, inference component adapters, which you can use to allow SageMaker to manage your individual LoRA adapters at scale while having a common inference component for the base model that you’re deploying. In this post, we show how to create, update, and delete inference component adapters and how to call them for inference. You can envision this architecture as the following figure.

IC and Adapters

Prerequisites

To run the example notebooks, you need an AWS account with an AWS Identity and Access Management (IAM) role with permissions to manage the resources created as part of the solution. For details, refer to Create an AWS account.

If this is your first time working with Amazon SageMaker Studio, you first need to create a SageMaker domain. Additionally, you may need to request a service quota increase for the corresponding SageMaker hosting instances. In this example, you host the base model and multiple adapters on the same SageMaker endpoint, so you will use an ml.g5.12xlarge SageMaker hosting instance.

In this example, you learn how to deploy a base model (Meta Llama 3.1 8B Instruct) and LoRA adapters on a SageMaker real-time endpoint using inference components. You can find the example notebook in the GitHub repository.

import sagemaker
import boto3
import json

role = sagemaker.get_execution_role() # execution role for the endpoint
sess = sagemaker.session.Session() # sagemaker session for interacting with different AWS APIs
bucket = sess.default_bucket() # bucket to house artifacts
region = sess.boto_region_name # AWS Region used by the session

sm_client = boto3.client(service_name='sagemaker')
sm_rt_client = boto3.client(service_name='sagemaker-runtime')

Download the base model from the Hugging Face model hub. Because Meta Llama 3.1 8B Instruct is a gated model, you need a Hugging Face access token and must submit a request for access on the model page. For more details, see Accessing Private/Gated Models.

from huggingface_hub import snapshot_download

model_name = sagemaker.utils.name_from_base("llama-3-1-8b-instruct")

HF_TOKEN = "<<YOUR_HF_TOKEN>>"
model_id = "meta-llama/Llama-3.1-8B-Instruct"
model_id_pathsafe = model_id.replace("/","-")
local_model_path = f"./models/{model_id_pathsafe}"
s3_model_path = f"s3://{bucket}/models/{model_id_pathsafe}"

snapshot_download(repo_id=model_id, token=HF_TOKEN, local_dir=local_model_path, allow_patterns=["*.json", "*.safetensors"])

Copy your model artifact to Amazon S3 to improve model load time during deployment:

!aws s3 cp --recursive {local_model_path} {s3_model_path}

Select one of the available LMI container images for hosting. Efficient adapter inference capability is available in 0.31.0-lmi13.0.0 and higher.

inference_image_uri = "763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.31.0-lmi13.0.0-cu124"

Create a container environment for the hosting container. LMI container parameters can be found in the LMI Backend User Guides.

The parameters OPTION_MAX_LORAS and OPTION_MAX_CPU_LORAS control how adapters move between GPU, CPU, and disk. OPTION_MAX_LORAS sets a limit on the number of adapters concurrently stored in GPU memory, with excess adapters offloaded to CPU memory.  OPTION_MAX_CPU_LORAS determines how many adapters are staged in CPU memory, offloading excess adapters to local SSD storage.

In the following example, 30 adapters can live in GPU memory and 70 adapters in CPU memory before going to local storage.

env = {
    "HF_MODEL_ID": f"{s3_model_path}",
    "OPTION_ROLLING_BATCH": "lmi-dist",
    "OPTION_MAX_ROLLING_BATCH_SIZE": "16",
    "OPTION_TENSOR_PARALLEL_DEGREE": "max",
    "OPTION_ENABLE_LORA": "true",
    "OPTION_MAX_LORAS": "30",
    "OPTION_MAX_CPU_LORAS": "70",
    "OPTION_DTYPE": "fp16",
    "OPTION_MAX_MODEL_LEN": "6000"
}

With your container image and environment defined, you can create a SageMaker model object that you will use to create an inference component later:

model_name = sagemaker.utils.name_from_base("llama-3-1-8b-instruct")

create_model_response = sm_client.create_model(
    ModelName = model_name,
    ExecutionRoleArn = role,
    PrimaryContainer = {
        "Image": inference_image_uri,
        "Environment": env,
    },
)

Set up a SageMaker endpoint

To create a SageMaker endpoint, you need an endpoint configuration. When using inference components, you don’t specify a model in the endpoint configuration. You load the model as a component later on.

endpoint_config_name = f"{model_name}"
variant_name = "AllTraffic"
instance_type = "ml.g5.12xlarge"
model_data_download_timeout_in_seconds = 900
container_startup_health_check_timeout_in_seconds = 900

initial_instance_count = 1

sm_client.create_endpoint_config(
    EndpointConfigName = endpoint_config_name,
    ExecutionRoleArn = role,
    ProductionVariants = [
        {
            "VariantName": variant_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": initial_instance_count,
            "ModelDataDownloadTimeoutInSeconds": model_data_download_timeout_in_seconds,
            "ContainerStartupHealthCheckTimeoutInSeconds": container_startup_health_check_timeout_in_seconds,
            "RoutingConfig": {"RoutingStrategy": "LEAST_OUTSTANDING_REQUESTS"},
        }
    ]
)

Create the SageMaker endpoint with the following code:

endpoint_name = f"{model_name}-ep" # example endpoint name; adjust to your naming convention

create_endpoint_response = sm_client.create_endpoint(
    EndpointName = endpoint_name, EndpointConfigName = endpoint_config_name
)

With your endpoint created, you can now create the inference component for the base model. This will be the base component that the adapter components you create later will depend on.

A notable parameter here is ComputeResourceRequirements. This component-level configuration determines the amount of resources that the component needs (memory, vCPUs, accelerators). The adapters share these resources with the base component.

base_inference_component_name = f"base-{model_name}"

variant_name = "AllTraffic"

initial_copy_count = 1
min_memory_required_in_mb = 32000
number_of_accelerator_devices_required = 4

sm_client.create_inference_component(
    InferenceComponentName = base_inference_component_name,
    EndpointName = endpoint_name,
    VariantName = variant_name,
    Specification={
        "ModelName": model_name,
        "StartupParameters": {
            "ModelDataDownloadTimeoutInSeconds": model_data_download_timeout_in_seconds,
            "ContainerStartupHealthCheckTimeoutInSeconds": container_startup_health_check_timeout_in_seconds,
        },
        "ComputeResourceRequirements": {
            "MinMemoryRequiredInMb": min_memory_required_in_mb,
            "NumberOfAcceleratorDevicesRequired": number_of_accelerator_devices_required,
        },
    },
    RuntimeConfig={
        "CopyCount": initial_copy_count,
    },
)

In this example, you create a single adapter, but you could host up to hundreds of them per endpoint. They will need to be compressed and uploaded to Amazon S3.

The adapter package has the following files at the root of the archive with no sub-folders.

Adapter Files

For this example, an adapter was fine-tuned using QLoRA and Fully Sharded Data Parallel (FSDP) on the training split of the ECTSum dataset. Training took 21 minutes on an ml.p4d.24xlarge and cost approximately $13 using current on-demand pricing.
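To package an adapter in this layout, compress the files at the root of an archive and upload it to Amazon S3. The following is a minimal sketch; the local adapter directory and S3 prefix are hypothetical and should be adjusted to your environment.

import os
import tarfile
import boto3

local_adapter_dir = "./adapters/ectsum"   # hypothetical directory holding adapter_config.json and adapter weights
adapter_archive = "ectsum-adapter.tar.gz"

# Place every file at the root of the archive, with no sub-folders
with tarfile.open(adapter_archive, "w:gz") as tar:
    for file_name in os.listdir(local_adapter_dir):
        tar.add(os.path.join(local_adapter_dir, file_name), arcname=file_name)

boto3.client("s3").upload_file(adapter_archive, bucket, f"adapters/{adapter_archive}")
adapter_s3_uri = f"s3://{bucket}/adapters/{adapter_archive}"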

For each adapter you are going to deploy, you need to specify an InferenceComponentName, an ArtifactUrl with the S3 location of the adapter archive, and a BaseInferenceComponentName to create the connection between the base model inference component and the new adapter inference components. You repeat this process for each additional adapter.

adapter_ic1_name = f"adapter-ectsum-{base_inference_component_name}"
adapter_s3_uri = "<<S3_PATH_FOR_YOUR_ADAPTER>>"

sm_client.create_inference_component(
    InferenceComponentName = adapter_ic1_name,
    EndpointName = endpoint_name,
    Specification={
        "BaseInferenceComponentName": base_inference_component_name,
        "Container": {
            "ArtifactUrl": adapter_s3_uri
        },
    },
)

Use the deployed adapter

First, you build a prompt to invoke the model for earnings summarization, filling in the source text with a random item from the ECTSum dataset. Then you store the ground truth summary from the item for comparison later.

from datasets import load_dataset
dataset_name = "mrSoul7766/ECTSum"

test_dataset = load_dataset(dataset_name, trust_remote_code=True, split="test")

test_item = test_dataset.shuffle().select(range(1))[0] # pick a single random example as a dict

prompt =f"""
    <|begin_of_text|><|start_header_id|>system<|end_header_id|>
    You are an AI assistant trained to summarize earnings calls.
    Provide a concise summary of the call, capturing the key points and overall context.
    Focus on quarter over quarter revenue, earnings per share, changes in debt, highlighted risks, and growth opportunities.
    <|eot_id|><|start_header_id|>user<|end_header_id|>
    Summarize the following earnings call:

    {test_item["text"]}
    <|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

ground_truth_response = test_item["summary"]

To test the base model, specify the EndpointName for the endpoint you created earlier and the name of the base inference component as InferenceComponentName, along with your prompt and other inference parameters in the Body parameter:

component_to_invoke = base_inference_component_name

response_model = sm_rt_client.invoke_endpoint(
    EndpointName = endpoint_name,
    InferenceComponentName = component_to_invoke,
    Body = json.dumps(
        {
            "inputs": prompt,
            "parameters": {"max_new_tokens": 100, "temperature":0.9}
        }
    ),
    ContentType = "application/json",
)

base_model_response = json.loads(response_model["Body"].read().decode("utf8"))["generated_text"]

To invoke the adapter, use the adapter inference component name in your invoke_endpoint call:

component_to_invoke = adapter_ic1_name

response_model = sm_rt_client.invoke_endpoint(
    EndpointName = endpoint_name,
    InferenceComponentName = component_to_invoke,
    Body = json.dumps(
        {
            "inputs": prompt,
            "parameters": {"max_new_tokens": 100, "temperature":0.9}
        }
    ),
    ContentType = "application/json",
)

adapter_response = json.loads(response_model["Body"].read().decode("utf8"))["generated_text"]

Compare outputs

Compare the outputs of the base model and adapter to ground truth. While the base model might appear subjectively better in this test, the adapter’s response is actually much closer to the ground truth response. This will be proven with metrics in the next section.

Ground Truth:

q3 non-gaap earnings per share $3.71.
q3 sales rose 15.4 percent to $747 million.
bio rad laboratories - now anticipates 2021 non-gaap currency-neutral revenue growth between 12 to 13 percent.
sees 2021 estimated non-gaap operating margin of about 19.5 percent.
qtrly gaap income per share $129.96.

----------------------------------

Base Model Response:

Here is a summary of the earnings call:

**Key Points:**

* Revenue: $747 million, up 15.4% year-over-year (13.8% on a currency-neutral basis)
* Earnings per share: $129.96, up from $3 per share in Q3 2020
* Gross margin: 58.6% on a GAAP basis, 57.9% on a non-GAAP basis
* Operating income: $156.8 million, up from $109.6 million in Q3 2020
* Net income: $3.928

----------------------------------

Adapter Model Response:

                Here is a concise summary of the call:

                q3 revenue $747.6 million versus refinitiv ibes estimate of $753.9 million.
q3 earnings per share $3.71.
sees fy earnings per share $11.85 to $12.05.
sees fy 2021 non-gaap revenue growth to be 12% to 13%.
sees fy 2021 non-gaap gross margin to be 57.5% to 57.8%.
sees fy 2021 non-gaap operating margin to be 19.5%.

To validate the true adapter performance, you can use a tool like fmeval to run an evaluation of summarization accuracy. This will calculate the METEOR, ROUGE, and BertScore metrics for the adapter vs. the base model. Doing so against the test split of ECTSum yields the following results.

Testing Score Text

The fine-tuned adapter shows a 59% increase in METEOR score, 159% increase in ROUGE score, and 8.6% increase in BertScore.

The following diagram shows the frequency distribution of scores for the different metrics, with the adapter consistently scoring better more often in all metrics.

Testing Scores
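If you want to reproduce a similar comparison yourself, the following is a minimal sketch that scores generated summaries against their references using the Hugging Face evaluate library rather than fmeval; the prediction and reference lists shown are placeholders built from the single example above.

import evaluate

# Placeholder batch: in practice, generate summaries for the full ECTSum test split
predictions = [adapter_response]        # model-generated summaries
references = [ground_truth_response]    # corresponding ground-truth summaries

rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")
bertscore = evaluate.load("bertscore")

print("ROUGE:", rouge.compute(predictions=predictions, references=references))
print("METEOR:", meteor.compute(predictions=predictions, references=references))
print("BERTScore:", bertscore.compute(predictions=predictions, references=references, lang="en"))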

In our tests, we observed an end-to-end latency difference of up to 10% between invoking the base model and invoking an adapter. If the adapter has to be loaded from CPU memory or disk, it incurs an additional cold start delay the first time it is loaded to GPU. Depending on your container configuration and the instance type you choose, these values may vary.

Update an existing adapter

Because adapters are managed as inference components, you can update them on a running endpoint. SageMaker unloads and deregisters the old adapter, then loads and registers the new adapter onto every copy of the base inference component across all of the endpoint's instances. To update an adapter inference component, use the update_inference_component API and supply the existing inference component name and the Amazon S3 path to the new compressed adapter archive.

You can train a new adapter, or re-upload the existing adapter artifact to test this functionality.

update_inference_component_response = sm_client.update_inference_component(
    InferenceComponentName = adapter_ic1_name,
    Specification={
        "Container": {
            "ArtifactUrl": new_adapter_s3_uri
        },
    },
)

Remove adapters

If you need to delete an adapter, call the delete_inference_component API with the inference component name to remove it:

sess = sagemaker.session.Session()
sess.delete_inference_component(adapter_ic1_name, wait = True)

Deleting the base model inference component automatically deletes it and any associated adapter inference components:

sess.delete_inference_component(base_inference_component_name, wait = True)

Pricing

SageMaker multi-adapter inference is generally available in AWS Regions US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Stockholm), Middle East (UAE), and South America (São Paulo), and is available at no extra cost.

Conclusion

The new efficient multi-adapter inference feature in SageMaker opens up exciting possibilities for customers with fine-tuning use cases. By allowing the dynamic loading of fine-tuned LoRA adapters, you can quickly and cost-effectively customize AI models to your specific needs. This flexibility unlocks new opportunities to deploy powerful, customized AI across organizations in industries like marketing, healthcare, and finance. The ability to manage these adapters at scale through SageMaker inference components makes it effortless to build tailored generative AI solutions.


About the Authors

Dmitry Soldatkin is a Senior Machine Learning Solutions Architect at AWS, helping customers design and build AI/ML solutions. Dmitry’s work covers a wide range of ML use cases, with a primary interest in generative AI, deep learning, and scaling ML across the enterprise. He has helped companies in many industries, including insurance, financial services, utilities, and telecommunications. He has a passion for continuous innovation and using data to drive business outcomes. Prior to joining AWS, Dmitry was an architect, developer, and technology leader in data analytics and machine learning fields in the financial services industry.

Giuseppe Zappia is a Principal AI/ML Specialist Solutions Architect at AWS, focused on helping large enterprises design and deploy ML solutions on AWS. He has over 20 years of experience as a full stack software engineer, and has spent the past 5 years at AWS focused on the field of machine learning.

Ram Vegiraju is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers build and optimize their AI/ML solutions on Amazon SageMaker. In his spare time, he loves traveling and writing.

Improve the performance of your Generative AI applications with Prompt Optimization on Amazon Bedrock

Prompt engineering refers to the practice of writing instructions to get the desired responses from foundation models (FMs). You might have to spend months experimenting and iterating on your prompts, following the best practices for each model, to achieve your desired output. Furthermore, these prompts are specific to a model and task, and performance isn’t guaranteed when they are used with a different FM. This manual effort required for prompt engineering can slow down your ability to test different models.

Today, we are excited to announce the availability of Prompt Optimization on Amazon Bedrock. With this capability, you can now optimize your prompts for several use cases with a single API call or a click of a button on the Amazon Bedrock console.

In this post, we discuss how you can get started with this new feature using an example use case in addition to discussing some performance benchmarks.

Solution overview

At the time of writing, Prompt Optimization on Amazon Bedrock supports Anthropic's Claude 3 Haiku, Claude 3 Sonnet, Claude 3 Opus, and Claude 3.5 Sonnet models; Meta's Llama 3 70B and Llama 3.1 70B models; Mistral's Large model; and Amazon's Titan Text Premier model. Prompt Optimization can result in significant improvements for generative AI tasks. We ran example performance benchmarks for several tasks, which are discussed later in this post.

In the following sections, we demonstrate how to use the Prompt Optimization feature. For our use case, we want to optimize a prompt that looks at a call or chat transcript, and classifies the next best action.

Use automatic prompt optimization

To get started with this feature, complete the following steps:

  1. On the Amazon Bedrock console, choose Prompt management in the navigation pane.
  2. Choose Create prompt.
  3. Enter a name and optional description for your prompt, then choose Create.

  4. For User message, enter the prompt template that you want to optimize.

For example, we want to optimize a prompt that looks at a call or chat transcript and classifies the next best action as one of the following:

  • Wait for customer input
  • Assign agent
  • Escalate

The following screenshot shows what our prompt looks like in the prompt builder.

  5. In the Configurations pane, for Generative AI resource, choose Models and choose your preferred model. For this example, we use Anthropic’s Claude 3.5 Sonnet.
  6. Choose Optimize.

A pop-up appears that indicates that your prompt is being optimized.

When optimization is complete, you should see a side-by-side view of the original and the optimized prompt for your use case.

  7. Add values to your test variables (in this case, transcript) and choose Run.

You can then see the output from the model in the desired format.

As we can see in this example, the prompt is more explicit, with clear instructions on how to process the original transcript provided as a variable. This results in the correct classification, in the required output format. Once a prompt has been optimized, it can be deployed into an application by creating a version which creates a snapshot of its configuration. Multiple versions can be stored to enable switching between different use-case prompt configurations. See prompt management for more details on prompt version control and deployment.
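You can also run the same optimization programmatically with a single API call. The following is a minimal sketch that assumes the optimize_prompt operation on the bedrock-agent-runtime client; the request and response field names shown here should be verified against the current boto3 documentation, and the prompt template is hypothetical.

import boto3

bedrock_agent_rt = boto3.client("bedrock-agent-runtime")

# Hypothetical prompt template; {transcript} is the variable to preserve during optimization
prompt_template = "Classify the next best action for the following transcript: {transcript}"

response = bedrock_agent_rt.optimize_prompt(
    input={"textPrompt": {"text": prompt_template}},
    targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
)

# The optimized prompt is returned as an event stream (field names are assumptions)
for event in response["optimizedPrompt"]:
    if "optimizedPromptEvent" in event:
        print(event["optimizedPromptEvent"]["optimizedPrompt"]["textPrompt"]["text"])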

Performance benchmarks

We ran the Prompt Optimization feature on several open source datasets. We are excited to share the improvements seen in a few important and common use cases that we see our customers working with:

  • Summarization (XSUM)
  • RAG-based dialog continuation (DSTC)
  • Function calling (GLAIVE)

To measure performance improvement with respect to the baseline prompts, we use ROUGE-2 F1 for the summarization use case, HELM-F1 for the dialog continuation use case, and HELM-F1 and JSON matching for function calling. We saw a performance improvement of 18% on the summarization use case, 8% on dialog completion, and 22% on function calling benchmarks. The detailed results follow, showing the original prompt, the optimized prompt, and the measured improvement for each use case.

Summarization

Original prompt:

First, please read the article below.
{context}
 Now, can you write me an extremely short abstract for it?
Optimized prompt:

<task>
Your task is to provide a concise 1-2 sentence summary of the given text that captures the main points or key information.
</task><context>
{context}
</context><instructions>
Please read the provided text carefully and thoroughly to understand its content. Then, generate a brief summary in your own words that is much shorter than the original text while still preserving the core ideas and essential details. The summary should be concise yet informative, capturing the essence of the text in just 1-2 sentences.
</instructions><result_format>
Summary: [WRITE YOUR 1-2 SENTENCE SUMMARY HERE]
</result_format>
Performance improvement: 18.04%
Dialog continuation

Original prompt:

Functions available:
{available_functions}
Examples of calling functions:
Input:
Functions: [{"name": "calculate_area", "description": "Calculate the area of a shape", "parameters": {"type": "object", "properties": {"shape": {"type": "string", "description": "The type of shape (e.g. rectangle, triangle, circle)"}, "dimensions": {"type": "object", "properties": {"length": {"type": "number", "description": "The length of the shape"}, "width": {"type": "number", "description": "The width of the shape"}, "base": {"type": "number", "description": "The base of the shape"}, "height": {"type": "number", "description": "The height of the shape"}, "radius": {"type": "number", "description": "The radius of the shape"}}}}, "required": ["shape", "dimensions"]}}]
Conversation history: USER: Can you calculate the area of a rectangle with a length of 5 and width of 3?
Output:
{"name": "calculate_area", "arguments": {"shape": "rectangle", "dimensions": {"length": 5, "width": 3}}}Input:
Functions: [{"name": "search_books", "description": "Search for books based on title or author", "parameters": {"type": "object", "properties": {"search_query": {"type": "string", "description": "The title or author to search for"}}, "required": ["search_query"]}}]
Conversation history: USER: I am looking for books by J.K. Rowling. Can you help me find them?
Output:
{"name": "search_books", "arguments": {"search_query": "J.K. Rowling"}}Input:
Functions: [{"name": "calculate_age", "description": "Calculate the age based on the birthdate", "parameters": {"type": "object", "properties": {"birthdate": {"type": "string", "format": "date", "description": "The birthdate"}}, "required": ["birthdate"]}}]
Conversation history: USER: Hi, I was born on 1990-05-15. Can you tell me how old I am today?
Output:
{"name": "calculate_age", "arguments": {"birthdate": "1990-05-15"}}
Current chat history:
{conversation_history}
Respond to the last message. Call a function if necessary.

Optimized prompt:

Task: Respond to the user's message in the given conversation by calling appropriate functions if necessary.

Instructions:
1. Review the list of available functions:
<available_functions>
{available_functions}
</available_functions>

2. Study the examples of how to call these functions:
<fewshot_examples>

<example>
H:
<context>Functions: [{"name": "calculate_area", "description": "Calculate the area of a shape", "parameters": {"type": "object", "properties": {"shape": {"type": "string", "description": "The type of shape (e.g. rectangle, triangle, circle)"}, "dimensions": {"type": "object", "properties": {"length": {"type": "number", "description": "The length of the shape"}, "width": {"type": "number", "description": "The width of the shape"}, "base": {"type": "number", "description": "The base of the shape"}, "height": {"type": "number", "description": "The height of the shape"}, "radius": {"type": "number", "description": "The radius of the shape"}}}}, "required": ["shape", "dimensions"]}}]</context>
<question>USER: Can you calculate the area of a rectangle with a length of 5 and width of 3?</question>
A:
<output>{"name": "calculate_area", "arguments": {"shape": "rectangle", "dimensions": {"length": 5, "width": 3}}}</output>
</example>

<example>
H:
<context>Functions: [{"name": "search_books", "description": "Search for books based on title or author", "parameters": {"type": "object", "properties": {"search_query": {"type": "string", "description": "The title or author to search for"}}, "required": ["search_query"]}}]</context>
<question>USER: I am looking for books by J.K. Rowling. Can you help me find them?</question>
A:
<output>{"name": "search_books", "arguments": {"search_query": "J.K. Rowling"}}</output>
</example>

<example>
H:
<context>Functions: [{"name": "calculate_age", "description": "Calculate the age based on the birthdate", "parameters": {"type": "object", "properties": {"birthdate": {"type": "string", "format": "date", "description": "The birthdate"}}, "required": ["birthdate"]}}]</context>
<question>USER: Hi, I was born on 1990-05-15. Can you tell me how old I am today?</question>
A:
<output>{"name": "calculate_age", "arguments": {"birthdate": "1990-05-15"}}</output>
</example>

</fewshot_examples>

3. Carefully read the current conversation history:
<conversation_history>
{conversation_history}
</conversation_history>

4. Analyze the last message from the user and determine if any of the available functions need to be called to provide an appropriate response.

5. If a function call is necessary, follow the format demonstrated in the examples to invoke the relevant function with the required arguments.

6. If no function call is needed, provide a direct response to the user's message.

7. Your response should be concise, relevant, and tailored to the specific context of the conversation.

8. Enclose your final response in <response></response> tags, without any additional preamble or explanation.

Provide your response immediately after these instructions, following the specified format.

Performance improvement: 8.23%
Function calling

Original prompt:

## Retrieved knowledge snippets

Domain: Restaurant
Entity: THE COPPER KETTLE
Review: My friend Mark took me to the copper kettle to celebrate my promotion. I decided to treat myself to Shepherds Pie. It was not as flavorful as I'd have liked and the consistency was just runny, but the servers were awesome and I enjoyed the view from the patio. I may come back to try the strawberries and cream come time for Wimbledon..
Highlight: It was not as flavorful as I'd have liked and the consistency was just runny, but the servers were awesome and I enjoyed the view from the patio.

Domain: Restaurant
Entity: THE COPPER KETTLE
Review: Last week, my colleagues and I visited THE COPPER KETTLE that serves British cuisine. We enjoyed a nice view from inside of the restaurant. The atmosphere was enjoyable and the restaurant was located in a nice area. However, the food was mediocre and was served in small portions.
Highlight: We enjoyed a nice view from inside of the restaurant.

## Dialog

user: Howdy, I'm looking for a British restaurant for breakfast.

agent: There are several British restaurants available. Would you prefer a moderate or expensive price range?

user: Moderate price range please.

agent: Five restaurants match your criteria. Four are in Centre area and one is in the West. Which area would you prefer?

user: I would like the Center of town please.

agent: How about The Copper Kettle?

user: Do they offer a good view?

agent:

Given the above information from a retrieval augmented generation (RAG) system, respond with the appropriate answer to the question from the user.

Optimized prompt:

<task_description>
You are an advanced question-answering system that utilizes information from a retrieval augmented generation (RAG) system to provide accurate and relevant responses to user queries.
</task_description><instructions>
1. Carefully review the provided context information:
<context>
Domain: Restaurant
Entity: THE COPPER KETTLE
Review: My friend Mark took me to the copper kettle to celebrate my promotion. I decided to treat myself to Shepherds Pie. It was not as flavorful as I'd have liked and the consistency was just runny, but the servers were awesome and I enjoyed the view from the patio. I may come back to try the strawberries and cream come time for Wimbledon..
Highlight: It was not as flavorful as I'd have liked and the consistency was just runny, but the servers were awesome and I enjoyed the view from the patio.Domain: Restaurant
Entity: THE COPPER KETTLE
Review: Last week, my colleagues and I visited THE COPPER KETTLE that serves British cuisine. We enjoyed a nice view from inside of the restaurant. The atmosphere was enjoyable and the restaurant was located in a nice area. However, the food was mediocre and was served in small portions.
Highlight: We enjoyed a nice view from inside of the restaurant.
</context>2. Analyze the user's question:
<question>
user: Howdy, I'm looking for a British restaurant for breakfast.agent: There are several British restaurants available. Would you prefer a moderate or expensive price range?user: Moderate price range please.agent: Five restaurants match your criteria. Four are in Centre area and one is in the West. Which area would you prefer?user: I would like the Center of town please.agent: How about The Copper Kettle?user: Do they offer a good view?

agent:
</question>

3. Leverage the context information and your knowledge to generate a concise and accurate answer to the user's question.

4. Ensure your response directly addresses the specific query while incorporating relevant details from the context.

5. Provide your answer in a clear and easy-to-understand manner, without any unnecessary preamble or explanation.
</instructions>

<output_format>
Answer: [Insert your concise answer here]
</output_format>

<example>
Context:
The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower. Constructed from 1887 to 1889 as the centerpiece of the 1889 World's Fair, it was initially criticized by some of France's leading artists and intellectuals for its design, but it has become a global cultural icon of France and one of the most recognizable structures in the world.

Question: What is the Eiffel Tower?

Answer: The Eiffel Tower is a wrought-iron lattice tower in Paris, France, named after its designer Gustave Eiffel, and constructed as the centerpiece of the 1889 World's Fair.
</example>

Performance improvement: 22.03%

The consistent improvements across different tasks highlight the robustness and effectiveness of Prompt Optimization in enhancing prompt performance for various natural language processing (NLP) tasks. This shows that Prompt Optimization can save you considerable time and effort while achieving better outcomes, because it lets you test models with optimized prompts that implement the best practices for each model.

Conclusion

Prompt Optimization on Amazon Bedrock empowers you to effortlessly enhance your prompt’s performance across a wide range of use cases with just a single API call or a few clicks on the Amazon Bedrock console. The substantial improvements demonstrated on open source benchmarks for tasks like summarization, dialog continuation, and function calling underscore this new feature’s capability to significantly streamline the prompt engineering process. Prompt Optimization on Amazon Bedrock enables you to easily test many different models for your generative AI application, following the best prompt engineering practices for each model. The reduced manual effort will greatly accelerate the development of generative AI applications in your organization.

We encourage you to try out Prompt Optimization with your own use cases and reach out to us for feedback and collaboration.


About the Authors

Shreyas Subramanian is a Principal Data Scientist and helps customers by using generative AI and deep learning to solve their business challenges using AWS services. Shreyas has a background in large-scale optimization and ML and in the use of ML and reinforcement learning for accelerating optimization tasks.

Chris Pecora is a Generative AI Data Scientist at Amazon Web Services. He is passionate about building innovative products and solutions while also focusing on customer-obsessed science. When not running experiments and keeping up with the latest developments in generative AI, he loves spending time with his kids.

Zhengyuan Shen is an Applied Scientist at Amazon Bedrock, specializing in foundational models and ML modeling for complex tasks including natural language and structured data understanding. He is passionate about leveraging innovative ML solutions to enhance products or services, thereby simplifying the lives of customers through a seamless blend of science and engineering. Outside work, he enjoys sports and cooking.

Shipra Kanoria is a Principal Product Manager at AWS. She is passionate about helping customers solve their most complex problems with the power of machine learning and artificial intelligence. Before joining AWS, Shipra spent over 4 years at Amazon Alexa, where she launched many productivity-related features on the Alexa voice assistant.

Get the Power of GeForce-Powered Gaming in the Cloud Half Off With Black Friday Deal

Turn Black Friday into Green Thursday with a new deal on GeForce NOW Ultimate and Performance memberships this week. For a limited time, get 50% off new Ultimate or Performance memberships for the first three months to experience the power of GeForce RTX-powered gaming at a fraction of the cost.

The giving continues for GeForce NOW members: SteelSeries is offering a 30% discount exclusively to all GeForce NOW members on Stratus+ or Nimbus+ controllers, perfect for gaming anytime, anywhere when paired with GeForce NOW on Android and iOS devices. To redeem the discount, opt in to GeForce NOW rewards and look out for an email with details. Enjoy this exclusive offer on its own — it can’t be combined with other SteelSeries promotions.

It’s not a GFN Thursday without new games — this week, six are joining the over 2,000 titles in the GeForce NOW library.

Plus, the Steam Autumn Sale is happening now, featuring stellar discounts on GeForce NOW-supported games. Snag beloved publishers’ top titles, including Nightingale from Inflexion Games, Remnant and Remnant II from Arc Games, and Cult of the Lamb and The Plucky Squire from Devolver — and even more from publishers Frost Giant Studios, Metric Empire, tinyBuild, Torn Banner Studios and Tripwire. The sale runs through Wednesday, Dec. 4.

Stuff Your Stockings

This holiday season, GeForce NOW is spreading cheer to gamers everywhere with an irresistible Black Friday offer. Those looking to try out the cloud gaming service can now level up their gaming with 50% off new Ultimate and Performance memberships for the first three months. It’s the perfect time for gamers to treat themselves or a buddy to GeForce RTX-powered gaming without having to upgrade any hardware.

Black Friday Deal on GeForce NOW
Thankful for cloud gaming discounts.

Lock in all the perks of the newly enhanced Performance membership, now featuring crisp 1440p streaming, at half off for the next three months. Or go all out with the Ultimate tier — delivering the same premium experience GeForce RTX 4080 GPU owners enjoy — now available at the regular monthly cost of a Performance membership.

With a GeForce NOW membership, gamers can stream over 2,000 PC games from popular digital gaming stores, with longer gaming sessions and real-time ray tracing for supported titles across nearly all devices. Performance members can stream at up to 1440p at 60 frames per second, and Ultimate members can stream at up to 4K at 120 fps or 1080p at 240 fps.

Don’t let this festive deal slip away — give the gift of gaming this holiday season with GeForce NOW’s Black Friday sale. Whether battling winter bosses or exploring snowy landscapes, do it with exceptional performance at an exceptional price.

Elevating New Games

In addition, members can look for the following:

  • New Arc Line (New release on Steam, Nov. 26)
  • MEGA MAN X DiVE Offline Demo (Steam)
  • PANICORE (Steam)
  • Resident Evil 7 Teaser: Beginning Hour Demo (Steam)
  • Slime Rancher (Steam)
  • Sumerian Six (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Search enterprise data assets using LLMs backed by knowledge graphs

Enterprises are facing challenges in accessing their data assets scattered across various sources because of the increasing complexity of managing vast amounts of data. Traditional search methods often fail to provide comprehensive and contextual results, particularly for unstructured data or complex queries.

Search solutions in modern big data management must facilitate efficient and accurate search of enterprise data assets that can adapt to the arrival of new assets. Customers want to search through all of the data and applications across their organization, and they want to see the provenance information for all of the documents retrieved. The application needs to search through the catalog and show the metadata information related to all of the data assets that are relevant to the search context. To accomplish all of these goals, the solution should include the following features:

  • Provide connections between related entities and data sources
  • Consolidate fragmented data cataloging systems that contain metadata
  • Provide reasoning behind the search outputs

In this post, we present a generative AI-powered semantic search solution that empowers business users to quickly and accurately find relevant data assets across various enterprise data sources. In this solution, we integrate large language models (LLMs) hosted on Amazon Bedrock, backed by a knowledge base derived from a knowledge graph built on Amazon Neptune, to create a powerful search paradigm. It enables natural language questions to search across documents stored in Amazon Simple Storage Service (Amazon S3), data lake tables hosted in the AWS Glue Data Catalog, and enterprise assets in Amazon DataZone.

Foundation models (FMs) on Amazon Bedrock provide powerful generative models for text and language tasks. However, FMs lack domain-specific knowledge and reasoning capabilities. Knowledge graphs available on Neptune provide a means to represent interconnected facts and entities with inferencing and reasoning abilities for domains. Equipping FMs with structured reasoning abilities using domain-specific knowledge graphs harnesses the best of both approaches. This allows FMs to retain their inductive abilities while grounding their language understanding and generation in well-structured domain knowledge and logical reasoning. In the context of enterprise data asset search powered by a metadata catalog hosted on services such as Amazon DataZone, AWS Glue, and other third-party catalogs, knowledge graphs can help integrate this linked data and also enable a scalable search paradigm that integrates metadata that evolves over time.

Solution overview

The solution integrates with your existing data catalogs and repositories, creating a unified, scalable semantic layer across the entire data landscape. When users ask questions in plain English, the search is not just for keywords; it comprehends the query’s intent and context, relating it to relevant tables, documents, and datasets across your organization. This semantic understanding enables more accurate, contextual, and insightful search results, making the entire company’s data as accessible and simple to search as using a consumer search engine, but with the depth and specificity your business demands. This significantly enhances decision-making, efficiency, and innovation throughout your organization by unlocking the full potential of your data assets. The following video shows the sample working solution.

Using graph data processing and the integration of natural language-based search on embedded graphs, these hybrid systems can unlock powerful insights from complex data structures.

The solution presented in this post consists of an ingestion pipeline and a search application UI that the user can submit queries to in natural language while searching for data assets.

The following diagram illustrates the end-to-end architecture, consisting of the metadata API layer, ingestion pipeline, embedding generation workflow, and frontend UI.

The ingestion pipeline (3) ingests metadata (1) from services (2), including Amazon DataZone, AWS Glue, and Amazon Athena, to a Neptune database after converting the JSON response from the service APIs into an RDF triple format. The RDF is converted into text and loaded into an S3 bucket, which is accessed by Amazon Bedrock (4) as the source of the knowledge base. You can extend this solution to include metadata from third-party cataloging solutions as well. The end-users access the application, which is hosted on Amazon CloudFront (5).

A state machine in AWS Step Functions defines the workflow of the ingestion process by invoking AWS Lambda functions, as illustrated in the following figure.

The functions perform the following actions:

  1. Read metadata from services (Amazon DataZone, AWS Glue, and Athena) in JSON format. Enhance the JSON format metadata to JSON-LD format by adding context, and load the data to an Amazon Neptune Serverless database as RDF triples. The following is an example of RDF triples in N-triples file format:
    <arn:aws:glue:us-east-1:440577664410:table/default/market_sales_table#sales_qty_sold>
    <http://www.w3.org/2000/01/rdf-schema#label> "sales_qty_sold" .
    <arn:aws:glue:us-east-1:440577664410:table/sampleenv_pub_db/mkt_sls_table#disnt> 
    <http://www.w3.org/2000/01/rdf-schema#label> "disnt" .
    <arn:aws:glue:us-east-1:440577664410:table/sampleenv_pub_db/mkt_sls_table> 
    <http://www.amazonaws.com/datacatalog/hasColumn> 
    <arn:aws:glue:us-east-1:440577664410:table/sampleenv_pub_db/mkt_sls_table#item_id> .
    <arn:aws:glue:us-east-1:440577664410:table/sampledata_pub_db/raw_customer> 
    <http://www.w3.org/2000/01/rdf-schema#label> "raw_customer" .

    For more details about RDF data format, refer to the W3C documentation.

  2. Run SPARQL queries in the Neptune database to populate additional triples from inference rules. This step enriches the metadata by using the graph inferencing and reasoning capabilities. The following is a SPARQL query that inserts new metadata inferred from existing triples:
    PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
    INSERT
      {
        ?asset <http://www.amazonaws.com/datacatalog/exists_in_aws_account> ?account
      }
    WHERE
      {
        ?asset <http://www.amazonaws.com/datacatalog/isTypeOf> "GlueTableAssetType" .
        ?asset <http://www.amazonaws.com/datacatalog/catalogId> ?account .
      }

  3. Read triples from the Neptune database and convert them into text format using an LLM hosted on Amazon Bedrock. This solution uses Anthropic’s Claude 3 Haiku v1 for RDF-to-text conversion, storing the resulting text files in an S3 bucket.
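The following is a minimal sketch of this RDF-to-text conversion step using the Amazon Bedrock Converse API with Anthropic's Claude 3 Haiku; the prompt wording and the sample triple are illustrative assumptions, and in the actual workflow the triples would be read from Neptune in batches.

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Hypothetical batch of triples; in the pipeline these are read from the Neptune database
triples_text = '<arn:aws:glue:us-east-1:111122223333:table/sample_db/sample_table> <http://www.w3.org/2000/01/rdf-schema#label> "sample_table" .'

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Describe the following RDF triples about data catalog assets "
                             f"in plain English sentences:\n{triples_text}"}],
    }],
)

text_description = response["output"]["message"]["content"][0]["text"]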

Amazon Bedrock Knowledge Bases is configured to use the preceding S3 bucket as a data source to create a knowledge base. Amazon Bedrock Knowledge Bases creates vector embeddings from the text files using the Amazon Titan Text Embeddings v2 model.

A Streamlit application is hosted in Amazon Elastic Container Service (Amazon ECS) as a task, which provides a chatbot UI for users to submit queries against the knowledge base in Amazon Bedrock.
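Under the hood, the chatbot can serve queries against the knowledge base with the Amazon Bedrock RetrieveAndGenerate API. The following is a minimal sketch; the knowledge base ID and model ARN are hypothetical placeholders.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "How to query sales data?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<<YOUR_KNOWLEDGE_BASE_ID>>",  # hypothetical placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

print(response["output"]["text"])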

Prerequisites

The following are prerequisites to deploy the solution:

  • Create an Amazon Cognito user pool with an app client, and capture the user pool ID and application client ID, which will be required while launching the CloudFormation stack for building the web application.
  • Create an Amazon Cognito user (for example, username=test_user) for your Amazon Cognito user pool that will be used to log in to the application. An email address must be included while creating the user.

Prepare the test data

A sample dataset is needed for testing the functionalities of the solution. In your AWS account, prepare a table using Amazon DataZone and Athena by completing Step 1 through Step 8 in Amazon DataZone QuickStart with AWS Glue data. This will create a table and capture its metadata in the Data Catalog and Amazon DataZone.

To test how the solution is combining metadata from different data catalogs, create another table only in the Data Catalog, not in Amazon DataZone. On the Athena console, open the query editor and run the following query to create a new table:

CREATE TABLE raw_customer AS SELECT 203 AS cust_id, 'John Doe' AS cust_name

Deploy the application

Complete the following steps to deploy the application:

  1. To launch the CloudFormation template, choose Launch Stack or download the template file (yaml) and launch the CloudFormation stack in your AWS account.
  2. Modify the stack name or leave as default, then choose Next.
  3. In the Parameters section, input the Amazon Cognito user pool ID (CognitoUserPoolId) and application client ID (CognitoAppClientId). This is required for successful deployment of the stacks.
  4. Review and update other AWS CloudFormation parameters if required. You can use the default values for all the parameters and continue with the stack deployment.
    The following table lists the default parameters for the CloudFormation template.

    EnvironmentName: Unique name to distinguish different web applications in the same AWS account (min length 1 and max length 4). Default: dev
    S3DataPrefixKB: S3 object prefix where the knowledge base source documents (metadata files) should be stored. Default: knowledge_base
    Cpu: CPU configuration of the ECS task. Default: 512
    Memory: Memory configuration of the ECS task. Default: 1024
    ContainerPort: Port for the ECS task host and container. Default: 80
    DesiredTaskCount: Number of desired ECS tasks. Default: 1
    MinContainers: Minimum containers for auto scaling. Should be less than or equal to DesiredTaskCount. Default: 1
    MaxContainers: Maximum containers for auto scaling. Should be greater than or equal to DesiredTaskCount. Default: 3
    AutoScalingTargetValue: CPU utilization target percentage for ECS task auto scaling. Default: 80
  5. Launch the stack.

The CloudFormation stack creates the required resources to launch the application by invoking a series of nested stacks. It deploys the following resources in your AWS account:

  • An S3 bucket to save metadata details from AWS Glue, Athena, and Amazon DataZone, and its corresponding text data
  • An additional S3 bucket to store code, artifacts, and logs related to the deployment
  • A virtual private cloud (VPC), subnets, and network infrastructure
  • An Amazon OpenSearch Serverless index
  • An Amazon Bedrock knowledge base
  • A data source for the knowledge base that connects to the S3 data bucket provisioned, with an event rule to sync the data
  • A Lambda function that watches for objects dropped under the S3 prefix configured as parameter S3DataPrefixKB and starts an ingestion job using Amazon Bedrock Knowledge Bases APIs, which will read data from Amazon S3, chunk it, convert the chunks into embeddings using the Amazon Titan Embeddings model, and store these embeddings in OpenSearch Serverless
  • An Amazon Neptune Serverless database to store the RDF triples
  • A Step Functions state machine that invokes a series of Lambda functions that read from the different AWS services, generate RDF triples, and convert them to text documents
  • An ECS cluster and service to host the Streamlit web application

After the CloudFormation stack is deployed, a Step Functions workflow runs automatically that orchestrates the metadata extract, transform, and load (ETL) job and stores the final results in Amazon S3. You can view the execution status and details of the workflow by fetching the state machine Amazon Resource Name (ARN) from the CloudFormation stack. If AWS Lake Formation is enabled for the AWS Glue databases and tables in the account, complete the following steps after the CloudFormation stack is deployed to update the permissions, extract the metadata details from AWS Glue, and load them into the knowledge base:

  1. Add a role to the AWS Glue Lambda function that grants access to the AWS Glue database (see the sketch after this list).
  2. Fetch the state machine ARN from the CloudFormation stack.
  3. Run the state machine with default input values to extract the metadata details and write to Amazon S3.
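The following minimal sketch shows how steps 1 and 3 might look programmatically; the role ARN, account ID, database name, and state machine ARN are hypothetical placeholders.

import boto3
import json

lakeformation = boto3.client("lakeformation")
stepfunctions = boto3.client("stepfunctions")

# Step 1 (sketch): grant the AWS Glue Lambda function's execution role access to the Glue database
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/glue-metadata-lambda-role"},
    Resource={"Database": {"Name": "sampleenv_pub_db"}},
    Permissions=["DESCRIBE"],
)

# Step 3 (sketch): run the ingestion state machine with default (empty) input
stepfunctions.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:111122223333:stateMachine:metadata-ingestion",
    input=json.dumps({}),
)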

You can search for the application stack name <MainStackName>-deploy-<EnvironmentName> (for example, mm-enterprise-search-deploy-dev) on the AWS CloudFormation console. Locate the web application URL in the stack outputs (CloudfrontURL). Launch the web application by choosing the URL link.

Use the application

You can access the application from a web browser using the domain name of the Amazon CloudFront distribution created in the deployment steps. Log in using a user credential that exists in the Amazon Cognito user pool.

Now you can submit a query using a text input. The AWS account used in this example contains sample tables related to sales and marketing. We ask the question, “How to query sales data?” The answer includes metadata on the table mkt_sls_table that was created in the previous steps.

We ask another question: “How to get customer names from sales data?” In the previous steps, we created the raw_customer table, which wasn’t published as a data asset in Amazon DataZone. The table only exists in the Data Catalog. The application returns an answer that combines metadata from Amazon DataZone and AWS Glue.

This powerful solution opens up exciting possibilities for enterprise data discovery and insights. We encourage you to deploy it in your own environment and experiment with different types of queries across your data assets. Try combining information from multiple sources, asking complex questions, and see how the semantic understanding improves your search experience.

Clean up

The total cost of running this setup is less than $10 per day. However, we recommend deleting the CloudFormation stack after use because the deployed resources incur costs. Deleting the main stack also deletes all the nested stacks except the VPC because of dependency. You also need to delete the VPC from the Amazon VPC console.

Conclusion

In this post, we presented a comprehensive and extendable multimodal search solution for enterprise data assets. The integration of LLMs and knowledge graphs shows that by combining the strengths of these technologies, organizations can unlock new levels of data discovery, reasoning, and insight generation, ultimately driving innovation and progress across a wide range of domains.


About the Authors

Sudipta Mitra is a Generative AI Specialist Solutions Architect at AWS, who helps customers across North America use the power of data and AI to transform their businesses and solve their most challenging problems. His mission is to enable customers achieve their business goals and create value with data and AI. He helps architect solutions across AI/ML applications, enterprise data platforms, data governance, and unified search in enterprises.

Gi Kim is a Data & ML Engineer with the AWS Professional Services team, helping customers build data analytics solutions and AI/ML applications. With over 20 years of experience in solution design and development, he has a background in multiple technologies, and he works with specialists from different industries to develop new innovative solutions using his skills. When he is not working on solution architecture and development, he enjoys playing with his dogs at a beach under the San Francisco Golden Gate Bridge.

Surendiran Rangaraj is a Data & ML Engineer at AWS who helps customers unlock the power of big data, machine learning, and generative AI applications for their business solutions. He works closely with a diverse range of customers to design and implement tailored strategies that boost efficiency, drive growth, and enhance customer experiences.

Read More

Embodied AI Chess with Amazon Bedrock

Embodied AI Chess with Amazon Bedrock

Generative AI continues to transform numerous industries and activities, with one such application being the enhancement of chess, a traditional human game, with sophisticated AI and large language models (LLMs). Using the Custom Model Import feature in Amazon Bedrock, you can now create engaging matches between foundation models (FMs) fine-tuned for chess gameplay, combining classical strategy with generative AI capabilities.


Amazon Bedrock provides managed access to leading FMs from Anthropic, Meta, Mistral AI, AI21 Labs, Cohere, Stability AI, and Amazon, enabling developers to build sophisticated AI-powered applications. These models demonstrate remarkable capabilities in understanding complex game patterns, strategic decision-making, and adaptive learning. With the Custom Model Import feature, you can now seamlessly deploy your customized chess models fine-tuned on specific gameplay styles or historical matches, eliminating the need to manage infrastructure while enabling serverless, on-demand inference. This capability allows you to experiment with fascinating matchups between:

  • Base FMs vs. custom fine-tuned models
  • Custom fine-tuned models trained on distinct grandmaster playing styles

In this post, we demonstrate Embodied AI Chess with Amazon Bedrock, bringing a new dimension to traditional chess through generative AI capabilities. Our setup features a smart chess board that can detect moves in real time, paired with two robotic arms executing those moves. Each arm is controlled by different FMs—base or custom. This physical implementation allows you to observe and experiment with how different generative AI models approach complex gaming strategies in real-world chess matches.

Solution overview

The chess demo uses a broad spectrum of AWS services to create an interactive and engaging gaming experience. The following architecture diagram illustrates the service integration and data flow in the demo.

Connected Edge Intelligence Chess with Amazon Bedrock - Architecture

On the frontend, AWS Amplify hosts a responsive React TypeScript application while providing secure user authentication through Amazon Cognito using the Amplify SDK. This authentication layer connects users to backend services through GraphQL APIs, managed by AWS AppSync, allowing for real-time data synchronization and game state management.

The application’s core backend functionality is handled by a combination of Unit and Pipeline Resolvers. Whereas Unit Resolvers manage lightweight operations such as game state management, creation, and deletion, the critical move-making processes are orchestrated through Pipeline Resolvers. These resolvers queue moves for processing by AWS Step Functions, providing reliable and scalable game flow management.

For generative AI-powered gameplay, Amazon Bedrock integration enables access to both FMs and custom fine-tuned models. The FMs fine-tuned using Amazon SageMaker are then imported into Amazon Bedrock through the Custom Model Import feature, making them available alongside FMs for on-demand access during gameplay. More details on fine-tuning and importing a fine-tuned FM into Amazon Bedrock can be found in the blog post Import a question answering fine-tuned model into Amazon Bedrock as a custom model.

The execution of chess moves on the board is coordinated by a custom component called Chess Game Manager, running on AWS IoT Greengrass. This component bridges the gap between the cloud infrastructure and the physical hardware.

When processing a move, the Step Functions workflow publishes a move request to an AWS IoT Core topic and pauses, awaiting confirmation. The Chess Game Manager component consumes the message and implements a three-phase validation system to make sure moves are executed accurately. First, it validates the intended move with the smart chessboard, which can detect piece positions. Second, it sends requests to the two robotic arms to physically move the chess pieces. Finally, it confirms with the smart chessboard that the pieces are in their correct positions after the move. This third-phase validation by the smart chessboard embodies the “trust but verify” principle of Embodied AI, where the physical state of something may differ from what a dashboard shows. After the move has been confirmed, the component publishes a response message back to AWS IoT Core on a separate topic, which signals the Step Functions workflow to continue.

The demo offers a few gameplay options. Players can choose from the following list of opponents:

  • Generative AI models available on Amazon Bedrock
  • Custom fine-tuned models deployed to Amazon Bedrock
  • Chess engines
  • Human opponents
  • Random moves

An infrastructure as code (IaC) approach was taken when constructing this project. You will use the AWS Cloud Development Kit (AWS CDK) when building the components for deployment into any AWS account. After you download the code base, you can deploy the project following the instructions outlined in the GitHub repo.

Prerequisites

This post assumes you have the following:

Chess with fine-tuned models

Traditional approaches to chess AI have focused on handcrafted rules and search algorithms. These methods, though effective, often struggle to capture the nuanced decision-making and long-term strategic thinking characteristic of human grandmasters. More recently, reinforcement learning (RL) has shown promise in mastering chess by allowing AI agents to learn through self-play and trial and error. RL models can discover strategies and evaluate board positions, but they often require extensive computational resources and training time—typically several weeks to months of continuous learning to reach grandmaster-level play.

Fine-tuning generative AI FMs offers a compelling alternative by learning the underlying patterns and principles of chess in just a few days using standard GPU instances, making it a more resource-efficient approach for developing specialized chess AI. The fine-tuning process significantly reduces the time and computational resources needed because the model already understands basic patterns and structures, allowing it to focus on learning chess-specific strategies and tactics.

Prepare the dataset

This section dives into the process of preparing a high-quality dataset for fine-tuning a chess-playing model, focusing on extracting valuable insights from games played by grandmasters and world championship games.

At the heart of our dataset lies the Portable Game Notation (PGN), a standard chess format that records every aspect of a chess game. PGN includes Forsyth–Edwards Notation (FEN), which captures the exact position of pieces on the board at any given moment. Together, these formats store both the moves played and important game details like player names and dates, giving our model comprehensive data to learn from.

Dataset preparation consists of the following key steps:

  • Data acquisition – We begin by downloading a collection of games in PGN format from publicly available PGN files on the PGN mentor program website. We used the games played by Magnus Carlsen, a renowned chess grandmaster. You can download a similar dataset using the following commands:
# Download games zip file to the target directory - You may choose a different set of games – replace filename with the name of the file you want to download
curl -o /data/filename.zip https://www.pgnmentor.com/players/filename.zip

# Unzip the file in the target directory 
unzip filename.zip
  • Filtering for success – To train a model focused on winning strategies, we filter the games to include only games where the player emerged victorious. This allows the model to learn from successful games.
  • PGN to FEN conversion – Each move in a PGN file represents a transition in the chessboard state. To capture these states effectively, we convert PGN notation to FEN format. This conversion process involves iterating through the moves in the PGN, updating the board state accordingly, and generating the corresponding FEN for each move.

The following is a sample game in a PGN file:

[Event “Titled Tue DDth MMM Late”]
[Site “chess.com INT”]
[Date “YYYY.MM.DD”]
[Round “10”]
[White “Player 1 last name,Player 1 first name”]
[Black “Player 2 last name, Player 2 first name “]
[Result “0-1”]
[WhiteElo “2xxx”]
[BlackElo “2xxx”]
[ECO “A00”]

1.e4 c5 2.d4 cxd4 3.c3 Nc6 4.cxd4 d5 5.exd5 Qxd5 6.Nf3 e5 7.Nc3 Bb4 8.Bd2 Bxc3 9.Bxc3 e4 10.Nd2 Nf6 11.Bc4 Qg5 12.Qb3 O-O 13.O-O-O Bg4 14.h4 Bxd1 15.Rxd1 Qf5 16.g4 Nxg4 17.Rg1 Nxf2 18.d5 Ne5 19.Rg5 Qd7 20.Bxe5 f5 21.d6+  1-0

The following are sample JSON records with FEN, capturing next move and next color to move. We followed two approaches for the JSON record creation. For models that have good understanding of FEN format, we used a more concise record:

{
    "move": "d4",
    "fen": "rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2",
    "nxt_color": "WHITE"
}

For models with limited understanding of FEN format, we used a more detailed record:

{
    "move": "d4",
    "fen": "rnbqkbnr/pp1ppppp/8/2p5/4P3/8/PPPP1PPP/RNBQKBNR w KQkq - 0 2",
    "nxt_color": "WHITE",
    "move_history": "e4, c5"
}

The records include the following parameters:

  • move – A valid next move for the given FEN state.
  • fen – The current board position in FEN.
  • nxt_color – Which color has the next turn to move.
  • move_history – The history of game moves performed until the current board state.

For each game in the PGN file, multiple records similar to the preceding examples are created to capture the FEN, next move, and next move color.

  • Move validation – We validate the legality of each move captured in the records in the preceding format. This step maintains data integrity and prevents the model from learning incorrect or impossible chess moves.
  • Dataset splitting – We split the processed dataset into two parts: a training set and an evaluation set. The training set is used to train the model, and the evaluation set is used to assess the model’s performance on unseen data. This splitting helps us understand how well the model generalizes to new chess positions.

By following these steps, we create a comprehensive and refined dataset that enables our chess AI to learn from successful games, understand legal moves, and grasp the nuances of strategic chess play. This approach to data preparation creates the foundation for fine-tuning a model that can play chess at a high level.
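To make the PGN-to-FEN conversion and record creation steps concrete, the following is a minimal sketch using the open source python-chess library. The input file name and output path are placeholders, and it produces only the concise record format shown earlier (filtering for wins and adding move_history are omitted).

import json
import chess.pgn

records = []
with open("carlsen_wins.pgn") as pgn_file:  # placeholder: a file of filtered winning games
    while (game := chess.pgn.read_game(pgn_file)) is not None:
        board = game.board()
        for move in game.mainline_moves():
            # Capture the position *before* the move, the move in SAN, and the side to move
            records.append({
                "move": board.san(move),
                "fen": board.fen(),
                "nxt_color": "WHITE" if board.turn == chess.WHITE else "BLACK",
            })
            board.push(move)

with open("training_records.jsonl", "w") as out:
    for rec in records:
        out.write(json.dumps(rec) + "\n")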

Fine-tune a model

With our refined dataset prepared from successful games and legal moves, we now proceed to fine-tune a model using Amazon SageMaker JumpStart. The fine-tuning process requires clear instructions through a structured prompt template. Here again, based on the FM, we followed two approaches.

For fine-tuning an FM that understands FEN format, we used a more concise prompt template:

template = {
    "prompt": (
        "<s>[INST] You are a chess engine. Given a chess position in FEN notation and the color to move, provide the next best valid move in SAN (Standard Algebraic Notation) format to progress towards winning the game of chess. Your response must be a single move wrapped in <move></move> tags.\n\n"
        "Chess Position (FEN): {fen}\n"
        "Color to Move: {nxt_color} [/INST]"
    ),
    "completion": " <move>{move}</move> </s>"
}

Alternatively, for models with limited FEN knowledge, we provide a prompt template similar to the following:

template = {
    "prompt": (
        "<s>[INST]\nYou are a chess engine that provides the next best valid move in SAN format based on:\n- FEN position where:\n  Black pieces: p=pawn, r=rook, n=knight, b=bishop, q=queen, k=king (lowercase)\n  White pieces: P=pawn, R=rook, N=knight, B=bishop, Q=queen, K=king (uppercase)\n  Numbers 1-8 indicate consecutive empty squares\n- Color to move\n- Move history\n\nAnalyze these inputs to recommend a legal move that progresses toward winning. Respond with a single move in <move></move> tags.\n\n"
        "Chess Position (FEN): {fen}\n"
        "Color to Move: {nxt_color}\n"
        "Move History: {move_history}\n"
    ),
    "completion": " <move>{move}</move> </s>"
}

Training and evaluation datasets, along with the template.json file created using one of the preceding templates, are then uploaded to an Amazon Simple Storage Service (Amazon S3) bucket so they are ready for the fine-tuning job that will be submitted using SageMaker JumpStart.
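A minimal sketch of that upload using the SageMaker Python SDK follows. The bucket prefix and local file names are assumptions for illustration, and template refers to the dictionary defined in one of the preceding templates.

import json
import sagemaker
from sagemaker.s3 import S3Uploader

sess = sagemaker.Session()
bucket = sess.default_bucket()
data_prefix = f"s3://{bucket}/chess-finetune/train"  # assumed S3 prefix

# Persist the prompt template alongside the training data
with open("template.json", "w") as f:
    json.dump(template, f)

for local_file in ["train.jsonl", "validation.jsonl", "template.json"]:  # assumed file names
    S3Uploader.upload(local_file, data_prefix)

train_test_data_location = data_prefix  # used by the estimator in the next step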

Now that the dataset is prepared and our model is selected, we submit a SageMaker training job with the following code:

from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id=model_id,
    model_version=model_version,
    environment={"accept_eula": "true"},  
    disable_output_compression=True,
    instance_type="ml.g5.24xlarge"
)
# By default, instruction tuning is set to false. 
estimator.set_hyperparameters(instruction_tuned=True, epoch="3", max_input_length="1024")
estimator.fit({"training": train_test_data_location})

Let’s break down the preceding code, and look at some important sections:

  • estimator – This is the SageMaker estimator object used to accept all training parameters while launching and orchestrating the training job.
  • model_id – This is the SageMaker JumpStart model ID for the LLM that you need to fine-tune.
  • accept_eula – This EULA varies from provider to provider and must be accepted when deploying or fine-tuning models from SageMaker JumpStart.
  • instance_type – This is the compute instance the fine-tuning job will take place on. In this case, it’s a g5.24xlarge. This specific instance contains 4 NVIDIA A10G GPUs with 96 GiB of GPU memory. When deciding on an instance type, select the one that best balances your computational needs with your budget to maximize value.
  • fit – The .fit method is the actual line of code that launches the SageMaker training job. All of the algorithm metrics and instance usage metrics can be viewed in Amazon CloudWatch logs, which are directly integrated with SageMaker.

When the SageMaker training job is complete, the model artifacts will be stored in an S3 bucket specified either by the user or the system default.

The notebook we use for fine-tuning one of the models can be accessed in the following GitHub repo.

Challenges and best practices for fine-tuning

In this section, we discuss common challenges and best practices for fine-tuning.

Automated optimizations with SageMaker JumpStart

Fine-tuning an LLM for chess move prediction using SageMaker presents unique opportunities and challenges. We used SageMaker JumpStart to do the fine-tuning because it provides automated optimizations for different model sizes when fine-tuning for chess applications. SageMaker JumpStart automatically applies appropriate quantization techniques and resource allocations based on model size. For example:

  • 3B–7B models – Enables FSDP with full precision training
  • 13B models – Configures FSDP with optional 8-bit quantization
  • 70B models – Automatically implements 8-bit quantization and disables FSDP for stability

This means if you create a SageMaker JumpStart Estimator without explicitly specifying the int8_quantization parameter, it will automatically use these default values based on the model size you’re working with. This design choice is made because larger models (like 70B) require significant computational resources, so quantization is enabled by default to reduce the memory footprint during training.
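If you want to override these defaults rather than rely on the automatic behavior, you can set the hyperparameter explicitly before calling fit. The following is a sketch only; the int8_quantization hyperparameter name is taken from the discussion above, and the accepted string values are assumptions that depend on the specific JumpStart model.

# Hypothetical explicit override of the automatic default discussed above
estimator.set_hyperparameters(
    instruction_tuned=True,
    int8_quantization="False",   # assumed value: force higher-precision training for a smaller model
    epoch="3",
    max_input_length="1024",
)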

Data preparation and format

Dataset identification and preparation can be a challenge. We used readily available PGN datasets from world championships and grandmaster matches to streamline the data preparation process for chess LLM fine-tuning, significantly reducing the complexity of dataset curation.

Choosing the right chess format that produces optimal results with an LLM is critical for successful results post-fine-tuning. We discovered that Standard Algebraic Notation (SAN) significantly outperforms Universal Chess Interface (UCI) format in terms of training convergence and model performance.

Prompt consistency

Using consistent prompt templates during fine-tuning helps the model learn the expected input-output patterns more effectively, and Amazon Bedrock Prompt Management provides robust tools to create and manage these templates systematically. We recommend using the prompt template suggestions provided by the model providers for improved performance.

Model size and resource allocation

Successful LLM training requires a good balance of cost management through multiple approaches, with instance selection being a primary aspect. You can start with the following recommended instance and work your way up, depending on the quality and time available for training.

Model Size | Memory Requirements | Recommended Instance and Quantization
3B–7B      | 24 GB               | Fits on g5.2xlarge with QLoRA 4-bit quantization
8B–13B     | 48 GB               | Requires g5.4xlarge with efficient memory management
70B        | 400 GB              | Needs g5.48xlarge or p4d.24xlarge with multi-GPU setup

Import the fine-tuned model into Amazon Bedrock

After the model is fine-tuned and the model artifacts are in the designated S3 bucket, it’s time to import it to Amazon Bedrock using Custom Model Import.

The following section outlines two ways to import the model: using the SDK or the Amazon Bedrock console.

The following is a code snippet showing how the model can be imported using the SDK:

create_model_import_job_resp = br_client.create_model_import_job(
        jobName=rivchess_imp_jb_nm,
        importedModelName=rivchess_model_nm,
        roleArn=role_arn,
        modelDataSource=rivchess_model_src)

In the code snippet, a create model import job is submitted to import the fine-tuned model into Amazon Bedrock. The parameters in the job are as follows:

  • jobName – The name of the import job, so it can be identified using the SDK or the Amazon Bedrock console
  • importedModelName – The name of the imported model, which is used to invoke inference with the SDK and to identify the model on the Amazon Bedrock console
  • roleArn – The role with the correct permissions to import a model into Amazon Bedrock
  • modelDataSource – The S3 location where the model artifacts were stored when the training job completed
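After submitting the job, you can poll its status with the same Amazon Bedrock client. The following is a minimal sketch that reuses br_client and rivchess_imp_jb_nm from the preceding snippet; the terminal status values shown are assumptions.

import time

# Poll the import job until it reaches a terminal state
while True:
    job = br_client.get_model_import_job(jobIdentifier=rivchess_imp_jb_nm)
    status = job["status"]
    print(f"Import job status: {status}")
    if status in ("Completed", "Failed"):  # assumed terminal status values
        break
    time.sleep(60)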

To use the Amazon Bedrock console, complete the following steps:

  1. On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Imported models.

  2. Choose Import model.

  3. Provide the following information:
    1. For Model name, enter a name for your model.
    2. For Import job name, enter a name for your import job.
    3. For Model import settings, select Amazon S3 bucket and enter your bucket location.
    4. Create an IAM role or use an existing one.
  4. Choose Import.

After the job is submitted, it will appear in the job queue on the Imported models page.

When the model import job is complete, the model may now be called for inference using the Amazon Bedrock console or SDK.

Test the fine-tuned model to play chess

To test the fine-tuned model that is imported into Amazon Bedrock, we use the AWS SDK for Python (Boto3) library to invoke the imported model. We simulated the fine-tuned model against the Stockfish library for a game of up to 50 moves or when the game is won either by the fine-tuned model or by Stockfish.

The Stockfish Python library requires the appropriate version of the Stockfish executable to be downloaded from the Stockfish website, and we use the chess Python library to track and visualize the status of the board. Configuring Stockfish with an Elo rating essentially simulates a chess player of a particular strength; an Elo rating represents a player’s strength as a numerical value.

The Stockfish engine and the chess Python library are licensed under GPL-3.0, and any usage, modification, or distribution of these libraries must comply with the GPL-3.0 license terms. Review the license agreements before using them.

The first step is to install the chess and Stockfish libraries:

!pip install chess stockfish --upgrade --quiet

We then initialize the Stockfish library. The path to the command line executable needs to be provided:

stockfish = Stockfish(path='/home/sagemaker-user/riv2024-chess/stockfish/stockfish-ubuntu-x86-64-sse41-popcnt')
stockfish.update_engine_parameters({"Hash": 2048, "UCI_Chess960": "true"})
stockfish.set_elo_rating(1350)
fen_state = stockfish.get_fen_position()

We set the Elo rating using the Stockfish API method set_elo_rating. Additional configuration can be provided by following the Stockfish Python library documentation.

We initialize the chess Python library in a similar way and synchronize it with the Stockfish engine. Further configuration can be provided to the chess library by following the chess Python library documentation.

board = chess.Board()
board.reset_board()
board.chess960 = True
stockfish.set_fen_position(board.fen())

Upon initialization, we pit the fine-tuned model imported into Amazon Bedrock against the Stockfish library. In the following code, the first move is performed by Stockfish. Then the fine-tuned model is invoked using the Amazon Bedrock invoke_model API, wrapped in a helper function (a sketch of which follows), by providing the current FEN position of the chess board. We continue playing each side until one side wins or a total of 50 moves are played. We check whether each move proposed by the fine-tuned model is legal, and we re-invoke the fine-tuned model up to five times if it proposes an illegal move.
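The game loop below relies on a helper, get_llm_next_move, that wraps the Amazon Bedrock invoke_model API. The following is a minimal sketch of what such a helper might look like; the imported model ARN is a placeholder, the request body fields and response parsing are assumptions that depend on the model you imported, and the prompt mirrors the concise fine-tuning template.

import json
import re
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")
IMPORTED_MODEL_ARN = "arn:aws:bedrock:us-east-1:111122223333:imported-model/EXAMPLE"  # placeholder

def get_llm_next_move(fen, next_turn, illegal_move, move_history=None):
    """Ask the imported model for the next move in SAN; return None on failure."""
    # move_history is accepted only for signature compatibility with the calls below
    note = f"\nNote: {illegal_move} was illegal here; choose a different legal move." if illegal_move else ""
    prompt = (
        "<s>[INST] You are a chess engine. Given a chess position in FEN notation and the color to move, "
        "provide the next best valid move in SAN (Standard Algebraic Notation) format to progress towards "
        "winning the game of chess. Your response must be a single move wrapped in <move></move> tags.\n\n"
        f"Chess Position (FEN): {fen}\n"
        f"Color to Move: {next_turn}{note} [/INST]"
    )
    response = bedrock_runtime.invoke_model(
        modelId=IMPORTED_MODEL_ARN,
        body=json.dumps({"prompt": prompt, "max_tokens": 16, "temperature": 0.1}),  # body fields are assumptions
    )
    output = json.loads(response["body"].read())
    # Search the serialized response for the move tags, because the exact field names vary by model
    match = re.search(r"<move>(.*?)</move>", json.dumps(output))
    return match.group(1).strip() if match else None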

# Assumed initializations for the variables used below; s is the separator used when printing the move list
s = ", "
move_count = 0
move_list = []

while True:

    sfish_move = stockfish.get_best_move()
    try:
        move_color = 'WHITE' if board.turn else 'BLACK'
        uci_move = board.push_san(sfish_move).uci()
        stockfish.set_fen_position(board.fen())
        move_count += 1
        move_list.append(f"{sfish_move}")
        print(f'SF Move  - {sfish_move} | {move_color} | Is Move Legal: {stockfish.is_fen_valid(board.fen())} | FEN: {board.fen()} | Move Count: {move_count}')
    except (chess.InvalidMoveError, chess.IllegalMoveError) as e:
        print(f"Stockfish Error for {move_color}: {e}")
        print(f"### Move Count: {move_count} ###")
        print(f'Moves list - {s.join(move_list)}')
        break

    if board.is_checkmate():
        print("Stockfish won!")
        print(f"### Move Count: {move_count} ###")
        print(f'Moves list - {s.join(move_list)}')
        break

    if board.is_stalemate():
        print("Draw!")
        print(f"### Move Count: {move_count} ###")
        print(f'Moves list - {s.join(move_list)}')
        break

    next_turn = 'WHITE' if board.turn else 'BLACK'
    llm_next_move = get_llm_next_move(board.fen(), next_turn, None)
    if llm_next_move is None:
        print("Failed to get a move from LLM. Ending the game.")
        break

    ill_mov_cnt = 0
    while True:
        try:
            is_llm_move_legal = True
            prev_fen = board.fen()
            uci_move = board.push_san(llm_next_move).uci()
            is_llm_move_legal = stockfish.is_fen_valid(board.fen())
            if is_llm_move_legal:
                print(f'LLM Move - {llm_next_move} | {next_turn} | Is Move Legal: {stockfish.is_fen_valid(board.fen())} | FEN: {board.fen()} | Move Count: {move_count}')
                stockfish.set_fen_position(board.fen())
                move_count += 1
                move_list.append(f"{llm_next_move}")
                break
            else:
                board.pop()
                print('Popping board and retrying LLM Next Move!!!')
                llm_next_move = get_llm_next_move(board.fen(), next_turn, llm_next_move, s.join(move_list))
        except (chess.AmbiguousMoveError, chess.IllegalMoveError, chess.InvalidMoveError) as e:
            print(f"LLM Error #{ill_mov_cnt}: {llm_next_move} for {next_turn} is illegal move!!! for {prev_fen}  | FEN: {board.fen()}")
            if ill_mov_cnt == 5:
                print(f"{ill_mov_cnt} illegal moves so far, exiting....")
                break
            ill_mov_cnt += 1
            llm_next_move = get_llm_next_move(board.fen(), next_turn, llm_next_move)

        if board.is_checkmate():
            print("LLM won!")
            print(f"### Move Count: {move_count} ###")
            print(f'Moves list - {s.join(move_list)}')
            break

        if board.is_stalemate():
            print("Draw!")
            print(f"### Move Count: {move_count} ###")
            print(f'Moves list - {s.join(move_list)}')
            break
    if move_count == 50:
        print("Played 50 moves hence quitting!!!!")
        break
board

We observe and measure the effectiveness of the model by counting the number of legal moves it’s able to propose.

The notebook we use for testing the fine-tuned model can be accessed from the following GitHub repo.

Deploy the project

You can initiate the deployment of the project using instructions outlined in the GitHub repo, starting with the following command:

pnpm cdk deploy

This initiates the deployment of an AWS CloudFormation stack. After the stack is successfully deployed to your AWS account, you can begin setting up user access. Navigate to the newly created Amazon Cognito user pool, where you can create your own user account for logging in to the application. After creating your account, you can add yourself to the admin group to gain administrative privileges within the application.

After you complete the user setup, navigate to Amplify, where your chess application should now be visible. You’ll find a published URL for your hosted demo—simply choose this link to access the application. Use the login credentials you created in the Amazon Cognito user pool to access and explore the application.

After you’re logged in with admin privileges, you’ll be automatically directed to the /admin page. You can perform the following actions on this page:

  • Create a session (game instance) by selecting from various gameplay options.
  • Start the game from the admin panel.
  • Choose the session to load the necessary cookie data.
  • Navigate to the participants screen to view and test the game. The interface is intuitive, but following these steps in order will provide proper game setup and functionality.

Set up the AWS IoT Core resources

Configuring the solution for IoT gameplay follows a similar process to the previous section—you’ll still need to deploy the UI stack. However, this deployment includes an additional IoT flag that signals the stack to deploy the AWS IoT rules in charge of handling game requests and responses. The specific deployment steps are outlined in this section.

Follow the steps from before, but add the following flag when deploying:

pnpm cdk deploy -c iotDevice=true

This will deploy the solution, adding a critical step to the Step Functions workflow, which publishes a move request message to the topic of an AWS IoT rule and then waits for a response.

Users will need to configure an IoT edge device to consume game requests from this topic. This involves setting up a device capable of publishing and subscribing to topics using the MQTT protocol, processing move requests, and sending success messages back to the topic of the AWS IoT rule that is waiting for responses, which then feeds back into the Step Functions workflow. Although the configuration is flexible and can be customized to your needs, we recommend using AWS IoT Greengrass on your edge device. AWS IoT Greengrass is an open source edge runtime and cloud service for building, deploying, and managing device software. This enables secure topic communication between your IoT devices and the AWS Cloud, allowing you to perform edge verifications such as controlling the robotic arms and synchronizing with the physical board before publishing either a success or failure message back to the cloud.
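As an illustration, the following is a minimal sketch of an edge client that consumes move requests and publishes responses over MQTT, using the AWS IoT Device SDK v2 for Python. The endpoint, credential file paths, and topic names are assumptions and must match your AWS IoT rule configuration.

import json
from awscrt import mqtt
from awsiot import mqtt_connection_builder

# Assumed values for illustration
ENDPOINT = "your-iot-endpoint-ats.iot.us-east-1.amazonaws.com"
REQUEST_TOPIC = "chess/move/request"
RESPONSE_TOPIC = "chess/move/response"

connection = mqtt_connection_builder.mtls_from_path(
    endpoint=ENDPOINT,
    cert_filepath="device.pem.crt",
    pri_key_filepath="private.pem.key",
    ca_filepath="AmazonRootCA1.pem",
    client_id="chess-game-manager",
)
connection.connect().result()

def on_move_request(topic, payload, **kwargs):
    request = json.loads(payload)
    # Here you would validate the move with the smart board and drive the robotic arms,
    # then report success or failure back to the cloud.
    response = {"requestId": request.get("requestId"), "status": "SUCCESS"}
    connection.publish(topic=RESPONSE_TOPIC, payload=json.dumps(response), qos=mqtt.QoS.AT_LEAST_ONCE)

subscribe_future, _ = connection.subscribe(
    topic=REQUEST_TOPIC, qos=mqtt.QoS.AT_LEAST_ONCE, callback=on_move_request
)
subscribe_future.result()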

Set up a Greengrass core device and client devices

To set up an AWS IoT Greengrass V2 core device, you can deploy the Chess Game Manager component to it by following the instructions in the GitHub repo for the Greengrass component. The component contains a recipe, where you’ll need to define the configuration required for your IoT devices. The default configuration contains a list of topics used to process game requests and responses, perform board validations and notifications of new moves, and coordinate move requests and responses from the robotic arms. You also need to update the names of the client devices that will connect to the component; these client devices must be registered as AWS IoT things in AWS IoT Core.

Users will also need a client application that controls the robotic arms and a client application that fetches information from the smart chess board. Both client applications need to connect and communicate with the Greengrass core device running the Chess Game Manager component. In our demo, we tested with two separate robotic arm client applications: for the first, we used a pair of CR10A arms from Dobot Robotics and communicated with them using the TCP-IP-CR-Python-V4 SDK; for the second, we used a pair of RO1 arms from Standard Bots with the Standard Bots API. For the smart chess board client application, we used a DGT Smart Board, which comes with a USB cable that allows us to fetch piece move updates over serial communication.

Preventing illegal moves

When using FMs in Amazon Bedrock to generate the next move, the system employs a retry mechanism that makes three distinct attempts with the generative AI model, each providing more context than the last:

  • First attempt – The model is prompted to predict the next best move based on the current board state.
  • Second attempt – If the first move was illegal, the model is informed of its failure and prompted to try again, including the context of why the previous attempt failed.
  • Third attempt – If still unsuccessful, the model is provided with information on previous illegal moves, with an explanation of past failures. However, this attempt includes a list of all legal moves available. The model is then prompted to select from this list the next logical move.

If all three generative AI attempts fail, the system automatically falls back to a chess engine for a guaranteed valid move.
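A condensed sketch of this escalating retry logic is shown below. It assumes a hypothetical get_model_move helper that prompts the foundation model, and it uses the python-chess and Stockfish libraries introduced earlier for legality checks and the engine fallback.

import chess

def next_move_with_fallback(board: chess.Board, stockfish) -> str:
    legal_sans = [board.san(m) for m in board.legal_moves]
    context = ""
    for attempt in range(3):
        # On the final attempt, give the model the full list of legal moves to choose from
        if attempt == 2:
            context += f" Choose one of these legal moves: {', '.join(legal_sans)}."
        # get_model_move is a hypothetical helper that prompts the FM with the board
        # state plus any accumulated failure context
        candidate = get_model_move(board.fen(), context)
        if candidate in legal_sans:
            return candidate
        context += f" The move {candidate} was illegal for FEN {board.fen()}."
    # All generative AI attempts failed: fall back to the chess engine for a guaranteed valid move
    stockfish.set_fen_position(board.fen())
    return board.san(board.parse_uci(stockfish.get_best_move()))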

For the custom fine-tuned models imported into Amazon Bedrock, the system employs a retry mechanism that makes five distinct attempts with the model. If all five attempts fail, the system automatically falls back to a chess engine for a guaranteed move.

During chess evaluation tests, models that underwent fine-tuning with over 100,000 training records demonstrated notable effectiveness. These enhanced models prevailed in 80% of their matches against base versions, and the remaining 20% ended in draws.

Clean up

To clean up and remove all deployed resources, run the following command from the AWS CLI:

pnpm cdk destroy

To clean up the imported models in Amazon Bedrock, use the following code:

aws bedrock delete-imported-model \
   --model-identifier <your-model-name> \
   --region <your aws region>

You can also delete the imported models by going to the Amazon Bedrock console and selecting the imported model on the Imported models page.

To clean up the imported models in the S3 bucket, use the following commands after replacing the values corresponding to your environment:

# Delete a single model file

aws s3 rm s3://bucket-name/path/to/model/file

# Delete multiple model files in a directory

aws s3 rm s3://bucket-name/models/ --recursive

# Delete specific model files using include/exclude patterns

aws s3 rm s3://bucket-name/ --recursive --exclude "*" --include "model*.tar.gz"

These commands use the following parameters:

  • --recursive – Required when deleting multiple files or directories
  • --dryrun – Optionally tests a deletion command without actually removing files

Conclusion

This post demonstrated how you can fine-tune FMs to create Embodied AI Chess, showcasing the seamless integration of cloud services, IoT capabilities, and physical robotics. With the comprehensive suite of AWS services, including Amazon Bedrock Custom Model Import, Amazon S3, AWS Amplify, AWS AppSync, AWS Step Functions, AWS IoT Core, and AWS IoT Greengrass, developers can create immersive chess experiences that bridge the digital and physical realms.

Give this solution a try and let us know your feedback in the comments.

References

More information is available at the following resources:


About the Authors

Channa Samynathan is a Senior Worldwide Specialist Solutions Architect for AWS Edge AI & Connected Products, bringing over 28 years of diverse technology industry experience. Having worked in over 26 countries, his extensive career spans design engineering, system testing, operations, business consulting, and product management across multinational telecommunication firms. At AWS, Channa uses his global expertise to design IoT applications from edge to cloud, educate customers on the value proposition of AWS, and contribute to customer-facing publications.

Dwaragha Sivalingam is a Senior Solutions Architect specializing in generative AI at AWS, serving as a trusted advisor to customers on cloud transformation and AI strategy. With seven AWS certifications including ML Specialty, he has helped customers in many industries, including insurance, telecom, utilities, engineering, construction, and real estate. A machine learning enthusiast, he balances his professional life with family time, enjoying road trips, movies, and drone photography.

Daniel Sánchez is a senior generative AI strategist based in Mexico City with over 10 years of experience in cloud computing, specializing in machine learning and data analytics. He has worked with various developer groups across Latin America and is passionate about helping companies accelerate their businesses using the power of data.

Jay Pillai is a Principal Solutions Architect at AWS. In this role, he functions as the Lead Architect, helping partners ideate, build, and launch Partner Solutions. As an Information Technology Leader, Jay specializes in artificial intelligence, generative AI, data integration, business intelligence, and user interface domains. He holds 23 years of extensive experience working with several clients across supply chain, legal technologies, real estate, financial services, insurance, payments, and market research business domains.

Mohammad Tahsin is an AI/ML Specialist Solutions Architect at Amazon Web Services. He lives for staying up to date with the latest technologies in AI/ML and helping guide customers to deploy bespoke solutions on AWS. Outside of work, he loves all things gaming, digital art, and cooking.

Nicolai van der Smagt is a Senior Solutions Architect at AWS. Since joining in 2017, he’s worked with startups and global customers to build innovative solutions using AI on AWS. With a strong focus on real-world impact, he helps customers bring generative AI projects from concept to implementation. Outside of work, Nicolai enjoys boating, running, and exploring hiking trails with his family.

Patrick O’Connor is a WorldWide Prototyping Engineer at AWS, where he assists customers in solving complex business challenges by developing end-to-end prototypes in the cloud. He is a creative problem-solver, adept at adapting to a wide range of technologies, including IoT, serverless tech, HPC, distributed systems, AI/ML, and generative AI.

Paul Vincent is a Principal Prototyping Architect on the AWS Prototyping and Cloud Engineering (PACE) team. He works with AWS customers to bring their innovative ideas to life. Outside of work, he loves playing drums and piano, talking with others through Ham radio, all things home automation, and movie nights with the family.

Rupinder Grewal is a Senior AI/ML Specialist Solutions Architect with AWS. He currently focuses on serving of models and MLOps on Amazon SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work, he enjoys playing tennis and biking on mountain trails.

Sam Castro is a Sr. Prototyping Architect on the AWS Prototyping and Cloud Engineering (PACE) team. With a strong background in software delivery, IoT, serverless technologies, and generative AI, he helps AWS customers solve complex challenges and explore innovative solutions. Sam focuses on demystifying technology and demonstrating the art of the possible. In his spare time, he enjoys mountain biking, playing soccer, and spending time with friends and family.

Tamil Jayakumar is a Specialist Solutions Architect & Prototyping Engineer with AWS specializing in IoT, robotics, and generative AI. He has over 14 years of proven experience in software development, creating minimum viable products (MVPs) and end-to-end prototypes. He is a hands-on technologist, passionate about solving technology challenges using innovative solutions both on software and hardware, aligning business needs to IT capabilities.

Read More

Efficiently train models with large sequence lengths using Amazon SageMaker model parallel

Efficiently train models with large sequence lengths using Amazon SageMaker model parallel

Large language models (LLMs) have witnessed an unprecedented surge in popularity, with customers increasingly using publicly available models such as Llama, Stable Diffusion, and Mistral. Across diverse industries—including healthcare, finance, and marketing—organizations are now engaged in pre-training and fine-tuning these increasingly larger LLMs, which often boast billions of parameters and larger input sequence lengths. Although these advancements offer remarkable capabilities, they also present significant challenges. Longer sequence lengths and the sheer number of trainable parameters demand innovative approaches to model development and deployment. To maximize performance and optimize training, organizations frequently need to employ advanced distributed training strategies.

In this post, we demonstrate how the Amazon SageMaker model parallel library (SMP) addresses this need through support for new features such as 8-bit floating point (FP8) mixed-precision training for accelerated training performance and context parallelism for processing large input sequence lengths, expanding the list of its existing features.

We guide you through a step-by-step implementation, demonstrating how to accelerate workloads with FP8 and work with longer sequence lengths using context parallelism, with minimal code changes to your existing training workflow.

The implementation of these new SMP features promises several advantages for customers working with LLMs. First, it can lead to lower costs to convergence, allowing for more efficient use of resources during the training process. This results in reduced time to market, allowing organizations to deploy their optimized models more quickly and gain a competitive edge. Second, it enables training with larger dataset records, expanding the scope and complexity of tasks that can be tackled.

The following sections take a deeper look into this.

Business challenge

Businesses today face a significant challenge when training LLMs efficiently and cost-effectively. As models grow larger and more complex, organizations are using fine-tuning and continuous pre-training strategies to train these models with domain-specific data, using larger sequence lengths that can range from 8K to 128K tokens. These longer sequence lengths allow models to better understand long-range dependencies in text, generate more globally coherent outputs, and handle tasks requiring analysis of lengthy documents.

Although there exist various strategies, such as Fully Sharded Data Parallelism (FSDP), tensor parallelism (TP), and pipeline parallelism, to effectively train models with billions of parameters, these methods are primarily designed to distribute model parameters, gradients, and optimizer states across GPUs; they don’t focus on input data–related optimizations. Distributing model state this way reduces memory pressure and enables efficient training of large models, but none of these techniques effectively address partitioning along the sequence dimension. As a result, training with longer sequence lengths can still lead to out-of-memory (OOM) errors, despite using FSDP.

Consequently, working with larger sequence lengths can cause memory pressure, and it often requires innovative approaches such as FP8 and context parallelism.

How do SMP context parallelism and FP8 help accelerate model training?

SMP addresses the challenges of memory pressure by providing an implementation of context parallelism, which is a parallelization technique that partitions on the dimension of sequence length. Furthermore, it can work together with other parallelism techniques such as FSDP and TP. SMP also implements FP8 for supported models such as Llama. FP8 is a reduced-precision floating-point format that boosts efficiency by enabling faster matrix multiplications without significant accuracy loss. You can use these techniques together to train complex models that are orders of magnitude faster and rapidly iterate and deploy innovative AI solutions that drive business value.

The following sections dive deep into the implementation details for each of these features in SMP.

Context parallelism

Context parallelism is a model parallelism technique that allows the model to train with long sequences. It’s a parallelization scheme that partitions a model’s activations along the sequence dimension. During training with the SMP context parallel strategy, the inputs are partitioned along the sequence dimension before being fed to the model. With activations being partitioned along the sequence dimension, we need to consider how our model’s computations are affected. For layers that don’t have inter-token dependency during computation, we don’t require special considerations. In a transformer architecture, such layers are the embedding layers and the multilayer perceptron (MLP) layers. The layers that have inter-token dependency are the attention layers. For the attention layers, the query projections (Q) of each token need to interact with the key (K) and value (V) projections of all tokens.

Because we only have a partition of K and V, we require an AllGather operation to collect the keys and values from other ranks. As detailed in the following figure, we consider a context parallel scheme with context parallel degree 2 for a causal language model. Thus, GPU 0 has the first half of the input sequence and GPU 1 has the other half. During the forward pass, the non-attention layers compute their activations as normal. For attention computation, an AllGather operation is performed for K and V across the context parallel ranks belonging to GPU 0 and GPU 1. To conserve memory, the K and V tensors obtained from the AllGather operation are discarded after the attention computation is completed. Consequently, during the backward pass, we require the same AllGather operation for K and V. Additionally, after the attention backward pass, a ReduceScatter operation is performed to scatter the gradients to the corresponding context parallel ranks.

Unlike other model parallel schemes such as tensor parallelism, context parallelism keeps the model parameters intact. Thus, there are no additional communication collectives for parameters required for context parallelism.
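To illustrate the idea, the following single-process sketch shows how a long sequence is split along the sequence dimension into per-rank shards and why attention needs the full keys and values gathered back. It is a conceptual toy (short sequence, no distributed collectives, causal mask omitted), not the SMP implementation.

import torch

cp_degree = 2                               # context parallel degree
batch, seq_len, d_model = 1, 4096, 64       # a short toy sequence for illustration
hidden = torch.randn(batch, seq_len, d_model)

# Each context-parallel rank receives one shard of the sequence
shards = list(torch.chunk(hidden, cp_degree, dim=1))   # two [1, 2048, 64] tensors

# Non-attention layers (embeddings, MLP) operate per token, so each rank can
# process its own shard independently
mlp = torch.nn.Linear(d_model, d_model)
local_out = [mlp(s) for s in shards]

# Attention is different: each rank's queries must attend over the *full* K and V,
# so the K/V shards are gathered (the analogue of the AllGather collective)
q_local = shards[0]                         # queries held by "rank 0"
k_full = torch.cat(shards, dim=1)           # gathered keys
v_full = torch.cat(shards, dim=1)           # gathered values
scores = torch.softmax(q_local @ k_full.transpose(1, 2) / d_model ** 0.5, dim=-1)
attn_out = scores @ v_full                  # [1, 2048, 64] output for rank 0's tokens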

Supported models

SMP supports context parallelism using NVIDIA Transformer Engine, and it seamlessly integrates with other model parallelism techniques such as Fully Sharded Data Parallel (FSDP) and tensor parallelism (TP). SMP v2.6 supports the Llama 3.1 (and prior Llama models) and Mistral model architectures for context parallelism.

Mixed precision training with FP8

As shown in the following figure, FP8 is a datatype supported by NVIDIA’s H100 and H200 GPUs that enables efficient deep learning workloads. The FP8 format occupies only 8 bits of memory, half that of its BF16 or FP16 counterparts, significantly reducing computational costs for operations such as matrix multiplication. The compute throughput for running matrix operations such as multiplications and convolutions is significantly higher on 8-bit float tensors compared to 32-bit float tensors. FP8 precision reduces the data footprint and computational requirements, making it ideal for large-scale models where memory and speed are critical.

Delving deeper into FP8’s architecture, we discover two distinct subtypes: E4M3 and E5M2. The E4M3 configuration, with its 1 sign bit, 4 exponent bits, and 3 mantissa bits, offers superior precision but a limited dynamic range. This makes it ideal for the forward pass in model training. Conversely, E5M2, featuring 1 sign bit, 5 exponent bits, and 2 mantissa bits, boasts a broader dynamic range at the expense of reduced precision. This configuration excels in the backward pass, where precision is less critical, but a wider range proves advantageous.

The transition to mixed precision training with FP16 or BF16 has historically necessitated static or dynamic loss-scaling to address convergence issues that stemmed from reduced precision in gradient flow. This challenge is further amplified in FP8 due to its narrower range. To combat this, the Transformer Engine introduced an innovative solution called DelayedScaling. This technique selects scaling factors based on the maximum observed value for each tensor from previous iterations. Although DelayedScaling maximizes the performance benefits of FP8 computation, it does come with a memory overhead for storing the tensors’ maximum value history. However, despite the additional overhead, the improved throughput observed with 8-bit tensor computations make this approach valuable.
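As an illustration of how DelayedScaling is typically configured, the following sketch uses NVIDIA Transformer Engine directly (outside of SMP) on an FP8-capable GPU such as an H100. The recipe values shown are assumptions for illustration, not SMP defaults.

import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID uses E4M3 for the forward pass and E5M2 for the backward pass,
# matching the subtype discussion above
fp8_recipe = DelayedScaling(
    fp8_format=Format.HYBRID,
    amax_history_len=16,        # how many iterations of max values to keep (assumed value)
    amax_compute_algo="max",
)

layer = te.Linear(4096, 4096, params_dtype=torch.bfloat16).cuda()
inp = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16, requires_grad=True)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)
out.sum().backward()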

Supported models

SMP supports FP8 mixed precision training using NVIDIA Transformer Engine and keeps compatibility with PyTorch MixedPrecision. This means that you can use FP8 training for supported layers and half-precision using PyTorch Automatic Mixed Precision for others. SMP v2.6 supports the following model architectures for FP8 training: Llama 3.1 (and prior Llama models), Mixtral, and Mistral.

More details about FP8 can be found at FP8 Formats For Deep Learning.

Solution overview

We can use SMP with both Amazon SageMaker model training jobs and Amazon SageMaker HyperPod.

For this post, we demonstrate the SMP implementation on SageMaker training jobs.

Launching a machine learning (ML) training cluster with Amazon SageMaker training jobs is a seamless process that begins with a straightforward API call, AWS Command Line Interface (AWS CLI) command, or AWS SDK interaction. After they’re initiated, SageMaker training jobs spin up the cluster, provisioning the specified number and type of compute instances.

In our example, we use a single ml.p5.48xlarge instance, though we’re illustrating the use of four GPUs for demonstration purposes. The training data, securely stored in Amazon Simple Storage Service (Amazon S3), is copied to the cluster. Each record sequence (Seq0) is strategically split into multiple subsequences and assigned to each GPU in our cluster.

Our implementation uses the FP8 capabilities of SMP to execute model training on NVIDIA H100 GPUs and showcases context parallelism capabilities. Because of the flexibility of SageMaker, you can scale your compute resources as needed, accommodating workloads across a range of sizes. SageMaker creates a resilient training cluster, handles orchestration, closely monitors the infrastructure, and recovers from faults, providing a smooth and uninterrupted training experience. Furthermore, the cost-effective design of SageMaker training jobs automatically terminates the cluster upon completion of the training job, with billing calculated down to the second of actual training time used. This combination of power, flexibility, and cost-efficiency makes SageMaker an ideal service for ML practitioners of all levels.

The following diagram shows the solution architecture.

The following walkthrough shows you how you can train a Llama 3.1 8B Instruct model using the PubMed tokenized dataset with a sequence length of approximately 16K tokens. We use SMP context parallelism implementation to enable training for this large sequence length. We compare two approaches: one without context parallelism and another one with it. This comparison highlights the importance of context parallelism when working with LLMs and datasets containing long sequences.

Additionally, we conduct a comparative run on p5.48xlarge instances with context parallelism enabled, both with FP8 enabled and disabled. This demonstration will showcase the incremental throughput benefits we can achieve by enabling FP8-based training alongside context parallelism.

In summary, the implementation follows these four steps:

  1. Set up libraries and process data
  2. Run training without context parallelism
  3. Run training with context parallelism enabled to track memory optimizations
  4. Run training with FP8 enabled to gain further performance

The following flow diagram shows these four steps.

Prerequisites

To perform the solution, you need to have the following prerequisites in place:

  1. Create a Hugging Face User Access Token and get access to the gated repository meta-llama/Llama-3.1-8B on Hugging Face.
  2. Request a service quota increase for 1x ml.p4d.24xlarge and 1x ml.p5.48xlarge on Amazon SageMaker. To do so, on the AWS Service Quotas console, choose AWS services, then Amazon SageMaker, and request one ml.p4d.24xlarge and one ml.p5.48xlarge for training job usage.
  3. Create an AWS Identity and Access Management (IAM) role with managed policies AmazonSageMakerFullAccess, AmazonEC2FullAccess to give required access to SageMaker to run the examples.

This walkthrough is for demonstration purposes only. You should adjust this to your specific security requirements for production. Adhere to the principle of least privilege while defining IAM policies in production.

  4. Create an Amazon SageMaker Studio domain (refer to Quick setup to Amazon SageMaker) to access Jupyter notebooks.

Solution walkthrough

To perform the solution, use the instructions in the following steps.

Set up libraries and process data

To set up libraries and process data, follow these instructions. The following flow diagram shows step 1 highlighted.

  1. Enter the following commands to install the relevant Hugging Face and SageMaker libraries:
    %pip install --upgrade "sagemaker>=2.233"
    %pip install "datasets==2.14.5"
    %pip install transformers

  2. Load the PubMed dataset and tokenize it

In this example, we use the PubMed Scientific Papers dataset, containing 133,215 biomedical research articles. For our experiment, we select 1,000 papers split 80/20 for training and validation. Using the Meta Llama 3 tokenizer, we process each paper into sequences of 16,384 tokens.

The dataset undergoes two main processing steps: tokenization with Llama’s tokenizer and grouping into fixed-length chunks of 16,384 tokens using utility function group_texts. This uniform sequence length enables even distribution across GPUs while maintaining the natural structure of the scientific papers.
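The group_texts utility itself is not shown in the snippet below; a common implementation (an assumption about what the notebook uses) concatenates the tokenized articles and slices them into fixed-length chunks:

block_size = 16384  # sequence length used in this walkthrough

def group_texts(examples):
    # Concatenate all tokenized texts in the batch, then split into block_size chunks
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // block_size) * block_size  # drop the ragged tail
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()  # causal LM: labels mirror the inputs
    return result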

import datasets
from datasets import load_dataset, DatasetDict

# Load the PubMed dataset
pubmed_dataset = load_dataset(
    "scientific_papers",
    "pubmed",
    cache_dir="/home/ec2-user/SageMaker/datasets",
    download_mode="force_redownload"
)

# Create a smaller subset of the dataset for our experiment
train_test = pubmed_dataset['train'].shuffle(seed=42).select(range(1000)).train_test_split(
    test_size=0.2,
    seed=42
)

# tokenized_datasets is produced by tokenizing train_test with the Llama tokenizer
# (tokenization step omitted here for brevity); block_size is 16384
lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    desc=f"Grouping texts in chunks of {block_size}",
)
  3. Prepare data for the training job

In this section, we prepare the PubMed dataset for SageMaker training by managing data transfers to Amazon S3. Both training and validation splits are converted to JSON format and uploaded to designated S3 buckets, with separate paths for input data and output artifacts.

if lm_datasets["train"] is not None:
    train_dataset = lm_datasets["train"]
    train_dataset.to_json("./training.json")
    training_dataset_location = f"s3://{default_bucket}/dataset/train/"

if lm_datasets["validation"] is not None:
    eval_dataset = lm_datasets["validation"]
    eval_dataset.to_json("./validation.json")
    validation_dataset_location = f"s3://{default_bucket}/dataset/validation/"
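The estimator calls later in this walkthrough pass inputs=data_channels. A minimal sketch of uploading the JSON files and building those channels follows; the channel names "train" and "val" are assumptions that must match what train.py expects.

from sagemaker.inputs import TrainingInput
from sagemaker.s3 import S3Uploader

# Upload the local JSON files to the S3 prefixes defined above
S3Uploader.upload("./training.json", training_dataset_location)
S3Uploader.upload("./validation.json", validation_dataset_location)

# Channel names are assumptions; they must match the arguments expected by train.py
data_channels = {
    "train": TrainingInput(s3_data=training_dataset_location, input_mode="FastFile"),
    "val": TrainingInput(s3_data=validation_dataset_location, input_mode="FastFile"),
}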

  4. Set up training hyperparameters

In this configuration, we define hyperparameters for training Llama on PubMed, covering memory optimizations, training parameters, model architecture settings, and performance tuning. Starting with conservative settings (batch size=1, BF16 precision), we establish a baseline configuration that will be modified to test different optimization strategies, particularly for context parallelism experiments.

hyperparameters = {
    # Memory and optimization settings
    "activation_checkpointing": 1,
    "auto_wrap_policy": "transformer_auto_wrap_policy",
    ...
    
    # Training settings
    "train_batch_size": 1,
    "val_batch_size": 1,
    ...
    
    # Model configuration
    "vocab_size": 128256, # Vocab size from Llama 3.1 config file on Hugging Face
    "hf_pretrained_model_name_or_dir": model_id,
    
    ...
    
}

Run training without context parallelism

To run training without context parallelism, follow these instructions. The following flow diagram shows step 2 highlighted.

In this setup, we configure a baseline training job by disabling context parallelism and FP8. Each GPU processes the full 16,384 token sequence without splitting, which demonstrates the limitations and potential memory constraints of running without advanced optimizations such as context parallelism and FP8.

instance_type= "p4d.24xlarge"
instance_count= 1
hybrid_shard_degree= 8

hyperparameters.update({
    "use_smp_implementation": 0,  # Disable SMP/CP. Only FSDP is active
    "train_batch_size": 1,        # Batch size
    "max_context_width": 16384,   # Full sequence length
    "clean_cache": 0,
    "bf16": 1,                    # Use bf16
    ...
})

from sagemaker.pytorch import PyTorch

smp_estimator = PyTorch(
    entry_point="train.py",
    hyperparameters=hyperparameters,
    ...
    instance_type=instance_type,
    volume_size=400,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,  # Enable model parallelism but with minimal parameters
                "parameters": {
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True
                }
            }
        }
    },
    
   ...
)

smp_estimator.fit(inputs=data_channels)

The result of not using context parallelism with a large context width (16,384) means that we will get a CUDA out-of-memory error:

AlgorithmError: ExecuteUserScriptError: ExitCode 1 ErrorMessage “[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.83 GiB. GPU 3 has a total capacity of 39.38 GiB of which 5.53 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use.

Run training with context parallelism enabled to track memory optimizations

To run training with context parallelism enabled to track memory optimizations, follow these instructions. The following flow diagram shows step 3 highlighted.

In this configuration, we enable context parallelism while keeping FP8 disabled. By setting context parallel degree to 8, we distribute the 16,384 token sequence across all available GPUs for efficient processing. The setup includes essential context parallelism parameters and launches the training job in a background thread, allowing for unblocked notebook execution while maintaining clear job identification for comparison with other configurations.

instance_type= "p4d.24xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

smp_estimator = PyTorch(
    ...
    entry_point="train.py",
    instance_type=instance_type,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "context_parallel_degree": context_parallel_degree,
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True,
                }
            }
        }
    },
    ...
)

smp_estimator.fit(inputs=data_channels)

The result of using context parallelism with such a large context width is that the job successfully completes, as shown in the following screenshot.

We also enabled delayed parameter initialization and hybrid sharding capabilities from SMP for both preceding configurations. Delayed parameter initialization allows initializing large models on a meta device without attaching data, which can resolve limited GPU memory issues when you first load the model. This approach is particularly useful for training LLMs with tens of billions of parameters, where even CPU memory might not be sufficient for initialization. Hybrid sharding is a memory-saving technique that shards parameters within the hybrid shard degree (HSD) group and replicates parameters across groups. The HSD controls sharding across GPUs and can be set to an integer from 0 to world_size. This reduces communication volume because the expensive AllGather and ReduceScatter collectives are only performed within a node, which performs better for medium-sized models.
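
As a rough illustration of how these settings map onto the cluster used in this post (a single node with 8 GPUs), the following sketch shows the shard and replication group arithmetic. The numbers come from the configurations above; the snippet itself is only illustrative, since the SMP library computes these groups internally.

# Illustrative only: how the hybrid shard degree (HSD) maps onto the cluster
# used in this post (one node with 8 GPUs). SMP computes these groups for you;
# this is just the arithmetic behind the description above.
gpus_per_node = 8
instance_count = 1
world_size = gpus_per_node * instance_count          # 8 ranks in total

hybrid_shard_degree = 8                              # shard parameters within this many ranks
shard_group_size = hybrid_shard_degree               # ranks that each hold one parameter shard
replica_groups = world_size // hybrid_shard_degree   # groups holding full replicas (1 here)

print(f"world_size={world_size}, shard_group_size={shard_group_size}, replica_groups={replica_groups}")
# Because each shard group fits inside a node, the expensive AllGather and
# ReduceScatter collectives stay on fast intra-node links.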

Run training with FP8 enabled to gain further performance

To run training with FP8 enabled to gain further memory performance, follow these instructions. The following flow diagram shows step 4 highlighted.

In this fully optimized configuration, we enable both context parallelism and FP8 training using an NVIDIA P5 instance (ml.p5.48xlarge). This setup combines sequence splitting across GPUs with FP8 precision training, creating a highly efficient training environment. P5 instances provide the necessary hardware support for FP8 computation, so we can maximize the benefits of both memory-saving techniques.

instance_type= "p5.48xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

hyperparameters.update({
    "use_smp_implementation": 1,    # Enable SMP/CP
    "max_context_width": 16384,     # Full sequence length
    "fp8": 1,                       # Enable FP8 flag
    "distributed_backend": "nccl",  # Explicitly use NCCL
    ...
})

smp_estimator = PyTorch(
    ...
    entry_point="train.py",
    instance_type=instance_type,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "context_parallel_degree": context_parallel_degree,
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True,
                }
            }
        }
    },
   ...
)

smp_estimator.fit(inputs=data_channels)

Start training with context parallelism, without FP8 (on a P5 instance)

To make a fair comparison with and without FP8, we run another job without FP8 but with context parallelism on an ml.p5.48xlarge instance and compare the throughput of the two runs.

instance_type= "p5.48xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

hyperparameters.update({
    "use_smp_implementation": 1,    # Enable SMP/CP
    "max_context_width": 16384,     # Full sequence length
    "bf16": 1,                      # Use BF16
    "distributed_backend": "nccl",  # Explicitly use NCCL
    ...
})

# This remains the same as in the previous step
smp_estimator = PyTorch(
    ...
)
    
smp_estimator.fit(inputs=data_channels)

If we compare both runs, we can see that the same context parallelism-enabled job runs almost 10 times faster with FP8 enabled.

With FP8, speed is around 14.6 samples/second, as shown in the following screenshot.

Without FP8, speed is around 1.4 samples/second, as shown in the following screenshot.
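
Taking the two observed rates at face value, the speedup works out as follows; the numbers are simply the ones reported in the preceding screenshots.

# Speedup computed from the throughput reported in the preceding screenshots.
with_fp8 = 14.6     # samples/second with FP8 enabled
without_fp8 = 1.4   # samples/second with BF16 only

speedup = with_fp8 / without_fp8
print(f"FP8 speedup: {speedup:.1f}x")  # ~10.4x, i.e., almost 10 times faster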

The following table shows the throughput improvement in each of the listed cases. All cases were run on an ml.p5.48xlarge instance.

The throughput may vary based on factors such as the context width or batch size. The following numbers are what we have observed in our testing.

Configuration (ml.p5.48xlarge; CP on 8 GPUs, train batch size 4) | Observed samples speed | Observed throughput
No context parallelism and no FP8 | torch.OutOfMemoryError: CUDA out of memory | torch.OutOfMemoryError: CUDA out of memory
Context parallelism only | 2.03 samples/sec | 247 TFLOPS/GPU
Context parallelism + FP8 | 3.05 samples/sec | 372 TFLOPS/GPU

Cleanup

To clean up your resources to avoid incurring more charges, follow these steps:

  1. Delete any unused SageMaker Studio resources.
  2. Optionally, delete the SageMaker Studio domain.
  3. Delete any S3 buckets that you created.
  4. Verify that your training job isn’t running anymore! To do so, on your SageMaker console, choose Training and check Training jobs.

To learn more about cleaning up your resources provisioned, check out Clean up.

Conclusion

In this post, we demonstrated the process of setting up and running training jobs for the PubMed dataset using the Llama 3.1 8B Instruct model, both with and without context parallelism. We also showcased how to enable FP8 based training for even faster throughputs.

Key takeaways:

  • For datasets that have long sequence lengths, we observe that using context parallelism helps avoid OOM errors.
  • For faster training, we can enable FP8-based training and combine it with context parallelism for higher throughput. In this notebook, we observed close to a tenfold throughput increase when FP8 is enabled together with context parallelism.

As next steps, try out the preceding example by following the notebook steps in the sagemaker-distributed-training-workshop repository.

Special thanks to Roy Allela, Senior AI/ML Specialist Solutions Architect, for his support on the launch of this post.


About the Authors

Kanwaljit Khurmi is a Principal Worldwide Generative AI Solutions Architect at AWS. He collaborates with AWS product teams, engineering departments, and customers to provide guidance and technical assistance, helping them enhance the value of their hybrid machine learning solutions on AWS. Kanwaljit specializes in assisting customers with containerized applications and high-performance computing solutions.

Surya Kari is a Senior Generative AI Data Scientist at AWS. With a background in computer vision and AI devices, his current specializations include LLM training, multi-modal RAG, vision-language models, and edge computing.

Arun Kumar Lokanatha is a Senior ML Solutions Architect with the Amazon SageMaker team. He specializes in LLM training workloads, helping customers build LLM workloads using SageMaker HyperPod, SageMaker training jobs, and SageMaker distributed training. Outside of work, he enjoys running, hiking, and cooking.

Suhit Kodgule is a Software Development Engineer with the AWS Artificial Intelligence group working on deep learning frameworks. In his spare time, he enjoys hiking, traveling, and cooking.

Anirudh Viswanathan is a Sr Product Manager, Technical – External Services with the SageMaker Training team. He holds a Masters in Robotics from Carnegie Mellon University, an MBA from the Wharton School of Business, and is named inventor on over 40 patents. He enjoys long-distance running, visiting art galleries, and Broadway shows.

Getting started with Amazon Bedrock Agents custom orchestrator

Getting started with Amazon Bedrock Agents custom orchestrator

Generative AI agents are designed to interact with their environment to achieve specific objectives, such as automating repetitive tasks and augmenting human capabilities. By orchestrating multistep workflows that adapt to evolving goals in real time, these agents increase productivity, reduce errors, and deliver more personalized experiences. To manage these complex workflows effectively, agents rely on an orchestration strategy that coordinates interactions with various tools, knowledge sources, and other agents. This orchestration allows agents to analyze data, interpret context, sequence tasks, and adapt to shifting requirements, making sure that workflows remain efficient, accurate, and resilient.

Amazon Bedrock Agents streamlines the development of generative AI applications by offering a fully managed solution that uses foundation models (FMs) and augmenting tools to autonomously run tasks and achieve objectives through orchestrated, multistep workflows. Using the default orchestration strategy, reasoning and action (ReAct), users can quickly build and deploy agentic solutions. ReAct is a general problem-solving approach that uses the FM’s planning capabilities to dynamically adjust actions at each step. Although ReAct offers flexibility by allowing agents to continually reevaluate their decisions based on shifting requirements, its iterative approach can lead to higher latency when many tools are involved.

For greater orchestration control, Amazon Bedrock Agents has launched the custom orchestrator feature, which users can use to fine-tune agent behavior and manage tool interactions at each workflow step. This customization allows organizations to tailor agent functionality to their specific operational needs, improving precision, adaptability, and efficiency. In this post, we explore how custom orchestrators work and demonstrate their application with the default Bedrock Agent’s ReAct and reasoning without observation (ReWoo) examples.

Custom orchestrator overview

Implemented by users as an AWS Lambda function, the Amazon Bedrock Agents custom orchestrator offers granular control over task planning, completion, and verification. Unlike the default ReAct orchestration method, which prioritizes decision transparency and step-by-step reasoning, the custom orchestrator gives users the ability to define strategies that are better aligned with specific use case requirements. In ReAct, FM and tool invocations follow a sequential, step-by-step process, where each action depends on the outcome of the previous one. This structured, linear approach offers transparency, making it easier to trace the reasoning behind each action and decision while also promoting consistency through predictable workflows. Although ReAct’s design provides incremental adaptability by allowing agents to reassess actions at each step, its sequential structure may introduce delays when rapid parallel actions are required or when workflows demand instant responsiveness across multiple steps. This makes ReAct less suited to scenarios where speed and rapid sequential processing are paramount, such as in complex, high-volume workflows.

The custom orchestrator offers an alternative, more flexible approach, which users can use to define orchestration strategies that are more closely aligned with their specific requirements. With real-time adjustments and precise control over FM and tool interactions, users can create workflows that provide the optimal balance of performance, accuracy, and resilience. After a custom orchestrator is created, it can be reused across multiple agents by updating a single reference when configuring new agents.

Key benefits of the custom orchestrator include:

  • Full control over orchestration strategies – Tailor agent workflows for optimal performance across various metrics, such as accuracy, speed, and resilience. Use Amazon Bedrock Agents built-in integrations with action groups, knowledge bases, and guardrails to streamline interactions.
  • Real-time adjustments – Dynamically adjust agent actions based on the current context, tool outputs, or evolving user requirements so the agent adapts efficiently and effectively to new information.
  • Reusability and consistency – After an orchestration strategy is created, it can be implemented across all relevant agents, saving time and promoting consistency.

In this post, we compare the invocations of an Amazon Bedrock agent with the default ReAct prompts with the invocations of an Amazon Bedrock agent with a custom orchestration implementing the ReWoo strategy. First, we examine the underlying contracts and state management principles that drive its adaptability.

Custom orchestrator workflow management

The custom orchestrator enables dynamic decision-making and adaptable workflow management through contract-based interactions between Amazon Bedrock Agents and AWS Lambda. The Lambda function acts as the orchestration engine, processing contextual inputs—such as state, conversation history, session parameters, and user requests—to generate instructions and define the state for subsequent actions. Upon receiving user input, Amazon Bedrock Agents uses the custom orchestrator logic and the Amazon Bedrock Converse API to manage interactions between the underlying FM and various tools, such as action groups, knowledge bases, and guardrails.

The following diagram illustrates the flow of interactions between the user, Amazon Bedrock Agents, and the custom orchestrator, which manages the workflow:

The custom orchestrator workflow includes the following steps:

  1. User input – The process begins when the user submits a request or query. This input is sent to Amazon Bedrock Agents, initiating the workflow.
  2. Custom orchestrator initiation – Amazon Bedrock Agents passes the user input to the custom orchestrator, which initiates the orchestration process in the START state. The orchestrator guides the workflow through intermediate steps to process the input.
  3. Tool interactions – Amazon Bedrock Agents interacts with various tools to manage the request:
    • Knowledge bases – Provide relevant context or information based on user input.
    • Action groups – Invoke predefined action groups, which include:
      • Lambda functions for custom logic
      • Return of control (RoC) functions to sequence steps
      • Code interpreter (CI) functions for code execution
    • Guardrails – Make sure responses comply with predefined criteria or safety standards.
    • Converse API – Manages conversation flow and processes natural language responses between Amazon Bedrock Agents and the FM.
    • Session attributes – Manage session-specific data, such as long-term memory, session attributes, and knowledge base configurations, personalizing and maintaining context across interactions.
  4. Custom orchestrator workflow – As Amazon Bedrock Agents interacts with various tools, the custom orchestrator tracks progress through states, adjusting the workflow as necessary. After the workflow reaches completion, the orchestrator signals it using the FINISH action event.
  5. Final output – Amazon Bedrock Agents generates and delivers the final output to the user, completing the interaction.

This workflow highlights how Amazon Bedrock Agents, guided by the custom orchestrator, coordinates various steps and manages the flow of information to fulfill the user request. Through state transitions, the orchestrator makes sure that each action follows a structured sequence, enabling dynamic and flexible control over the workflow. Next, we explore how state transitions and contract-based interactions structure customizable workflow management.

State and event management

State management is central to guiding the progression of interactions and determining the next steps in the workflow. States represent specific stages or conditions, allowing the orchestration engine to track and manage actions. These states make sure that the workflow proceeds in an orderly manner, with each action dependent on the current state. States are passed in the request schema from Amazon Bedrock Agents to the custom orchestrator Lambda function. In contrast, events are actions that drive state transitions or invoke further actions. Events are passed in the response schema from AWS Lambda to Amazon Bedrock Agents.

Each interaction between the agent and the custom orchestrator starts with a “START” state and ends with a “FINISH” event. During the orchestration, the custom orchestrator Lambda can receive “START”, “MODEL_INVOKED”, “TOOL_INVOKED”, “APPLY_GUARDRAILS_INVOKED”, or a custom defined state as input and will output “FINISHED”, “INVOKE_MODEL”, “INVOKE_TOOL”, “APPLY_GUARDRAILS”, or a custom defined event. The flow between states and events is shown in the following figure.

Each state transition occurs in response to specific events, allowing the workflow to adapt dynamically based on input and context. For example, when a FINISH event response is received, the orchestrator is signaling that the workflow is complete. The custom orchestrator Lambda function then streams the output back to Amazon Bedrock Agents, which streams it to the user. This mechanism provides a smooth and responsive interaction, enabling effective orchestration of tasks. The request and response contract-based interactions are handled through JSON events, as detailed here.
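
To make this contract more concrete, the following is a minimal sketch of what a custom orchestrator Lambda handler could look like. The field names used here (state, actionEvent, output, and so on) are illustrative placeholders rather than the exact JSON schema, so refer to the contract documentation referenced above for the authoritative request and response formats.

import json

# Minimal sketch of a custom orchestrator Lambda handler.
# NOTE: the field names below ("state", "actionEvent", "output", ...) are
# illustrative placeholders, not the exact Amazon Bedrock Agents contract.
def lambda_handler(event, context):
    state = event.get("state", "START")

    if state == "START":
        # Ask the agent to invoke the model with the user's input.
        return {
            "actionEvent": "INVOKE_MODEL",
            "output": {"text": event.get("input", {}).get("text", "")},
        }

    if state == "MODEL_INVOKED":
        # Inspect the model output and decide whether a tool call is needed.
        model_output = event.get("output", {}).get("text", "")
        if "reservation" in model_output.lower():
            return {"actionEvent": "INVOKE_TOOL", "output": {"toolInput": model_output}}
        return {"actionEvent": "FINISH", "output": {"text": model_output}}

    if state == "TOOL_INVOKED":
        # Return the tool result to the agent as the final answer.
        return {"actionEvent": "FINISH", "output": {"text": json.dumps(event.get("output", {}))}}

    # Fall through for APPLY_GUARDRAILS_INVOKED or custom-defined states.
    return {"actionEvent": "FINISH", "output": {"text": "Unhandled state: " + state}}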

By using these contract-based interactions, Amazon Bedrock Agents and the custom orchestrator Lambda function collaborate effectively to process contextual inputs, manage state transitions, and produce accurate, tailored responses. This flexible architecture is critical for handling complex workflows that require real-time adjustments and precise control over the agent’s behavior.

Custom orchestrator workflow patterns: ReAct and ReWoo

To illustrate the power and flexibility of the custom orchestrator, the next section examines two orchestration strategies—default Bedrock Agent’s ReAct and ReWoo—and explores how each addresses trade-offs in agent workflows. To further explore the flexibility and potential of the custom orchestrator, consider a restaurant example use case. In this use case, we have an Amazon Bedrock Agent that has one action group that can connect to three APIs: create reservation, update existing reservation, and delete reservation. The agent also connects with a knowledge base that indexes the different menus for the food served in this restaurant. The following diagram shows the agent architecture.

Default orchestrator: ReAct

The default Amazon Bedrock Agents ReAct approach is an iterative decision-making process where the model analyzes each step, deciding on the next action based on the information gathered at each stage, as shown in the following figure.

This method provides transparency and allows for a clear, step-by-step breakdown of actions, making it well-suited for workflows that benefit from incremental adjustments. Although effective in dynamic environments where real-time reevaluation is advantageous, ReAct's sequential structure can introduce latency when a complex plan is required. For instance, in the restaurant assistant example, simple queries such as "What do you serve for dinner?" or "Can you make a reservation for two people at 7pm tonight?" produce a plan with a single action, which adds little latency. However, a more complex query such as "What do you serve for dinner? Can you make a reservation for four people at 9pm tonight?" produces a plan with multiple steps. At each step, the results are observed and the plan is adapted, as shown in the following diagram. Notice that the plan is implicit, and the thought provides the next step. After each step, a new model invocation is made to determine the next step or to provide the final answer.

ReWoo

The ReWoo technique optimizes performance by generating a complete task plan up front and executing it without checking intermediate outputs, as shown in the following flow diagram.

This approach minimizes model calls, significantly reducing response times for queries that require interaction with multiple tools. For tasks where speed is prioritized over iterative adjustments—or where the intermediate reasoning steps should remain hidden for security reasons—ReWoo offers clear advantages over the default ReAct strategy.

A key source of agent latency is the number of FM calls required to complete a task. Although the default ReAct strategy requires at least N+1 calls for N steps, ReWoo reduces this to at most two calls to the model for any number of tools, cutting down model invocations and, consequently, response time. For example, for a task that takes 9 seconds with three model invocations with ReAct, the difference would be marginal with ReWoo because the task would still take two model invocations. However, as the complexity scales, the latency difference becomes bigger. For instance, a task taking 18 seconds with six model invocations could take only 9 seconds and two model invocations with ReWoo—a difference that scales with the complexity of the workflow.
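
The call-count argument can be made explicit with a small sketch. The helper names below are illustrative, and the counts simply follow the at-least-N+1 versus at-most-two rule described above.

# Number of FM calls per the discussion above:
# ReAct needs at least N + 1 calls for N orchestration steps; ReWoo needs at most 2
# (for example, one planning call plus one final-answer call), regardless of tool count.
def react_calls(num_steps: int) -> int:
    return num_steps + 1

def rewoo_calls(num_steps: int) -> int:
    return 2

for steps in (2, 5, 10):
    saved = react_calls(steps) - rewoo_calls(steps)
    print(f"{steps} steps: ReAct {react_calls(steps)} calls, ReWoo {rewoo_calls(steps)} calls "
          f"({saved} fewer model invocations)")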

When analyzing the query “What do you serve for dinner? Can you make a reservation for four people at 9pm tonight?” with ReWoo, the agent creates a plan to access the knowledge base for the dinner menu information and the action group to create a new dinner reservation, without validating intermediate steps, as shown in the following video clip.

When running this query with an agent using Anthropic’s Claude Sonnet 3.5 v2, we observed a 50–70% latency reduction for the complex query. You can find the implementation of this solution in our GitHub repository amazon-bedrock-samples.

Note that although ReWoo offers speed advantages, it requires a more complex prompt and a parser for the output, which makes it a more difficult strategy to implement. This is why you should weigh speed, accuracy, and implementation complexity when creating a new orchestration strategy.

Conclusion

In this post, we explored how Amazon Bedrock Agents simplifies the orchestration of generative AI workflows, particularly with the introduction of the custom orchestrator feature. You can use the custom orchestrator to fine-tune and optimize agentic workflows that align more closely with specific business and operational needs. We outlined the feature’s key benefits, including full control over orchestration, real-time adjustments, and reusability, followed by a breakdown of how it manages state transitions and contract-based interactions between Amazon Bedrock Agents and AWS Lambda.

We then dove deeper into the default ReAct and the custom ReWoo orchestration strategies and discussed the trade-offs between flexibility and performance. Through the detailed workflow management, state events, and contract interactions applied to a custom ReWoo implementation, we highlighted how the custom orchestrator adapts to dynamic conditions, so you can build more efficient and accurate AI applications.

To learn more about custom orchestrator techniques and get started with end-to-end examples, refer to our GitHub repository.


About the Authors

Kyle T. Blocksom is a Sr. Solutions Architect with AWS based in Southern California. Kyle’s passion is to bring people together and leverage technology to deliver solutions that customers love. Outside of work, he enjoys surfing, eating, wrestling with his dog, and spoiling his niece and nephew.

Maira Ladeira Tanke is a Tech Lead Amazon Bedrock for Generative AI Agents at AWS. With a background in machine learning, she has over 10 years of experience architecting and building AI applications with customers across industries. As a technical lead, she helps customers accelerate their achievement of business value through generative AI solutions on Amazon Bedrock. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.

Mark Roy is a Principal Machine Learning Architect for AWS, helping customers design and build generative AI solutions. His focus since early 2023 has been leading solution architecture efforts for the launch of Amazon Bedrock, the flagship generative AI offering from AWS for builders. Mark’s work covers a wide range of use cases, with a primary interest in generative AI, agents, and scaling ML across the enterprise. He has helped companies in insurance, financial services, media and entertainment, healthcare, utilities, and manufacturing. Prior to joining AWS, Mark was an architect, developer, and technology leader for over 25 years, including 19 years in financial services. Mark holds six AWS certifications, including the ML Specialty Certification.

John Baker is a Principal SDE at AWS where he works on Amazon Bedrock and specifically Amazon Bedrock Agents. He has been with Amazon for more than 10 years and has worked across AWS, Alexa, and Amazon.com. In his spare time, John enjoys skiing and other outdoor activities throughout the Pacific Northwest.

Sudip Dutta is a senior Software Developer engineer leading the development of Amazon Bedrock Agents custom orchestrator. With more than 17 year of experience developing distributed systems and architectures he has worked at AWS for the past 6 years focusing on ML and AI services such as Bedrock and Lex. On his free time Sudip enjoys hiking in the forest of pacific northwest or reading mystery novels!

Use Amazon Bedrock Agents for code scanning, optimization, and remediation

Use Amazon Bedrock Agents for code scanning, optimization, and remediation

Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that best suits your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using Amazon Web Services (AWS) tools, without having to manage any infrastructure.

For enterprises in the realm of cloud computing and software development, providing secure code repositories is essential. As sophisticated cybersecurity threats become more prevalent, organizations must adopt proactive measures to protect their assets. Amazon Bedrock offers a powerful solution by automating the process of scanning repositories for vulnerabilities and remediating them. This post explores how you can use Amazon Bedrock to enhance the security of your repositories and maintain compliance with organizational and regulatory standards.

This solution demonstrates how Amazon Bedrock Agents can be configured to scan a specific code repository, remediate vulnerabilities, and push the changes to a new branch. This approach can accelerate development, reduce errors, and adhere to security guidelines.

Solution overview

There are three high-level steps to deploy the solution:

  1. Configure the Amazon Bedrock Agent
  2. Configure the AWS Lambda function for the action group
  3. Add the action group to the Amazon Bedrock agent

There are two key steps in the architecture, as illustrated in the following diagram:

  1. The user provides the necessary information through the Amazon Bedrock agent chat console. They supply the code repository URL, such as https://github.com/abc/test, and specify the branch name to scan, for instance, main. Then they list the folders to exclude from the scan, such as test, and specify file extensions to exclude, such as .md and .txt. Then they provide a new branch name where the remediated code will be uploaded.
  2. The Amazon Bedrock agent forwards the details to an action group that invokes a Lambda function. This function retrieves the code, scans it for vulnerabilities using a preselected large language model (LLM), applies remediation, and pushes the remediated code to a new branch for user validation. The excluded folders and file extensions aren’t scanned. Upon completion, the action group (Lambda function) sends the information back to the Amazon Bedrock agent, which then displays the status to the user.

Figure 1. Architecture Diagram

Prerequisites

To implement the solution, you need the following:

Configure the Amazon Bedrock agent

To configure the Amazon Bedrock agent, complete the following steps:

  1. On the Amazon Bedrock console, choose Agents in the navigation pane, then choose Create Agent.
  2. (Optional) Provide agent details, including agent name and description.
  3. Grant the agent permissions to AWS services through the IAM service role. This gives your agent access to required services, such as Lambda.
  4. Select an FM in Amazon Bedrock (such as Anthropic’s Claude 3 Sonnet).
  5. To scan a code repository and remediate vulnerabilities through Amazon Bedrock Agents, attach the following instruction to the agent:

You are a code scanning and remediating AI assistant. Greet the user and ask user for repository_url and branch_name that needs to be scanned. Ask user for list of folders that needs to be excluded from scanning and also ask user for list of specific file extensions that needs to be excluded from scanning. Ask user new branch name to push the remediated code. Pass those inputs to trigger code-scan-remediation action group.

Configure the Lambda for the action group

After initial agent configuration and adding the preceding instruction to the agent, you create one Lambda function that will be used for the action group.

Create a Lambda function designed to scan a code repository for vulnerabilities, remediate the vulnerabilities, and push the changes to a new user-specified branch. This function will be used by the action group, which is invoked by the Amazon Bedrock agent after the user supplies the code repository URL, branch name, and the list of folders and file extensions to exclude from the scan. Refer to the Lambda code for the implementation. Confirm that the Lambda function has the required IAM permissions, and set up a resource-based policy on the Lambda function that allows the Amazon Bedrock agent to invoke it using the lambda:InvokeFunction action. Refer to the policy here.
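
As an illustration of that resource-based policy, the following boto3 sketch grants the Amazon Bedrock service permission to invoke the function. The function name, statement ID, and agent ARN are placeholders, and you should confirm the details against the policy linked above.

import boto3

lambda_client = boto3.client("lambda")

# Illustrative resource-based policy statement: allow the Amazon Bedrock agent
# to invoke the scanning/remediation function. FunctionName, StatementId, and
# SourceArn are placeholders for your own values.
lambda_client.add_permission(
    FunctionName="code-scan-remediation",                                # placeholder
    StatementId="AllowBedrockAgentInvoke",                               # placeholder
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:us-east-1:111122223333:agent/AGENT_ID",   # placeholder
)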

Add the action group to the Amazon Bedrock agent

Complete the following steps to add the action groups to the Amazon Bedrock agent:

  • Add an action group to the Amazon Bedrock agent.
  • Assign a descriptive name to the action group and detail the function in the description field. This helps clarify the purpose of the action group within the workflow.
  • For Action group type, select Define with function details.
  • For Action group invocation, select the Lambda function that you have created previously.

This function runs the business logic required when an action is invoked. Make sure to choose the correct version of the Lambda function and that the GitHub token is set as an environment variable. For more on how to configure Lambda functions for action groups, refer to Configure Lambda functions to send information an Amazon Bedrock agent elicits from the user.

  • For the Action group function 1, select JSON Editor and add the required parameters. Refer to the JSON file for the parameter definitions.

The following screenshot shows an example of the user interaction with Amazon Bedrock Agents.


Figure 2. User Interaction with Amazon Bedrock Agent

The following screenshot shows an example of remediated code.


Figure 3. Sample difference of Actual and Remediated Code 

Best practices

Follow these best practices:

  • Add automation tests to validate the code before committing it to the repository and review the remediated code before merging it into the default branch
  • Use descriptive branch names when creating new branches during remediation to maintain clear version control
  • Configure IAM roles and permissions with the principle of least privilege to secure the Amazon Bedrock agent and Lambda functions
  • Update prompts to target and remediate use-case specific vulnerabilities

Clean up

The services used in this demo can incur costs. Complete the following steps to clean up your resources:

  1. Delete the Lambda function if it’s no longer required
  2. Delete the action group and agents you created
  3. Remove the generated branch from the GitHub repository

Conclusion

Amazon Bedrock Agents uses generative AI to transform code repositories by scanning for vulnerabilities and automatically applying fixes. This capability is essential for engineers because it speeds up the process of securing code and maintaining compliance with established best practices from the outset.

The interactive features of Amazon Bedrock Agents automate the vulnerability scanning and remediation process, not only streamlining the initial setup but also significantly enhancing ongoing code maintenance. Although this post focuses on code scanning and remediation, the interactive capabilities of Amazon Bedrock Agents can be applied across various AWS services, offering a dynamic and comprehensive solution for managing and optimizing cloud infrastructure.

Are you ready to streamline your cloud deployment process with the generative AI of Amazon Bedrock? Start by exploring the Amazon Bedrock User Guide to learn how it can facilitate your organization’s transition to the cloud. For specialized assistance, consider engaging with AWS Professional Services to maximize the efficiency and benefits of using Amazon Bedrock.

Embrace the potential for a swift, secure, and efficient cloud transformation with Amazon Bedrock. Take the first step today and discover how using generative AI can revolutionize your approach to cloud infrastructure.


About the authors

Rama Krishna Yalla is an Associate DevOps Consultant at AWS, adept at designing scalable, reliable, and secure cloud environments. He leverages automation and CI/CD best practices to streamline software delivery, reduce downtime, and enhance operational efficiency. Rama is experienced in managing infrastructure as code (IaC) ensuring consistent and repeatable deployments. He also focuses on implementing robust monitoring and logging solutions, enabling proactive issue resolution and optimized performance. Outside of work, Rama enjoys playing badminton and often participates in local tournaments.

Akhil Raj Yallamelli is a Cloud Infrastructure Architect at AWS, specializing in architecting cloud infrastructure solutions for enhanced data security and cost efficiency. He is experienced in integrating technical solutions with business strategies to create scalable, reliable, and secure cloud environments. Akhil enjoys developing solutions focusing on customer business outcomes, incorporating generative AI (Gen AI) technologies to drive innovation and cloud enablement. He holds an MS degree in Computer Science. Outside of his professional work, Akhil enjoys watching and playing sports.

Create a generative AI assistant with Slack and Amazon Bedrock

Create a generative AI assistant with Slack and Amazon Bedrock

Seamless integration of customer experience, collaboration tools, and relevant data is the foundation for delivering knowledge-based productivity gains. In this post, we show you how to integrate the popular Slack messaging service with AWS generative AI services to build a natural language assistant where business users can ask questions of an unstructured dataset.

To demonstrate, we create a generative AI-enabled Slack assistant with an integration to Amazon Bedrock Knowledge Bases that can expose the combined knowledge of the AWS Well-Architected Framework while implementing safeguards and responsible AI using Amazon Bedrock Guardrails.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API.

Amazon Bedrock Knowledge Bases provides a fully managed Retrieval Augmented Generation (RAG) workflow, a technique that fetches data from company data sources and enriches the prompt to provide more relevant and accurate responses to natural language queries. This makes Amazon Bedrock Knowledge Bases an attractive option to incorporate advanced generative AI capabilities into products and services without the need for extensive machine learning expertise.

Amazon Bedrock Guardrails enables you to implement safeguards to build and customize safety, privacy, and truthfulness protections for your generative AI applications to align with responsible AI policies. Guardrails can help prevent undesirable content, block prompt injections, and remove sensitive information for privacy, protecting your company’s brand and reputation.

This content builds on posts such as Deploy a Slack gateway for Amazon Bedrock by adding integrations to Amazon Bedrock Knowledge Bases and Amazon Bedrock Guardrails, and the Bolt for Python library to simplify Slack message acknowledgement and authentication requirements.

Solution overview

The code in the accompanying GitHub repo provided in this solution enables an automated deployment of Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and the required resources to integrate the Amazon Bedrock Knowledge Bases API with a Slack slash command assistant using the Bolt for Python library.

In this example, we ingest the documentation of the AWS Well-Architected Framework into the knowledge base. Then we use the integration with the Amazon Bedrock Knowledge Bases API to provide a Slack assistant that can answer user questions on AWS architecture best practices. You can replace the example documentation with your own enterprise dataset, such as your corporate, HR, IT, or security policies, or equipment user or maintenance guides.

The following diagram illustrates the high-level solution architecture.

In the following sections, we discuss the key components in more detail.

Slack integration

The Slack integration is provided through the Slack Bolt Library for Python running in the Request Processor AWS Lambda function. The Slack Bolt Library handles authentication and permissions to the Slack application we build, and comes with built-in support for asynchronous request handling. Slack Bolt provides a dedicated user guide to deploy and run the library in a Lambda function.
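
For orientation, a minimal sketch of how Bolt for Python can be wired into the Request Processor Lambda function for the /ask-aws command is shown below. The environment variable names and the answer_question helper are assumptions for illustration, not the exact code from the accompanying repository.

import os
from slack_bolt import App
from slack_bolt.adapter.aws_lambda import SlackRequestHandler

# Minimal sketch, not the exact code from the accompanying repository.
# SLACK_BOT_TOKEN / SLACK_SIGNING_SECRET env var names and answer_question()
# are illustrative placeholders.
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    process_before_response=True,  # recommended when running Bolt inside Lambda
)

def answer_question(question: str) -> str:
    # Placeholder: call the Amazon Bedrock Knowledge Bases RetrieveAndGenerate API here.
    return f"(answer for: {question})"

@app.command("/ask-aws")
def handle_ask_aws(ack, command, respond):
    ack("Processing Request ...")            # acknowledge within Slack's 3-second window
    respond(answer_question(command["text"]))

def handler(event, context):
    # Entry point configured on the Request Processor Lambda function.
    return SlackRequestHandler(app).handle(event, context)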

Retrieval Augmented Generation

Amazon Bedrock Knowledge Bases gives FMs contextual information from your private data sources for RAG to deliver more relevant, accurate, and customized responses.

The RAG workflow consists of two key components: data ingestion and text generation.

  • Data ingestion workflow – During data ingestion, unstructured data from the data source is separated into chunks. Chunks are short passages of text from each source document, separated by a fixed word count, paragraph boundaries, or a single thought. Chunks are vectorized and stored in a vector database. Amazon Bedrock Knowledge Bases supports a number of vector databases, such as Amazon OpenSearch Serverless, Amazon Aurora, Pinecone, Redis Enterprise Cloud, and MongoDB Atlas. In this example, we use the default option of OpenSearch Serverless.
  • Text generation workflow – After the source data is ingested into the vector database, we can perform a semantic search to find chunks of data that are relevant to the user query based on contextualized meaning instead of just literal string matching. To complete the process, both the user query and the relevant data chunks are presented to the selected large language model (LLM) to create a natural language response.

Amazon Bedrock Knowledge Bases APIs

Amazon Bedrock Knowledge Bases provides a fully managed RAG workflow that is exposed using two main APIs:

  • Retrieve – This API retrieves the relevant data chunks using semantic search, which you can then process further in application logic
  • RetrieveAndGenerate – This API completes a full RAG text generation workflow to return a natural language response to a human query of the given dataset

The solution in this post calls the RetrieveAndGenerate API to return the natural language response to the Slack Bolt integration library.
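
As a sketch of what that call can look like with boto3, consider the following; the knowledge base ID is a placeholder, and the model ARN assumes the Anthropic Claude 3 Sonnet model used later in this post.

import boto3

# Sketch of the RetrieveAndGenerate call; the knowledge base ID is a placeholder.
bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "Tell me about the AWS Well-Architected Framework."},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)
print(response["output"]["text"])  # natural language answer returned to Slack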

Amazon Bedrock Guardrails

Amazon Bedrock Guardrails provides additional customizable safeguards on top of built-in protections offered by FMs, delivering safety features that are among the best in the industry.

In this solution, we configure Amazon Bedrock Guardrails with content filters, sensitive information filters, and word filters.

Content filters help detect and filter harmful user inputs and model-generated outputs across six categories: prompt injections, misconduct, insults, hate, violence, and sexually explicit content. In this solution, we use all six content filter categories.

Sensitive information filters detect sensitive information such as personally identifiable information (PII) in a prompt or in model responses. To align with your specific use case, you can add custom sensitive information filters by defining them with regular expressions (regex).

In this solution, we configure sensitive information filters as follows:

  • Email with an action of Anonymize
  • Phone with an action of Anonymize
  • Name with an action of Anonymize
  • Credit_Debit_Card_Number with an action of Block

Word filters are used to block words and phrases in input prompts and model responses. In this solution, we enabled the AWS-provided profanity filter. To align with your use case, you can create custom word filters.
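
The guardrail configuration described above is created for you by the solution's deployment, but as a rough boto3 sketch it could look like the following; the name and blocked-message strings are placeholders, and you should verify the exact field names against the Amazon Bedrock CreateGuardrail API reference.

import boto3

bedrock = boto3.client("bedrock")

# Rough sketch of the guardrail configuration described above. The deployed
# solution creates this for you; name and messaging values are placeholders.
content_filters = [
    {"type": t, "inputStrength": "HIGH", "outputStrength": "HIGH"}
    for t in ["SEXUAL", "VIOLENCE", "HATE", "INSULTS", "MISCONDUCT"]
]
# The prompt-attack filter is applied to inputs only.
content_filters.append({"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"})

bedrock.create_guardrail(
    name="slack-assistant-guardrail",  # placeholder
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    contentPolicyConfig={"filtersConfig": content_filters},
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
        ]
    },
    wordPolicyConfig={"managedWordListsConfig": [{"type": "PROFANITY"}]},
)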

Solution walkthrough

Slack interfaces with a simple REST API, configured with Lambda proxy integration that in turn interacts with Amazon Bedrock Knowledge Bases APIs.

The solution is deployed with the following high-level steps:

  1. Create a new Slack application.
  2. Enable third-party model access in Amazon Bedrock.
  3. Deploy the Slack to Amazon Bedrock integration using the AWS Cloud Development Kit (AWS CDK).
  4. Ingest the AWS Well-Architected Framework documents to the knowledge base.

Prerequisites

To implement this solution, you need the following prerequisites:

This post assumes a working knowledge of the listed AWS services. Some understanding of vector databases, vectorization, and RAG would be advantageous, but not necessary.

Create a new Slack application

After you have logged in to your Slack workspace, complete the following steps:

  1. Navigate to your Slack apps and create a new application.
  2. Choose From scratch when prompted.
  3. Provide an application name. For this post, we use the name aws-war-bot.
  4. Choose your workspace and choose Create App.
  5. To provide permissions for your Slack application, choose OAuth & Permissions in your Slack application navigation pane.
  6. In the Scopes section, under Bot Token Scopes, add the following permissions:
    • calls:write
    • commands
    • incoming-webhook

  7. Under OAuth Tokens for Your Workspace, choose Install to [workspace name].
  8. Choose a channel that the Slack application will be accessed from. You may want to first create a dedicated channel in Slack for this purpose.
  9. Choose Allow.
  10. When the Slack application install is complete, copy the token value generated for Bot User OAuth Token to use in a later step.
  11. Under Settings in the navigation pane, choose Basic Information.
  12. In the App Credentials section, copy the value for Signing Secret and save this to use later.

Enable model access in Amazon Bedrock

Complete the following steps to enable model access in Amazon Bedrock:

  1. On the Amazon Bedrock console, choose Model access in the navigation pane.
  2. Choose Modify model Access or Enable specific models (if this is the first time using Amazon Bedrock in your account).
  3. Select the models you want to use for the embeddings and RAG query response models. In this post, we use Amazon Titan Text Embeddings V2 as the embeddings model and Anthropic's Claude 3 Sonnet as the RAG query model in the US-EAST-1 AWS Region.
  4. Choose Next.
  5. Review the model selection and choose Submit.

If you’re not using the US-EAST-1 Region, the models available to request may differ.

When the access request is complete, you will see the model’s status shown as Access granted for the selected models.

Deploy the Slack to Amazon Bedrock integration

In this section, you deploy the companion code to this post to your AWS account, which will deploy an API on API Gateway, a Lambda function, and an Amazon Bedrock knowledge base with OpenSearch Serverless as the vector database.

This section requires AWS CDK and TypeScript to be installed in your local integrated development environment (IDE) and for an AWS account to be bootstrapped. If this has not been done, refer to Getting started with the AWS CDK.

  1. Clone the code from the GitHub repository:
    git clone https://github.com/aws-samples/amazon-bedrock-knowledgebase-slackbot.git

  2. Open the amazon-bedrock-knowledgebase-slackbot directory in your preferred IDE and open the lib/amazon-bedrock-knowledgebase-slackbot-stack.ts file.
  3. Update the variables if needed (depending on model access and Regional support) for the RAG query and embeddings models:
    const RAG_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"
    const EMBEDDING_MODEL = "amazon.titan-embed-text-v2:0"

  4. Save the changes after all updates are complete.
  5. From the root of your repository, run the command npm install.
  6. Run the command cdk synth to perform basic validation of AWS CDK code. This generates a CloudFormation template from the AWS CDK stack, which can be reviewed in the cdk.out directory created in the root of the repository.
  7. To deploy the application stack, run the following command, replacing the values with the token and the signing secret you created earlier:
    cdk deploy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%

The AWS CDK will deploy the stack as a CloudFormation template. You can monitor the progress of the deployment on the AWS CloudFormation console.

Additionally, AWS CDK will attempt to deploy the application stack to the default account and Region using the default credentials file profile. To change profiles, add the profile flag. For example:

cdk deploy --profile [my-profile]

When the deployment is complete, you will see an output similar to the following screenshot, which details the API endpoint that has just been deployed.

  8. Copy the API endpoint URL for later use.

You can also retrieve this URL on the Outputs tab of the CloudFormation stack AmazonBedrockKnowledgebaseSlackbotStack that was run to deploy this solution.

  9. Switch back to the Slack API page.
  10. Under the Slack application you created, choose Slash Commands in the navigation pane and then choose Create New Command.
  11. Provide the following information (make sure to include the Region and API ID that have been deployed):
    • For Command, enter /ask-aws.
    • For Request URL, enter https://[AWS-URL]/slack/[command]. For example, https://ab12cd3efg.execute-api.us-east-1.amazonaws.com/prod/slack/ask-aws.
    • For Short Description, enter a description (for example, AWS WAR Bot).

  12. Choose Save.
  13. Reinstall the Slack application to your workspace in the Install App section by choosing Reinstall next to the workspace name.
  14. Choose the channel where the Slack app will be deployed and choose Allow.

In the Slack channel, you will see a message like the one in the following screenshot, indicating that an integration with the channel has been added.

Populate the Amazon Bedrock knowledge base

Complete the following steps to populate the Amazon Bedrock knowledge base with the combined information of the AWS Well-Architected Framework:

  1. Download the following AWS Well-Architected Framework documents:

You can also include any Well-Architected Lenses that are relevant to your organization by downloading from AWS Whitepapers and Guides.

  2. On the Amazon Bedrock console, choose Knowledge bases in the navigation pane.
  3. Choose the knowledge base you deployed (slack-bedrock-kb).
  4. In the Data source section under Source link, choose the S3 bucket link that is displayed.

This will open the S3 bucket that is being used by the Amazon Bedrock knowledge base as the data source.

  5. In the S3 bucket, choose Upload, then Add files, and select all of the downloaded AWS Well-Architected documents from the previous step.
  6. When the documents have completed uploading, switch back to the Knowledge bases page on the Amazon Bedrock console.
  7. Select the data source name and choose Sync.

This will sync the documents from the S3 bucket to the OpenSearch Serverless vector database. The process can take over 10 minutes.
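
If you prefer to trigger the same sync programmatically rather than with the console Sync button, a boto3 sketch could look like the following; the knowledge base and data source IDs are placeholders that you can look up on the console or in the CloudFormation outputs.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Sketch: start the same ingestion (sync) job programmatically.
# Knowledge base ID and data source ID below are placeholders.
job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId="KB1234567890",
    dataSourceId="DS1234567890",
    description="Sync AWS Well-Architected documents from S3",
)
print(job["ingestionJob"]["status"])  # e.g., STARTING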

When the sync is complete, the data source will show a Status of Available.

Test the Slack application integration with Amazon Bedrock

Complete the following steps to test the integration:

  1. Open the Slack channel selected in the previous steps and enter /ask-aws.

The Slack application will be displayed.

  2. Choose the Slack application and enter your prompt. For this test, we use the prompt “Tell me about the AWS Well-Architected Framework.”

The Slack application will respond with Processing Request and a copy of the entered prompt. The application will then provide a response to the prompt.

  3. To test that the guardrails are working as required, write a prompt that will invoke a guardrail intervention.

When an intervention occurs, you will receive the following predefined message as your response.

Clean up

Complete the following steps to clean up your resources:

  1. From your terminal, run the following command, replacing the values with the token and the signing secret created earlier:
    cdk destroy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%

  2. When prompted, enter y to confirm the deletion of the deployed stack.

Conclusion

In this post, we implemented a solution that integrates an Amazon Bedrock knowledge base with a Slack chat channel to allow business users to ask natural language questions of an unstructured dataset from a familiar interface. You can use this solution for multiple use cases by configuring it to different Slack applications and populating the knowledge base with the relevant dataset.

To get started, clone the GitHub repo and enhance your customers’ interactions with Amazon Bedrock. For more information about Amazon Bedrock, see Getting started with Amazon Bedrock.


About the Authors

Barry Conway is an Enterprise Solutions Architect at AWS with 20 years of experience in the technology industry, bridging the gap between business and technology. Barry has helped banking, manufacturing, logistics, and retail organizations realize their business goals.

Dean Colcott is an AWS Senior GenAI/ML Specialist Solution Architect and SME for Amazon Bedrock. He has areas of depth in integrating generative AI outcomes into enterprise applications, full stack development, video analytics, and computer vision and enterprise data platforms.
