Efficiently train models with large sequence lengths using Amazon SageMaker model parallel

Large language models (LLMs) have witnessed an unprecedented surge in popularity, with customers increasingly using publicly available models such as Llama, Stable Diffusion, and Mistral. Across diverse industries—including healthcare, finance, and marketing—organizations are now engaged in pre-training and fine-tuning increasingly larger LLMs, which often have billions of parameters and longer input sequence lengths. Although these advancements offer remarkable capabilities, they also present significant challenges. Longer sequence lengths and the sheer number of trainable parameters demand innovative approaches to model development and deployment. To maximize performance and optimize training, organizations frequently need to employ advanced distributed training strategies.

In this post, we demonstrate how the Amazon SageMaker model parallel library (SMP) addresses this need through support for new features such as 8-bit floating point (FP8) mixed-precision training for accelerated training performance and context parallelism for processing large input sequence lengths, expanding the list of its existing features.

We guide you through a step-by-step implementation, demonstrating how to accelerate workloads with FP8 and work with longer sequence lengths using context parallelism, with minimal code changes to your existing training workflow.

The implementation of these new SMP features promises several advantages for customers working with LLMs. First, it can lead to lower costs to convergence, allowing for more efficient use of resources during the training process. This results in reduced time to market, allowing organizations to deploy their optimized models more quickly and gain a competitive edge. Second, it enables training with larger dataset records, expanding the scope and complexity of tasks that can be tackled.

The following sections take a deeper look into this.

Business challenge

Businesses today face a significant challenge when training LLMs efficiently and cost-effectively. As models grow larger and more complex, organizations are using fine-tuning and continuous pre-training strategies to train these models with domain-specific data, using larger sequence lengths that can range from 8K to 128K tokens. These longer sequence lengths allow models to better understand long-range dependencies in text, generate more globally coherent outputs, and handle tasks requiring analysis of lengthy documents.

Although various strategies exist to effectively train models with billions of parameters, such as Fully Sharded Data Parallelism (FSDP), tensor parallelism (TP), and pipeline parallelism, these methods are primarily designed to distribute model parameters, gradients, and optimizer states across GPUs. This reduces memory pressure and enables efficient training of large models, but it doesn't address input data–related optimizations: none of these techniques effectively partitions along the sequence dimension. As a result, training with longer sequence lengths can still lead to out-of-memory (OOM) errors, despite using FSDP.

Working with larger sequence lengths therefore creates memory pressure and often requires innovative approaches such as FP8 and context parallelism.

How do SMP context parallelism and FP8 help accelerate model training?

SMP addresses the challenges of memory pressure by providing an implementation of context parallelism, which is a parallelization technique that partitions on the dimension of sequence length. Furthermore, it can work together with other parallelism techniques such as FSDP and TP. SMP also implements FP8 for supported models such as Llama. FP8 is a reduced-precision floating-point format that boosts efficiency by enabling faster matrix multiplications without significant accuracy loss. You can use these techniques together to train complex models that are orders of magnitude faster and rapidly iterate and deploy innovative AI solutions that drive business value.

The following sections dive deep into the implementation details for each of these features in SMP.

Context parallelism

Context parallelism is a model parallelism technique that allows the model to train with long sequences. It's a parallelization scheme that partitions a model's activations along the sequence dimension. During training with the SMP context parallel strategy, the inputs are partitioned along the sequence dimension before being fed to the model. Because activations are partitioned along the sequence dimension, we need to consider how our model's computations are affected. Layers that don't have inter-token dependency during computation, such as the embedding layers and the multilayer perceptron (MLP) layers in a transformer architecture, don't require special consideration. The layers that do have inter-token dependency are the attention layers: as the attention computation shows, the query projections (Q) need to interact with the tokens of the key (K) and value (V) projections.

Because each rank only has a partition of K and V, we require an AllGather operation to collect the keys and values from the other ranks. As detailed in the following figure, we consider a context parallel scheme with context parallel degree 2 for a causal language model. Thus, GPU 0 has the first half of the input sequence and GPU 1 has the other half. During the forward pass, the non-attention layers compute their activations as normal. For the attention computation, an AllGather operation is performed for K and V across the context parallel ranks belonging to GPU 0 and GPU 1. To conserve memory, the K and V tensors obtained from the AllGather operation are discarded after the attention computation is completed. Consequently, during the backward pass, we require the same AllGather operation for K and V. Additionally, after the attention backward pass, a ReduceScatter operation is performed to scatter the gradients to the corresponding context parallel ranks.

Unlike other model parallel schemes such as tensor parallelism, context parallelism keeps the model parameters intact. Thus, there are no additional communication collectives for parameters required for context parallelism.
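
SMP implements context parallelism through NVIDIA Transformer Engine (see the next section), but the communication pattern described above can be illustrated with a short conceptual PyTorch sketch. This is not SMP's actual implementation; it assumes each rank already holds one equally sized sequence shard and a context parallel process group, and it omits the per-rank causal-mask offsets a real implementation needs.

import torch
import torch.distributed as dist

def context_parallel_attention(q_local, k_local, v_local, cp_group):
    # q_local, k_local, v_local have shape [batch, heads, local_seq_len, head_dim]
    # and hold only this rank's shard of the sequence dimension.
    cp_size = dist.get_world_size(group=cp_group)

    # AllGather the K and V shards from every context parallel rank, then
    # concatenate them back along the sequence dimension (dim=2).
    k_shards = [torch.empty_like(k_local) for _ in range(cp_size)]
    v_shards = [torch.empty_like(v_local) for _ in range(cp_size)]
    dist.all_gather(k_shards, k_local, group=cp_group)
    dist.all_gather(v_shards, v_local, group=cp_group)
    k_full = torch.cat(k_shards, dim=2)
    v_full = torch.cat(v_shards, dim=2)

    # Local queries attend over the full set of keys and values. Proper causal
    # masking (offset by this rank's position in the sequence) is omitted for brevity.
    out = torch.nn.functional.scaled_dot_product_attention(q_local, k_full, v_full)

    # The gathered K/V tensors can be freed here; during the backward pass the same
    # AllGather is repeated and a ReduceScatter returns the K/V gradients to their ranks.
    return out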

Supported models

SMP supports context parallelism using NVIDIA Transformer Engine, and it seamlessly integrates with other model parallelism techniques such as Fully Sharded Data Parallel (FSDP) and tensor parallelism (TP). SMP v2.6 supports the Llama 3.1 (and prior Llama models) and Mistral model architectures for context parallelism.

Mixed Precision Training with FP8

As shown in the following figure, FP8 is a data type supported by NVIDIA H100 and H200 GPUs that enables efficient deep learning workloads. The FP8 format occupies only 8 bits of memory, half that of its BF16 or FP16 counterparts, significantly reducing computational costs for operations such as matrix multiplication. The compute throughput for matrix operations such as multiplications and convolutions is significantly higher on 8-bit float tensors than on 32-bit float tensors. FP8 precision reduces the data footprint and computational requirements, making it ideal for large-scale models where memory and speed are critical.

Delving deeper into FP8’s architecture, we discover two distinct subtypes: E4M3 and E5M2. The E4M3 configuration, with its 1 sign bit, 4 exponent bits, and 3 mantissa bits, offers superior precision but a limited dynamic range. This makes it ideal for the forward pass in model training. Conversely, E5M2, featuring 1 sign bit, 5 exponent bits, and 2 mantissa bits, boasts a broader dynamic range at the expense of reduced precision. This configuration excels in the backward pass, where precision is less critical, but a wider range proves advantageous.

The transition to mixed precision training with FP16 or BF16 has historically necessitated static or dynamic loss scaling to address convergence issues that stemmed from reduced precision in gradient flow. This challenge is further amplified in FP8 due to its narrower range. To combat this, the Transformer Engine introduced an innovative solution called DelayedScaling. This technique selects scaling factors based on the maximum observed value for each tensor from previous iterations. Although DelayedScaling maximizes the performance benefits of FP8 computation, it does come with a memory overhead for storing each tensor's maximum value history. However, despite the additional overhead, the improved throughput observed with 8-bit tensor computations makes this approach valuable.
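
SMP turns FP8 on through a training script flag (shown later in this post), but outside of SMP the underlying Transformer Engine primitives look roughly like the following sketch. The layer size, batch shape, and recipe settings are illustrative only.

import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID uses E4M3 for the forward pass and E5M2 for the backward pass,
# matching the precision/range trade-off described above.
fp8_recipe = DelayedScaling(
    fp8_format=Format.HYBRID,
    amax_history_len=16,      # how many iterations of per-tensor max values to keep
    amax_compute_algo="max",  # derive the scaling factor from the max of that history
)

layer = te.Linear(4096, 4096, bias=True, params_dtype=torch.bfloat16).cuda()
x = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)  # the matrix multiplication runs in FP8 on H100/H200 GPUs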

Supported models

SMP supports FP8 mixed precision training using NVIDIA Transformer Engine and keeps compatibility with PyTorch MixedPrecision. This means that you can use FP8 training for supported layers and half-precision using PyTorch Automatic Mixed Precision for others. SMP v2.6 supports the following model architectures for FP8 training: Llama 3.1 (and prior Llama models), Mixtral, and Mistral.

More details about FP8 can be found at FP8 Formats For Deep Learning.

Solution overview

We can use SMP with both Amazon SageMaker training jobs and Amazon SageMaker HyperPod.

For this post, we demonstrate SMP implementation on SageMaker training jobs.

Launching a machine learning (ML) training cluster with Amazon SageMaker training jobs is a seamless process that begins with a straightforward API call, AWS Command Line Interface (AWS CLI) command, or AWS SDK interaction. After they’re initiated, SageMaker training jobs spin up the cluster, provisioning the specified number and type of compute instances.

In our example, we use a single ml.p5.48xlarge instance, though we illustrate the use of four GPUs for demonstration purposes. The training data, securely stored in Amazon Simple Storage Service (Amazon S3), is copied to the cluster. Each record sequence (Seq0) is split into multiple subsequences, with one subsequence assigned to each GPU in our cluster.

Our implementation uses the FP8 capabilities of SMP to execute model training on NVIDIA H100 GPUs and showcases context parallelism capabilities. Because of the flexibility of SageMaker, you can scale your compute resources as needed, accommodating workloads across a range of sizes. SageMaker creates a resilient training cluster, handles orchestration, closely monitors the infrastructure, and recovers from faults, providing a smooth and uninterrupted training experience. Furthermore, the cost-effective design of SageMaker training jobs automatically terminates the cluster upon completion of the training job, with billing calculated down to the second of actual training time used. This combination of power, flexibility, and cost-efficiency makes SageMaker an ideal service for ML practitioners of all levels.

The following diagram shows the solution architecture.

The following walkthrough shows how you can train a Llama 3.1 8B Instruct model using the PubMed tokenized dataset with a sequence length of approximately 16K tokens. We use the SMP context parallelism implementation to enable training for this large sequence length. We compare two approaches: one without context parallelism and another with it. This comparison highlights the importance of context parallelism when working with LLMs and datasets containing long sequences.

Additionally, we conduct a comparative run on p5.48xlarge instances with context parallelism enabled, both with FP8 enabled and disabled. This demonstration will showcase the incremental throughput benefits we can achieve by enabling FP8-based training alongside context parallelism.

In summary, the implementation follows these four steps:

  1. Set up libraries and process data
  2. Run training without context parallelism
  3. Run training with context parallelism enabled to track memory optimizations
  4. Run training with FP8 enabled to gain further performance

The following flow diagram shows these four steps.

Prerequisites

To implement the solution, you need to have the following prerequisites in place:

  1. Create a Hugging Face User Access Token and get access to the gated repository meta-llama/Llama-3.1-8B on Hugging Face.
  2. Request a service quota for 1x ml.p4d.24xlarge and 1x ml.p5.48xlarge on Amazon SageMaker. To request a service quota increase, on the AWS Service Quotas console, choose AWS services, Amazon SageMaker, and then choose one ml.p4d.24xlarge and one ml.p5.48xlarge training job usage.
  3. Create an AWS Identity and Access Management (IAM) role with managed policies AmazonSageMakerFullAccess, AmazonEC2FullAccess to give required access to SageMaker to run the examples.

This walkthrough is for demonstration purposes only. You should adjust this to your specific security requirements for production. Adhere to the principle of least privilege while defining IAM policies in production.

  4. Create an Amazon SageMaker Studio domain (refer to Quick setup to Amazon SageMaker) to access Jupyter notebooks.

Solution walkthrough

To implement the solution, use the instructions in the following steps.

Set up libraries and process data

To set up libraries and process data, follow these instructions. The following flow diagram shows step 1 highlighted.

  1. Enter the following commands to install the relevant Hugging Face and SageMaker libraries:
    %pip install --upgrade "sagemaker>=2.233"
    %pip install "datasets==2.14.5"
    %pip install transformers

  2. Load the PubMed dataset and tokenize it

In this example, we use the PubMed Scientific Papers dataset, containing 133,215 biomedical research articles. For our experiment, we select 1,000 papers split 80/20 for training and validation. Using the Meta Llama 3 tokenizer, we process each paper into sequences of 16,384 tokens.

The dataset undergoes two main processing steps: tokenization with Llama’s tokenizer and grouping into fixed-length chunks of 16,384 tokens using utility function group_texts. This uniform sequence length enables even distribution across GPUs while maintaining the natural structure of the scientific papers.

import datasets
from datasets import load_dataset, DatasetDict

# Load the PubMed dataset
pubmed_dataset = load_dataset(
    "scientific_papers",
    "pubmed",
    cache_dir="/home/ec2-user/SageMaker/datasets",
    download_mode="force_redownload"
)

# Create a smaller subset of the dataset for our experiment
train_test = pubmed_dataset['train'].shuffle(seed=42).select(range(1000)).train_test_split(
    test_size=0.2,
    seed=42
)

lm_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    desc=f"Grouping texts in chunks of {block_size}",
)
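
The tokenization step and the group_texts utility used above aren't shown in the snippet. The following is a minimal sketch of what they might look like, assuming the article column of the scientific_papers dataset, the gated meta-llama/Llama-3.1-8B tokenizer, and a block_size of 16,384; the workshop notebook contains the actual implementation.

from transformers import AutoTokenizer

block_size = 16384  # target sequence length per training record

# Tokenize the article bodies with the Llama 3.1 tokenizer (gated model, requires a Hugging Face token)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

def tokenize_function(examples):
    return tokenizer(examples["article"])

tokenized_datasets = train_test.map(
    tokenize_function,
    batched=True,
    remove_columns=train_test["train"].column_names,
)

def group_texts(examples):
    # Concatenate all tokenized sequences, then split them into fixed-length blocks
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
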
  3. Prepare data for the training job

In this section, we prepare the PubMed dataset for SageMaker training by managing data transfers to Amazon S3. Both training and validation splits are converted to JSON format and uploaded to designated S3 buckets, with separate paths for input data and output artifacts.

if lm_datasets["train"] is not None:
    train_dataset = lm_datasets["train"]
    train_dataset.to_json("./training.json")
    training_dataset_location = f"s3://{default_bucket}/dataset/train/"

if lm_datasets["validation"] is not None:
    eval_dataset = lm_datasets["validation"]
    eval_dataset.to_json("./validation.json")
    validation_dataset_location = f"s3://{default_bucket}/dataset/validation/"
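
The upload of those JSON files to the S3 locations isn't shown above. A minimal sketch using the SageMaker Python SDK might look like the following; default_bucket is the bucket variable defined earlier in the notebook, and the key prefixes follow the paths defined above.

import sagemaker

sess = sagemaker.Session()

# Upload the JSON files produced above to the training and validation S3 prefixes
train_s3_uri = sess.upload_data(
    path="./training.json", bucket=default_bucket, key_prefix="dataset/train"
)
val_s3_uri = sess.upload_data(
    path="./validation.json", bucket=default_bucket, key_prefix="dataset/validation"
)
print(train_s3_uri, val_s3_uri)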

  4. Set up training hyperparameters

In this configuration, we define hyperparameters for training Llama on PubMed, covering memory optimizations, training parameters, model architecture settings, and performance tuning. Starting with conservative settings (batch size=1, BF16 precision), we establish a baseline configuration that will be modified to test different optimization strategies, particularly for context parallelism experiments.

hyperparameters = {
    # Memory and optimization settings
    "activation_checkpointing": 1,
    "auto_wrap_policy": "transformer_auto_wrap_policy",
    ...
    
    # Training settings
    "train_batch_size": 1,
    "val_batch_size": 1,
    ...
    
    # Model configuration
    "vocab_size": 128256, # Vocab size from Llama 3.1 config file on Hugging Face
    "hf_pretrained_model_name_or_dir": model_id,
    
    ...
    
}

Run training without context parallelism

To run training without context parallelism, follow these instructions. The following flow diagram shows step 2 highlighted.

In this setup, we configure a baseline training job by disabling context parallelism and FP8, so each GPU processes the full 16,384-token sequence without splitting it. Memory-saving features are turned off to demonstrate the limitations and potential memory constraints of running without advanced optimizations such as context parallelism and FP8.

instance_type= "p4d.24xlarge"
instance_count= 1
hybrid_shard_degree= 8

hyperparameters.update({
    "use_smp_implementation": 0,  # Disable SMP/CP. Only FSDP is active
    "train_batch_size": 1,        # Batch size
    "max_context_width": 16384,   # Full sequence length
    "clean_cache": 0,
    "bf16": 1,                    # Use bf16
    ...
})

smp_estimator = PyTorch(
    entry_point="train.py",
    hyperparameters=hyperparameters,
    ...
    instance_type=instance_type,
    volume_size=400,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,  # Enable model parallelism but with minimal parameters
                "parameters": {
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True
                }
            }
        }
    },
    
   ...
)

smp_estimator.fit(inputs=data_channels)

The result of not using context parallelism with a large context width (16,384) is a CUDA out-of-memory error:

AlgorithmError: ExecuteUserScriptError: ExitCode 1 ErrorMessage “[rank3]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 7.83 GiB. GPU 3 has a total capacity of 39.38 GiB of which 5.53 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use.

Run training with context parallelism enabled to track memory optimizations

To run training with context parallelism enabled to track memory optimizations, follow these instructions. The following flow diagram shows step 3 highlighted.

In this configuration, we enable context parallelism while keeping FP8 disabled. By setting context parallel degree to 8, we distribute the 16,384 token sequence across all available GPUs for efficient processing. The setup includes essential context parallelism parameters and launches the training job in a background thread, allowing for unblocked notebook execution while maintaining clear job identification for comparison with other configurations.

instance_type= "p4d.24xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

smp_estimator = PyTorch(
    ...
    entry_point="train.py",
    instance_type=instance_type,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "context_parallel_degree": context_parallel_degree,
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True,
                }
            }
        }
    },
    ...
)

smp_estimator.fit(inputs=data_channels)

The result of using context parallelism with such a large context width is that the job successfully completes, as shown in the following screenshot.

We also enabled delayed parameter initialization and hybrid sharding capabilities from SMP for both preceding configurations. Delayed parameter initialization allows initializing large models on a meta device without attaching data, which can resolve limited GPU memory issues when you first load the model. This approach is particularly useful for training LLMs with tens of billions of parameters, where even CPU memory might not be sufficient for initialization. Hybrid sharding is a memory-saving technique that shards parameters within the hybrid shard degree (HSD) group and replicates parameters across groups. The HSD controls sharding across GPUs and can be set to an integer from 0 to world_size. This reduces communication volume because expensive AllGather and ReduceScatter operations are only done within a node, which performs better for medium-sized models.

Run training with FP8 enabled to gain further performance

To run training with FP8 enabled to gain further performance, follow these instructions. The following flow diagram shows step 4 highlighted.

In this fully optimized configuration, we enable both context parallelism and FP8 training using an NVIDIA P5 instance (ml.p5.48xlarge). This setup combines sequence splitting across GPUs with FP8 precision training, creating a highly efficient training environment. Using P5 instances provides the necessary hardware support for FP8 computation, so we can maximize the benefits of both memory-saving techniques.

instance_type= "p5.48xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

hyperparameters.update({
    "use_smp_implementation": 1,  # Enable SMP/CP
    "max_context_width": 16384,   # Full sequence length
    "fp8": 1,  # Enable FP8 flag
    "distributed_backend": "nccl"  # Add this line to explicitly use NCCL
    ...

})

smp_estimator = PyTorch(
    ...
    entry_point="train.py",
    instance_type=instance_type,
    instance_count=instance_count,
    distribution={
        "torch_distributed": {
            "enabled": True,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "context_parallel_degree": context_parallel_degree,
                    "hybrid_shard_degree": hybrid_shard_degree,
                    "delayed_parameter_initialization": True,
                }
            }
        }
    },
   ...
)

smp_estimator.fit(inputs=data_channels)

Start training with context parallelism, without FP8 (on a P5 instance)

To make a fair comparison with and without FP8, we do another run with context parallelism but without FP8 on an ml.p5.48xlarge instance and compare the throughput of both runs.

instance_type= "p5.48xlarge"
instance_count= 1
hybrid_shard_degree= 8
context_parallel_degree=8

hyperparameters.update({
    "use_smp_implementation": 1,  # Enable SMP/CP
    "max_context_width": 16384,   # Full sequence length
    "bf16": 1,                    # Use BF16
    "distributed_backend": "nccl"  # Add this line to explicitly use NCCL
    ...
})

# This remains the same as in the previous step
smp_estimator = PyTorch(
    ...
    )
    
smp_estimator.fit(inputs=data_channels)

If we compare both runs, we can see that the same context parallelism enabled job runs almost 10 times faster with FP8.

With FP8, speed is around 14.6 samples/second, as shown in the following screenshot.

Without FP8, speed is around 1.4 samples/second, as shown in the following screenshot.

The following table depicts the throughput improvement you get in each of the listed cases. All these cases were run on an ml.p5.48xlarge instance.

The throughput may vary based on factors such as the context width or batch size. The following numbers are what we have observed in our testing.

Configuration (ml.p5.48xlarge; CP on 8 GPUs, train batch size 4) | Observed samples speed | Observed throughput
No context parallelism and no FP8 | torch.OutOfMemoryError: CUDA out of memory | torch.OutOfMemoryError: CUDA out of memory
Context parallelism only | 2.03 samples/sec | 247 TFLOPS/GPU
Context parallelism + FP8 | 3.05 samples/sec | 372 TFLOPS/GPU

Cleanup

To clean up your resources to avoid incurring more charges, follow these steps:

  1. Delete any unused SageMaker Studio resources.
  2. Optionally, delete the SageMaker Studio domain.
  3. Delete any S3 buckets created.
  4. Verify that your training job isn’t running anymore! To do so, on your SageMaker console, choose Training and check Training jobs.

To learn more about cleaning up your resources provisioned, check out Clean up.

Conclusion

In this post, we demonstrated the process of setting up and running training jobs for the PubMed dataset using the Llama 3.1 8B Instruct model, both with and without context parallelism. We also showcased how to enable FP8 based training for even faster throughputs.

Key takeaways:

  • For datasets that have long sequence lengths, we observe that using context parallelism helps avoid OOM errors.
  • For faster training, we can enable FP8 based training and combine it with context parallelism for increased throughput. In this notebook, we observed that throughput goes up tenfold when we enable FP8 with context parallelism.

As next steps, try out the above example by following the notebook steps at sagemaker-distributed-training-workshop.

Special thanks to Roy Allela, Senior AI/ML Specialist Solutions Architect for his support on the launch of this post.


About the Authors

Kanwaljit Khurmi is a Principal Worldwide Generative AI Solutions Architect at AWS. He collaborates with AWS product teams, engineering departments, and customers to provide guidance and technical assistance, helping them enhance the value of their hybrid machine learning solutions on AWS. Kanwaljit specializes in assisting customers with containerized applications and high-performance computing solutions.

Surya Kari is a Senior Generative AI Data Scientist at AWS. With a background in computer vision and AI devices, his current specializations include LLM training, multi-modal RAG, vision-language models, and edge computing.

Arun Kumar Lokanatha is a Senior ML Solutions Architect with the Amazon SageMaker team. He specializes in LLM training workloads, helping customers build LLM workloads using SageMaker HyperPod, SageMaker training jobs, and SageMaker distributed training. Outside of work, he enjoys running, hiking, and cooking.

Suhit Kodgule is a Software Development Engineer with the AWS Artificial Intelligence group working on deep learning frameworks. In his spare time, he enjoys hiking, traveling, and cooking.

Anirudh Viswanathan is a Sr Product Manager, Technical – External Services with the SageMaker Training team. He holds a Masters in Robotics from Carnegie Mellon University, an MBA from the Wharton School of Business, and is named inventor on over 40 patents. He enjoys long-distance running, visiting art galleries, and Broadway shows.

Getting started with Amazon Bedrock Agents custom orchestrator

Generative AI agents are designed to interact with their environment to achieve specific objectives, such as automating repetitive tasks and augmenting human capabilities. By orchestrating multistep workflows that adapt to evolving goals in real time, these agents increase productivity, reduce errors, and deliver more personalized experiences. To manage these complex workflows effectively, agents rely on an orchestration strategy that coordinates interactions with various tools, knowledge sources, and other agents. This orchestration allows agents to analyze data, interpret context, sequence tasks, and adapt to shifting requirements, making sure that workflows remain efficient, accurate, and resilient.

Amazon Bedrock Agents streamlines the development of generative AI applications by offering a fully managed solution that uses foundation models (FMs) and augmenting tools to autonomously run tasks and achieve objectives through orchestrated, multistep workflows. Using the default orchestration strategy, reasoning and action (ReAct), users can quickly build and deploy agentic solutions. ReAct is a general problem-solving approach that uses the FM’s planning capabilities to dynamically adjust actions at each step. Although ReAct offers flexibility by allowing agents to continually reevaluate their decisions based on shifting requirements, its iterative approach can lead to higher latency when many tools are involved.

For greater orchestration control, Amazon Bedrock Agents has launched the custom orchestrator feature, which users can use to fine-tune agent behavior and manage tool interactions at each workflow step. This customization allows organizations to tailor agent functionality to their specific operational needs, improving precision, adaptability, and efficiency. In this post, we explore how custom orchestrators work and demonstrate their application with the default Bedrock Agent’s ReAct and reasoning without observation (ReWoo) examples.

Custom orchestrator overview

Implemented by users as an AWS Lambda function, the Amazon Bedrock Agents custom orchestrator offers granular control over task planning, completion, and verification. Unlike the default ReAct orchestration method, which prioritizes decision transparency and step-by-step reasoning, the custom orchestrator gives users the ability to define strategies that are better aligned with specific use case requirements. In ReAct, FM and tool invocations follow a sequential, step-by-step process, where each action depends on the outcome of the previous one. This structured, linear approach offers transparency, making it easier to trace the reasoning behind each action and decision while also promoting consistency through predictable workflows. Although ReAct’s design provides incremental adaptability by allowing agents to reassess actions at each step, its sequential structure may introduce delays when rapid parallel actions are required or when workflows demand instant responsiveness across multiple steps. This makes ReAct less suited to scenarios where speed and rapid sequential processing are paramount, such as in complex, high-volume workflows.

The custom orchestrator offers an alternative, more flexible approach, which users can use to define orchestration strategies that are more closely aligned with their specific requirements. With real-time adjustments and precise control over FM and tool interactions, users can create workflows that provide the optimal balance of performance, accuracy, and resilience. After a custom orchestrator is created, it can be reused across multiple agents by updating a single reference when configuring new agents.

Key benefits of the custom orchestrator include:

  • Full control over orchestration strategies – Tailor agent workflows for optimal performance across various metrics, such as accuracy, speed, and resilience. Use Amazon Bedrock Agents built-in integrations with action groups, knowledge bases, and guardrails to streamline interactions.
  • Real-time adjustments – Dynamically adjust agent actions based on the current context, tool outputs, or evolving user requirements so the agent adapts efficiently and effectively to new information.
  • Reusability and consistency – After an orchestration strategy is created, it can be implemented across all relevant agents, saving time and promoting consistency.

In this post, we compare an Amazon Bedrock agent that uses the default ReAct prompts with an Amazon Bedrock agent that uses a custom orchestration implementing the ReWoo strategy. First, we examine the underlying contracts and state management principles that drive the custom orchestrator's adaptability.

Custom orchestrator workflow management

The custom orchestrator enables dynamic decision-making and adaptable workflow management through contract-based interactions between Amazon Bedrock Agents and AWS Lambda. The Lambda function acts as the orchestration engine, processing contextual inputs—such as state, conversation history, session parameters, and user requests—to generate instructions and define the state for subsequent actions. Upon receiving user input, Amazon Bedrock Agents uses the custom orchestrator logic and the Amazon Bedrock Converse API to manage interactions between the underlying FM and various tools, such as action groups, knowledge bases, and guardrails.

The following diagram illustrates the flow of interactions between the user, Amazon Bedrock Agents, and the custom orchestrator, which manages the workflow:

The custom orchestrator workflow includes the following steps:

  1. User input – The process begins when the user submits a request or query. This input is sent to Amazon Bedrock Agents, initiating the workflow.
  2. Custom orchestrator initiation – Amazon Bedrock Agents passes the user input to the custom orchestrator, which initiates the orchestration process in the START state. The orchestrator guides the workflow through intermediate steps to process the input.
  3. Tool interactions – Amazon Bedrock Agents interacts with various tools to manage the request:
    • Knowledge bases – Provide relevant context or information based on user input.
    • Action groups – Invoke predefined action groups, which include:
      • Lambda functions for custom logic
      • Return of control (RoC) functions to sequence steps
      • Code interpreter (CI) functions for code execution
    • Guardrails – Make sure responses comply with predefined criteria or safety standards.
    • Converse API – Manages conversation flow and processes natural language responses between Amazon Bedrock Agents and the FM.
    • Session attributes – Manage session-specific data, such as long-term memory, session attributes, and knowledge base configurations, personalizing and maintaining context across interactions.
  4. Custom orchestrator workflow – As Amazon Bedrock Agents interacts with various tools, the custom orchestrator tracks progress through states, adjusting the workflow as necessary. After the workflow reaches completion, the orchestrator signals it using the FINISH action event.
  5. Final output – Amazon Bedrock Agents generates and delivers the final output to the user, completing the interaction.

This workflow highlights how Amazon Bedrock Agents, guided by the custom orchestrator, coordinates various steps and manages the flow of information to fulfill the user request. Through state transitions, the orchestrator makes sure that each action follows a structured sequence, enabling dynamic and flexible control over the workflow. Next, we explore how state transitions and contract-based interactions structure customizable workflow management.

State and event management

State management is central to guiding the progression of interactions and determining the next steps in the workflow. States represent specific stages or conditions, allowing the orchestration engine to track and manage actions. These states make sure that the workflow proceeds in an orderly manner, with each action dependent on the current state. States are passed in the request schema from Amazon Bedrock Agents to the custom orchestrator, which is handled through the Lambda function. In contrast, events are actions that drive state transitions or invoke further actions. Events are passed in the response schema from AWS Lambda to Amazon Bedrock Agents.

Each interaction between the agent and the custom orchestrator starts with a “START” state and ends with a “FINISH” event. During the orchestration, the custom orchestrator Lambda function can receive “START”, “MODEL_INVOKED”, “TOOL_INVOKED”, “APPLY_GUARDRAILS_INVOKED”, or a custom-defined state as input and will output “FINISH”, “INVOKE_MODEL”, “INVOKE_TOOL”, “APPLY_GUARDRAILS”, or a custom-defined event. The flow between states and events is shown in the following figure.

Each state transition occurs in response to specific events, allowing the workflow to adapt dynamically based on input and context. For example, when a FINISH event response is received, the orchestrator is signaling that the workflow is complete. The custom orchestrator Lambda function then streams the output back to Amazon Bedrock Agents, which streams it to the user. This mechanism provides a smooth and responsive interaction, enabling effective orchestration of tasks. The request and response contract-based interactions are handled through JSON events, as detailed here.

By using these contract-based interactions, Amazon Bedrock Agents and the custom orchestrator Lambda function collaborate effectively to process contextual inputs, manage state transitions, and produce accurate, tailored responses. This flexible architecture is critical for handling complex workflows that require real-time adjustments and precise control over the agent’s behavior.
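
To make the contract concrete, the following is a minimal skeleton of a custom orchestrator Lambda function. The field names (state, input, actionEvent, output) and the tool-detection logic are simplified assumptions for illustration; consult the custom orchestrator JSON contract referenced above for the exact request and response schemas.

def lambda_handler(event, context):
    # Incoming state from Amazon Bedrock Agents, for example START, MODEL_INVOKED, or TOOL_INVOKED
    state = event.get("state")
    payload = event.get("input")

    if state == "START":
        # First turn: send the user request to the foundation model
        return {"actionEvent": "INVOKE_MODEL", "input": payload}

    if state == "MODEL_INVOKED":
        # Decide whether the model asked for a tool or produced a final answer
        model_output = str(payload or "")
        if "tool_use" in model_output:  # placeholder tool-detection logic
            return {"actionEvent": "INVOKE_TOOL", "input": model_output}
        return {"actionEvent": "FINISH", "output": model_output}

    if state == "TOOL_INVOKED":
        # Feed the tool result back to the model for the next reasoning step
        return {"actionEvent": "INVOKE_MODEL", "input": payload}

    # Fall through: end the interaction and stream the output back to the agent
    return {"actionEvent": "FINISH", "output": payload}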

Custom orchestrator workflow patterns: ReAct and ReWoo

To illustrate the power and flexibility of the custom orchestrator, the next section examines two orchestration strategies—default Bedrock Agent’s ReAct and ReWoo—and explores how each addresses trade-offs in agent workflows. To further explore the flexibility and potential of the custom orchestrator, consider a restaurant example use case. In this use case, we have an Amazon Bedrock Agent that has one action group that can connect to three APIs: create reservation, update existing reservation, and delete reservation. The agent also connects with a knowledge base that indexes the different menus for the food served in this restaurant. The following diagram shows the agent architecture.

Default orchestrator: ReAct

The default Amazon Bedrock Agents ReAct approach is an iterative decision-making process where the model analyzes each step, deciding on the next action based on the information gathered at each stage, as shown in the following figure.

This method provides transparency and allows for a clear, step-by-step breakdown of actions, making it well-suited for workflows that benefit from incremental adjustments. Although effective in dynamic environments where real-time reevaluation is advantageous, ReAct's sequential structure can introduce latency when a complex plan is required. For instance, in the restaurant assistant example, simple queries such as “What do you serve for dinner?” or “Can you make a reservation for two people, at 7pm tonight?” produce a plan with a single action, so latency stays low. However, a more complex query such as “What do you serve for dinner? Can you make a reservation for four people, at 9pm tonight?” produces a plan with multiple steps. At each step, the results are observed and the plan is adapted, as shown in the following diagram. Notice that the plan is implicit, and the thought provides the next step. After each step, a new model invocation is made to determine the next step or to provide the final answer.

ReWoo

The ReWoo technique optimizes performance by generating a complete task plan up front and executing it without checking intermediate outputs, as shown in the following flow diagram.

This approach minimizes model calls, significantly reducing response times for queries that require interaction with multiple tools. For tasks where speed is prioritized over iterative adjustments—or where the intermediate reasoning steps should remain hidden for security reasons—ReWoo offers clear advantages over the default ReAct strategy.

A key source of agent latency is the number of FM calls required to complete a task. Although the default ReAct strategy requires at least N+1 calls for N steps, ReWoo reduces this to at most two calls to the model for any number of tools, cutting down model invocations and, consequently, response time. For example, for a task that takes 9 seconds with three model invocations with ReAct, the difference would be marginal with ReWoo because the task would still take two model invocations. However, as the complexity scales, the latency difference becomes bigger. For instance, a task taking 18 seconds with six model invocations could take only 9 seconds and two model invocations with ReWoo—a difference that scales with the complexity of the workflow.

When analyzing the query “What do you serve for dinner? Can you make a reservation for four people, at 9pm tonight,” with ReWoo the agent will create a plan to access the knowledge base for the dinner menu information and the action group to create a new dinner reservation without validating intermediate steps as shown in the following video clip.

When running this query with an agent using Anthropic’s Claude Sonnet 3.5 v2, we observed a 50–70% latency reduction for the complex query. You can find the implementation of this solution in our GitHub repository amazon-bedrock-samples.

It's important to note that although ReWoo has advantages for speed, it does have a more complex prompt, and you need to build a parser for the output, which makes it a more difficult strategy to implement. This is one reason why you should weigh speed, accuracy, and complexity of the solution when creating a new orchestration strategy.

Conclusion

In this post, we explored how Amazon Bedrock Agents simplifies the orchestration of generative AI workflows, particularly with the introduction of the custom orchestrator feature. You can use the custom orchestrator to fine-tune and optimize agentic workflows that align more closely with specific business and operational needs. We outlined the feature’s key benefits, including full control over orchestration, real-time adjustments, and reusability, followed by a breakdown of how it manages state transitions and contract-based interactions between Amazon Bedrock Agents and AWS Lambda.

We then dove deeper into the default ReAct and a custom ReWoo orchestration strategies, and discussed the trade-offs between flexibility and performance. Through the detailed workflow management, state events, and contract interactions applied to a custom ReWoo implementation, we highlighted how the custom orchestrator adapts to dynamic conditions, and you can therefore build more efficient and accurate AI applications. We also illustrated examples of simplified ReAct and ReWoo orchestration strategies and the trade-offs between flexibility and performance.

To learn more about custom orchestrator techniques and get started with end-to-end examples, refer to our GitHub repository.


About the Authors

Kyle T. Blocksom is a Sr. Solutions Architect with AWS based in Southern California. Kyle’s passion is to bring people together and leverage technology to deliver solutions that customers love. Outside of work, he enjoys surfing, eating, wrestling with his dog, and spoiling his niece and nephew.

Maira Ladeira Tanke is a Tech Lead Amazon Bedrock for Generative AI Agents at AWS. With a background in machine learning, she has over 10 years of experience architecting and building AI applications with customers across industries. As a technical lead, she helps customers accelerate their achievement of business value through generative AI solutions on Amazon Bedrock. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.

Mark Roy is a Principal Machine Learning Architect for AWS, helping customers design and build generative AI solutions. His focus since early 2023 has been leading solution architecture efforts for the launch of Amazon Bedrock, the flagship generative AI offering from AWS for builders. Mark’s work covers a wide range of use cases, with a primary interest in generative AI, agents, and scaling ML across the enterprise. He has helped companies in insurance, financial services, media and entertainment, healthcare, utilities, and manufacturing. Prior to joining AWS, Mark was an architect, developer, and technology leader for over 25 years, including 19 years in financial services. Mark holds six AWS certifications, including the ML Specialty Certification.

John Baker is a Principal SDE at AWS where he works on Amazon Bedrock and specifically Amazon Bedrock Agents. He has been with Amazon for more than 10 years and has worked across AWS, Alexa, and Amazon.com. In his spare time, John enjoys skiing and other outdoor activities throughout the Pacific Northwest.

Sudip Dutta is a senior Software Developer engineer leading the development of Amazon Bedrock Agents custom orchestrator. With more than 17 year of experience developing distributed systems and architectures he has worked at AWS for the past 6 years focusing on ML and AI services such as Bedrock and Lex. On his free time Sudip enjoys hiking in the forest of pacific northwest or reading mystery novels!

Use Amazon Bedrock Agents for code scanning, optimization, and remediation

Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that best suits your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your application using Amazon Web Services (AWS) tools without having to manage any infrastructure.

For enterprises in the realm of cloud computing and software development, providing secure code repositories is essential. As sophisticated cybersecurity threats become more prevalent, organizations must adopt proactive measures to protect their assets. Amazon Bedrock offers a powerful solution by automating the process of scanning repositories for vulnerabilities and remediating them. This post explores how you can use Amazon Bedrock to enhance the security of your repositories and maintain compliance with organizational and regulatory standards.

This solution demonstrates how Amazon Bedrock Agents can be configured to scan a specific code repository, remediate vulnerabilities, and push the changes to a new branch. This approach can accelerate development, reduce errors, and adhere to security guidelines.

Solution overview

There are three high-level steps to deploy the solution:

  1. Configure the Amazon Bedrock Agent
  2. Configure the AWS Lambda function for the action group
  3. Add the action group to the Amazon Bedrock agent

There are two key steps in the architecture, as illustrated in the following diagram:

  1. The user provides the necessary information through the Amazon Bedrock agent chat console. They supply the code repository URL, such as https://github.com/abc/test, and specify the branch name to scan, for instance, main. Then they list the folders to exclude from the scan, such as test, and specify file extensions to exclude, such as .md and .txt. Then they provide a new branch name where the remediated code will be uploaded.
  2. The Amazon Bedrock agent forwards the details to an action group that invokes a Lambda function. This function retrieves the code, scans it for vulnerabilities using a preselected large language model (LLM), applies remediation, and pushes the remediated code to a new branch for user validation. The excluded folders and file extensions aren’t scanned. Upon completion, the action group (Lambda function) sends the information back to the Amazon Bedrock agent, which then displays the status to the user.

Figure 1. Architecture Diagram

Prerequisites

To implement the solution, you need the following:

Configure the Amazon Bedrock agent

To configure the Amazon Bedrock agent, complete the following steps:

  1. On the Amazon Bedrock console, choose Agents in the navigation pane, then choose Create Agent.
  2. (Optional) Provide agent details, including agent name and description.
  3. Grant the agent permissions to AWS services through the IAM service role. This gives your agent access to required services, such as Lambda.
  4. Select an FM in Amazon Bedrock (such as Anthropic’s Claude 3 Sonnet).
  5. To scan a code repository and remediate vulnerabilities through Amazon Bedrock Agents, attach the following instruction to the agent:

You are a code scanning and remediating AI assistant. Greet the user and ask user for repository_url and branch_name that needs to be scanned. Ask user for list of folders that needs to be excluded from scanning and also ask user for list of specific file extensions that needs to be excluded from scanning. Ask user new branch name to push the remediated code. Pass those inputs to trigger code-scan-remediation action group.

Configure the Lambda for the action group

After initial agent configuration and adding the preceding instruction to the agent, you create one Lambda function that will be used for the action group.

Create a Lambda function designed to scan a code repository for vulnerabilities, remediate the vulnerabilities, and push the changes to a new user-specified branch. This function will be used by the action group, which will be invoked by the Amazon Bedrock agent following the user’s input of the code repository URL, branch name, and the list of folders and file extensions to exclude from the scan. Reference to the Lambda code. Confirm that the Lambda function has the required IAM permissions and set up a Resource-based policy on the Lambda function to allow Amazon Bedrock Agent to invoke the Lambda using the lambda:InvokeFunction action. Refer to the policy here.
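
As a rough illustration of that resource-based policy, the following boto3 call grants the Amazon Bedrock service permission to invoke the function. The function name and agent ARN are placeholders for your own resources; the linked policy shows the exact statement used by this solution.

import boto3

lambda_client = boto3.client("lambda")

# Allow the Amazon Bedrock agent to invoke the action group Lambda function
lambda_client.add_permission(
    FunctionName="code-scan-remediation",  # placeholder function name
    StatementId="AllowBedrockAgentInvoke",
    Action="lambda:InvokeFunction",
    Principal="bedrock.amazonaws.com",
    SourceArn="arn:aws:bedrock:us-east-1:111122223333:agent/AGENT_ID",  # placeholder agent ARN
)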

Add the action group to the Amazon Bedrock agent

Complete the following steps to add the action groups to the Amazon Bedrock agent:

  • Add an action group to the Amazon Bedrock agent.
  • Assign a descriptive name to the action group and detail the function in the description field. This helps clarify the purpose of the action group within the workflow.
  • For Action group type, select Define with function details.
  • For Action group invocation, select the Lambda function that you have created previously.

This function runs the business logic required when an action is invoked. Make sure to choose the correct version of the Lambda function and that the GitHub token is set as an environment variable. For more on how to configure Lambda functions for action groups, refer to Configure Lambda functions to send information an Amazon Bedrock agent elicits from the user.

  • For the Action group function 1, select JSON Editor and add the required parameters. Reference to the JSON file.

The following screenshot shows an example of the user interaction with Amazon Bedrock Agents.

Amazon Bedrock Agent sample interaction

Figure 2. User Interaction with Amazon Bedrock Agent

The following screenshot shows an example of remediated code.

Example output

Figure 3. Sample difference of Actual and Remediated Code 

Best practices

Follow these best practices:

  • Add automation tests to validate the code before committing it to the repository and review the remediated code before merging it into the default branch
  • Use descriptive branch names when creating new branches during remediation to maintain clear version control
  • Configure IAM roles and permissions with the principle of least privilege to secure the Amazon Bedrock agent and Lambda functions
  • Update prompts to target and remediate use-case specific vulnerabilities

Clean up

The services used in this demo can incur costs. Complete the following steps to clean up your resources:

  1. Delete the Lambda function if it’s no longer required
  2. Delete the action group and agents you created
  3. Remove the generated branch from the GitHub repository

Conclusion

Amazon Bedrock Agents uses generative AI to transform code repositories by scanning for vulnerabilities and automatically applying fixes. This capability is essential for engineers because it speeds up the process of securing code and maintaining compliance with established best practices from the outset.

The interactive features of Amazon Bedrock Agents automate the vulnerability scanning and remediation process, not only streamlining the initial setup but also significantly enhancing ongoing code maintenance. Although this post focuses on code scanning and remediation, the interactive capabilities of Amazon Bedrock Agents can be applied across various AWS services, offering a dynamic and comprehensive solution for managing and optimizing cloud infrastructure.

Are you ready to streamline your cloud deployment process with the generative AI of Amazon Bedrock? Start by exploring the Amazon Bedrock User Guide to learn how it can facilitate your organization’s transition to the cloud. For specialized assistance, consider engaging with AWS Professional Services to maximize the efficiency and benefits of using Amazon Bedrock.

Embrace the potential for a swift, secure, and efficient cloud transformation with Amazon Bedrock. Take the first step today and discover how using generative AI can revolutionize your approach to cloud infrastructure.


About the authors

Rama Krishna Yalla is an Associate DevOps Consultant at AWS, adept at designing scalable, reliable, and secure cloud environments. He leverages automation and CI/CD best practices to streamline software delivery, reduce downtime, and enhance operational efficiency. Rama is experienced in managing infrastructure as code (IaC) ensuring consistent and repeatable deployments. He also focuses on implementing robust monitoring and logging solutions, enabling proactive issue resolution and optimized performance. Outside of work, Rama enjoys playing badminton and often participates in local tournaments.

Akhil Raj Yallamelli is a Cloud Infrastructure Architect at AWS, specializing in architecting cloud infrastructure solutions for enhanced data security and cost efficiency. He is experienced in integrating technical solutions with business strategies to create scalable, reliable, and secure cloud environments. Akhil enjoys developing solutions focusing on customer business outcomes, incorporating generative AI (Gen AI) technologies to drive innovation and cloud enablement. He holds an MS degree in Computer Science. Outside of his professional work, Akhil enjoys watching and playing sports.

Create a generative AI assistant with Slack and Amazon Bedrock

Seamless integration of customer experience, collaboration tools, and relevant data is the foundation for delivering knowledge-based productivity gains. In this post, we show you how to integrate the popular Slack messaging service with AWS generative AI services to build a natural language assistant where business users can ask questions of an unstructured dataset.

To demonstrate, we create a generative AI-enabled Slack assistant with an integration to Amazon Bedrock Knowledge Bases that can expose the combined knowledge of the AWS Well-Architected Framework while implementing safeguards and responsible AI using Amazon Bedrock Guardrails.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API.

Amazon Bedrock Knowledge Bases provides a fully managed Retrieval Augmented Generation (RAG) workflow, a technique that fetches data from company data sources and enriches the prompt to provide more relevant and accurate responses to natural language queries. This makes Amazon Bedrock Knowledge Bases an attractive option to incorporate advanced generative AI capabilities into products and services without the need for extensive machine learning expertise.

Amazon Bedrock Guardrails enables you to implement safeguards to build and customize safety, privacy, and truthfulness protections for your generative AI applications to align with responsible AI policies. Guardrails can help prevent undesirable content, block prompt injections, and remove sensitive information for privacy, protecting your company’s brand and reputation.

This content builds on posts such as Deploy a Slack gateway for Amazon Bedrock by adding integrations to Amazon Bedrock Knowledge Bases and Amazon Bedrock Guardrails, and the Bolt for Python library to simplify Slack message acknowledgement and authentication requirements.

Solution overview

The code in the accompanying GitHub repo provided in this solution enables an automated deployment of Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails, and the required resources to integrate the Amazon Bedrock Knowledge Bases API with a Slack slash command assistant using the Bolt for Python library.

In this example, we ingest the documentation of the AWS Well-Architected Framework into the knowledge base. Then we use the integration to the Amazon Bedrock Knowledge Bases API to provide a Slack assistant that can answer user questions on AWS architecture best practices. You can replace the example documentation with your own enterprise dataset, such as your corporate, HR, IT, or security policies, or equipment user or maintenance guides.

The following diagram illustrates the high-level solution architecture.

In the following sections, we discuss the key components in more detail.

Slack integration

The Slack integration is provided through the Slack Bolt Library for Python running in the Request Processor AWS Lambda function. The Slack Bolt Library handles authentication and permissions to the Slack application we build, and comes with built-in support for asynchronous request handling. Slack Bolt provides a dedicated user guide to deploy and run the library in a Lambda function.
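
To make this concrete, the following minimal Python sketch shows the general shape of a Bolt for Python app wrapped for Lambda with the SlackRequestHandler adapter. The command name, environment variable names, and placeholder reply are illustrative assumptions; the function in the accompanying repository additionally calls the Amazon Bedrock Knowledge Bases API, as described in the next sections.

import os
from slack_bolt import App
from slack_bolt.adapter.aws_lambda import SlackRequestHandler

# Bolt app configured for Lambda; process_before_response=True makes the
# handler finish its work before the HTTP response is returned.
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    process_before_response=True,
)

# Hypothetical slash command handler; the real function would call the
# Amazon Bedrock Knowledge Bases RetrieveAndGenerate API here.
@app.command("/ask-aws")
def handle_ask_aws(ack, command, respond):
    ack(f"Processing request: {command['text']}")
    respond("(answer generated from the knowledge base goes here)")

def lambda_handler(event, context):
    # Delegate the API Gateway proxy event to Bolt's Lambda adapter.
    return SlackRequestHandler(app).handle(event, context)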

Retrieval Augmented Generation

Amazon Bedrock Knowledge Bases gives FMs contextual information from your private data sources for RAG to deliver more relevant, accurate, and customized responses.

The RAG workflow consists of two key components: data ingestion and text generation.

  • Data ingestion workflow – During data ingestion, unstructured data from the data source is separated into chunks. Chunks are short series of text from each source document, separated by a fixed word count, paragraphs, or a single thought. Chunks are vectorized and stored in a vector database. Amazon Bedrock Knowledge Bases supports a number of vector databases, such as Amazon OpenSearch Serverless, Amazon Aurora, Pinecone, Redis Enterprise Cloud, and MongoDB Atlas. In this example, we use the default option of OpenSearch Serverless.
  • Text generation workflow – After the source data is ingested into the vector database, we can perform a semantic search to find chunks of data that are relevant to the user query based on contextualized meaning instead of just literal string matching. To complete the process, both the user query and the relevant data chunks are presented to the selected large language model (LLM) to create a natural language response.

Amazon Bedrock Knowledge Bases APIs

Amazon Bedrock Knowledge Bases provides a fully managed RAG workflow that is exposed using two main APIs:

  • Retrieve – This API retrieves the relevant data chunks using semantic search, which you can then process further in application logic
  • RetrieveAndGenerate – This API completes a full RAG text generation workflow to return a natural language response to a human query of the given dataset

The solution in this post calls the RetrieveAndGenerate API to return the natural language response to the Slack Bolt integration library.
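
For illustration, a minimal Python sketch of that call using the boto3 bedrock-agent-runtime client is shown below; the knowledge base ID, model ARN, and question are placeholder values rather than the ones created by this solution’s stack.

import boto3

# Runtime client for the Amazon Bedrock Knowledge Bases APIs.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What are the pillars of the AWS Well-Architected Framework?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567890",  # placeholder knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

# The natural language answer that is returned to the Slack integration.
print(response["output"]["text"])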

Amazon Bedrock Guardrails

Amazon Bedrock Guardrails provides additional customizable safeguards on top of built-in protections offered by FMs, delivering safety features that are among the best in the industry.

In this solution, we configure Amazon Bedrock Guardrails with content filters, sensitive information filters, and word filters.

Content filters help detect and filter harmful user inputs and model-generated outputs across six categories: prompt injections, misconduct, insults, hate, violence, and sexually explicit content. In this solution, we use all six content filter categories.

Sensitive information filters detect sensitive information such as personally identifiable information (PII) data in a prompt or model responses. To align with your specific case, you can use custom sensitive information filters by defining them with regular expressions (regex).

In this solution, we configure sensitive information filters as follows:

  • Email with an action of Anonymize
  • Phone with an action of Anonymize
  • Name with an action of Anonymize
  • Credit_Debit_Card_Number with an action of Block

Word filters are used to block words and phrases in input prompts and model responses. In this solution, we have enabled the AWS provided profanity filter. To align with your use case, you can create custom word filters.
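
The guardrail in this solution is provisioned by the deployment stack; for reference, the following Python sketch shows roughly how an equivalent configuration could be expressed with the boto3 bedrock client. The guardrail name, filter strengths, and blocked messages are illustrative assumptions, not the exact values used by the stack.

import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="slack-assistant-guardrail",  # illustrative name
    contentPolicyConfig={
        "filtersConfig": [
            # One entry per content filter category; strengths are assumptions.
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "MISCONDUCT", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "PHONE", "action": "ANONYMIZE"},
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "BLOCK"},
        ]
    },
    wordPolicyConfig={"managedWordListsConfig": [{"type": "PROFANITY"}]},
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
)
print(response["guardrailId"], response["version"])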

Solution walkthrough

Slack interfaces with an Amazon API Gateway REST API configured with Lambda proxy integration, which in turn interacts with the Amazon Bedrock Knowledge Bases APIs.

The solution is deployed with the following high-level steps:

  1. Create a new Slack application.
  2. Enable third-party model access in Amazon Bedrock.
  3. Deploy the Slack to Amazon Bedrock integration using the AWS Cloud Development Kit (AWS CDK).
  4. Ingest the AWS Well-Architected Framework documents to the knowledge base.

Prerequisites

To implement this solution, you need the following prerequisites:

This post assumes a working knowledge of the listed AWS services. Some understanding of vector databases, vectorization, and RAG would be advantageous, but not necessary.

Create a new Slack application

After you have logged in to your Slack workspace, complete the following steps:

  1. Navigate to your Slack apps and create a new application.
  2. Choose From scratch when prompted.
  3. Provide an application name. For this post, we use the name aws-war-bot.
  4. Choose your workspace and choose Create App.
  5. To provide permissions for your Slack application, choose OAuth & Permissions in your Slack application navigation pane.
  6. In the Scopes section, under Bot Token Scopes, add the following permissions:
    • calls:write
    • commands
    • incoming-webhook

  7. Under OAuth Tokens for Your Workspace, choose Install to [workspace name].
  8. Choose a channel that the Slack application will be accessed from. You may want to first create a dedicated channel in Slack for this purpose.
  9. Choose Allow.
  10. When the Slack application install is complete, copy the token value generated for Bot User OAuth Token to use in a later step.
  11. Under Settings in the navigation pane, choose Basic Information.
  12. In the App Credentials section, copy the value for Signing Secret and save this to use later.

Enable model access in Amazon Bedrock

Complete the following steps to enable model access in Amazon Bedrock:

  1. On the Amazon Bedrock console, choose Model access in the navigation pane.
  2. Choose Modify model access or Enable specific models (if this is the first time using Amazon Bedrock in your account).
  3. Select the models you want to use as the embeddings and RAG query response models. In this post, we use Amazon Titan Text Embeddings V2 as the embeddings model and Anthropic’s Claude 3 Sonnet as the RAG query model in the US-EAST-1 AWS Region.
  4. Choose Next.
  5. Review the model selection and choose Submit.

If you’re not using the US-EAST-1 Region, the models available to request may differ.

When the access request is complete, you will see the model’s status shown as Access granted for the selected models.

Deploy the Slack to Amazon Bedrock integration

In this section, you deploy the companion code to this post to your AWS account, which will deploy an API on API Gateway, a Lambda function, and an Amazon Bedrock knowledge base with OpenSearch Serverless as the vector database.

This section requires AWS CDK and TypeScript to be installed in your local integrated development environment (IDE) and for an AWS account to be bootstrapped. If this has not been done, refer to Getting started with the AWS CDK.

  1. Clone the code from the GitHub repository:
    git clone https://github.com/aws-samples/amazon-bedrock-knowledgebase-slackbot.git

  2. Open the amazon-bedrock-knowledgebase-slackbot directory in your preferred IDE and open the lib/amazon-bedrock-knowledgebase-slackbot-stack.ts file.
  3. Update the variables if needed (depending on model access and Regional support) for the RAG query and embeddings models:
    const RAG_MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"
    const EMBEDDING_MODEL = "amazon.titan-embed-text-v2:0"

  4. Save the changes after all updates are complete.
  5. From the root of your repository, run the command npm install.
  6. Run the command cdk synth to perform basic validation of AWS CDK code. This generates a CloudFormation template from the AWS CDK stack, which can be reviewed in the cdk.out directory created in the root of the repository.
  7. To deploy the application stack, run the following command, replacing the values with the token and the signing secret you created earlier:
    cdk deploy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%

The AWS CDK will deploy the stack as a CloudFormation template. You can monitor the progress of the deployment on the AWS CloudFormation console.

Additionally, AWS CDK will attempt to deploy the application stack to the default account and Region using the default credentials file profile. To change profiles, add the profile flag. For example:

cdk deploy --profile [my-profile]

When the deployment is complete, you will see an output similar to the following screenshot, which details the API endpoint that has just been deployed.

  8. Copy the API endpoint URL for later use.

You can also retrieve this URL on the Outputs tab of the AmazonBedrockKnowledgebaseSlackbotStack CloudFormation stack that was created when you deployed this solution.

  9. Switch back to the Slack API page.
  10. Under the Slack application you created, choose Slash Commands in the navigation pane and then choose Create New Command.
  11. Provide the following information (make sure to include the Region and API ID that has been deployed):
    • For Command, enter /ask-aws.
    • For Request URL, enter https://[AWS-URL]/slack/[command]. For example, https://ab12cd3efg.execute-api.us-east-1.amazonaws.com/prod/slack/ask-aws.
    • For Short Description, enter a description (for example, AWS WAR Bot).

  12. Choose Save.
  13. Reinstall the Slack application to your workspace in the Install App section by choosing Reinstall next to the workspace name.
  14. Choose the channel where the Slack app will be deployed and choose Allow.

In the Slack channel, you will see a message like the one in the following screenshot, indicating that an integration with the channel has been added.

Populate the Amazon Bedrock knowledge base

Complete the following steps to populate the Amazon Bedrock knowledge base with the combined information of the AWS Well-Architected Framework:

  1. Download the AWS Well-Architected Framework documents in PDF format.

You can also include any Well-Architected Lenses that are relevant to your organization by downloading from AWS Whitepapers and Guides.

  2. On the Amazon Bedrock console, choose Knowledge bases in the navigation pane.
  3. Choose the knowledge base you deployed (slack-bedrock-kb).
  4. In the Data source section, under Source link, choose the S3 bucket link that is displayed.

This will open the S3 bucket that is being used by the Amazon Bedrock knowledge base as the data source.

  5. In the S3 bucket, choose Upload, then Add files, and select all of the downloaded AWS Well-Architected documents from the previous step.
  6. When the documents have finished uploading, switch back to the Knowledge bases page on the Amazon Bedrock console.
  7. Select the data source name and choose Sync.

This will sync the documents from the S3 bucket to the OpenSearch Serverless vector database. The process can take over 10 minutes.

When the sync is complete, the data source will show a Status of Available.

Test the Slack application integration with Amazon Bedrock

Complete the following steps to test the integration:

  1. Open the Slack channel selected in the previous steps and enter /ask-aws.

The Slack application will be displayed.

  2. Choose the Slack application and enter your prompt. For this test, we use the prompt “Tell me about the AWS Well-Architected Framework.”

The Slack application will respond with Processing Request and a copy of the entered prompt. The application will then provide a response to the prompt.

  3. To test that the guardrails are working as required, write a prompt that will invoke a guardrail intervention.

When an intervention occurs, you will receive the following predefined message as your response.

Clean up

Complete the following steps to clean up your resources:

  1. From your terminal, run the following command, replacing the values with the token and the signing secret created earlier:
    cdk destroy --context slackBotToken=%slackBotToken% --context slackSigningSecret=%slackSigningSecret%

  2. When prompted, enter y to confirm the deletion of the deployed stack.

Conclusion

In this post, we implemented a solution that integrates an Amazon Bedrock knowledge base with a Slack chat channel to allow business users to ask natural language questions of an unstructured dataset from a familiar interface. You can use this solution for multiple use cases by configuring it to different Slack applications and populating the knowledge base with the relevant dataset.

To get started, clone the GitHub repo and enhance your customers’ interactions with Amazon Bedrock. For more information about Amazon Bedrock, see Getting started with Amazon Bedrock.


About the Authors

Barry Conway is an Enterprise Solutions Architect at AWS with 20 years of experience in the technology industry, bridging the gap between business and technology. Barry has helped banking, manufacturing, logistics, and retail organizations realize their business goals.

Dean Colcott is an AWS Senior GenAI/ML Specialist Solution Architect and SME for Amazon Bedrock. He has areas of depth in integrating generative AI outcomes into enterprise applications, full stack development, video analytics, and computer vision and enterprise data platforms.

Read More

Unleash your Salesforce data using the Amazon Q Salesforce Online connector

Unleash your Salesforce data using the Amazon Q Salesforce Online connector

Thousands of companies worldwide use Salesforce to manage their sales, marketing, customer service, and other business operations. The Salesforce cloud-based platform centralizes customer information and interactions across the organization, providing sales reps, marketers, and support agents with a unified 360-degree view of each customer. With Salesforce at the heart of their business, companies accumulate vast amounts of customer data within the platform over time. This data is incredibly valuable for gaining insights into customers, improving operations, and guiding strategic decisions. However, accessing and analyzing the blend of structured data and unstructured data can be challenging. With the Amazon Q Salesforce Online connector, companies can unleash the value of their Salesforce data.

Amazon Q Business is a generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely take actions based on data and information in your enterprise systems. It empowers employees to be more data-driven, efficient, prepared, and productive.

Amazon Q Business offers pre-built connectors for over 40 data sources, including Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, Google Drive, Atlassian Confluence, Atlassian Jira, and many more. For a full list of data source connectors, see Amazon Q Business connectors.

In this post, we walk you through configuring and setting up the Amazon Q Salesforce Online connector.

Overview of the Amazon Q Salesforce Online connector

Amazon Q Business supports its own index where you can add and sync documents. Amazon Q connectors make it straightforward to synchronize data from multiple content repositories with your Amazon Q index. You can set up connectors to automatically sync your index with your data source based on a schedule, so you’re always securely searching through up-to-date content.

The Amazon Q Salesforce Online connector provides a simple, seamless integration between Salesforce and Amazon Q. With a few clicks, you can securely connect your Salesforce instance to Amazon Q and unlock a robust self-service conversational AI assistant for your Salesforce data.

The following diagram illustrates this architecture.

Amazon Q Business architecture diagram

Types of documents

When you connect Amazon Q Business to a data source like Salesforce, what Amazon Q considers and crawls as a document varies by connector type.

The Amazon Q Salesforce Online connector crawls and indexes the following content types:

  • Account
  • Campaign
  • Case
  • Chatter
  • Contact
  • Contract
  • Custom object
  • Document
  • Group
  • Idea
  • Knowledge articles
  • Lead
  • Opportunity
  • Partner
  • Pricebook
  • Product
  • Profile
  • Solution
  • Task
  • User

The Amazon Q Salesforce Online connector also supports field mappings to enrich index data with additional fields data. Field mappings allow you to map Salesforce field names to Amazon Q index field names. This includes both default field mappings created automatically by Amazon Q, and custom field mappings that you can create and edit.

Authentication

The Amazon Q Salesforce Online connector supports OAuth 2.0 with the Resource Owner Password Flow.
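
For context, the resource owner password flow exchanges the connected app credentials and a Salesforce user’s credentials for an access token at the Salesforce OAuth token endpoint. The following Python sketch with the requests library shows the shape of that exchange using placeholder values; the connector performs this exchange for you based on the values you store in the AWS Secrets Manager secret created later in this walkthrough.

import requests

# Resource owner password flow against the Salesforce OAuth token endpoint.
# All credential values below are placeholders.
token_response = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "<connected app consumer key>",
        "client_secret": "<connected app consumer secret>",
        "username": "integration.user@example.com",
        # Salesforce expects the password with the user's security token appended.
        "password": "<password><security token>",
    },
    timeout=30,
)
token_response.raise_for_status()
print(token_response.json()["access_token"])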

ACL crawling

To securely index documents, the Amazon Q Salesforce Online connector supports crawling access control lists (ACLs) with role hierarchy by default. With ACL crawling, the information can be used to filter chat responses to your end-user’s document access level. You can apply ACL-based chat filtering using Salesforce standard objects and chatter feeds. ACL-based chat filtering isn’t available for Salesforce knowledge articles.

If you index documents without ACLs, all documents are considered public. If you want to index documents without ACLs, make sure the documents are marked as public in your data source.

Solution overview

In this post, we guide you through connecting an existing Amazon Q application to Salesforce Online. You configure authentication, map fields, sync data between Salesforce and Amazon Q, and then deploy your AI assistant using the Amazon Q web experience.

We also demonstrate how to use Amazon Q to have a conversation about Salesforce accounts, opportunities, tasks, and other supported data types.

Prerequisites

You need the following prerequisites:

Set up Salesforce authentication

To set up authentication and allow external programs to Salesforce, complete the following steps to configure your connected application settings:

  1. In Salesforce, in the Quick Find box, search for and choose App Manager.
  2. Choose New Connected App.
  3. For Connected App Name, enter a name.
  4. For API name, enter an API name to use when referring to the connected application.
  5. Enter your contact email address and phone number.
  6. If you are using OAuth, select the appropriate OAuth scopes.
  7. Choose Save and wait for the connected application to be created.
  8. On the Connected Apps page, select the application, and on the drop-down menu, choose View.
  9. On the details page, next to Consumer Key and Secret, choose Manage Consumer Details.
  10. Copy the client ID and client secret for use in a later step.

Set up the Amazon Q Salesforce Online connector

Complete the following steps to set up the Amazon Q Salesforce Online connector:

  1. On the Amazon Q Business console, choose Applications in the navigation pane.
  2. Select your application and on the Actions menu, choose Edit.
  3. On the Update application page, leave the settings as default and choose Update.
  4. On the Update retriever page, leave the settings as default and choose Update.
  5. On the Connect data sources page, on the All tab, search for Salesforce.
  6. Choose the plus sign for the Salesforce Online connector.
  7. In the Name and description section, enter a name and description.
  8. In the Source section, for Salesforce URL, enter your Salesforce server URL in the format https://yourcompany.my.salesforce.com/.
  9. In the Authentication section, choose Create and add new secret.
  10. Enter the Salesforce connected application authentication information and choose Save to create the secret in AWS Secrets Manager.
  11. In the IAM role section, choose Create a new service role (recommended).

  12. In the Sync scope section, select All standard objects.

If you choose to sync only specific objects, then select each object type accordingly.

  13. In the Sync mode section, select New, modified, or deleted content sync.

  14. Under Sync run schedule, choose the desired frequency. For testing purposes, we choose Run on demand.

  15. Choose Add data source and wait for the connector to be created.
  16. After the Salesforce connector is created, you’re redirected back to the Connect data sources page, where you can add additional data sources if needed.
  17. Choose Next.
  18. On the Update groups and users page, assign users or groups from IAM Identity Center set up by your administrator. Optionally, if you have permissions to add new users, you can select Add new users.
  19. Choose Next.

  20. Choose a user or group from the list to give them access to the Amazon Q web experience.
  21. Choose Done.

  22. Choose Update application to complete setting up the Salesforce data connector for Amazon Q Business.

Additional Salesforce field mappings

When you connect Amazon Q to a data source, Amazon Q automatically maps specific data source document attributes to fields within an Amazon Q index. If a document attribute in your data source doesn’t have an attribute mapping already available, or if you want to map additional document attributes to index fields, use the custom field mappings to specify how a data source attribute maps to an Amazon Q index field. You create field mappings by editing your data source after your application and retriever are created.

To update the field mapping, complete the following steps:

  1. On the Amazon Q console, navigate to your Amazon Q application.
  2. Under Data sources, select your data source and on the Actions menu, choose Edit.

  3. In the Field mappings section, find the item that you want to add fields to and choose Add field. (For this post, we add the postalCode field to Lead.)
  4. Add any other fields that you want to be included in the Amazon Q index and then choose Update.

The setup process is complete.

  5. In the application details, choose Sync now to start the Amazon Q crawling and indexing process.

The initial sync may take a few minutes to get started.

When the sync process is complete, you can see a summary of ingested data on the connector’s Sync history tab. Check Total items scanned and Added to confirm that the right number of documents are included in the index.

Mapping custom fields

Salesforce allows you to store your unique business data by creating and using custom fields. When you need to fetch a custom field to generate answers, additional steps are needed for mapping and crawling the field. For example, knowledge articles in Salesforce use custom fields to store content of articles.

Make sure the initial sync process for the connector is complete. On the initial sync, the connector gets a list of all fields and objects in Salesforce, which is needed for custom fields mapping.

Complete the following steps to index contents of knowledge articles:

  1. Navigate to Salesforce Setup, then search for and open Object Manager.
  2. In Object Manager, choose the Knowledge object.

  3. In the Fields & Relationships section, find the field name (for this example, we’re looking for Article Body, and the field name is Article_Body__c) and record this field name.

  4. On the Amazon Q Business console, navigate back to your application and choose Data sources in the navigation pane.
  5. Select the Salesforce data source and on the Actions menu, choose Edit.

  6. In the Field mappings section, under Knowledge Articles, choose Add field.
  7. For Salesforce field name, enter Article_Body__c and map it to _document_body for Index field name.
  8. Select your object type.
  9. Choose Update to save the changes.

  10. Return to the Data sources page of the application and choose Sync now.

When the sync process is complete, you can chat with the Salesforce data source about the default fields and also the Salesforce custom field that you added.

Talk with your Salesforce data using the Amazon Q web experience

When the synchronization process is complete, you can start using the Amazon Q web experience. To access the Amazon Q application UI, select your application and choose Customize web experience, which opens a preview of the UI and options to customize it.

You can customize the values for Title, Subtitle, and Welcome message in the UI. After you make changes, choose Save and then choose View web experience.

After signing in, you can start chatting with your generative AI assistant. To verify answers, check the citation links included in the answers. If you need to improve answers, add more details and context to the questions.

The results aren’t limited to cases and activities. You can also include other objects like knowledge bases. If a field isn’t included in the default mapped fields, you can still add it in the retriever settings and update the content index.

Let’s look at opportunities in Salesforce for a specific company and ask Amazon Q about these opportunities.

Next, check a sample knowledge article from Salesforce.

When you chat with Amazon Q, you can see that the exact article is referenced as the primary source.

As you can see, each answer has a thumbs up/thumbs down button to provide feedback. Amazon Q uses this feedback to improve responses for all your organization users.

Metadata fields

In Salesforce, document metadata refers to the information that describes the properties and characteristics of documents stored in Salesforce. The Amazon Q data source connector crawls relevant metadata or attributes associated with a document. To use metadata search, go to the Amazon Q application page and choose Metadata controls in the navigation pane. Select the metadata fields that are needed, for instance sf_subject and sf_status. This allows you to ask metadata lookup queries such as “Summarize case titled as supply chain vendors cost optimization” or “Give me status of case with subject as cloud modernization project.” Here, the sf_status and sf_subject metadata fields will be used to query and generate the relevant answer.

Frequently asked questions

In this section, we discuss some frequently asked questions.

Amazon Q Business is unable to answer your questions

If you get the response “Sorry, I could not find relevant information to complete your request,” this may be due to a few reasons:

  • No permissions – ACLs applied to your account don’t allow you to query certain data sources. If this is the case, reach out to your application administrator to make sure your ACLs are configured to access the data sources.
  • Data connector sync failed – Your data connector may have failed to sync information from the source to the Amazon Q Business application. Verify the data connector’s sync run schedule and sync history to confirm the sync is successful.
  • No subscriptions – Make sure that logged-in users have a subscription for Amazon Q.

If none of these reasons apply to your use case, open a support case and work with your technical account manager to get this resolved.

Custom fields aren’t showing up in fields mappings

A custom fields list is retrieved after the initial full synchronization. After a successful synchronization, you can add field mappings for custom fields.

Clean up

To prevent incurring additional costs, it’s essential to clean up and remove any resources created during the implementation of this solution. Specifically, you should delete the Amazon Q application, which will consequently remove the associated index and data connectors. However, any AWS Identity and Access Management (IAM) roles and secrets created during the Amazon Q application setup process will need to be removed separately. Failing to clean up these resources may result in ongoing charges, so it’s crucial to take the necessary steps to remove all components related to this solution.

Complete the following steps to delete the Amazon Q application, secret, and IAM role:

  1. On the Amazon Q Business console, select the application that you created.
  2. On the Actions menu, choose Delete and confirm the deletion.
  3. On the Secrets Manager console, select the secret that was created for the connector.
  4. On the Actions menu, choose Delete.
  5. Set the waiting period as 7 days and choose Schedule deletion.

  6. On the IAM console, select the role that was created during the Amazon Q application creation.
  7. Choose Delete and confirm the deletion.

Conclusion

In this post, we provided an overview of the Amazon Q Salesforce Online connector and how you can use it for a safe and seamless integration of generative AI assistance with Salesforce. By using a single interface for the variety of data sources in the organization, you can enable employees to be more data-driven, efficient, prepared, and productive.

To learn more about the Amazon Q Salesforce Online connector, refer to Connecting Salesforce Online to Amazon Q Business.


About the Author

Mehdy Haghy is a Senior Solutions Architect at the AWS WWCS team, specializing in AI and ML on AWS. He works with enterprise customers, helping them migrate, modernize, and optimize their workloads for the AWS Cloud. In his spare time, he enjoys cooking Persian food and tinkering with circuit boards.

Read More

Reducing hallucinations in large language models with custom intervention using Amazon Bedrock Agents

Reducing hallucinations in large language models with custom intervention using Amazon Bedrock Agents

Hallucinations in large language models (LLMs) refer to the phenomenon where the LLM generates an output that is plausible but factually incorrect or made-up. This can occur when the model’s training data lacks the necessary information or when the model attempts to generate coherent responses by making logical inferences beyond its actual knowledge. Hallucinations arise because of the inherent limitations of the language modeling approach, which aims to produce fluent and contextually appropriate text without necessarily ensuring factual accuracy.

Remediating hallucinations is crucial for production applications that use LLMs, particularly in domains where incorrect information can have serious consequences, such as healthcare, finance, or legal applications. Unchecked hallucinations can undermine the reliability and trustworthiness of the system, leading to potential harm or legal liabilities. Strategies to mitigate hallucinations can include rigorous fact-checking mechanisms, integrating external knowledge sources using Retrieval Augmented Generation (RAG), applying confidence thresholds, and implementing human oversight or verification processes for critical outputs.

RAG is an approach that aims to reduce hallucinations in language models by incorporating the capability to retrieve external knowledge and making it part of the prompt that’s used as input to the model. The retriever module is responsible for retrieving relevant passages or documents from a large corpus of textual data based on the input query or context. The retrieved information is then provided to the LLM, which uses this external knowledge in conjunction with prompts to generate the final output. By grounding the generation process in factual information from reliable sources, RAG can reduce the likelihood of hallucinating incorrect or made-up content, thereby enhancing the factual accuracy and reliability of the generated responses.

Amazon Bedrock Guardrails offer hallucination detection with contextual grounding checks, which can be seamlessly applied using Amazon Bedrock APIs (such as Converse or InvokeModel) or embedded into workflows. After an LLM generates a response, these workflows perform a check to see if hallucinations occurred. This setup can be achieved through Amazon Bedrock Prompt Flows or with custom logic using AWS Lambda functions. Customers can also do batch evaluation with human reviewers using Amazon Bedrock model evaluation’s human-based evaluation feature. However, these are static workflows; updating the hallucination detection logic requires modifying the entire workflow, limiting adaptability.

To address this need for flexibility, Amazon Bedrock Agents enables dynamic workflow orchestration. With Amazon Bedrock Agents, organizations can implement scalable, customizable hallucination detection that adjusts based on specific needs, reducing the effort needed to incorporate new detection techniques and additional API calls in the workflow without restructuring the entire workflow and letting the LLM decide the plan of action to orchestrate the workflow.

In this post, we will set up our own custom agentic AI workflow using Amazon Bedrock Agents to intervene when LLM hallucinations are detected and route the user query to customer service agents through a human-in-the-loop process. Imagine this to be a simpler implementation of calling a customer service agent when the chatbot is unable to answer the customer query. The chatbot is based on a RAG approach, which reduces hallucinations to a large extent, and the agentic workflow provides a customizable mechanism in how to measure, detect, and mitigate hallucinations that might occur.

Agentic workflows are a fresh perspective on building dynamic and complex business use case-based workflows with the help of LLMs as the reasoning engine or brain. These agentic workflows decompose natural language query-based tasks into multiple actionable steps, with iterative feedback loops and self-reflection, to produce the final result using tools and APIs.

Amazon Bedrock Agents helps accelerate generative AI application development by orchestrating multistep tasks. Amazon Bedrock Agents uses the reasoning capability of LLMs to break down user-requested tasks into multiple steps. They use the given instruction to create an orchestration plan and then carry out the plan by invoking company APIs or accessing knowledge bases using RAG to provide a final response to the user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customizing applications to help meet specific project requirements while protecting private data and helping to secure applications. These agents work with AWS managed infrastructure capabilities such as Lambda and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce costs.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Use case overview

In this post, we add our own custom intervention to a RAG-powered chatbot in the event of hallucinations being detected. We will be using Retrieval Augmented Generation Assessment (RAGAS) metrics, such as answer correctness and answer relevancy, to develop a custom hallucination score for measuring hallucinations. If the hallucination score for a particular LLM response is less than a custom threshold, it indicates that the generated model response is not well-aligned with the ground truth. In this situation, we notify a pool of human agents through an Amazon Simple Notification Service (Amazon SNS) notification to assist with the query instead of providing the customer with the hallucinated LLM response.
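
As a rough sketch of how such a score can be assembled (assuming a recent ragas release and an evaluator LLM and embedding model configured for the library, for example Amazon Bedrock models wrapped through LangChain), the following averages answer correctness and answer relevancy into a single value that is compared against the threshold. The helper function, column names, and the 0.9 threshold are illustrative assumptions rather than the exact code in the notebook.

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_correctness, answer_relevancy

HALLUCINATION_THRESHOLD = 0.9  # empirical threshold used in this example

def hallucination_score(question, contexts, answer, ground_truth):
    # Single-row evaluation dataset in the column format RAGAS expects.
    dataset = Dataset.from_dict({
        "question": [question],
        "contexts": [contexts],        # list of retrieved chunks
        "answer": [answer],            # agent/knowledge base response
        "ground_truth": [ground_truth],
    })
    # evaluate() also accepts llm=... and embeddings=... so you can plug in
    # Bedrock-backed evaluators instead of the library defaults.
    result = evaluate(dataset, metrics=[answer_correctness, answer_relevancy])
    scores = result.to_pandas().iloc[0]
    # Average the two metrics into one custom hallucination score.
    return (scores["answer_correctness"] + scores["answer_relevancy"]) / 2

# if hallucination_score(q, chunks, llm_answer, reference) < HALLUCINATION_THRESHOLD:
#     notify a human agent instead of returning the model response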

The RAG-based chatbot we use ingests the Amazon Bedrock User Guide to assist customers on queries related to Amazon Bedrock.

Dataset

The dataset used in the notebook is the latest Amazon Bedrock User Guide PDF file, which is publicly available to download. Alternatively, you can use other PDFs of your choice to create the knowledge base from scratch and use it in this notebook.

If you use a custom PDF, you will need to curate a supervised dataset of ground truth answers to multiple questions to test this approach. The custom hallucination detector uses RAGAS metrics, which are generated using a CSV file containing question-answer pairs. For custom PDFs, it is necessary to replace this CSV file and re-run the notebook for a different dataset.

In addition to the dataset in the notebook, we ask the agent multiple questions, a few of them from the PDF and a few not part of the PDF. The ground truth answers are manually curated based on the PDF contents if relevant.

Prerequisites

To run this solution in your AWS account, complete the following prerequisites:

  1. Clone the GitHub repository and follow the steps explained in the README.
  2. Set up an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance.
  3. Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane of the Amazon Bedrock console and choose from the list of available options. We use Anthropic’s Claude 3 Sonnet on Amazon Bedrock and Amazon Titan Text Embeddings V2 on Amazon Bedrock for this post.

Implement the solution

The following illustrates the solution architecture:

Architecture Diagram for Custom Hallucination Detection and Mitigation

The overall workflow involves the following steps:

  1. Data ingestion: raw PDFs stored in an Amazon Simple Storage Service (Amazon S3) bucket are synced as a data source with Amazon Bedrock Knowledge Bases.
  2. The user asks questions relevant to the Amazon Bedrock User Guide, which are handled by an Amazon Bedrock agent that is set up for this purpose.

User query: What models are supported by bedrock agents?

  3. The agent creates a plan and identifies the need to use a knowledge base. It then sends a request to the knowledge base, which retrieves relevant data from the underlying vector database. The agent retrieves an answer through RAG using the following steps:
    • The search query is directed to the vector database (Amazon OpenSearch Serverless).
    • Relevant answer chunks are retrieved.
    • The knowledge base response is generated from the retrieved answer chunks and sent back to the agent.

Generated Answer: Amazon Bedrock supports foundation models from various providers including Anthropic (Claude models), AI21 Labs (Jamba models), Cohere (Command models), Meta (Llama models), Mistral AI

  4. The user query and knowledge base response are used together to invoke the correct action group.
  5. The user question and knowledge base response are passed as inputs to a Lambda function that calculates a hallucination score.

The generated answer contains some correct and some incorrect information, because it picks up general Amazon Bedrock model support rather than Amazon Bedrock Agents-specific model support. Therefore, a hallucination is detected, with a score of 0.4.

  6. An SNS notification is sent if the answer score is lower than the custom threshold.

Because the answer score of 0.4 is below the hallucination threshold of 0.9, the SNS notification is triggered.

  7. If the answer score is higher than the custom threshold, the hallucination detector set up in Lambda responds with the final knowledge base response. Otherwise, it returns a pre-defined response asking the user to wait until a customer service agent joins the conversation shortly.

The customer service human agent queue is notified, and the next available agent joins the conversation or emails back if it is an offline response mechanism.

  8. The final agent response is shown in the chatbot UI (user interface).

In the GitHub repository notebook, we cover the following learning objectives:

  1. Measure and detect hallucinations with an agentic AI workflow that can notify a human in the loop to remediate hallucinations when they are detected.
  2. Build a custom hallucination detector with pre-defined thresholds based on selected RAGAS evaluation metrics.
  3. Remediate by sending an SNS notification to the customer service queue and waiting for a human to help with the question.

Step 1: Setting up Amazon Bedrock Knowledge Bases with Amazon Bedrock Agents

In this section, we will integrate Amazon Bedrock Knowledge Bases with Amazon Bedrock Agents to create a RAG workflow. RAG systems use external knowledge sources to augment the LLM’s output, improving factual accuracy and reducing hallucinations. We create the agent with the following high-level instruction encouraging it to take a question-answering role.

agent_instruction = """

You are a question answering agent that helps customers answer questions from the Amazon Bedrock User Guide inside the associated knowledge base.

Next you will always use the knowledge base search result to detect and measure any hallucination using the functions provided

"""

Step 2: Invoke Amazon Bedrock Agents with user questions about Amazon Bedrock documentation

We use a supervised dataset with predefined questions and ground truth answers to invoke Amazon Bedrock Agents, which triggers the custom hallucination detector based on the agent response from the knowledge base. In the notebook, we demonstrate how the answer score based on RAGAS metrics can notify a human customer service representative if it does not meet a pre-defined custom threshold score.
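
As an illustration of the invocation itself, the following Python sketch calls the agent through the boto3 bedrock-agent-runtime client and assembles the streamed completion. The agent ID, alias ID, and helper function are placeholders; the notebook wraps this call with the prompt template shown later in this section.

import uuid
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def ask_agent(question):
    # InvokeAgent streams the response back as a series of completion chunks.
    response = client.invoke_agent(
        agentId="AGENT1234",        # placeholder agent ID
        agentAliasId="ALIAS1234",   # placeholder agent alias ID
        sessionId=str(uuid.uuid4()),
        inputText=question,
    )
    answer = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            answer += chunk["bytes"].decode("utf-8")
    return answer

# print(ask_agent("What models are supported by bedrock agents?"))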

We use RAGAS metrics such as answer correctness and answer relevancy to determine the custom threshold score. Depending on the use case and dataset, the list of applicable RAGAS metrics can be customized accordingly.

To change the threshold score, you can modify the measure_hallucination() method inside the Lambda function lambda_hallucination_detection().

The agent is prompted with the following template. The user_question in the template is iterated from the supervised dataset CSV file that contains the question and ground truth answers.

USER_PROMPT_TEMPLATE = """Question: {user_question}

Given an input question, you will search the Knowledge Base on Amazon Bedrock User Guide to answer the user question. 
If the knowledge base search results do not return any answer, you can try answering it to the best of your ability, but do not answer anything you do not know. Do not hallucinate.
Using this knowledge base search result you will ALWAYS execute the appropriate action group API to measure and detect the hallucination on that knowledge base search result.

Remove any XML tags from the knowledge base search results and final user response.


Some samples for `user_question` parameter:

What models are supported by bedrock agents?
Which models can I use with Amazon Bedrock Agents?
Which are the dates for reinvent 2024?
What is Amazon Bedrock?

"""

Step 3: Trigger human-in-the-loop in case of hallucination

If the custom hallucination score threshold is not met by the agent response, a human in the loop is notified using SNS notifications. These notifications can be sent to the customer service representative queue or to Amazon Simple Queue Service (Amazon SQS) queues for email and text notifications. The representatives can respond to the email (offline) or the ongoing chat (online) based on their training, knowledge of the system, and additional resources. This would depend on the specific product workflow design.
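
Inside the Lambda function, the notification itself can be a single Amazon SNS publish call, roughly as sketched below in Python; the topic ARN, helper function name, and message fields are placeholder assumptions.

import json
import boto3

sns = boto3.client("sns")

def notify_human_agents(question, answer, score):
    # Publish the flagged conversation to the customer service topic.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:customer-service-queue",  # placeholder ARN
        Subject="Hallucination detected - human assistance requested",
        Message=json.dumps({
            "question": question,
            "model_answer": answer,
            "hallucination_score": score,
        }),
    )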

To view the actual SNS messages sent out, we can view the latest Lambda AWS CloudWatch logs following the instructions as given in viewing CloudWatch logs for Lambda functions. You can search for the string Received SNS message :: inside the CloudWatch logs for the Lambda function LambdaAgentsHallucinationDetection().

Cost considerations

The following are important cost considerations:

  • This current implementation has no separate charges for building resources using Amazon Bedrock Knowledge Bases or Amazon Bedrock Agents.
  • You will incur charges for the embedding model and text model invocation on Amazon Bedrock. For more details, see Amazon Bedrock pricing.
  • You will incur charges for Amazon S3 and vector database usage. For more details, see Amazon S3 pricing and Amazon OpenSearch Service pricing, respectively.

Clean up

To avoid incurring unnecessary costs, the implementation has the option to clean up resources after an entire run of the notebook. You can check the instructions in the cleanup_infrastructure() method for how to avoid the automatic cleanup and experiment with different prompts and datasets.

The order of resource cleanup is as follows:

  1. Disable the action group.
  2. Delete the action group.
  3. Delete the alias.
  4. Delete the agent.
  5. Delete the Lambda function.
  6. Empty the S3 bucket.
  7. Delete the S3 bucket.
  8. Delete AWS Identity and Access Management (IAM) roles and policies.
  9. Delete the vector DB collection policies.
  10. Delete the knowledge bases.

Key considerations

Amazon Bedrock Agents can increase overall latency compared to using just Amazon Bedrock Guardrails and Amazon Bedrock Prompt Flows. It is a trade-off between LLM-generated workflows and static or deterministic workflows. With agents, the LLM generates the workflow orchestration in real time using the available knowledge bases, tools, and APIs, whereas with prompt flows and guardrails, the workflow has to be orchestrated and designed offline.

For evaluation, although we have chosen the LLM-based evaluation framework RAGAS, it is possible to swap out the elements in the hallucination detection Lambda function for another framework.

Conclusion

This post demonstrated how to use Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, and the RAGAS evaluation metrics to build a custom hallucination detector and remediate detected hallucinations with a human-in-the-loop process. The agentic workflow can be extended to custom use cases through different hallucination remediation techniques and offers the flexibility to detect and mitigate hallucinations using custom actions.

For more information on creating agents to orchestrate workflows, see Amazon Bedrock Agents. To learn about the RAGAS metrics available for LLM evaluations, see RAGAS: Getting Started.


About the Authors

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (like NLP, NLU, and NLG). His work has been focused on conversational AI, task-oriented dialogue systems, and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.

Bharathi Srinivasan is a Generative AI Data Scientist at AWS WWSO where she works building solutions for Responsible AI challenges. She is passionate about driving business value from machine learning applications by addressing broad concerns of Responsible AI. Outside of building new AI experiences for customers, Bharathi loves to write science fiction and challenge herself with endurance sports.

Read More

Deploy Meta Llama 3.1-8B on AWS Inferentia using Amazon EKS and vLLM

Deploy Meta Llama 3.1-8B on AWS Inferentia using Amazon EKS and vLLM

With the rise of large language models (LLMs) like Meta Llama 3.1, there is an increasing need for scalable, reliable, and cost-effective solutions to deploy and serve these models. AWS Trainium and AWS Inferentia based instances, combined with Amazon Elastic Kubernetes Service (Amazon EKS), provide a performant and low-cost framework to run LLMs efficiently in a containerized environment.

In this post, we walk through the steps to deploy the Meta Llama 3.1-8B model on Inferentia 2 instances using Amazon EKS.

Solution overview

The steps to implement the solution are as follows:

  1. Create the EKS cluster.
  2. Set up the Inferentia 2 node group.
  3. Install the Neuron device plugin and scheduling extension.
  4. Prepare the Docker image.
  5. Deploy the Meta Llama 3.1-8B model.

We also demonstrate how to test the solution and monitor performance, and discuss options for scaling and multi-tenancy.

Prerequisites

Before you begin, make sure you have the following utilities installed on your local machine or development environment. If you don’t have them installed, follow the instructions provided for each tool.

In this post, the examples use an inf2.48xlarge instance; make sure you have a sufficient service quota to use this instance. For more information on how to view and increase your quotas, refer to Amazon EC2 service quotas.

Create the EKS cluster

If you don’t have an existing EKS cluster, you can create one using eksctl. Adjust the following configuration to suit your needs, such as the Amazon EKS version, cluster name, and AWS Region. Before running the following commands, make sure you are authenticated with AWS:

export AWS_REGION=us-east-1
export CLUSTER_NAME=my-cluster
export EKS_VERSION=1.30
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

Then complete the following steps:

  1. Create a new file named eks_cluster.yaml with the following command:
cat > eks_cluster.yaml <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: $CLUSTER_NAME
  region: $AWS_REGION
  version: "$EKS_VERSION"

addons:
- name: vpc-cni
  version: latest

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
    
iam:
  withOIDC: true
EOF

This configuration file contains the following parameters:

  • metadata.name – Specifies the name of your EKS cluster, which is set to my-cluster in this example. You can change it to a name of your choice.
  • metadata.region – Specifies the Region where you want to create the cluster. In this example, it’s taken from $AWS_REGION (us-east-1). Change this to your desired Region. Because we’re using Inf2 instances, you should choose a Region where those instances are available.
  • metadata.version – Specifies the Kubernetes version to use for the cluster. In this example, it’s set to 1.30. You can change this to a different version if needed, but make sure to use a version that is supported by Amazon EKS. For a list of supported versions, see Review release notes for Kubernetes versions on standard support.
  • addons.vpc-cni – Specifies the version of the Amazon VPC CNI (Container Network Interface) add-on to use. Setting it to latest will install the latest available version.
  • cloudWatch.clusterLogging – Enables cluster logging, which sends logs from the control plane to Amazon CloudWatch Logs.
  • iam.withOIDC – Enables the OpenID Connect (OIDC) provider for the cluster, which is required for certain AWS services to interact with the cluster.
  2. After you create the eks_cluster.yaml file, you can create the EKS cluster by running the following command:
eksctl create cluster --config-file eks_cluster.yaml

This command will create the EKS cluster based on the configuration specified in the eks_cluster.yaml file. The process will take approximately 15–20 minutes to complete.

During the cluster creation process, eksctl will also create a default node group with a recommended instance type and configuration. However, in the next section, we create a separate node group with Inf2 instances, specifically for running the Meta Llama 3.1-8B model.

  3. To complete the setup of kubectl, run the following code:
aws eks update-kubeconfig --region $AWS_REGION --name $CLUSTER_NAME

Set up the Inferentia 2 node group

To run the Meta Llama 3.1-8B model, you’ll need to create an Inferentia 2 node group. Complete the following steps:

  1. First, retrieve the latest Amazon EKS optimized accelerated AMI ID:
export ACCELERATED_AMI=$(aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/$EKS_VERSION/amazon-linux-2-gpu/recommended/image_id \
  --region $AWS_REGION \
  --query "Parameter.Value" \
  --output text)
  2. Create the Inferentia 2 node group using eksctl:
cat > eks_nodegroup.yaml <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: $CLUSTER_NAME
  region: $AWS_REGION
  version: "$EKS_VERSION"
    
managedNodeGroups:
  - name: neuron-group
    instanceType: inf2.48xlarge
    desiredCapacity: 1
    volumeSize: 512
    ami: "$ACCELERATED_AMI"
    amiFamily: AmazonLinux2
    iam:
      attachPolicyARNs:
      - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
      - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

    overrideBootstrapCommand: |
      #!/bin/bash

      /etc/eks/bootstrap.sh $CLUSTER_NAME
EOF

  3. Run eksctl create nodegroup --config-file eks_nodegroup.yaml to create the node group.

This will take approximately 5 minutes.

Install the Neuron device plugin and scheduling extension

To set up your EKS cluster for running workloads on Inferentia chips, you need to install two key components: the Neuron device plugin and the Neuron scheduling extension.

The Neuron device plugin is essential for exposing Neuron cores and devices as resources in Kubernetes. The Neuron scheduling extension facilitates the optimal scheduling of pods requiring multiple Neuron cores or devices.

For detailed instructions on installing and verifying these components, refer to Kubernetes environment setup for Neuron. Following these instructions will help you make sure your EKS cluster is properly configured to schedule and run workloads that require worker nodes, such as the Meta Llama 3.1-8B model.

Prepare the Docker image

To run the model, you’ll need to prepare a Docker image with the required dependencies. We use the following code to create an Amazon Elastic Container Registry (Amazon ECR) repository and then build a custom Docker image based on the AWS Deep Learning Container (DLC).

  1. Set up environment variables:
export ECR_REPO_NAME=vllm-neuron
  2. Create an ECR repository:
aws ecr create-repository --repository-name $ECR_REPO_NAME --region $AWS_REGION

Although the base Docker image already includes TorchServe, to keep things simple, this implementation uses the server provided by the vLLM repository, which is based on FastAPI. In your production scenario, you can connect TorchServe to vLLM with your own custom handler.

  3. Create the Dockerfile:
cat > Dockerfile <<EOF
FROM public.ecr.aws/neuron/pytorch-inference-neuronx:2.1.2-neuronx-py310-sdk2.20.0-ubuntu20.04
# Clone the vllm repository
RUN git clone https://github.com/vllm-project/vllm.git
# Set the working directory
WORKDIR /vllm
RUN git checkout v0.6.0
# Set the environment variable
ENV VLLM_TARGET_DEVICE=neuron
# Install the dependencies
RUN python3 -m pip install -U -r requirements-neuron.txt
RUN python3 -m pip install .
# Modify the arg_utils.py file to support larger block_size option
RUN sed -i "/parser.add_argument('--block-size',/ {N;N;N;N;N;s/\[8, 16, 32\]/[8, 16, 32, 128, 256, 512, 1024, 2048, 4096, 8192]/}" vllm/engine/arg_utils.py
# Install ray
RUN python3 -m pip install ray
RUN python3 -m pip install -U "triton>=3.0.0"
# Set the entry point
ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server"]
EOF

  4. Use the following commands to authenticate Docker with Amazon ECR, build your Docker image, and push it to the repository you created earlier. The account ID and Region are set dynamically using AWS CLI commands, making the process more flexible and avoiding hard-coded values.
# Set the account ID dynamically
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
# Authenticate Docker to your ECR registry
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build the Docker image
docker build -t ${ECR_REPO_NAME}:latest .

# Tag the image
docker tag ${ECR_REPO_NAME}:latest $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/${ECR_REPO_NAME}:latest
# Push the image to ECR
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/${ECR_REPO_NAME}:latest

Deploy the Meta Llama 3.1-8B model

With the setup complete, you can now deploy the model using a Kubernetes deployment. The following is an example deployment specification that requests specific resources and sets up multiple replicas:

cat > neuronx-vllm-deployment.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: neuronx-vllm-deployment
  labels:
    app: neuronx-vllm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: neuronx-vllm
  template:
    metadata:
      labels:
        app: neuronx-vllm
    spec:
      schedulerName: my-scheduler
      containers:
      - name: neuronx-vllm
        image: <replace with the url to the docker image you pushed to the ECR>
        resources:
          limits:
            cpu: 32
            memory: "64G"
            aws.amazon.com/neuroncore: "8"
          requests:
            cpu: 32
            memory: "64G"
            aws.amazon.com/neuroncore: "8"
        ports:
        - containerPort: 8000
        env:
        - name: HF_TOKEN
          value: <your huggingface token>
        - name: FI_EFA_FORK_SAFE
          value: "1"
        args:
        - "--model"
        - "meta-llama/Meta-Llama-3.1-8B"
        - "--tensor-parallel-size"
        - "8"
        - "--max-num-seqs"
        - "64"
        - "--max-model-len"
        - "8192"
        - "--block-size"
        - "8192"
EOF

Apply the deployment specification with kubectl apply -f neuronx-vllm-deployment.yaml.

This deployment configuration sets up multiple replicas of the Meta Llama 3.1-8B model using tensor parallelism (TP) of 8. In the current setup, we’re hosting three copies of the model across the available Neuron cores. This configuration allows for the efficient utilization of the hardware resources while enabling multiple concurrent inference requests.

The use of TP=8 helps in distributing the model across multiple Neuron cores, which improves inference performance and throughput. The specific number of replicas and cores used may vary depending on your particular hardware setup and performance requirements.

To modify the setup, update the neuronx-vllm-deployment.yaml file, adjusting the replicas field in the deployment specification and the aws.amazon.com/neuroncore resource requests and limits in the container specification. Always verify that the total number of cores used (replicas * cores per replica) doesn’t exceed your available hardware resources and that the number of attention heads is evenly divisible by the TP degree for optimal performance.

The deployment also includes environment variables for the Hugging Face token and EFA fork safety. The args section (see the preceding code) configures the model and its parameters, including an increased max model length and block size of 8192.

Test the deployment

After you deploy the model, it’s important to monitor its progress and verify its readiness. Complete the following steps:

  1. Check the deployment status:
kubectl get deployments

This will show you the desired, current, and up-to-date number of replicas.

  2. Monitor the pods:
kubectl get pods -l app=neuronx-vllm -w

The -w flag will watch for changes. You’ll see the pods transitioning from "Pending" to "ContainerCreating" to "Running".

  3. Check the logs of a specific pod:
kubectl logs <pod-name>

The initial startup process takes around 15 minutes. During this time, the model is being compiled for the Neuron cores. You’ll see the compilation progress in the logs.

To support proper management of your vLLM pods, you should configure Kubernetes probes in your deployment. These probes help Kubernetes determine when a pod is ready to serve traffic, when it’s alive, and when it has successfully started.

  4. Add the following probe configurations to your container spec in the deployment YAML:
spec:
  containers:
  - name: neuronx-vllm
    # ... other container configurations ...
    readinessProbe:
      httpGet:
        path: /health
        port: 8000
      initialDelaySeconds: 1800
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /health
        port: 8000
      initialDelaySeconds: 1800
      periodSeconds: 15
    startupProbe:
      httpGet:
        path: /health
        port: 8000
      initialDelaySeconds: 1800
      failureThreshold: 30
      periodSeconds: 10

The configuration comprises three probes:

  • Readiness probe – Checks whether the pod is ready to serve traffic. With the settings above, it starts checking 1,800 seconds (30 minutes) after container start and repeats every 10 seconds.
  • Liveness probe – Verifies that the pod is still running correctly. It begins after the same initial delay and checks every 15 seconds.
  • Startup probe – Gives the application time to start up. After its initial delay, it allows up to 30 failed checks at 10-second intervals before the pod is considered failed.

These probes assume that your vLLM application exposes a /health endpoint. If it doesn’t, you’ll need to implement one or adjust the probe configurations accordingly.

With these probes in place, Kubernetes will do the following:

  • Only send traffic to pods that are ready
  • Restart pods that are no longer alive
  • Allow sufficient time for initial startup and compilation

This configuration helps facilitate high availability and proper functioning of your vLLM deployment.

Now you’re ready to access the pods.

  5. Identify the pod that is running your inference server. You can use the following command to list the pods with the neuronx-vllm label:
kubectl get pods -l app=neuronx-vllm

This command will output a list of pods, and you’ll need the name of the pod you want to forward.

  6. Use kubectl port-forward to forward the port from the Kubernetes pod to your local machine. Use the name of your pod from the previous step:
kubectl port-forward <pod-name> 8000:8000

This command forwards port 8000 on the pod to port 8000 on your local machine. You can now access the inference server at http://localhost:8000.

Because we’re forwarding a port directly from a single pod, requests will only be sent to that specific pod. As a result, traffic won’t be balanced across all replicas of your deployment. This is suitable for testing and development purposes, but it doesn’t utilize the deployment efficiently in a production scenario where load balancing across multiple replicas is crucial to handle higher traffic and provide fault tolerance.

In a production environment, a proper solution like a Kubernetes service with a LoadBalancer or Ingress should be used to distribute traffic across available pods. This facilitates the efficient utilization of resources, a balanced load, and improved reliability of the inference service.
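As an illustration only, the following is a minimal sketch of a Service of type LoadBalancer that spreads traffic across the vLLM replicas. The Service name is arbitrary, and in practice you might instead use an Ingress or the AWS Load Balancer Controller.

cat > neuronx-vllm-service.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: neuronx-vllm-service
spec:
  type: LoadBalancer
  selector:
    app: neuronx-vllm
  ports:
  - port: 80
    targetPort: 8000
EOF
kubectl apply -f neuronx-vllm-service.yaml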

  7. You can test the inference server by making a request from your local machine. The following code is an example of how to make an inference call using curl:
curl -X POST http://localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
  "model": "meta-llama/Meta-Llama-3.1-8B",
  "prompt": "Explain the theory of relativity.",
  "max_tokens": 100
}'

This setup allows you to test and interact with your inference server locally without needing to expose your service publicly or set up complex networking configurations. For production use, make sure that load balancing and scalability considerations are addressed appropriately.

For more information about routing, see Route application and HTTP traffic with Application Load Balancers.

Monitor performance

AWS offers powerful tools to monitor and optimize your vLLM deployment on Inferentia chips. The AWS Neuron Monitor container, used with Prometheus and Grafana, provides advanced visualization of your ML application performance. Additionally, CloudWatch Container Insights for Neuron offers deep, Neuron-specific analytics.

These tools allow you to track Inferentia chip utilization, model performance, and overall cluster health. By analyzing this data, you can make informed decisions about resource allocation and scaling to meet your workload requirements.

Remember that the initial 15-minute startup time for model compilation is a one-time process per deployment, with subsequent restarts being faster due to caching.

To learn more about setting up and using these monitoring capabilities, see Scale and simplify ML workload monitoring on Amazon EKS with AWS Neuron Monitor container.

Scaling and multi-tenancy

As your application’s demand grows, you may need to scale your deployment to handle more requests. Scaling your Meta Llama 3.1-8B deployment on Amazon EKS with Neuron cores involves two coordinated steps:

  • Increasing the number of nodes in your EKS node group to provide additional Neuron cores
  • Increasing the number of replicas in your deployment to utilize these new resources

You can scale your deployment manually. Use the AWS Management Console or AWS CLI to increase the size of your EKS node group. When new nodes are available, scale your deployment with the following code:

kubectl scale deployment neuronx-vllm-deployment --replicas=<new-number>

Alternatively, you can set up auto scaling:

  • Configure auto scaling for your EKS node group to automatically add nodes based on resource demands
  • Use Horizontal Pod Autoscaling (HPA) to automatically adjust the number of replicas in your deployment

You can configure the node group’s auto scaling to respond to increased CPU, memory, or custom metric demands, automatically provisioning new nodes with Neuron cores as needed. This makes sure that as the number of incoming requests grows, both your infrastructure and your deployment can scale accordingly.
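For illustration, a minimal HorizontalPodAutoscaler that scales the deployment on CPU utilization is sketched below. The thresholds and replica bounds are placeholder values to tune for your own workload, and each additional replica needs eight free Neuron cores to schedule.

cat > neuronx-vllm-hpa.yaml <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: neuronx-vllm-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: neuronx-vllm-deployment
  minReplicas: 3
  maxReplicas: 6
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
EOF
kubectl apply -f neuronx-vllm-hpa.yaml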

Example scaling solutions include:

  • Cluster Autoscaler with Karpenter – Though not currently installed in this setup, Karpenter offers more flexible and efficient auto scaling for future consideration. It can dynamically provision the right number of nodes needed for your Neuron workloads based on pending pods and custom scheduling constraints. For more details, see Scale cluster compute with Karpenter and Cluster Autoscaler.
  • Multi-cluster federation – For even larger scale, you could set up multiple EKS clusters, each with its own Neuron-equipped nodes, and use a multi-cluster federation tool to distribute traffic among them.

You should consider the following when scaling:

  • Alignment of resources – Make sure that your scaling strategy for both nodes and pods aligns with the Neuron core requirements (multiples of 8 for optimal performance). This requirement is model dependent and specific to the Meta Llama 3.1 model.
  • Compilation time – Remember the 15-minute compilation time for new pods when planning your scaling strategy. Consider pre-warming pods during off-peak hours.
  • Cost management – Monitor costs closely as you scale, because Neuron-equipped instances can be expensive.
  • Performance testing – Conduct thorough performance testing as you scale to verify that increased capacity translates to improved throughput and reduced latency.

By coordinating the scaling of both your node group and your deployment, you can effectively handle increased request volumes while maintaining optimal performance. The auto scaling capabilities of both your node group and deployment can work together to automatically adjust your cluster’s capacity based on incoming request volumes, providing a more responsive and efficient scaling solution.

Clean up

Use the following code to delete the cluster created in this solution:

eksctl delete cluster --name $CLUSTER_NAME --region $AWS_REGION

Conclusion

Deploying LLMs like Meta Llama 3.1-8B at scale poses significant computational challenges. Using Inferentia 2 instances and Amazon EKS can help overcome these challenges by enabling efficient model deployment in a containerized, scalable, and multi-tenant environment.

This solution combines the exceptional performance and cost-effectiveness of Inferentia 2 chips with the robust and flexible landscape of Amazon EKS. Inferentia 2 chips deliver high throughput and low latency inference, ideal for LLMs. Amazon EKS provides dynamic scaling, efficient resource utilization, and multi-tenancy capabilities.

The process involves setting up an EKS cluster, configuring an Inferentia 2 node group, installing Neuron components, and deploying the model as a Kubernetes pod. This approach facilitates high availability, resilience, and efficient resource sharing for language model services, while allowing for automatic scaling, load balancing, and self-healing capabilities.

For the complete code and detailed implementation steps, visit the GitHub repository.


About the Authors

Dmitri Laptev is a Senior GenAI Solutions Architect at AWS, based in Munich. With 17 years of experience in the IT industry, his interest in AI and ML dates back to his university years, fostering a long-standing passion for these technologies. Dmitri is enthusiastic about cloud computing and the ever-evolving landscape of technology.

Maurits de Groot is a Solutions Architect at Amazon Web Services, based out of Amsterdam. He specializes in machine learning-related topics and has a predilection for startups. In his spare time, he enjoys skiing and bouldering.

Ziwen Ning is a Senior Software Development Engineer at AWS. He currently focuses on enhancing the AI/ML experience through the integration of AWS Neuron with containerized environments and Kubernetes. In his free time, he enjoys challenging himself with kickboxing, badminton, and other various sports, and immersing himself in music.

Jianying Lang is a Principal Solutions Architect at the AWS Worldwide Specialist Organization (WWSO). She has over 15 years of working experience in the HPC and AI fields. At AWS, she focuses on helping customers deploy, optimize, and scale their AI/ML workloads on accelerated computing instances. She is passionate about combining the techniques in HPC and AI fields. Jianying holds a PhD in Computational Physics from the University of Colorado at Boulder.

Read More

Serving LLMs using vLLM and Amazon EC2 instances with AWS AI chips

Serving LLMs using vLLM and Amazon EC2 instances with AWS AI chips

The use of large language models (LLMs) and generative AI has exploded over the last year. With the release of powerful publicly available foundation models, tools for training, fine-tuning, and hosting your own LLM have also become democratized. Using vLLM on AWS Trainium and Inferentia makes it possible to host LLMs for high-performance inference and scalability.

In this post, we will walk you through how you can quickly deploy Meta’s latest Llama models, using vLLM on an Amazon Elastic Compute Cloud (Amazon EC2) Inf2 instance. For this example, we will use the 1B version, but other sizes can be deployed using these steps, along with other popular LLMs.

Deploy vLLM on AWS Trainium and Inferentia EC2 instances

In these sections, you will be guided through using vLLM on an AWS Inferentia EC2 instance to deploy Meta’s newest Llama 3.2 model. You will learn how to request access to the model, create a Docker container to use vLLM to deploy the model and how to run online and offline inference on the model. We will also talk about performance tuning the inference graph.

Prerequisite: Hugging Face account and model access

To use the meta-llama/Llama-3.2-1B model, you’ll need a Hugging Face account and access to the model. Please go to the model card, sign up, and agree to the model license. You will then need a Hugging Face token, which you can get by following these steps. When you get to the Save your Access Token screen, as shown in the following figure, make sure you copy the token because it will not be shown again.

Create an EC2 instance

You can create an EC2 Instance by following the guide. A few things to note:

  1. If this is your first time using inf/trn instances, you will need to request a quota increase.
  2. You will use inf2.xlarge as your instance type. inf2.xlarge instances are only available in these AWS Regions.
  3. Increase the gp3 volume to 100 GB.
  4. You will use Deep Learning AMI Neuron (Ubuntu 22.04) as your AMI, as shown in the following figure.

After the instance is launched, you can connect to it to access the command line. In the next step, you’ll use Docker (preinstalled on this AMI) to run a vLLM container image for neuron.

Start vLLM server

You will use Docker to create a container with all the tools needed to run vLLM. Create a Dockerfile using the following command:

cat > Dockerfile <<'EOF'
# default base image
ARG BASE_IMAGE="public.ecr.aws/neuron/pytorch-inference-neuronx:2.1.2-neuronx-py310-sdk2.20.0-ubuntu20.04"
FROM $BASE_IMAGE
RUN echo "Base image is $BASE_IMAGE"
# Install some basic utilities
RUN apt-get update && \
    apt-get install -y \
        git \
        python3 \
        python3-pip \
        ffmpeg libsm6 libxext6 libgl1
### Mount Point ###
# When launching the container, mount the code directory to /app
ARG APP_MOUNT=/app
VOLUME [ ${APP_MOUNT} ]
WORKDIR ${APP_MOUNT}/vllm
RUN python3 -m pip install --upgrade pip
RUN python3 -m pip install --no-cache-dir fastapi ninja tokenizers pandas
RUN python3 -m pip install sentencepiece transformers==4.36.2 -U
RUN python3 -m pip install transformers-neuronx --extra-index-url=https://pip.repos.neuron.amazonaws.com -U
RUN python3 -m pip install --pre neuronx-cc==2.15.* --extra-index-url=https://pip.repos.neuron.amazonaws.com -U
ENV VLLM_TARGET_DEVICE neuron
RUN git clone https://github.com/vllm-project/vllm.git && \
    cd vllm && \
    git checkout v0.6.2 && \
    python3 -m pip install -U \
        "cmake>=3.26" ninja packaging "setuptools-scm>=8" wheel jinja2 \
        -r requirements-neuron.txt && \
    pip install --no-build-isolation -v -e . && \
    pip install --upgrade triton==3.0.0
CMD ["/bin/bash"]
EOF

Then run:

docker build . -t vllm-neuron

Building the image will take about 10 minutes. After it’s done, use the new Docker image (replace YOUR_TOKEN_HERE with the token from Hugging Face):

export HF_TOKEN="YOUR_TOKEN_HERE"
docker run \
        -it \
        -p 8000:8000 \
        --device /dev/neuron0 \
        -e HF_TOKEN=$HF_TOKEN \
        -e NEURON_CC_FLAGS=-O1 \
        vllm-neuron

You can now start the vLLM server with the following command:

vllm serve meta-llama/Llama-3.2-1B --device neuron --tensor-parallel-size 2 --block-size 8 --max-model-len 4096 --max-num-seqs 32

This command runs vLLM with the following parameters:

  • serve meta-llama/Llama-3.2-1B: The Hugging Face model ID of the model that is being deployed for inference.
  • --device neuron: Configures vLLM to run on the Neuron device.
  • --tensor-parallel-size 2: Sets the number of partitions for tensor parallelism. inf2.xlarge has one Neuron device, and each Neuron device has two Neuron cores.
  • --max-model-len 4096: This is set to the maximum sequence length (input tokens plus output tokens) for which to compile the model.
  • --block-size 8: For Neuron devices, this is internally set to the max-model-len.
  • --max-num-seqs 32: This is set to the hardware batch size or a desired level of concurrency that the model server needs to handle.

The first time you load a model, if there isn’t a previously compiled model, it will need to be compiled. This compiled model can optionally be saved so the compilation step is not necessary if the container is recreated. After everything is done and the model server is running, you should see the following logs:

Avg prompt throughput: 0.0 tokens/s ...

This means that the model server is running, but it isn’t yet processing requests because none have been received. You can now detach from the container by pressing ctrl + p and ctrl + q.
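As noted earlier, the compiled model can be reused if the container is recreated. One way to do this is to persist the Neuron compilation cache on the host with a volume mount, as sketched below; the path assumes the default Neuron persistent cache location (/var/tmp/neuron-compile-cache), so check your Neuron SDK documentation if your version uses a different one.

docker run \
        -it \
        -p 8000:8000 \
        --device /dev/neuron0 \
        -v /var/tmp/neuron-compile-cache:/var/tmp/neuron-compile-cache \
        -e HF_TOKEN=$HF_TOKEN \
        -e NEURON_CC_FLAGS=-O1 \
        vllm-neuron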

Inference

When you started the Docker container, you ran it with the command -p 8000:8000. This told Docker to forward port 8000 from the container to port 8000 on your local machine. When you run the following command, you should see that the model server with meta-llama/Llama-3.2-1B is running.

curl localhost:8000/v1/models

This should return something like:

{"object":"list","data":[{"id":"meta-llama/Llama-3.2-1B","object":"model","created":1732552038,"owned_by":"vllm","root":"meta-llama/Llama-3.2-1B","parent":null,"max_model_len":4096,"permission":[{"id":"modelperm-6d44a6f6e52447eb9074b13ae1e9e285","object":"model_permission","created":1732552038,"allow_create_engine":false,"allow_sampling":true,"allow_logprobs":true,"allow_search_indices":false,"allow_view":true,"allow_fine_tuning":false,"organization":"*","group":null,"is_blocking":false}]}]}ubuntu@ip-172-31-12-216:~$ 

Now, send it a prompt:

curl localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "meta-llama/Llama-3.2-1B", "prompt": "What is Gen AI?", "temperature":0, "max_tokens": 128}' | jq '.choices[0].text'

You should get back a response similar to the following from vLLM:

ubuntu@ip-172-31-13-178:~$ curl localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "meta-llama/Llama-3.2-1B", "prompt": "What is Gen AI?", "temperature":0, "max_tokens": 128}' | jq '.choices[0].text'
  % Total    % Received % Xferd  Average Speed   Time    Time    Time  Current
                                 Dload  Upload   Total   Spent  Left  Speed
100  1067  100   966  100   101    108     11  0:00:09  0:00:08 0:00:01   258
" How does it work?\nGen AI is a new type of artificial intelligence that is designed to learn and adapt to new situations and environments. It is based on the idea that the human brain is a complex system that can learn and adapt to new situations and environments. Gen AI is designed to be able to learn and adapt to new situations and environments in a way that is similar to how the human brain does.\nGen AI is a new type of artificial intelligence that is designed to learn and adapt to new situations and environments. It is based on the idea that the human brain is a complex system that can learn and adapt to new situations and environments."

Offline inference with vLLM

Another way to use vLLM on Inferentia is by sending a few requests all at the same time in a script. This is useful for automation or when you have a batch of prompts that you want to send all at the same time.

You can reattach to your Docker container and stop the online inference server with the following:

docker attach $(docker ps --format "{{.ID}}")

At this point, you should see a blank cursor. Press ctrl + c to stop the server, and you should be back at the bash prompt in the container. Create a file for using the offline inference engine:

cat > offline_inference.py <<EOF
from vllm.entrypoints.llm import LLM
from vllm.sampling_params import SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="meta-llama/Llama-3.2-1B",
        max_num_seqs=32,
        max_model_len=4096,
        block_size=8,
        device="neuron",
        tensor_parallel_size=2)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")

EOF

Now, run the script python offline_inference.py and you should get back responses for the four prompts. This may take a minute as the model needs to be started again.

Processed prompts: 100%|
█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:01<00:00,  2.53it/s, est. speed input: 16.46 toks/s, output: 40.51 toks/s]
Prompt: 'Hello, my name is', Generated text: ' Anna and I am the 4th year student of the Bachelor of Engineering at'
Prompt: 'The president of the United States is', Generated text: ' the head of state and head of government of the United States of America. A'
Prompt: 'The capital of France is', Generated text: ' also the most expensive city to live in. The average cost of living in Paris'
Prompt: 'The future of AI is', Generated text: ' now\nThe 10 most influential AI professionals to watch in 2019\n'

You can now type exit and press return and then press ctrl + c to shut down the Docker container and go back to your inf2 instance.

Clean up

Now that you’re done testing the Llama 3.2 1B LLM, you should terminate your EC2 instance to avoid additional charges.

Performance tuning for variable sequence lengths

You will probably have to process variable length sequences during LLM inference. The Neuron SDK generates buckets and a computation graph that works with the shape and size of the buckets. To fine tune the performance based on the length of input and output tokens in the inference requests, you can set two kinds of buckets corresponding to the two phases of LLM inference through the following environment variables as a list of integers:

  • NEURON_CONTEXT_LENGTH_BUCKETS corresponds to the context encoding phase. Set this to the estimated length of prompts during inference.
  • NEURON_TOKEN_GEN_BUCKETS corresponds to the token generation phase. Set this to a range of powers of two within your generation length.

You can use the docker run command to set these environment variables when starting the vLLM server (remember to replace YOUR_TOKEN_HERE with your Hugging Face token):

export HF_TOKEN="YOUR_TOKEN_HERE"
docker run \
        -it \
        -p 8000:8000 \
        --device /dev/neuron0 \
        -e HF_TOKEN=$HF_TOKEN \
        -e NEURON_CC_FLAGS=-O1 \
        -e NEURON_CONTEXT_LENGTH_BUCKETS="1024,1280,1536,1792,2048" \
        -e NEURON_TOKEN_GEN_BUCKETS="256,512,1024" \
        vllm-neuron

You can then start the server using the same command:

vllm serve meta-llama/Llama-3.2-1B --device neuron --tensor-parallel-size 2 --block-size 8 --max-model-len 4096 --max-num-seqs 32

Because the model graph has changed, the model will need to be recompiled. If the container was terminated, the model will also be downloaded again. You can then detach from the container by pressing ctrl + p and ctrl + q and send a request using the same command:

curl localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{"model": "meta-llama/Llama-3.2-1B", "prompt": "What is Gen AI?", "temperature":0, "max_tokens": 128}' | jq '.choices[0].text'

For more information about how to configure the buckets, see the developer guide on bucketing. Note that NEURON_CONTEXT_LENGTH_BUCKETS corresponds to context_length_estimate and NEURON_TOKEN_GEN_BUCKETS corresponds to n_positions in the documentation.

Conclusion

You’ve just seen how to deploy meta-llama/Llama-3.2-1B using vLLM on an Amazon EC2 Inf2 instance. If you’re interested in deploying other popular LLMs from Hugging Face, you can replace the modelID in the vLLM serve command. More details on the integration between the Neuron SDK and vLLM can be found in the Neuron user guide for continuous batching and the vLLM guide for Neuron.

After you’ve identified a model that you want to use in production, you will want to deploy it with autoscaling, observability, and fault tolerance. You can also refer to this blog post to understand how to deploy vLLM on Inferentia through Amazon Elastic Kubernetes Service (Amazon EKS). In the next post of this series, we’ll go into using Amazon EKS with Ray Serve to deploy vLLM into production with autoscaling and observability.


About the authors

Omri Shiv is an Open Source Machine Learning Engineer focusing on helping customers through their AI/ML journey. In his free time, he likes cooking, tinkering with open source and open hardware, and listening to and playing music.

Pinak Panigrahi works with customers to build ML-driven solutions to solve strategic business problems on AWS. In his current role, he works on optimizing training and inference of generative AI models on AWS AI chips.

Read More

Using LLMs to fortify cyber defenses: Sophos’s insight on strategies for using LLMs with Amazon Bedrock and Amazon SageMaker

Using LLMs to fortify cyber defenses: Sophos’s insight on strategies for using LLMs with Amazon Bedrock and Amazon SageMaker

This post is co-written with Adarsh Kyadige and Salma Taoufiq from Sophos. 

As a leader in cutting-edge cybersecurity, Sophos is dedicated to safeguarding over 500,000 organizations and millions of customers across more than 150 countries. By harnessing the power of threat intelligence, machine learning (ML), and artificial intelligence (AI), Sophos delivers a comprehensive range of advanced products and services. These solutions are designed to protect and defend users, networks, and endpoints against a wide array of cyber threats including phishing, ransomware, and malware. The Sophos Artificial Intelligence (AI) group (SophosAI) oversees the development and maintenance of Sophos’s major ML security technology.

Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation across diverse domains as showcased in numerous leaderboards (e.g., HELM, Hugging Face Open LLM leaderboard) that evaluate them on a myriad of generic tasks. However, their effectiveness in specialized fields like cybersecurity relies heavily on domain-specific knowledge. In this context, fine-tuning emerges as a crucial technique to adapt these general-purpose models to the intricacies of cybersecurity. For example, we could use instruction fine-tuning to improve model performance on incident classification or summarization. However, before fine-tuning, it’s important to determine an out-of-the-box model’s potential by testing its abilities on a set of tasks based on the domain. We have defined three specialized tasks that are covered later in this post. These same tasks can also be used to measure the gains in performance obtained through fine-tuning, Retrieval-Augmented Generation (RAG), or knowledge distillation.

In this post, SophosAI shares insights in using and evaluating an out-of-the-box LLM for the enhancement of a security operations center’s (SOC) productivity using Amazon Bedrock and Amazon SageMaker. We use Anthropic’s Claude 3 Sonnet on Amazon Bedrock to illustrate the use cases.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.

Tasks

We will showcase three example tasks to delve into using LLMs in the context of an SOC. An SOC is an organizational unit responsible for monitoring, detecting, analyzing, and responding to cybersecurity threats and incidents. It employs a combination of technology, processes, and skilled personnel to maintain the confidentiality, integrity, and availability of information systems and data. SOC analysts continuously monitor security events, investigate potential threats, and take appropriate action to mitigate risks. Known challenges faced by SOCs are the high volume of alerts generated by detection tools and the subsequent alert fatigue among analysts. These challenges are often coupled with staffing shortages. To address these challenges and enhance operational efficiency and scalability, many SOCs are increasingly turning to automation technologies to streamline repetitive tasks, prioritize alerts, and accelerate incident response. Considering the nature of tasks analysts need to perform, LLMs are good tools to enhance the level of automation in SOCs and empower security teams.

For this work, we focus on three essential SOC use cases where LLMs have the potential of greatly assisting analysts, namely:

  1. SQL Query generation from natural language to simplify data extraction
  2. Incident severity prediction to prioritize which incidents analysts should focus on
  3. Incident summarization based on its constituent alert data to increase analyst productivity

Based on the token consumption of these tasks, particularly the summarization component, we need a model with a context window of at least 4000 tokens. While the tasks have been tested in English, Anthropic’s Claude 3 Sonnet model can perform in other languages. However, we recommend evaluating the performance in your specific language of interest.

Let’s dive into the details of each task.

Task 1: Query generation from natural language

This task’s objective is to assess a model’s capacity to translate natural language questions into SQL queries, using contextual knowledge of the underlying data schema. This skill simplifies the data extraction process, allowing security analysts to conduct investigations more efficiently without requiring deep technical knowledge. We used prompt engineering guidelines to tailor our prompts to generate better responses from the LLM.

A three-shot prompting strategy is used for this task. Given a database schema, the model is provided with three examples pairing a natural-language question with its corresponding SQL query. Following these examples, the model is then prompted to generate the SQL query for a question of interest.

The prompt below is a three-shot prompt example for query generation from natural language. Empirically, we have obtained better results with few-shot prompting as opposed to one-shot (where the model is provided with only one example question and corresponding query before the actual question of interest) or zero-shot (where the model is directly prompted to generate a desired query without any examples).

Translate the following request into SQL
Schema for alert_table table
   <Table schema>
Schema for process_table table
   <Table schema>
Schema for network_table table
   <Table schema>

Here are some examples
<examples>
Request:tell me a list of processes that were executed between 2021/10/19 and 2021/11/30
   SQL:select * from process_table where timestamp between '2021-10-19' and '2021-11-30';

Request:show me any low severity security alerts from the last 23 days
   SQL:select * from alert_table where severity='low' and timestamp>=DATEADD('day', -23, CURRENT_TIMESTAMP());

Request:show me the count of msword.exe processes that ran between Dec/01 and Dec/11
   SQL:select count(*) from process_table where process='msword.exe' and timestamp>='2022-12-01' and timestamp<='2022-12-11';
</examples>

Request:"Any Ubuntu processes that was run by the user ""admin"" from host ""db-server"""
SQL:

To evaluate a model’s performance on this task, we rely on a proprietary data set of about 100 target queries based on a test database schema. To determine the accuracy of the queries generated by the model, a multi-step evaluation is followed. First, we verify whether the model’s output is an exact match to the expected SQL statement. Exact matches are then recorded as successful outcomes. If there is a mismatch, we then run both the model’s query and the expected query against our mock database to compare their results. However, this method can be prone to false positives and false negatives. To mitigate this, we further perform a query equivalence assessment using a different stronger LLM on this task. This method is known as LLM-as-a-judge.
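The following is a minimal sketch of the first two evaluation steps (exact match, then execution comparison). It assumes the queries run against a mock SQLite database; the function name and connection path are placeholders rather than the actual Sophos harness.

import sqlite3

def queries_equivalent(generated_sql, expected_sql, mock_db_path=":memory:"):
    """Return True if the generated query matches the expected one exactly,
    or if both queries return the same rows against a mock database."""
    # Step 1: exact string match after basic normalization
    if generated_sql.strip().lower() == expected_sql.strip().lower():
        return True
    # Step 2: execute both queries against the mock database and compare results
    conn = sqlite3.connect(mock_db_path)
    try:
        generated_rows = conn.execute(generated_sql).fetchall()
        expected_rows = conn.execute(expected_sql).fetchall()
        return sorted(map(repr, generated_rows)) == sorted(map(repr, expected_rows))
    except sqlite3.Error:
        # Invalid SQL counts as a failure; the LLM-as-a-judge step would follow for mismatches
        return False
    finally:
        conn.close()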

Anthropic’s Claude 3 Sonnet model achieved a good accuracy rate of 88 percent on the chosen dataset, suggesting that this natural-language-to-SQL task is quite simple for LLMs. With basic few-shot prompting, an LLM can therefore be used out of the box, without fine-tuning, by security analysts to help them retrieve key information while investigating threats. This performance figure is specific to our dataset and experimental setup, so you can run your own evaluation using the strategy explained above.

Task 2: Incident severity prediction

For the second task, we assess a model’s ability to recognize the severity of observed events as indicators of an incident. Specifically, we try to determine whether an LLM can review a security incident and accurately gauge its importance. Armed with such a capability, a model can assist analysts in determining which incidents are most pressing, so they can work more efficiently by organizing their work queue based on severity levels, cut through the noise, and save time and energy.

The input data in this use case is semi-structured alert data, typical of what is produced by various detection systems during an incident. We clearly define severity categories—critical, high, medium, low, and informational—across which the model is to classify the severity of the incident. This is therefore a classification problem that tests an LLM’s intrinsic cybersecurity knowledge.

Each security incident within the Sophos Managed Detection and Response (MDR) platform is made up of multiple detections that highlight suspicious activities occurring in a user’s environment. A detection might involve identifying potentially harmful patterns, such as unusual command executions, abnormal file access, anomalous network traffic, or suspicious script use. An example of the input data appears below.

The “detection” section provides detailed information about each specific suspicious activity that was identified. It includes the type of security incident, such as “Execution,” along with a description that explains the nature of the threat, like the use of suspicious PowerShell commands. The detection is tied to a unique identifier for tracking and reference purposes. Additionally, it contains details from the MITRE ATT&CK framework which categorizes the tactics and techniques involved in the threat. This section might also reference related Sigma rules, which are community-driven signatures for detecting threats across different systems. By including these elements, the detection section serves as a comprehensive outline of the potential threat, helping analysts understand not just what was detected but also why it matters.

The “machine_data” section holds crucial information about the machine on which the detection occurred. It can provide further metadata on the machine, helping to pinpoint where exactly in the environment the suspicious activity was observed.

{
    ...
  "detection": {
    "attack": "Execution",
    "description": "Identifies the use of suspicious PowerShell IEX patterns. IEX is the shortened version of the Invoke-Expression PowerShell cmdlet. The cmdlet runs the specified string as a command.",
    "id": <Detection ID>,
    "mitre_attack": [
      {
        "tactic": {
          "id": "TA0002",
          "name": "Execution",
          "techniques": [
            {
              "id": "T1059.001",
              "name": "PowerShell"
            }
          ]
        }
      },
      {
        "tactic": {
          "id": "TA0005",
          "name": "Defense Evasion",
          "techniques": [
            {
              "id": "T1027",
              "name": "Obfuscated Files or Information"
            }
          ]
        }
      }
    ],
    "sigma": {
      "id": <Detection ID>,
      "references": [
        "https://github.com/SigmaHQ/sigma/blob/master/rules/windows/process_creation/proc_creation_win_susp_powershell_download_iex.yml",
        "https://github.com/VirtualAlllocEx/Payload-Download-Cradles/blob/main/Download-Cradles.cmd"
      ]
    },
    "type": "process",
  },
  "machine_data": {
    ...
    "username": <Username>
    },
    "customer_id": <Customer ID>,
    "decorations": {
        <Customer data>
    },
    "original_file_name": "powershell.exe",
    "os_platform": "windows",
    "parent_process_name": "cmd.exe",
    "parent_process_path": "C:\Windows\System32\cmd.exe",
    "powershell_code": "iex ([system.text.encoding]::ASCII.GetString([Convert]::FromBase64String('aWYoR2V0LUNvbW1hbmQgR2V0LVdpbmRvd3NGZWF0dXJlIC1lYSBTaWxlbnRseUNvbnRpbnVlKQp7CihHZXQtV2luZG93c0ZlYXR1cmUgfCBXaGVyZS1PYmplY3QgeyRfLm5hbWUgLWVxICdSRFMtUkQtU2VydmVyJ30gfCBTZWxlY3QgSW5zdGFsbFN0YXRlKS5JbnN0YWxsU3RhdGUKfQo=')))",
    "process_name": "powershell.exe",
    "process_path": "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
  },
  ...
} 

To facilitate evaluation, the prompt used for this task requires that the model communicates its severity assessments in a uniform way, providing the response in a standardized format, for example, as a dictionary with severity_pred as the key and their chosen severity level as the value. The prompt below is an example for incident severity classification. Model performance is then evaluated against a test set of over 3,800 security incidents with target severity levels.

You are a helpful cybersecurity incident investigation expert that classifies incidents according to their severity level given a set of detections per incident.
Respond strictly with this JSON format: {"severity_pred": "xxx"} where xxx should only be either:
    - Critical,
    <Criteria for a critical incident>
    - High,
    <Criteria for a high severity incident>
    - Medium,
    <Criteria for a medium severity incident>
    - Low,
    <Criteria for a low severity incident>
    - Informational
    <Criteria for an informational incident>
    No other value is allowed.

Detections:

Various experimental setups are used for this task, including zero-shot prompting, three-shot prompting using random or nearest-neighbor incidents examples, and simple classifiers.

This task turned out to be quite challenging, because of the noise in the target labels and the inherent difficulty of assessing the criticality of an incident without further investigation by models that weren’t trained specifically for this use case.

Even under various setups, such as few-shot prompting with nearest neighbor incidents, the model’s performance couldn’t reliably outperform random chance. For reference, the baseline accuracy on the test set is approximately 71 percent and the baseline balanced accuracy is 20 percent.
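As a sketch of how such a run can be scored, assuming the raw model responses and target labels have already been collected into two lists (the function name is ours, not part of the Sophos pipeline):

import json
from sklearn.metrics import accuracy_score, balanced_accuracy_score

def score_severity_predictions(raw_responses, target_labels):
    """Parse the model's JSON responses and compute accuracy and balanced accuracy."""
    predictions = []
    for raw in raw_responses:
        try:
            predictions.append(json.loads(raw)["severity_pred"])
        except (json.JSONDecodeError, KeyError):
            predictions.append("Invalid")  # malformed responses count against the model
    return {
        "accuracy": accuracy_score(target_labels, predictions),
        "balanced_accuracy": balanced_accuracy_score(target_labels, predictions),
    }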

Figure 1 presents the confusion matrix of the model’s responses, which shows the model’s classification performance at a glance. Only 12% (0.12) of the actual Critical incidents were correctly classified; 50% of the Critical incidents were predicted as High, 25% as Medium, and 12% as Informational. Accuracy is similarly low for the remaining labels, the lowest being the Low label, with only 2% of those incidents correctly predicted. There is also a notable tendency to overpredict the High and Medium categories across the board.

Figure 1: Confusion matrix for the five-severity-level classification using Anthropic Claude 3 Sonnet

The performance observed in this benchmark task indicates this is a particularly hard problem for an unmodified, all-purpose LLM, and the problem requires a more specialized model, specifically trained or fine-tuned on cybersecurity data.

Task 3: Incident summarization

The third task is concerned with the summarization of incoming incidents. It evaluates the potential of a model to assist threat analysts in the triage and investigation of security incidents as they come in by providing a succinct and concise summary of the activity that triggered the incident.

Security incidents typically consist of a series of events occurring on a user endpoint or network, associated with detected suspicious activity. The analysts investigating the incident are presented with a series of events that occurred on the endpoint at the time the suspicious activity was detected. However, analyzing this event sequence can be challenging and time-consuming, resulting in difficulty in identifying noteworthy events. This is where LLMs can be beneficial by helping organize and categorize event data following a specific template, thereby aiding comprehension, and helping analysts quickly determine the appropriate next actions.

We use real incident data from Sophos’s MDR for incident summarization. The input for this task encompasses a set of JSON events, each having distinct schemas and attributes based on the capturing sensor. Along with instructions and a predefined template, this data is provided to the model to generate a summary. The prompt below is an example template prompt for generating incident summaries from SOC data.

As a cybersecurity assistant, your task is to:
    1. Analyze the provided cybersecurity detections data.
    2. Create a report of the events using the information from the '### Detections' section, which may include security artifacts such as command lines and file paths.
    3. [Any other additional general requirements for formatting, etc.]
The report outline should look like this:
Summary:
    <Few sentence description of the activity. [Any additional requirements for the summary: what to  include, etc.]>
Observed MITRE Techniques:
    <List only the registered MITRE Technique or Tactic ID and name pairs if available. The ID should start with 'T'.>
Impacted Hosts:
    <List of all hostname observed in the detections, provide corresponding IPs if available>
Active Users:
    <List of all usernames observed in the detections. There could be multiple, list all of them>
Events:
    <One sentence description for top three detection events. Start the list with \n1. >
IPs/URLs:
    <List available IPs and URLs.>
    <Enumerate only up to ten artifacts under each report category, and summarize any remaining events beyond that.>
Files: 
    <List the files found in the incident as follows:>
    <TEMPLATE FOR FILES WITH DETAILS>
Command Lines: 
    <List the command lines found in the detections as follows:>
    <TEMPLATE FOR COMMAND LINES WITH DETAILS>

### Detections:

Evaluating these generated incident summaries is tricky because several factors must be considered. For example, it’s crucial that the extracted information is not only correct, but also relevant. To gain a general understanding of the quality of a model’s incident summarization, we use a set of five distinct metrics and rely on a dataset comprising N incidents. We compare the generated descriptions with corresponding gold-standard descriptions crafted based on Sophos analysts’ feedback.

We compute two classes of metrics. The first class of metrics assesses factual accuracy; they are used to evaluate how many artifacts such as command lines, file paths, usernames, and so on were correctly identified and summarized by the model. The computation here is straightforward; we compute the average distance across extracted artifacts between the generated description and the target. We use two distance metrics, Levenshtein distance and longest common subsequence (LCS).
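A minimal sketch of this artifact-level comparison follows. It assumes the artifacts (command lines, file paths, usernames, and so on) have already been extracted and paired, and it implements the two distances directly rather than using the exact libraries from our pipeline.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def lcs_length(a, b):
    """Length of the longest common subsequence of two strings."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def artifact_accuracy(generated_artifacts, target_artifacts):
    """Average Levenshtein- and LCS-based similarity across paired artifacts."""
    lev_scores, lcs_scores = [], []
    for gen, target in zip(generated_artifacts, target_artifacts):
        longest = max(len(gen), len(target)) or 1
        lev_scores.append(1 - levenshtein(gen, target) / longest)
        lcs_scores.append(lcs_length(gen, target) / longest)
    n = len(lev_scores) or 1
    return sum(lev_scores) / n, sum(lcs_scores) / n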

The second class of metrics is used to provide a more semantic evaluation of the generated description, using three different metrics:

  • BERTScore metric: This metric is used to evaluate the generated summaries using a pre-trained BERT model’s contextual embeddings. It determines the similarity between the generated summary and the reference summary using cosine similarity.
  • ADA2 embeddings cosine similarity: This metric assesses the cosine similarity of ADA2 embeddings of tokens in the generated summary with those of the reference summary.
  • METEOR score: METEOR is an evaluation metric based on the harmonic mean of unigram precision and recall.

More advanced evaluation methods can be used such as training a reward model on human preferences and using it as an evaluator, but for the sake of simplicity and cost-effectiveness, we limited the scope to these metrics.
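For reference, the following sketch shows how the BERTScore and METEOR components could be computed with the bert-score and NLTK packages; exact APIs vary slightly by version, and the ADA2 cosine similarity is omitted because it requires calls to an embedding model.

# pip install bert-score nltk
import nltk
from bert_score import score as bert_score
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet")  # METEOR relies on WordNet for synonym matching

def semantic_scores(generated_summaries, reference_summaries):
    # BERTScore: similarity of contextual embeddings, averaged F1 over the dataset
    _, _, f1 = bert_score(generated_summaries, reference_summaries, lang="en")
    avg_bert = f1.mean().item()
    # METEOR: harmonic mean of unigram precision and recall (inputs must be tokenized)
    avg_meteor = sum(
        meteor_score([ref.split()], gen.split())
        for gen, ref in zip(generated_summaries, reference_summaries)
    ) / len(generated_summaries)
    return avg_bert, avg_meteor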

Below is a summary of our results on this task:

Anthropic’s Claude 3 Sonnet achieved the following scores:

  • Levenshtein-based factual accuracy: 0.810
  • LCS-based factual accuracy: 0.721
  • BERTScore: 0.886
  • Cosine similarity of ADA2 embeddings: 0.951
  • METEOR score: 0.4165

Based on these findings, we gain a broad understanding of the performance of the model when it comes to generating incident summaries, focusing especially on factual accuracy and retrieval rate. Anthropic’s Claude 3 Sonnet model can capture the activity that’s occurring in the incident and summarize it well. However, it ignores certain instructions such as defanging all IPs and URLs. The returned reports are also not fully aligned with the target responses on a token level as signaled by the METEOR score. Anthropic’s Claude 3 Sonnet model skims over some details and explanations in the reports.

Experimental setup using Amazon Bedrock and Amazon SageMaker

This section outlines the experimental setup for evaluating various large language models (LLMs) using Amazon Bedrock and Amazon SageMaker. These services allowed us to efficiently interact with and deploy multiple LLMs for quick and cost-effective experimentation.

Amazon Bedrock

Amazon Bedrock is a managed service that allows you to experiment with various LLMs quickly in an on-demand manner. This brings the advantage of being able to interact and experiment with LLMs without having to self-host them, paying only for the tokens consumed. We used the InvokeModel API to interact with the model with minimal latency. We wrote the following function that lets us call different models by passing the necessary inference parameters to the API. For more details on what the inference parameters are per provider, we recommend you read the Inference request parameters and response fields for foundation models section in the Amazon Bedrock documentation. The example below uses the function with Anthropic’s Claude 3 Sonnet model. Notice that we gave the model a role via the system prompt and that we prefilled its response.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

system_prompt = "You are a helpful cybersecurity incident investigation expert that classifies incidents according to their severity level given a set of detections per incident"
messages = [
    {"role": "user",
     "content": """Respond strictly with this JSON format: {"severity_pred": "xxx"} where xxx should only be either:
- Critical,
<Criteria for a critical incident>
- High,
<Criteria for a high severity incident>
- Medium,
<Criteria for a medium severity incident>
- Low,
<Criteria for a low severity incident>
- Informational
<Criteria for an informational incident>
No other value is allowed."""},
    {"role": "assistant", "content": "Detections:"}
]

def generate_message(bedrock_runtime, model_id, system_prompt, messages, max_tokens):
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": max_tokens,
            "system": system_prompt,
            "messages": messages
        }
    )
    response = bedrock_runtime.invoke_model(body=body, modelId=model_id)
    response_body = json.loads(response.get('body').read())
    return response_body

The preceding example is based on our use case. The model_id parameter specifies the identifier of the specific model you wish to invoke using the Amazon Bedrock runtime. We used the model ID anthropic.claude-3-sonnet-20240229-v1:0. For other model IDs, refer to the Amazon Bedrock documentation. For further details about this API, we recommend you read the API documentation. We advise you to adapt the code to your use case based on your requirements.
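For example, building on the snippet above, a single incident can then be classified as follows; max_tokens is an arbitrary value for this sketch.

response_body = generate_message(
    bedrock_runtime,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    system_prompt=system_prompt,
    messages=messages,
    max_tokens=256,
)
# The severity prediction is returned in the first text block of the response
print(response_body["content"][0]["text"])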

Our analysis in this blog post has focused on Anthropic’s Claude 3 Sonnet model and three specific use cases. These insights can be adapted to other SOCs’ specific requirements and desired models. For example, it’s possible to access other models such as Meta’s Llama models, Mistral models, Amazon Titan models and others. For additional models, we used Amazon SageMaker Jumpstart.

Amazon SageMaker

Amazon SageMaker is a fully managed machine learning (ML) service. With SageMaker, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. Amazon SageMaker JumpStart is a robust feature within the SageMaker environment that offers practitioners a comprehensive hub of publicly available and proprietary foundation models (FMs), which you can tune and deploy quickly in a low-code manner. To experiment with the out-of-the-box models in SageMaker quickly and cost-effectively, we deployed the LLMs from SageMaker JumpStart using asynchronous inference endpoints.

Inference endpoints were an effortless way for us to download these models directly from their respective Hugging Face repositories and deploy them using a few lines of code and pre-made Text Generation Inference (TGI) containers (see the example notebook on GitHub). In addition, we used asynchronous inference endpoints with auto scaling, which helped us manage costs by automatically scaling the endpoints down to zero when they weren’t being used. Given the number of endpoints we were creating, asynchronous inference made endpoint management simple: each endpoint was ready whenever it was needed and scaled down when it wasn’t, with no additional management on our end after the scaling policy was defined.
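The following is an illustrative sketch of such a scale-to-zero policy, using the Application Auto Scaling API; the endpoint name is a placeholder and the target and cooldown values are examples, not the values we used.

import boto3

autoscaling = boto3.client("application-autoscaling")
endpoint_name = "my-async-llm-endpoint"  # placeholder endpoint name
resource_id = f"endpoint/{endpoint_name}/variant/AllTraffic"

# Allow the async endpoint's instance count to scale between 0 and 1
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=0,
    MaxCapacity=1,
)

# Scale based on the backlog of queued asynchronous requests per instance
autoscaling.put_scaling_policy(
    PolicyName="async-backlog-scaling",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 1.0,
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateBacklogSizePerInstance",
            "Namespace": "AWS/SageMaker",
            "Dimensions": [{"Name": "EndpointName", "Value": endpoint_name}],
            "Statistic": "Average",
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 300,
    },
)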

Next steps

In this post we applied the tasks to a single model to showcase the approach as an example; in practice, you would select a few LLMs to put through these experiments based on your requirements. From there, if the out-of-the-box models aren’t sufficient for the task, you would select the best-suited LLM and fine-tune it on the specific task.

For example, based on the outcomes of our three experimental tasks, we found that the results of the incident information summarization task didn’t meet our expectations. Therefore, we will fine-tune the out-of-the-box model that best suits our needs. This fine-tuning process can be accomplished using Amazon Bedrock Custom Models or SageMaker fine tuning, and the fine-tuned model could then be deployed using the customized model by importing it into Amazon Bedrock or by deploying the model to a SageMaker endpoint.

In this post we covered the experimentation phase. Once you identify an LLM that meets your performance requirements, it’s important to start considering how to productionize it. When productionizing an LLM, it is important to consider things like guardrails and the scalability of the LLM. Implementing guardrails helps you minimize the risk of the model being misused or of security breaches. Amazon Bedrock Guardrails enables you to implement safeguards for your generative AI applications based on your use cases and responsible AI policies. This blog covers how to build guardrails in your generative AI applications. When moving an LLM into production, you also want to validate its scalability based on request traffic. In Amazon Bedrock, consider increasing your model quotas, using batch inference, queuing requests, or even distributing requests across Regions that offer the same model. Select the technique that suits your use case and traffic.

Conclusion

In this post, SophosAI shared insights on how to use and evaluate out-of-the-box LLMs following a set of specialized tasks for the enhancement of a security operations center’s (SOC) productivity by using Amazon Bedrock and Amazon SageMaker. We used Anthropic’s Claude 3 Sonnet model on Amazon Bedrock to illustrate three use cases.

Amazon Bedrock and SageMaker have been key to enabling us to run these experiments. With the convenient access to high-performing foundation models (FMs) from leading AI companies provided by Amazon Bedrock through a single API call, we were able to test various LLMs without needing to deploy them ourselves. Additionally, the on-demand pricing model allowed us to only pay for what we used based on token consumption.

To access additional models with more flexible control, SageMaker is a great alternative that offers a wide range of LLMs ready for deployment. Although you deploy these models yourself, you can still achieve significant cost optimization by using asynchronous endpoints with a scaling policy that scales the instance count down to zero when the endpoint is not in use.

General takeaways as to the applicability of an LLM such as Anthropic’s Claude 3 Sonnet model in cybersecurity can be summarized as follows:

  • An out-of-the-box LLM can be an effective assistant in threat hunting and incident investigation. However, it still requires some guardrails and guidance. We believe that this potential application can be implemented using an existing powerful model, such as Anthropic’s Claude 3 Sonnet model, with careful prompt engineering.
  • When it comes to summarizing incident information from raw data, Anthropic’s Claude 3 Sonnet model performs adequately, but there’s room for improvement through fine-tuning.
  • Evaluating individual artifacts or groups of artifacts remains a challenging task for a pre-trained LLM. To tackle this problem, a specialized LLM trained specifically on cybersecurity data might be required.

It is also worth noting that although we used the InvokeModel API from Amazon Bedrock, a simpler way to access Amazon Bedrock models is the Converse API. The Converse API provides consistent API calls that work across Amazon Bedrock models that support messages, so you can write code once and use it with different models. If a model has unique inference parameters, the Converse API also allows you to pass them in a model-specific structure.
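
As a brief illustration (with a placeholder prompt), a Converse API call looks like the following; switching models is mostly a matter of changing the modelId.

    import boto3

    bedrock_runtime = boto3.client("bedrock-runtime")

    # The same message structure works across Bedrock models that support
    # messages; only the model ID needs to change. The prompt is illustrative.
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",
        messages=[
            {"role": "user", "content": [{"text": "Summarize this incident: ..."}]}
        ],
        inferenceConfig={"maxTokens": 1024, "temperature": 0.2},
        # Model-specific parameters can still be passed when needed:
        additionalModelRequestFields={"top_k": 250},
    )
    print(response["output"]["message"]["content"][0]["text"])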


About the Authors

Benoit de Patoul is a GenAI/AI/ML Specialist Solutions Architect at AWS. He helps customers by providing guidance and technical assistance to build solutions related to GenAI/AI/ML using Amazon Web Services. In his free time, he likes to play piano and spend time with friends.

Naresh Nagpal is a Solutions Architect at AWS with extensive experience in application development, integration, and technology architecture. At AWS, he works with ISV customers in the UK to help them build and modernize their SaaS applications on AWS. He is also helping customers to integrate GenAI capabilities in their SaaS applications.

Adarsh Kyadige oversees the Research wing of the Sophos AI team, where he has been working since 2018 at the intersection of Machine Learning and Security. He earned a Masters degree in Computer Science, with a specialization in Artificial Intelligence and Machine Learning, from UC San Diego. His interests and responsibilities involve applying Deep Learning to Cybersecurity, as well as orchestrating pipelines for large scale data processing. In his leisure time, Adarsh can be found at the archery range, tennis courts, or in nature. His latest research can be found on Google Scholar.

Salma Taoufiq was a Senior Data Scientist at Sophos focusing on the intersection of machine learning and cybersecurity. With an undergraduate background in computer science, she graduated from the Central European University with an MSc in Mathematics and Its Applications. When not developing a malware detector, Salma is an avid hiker, traveler, and consumer of thrillers.


Enhanced observability for AWS Trainium and AWS Inferentia with Datadog


This post is co-written with Curtis Maher and Anjali Thatte from Datadog. 

This post walks you through Datadog’s new integration with AWS Neuron, which helps you monitor your AWS Trainium and AWS Inferentia instances by providing deep observability into resource utilization, model execution performance, latency, and real-time infrastructure health, so you can optimize your machine learning (ML) workloads and achieve high performance at scale.

Neuron is the SDK used to run deep learning workloads on Trainium- and Inferentia-based instances. AWS AI chips, Trainium and Inferentia, enable you to build and deploy generative AI models at higher performance and lower cost. With the increasing use of large models requiring a large number of accelerated compute instances, observability plays a critical role in ML operations, empowering you to improve performance, diagnose and fix failures, and optimize resource utilization.

Datadog, an observability and security platform, provides real-time monitoring for cloud infrastructure and ML operations. Datadog is excited to launch its Neuron integration, which pulls metrics collected by the Neuron SDK’s Neuron Monitor tool into Datadog, enabling you to track the performance of your Trainium- and Inferentia-based instances. By providing real-time visibility into model performance and hardware usage, Datadog helps you achieve efficient training and inference, optimize resource utilization, and prevent service slowdowns.

Comprehensive monitoring for Trainium and Inferentia

Datadog’s integration with the Neuron SDK automatically collects metrics and logs from Trainium and Inferentia instances and sends them to the Datadog platform. Upon enabling the integration, users will find an out-of-the-box dashboard in Datadog, making it straightforward to start monitoring quickly. You can also modify the preexisting dashboards and monitors, and add new ones tailored to your specific monitoring requirements.

The Datadog dashboard offers a detailed view of your AWS AI chips (Trainium or Inferentia), including performance, the number of instances, availability, and AWS Region. Real-time metrics give an immediate snapshot of infrastructure health, with preconfigured monitors alerting teams to critical issues like latency, resource utilization, and execution errors. The following screenshot shows an example dashboard.

For instance, when latency spikes on a specific instance, a monitor in the monitor summary section of the dashboard will turn red and trigger alerts through Datadog or other paging mechanisms (like Slack or email). High latency may indicate high user demand or inefficient data pipelines, which can slow down response times. By identifying these signals early, teams can quickly respond in real time to maintain high-quality user experiences.
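
If you want to codify such an alert yourself rather than rely only on the preconfigured monitors, the following is a minimal sketch using the Datadog API client for Python. The metric name, threshold, and Slack handle are assumptions for illustration; substitute the metric names that the Neuron integration reports in your account.

    from datadog_api_client import ApiClient, Configuration
    from datadog_api_client.v1.api.monitors_api import MonitorsApi
    from datadog_api_client.v1.model.monitor import Monitor
    from datadog_api_client.v1.model.monitor_type import MonitorType

    # Alert when NeuronCore utilization stays above 90% for 5 minutes.
    # The metric name and Slack handle are placeholders.
    monitor = Monitor(
        name="High NeuronCore utilization",
        type=MonitorType.METRIC_ALERT,
        query="avg(last_5m):avg:aws_neuron.neuroncore_utilization{*} by {host} > 90",
        message="NeuronCore utilization is above 90% on {{host.name}}. @slack-ml-infra",
    )

    # Credentials are read from the DD_API_KEY / DD_APP_KEY environment variables.
    configuration = Configuration()
    with ApiClient(configuration) as api_client:
        MonitorsApi(api_client).create_monitor(body=monitor)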

Datadog’s Neuron integration enables tracking of key performance aspects, providing crucial insights for troubleshooting and optimization:

  • NeuronCore counters – Monitoring NeuronCore utilization helps confirm that cores are being used efficiently and shows whether you need to rebalance workloads or make adjustments to optimize performance.
  • Execution status – You can monitor the progress of training jobs, including completed tasks and failed runs. This data helps confirm that models are training smoothly and reliably. If failures increase, it may signal issues with data quality, model configurations, or resource limitations that need to be addressed.
  • Memory used – You can gain a granular view of memory usage across both the host and the Neuron device, including memory allocated for tensors and model execution. This helps you understand how effectively resources are being used and when it might be time to rebalance workloads or scale resources to prevent bottlenecks from causing disruptions during training.
  • Neuron runtime vCPU usage – You can keep an eye on vCPU utilization to make sure your models aren’t overburdening the infrastructure. When vCPU usage crosses a certain threshold, you are alerted so you can decide whether to redistribute workloads or upgrade instance types to avoid training slowdowns.

By consolidating these metrics into one view, Datadog provides a powerful tool for maintaining efficient, high-performance Neuron workloads, helping teams identify issues in real time and optimize infrastructure as needed. Using the Neuron integration combined with Datadog’s LLM Observability capabilities, you can gain comprehensive visibility into your large language model (LLM) applications.

Get started with Datadog, Inferentia, and Trainium

Datadog’s integration with Neuron provides real-time visibility into Trainium and Inferentia, helping you optimize resource utilization, troubleshoot issues, and achieve seamless performance at scale. To get started, see AWS Inferentia and AWS Trainium Monitoring.

To learn more about how Datadog integrates with Amazon ML services and Datadog LLM Observability, see Monitor Amazon Bedrock with Datadog and Monitoring Amazon SageMaker with Datadog.

If you don’t already have a Datadog account, you can sign up for a free 14-day trial today.


About the Authors

Curtis Maher is a Product Marketing Manager at Datadog, focused on the platform’s cloud and AI/ML integrations. Curtis works closely with Datadog’s product, marketing, and sales teams to coordinate product launches and help customers observe and secure their cloud infrastructure.

Anjali Thatte is a Product Manager at Datadog. She currently focuses on building technology to monitor AI infrastructure and ML tooling and helping customers gain visibility across their AI application tech stacks.

Jason Mimick is a Senior Partner Solutions Architect at AWS working closely with product, engineering, marketing, and sales teams daily.

Anuj Sharma is a Principal Solution Architect at Amazon Web Services. He specializes in application modernization with hands-on technologies such as serverless, containers, generative AI, and observability. With over 18 years of experience in application development, he currently leads co-building with containers and observability focused AWS Software Partners.
