Improve employee productivity using generative AI with Amazon Bedrock

The Employee Productivity GenAI Assistant Example is a practical AI-powered solution designed to streamline writing tasks, allowing teams to focus on creativity rather than repetitive content creation. Built on AWS technologies like AWS Lambda, Amazon API Gateway, and Amazon DynamoDB, this tool automates the creation of customizable templates and supports both text and image inputs. Using generative AI models such as Anthropic’s Claude 3 from Amazon Bedrock, it provides a scalable, secure, and efficient way to generate high-quality content. Whether you’re new to AI or an experienced user, this simplified interface allows you to quickly take advantage of the power of this sample code, enhancing your team’s writing capabilities and enabling them to focus on more valuable tasks.

By using Amazon Bedrock and generative AI on AWS, organizations can accelerate their innovation cycles, unlock new business opportunities, and deliver innovative solutions powered by the latest advancements in generative AI technology, while maintaining high standards of security, scalability, and operational efficiency.

AWS takes a layered approach to generative AI, providing a comprehensive stack that covers the infrastructure for training and inference, tools to build with large language models (LLMs) and other foundation models (FMs), and applications that use these models. At the bottom layer, AWS offers advanced infrastructure like graphics processing units (GPUs), AWS Trainium, AWS Inferentia, and Amazon SageMaker, along with capabilities like UltraClusters, Elastic Fabric Adapter (EFA), and Amazon EC2 Capacity Blocks for efficient model training and inference. The middle layer, Amazon Bedrock, provides a managed service that allows you to choose from industry-leading models, customize them with your own data, and use security, access controls, and other features. This layer includes capabilities like guardrails, agents, Amazon Bedrock Studio, and customization options. The top layer consists of applications like Amazon Q Business, Amazon Q Developer, Amazon Q in QuickSight, and Amazon Q in Connect, which enable you to use generative AI for various tasks and workflows. This post focuses exclusively on the middle layer, tools with LLMs and other FMs, specifically Amazon Bedrock and its capabilities for building and scaling generative AI applications.

Employee GenAI Assistant Example: Key features

In this section, we discuss the key features of the Employee Productivity GenAI Assistant Example and its console options.

The Playground page of the Employee Productivity GenAI Assistant Example is designed to interact with Anthropic’s Claude language models on Amazon Bedrock. In this example, we explore how to use the Playground feature to request a poem about New York City, with the model’s response dynamically streamed back to the user.

Playground GIF

This process includes the following steps:

  1. The Playground interface provides a dropdown menu to choose the specific AI model to be used. In this case, use anthropic.claude-3-sonnet-20240229-v1:0, which is a version of Anthropic’s Claude 3.
  2. In the Input field, enter the prompt “Write a poem about NYC” to request the AI model to compose a poem about New York.
  3. After you enter the prompt, choose Submit. This sends the API request to Amazon Bedrock, which hosts Anthropic’s Claude 3 Sonnet model.

As the AI model processes the request and generates the poem, it’s streamed back to Output in real time, allowing you to observe the text being generated word by word or line by line.
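
To see what this streaming interaction looks like programmatically, the following is a minimal sketch (not the application's actual backend code) that streams a response from Anthropic's Claude 3 Sonnet on Amazon Bedrock with boto3. The model ID and prompt mirror the Playground example above.

import boto3
import json

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Write a poem about NYC"}]}
    ],
})

response = bedrock_runtime.invoke_model_with_response_stream(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)

# Print the poem as it streams back, chunk by chunk
for event in response["body"]:
    chunk = json.loads(event["chunk"]["bytes"])
    if chunk.get("type") == "content_block_delta":
        print(chunk["delta"]["text"], end="", flush=True)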

The Templates page lists various predefined sample prompt templates, such as Interview Question Crafter, Perspective Change Prompt, Grammar Genie, and Tense Change Prompt.

Template GIF

Now let’s create a template called Product Naming Pro:

  1. Add a customized prompt by choosing Add Prompt Template.
  2. Enter Product Naming Pro as the name and Create catchy product names from descriptions and keywords as the description.
  3. Choose anthropic.claude-3-sonnet-20240229-v1:0 as the model.

The template section includes a System Prompt option. In this example, we provide the System Prompt with guidance on creating effective product names that capture the essence of the product and leave a lasting impression.

The ${INPUT_DATA} field is a placeholder variable that allows template users to provide their input text, which will be incorporated into the prompt used by the system. The visibility of the template can be set as Public or Private. A public template can be seen by authenticated users within the deployment of the solution, making sure that only those with an account and proper authentication can access it. In contrast, a private template is only visible to your own authenticated user, keeping it exclusive to you. Additional information, such as the creator’s email address, is also displayed.
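
To illustrate how such a placeholder might be resolved (the sample code's exact implementation may differ), a template containing ${INPUT_DATA} can be filled in with Python's string.Template before the prompt is sent to the model. The template text below is a hypothetical example, not one of the shipped templates.

from string import Template

# Hypothetical template text; in the solution, templates are stored in DynamoDB
template_text = Template(
    "You are a product naming expert. Create five catchy product names "
    "for the following product description and keywords:\n${INPUT_DATA}"
)

user_input = "Noise-canceling wireless over-ear headphones, 20-hour battery, touch controls"
prompt = template_text.substitute(INPUT_DATA=user_input)
print(prompt)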

The interface showcases the creation of a Product Naming Pro template designed to generate catchy product names from descriptions and keywords, enabling efficient prompt engineering.

On the Activity page, you can choose a prompt template to generate output based on provided input.

Activity GIF

The following steps demonstrate how to use the Activity feature:

  1. Choose the Product Naming Pro template created in the previous section.
  2. In the input field, enter a description: A noise-canceling, wireless, over-ear headphone with a 20-hour battery life and touch controls. Designed for audiophiles and frequent travelers.
  3. Add relevant keywords: immersive, comfortable, high-fidelity, long-lasting, convenient.
  4. After you provide the input description and keywords, choose Submit.

The output section displays five suggested product names that were generated based on the input. For example, SoundScape Voyager, AudioOasis Nomad, EnvoyAcoustic, FidelityTrek, and SonicRefuge Traveler.

The template has processed the product description and keywords to create catchy and descriptive product name suggestions that capture the essence of the noise-canceling, wireless, over-ear headphones designed for audiophiles and frequent travelers.

The History page displays logs of the interactions and activities performed within the application, including requests made on the Playground and Activity pages.

History GIF

At the top of the interface, a notification indicates that text has been copied to the clipboard, enabling you to copy generated outputs or prompts for use elsewhere.

The View and Delete options allow you to review the full details of the interaction or delete the entry from the history log, respectively.

The History page lets you track and revisit past activities within the application, providing transparency and allowing you to reference or manage your previous interactions with the system. The history saves your inputs and outputs on the Playground and Activity pages (at the time of writing, Chat page history is not yet supported). You can only see the history of your own requests, safeguarding security and privacy, and no other users can access your data. Additionally, you can delete records stored in the history at any time if you prefer not to keep them.

Chat GIF

The interactive chat interface displays a chat conversation. The user is greeted by the assistant, and then chooses the Product Naming Pro template and provides a product description for a noise-canceling, wireless headphone designed for audiophiles and frequent travelers. The assistant responds with an initial product name recommendation based on the description. The user then requests additional recommendations, and the assistant provides five more product name suggestions. This interactive conversation highlights how the chat functionality allows continued natural language interaction with the AI model to refine responses and explore multiple options.

In the following example, the user chooses an AI model (for example, anthropic.claude-3-sonnet-20240229-v1:0) and provides input for that model. An image named headphone.jpg has been uploaded and the user asks “Please describe the image uploaded in detail to me.”

MultiModal GIF

The user chooses Submit and the AI model’s output is displayed, providing a detailed description of the headphone image. It describes the headphones as “over-ear wireless headphones in an all-black color scheme with a sleek and modern design.” It mentions the matte black finish on the ear cups and headband, as well as the well-padded soft leather or leatherette material for comfort during extended listening sessions.

This demonstrates the power of multimodal models like Anthropic’s Claude 3 family on Amazon Bedrock, allowing you to upload and use up to six images on the Playground or Activity pages as inputs for generating context-rich, multimodal responses.

Solution overview

The Employee Productivity GenAI Assistant Example is built on robust AWS serverless technologies such as AWS Lambda, API Gateway, DynamoDB, and Amazon Simple Storage Service (Amazon S3), maintaining scalability, high availability, and security through Amazon Cognito. These technologies provide a foundation that allows the Employee Productivity GenAI Assistant Example to respond to user needs on-demand while maintaining strict security standards. The core of its generative abilities is derived from the powerful AI models available in Amazon Bedrock, which help deliver tailored and high-quality content swiftly.

The following diagram illustrates the solution architecture.

Architecture Diagram

The workflow of the Employee Productivity GenAI Assistant Example includes the following steps:

  1. Users access a static website hosted in the us-east-1 AWS Region, secured with AWS WAF. The frontend of the application consists of a React application hosted on an S3 bucket (S3 React Frontend), distributed using Amazon CloudFront.
  2. Users can initiate REST API calls from the static website, which are routed through an API Gateway. API Gateway manages these calls and interacts with multiple components:
    1. The API interfaces with a DynamoDB table to store and retrieve template and history data.
    2. The API communicates with a Python-based Lambda function to process requests.
    3. The API generates pre-signed URLs for image uploads and downloads to and from an S3 bucket (S3 Images).
  3. API Gateway integrates with Amazon Cognito for user authentication and authorization, managing users and groups.
  4. Users upload images to the S3 bucket (S3 Images) using the pre-signed URLs provided by API Gateway (a minimal sketch of pre-signed URL generation follows these steps).
  5. When users request image downloads, a Lambda authorizer function written in Java is invoked, recording the request in the history database (DynamoDB table).
  6. For streaming data, users establish a WebSocket connection with an API Gateway WebSocket, which interacts with a Python Lambda function to handle the streaming data. The streaming data undergoes processing before being transmitted to an Amazon Bedrock streaming service.
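
To make step 4 concrete, here is a minimal sketch of how a Lambda handler could return a pre-signed upload URL for the images bucket. It assumes boto3; the bucket name, key layout, and event shape are illustrative and not taken from the repository.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Illustrative bucket and key names
    bucket = "employee-productivity-images"
    key = f"uploads/{event['queryStringParameters']['filename']}"

    # A pre-signed URL lets the browser upload directly to S3 without AWS credentials
    upload_url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=300,  # URL valid for 5 minutes
    )
    return {"statusCode": 200, "body": upload_url}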

Running generative AI workloads in Amazon Bedrock offers a robust and secure environment that seamlessly scales to help meet the demanding computational requirements of generative AI models. The layered security approach of Amazon Bedrock, built on the foundational principles of the comprehensive security services provided by AWS, provides a fortified environment for handling sensitive data and processing AI workloads with confidence. Its flexible architecture lets organizations use AWS elastic compute resources to scale dynamically with workload demands, providing efficient performance and cost control. Furthermore, the modular design of Amazon Bedrock empowers organizations to integrate their existing AI and machine learning (ML) pipelines, tools, and frameworks, fostering a seamless transition to a secure and scalable generative AI infrastructure within the AWS ecosystem.

In addition to the interactive features, the Employee Productivity GenAI Assistant Example provides a robust architectural pattern for building generative AI solutions on AWS. By using Amazon Bedrock and AWS serverless services such as Lambda, API Gateway, and DynamoDB, the Employee Productivity GenAI Assistant Example demonstrates a scalable and secure approach to deploying generative AI applications. You can use this architecture pattern as a foundation to build various generative AI solutions tailored to different use cases. Furthermore, the solution includes a reusable component-driven UI built on the React framework, enabling developers to quickly extend and customize the interface to fit their specific needs. The example also showcases the implementation of streaming support using WebSockets, allowing for real-time responses in both chat-based interactions and one-time requests, enhancing the user experience and responsiveness of the generative AI assistant.

Prerequisites

You should have the following prerequisites:

  • An AWS account
  • Permission to use Lambda, API Gateway, Amazon Bedrock, Amazon Cognito, CloudFront, AWS WAF, Amazon S3, and DynamoDB

Deploy the solution

To deploy and use the application, complete the following steps:

  1. Clone the GitHub repository into your AWS environment:
    git clone https://github.com/aws-samples/improve-employee-productivity-using-genai

  2. See the How to Deploy Locally section if you want to deploy from your computer.
  3. See How to Deploy via AWS CloudShell if you want to deploy from AWS CloudShell in your AWS account.
  4. After deployment is complete, see Post Deployment Steps to get started.
  5. See Demos to see examples of the solution’s capabilities and features.

Cost estimate for running the Employee Productivity GenAI Assistant Example

The cost of running the Employee Productivity GenAI Assistant Example will vary depending on the Amazon Bedrock model you choose and your usage patterns, as well as the Region you use. The primary cost drivers are the Amazon Bedrock model pricing and the AWS services used to host and run the application.

For this example, let’s assume a scenario with 50 users, each using this example code five times a day, with an average of 500 input tokens and 200 output tokens per use.

The total monthly token usage calculation is as follows:

  • Input tokens: 3.75 million
    • 500 tokens per request * 5 requests per day * 50 users * 30 days = 3.75 million tokens
  • Output tokens: 1.5 million
    • 200 tokens per request * 5 requests per day * 50 users * 30 days = 1.5 million tokens

The estimated monthly costs (us-east-1 Region) for different Anthropic’s Claude models on Amazon Bedrock would be the following:

  • Anthropic’s Claude 3 Haiku model:
    • Amazon Bedrock: $2.81
      • 3.75 million input tokens at $0.00025/thousand tokens = $0.9375
      • 1.5 million output tokens at $0.00125/thousand tokens = $1.875
    • Other AWS services: $16.51
    • Total: $19.32
  • Anthropic’s Claude 3 and 3.5 Sonnet model:
    • Amazon Bedrock: $33.75
      • 3.75 million input tokens at $0.003/thousand tokens = $11.25
      • 1.5 million output tokens at $0.015/thousand tokens = $22.50
    • Other AWS services: $16.51
    • Total: $50.26
  • Anthropic’s Claude 3 Opus model:
    • Amazon Bedrock: $168.75
      • 3.75 million input tokens at $0.015/thousand tokens = $56.25
      • 1.5 million output tokens at $0.075/thousand tokens = $112.50
    • Other AWS services: $16.51
    • Total: $185.26
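
If you want to adapt these estimates to your own usage, the arithmetic above can be reproduced in a few lines of Python. The per-1,000-token prices are the ones quoted in this example and may change over time.

# Monthly token volumes from the scenario above
input_tokens = 500 * 5 * 50 * 30   # 3.75 million
output_tokens = 200 * 5 * 50 * 30  # 1.5 million

# Price per thousand tokens (input, output), as quoted above
prices = {
    "Claude 3 Haiku": (0.00025, 0.00125),
    "Claude 3 Sonnet": (0.003, 0.015),
    "Claude 3 Opus": (0.015, 0.075),
}

other_aws_services = 16.51  # estimated monthly cost of the supporting AWS services

for model, (price_in, price_out) in prices.items():
    bedrock_cost = input_tokens / 1000 * price_in + output_tokens / 1000 * price_out
    print(f"{model}: Bedrock ${bedrock_cost:.2f}, total ${bedrock_cost + other_aws_services:.2f}")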

These estimates don’t consider the AWS Free Tier for eligible services, so your actual costs might be lower if you’re still within the Free Tier limits. Additionally, the pricing for AWS services might change over time, so the actual costs might vary from these estimates.

The beauty of this serverless architecture is that you can scale resources up or down based on demand, making sure that you only pay for the resources you consume. Some components, such as Lambda, Amazon S3, CloudFront, DynamoDB, and Amazon Cognito, might not incur additional costs if you’re still within the AWS Free Tier limits.

For a detailed breakdown of the cost estimate, including assumptions and calculations, refer to the Cost Estimator.

Clean up

When you’re done, delete any resources you no longer need to avoid ongoing costs.

To delete the stack, use the following command:

./deploy.sh --delete --region=<your-aws-region> --email=<your-email>

For example:

./deploy.sh --delete --region=us-east-1 --email=abc@example.com

For more information about how to delete the resources from your AWS account, see the How to Deploy Locally section in the GitHub repo.

Summary

The Employee Productivity GenAI Assistant Example is a cutting-edge sample code that uses generative AI to automate repetitive writing tasks, freeing up resources for more meaningful work. It uses Amazon Bedrock and generative AI models to create initial templates that can be customized. You can input both text and images, benefiting from the multimodal capabilities of AI models. Key features include a user-friendly playground, template creation and application, activity history tracking, interactive chat with templates, and support for multi-modal inputs. The solution is built on robust AWS serverless technologies such as Lambda, API Gateway, DynamoDB, and Amazon S3, maintaining scalability, security, and high availability.

Visit our GitHub repository and try it firsthand.

By using Amazon Bedrock and generative AI on AWS, organizations can accelerate innovation cycles, unlock new business opportunities, and deliver AI-powered solutions while maintaining high standards of security and operational efficiency.


About the Authors

Samuel Baruffi is a seasoned technology professional with over 17 years of experience in the information technology industry. Currently, he works at AWS as a Principal Solutions Architect, providing valuable support to global financial services organizations. His vast expertise in cloud-based solutions is validated by numerous industry certifications. Away from cloud architecture, Samuel enjoys soccer, tennis, and travel.

Somnath Chatterjee is an accomplished Senior Technical Account Manager at AWS, dedicated to guiding customers in crafting and implementing their cloud solutions on AWS. He collaborates strategically with customers to help them run cost-optimized and resilient workloads in the cloud. Beyond his primary role, Somnath holds specialization in the Compute technical field community. He is an SAP on AWS Specialty certified professional and EFS SME. With over 14 years of experience in the information technology industry, he excels in cloud architecture and helps customers achieve their desired outcomes on AWS.

Mohammed Nawaz Shaikh is a Technical Account Manager at AWS, dedicated to guiding customers in crafting and implementing their AWS strategies. Beyond his primary role, Nawaz serves as an AWS GameDay Regional Lead and is an active member of the AWS NextGen Developer Experience technical field community. With over 16 years of expertise in solution architecture and design, he is not only a passionate coder but also an innovator, holding three US patents.

Build a multimodal social media content generator using Amazon Bedrock

In today’s digital age, social media has revolutionized the way brands interact with their consumers, creating a need for dynamic and engaging content that resonates with their target audience. There’s growing competition for consumer attention in this space; content creators and influencers face constant challenges to produce new, engaging, and brand-consistent content. The challenges come from three key factors: the need for rapid content production, the desire for personalized content that is both captivating and visually appealing and reflects the unique interests of the consumer, and the necessity for content that is consistent with a brand’s identity, messaging, aesthetics, and tone.

Traditionally, the content creation process has been a time-consuming task involving multiple steps such as ideation, research, writing, editing, design, and review. This slow creation cycle does not fit the rapid pace of social media.

Generative AI offers new possibilities to address this challenge and can be used by content teams and influencers to enhance their creativity and engagement while maintaining brand consistency. More specifically, multimodal capabilities of large language models (LLMs) allow us to create the rich, engaging content spanning text, images, audio, and video formats that are omnipresent in advertising, marketing, and social media content. With recent advancements in vision LLMs, creators can use visual input, such as reference images, to start the content creation process. Image similarity search and text semantic search further enhance the process by quickly retrieving relevant content and context.

In this post, we walk you through a step-by-step process to create a social media content generator app using vision, language, and embedding models (Anthropic’s Claude 3, Amazon Titan Image Generator, and Amazon Titan Multimodal Embeddings) through Amazon Bedrock API and Amazon OpenSearch Serverless. Amazon Bedrock is a fully managed service that provides access to high-performing foundation models (FMs) from leading AI companies through a single API. OpenSearch Serverless is a fully managed service that makes it easier to store vectors and other data types in an index and allows you to achieve sub-second query latency when searching billions of vectors and measuring semantic similarity.

Here’s how the proposed process for content creation works:

  1. First, the user (content team or marketing team) uploads a product image with a simple background (such as a handbag). Then, they provide natural language descriptions of the scene and enhancements they wish to add to the image as a prompt (such as “Christmas holiday decorations”).
  2. Next, Amazon Titan Image Generator creates the enhanced image based on the provided scenario.
  3. Then, we generate rich and engaging text that describes the image while aligning with brand guidelines and tone using Claude 3.
  4. After the draft (text and image) is created, our solution performs multimodal similarity searches against historical posts to find similar posts and gain inspiration and recommendations to enhance the draft post.
  5. Finally, based on the generated recommendations, the post text is further refined and provided to the user on the webpage. The following diagram illustrates the end-to-end new content creation process.

Solution overview

In this solution, we start with data preparation, where the raw datasets can be stored in an Amazon Simple Storage Service (Amazon S3) bucket. We provide a Jupyter notebook to preprocess the raw data and use the Amazon Titan Multimodal Embeddings model to convert the image and text into embedding vectors. These vectors are then saved on OpenSearch Serverless as collections, as shown in the following figure.

Next is the content generation. The GUI webpage is hosted using a Streamlit application, where the user can provide an initial product image and a brief description of how they expect the enriched image to look. From the application, the user can also select the brand (which will link to a specific brand template later), choose the image style (such as photographic or cinematic), and select the tone for the post text (such as formal or casual).

After all the configurations are provided, the content creation process, shown in the following figure, is launched.

In stage 1, the solution retrieves the brand-specific template and guidelines from a CSV file. In a production environment, you could maintain the brand template table in Amazon DynamoDB for scalability, reliability, and maintenance. The user input is used to generate the enriched image with the Amazon Titan Image Generator. Together with all the other information, it’s fed into the Claude 3 model, which has vision capability, to generate the initial post text that closely aligns with the brand guidelines and the enriched image. At the end of this stage, the enriched image and initial post text are created and sent back to the GUI to display to users.

In stage 2, we combine the post text and image and use the Amazon Titan Multimodal Embeddings model to generate the embedding vector. Multimodal embedding models integrate information from different data types, such as text and images, into a unified representation. This enables searching for images using text descriptions, identifying similar images based on visual content, or combining both text and image inputs to refine search results. In this solution, the multimodal embedding vector is used to search and retrieve the top three similar historical posts from the OpenSearch vector store. The retrieved results are fed into the Anthropic’s Claude 3 model to generate a caption, provide insights on why these historical posts are engaging, and offer recommendations on how the user can improve their post.

In stage 3, based on the recommendations from stage 2, the solution automatically refines the post text and provides a final version to the user. The user has the flexibility to select the version they like and make changes before publishing. For the end-to-end content generation process, steps are orchestrated with the Streamlit application.

The whole process is shown in the following image:

Implementation steps

This solution has been tested in AWS Region us-east-1. However, it can also work in other Regions where the following services are available. Make sure you have the following set up before moving forward:

We use Amazon SageMaker Studio to generate historical post embeddings and save those embedding vectors to OpenSearch Serverless. Additionally, you will run the Streamlit app from the SageMaker Studio terminal to visualize and test the solution. Testing the Streamlit app in a SageMaker environment is intended for a temporary demo. For production, we recommend deploying the Streamlit app on Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS) services with proper security measures such as authentication and authorization.

We use the following models from Amazon Bedrock in the solution. Please see Model support by AWS Region and select the Region that supports all three models:

  • Amazon Titan Multimodal Embeddings Model
  • Amazon Titan Image Generator
  • Claude 3 Sonnet

Set up a JupyterLab space on SageMaker Studio

A JupyterLab space is a private or shared space within SageMaker Studio that manages the storage and compute resources needed to run the JupyterLab application.

To set up a JupyterLab space

  1. Sign in to your AWS account and open the AWS Management Console. Go to SageMaker Studio.
  2. Select your user profile and choose Open Studio.
  3. From Applications in the top left, choose JupyterLab.
  4. If you already have a JupyterLab space, choose Run. If you do not, choose Create JupyterLab Space to create one. Enter a name and choose Create Space.
  5. Change the instance to t3.large and choose Run Space.
  6. Within a minute, you should see that the JupyterLab space is ready. Choose Open JupyterLab.
  7. In the JupyterLab launcher window, choose Terminal.
  8. Run the following command on the terminal to download the sample code from GitHub:
    git clone https://github.com/aws-samples/Build-a-multimodal-social-media-content-generator-using-Amazon-Bedrock.git

Generate sample posts and compute multimodal embeddings

In the code repository, we provide some sample product images (bag, car, perfume, and candle) that were created using the Amazon Titan Image Generator model. Next, you can generate some synthetic social media posts using the synthetic-data-generation.ipynb notebook, following the steps below. The generated posts’ texts are saved in the metadata.jsonl file (if you prepared your own product images and post texts, you can skip this step). Then, compute multimodal embeddings for the pairs of images and generated texts. Finally, ingest the multimodal embeddings into a vector store on Amazon OpenSearch Serverless.

To generate sample posts

  1. In JupyterLab, choose File Browser and navigate to the folder social-media-generator/embedding-generation.
  2. Open the notebook synthetic-data-generation.ipynb.
  3. Choose the default Python 3 kernel and Data Science 3.0 image, then follow the instructions in the notebook.
  4. At this stage, you will have sample posts that are created and available in data_mapping.csv.
  5. Open the notebook multimodal_embedding_generation.ipynb. The notebook first creates the multimodal embeddings for the post-image pair. It then ingests the computed embeddings into a vector store on Amazon OpenSearch Serverless.
  6. At the end of the notebook, you should be able to perform a simple query to the collection as shown in the following example:
query_prompt = "christmas tree, holiday, bags"
similar_items = find_similar_items_from_query(
                    query_prompt = query_prompt, k=3, num_results=5, 
                    index_name=index_name, dataset = df, 
                    open_search_client = oss_client)

The preparation steps are complete. If you want to try out the solution directly, you can skip to Run the solution with Streamlit App to quickly test the solution in your SageMaker environment. However, if you want a more detailed understanding of each step’s code and explanations, continue reading.

Generate a social media post (image and text) using FMs

In this solution, we use FMs through Amazon Bedrock for content creation. We start by enhancing the input product image using the Amazon Titan Image Generator model, which adds a dynamically relevant background around the target product.

The get_titan_ai_request_body function creates a JSON request body for the Titan Image Generator model, using its Outpainting feature. It accepts four parameters: outpaint_prompt (for example, “Christmas tree, holiday decoration” or “Mother’s Day, flowers, warm lights”), negative_prompt (elements to exclude from the generated image), mask_prompt (specifies areas to retain, such as “bag” or “car”), and image_str (the input image encoded as a base64 string).

The generate_image function requires model_id and body (the request body from get_titan_ai_request_body). It invokes the model using bedrock.invoke_model and returns the response containing the base64-encoded generated image.

Finally, the code snippet calls get_titan_ai_request_body with the provided prompts and input image string, then passes the request body to generate_image, resulting in the enhanced image.

def get_titan_ai_request_body(outpaint_prompt, negative_prompt, mask_prompt, image_str=None):
  
    seed = random.randint(0, 2147483647)
    body = {
        "taskType": "OUTPAINTING",
        "outPaintingParams": {
            "text": outpaint_prompt,
            "negativeText": negative_prompt,
            "image": image_str,
            "maskPrompt": mask_prompt,
            "outPaintingMode": "PRECISE" # or DEFAULT
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "quality": "premium",
            "cfgScale": 8,
            "seed": seed,
        }
    }
    return json.dumps(body)

def generate_image(model_id, body):
    """
    Args:
    model_id (str): The model ID to use.
    body (str) : The request body to use.
    Returns:
    image_bytes (bytes): The image generated by the model.
    """
    logger.info("Generating image with model %s", model_id)
    
    accept = "application/json"
    content_type = "application/json"
    
    response = bedrock.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=content_type
    )
    response_body = json.loads(response.get("body").read())
    return response_body

body = get_titan_ai_request_body(outpaint_prompt, negative_prompt, mask_prompt, image_str=image_str)
response = generate_image(model_id=MODEL_IMAGE, body=body)
image_enhanced = base64_to_image(response["images"][0])

The following images showcase the enhanced versions generated based on input prompts like “Christmas tree, holiday decoration, warm lights,” a selected position (such as bottom-middle), and a brand (“Luxury Brand”). These settings influence the output images. If the generated image is unsatisfactory, you can repeat the process until you achieve the desired outcome.

Next, generate the post text, taking into consideration the user inputs, brand guidelines (provided in the brand_guideline.csv file, which you can replace with your own data), and the enhanced image generated from the previous step.

The generate_text_with_claude function is the higher-level function that handles the image and text input, prepares the necessary data, and calls generate_vision_answer to interact with the Amazon Bedrock model (Claude 3 models) and receive the desired response. The generate_vision_answer function performs the core interaction with the Amazon Bedrock model, processes the model’s response, and returns it to the caller. Together, they enable generating text responses based on combined image and text inputs.

In the following code snippet, an initial post prompt is constructed using formatting placeholders for various elements such as role, product name, target brand, tone, hashtag, copywriting, and brand messaging. These elements are provided in the brand_guideline.csv file to make sure that the generated text aligns with the brand preferences and guidelines. This initial prompt is then passed to the generate_text_with_claude function, along with the enhanced image to generate the final post text.

def generate_vision_answer(bedrock:boto3.client, messages:list, model_id:str, claude_config:dict,system_prompt:str):
    """
    Generates a vision answer using the specified model and configuration.
    """
    body={'messages': [messages],**claude_config, "system": system_prompt}
    bedrock = boto3.client(service_name='bedrock-runtime')
    
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))   
    response = json.loads(response['body'].read().decode('utf-8'))
    print("Claude vision answer OK")
    formatted_response = post_process_answer(response['content'][0]['text'])

    return formatted_response

def generate_text_with_claude(image, prompt):
    '''
    Generate text with Claude for post generation and historical posts analysis
    '''
    with BytesIO() as byte_io:
        image.save(byte_io, format="PNG")
        image_bytes = byte_io.getvalue()

    messages={"role": "user", "content": [
    {
            "type": "image",
            "source": {
            "type": "base64",
            "media_type": "image/jpeg",
            "data": base64.b64encode(image_bytes).decode(),
            }
    },
    {"type": "text", 
        "text": prompt}
    ]}

    claude_text = generate_vision_answer(bedrock, messages, MODEL_TEXT, CLAUDE_CONFIG, SYSTEM_PROMPT)   
    return claude_text

initial_post_prompt = PROMPT_TEXT.format(
                        role=role, product_name=product_input, target_brand=brand, 
                        tone=tone, hashtag = hashtag, copywriting= copywriting, 
                        brand_messageing = brand_messageing)
        
post_text = generate_text_with_claude(
                    image = image_enhanced, 
                    prompt=initial_post_prompt)

The following example shows the generated post text. It provides a detailed description of the product, aligns well with the brand guidelines, and incorporates elements from the image (such as the Christmas tree). Additionally, we instructed the model to include hashtags and emojis where appropriate, and the results demonstrate that it followed the prompt instructions effectively.

Post text:

Elevate your style with Luxury Brand’s latest masterpiece. Crafted with timeless elegance and superior quality, this exquisite bag embodies unique craftsmanship. Indulge in the epitome of sophistication and let it be your constant companion for life’s grandest moments. 🎄✨ #LuxuryBrand #TimelessElegance #ExclusiveCollection

Retrieve and analyze the top three relevant posts

The next step involves using the generated image and text to search for the top three similar historical posts from a vector database. We use the Amazon Titan Multimodal Embeddings model to create embedding vectors, which are stored in Amazon OpenSearch Serverless. The relevant historical posts, which might have many likes, are displayed on the application webpage to give users an idea of what successful social media posts look like. Additionally, we analyze these retrieved posts and provide actionable improvement recommendations for the user. The following code snippet shows the implementation of this step.

The code defines two functions: find_similar_items and process_images. find_similar_items performs semantic search using the k-nearest neighbors (kNN) algorithm on the input image prompt. It computes a multimodal embedding for the image and query prompt, constructs an OpenSearch kNN query, runs the search, and retrieves the top matching images and post texts. process_images analyzes a list of similar images in parallel using multiprocessing. It generates analysis texts for the images by calling generate_text_with_claude with an analysis prompt, running the calls in parallel, and collecting the results.

In the snippet, find_similar_items is called to retrieve the top three similar images and post texts based on the input image and a combined query prompt. process_images is then called to generate analysis texts for the first three similar images in parallel, displaying the results simultaneously.

def find_similar_items(image_bytes: str, query_prompt: str, k: int, num_results: int, index_name: str, dataset, open_search_client) -> tuple:
    """
    Main semantic search capability using knn on input image prompt.
    Args:
        k: number of top-k similar vectors to retrieve from OpenSearch index
        num_results: number of the top-k similar vectors to retrieve
        index_name: index name in OpenSearch
    """
    query_emb = get_titan_multimodal_embedding(image_bytes=image_bytes, description = query_prompt, dimension=1024)["embedding"]

    body = {
        "size": num_results,
        "_source": {
            "exclude": ["image_vector"],
        },
        "query": {
            "knn": {
                "image_vector": {
                    "vector": query_emb,
                    "k": k,
                }
            }
        },
    }     
        
    res = open_search_client.search(index=index_name, body=body)
    images = []
    texts = []
    
    for hit in res["hits"]["hits"]:
        id_ = hit["_id"]
        file_name = hit["_source"]["file_name"]
        post_text = hit["_source"]["post_text"]
        image = get_image(file_name = file_name, dataset = dataset)

        image.name_and_score = f'{hit["_score"]}:{hit["_source"]["file_name"]}'
        images.append(image)

        texts.append(f"Post Text: {post_text}")
                    
    return images, texts

def process_images(_similar_items, PROMPT_ANALYSIS):
    pool = multiprocessing.Pool(processes=3)  # Create a pool of 3 worker processes
    args = [(image, PROMPT_ANALYSIS) for image in _similar_items[:3]]
    results = pool.starmap(generate_text_with_claude, args)  # Execute the function calls in parallel
    # Unpack the results
    analysis_text_0, analysis_text_1, analysis_text_2 = results
    # Close the pool and wait for the tasks to finish
    pool.close()
    pool.join()
    return analysis_text_0, analysis_text_1, analysis_text_2

similar_images, post_texts = find_similar_items(
                                    image_bytes=image_enhanced_bytes, query_prompt=text_input + " " + post_text,
                                    k=5, num_results=3, index_name=index_name, dataset=mapping_table,
                                    open_search_client=oss_client)

analysis_text_0, analysis_text_1, analysis_text_2 = process_images(similar_images, PROMPT_ANALYSIS)
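
The helper get_titan_multimodal_embedding called in find_similar_items isn’t shown in the snippet above. A plausible implementation, sketched here from the Amazon Titan Multimodal Embeddings request format rather than copied from the repository, could look like the following.

import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def get_titan_multimodal_embedding(image_bytes=None, description=None, dimension=1024):
    """Return a multimodal embedding for an image, a text description, or both."""
    request = {"embeddingConfig": {"outputEmbeddingLength": dimension}}
    if image_bytes is not None:
        # Skip the encoding step if the image is already a base64 string
        request["inputImage"] = base64.b64encode(image_bytes).decode("utf-8")
    if description is not None:
        request["inputText"] = description

    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-image-v1",
        body=json.dumps(request),
        accept="application/json",
        contentType="application/json",
    )
    # The parsed response contains an "embedding" key with the vector
    return json.loads(response["body"].read())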

An example of historical post retrieval and analysis is shown in the following screenshot. Post images are listed on the left. On the right, the full text content of each post is retrieved and displayed. We then use an LLM model to generate a comprehensive scene description for the post image, which can serve as a prompt to inspire image generation. Next, the LLM model generates automatic recommendations for improvement. In this solution, we use the Claude 3 Sonnet model for text generation.

As the final step, the solution incorporates the recommendations and refines the post text to make it more appealing and likely to attract more attention from social media users.

Run the solution with Streamlit App

You can download the solution from this Git repository. Use the following steps to run the Streamlit application and quickly test out the solution in your SageMaker Studio environment.

  1. In SageMaker Studio, choose SageMaker Classic, then start an instance under your user profile.
  2. After you have the JupyterLab environment running, clone the code repository and navigate to the streamlit-app folder in a terminal:
    cd streamlit-app/
    sh setup.sh 
    sh run.sh 
    

  3. You will see a webpage link generated in the terminal, which will look similar to the following:

https://[USER-PROFILE-ID].studio.[REGION].sagemaker.aws/jupyter/default/proxy/8501/

  4. To check the status of the Streamlit application, run sh status.sh in the terminal.
  5. To shut down the application, run sh cleanup.sh.

With the Streamlit app downloaded, you can begin by providing initial prompts and selecting the products you want to retain in the image. You have the option to upload an image from your local machine, plug in your camera to take an initial product picture on the fly, or quickly test the solution by selecting a pre-uploaded image example. You can then optionally adjust the product’s location in the image by setting its position. Next, select the brand for the product. In the demo, we use the luxury brand and the fast fashion brand, each with its own preferences and guidelines. Finally, choose the image style. Choose Submit to start the process.

The application will automatically handle post-image and text generation, retrieve similar posts for analysis, and refine the final post. This end-to-end process can take approximately 30 seconds. If you aren’t satisfied with the result, you can repeat the process a few times. An end-to-end demo is shown below.

Inspiration from historical posts using image similarity search

If you find yourself lacking ideas for initial prompts to create the enhanced image, consider using a reverse search approach. During the retrieve and analyze posts step mentioned earlier, scene descriptions are also generated, which can serve as inspiration. You can modify these descriptions as needed and use them to generate new images and accompanying text. This method effectively uses existing content to stimulate creativity and enhance the application’s output.

In the preceding example, the top three similar images to our generated images show perfume pictures posted to social media by users. This insight helps brands understand their target audience and the environments in which their products are used. By using this information, brands can create dynamic and engaging content that resonates with their users. For instance, in the example provided, “a hand holding a glass perfume bottle in the foreground, with a scenic mountain landscape visible in the background,” is unique and visually more appealing than a dull picture of “a perfume bottle standing on a branch in a forest.” This illustrates how capturing the right scene and context can significantly enhance the attractiveness and impact of social media content.

Clean up

When you finish experimenting with this solution, use the following steps to clean up the AWS resources to avoid unnecessary costs:

  1. Navigate to the Amazon S3 console and delete the S3 bucket and data created for this solution.
  2. Navigate to the Amazon OpenSearch Service console, choose Serverless, and then select Collection. Delete the collection that was created for storing the historical post embedding vectors.
  3. Navigate to the Amazon SageMaker console. Choose Admin configurations and select Domains. Select your user profile and delete the running application from Spaces and Apps.

Conclusion

In this blog post, we introduced a multimodal social media content generator solution that uses FMs from Amazon Bedrock, such as the Amazon Titan Image Generator, Claude 3, and Amazon Titan Multimodal Embeddings. The solution streamlines the content creation process, enabling brands and influencers to produce engaging and brand-consistent content rapidly. You can try out the solution using this code sample.

The solution involves enhancing product images with relevant backgrounds using the Amazon Titan Image Generator, generating brand-aligned text descriptions through Claude 3, and retrieving similar historical posts using Amazon Titan Multimodal Embeddings. It provides actionable recommendations to refine content for better audience resonance. This multimodal AI approach addresses challenges in rapid content production, personalization, and brand consistency, empowering creators to boost creativity and engagement while maintaining brand identity.

We encourage brands, influencers, and content teams to explore this solution and use the capabilities of FMs to streamline their content creation processes. Additionally, we invite developers and researchers to build upon this solution, experiment with different models and techniques, and contribute to the advancement of multimodal AI in the realm of social media content generation.

See this announcement blog post for information about the Amazon Titan Image Generator and Amazon Titan Multimodal Embeddings model. For more information, see Amazon Bedrock and Amazon Titan in Amazon Bedrock.


About the Authors

Ying Hou, PhD, is a Machine Learning Prototyping Architect at AWS, specialising in building GenAI applications with customers, including RAG and agent solutions. Her expertise spans GenAI, ASR, Computer Vision, NLP, and time series prediction models. Outside of work, she enjoys spending quality time with her family, getting lost in novels, and hiking in the UK’s national parks.

Bishesh Adhikari, is a Senior ML Prototyping Architect at AWS with over a decade of experience in software engineering and AI/ML. Specializing in GenAI, LLMs, NLP, CV, and GeoSpatial ML, he collaborates with AWS customers to build solutions for challenging problems through co-development. His expertise accelerates customers’ journey from concept to production, tackling complex use cases across various industries. In his free time, he enjoys hiking, traveling, and spending time with family and friends.

Elevate RAG for numerical analysis using Amazon Bedrock Knowledge Bases

In the realm of generative artificial intelligence (AI), Retrieval Augmented Generation (RAG) has emerged as a powerful technique, enabling foundation models (FMs) to use external knowledge sources for enhanced text generation.

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Amazon Bedrock Knowledge Bases is a fully managed capability that helps you implement the entire RAG workflow—from ingestion to retrieval and prompt augmentation—without having to build custom integrations to data sources and manage data flows. However, RAG has had its share of challenges, especially when it comes to using it for numerical analysis. This is the case when you have information embedded in complex nested tables. The latest innovations in Amazon Bedrock Knowledge Bases provide a resolution to this issue.

In this post, we explore how Amazon Bedrock Knowledge Bases address the use case of numerical analysis across a number of documents.

The power of RAG and its limitations

With RAG, an information retrieval component is introduced that utilizes the user input to first pull relevant information from a data source. The user query and the relevant information are both given to the large language model (LLM). The LLM uses the new knowledge and its training data to create better responses.

Although this approach holds a lot of promise for textual documents, the presence of non-textual elements, such as tables, poses a significant challenge. One issue is that the table structure by itself can be difficult to interpret when directly queried against documents in PDFs or Word. This can be addressed by transforming the data into a format such as text, markdown, or HTML.

Another issue relates to search, retrieval, and chunking of documents that contain tables. The first step in RAG is to chunk a document so you can transform that chunk of data into a vector for a meaningful representation of text. However, when you apply this method to a table, even if converted into a text format, there is a risk that the vector representation doesn’t capture all the relationships in the table. As a result, when you try to retrieve information, a lot of information is missed. Because this information isn’t retrieved, the LLM doesn’t provide accurate answers to your questions.

Amazon Bedrock Knowledge Bases provide three capabilities to resolve this issue:

  • Hybrid search – A hybrid search retrieves information based on semantic meaning through vector representations as well as through keywords. As a result, information on particular key fields that was being missed earlier using purely semantic search is retrieved, and the LLM is able to accurately provide the correct answers. For more information on Amazon Bedrock’s hybrid search capability, see Amazon Bedrock Knowledge Bases now supports hybrid search.
  • Chunking data in fixed sizes – You can specify a fixed size for the data that is eventually transformed into a vector. Small sizes imply smaller amounts of data and vice versa.
  • Retrieving a large number of chunks from your search results – These are the number of chunks retrieved as the result of your search. The greater the number of results retrieved, the more context provided to the LLM for an answer.

Using a combination of these features can enhance numerical analysis of information across multiple documents that contain data in tables. In the next section, we demonstrate this approach using a set of earnings documents from Amazon.

Solution overview

The following diagram illustrates the high-level architecture of our solution for analyzing numerical documents.

The user call flow consists of the following steps:

  1. The process begins with the user uploading one or more documents. This action initiates the workflow.
  2. The Streamlit application, which is designed to facilitate user interaction, takes these uploaded documents and stores them in an Amazon Simple Storage Service (Amazon S3) bucket.
  3. After the documents are successfully copied to the S3 bucket, the event automatically invokes an AWS Lambda function.
  4. The Lambda function invokes the Amazon Bedrock knowledge base API to extract embeddings—essential data representations—from the uploaded documents. These embeddings are structured information that capture the core features and meanings of the documents.
  5. With the documents processed and stored, the GUI of the application becomes interactive. Users can now engage with the application by asking questions in natural language through the user-friendly interface.
  6. When a user submits a question, the application converts this query into query embeddings. These embeddings encapsulate the essence of the user’s question, which helps with retrieving the relevant context from the knowledge base.
  7. The application can use the Retrieve API to query the knowledge base, which returns information retrieved directly from the knowledge base. Alternatively, the RetrieveAndGenerate API uses the retrieved results to augment the foundation model (FM) prompt and returns the response (see the sketch following these steps).
  8. Using a hybrid search method that combines keyword-based and semantic-based techniques, the application searches its knowledge base for relevant information related to the user’s query. This search aims to find contextual answers that match both the explicit terms and the intended meaning of the question.
  9. When relevant context is identified, the application forwards this information—both the user’s query and the retrieved context—to the LLM module.
  10. The LLM module processes the provided query and context to generate a response.
  11. The application delivers the generated response back to the user through its GUI. This completes the loop of interaction, where the user’s initial query results in a comprehensive and contextually relevant response derived from the uploaded documents and the application’s knowledge base.
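
As a concrete illustration of steps 7 and 8, the following is a minimal sketch of a Retrieve call that uses hybrid search and requests a large number of chunks. It assumes boto3 and the bedrock-agent-runtime client; the knowledge base ID is a placeholder.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="YOUR_KB_ID",  # placeholder: the ID noted when you created the knowledge base
    retrievalQuery={"text": "What is the change in total operating expenses from 2020 to 2023?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 100,         # retrieve many chunks for more context
            "overrideSearchType": "HYBRID"  # combine keyword and semantic search
        }
    },
)

# Each result carries the retrieved chunk text plus its source location
for result in response["retrievalResults"]:
    print(result["content"]["text"][:200])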

In the following sections, we walk through the steps to create an S3 bucket and knowledge base, deploy the Streamlit application with AWS CloudFormation, and test the solution.

Prerequisites

You should have the following prerequisites:

  • An AWS account with necessary permissions
  • Access to launch AWS CloudFormation
  • Access to the Anthropic Claude 3 Sonnet and Amazon Titan Text Embeddings v2 models on Amazon Bedrock
  • The CloudFormation template downloaded to your local computer

Create an S3 bucket

Complete the following steps to create your S3 bucket:

  1. On the Amazon S3 console, choose Buckets in the navigation pane.
  2. Choose Create bucket.
  3. Enter a unique bucket name that follows the S3 bucket naming rules.
  4. Choose the AWS Region where you want to create the bucket. It is recommended to choose a Region that is geographically close to you.
  5. Leave the other settings at their default values and choose Create bucket.

Create a knowledge base

Complete the following steps to create a knowledge base with default settings:

  1. On the Amazon Bedrock console, choose Knowledge bases under Builder tools in the navigation pane.
  2. Choose Create knowledge base.
  3. In the Provide knowledge base details section, provide the following information:
  4. In the Choose data source section, select the radio button for Amazon S3 and choose Next.
  5. In the Configure data source section, provide the following information:
    • For S3 URI, enter the S3 path for the bucket you created.
    • For chunking and parsing configurations, select the radio button for Custom.
    • For Chunking strategy, choose Fixed-size chunking.
    • For Max tokens, enter 250.
    • For Overlap percentage between chunks, enter 30.
    • Leave everything as default and choose Next.

  6. In the Select embeddings model and configure vector store section, provide the following information:
    • For Embeddings model, choose Titan Text Embeddings v2.
    • Under Vector database, select Quick create a new vector store.
    • Leave everything else as default and choose Next.

  7. Review the knowledge base settings and choose Create knowledge base.

  8. Amazon Bedrock will now provision the necessary resources and set up the knowledge base for you. This process may take a few minutes to complete. Note the knowledge base ID.

  9. Choose the data source name and note the data source ID.
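
If you prefer to script the data source configuration rather than click through the console, a boto3 sketch along these lines applies the same fixed-size chunking settings. The knowledge base ID, data source name, and bucket ARN are placeholders, and the console steps above remain the documented path.

import boto3

bedrock_agent = boto3.client("bedrock-agent")

response = bedrock_agent.create_data_source(
    knowledgeBaseId="YOUR_KB_ID",   # placeholder
    name="earnings-documents",      # placeholder
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::your-bucket-name"},  # placeholder
    },
    vectorIngestionConfiguration={
        "chunkingConfiguration": {
            "chunkingStrategy": "FIXED_SIZE",
            "fixedSizeChunkingConfiguration": {
                "maxTokens": 250,        # matches the console setting above
                "overlapPercentage": 30,
            },
        }
    },
)
print(response["dataSource"]["dataSourceId"])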

Create the Streamlit application

After the knowledge base is set up using the preceding steps, complete the following steps to create the Streamlit application using the CloudFormation template:

  1. On the AWS CloudFormation console, choose Stacks in the navigation pane.
  2. Choose Create stack.
  3. Select With new resources (standard).
  4. For the template source, choose Upload a template file.
  5. Choose Choose file and upload the template you downloaded earlier.
  6. Enter a name for your stack.
  7. Configure the following parameters:
    • KnowledgeBase Configuration
      1. For KnowledgeBasedID, enter the knowledge base ID that you saved earlier.
      2. For DatasourceID, enter the data source ID that you saved earlier.
    • S3Bucket Configuration
      1. For RAGDocumentInput, enter the name of the bucket you created.
    • S3Bucket Configuration
      1. For SubnetId, choose your public subnet.
      2. For VpcId, choose the VPC ID in which you want to deploy the Streamlit application.
      3. For YourPublicIP, enter the public IP address from which you will access the Streamlit application.
    • S3Bucket Configuration
      1. For InstanceType and LatestLinuxAMI, you can use the default values.
  8. Review the stack details and select the checkbox in the Capabilities section:
    • I acknowledge that AWS CloudFormation might create IAM resources
  9. Choose Create stack to initiate the stack creation process. This will take a few minutes to complete.

When the stack is complete, you can refer to the stack’s Outputs tab for the Streamlit application URL.
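
If you prefer to create the stack programmatically, the following is a minimal boto3 sketch that passes the same parameters described above; the stack name, template file name, and parameter values are placeholders for this illustration, and InstanceType and LatestLinuxAMI keep their template defaults:

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder values -- replace with the IDs and names you noted earlier
parameters = [
    {"ParameterKey": "KnowledgeBasedID", "ParameterValue": "<knowledge-base-id>"},
    {"ParameterKey": "DatasourceID", "ParameterValue": "<data-source-id>"},
    {"ParameterKey": "RAGDocumentInput", "ParameterValue": "<your-bucket-name>"},
    {"ParameterKey": "SubnetId", "ParameterValue": "<public-subnet-id>"},
    {"ParameterKey": "VpcId", "ParameterValue": "<vpc-id>"},
    {"ParameterKey": "YourPublicIP", "ParameterValue": "<your-public-ip>"},
]

# Read the CloudFormation template you downloaded earlier
with open("streamlit-app-template.yaml") as f:
    template_body = f.read()

cloudformation.create_stack(
    StackName="rag-streamlit-app",
    TemplateBody=template_body,
    Parameters=parameters,
    Capabilities=["CAPABILITY_IAM"],  # same acknowledgement as the console checkbox
)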

Now that we have deployed the Streamlit application, let’s see how users can interact with it and ask questions.

Test the solution

We can divide the UI experience into two phases: document ingestion and document querying.

The document ingestion workflow consists of the following steps:

  1. Users use the Streamlit application to upload documents. For testing, we can use Amazon earnings for the last 16 quarters. The application then uploads the documents to an S3 bucket.
  2. The upload event invokes a Lambda function. This function starts the ingestion job to sync the data to the knowledge base. Using the specified chunking strategy, the knowledge base converts the documents in the S3 bucket to vector embeddings, which are stored in the default Amazon OpenSearch Serverless vector database. This process takes about 15 minutes to complete. You can check the sync history, which contains details about when the data source was synced. Wait for the sync to complete before asking questions in the Streamlit application. A minimal sketch of the underlying sync API call follows this list.
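
The sync that the Lambda function starts corresponds to the Amazon Bedrock StartIngestionJob API. The following is a minimal sketch of that call with placeholder IDs; the exact Lambda implementation in the solution may differ:

import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Placeholder values -- use the knowledge base and data source IDs noted earlier
response = bedrock_agent.start_ingestion_job(
    knowledgeBaseId="<knowledge-base-id>",
    dataSourceId="<data-source-id>",
)

# Poll get_ingestion_job with the returned job ID until the status is COMPLETE
print(response["ingestionJob"]["status"])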

The document querying workflow consists of the following steps:

  1. Users interact with a Streamlit application running on Amazon Elastic Compute Cloud (Amazon EC2), where they enter questions (queries) about the documents and receive responses.
  2. The application passes the queries to the knowledge base.
  3. The knowledge base returns the most relevant text. In this case, because we’re using hybrid search, it returns keyword-based as well as semantic search-based results. In the hybrid search, we maximize the number of chunks returned (to 100). A minimal sketch of this retrieval call follows this list.
  4. The application invokes the specified LLM in Amazon Bedrock and passes the retrieved results as context, along with the original query, to the LLM.
  5. The LLM model processes the prompts and context, and returns responses.
  6. The Streamlit application displays the results to the user.
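
Steps 2–4 map to a knowledge base retrieval followed by a model invocation. The following is a minimal sketch of the retrieval call with hybrid search and the maximum number of results, using a placeholder knowledge base ID and an example query; the Streamlit application’s actual implementation may differ:

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

# Placeholder knowledge base ID and example query
response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="<knowledge-base-id>",
    retrievalQuery={"text": "What are the international operating expenses in 2022?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 100,         # maximize the number of chunks returned
            "overrideSearchType": "HYBRID"  # keyword-based plus semantic search
        }
    },
)

# These chunks are passed to the LLM as context along with the original query
context_chunks = [result["content"]["text"] for result in response["retrievalResults"]]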

To test this setup, we deployed this solution in an AWS account and analyzed Amazon earnings for the last 16 quarters. The results from a semantic search with default chunking vs. a hybrid search with fixed size chunking and maximum results are displayed in the following table.

Question | Semantic Search with Default Chunking | Hybrid Search with Fixed Size Chunking
What is the change in total operating expenses for technology and infrastructure from 2020 to 2023? | Based on the information provided in the search results, I could not find the exact change in total operating expenses for technology and infrastructure from 2020 to 2023. The search results provide some financial data, but do not break down the operating expenses into specific categories like technology and infrastructure. Without that level of detail, I cannot determine the exact change in those expenses over that time period. | Based on the financial information provided, Amazon’s operating expenses for technology and infrastructure increased from $42.7 billion in 2020 to $85.6 billion in 2023, an increase of $42.9 billion.
What are the international operating expenses in 2020, 2021 and 2022? | Based on the search results, I could not find any information about Amazon’s international operating expenses for the years 2020, 2021, and 2022. The financial information provided in the search results focuses on Amazon’s overall operating expenses, cash flows, and other financial metrics, but does not break down the expenses by international operations. | The international operating expenses for Amazon were: 2020: $103,695 million; 2021: $128,711 million; 2022: $125,753 million.

As seen in the results, the hybrid search option provides correct answers to all the questions.

Clean up

To prevent incurring costs after deploying and testing this solution, delete the S3 bucket, knowledge base, and CloudFormation stack.
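
The following is a minimal boto3 sketch of that cleanup, assuming the placeholder names and IDs are replaced with your own; you can also perform these deletions from the respective consoles:

import boto3

# Placeholder values -- replace with your own identifiers
bucket_name = "<your-bucket-name>"
knowledge_base_id = "<knowledge-base-id>"
stack_name = "<your-stack-name>"

# Empty and delete the S3 bucket
bucket = boto3.resource("s3").Bucket(bucket_name)
bucket.objects.all().delete()
bucket.delete()

# Delete the knowledge base; the quick-created vector store may need to be removed separately
boto3.client("bedrock-agent").delete_knowledge_base(knowledgeBaseId=knowledge_base_id)

# Delete the CloudFormation stack that hosts the Streamlit application
boto3.client("cloudformation").delete_stack(StackName=stack_name)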

Conclusion

In this post, we discussed how Amazon Bedrock Knowledge Bases provides a powerful solution that enables numerical analysis on documents. You can deploy this solution in an AWS account and use it to analyze different types of documents. As we continue to push the boundaries of generative AI, solutions like this will play a pivotal role in bridging the gap between unstructured data and actionable insights, enabling organizations to unlock the full potential of their data assets.

To further explore the advanced RAG capabilities of Amazon Bedrock Knowledge Bases, refer to the following resources:


About the Authors

Sanjeev Pulapaka is a Principal Solutions architect and the Single Threaded Leader for AI/ML in the US federal civilian team at AWS. He advises customers on AI/ML-related solutions that advance their mission. Sanjeev has extensive experience in leading, architecting, and implementing high-impact technology solutions that address diverse business needs in multiple sectors, including commercial, federal, and state and local governments. He has an undergraduate degree in engineering from the Indian Institute of Technology and an MBA from the University of Notre Dame.

Muhammad Qazafi is a Solutions Architect based in the US. He assists customers in designing, developing, and implementing secure, scalable, and innovative solutions on AWS. His objective is to help customers achieve measurable business outcomes through the effective utilization of AWS services. With over 15 years of experience, Muhammad brings a wealth of knowledge and expertise across a diverse range of industries. This extensive experience enables him to understand the unique challenges faced by different businesses and help customers create solutions on AWS.

Venkata Kampana is a Senior Solutions architect in the AWS Health and Human Services team and is based in Sacramento, California. In this role, he helps public sector customers achieve their mission objectives with well-architected solutions on AWS.

Read More

Llama 3.2 models from Meta are now available in Amazon SageMaker JumpStart

Llama 3.2 models from Meta are now available in Amazon SageMaker JumpStart

Today, we are excited to announce the availability of Llama 3.2 models in Amazon SageMaker JumpStart. Llama 3.2 offers multi-modal vision and lightweight models representing Meta’s latest advancement in large language models (LLMs), providing enhanced capabilities and broader applicability across various use cases. With a focus on responsible innovation and system-level safety, these new models demonstrate state-of-the-art performance on a wide range of industry benchmarks and introduce features that help you build a new generation of AI experiences. SageMaker JumpStart is a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML.

In this post, we show how you can discover and deploy the Llama 3.2 11B Vision model using SageMaker JumpStart. We also share the supported instance types and context lengths for all the Llama 3.2 models available in SageMaker JumpStart. Although not highlighted in this post, you can also use the lightweight models and fine-tune them using SageMaker JumpStart.

Llama 3.2 models are available in SageMaker JumpStart initially in the US East (Ohio) AWS Region. Please note that Meta has restrictions on your usage of the multi-modal models if you are located in the European Union. See Meta’s community license agreement for more details.

Llama 3.2 overview

Llama 3.2 represents Meta’s latest advancement in LLMs. Llama 3.2 models are offered in various sizes, from lightweight text-only models to small and medium-sized multi-modal models. The larger Llama 3.2 models come in two parameter sizes—11B and 90B—with 128,000 context length, and are capable of sophisticated reasoning tasks, including multi-modal support for high-resolution images. The lightweight text-only models come in two parameter sizes—1B and 3B—with 128,000 context length, and are suitable for edge devices. Additionally, there is a new safeguard model, Llama Guard 3 11B Vision, which is designed to support responsible innovation and system-level safety.

Llama 3.2 is the first Llama model to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. With a focus on responsible innovation and system-level safety, Llama 3.2 models help you build and deploy cutting-edge generative AI models to ignite new innovations like image reasoning and are also more accessible for on-edge applications. The new models are also designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications.

SageMaker JumpStart overview

SageMaker JumpStart offers access to a broad selection of publicly available foundation models (FMs). These pre-trained models serve as powerful starting points that can be deeply customized to address specific use cases. You can now use state-of-the-art model architectures, such as language models, computer vision models, and more, without having to build them from scratch.

With SageMaker JumpStart, you can deploy models in a secure environment. The models can be provisioned on dedicated SageMaker Inference instances, including AWS Trainium and AWS Inferentia powered instances, and are isolated within your virtual private cloud (VPC). This enforces data security and compliance, because the models operate under your own VPC controls, rather than in a shared public environment. After deploying an FM, you can further customize and fine-tune it using the extensive capabilities of Amazon SageMaker, including SageMaker Inference for deploying models and container logs for improved observability. With SageMaker, you can streamline the entire model deployment process.

Prerequisites

To try out the Llama 3.2 models in SageMaker JumpStart, you need the following prerequisites:

Discover Llama 3.2 models in SageMaker JumpStart

SageMaker JumpStart provides FMs through two primary interfaces: SageMaker Studio and the SageMaker Python SDK. This provides multiple options to discover and use hundreds of models for your specific use case.

SageMaker Studio is a comprehensive IDE that offers a unified, web-based interface for performing all aspects of the ML development lifecycle. From preparing data to building, training, and deploying models, SageMaker Studio provides purpose-built tools to streamline the entire process. In SageMaker Studio, you can access SageMaker JumpStart to discover and explore the extensive catalog of FMs available for deployment to inference capabilities on SageMaker Inference.

In SageMaker Studio, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane or by choosing JumpStart from the Home page.

Alternatively, you can use the SageMaker Python SDK to programmatically access and use SageMaker JumpStart models. This approach allows for greater flexibility and integration with existing AI/ML workflows and pipelines. By providing multiple access points, SageMaker JumpStart helps you seamlessly incorporate pre-trained models into your AI/ML development efforts, regardless of your preferred interface or workflow.
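
As a quick illustration of the programmatic path, the following sketch lists the publicly available JumpStart model IDs and keeps the Llama 3.2 entries with a simple client-side substring filter (the substring is an assumption for illustration, based on the model IDs shown later in this post):

from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# Retrieve all public JumpStart model IDs, then keep the Llama 3.2 entries
all_model_ids = list_jumpstart_models()
llama_32_model_ids = [m for m in all_model_ids if "llama-3-2" in m]
print(llama_32_model_ids)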

Deploy Llama 3.2 multi-modality models for inference using SageMaker JumpStart

On the SageMaker JumpStart landing page, you can discover all public pre-trained models offered by SageMaker. You can choose the Meta model provider tab to discover all the Meta models available in SageMaker.

If you’re using SageMaker Classic Studio and don’t see the Llama 3.2 models, update your SageMaker Studio version by shutting down and restarting. For more information about version updates, refer to Shut down and Update Studio Classic Apps.

You can choose the model card to view details about the model such as license, data used to train, and how to use. You can also find two buttons, Deploy and Open Notebook, which help you use the model.

When you choose either button, a pop-up window will show the End-User License Agreement (EULA) and acceptable use policy for you to accept.

Upon acceptance, you can proceed to the next step to use the model.

Deploy Llama 3.2 11B Vision model for inference using the Python SDK

When you choose Deploy and accept the terms, model deployment will start. Alternatively, you can deploy through the example notebook by choosing Open Notebook. The notebook provides end-to-end guidance on how to deploy the model for inference and clean up resources.

To deploy using a notebook, you start by selecting an appropriate model, specified by the model_id. You can deploy any of the selected models on SageMaker.

You can deploy a Llama 3.2 11B Vision model using SageMaker JumpStart with the following SageMaker Python SDK code:

from sagemaker.jumpstart.model import JumpStartModel

# Set to True to accept the model's end-user license agreement (EULA)
accept_eula = True

model = JumpStartModel(model_id="meta-vlm-llama-3-2-11b-vision")
predictor = model.deploy(accept_eula=accept_eula)

This deploys the model on SageMaker with default configurations, including default instance type and default VPC configurations. You can change these configurations by specifying non-default values in JumpStartModel. To successfully deploy the model, you must manually set accept_eula=True as a deploy method argument. After it’s deployed, you can run inference against the deployed endpoint through the SageMaker predictor:

payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "How are you doing today"},
        {"role": "assistant", "content": "Good, what can i help you with today?"},
        {"role": "user", "content": "Give me 5 steps to become better at tennis?"}
    ],
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 512,
    "logprobs": False
}
response = predictor.predict(payload)
response_message = response['choices'][0]['message']['content']

Recommended instances and benchmark

The following table lists all the Llama 3.2 models available in SageMaker JumpStart along with the model_id, default instance types, and the maximum number of total tokens (sum of number of input tokens and number of generated tokens) supported for each of these models. For increased context length, you can modify the default instance type in the SageMaker JumpStart UI.

Model Name | Model ID | Default instance type | Supported instance types
Llama-3.2-1B | meta-textgeneration-llama-3-2-1b, meta-textgenerationneuron-llama-3-2-1b | ml.g6.xlarge (125K context length), ml.trn1.2xlarge (125K context length) | All g6/g5/p4/p5 instances; ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge
Llama-3.2-1B-Instruct | meta-textgeneration-llama-3-2-1b-instruct, meta-textgenerationneuron-llama-3-2-1b-instruct | ml.g6.xlarge (125K context length), ml.trn1.2xlarge (125K context length) | All g6/g5/p4/p5 instances; ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge
Llama-3.2-3B | meta-textgeneration-llama-3-2-3b, meta-textgenerationneuron-llama-3-2-3b | ml.g6.xlarge (125K context length), ml.trn1.2xlarge (125K context length) | All g6/g5/p4/p5 instances; ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge
Llama-3.2-3B-Instruct | meta-textgeneration-llama-3-2-3b-instruct, meta-textgenerationneuron-llama-3-2-3b-instruct | ml.g6.xlarge (125K context length), ml.trn1.2xlarge (125K context length) | All g6/g5/p4/p5 instances; ml.inf2.xlarge, ml.inf2.8xlarge, ml.inf2.24xlarge, ml.inf2.48xlarge, ml.trn1.2xlarge, ml.trn1.32xlarge, ml.trn1n.32xlarge
Llama-3.2-11B-Vision | meta-vlm-llama-3-2-11b-vision | ml.p4d.24xlarge (125K context length) | p4d.24xlarge, p4de.24xlarge, p5.48xlarge
Llama-3.2-11B-Vision-Instruct | meta-vlm-llama-3-2-11b-vision-instruct | ml.p4d.24xlarge (125K context length) | p4d.24xlarge, p4de.24xlarge, p5.48xlarge
Llama-3.2-90B-Vision | meta-vlm-llama-3-2-90b-vision | ml.p5.24xlarge (125K context length) | p4d.24xlarge, p4de.24xlarge, p5.48xlarge
Llama-3.2-90B-Vision-Instruct | meta-vlm-llama-3-2-90b-vision-instruct | ml.p5.24xlarge (125K context length) | p4d.24xlarge, p4de.24xlarge, p5.48xlarge
Llama-Guard-3-11B-Vision | meta-vlm-llama-guard-3-11b-vision | ml.p4d.24xlarge | p4d.24xlarge, p4de.24xlarge, p5.48xlarge

Llama 3.2 models have been evaluated on over 150 benchmark datasets, demonstrating competitive performance with leading FMs.

Inference and example prompts for Llama-3.2 11B Vision

You can use Llama 3.2 11B and 90B models for text and image or vision reasoning use cases. You can perform a variety of tasks, such as image captioning, image-text retrieval, visual question answering and reasoning, document visual question answering, and more. The input payload to the endpoint looks like the following code examples.

Text-only input

The following is an example of text-only input:

{
    'messages': [
        {'role': 'system', 'content': 'You are a helpful assistant'}, 
        {'role': 'user', 'content': 'How are you doing today'}, 
        {'role': 'assistant', 'content': 'Good, what can i help you with today?'}, 
        {'role': 'user', "content": 'Give me 5 steps to become better at tennis?'}
    ], 
    'temperature': 0.6, 
    'top_p': 0.9, 
    'max_tokens': 512, 
    'logprobs': False
}

This produces the following response:

Here are 5 steps to help you improve your tennis game:

**Step 1: Develop a Consistent Serve**

A strong serve is the foundation of a good tennis game. Practice your serve regularly to develop a consistent and powerful shot. Focus on proper footwork, grip, and body positioning. Experiment with different serves, such as topspin and slice, to keep your opponent guessing.

**Step 2: Improve Your Footwork and Agility**

Good footwork is essential for getting into position to hit shots and covering the court. Practice shuffling, sliding, and turning to improve your agility and reaction time. Incorporate exercises like ladder drills and cone drills into your training to enhance your foot speed and agility.

**Step 3: Focus on Your Volley and Overhead Shots**

The volley and overhead shots are critical for winning points at the net. Practice hitting volleys and overheads with both forehand and backhand grips. Focus on keeping your racquet close to the ball and using your wrist and arm to generate power.

**Step 4: Work on Your Return of Serve**

The return of serve is a crucial shot in tennis, as it sets up the point for your opponent. Practice returning serves with different types of spin, such as topspin and slice. Focus on getting your racquet head to the ball early and using your legs to generate power.

**Step 5: Analyze Your Game and Practice with a Purpose**

To improve your game, you need to identify areas for improvement and practice with a purpose. Record your matches and analyze your game to identify weaknesses and areas for improvement. Create a practice plan that targets specific areas, such as your forehand or backhand, and focus on making progress in those areas.

Remember, improvement takes time and practice. Stay committed, and with consistent effort, you'll see improvement in your tennis game!

Single-image input

You can set up vision-based reasoning tasks with Llama 3.2 models with SageMaker JumpStart as follows:

import requests
import base64

def url_to_base64(image_url):
    # Download the image
    response = requests.get(image_url)
    if response.status_code != 200:
        return None

    # Encode the image content to base64
    image_base64 = base64.b64encode(response.content).decode('utf-8')
    return image_base64

Let’s load an image from the open source MATH-Vision dataset:

url = "https://raw.githubusercontent.com/mathvision-cuhk/MATH-V/refs/heads/main/images/13.jpg"
image_data = url_to_base64(url)

We can structure the message object with our base64 image data:

{
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Which of these figures differs from the rest four?"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": f"""data:image/jpeg;base64,{image_data}"""
          }
        }
      ]
    }
  ],
  "temperature": 0.4,
  "top_p": 0.95,
  "max_tokens": 512,
  "logprobs": False
}
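
Assuming the dictionary above is assigned to a variable named payload (a naming choice for this sketch), you can invoke the deployed endpoint through the same predictor used in the text-only example:

# Send the vision payload to the deployed Llama 3.2 11B Vision endpoint
response = predictor.predict(payload)
print(response["choices"][0]["message"]["content"])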

This produces the following response:

The figure labeled E is the one that differs from the rest.

Multi-image input

The following code is an example of multi-image input:

image_data1 = url_to_base64("https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2023/04/27/DBBLOG-2448-SingleAZInstance.png")
image_data2 = url_to_base64("https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2023/04/27/DBBLOG-2448-MultiAZInstance.png")
payload = {
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "There are two images. Describe each one and why they are related"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": f"""data:image/jpeg;base64,{image_data1}"""
          }
        },
        {
          "type": "image_url",
          "image_url": {
            "url": f"""data:image/jpeg;base64,{image_data2}"""
          }
        }
      ]
    }
  ],
  "temperature": 0.6,
  "top_p": 0.9,
  "max_tokens": 512,
  "logprobs": False
}

This produces the following response:

The two images depict different architectures for a distributed database system. The main difference between them is the location of the database instances and how they are connected.

**Image 1: Centralized Architecture**

In this image, there is a single database instance (DB1) that serves as the central repository for all data. This instance is connected to multiple application servers (App1, App2, App3) through a network. The application servers interact with the database instance to read and write data.

**Image 2: Distributed Architecture**

In this image, there are multiple database instances (DB1, DB2, DB3) that are distributed across different locations. Each database instance is connected to a local application server (App1, App2, App3) through a network. The application servers interact with their local database instances to read and write data.

The key differences between the two images are:

* **Scalability**: The distributed architecture (Image 2) allows for easier scalability, as new database instances can be added or removed as needed, without affecting the overall system.
* **Availability**: The distributed architecture provides higher availability, as if one database instance goes down, the other instances can still serve requests.
* **Performance**: The centralized architecture (Image 1) may experience performance issues if the central database instance becomes a bottleneck, while the distributed architecture can distribute the load across multiple instances.
* **Data consistency**: The centralized architecture may have easier data consistency management, as all data is stored in a single location, while the distributed architecture requires more complex mechanisms to ensure data consistency across multiple instances.

In summary, the centralized architecture is suitable for small to medium-sized applications with low traffic, while the distributed architecture is more suitable for large-scale applications with high traffic and scalability requirements.

Clean up

To avoid incurring unnecessary costs, when you’re done, delete the SageMaker endpoints using the following code snippets:

predictor.delete_model()
predictor.delete_endpoint()

Alternatively, to use the SageMaker console, complete the following steps:

  1. On the SageMaker console, under Inference in the navigation pane, choose Endpoints.
  2. Search for the endpoint you deployed.
  3. On the endpoint details page, choose Delete.
  4. Choose Delete again to confirm.

Conclusion

In this post, we explored how SageMaker JumpStart empowers data scientists and ML engineers to discover, access, and deploy a wide range of pre-trained FMs for inference, including Meta’s most advanced and capable models to date. Get started with SageMaker JumpStart and Llama 3.2 models today. For more information about SageMaker JumpStart, see Train, deploy, and evaluate pretrained models with SageMaker JumpStart and Getting started with Amazon SageMaker JumpStart.


About the Authors

Supriya Puragundla is a Senior Solutions Architect at AWS
Armando Diaz is a Solutions Architect at AWS
Sharon Yu is a Software Development Engineer at AWS
Siddharth Venkatesan is a Software Development Engineer at AWS
Tony Lian is a Software Engineer at AWS
Evan Kravitz is a Software Development Engineer at AWS
Jonathan Guinegagne is a Senior Software Engineer at AWS
Tyler Osterberg is a Software Engineer at AWS
Sindhu Vahini Somasundaram is a Software Development Engineer at AWS
Hemant Singh is an Applied Scientist at AWS
Xin Huang is a Senior Applied Scientist at AWS
Adriana Simmons is a Senior Product Marketing Manager at AWS
June Won is a Senior Product Manager at AWS
Karl Albertsen is a Head of ML Algorithm and JumpStart at AWS

Read More

Vision use cases with Llama 3.2 11B and 90B models from Meta

Vision use cases with Llama 3.2 11B and 90B models from Meta

Today, we are excited to announce the availability of Llama 3.2 in Amazon SageMaker JumpStart and Amazon Bedrock. The Llama 3.2 models are a collection of state-of-the-art pre-trained and instruct fine-tuned generative AI models that come in various sizes—from lightweight text-only 1B and 3B parameter models suitable for edge devices to small and medium-sized 11B and 90B parameter models capable of sophisticated reasoning tasks, including multimodal support for high-resolution images. SageMaker JumpStart is a machine learning (ML) hub that provides access to algorithms, models, and ML solutions so you can quickly get started with ML. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies, like Meta, through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.

In this post, we demonstrate how you can use Llama 3.2 11B and 90B models for a variety of vision-based use cases. This is the first time Meta’s Llama models have been released with vision capabilities. These new capabilities expand the usability of Llama models from their traditional text-only applications. The vision-based use cases that we discuss in this post include document visual question answering, extracting structured entity information from images, and image captioning.

Overview of Llama 3.2 11B and 90B Vision models

The Llama 3.2 collection of multimodal and multilingual large language models (LLMs) is a collection of pre-trained and instruction-tuned generative models in a variety of sizes. The 11B and 90B models are multimodal—they support text in/text out, and text+image in/text out.

Llama 3.2 11B and 90B are the first Llama models to support vision tasks, with a new model architecture that integrates image encoder representations into the language model. The new models are designed to be more efficient for AI workloads, with reduced latency and improved performance, making them suitable for a wide range of applications. All Llama 3.2 models support a 128,000 context length, maintaining the expanded token capacity introduced in Llama 3.1. Additionally, the models offer improved multilingual support for eight languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.

Llama 3.2 models are available today for inferencing in SageMaker JumpStart and Amazon Bedrock. With SageMaker JumpStart, you can access Llama 3.2 models initially in the US East (Ohio) AWS Region on the supported instance types. Meta’s Llama 3.2 90B and 11B models are also available in Amazon Bedrock in the US West (Oregon) Region, and in the US East (Ohio, N. Virginia) Regions via cross-Region inference. Llama 3.2 1B and 3B models are available in the US West (Oregon) and Europe (Frankfurt) Regions, and in the US East (Ohio, N. Virginia) and Europe (Ireland, Paris) Regions via cross-Region inference, with planned expanded Regional availability in the future.

Solution overview

In the following sections, we walk through how to configure Llama 3.2 vision models in Amazon Bedrock and Amazon SageMaker JumpStart for vision-based reasoning. We also demonstrate use cases for document question answering, entity extraction, and caption generation.

For the examples shown in this post, we use the Llama 3.2 90B model unless otherwise noted. The fashion images are from the Fashion Product Images Dataset. Caption generation images are from Human Preference Synthetic Dataset. The interior design and real estate images are from the Interior design dataset.

Prerequisites

The following prerequisites are needed to implement the steps outlined in this post:

For information about how to set up Llama 3.2 model access for Amazon Bedrock, see the launch post. For details on creating model endpoints in SageMaker JumpStart, refer to the launch post.

Configure Llama 3.2 for vision-based reasoning in Amazon Bedrock

To set up vision-based reasoning tasks with Llama 3.2 models in Amazon Bedrock, use the following code snippet:

import os
import boto3
import json
import base64
from botocore.config import Config

# Initialize the Bedrock client
config = Config(
    region_name=os.getenv("BEDROCK_REGION", "us-west-2"),
)
bedrock_runtime = boto3.client('bedrock-runtime', config=config)
MODEL_ID = "us.meta.llama3-2-90b-instruct-v1:0"

Amazon Bedrock supports the messages object as part of the Converse API. With the Converse API, you don’t have to convert the image into base64 (compared to SageMaker JumpStart).

You can read the image with the following code:

# Read the image
image_path = "<your_file_path>"  # Replace with the actual path to your image
try:
    # Open the image file and read its contents
    with open(image_path, "rb") as image_file:
        image_bytes = image_file.read()
    # The Converse API accepts raw image bytes, so no base64 encoding is needed
    image_data = image_bytes
except FileNotFoundError:
    print(f"Image file not found at {image_path}")
    image_data = None

Use the following code to create a messages object:

# Construct the messages for the model input
messages = [
    {
        "role": "user",
        "content": [
            {
                "text": prompt
            },
            {
                "image": {
                    "format": "<your_file_format>",
                    "source": {
                        "bytes": image_data
                    }
                }
            }
        ]
    }
]

Invoke the Amazon Bedrock Converse API as follows:

try:
    # Invoke the Amazon Bedrock Converse API
    response = bedrock_runtime.converse(
        modelId=MODEL_ID,  # MODEL_ID defined at the beginning
        messages=messages,
        inferenceConfig={
            "maxTokens": 4096,
            "temperature": 0,
            "topP": .1
        },
    )

    # Read the response
    print(response['output']['message']['content'][0]['text'])

except Exception as e:
    print(f"An error occurred while invoking the endpoint: {str(e)}")

Configure Llama 3.2 for vision-based reasoning in SageMaker

You can set up vision-based reasoning tasks with Llama 3.2 vision models on a SageMaker endpoint with the following code snippet (refer to the Llama 3.2 in SageMaker JumpStart post to set up the inference endpoint):

import boto3
import json
import base64

# Initialize the SageMaker runtime client
sagemaker_runtime = boto3.client('sagemaker-runtime')
endpoint_name = '<model-endpoint>'  # Replace with your actual endpoint name

SageMaker JumpStart deployment can also take in a Messages API style messages object as the input (similar to the Amazon Bedrock Converse API). First, the image needs to be read into a base64 format before sending it through the messages object.

Read the image with the following code:

# Read and encode the image
image_path = "<your_file_path>"  # Replace with the actual path to your image
try:
    # Open the image file and read its contents
    with open(image_path, "rb") as image_file:
        image_bytes = image_file.read()
    # Encode the image bytes to base64
    image_data = base64.b64encode(image_bytes).decode('utf-8')
    image_media_type = 'image/jpeg'  # Adjust if using a different image format
except FileNotFoundError:
    print(f"Image file not found at {image_path}")
    image_data = None
    image_media_type = None

Create a messages object with the following code:

# Create a data URL for the image
my_url = f"""data:image/jpeg;base64,{image_data}"""

# Construct the messages for the model input
messages = [    
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": prompt
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": my_url
                }
            }
        ]
    }
]

In the preceding code, prompt is the question we ask about the reasoning of the model with the image.

After you create the messages object, you can send that as payload to the SageMaker endpoint:

# Wrap the messages object in a payload along with inference parameters
# (the parameter values here mirror the earlier text-only example)
payload = {
    "messages": messages,
    "temperature": 0.6,
    "top_p": 0.9,
    "max_tokens": 512,
    "logprobs": False
}

try:
    # Invoke the SageMaker endpoint
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType='application/json',
        Body=json.dumps(payload)
    )
    
    # Read the response body
    response_body = response['Body'].read()
    
    if response_body:
        try:
            # Parse the JSON response
            result = json.loads(response_body.decode('utf-8'))
            # Print the model's response
            print(result['choices'][0]['message']['content'])
        except json.JSONDecodeError as json_err:
            print(f"Failed to parse JSON: {json_err}")
            print(f"Raw response: {response_body['choices'][0]['message']['content']}")
    else:
        print("The response body is empty")

except Exception as e:
    print(f"An error occurred while invoking the endpoint: {str(e)}")

Document question answering

In this section, we present example use cases for document question answering. We focus on two use cases: first on answering questions based on financial results slides that contain complex graphical information, and secondly solving math-based problems by looking at an image.

Financial results slides Q&A

Llama 3.2 vision models can reason over and answer questions about images that contain complicated information, such as plots, graphs, and tables. Let’s use the model as a financial analyst assistant to analyze slides with different types of financial information and answer our questions. We use the Amazon Q2 2024 Financial Results Conference Calls Slides document to ask Llama 3.2 some questions based on this slide deck.

For our first prompt, we ask “How is the sales trending over the last twelve months?” with the following slide.

We get the following response:
The sales are trending up 12% year over year. The bar graph shows that the sales have increased from $538,046 in Q2 2023 to $604,334 in Q1 2024, and further to $604,334 in Q2 2024. This indicates a steady growth in sales over the last twelve months.

Next, we ask “Can you please help me understand this slide? How are the operating and free cash flow trending the last 12 month period? What is the significant contributor to the free cash flow trend?” We use the following slide.

We get the following response:
The slide presents a table with two main sections: "Operating Cash Flow" and "Free Cash Flow Less Equipment Finance Leases and Principal Repayments of All Other Finance Leases and Financing Obligations." The table is divided into five columns, each representing a quarter from Q2 2023 to Q2 2024.

**Operating Cash Flow Trend:**

*   Q2 2023: $61,841

*   Q3 2023: $71,654

*   Q4 2023: $84,946

*   Q1 2024: $99,147

*   Q2 2024: $107,952

The operating cash flow has consistently increased over the five quarters, indicating a positive trend.

**Free Cash Flow Trend:**

*   Q2 2023: $6,734

*   Q3 2023: $20,241

*   Q4 2023: $35,549

*   Q1 2024: $48,778

*   Q2 2024: $51,448

The free cash flow has also shown an upward trend, with a significant increase from Q2 2023 to Q2 2024.

**Significant Contributor to Free Cash Flow Trend:**

The significant contributor to the free cash flow trend is the "Principal repayments of all other finance leases" and "Principal repayments of financing obligations." These two items have consistently increased over the five quarters, contributing to the overall growth in free cash flow.

In summary, both operating cash flow and free cash flow have demonstrated a positive trend over the last 12-month period. The significant contributor to the free cash flow trend is the increase in principal repayments of all other finance leases and financing obligations.

Visual math question answering

Another interesting problem for visual question answering is answering visual math and logic puzzles. Llama 3.2 vision models are highly capable in solving and answering visual math questions. We show a few examples here.

For our first prompt, we use the following image and ask “The kangaroo is inside how many circles?”

We get the following response:
The kangaroo is inside 3 circles.

Next, we use the following image and ask “Which of these figures differ from the other four?”

We get the following response:
The figure labeled E is the only one with a different shape for the skirt. The other figures have triangular skirts, while figure E has a rectangular skirt.

Entity extraction

Extracting information about products is an important use case for the retail industry. On platforms supporting third-party sellers, product images are often the most readily available resource for gathering information about a product. Therefore, accurately extracting relevant information about the product is essential for the success of the product. In this section, we show a couple of examples of clothing product images and how you can use Llama 3.2 to extract relevant product information that can support listing the product in an ecommerce application.

We first use the following images and the prompt “Can you please extract all information about the product in the image? Think very carefully in a step-by-step manner to construct a json file to organize all the information about the product. Please consider formulating the json file so that the extracted information can be stored in a database for further use in a retail ecommerce application. Return the json file as the output.”

For the pink dress, we get the following output:
Based on the image, here is a JSON representation of the product information:

```
{
"product_id": "unknown",
"name": "Pink Floral Dress",
"description": "A sleeveless dress with a pink floral pattern and a pink trim at the bottom",
"category": "Dresses",
"sub_category": "Casual Dresses",
"color": "Pink",
"size": "unknown",
"price": "unknown",
"brand": "unknown",
"material": "unknown",
"features": [
"Sleeveless",
"Floral pattern",
"Pink trim at the bottom"
],
"images": [
{
"url": "image_url",
"alt_text": "Pink Floral Dress"
}
]
}
```

Please note that some information such as product_id, size, price, brand, and material are unknown as they are not visible in the image. Also, the image_url is a placeholder and should be replaced with the actual URL of the image.

For the blue cap, we get the following output:

Based on the image, here is a JSON representation of the product information:

```json
{
"product_id": "NIKE-CAP-001",
"name": "Nike Cap",
"description": "A blue baseball cap with a white Nike logo",
"category": "Accessories",
"subcategory": "Hats",
"brand": "Nike",
"color": "Blue",
"size": "One Size",
"material": "Cotton",
"price": 29.99,
"image_url": "https://example.com/nike-cap.jpg",
"features": [
"Adjustable strap in the back",
"Embroidered Nike logo on the front"
]
}
```

This JSON object includes the following fields:

* `product_id`: a unique identifier for the product
* `name`: the name of the product
* `description`: a brief description of the product
* `category` and `subcategory`: the categories and subcategories that the product belongs to
* `brand`: the brand of the product
* `color`: the color of the product
* `size`: the size of the product
* `material`: the material that the product is made of
* `price`: the price of the product
* `image_url`: the URL of the product image
* `features`: an array of features that describe the product

Note that this is just one possible way to represent the product information in JSON. Depending on the specific requirements of your application, you may need to add or remove fields, or use different data types for certain fields.

Caption generation

Another important and useful capability of vision models is to generate captions automatically for images. In this section, we showcase caption generation with a simple caption, a more creative and funnier caption, and a marketing style caption.

Simple captioning

For simple captioning, we use the following image and the prompt “Please provide a caption for this image. Be creative but respectful.”

We get the following response:

The image shows a young woman sitting in a dentist's chair, with a female dentist examining her teeth. The woman is wearing a bib to protect her clothes from any water or other substances that may be used during the examination. The dentist is wearing a white lab coat and a mask, and she is holding a dental mirror and a probe. The background of the image is a dental office, with various dental tools and equipment visible.

The overall atmosphere of the image is one of professionalism and care. The dentist appears to be focused on her work, and the woman seems to be relaxed and comfortable. The image suggests that the woman is receiving a routine dental checkup or cleaning, and that the dentist is taking good care of her.

Creative captioning

For our second example, we want a more creative and funny tone. We use the same image and the prompt “Please provide a short caption for this image. Be creative and funny but respectful. This is meant to be shared on a social network application.”

We get the following response:

"Brace yourself for a sparkling smile, because this dentist is about to make your teeth shine brighter than a disco ball!"

Listing generation

To generate a real estate listing, we use the following image and the prompt “You are a real estate marketing expert. Please create a listing description with the attached image of a property.”

We get the following output:

**Welcome to Your Dream Home**

This stunning property boasts an open-plan living area that seamlessly blends style and functionality. The spacious living room, dining area, and kitchen are perfect for entertaining, with plenty of natural light pouring in through the large windows. The sleek, modern kitchen features high-end appliances and ample counter space, making meal prep a breeze.

The living room is a cozy retreat, with a comfortable sofa and chairs arranged around a stylish coffee table. The adjacent dining area is perfect for hosting dinner parties, with a large table that seats eight.

But the real showstopper is the stunning view of the water, which can be enjoyed from the comfort of your own home. Imagine sipping your morning coffee or enjoying a glass of wine in the evening while taking in the breathtaking scenery.

This property is the perfect blend of form and function, offering the ultimate in comfort and style. Don't miss out on this incredible opportunity to make it your own.

**Key Features:**

* Open-plan living area with kitchen, dining area, and living room
* Modern kitchen with high-end appliances and ample counter space
* Comfortable living room with sofa and chairs
* Large dining table that seats eight
* Stunning view of the water
* Perfect for entertaining

**Don't miss out on this incredible opportunity. Contact us today to schedule a viewing.**

Conclusion

In this post, we demonstrated the diverse vision capabilities of the Llama 3.2 11B and 90B models from Meta. Llama 3.2 vision models enable you to solve multiple use cases, including document understanding, math and logic puzzle solving, entity extraction, and caption generation. These capabilities can drive productivity in a number of enterprise use cases, including ecommerce (retail), marketing, and much more.

To learn more about Llama 3.2 features and capabilities in Amazon Bedrock, refer to the launch post, product page, and documentation. To learn more about using Llama 3.2 in SageMaker JumpStart, see the launch post, and for more information about using foundation models in SageMaker JumpStart, check out the product page and documentation.

We can’t wait to see what you build with the Llama 3.2 models on AWS!


About the Authors

Dr. Natarajan Chennimalai Kumar is a Principal Solutions Architect in the 3rd Party Model Provider team at AWS, working closely with the Llama partner engineering team at Meta to enable AWS customers use Llama models. He holds a PhD from University of Illinois at Urbana-Champaign. He is based in the Bay Area in California. Outside of work, he enjoys watching shows with his kids, playing tennis, and traveling with his family.

Sebastian Bustillo is a Solutions Architect at AWS. He focuses on AI/ML technologies with a profound passion for generative AI and compute accelerators. At AWS, he helps customers unlock business value through generative AI. When he’s not at work, he enjoys brewing a perfect cup of specialty coffee and exploring the outdoors with his wife.

Marco Punio is a Sr. Specialist Solutions Architect focused on generative AI strategy, applied AI solutions, and conducting research to help customers hyperscale on AWS. As a member of the 3rd Party Model Provider Applied Sciences Solutions Architecture team at AWS, he is a Global Lead for the Meta – AWS Partnership and technical strategy. Based in Seattle, WA, Marco enjoys writing, reading, exercising, and building applications in his free time.

Armando Diaz is a Solutions Architect at AWS. He focuses on generative AI, AI/ML, and data analytics. At AWS, Armando helps customers integrating cutting-edge generative AI capabilities into their systems, fostering innovation and competitive advantage. When he’s not at work, he enjoys spending time with his wife and family, hiking, and traveling the world.

Read More

How generative AI is transforming legal tech with AWS

How generative AI is transforming legal tech with AWS

Legal professionals often spend a significant portion of their work searching through and analyzing large documents to draw insights, prepare arguments, create drafts, and compare documents. The rise of generative artificial intelligence (AI) has brought an inflection point, driven by foundation models (FMs). These FMs, with simple instructions (prompts), can perform various tasks such as drafting emails, extracting key terms from contracts or briefs, summarizing documents, searching through multiple documents, and more. As a result, these models are a natural fit for legal tech. Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the US. A special report published by Thomson Reuters found that generative AI awareness is significantly higher among legal professionals, with 91% of respondents saying they have heard of or read about these tools.

However, such models alone are not sufficient due to legal and ethical concerns around data privacy. Security and confidentiality are of paramount importance in the legal field. Legal tech professionals, like any other business handling sensitive customer information, require robust security and confidentiality practices. Advancements in AI and natural language processing (NLP) show promise to help lawyers with their work, but the legal industry also has valid questions around the accuracy and costs of these new techniques, as well as how customer data will be kept private and secure. AWS AI and machine learning (ML) services help address these concerns within the industry.

In this post, we share how legal tech professionals can build solutions for different use cases with generative AI on AWS.

AI/ML on AWS

AI and ML have been a focus for Amazon for over 25 years, and many of the capabilities customers use with Amazon are driven by ML. Ecommerce recommendation engines, Just Walk Out technology, Alexa devices, and route optimizations are some examples. These capabilities are built using the AWS Cloud. At AWS, we have played a key role in making ML accessible to anyone who wants to use it, including more than 100,000 customers of all sizes and industries. Thomson Reuters, Booking.com, and Merck are some of the customers who are using the generative AI capabilities of AWS services to deliver innovative solutions.

AWS makes it straightforward to build and scale generative AI customized for your data, your use cases, and your customers. AWS gives you the flexibility to choose different FMs that work best for your needs. Your organization can use generative AI for various purposes like chatbots, intelligent document processing, media creation, and product development and design. You can now apply that same technology to the legal field.

When you’re building generative AI applications, FMs are part of the architecture and not the entire solution. There are other components involved, such as knowledge bases, data stores, and document repositories. It’s important to understand how your enterprise data is integrating with different components and the controls that can be put in place.

Security and your data on AWS

Robust security and confidentiality are foundations to the legal tech domain. At AWS, security is our top priority. AWS is architected to be the most secure global cloud infrastructure on which to build, migrate, and manage applications and workloads. This is backed by our deep set of over 300 cloud security tools and the trust of our millions of customers, including the most security sensitive organizations like government, healthcare, and financial services.

Security is a shared responsibility model. Core security disciplines, like identity and access management, data protection, privacy and compliance, application security, and threat modeling, are still critically important for generative AI workloads, just as they are for any other workload. For example, if your generative AI application is accessing a database, you’ll need to know what the data classification of the database is, how to protect that data, how to monitor for threats, and how to manage access. But beyond emphasizing long-standing security practices, it’s crucial to understand the unique risks and additional security considerations that generative AI workloads bring. To learn more, refer to Securing generative AI: An introduction to the Generative AI Security Scoping Matrix.

Sovereignty has been a priority for AWS since the very beginning, when we were the only major cloud provider to allow you to control the location and movement of your customer data and address stricter data residency requirements. The AWS Digital Sovereignty Pledge is our commitment to offering AWS customers the most advanced set of sovereignty controls and features available in the cloud. We are committed to expanding our capabilities to allow you to meet your digital sovereignty needs, without compromising on the performance, innovation, security, or scale of the AWS Cloud.

AWS generative AI approach for legal tech

AWS solutions enable legal professionals to refocus their expertise on high-value tasks. On AWS, generative AI solutions are now within reach for legal teams of all sizes. With virtually unlimited cloud computing capacity, the ability to fine-tune models for specific legal tasks, and services tailored for confidential client data, AWS provides the ideal environment for applying generative AI in legal tech.

In the following sections, we share how we’re working with several legal customers on different use cases that are focused on improving the productivity of various tasks in legal firms.

Boost productivity to allow a search based on context and conversational Q&A

Legal professionals store their information in different ways, such as on premises, in the cloud, or a combination of the two. It can take hours or days to consolidate the documents prior to reviewing them if they are scattered across different locations. The industry relies on tools where searching is limited to each domain and may not be flexible enough for users to find the information they need.

To address this issue, AWS used AI/ML and search engines to provide a managed service where users can ask a human-like, open-ended generative AI-powered assistant to answer questions based on data and information. Users can prompt the assistant to extract key attributes that serve as metadata, find relevant documents, and answer legal questions and inquiries about terms. What used to take hours can now be done in a matter of minutes, and based on what we have learned with our customers, AWS generative AI has improved resource productivity by up to 15% compared to manual processes during its initial phases.

Boost productivity with legal document summarization

Legal tech workers can benefit from the generation of a first draft that can then be reviewed and revised by the process owner. Multiple use cases are being implemented under this category:

  • Contract summarization for tax approval
  • Approval attachment summarization
  • Case summarization

The summarization of documents can either use existing documents and videos from your document management system or allow users to upload a document and ask questions in real time. Instead of writing the summary, generative AI uses FMs to create the content so the lawyer can review the final content. This approach reduces these laborious tasks to 5–10 minutes instead of 20–60 minutes.

Boost attorney productivity by drafting and reviewing legal documents using generative AI

Generative AI can help boost attorney productivity by automating the creation of legal documents. Tasks like drafting contracts, briefs, and memos can be time-consuming for attorneys. With generative AI, attorneys can describe the key aspects of a document in plain language and instantly generate an initial draft. This approach uses generative AI with templates and chatbot interactions to add allowed text to an initial draft for validation prior to legal review.

Another use case is to improve contract review using generative AI. Attorneys spend valuable time negotiating contracts. Generative AI can streamline this process by reviewing and redlining contracts and identifying potential discrepancies and conflicting provisions. Given a set of documents, this functionality allows attorneys to ask open-ended questions based on the documents along with follow-up questions, enabling human-like conversational experiences with enterprise data.

Start your AWS generative AI journey today

We are at the beginning of a new and exciting foray into generative AI, and we have just scratched the surface of some potential applications in the legal field—from text summarization, drafting legal documents, or searching based on context. The AWS generative AI stack offers you the infrastructure to build and train your own FMs, services to build with existing FMs, or applications that use other FMs. You can start with the following services:

  • Amazon Q Business is a new type of generative AI-powered assistant. It can be tailored to your business to have conversations, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code bases, and enterprise systems. Amazon Q Business provides quick, relevant, and actionable information and advice to help streamline tasks, speed up decision-making and problem-solving, and help spark creativity and innovation.
  • Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. With Amazon Bedrock, you can experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that perform tasks using your enterprise systems and data sources.
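To give a sense of how little code is needed to get started, the following is a minimal sketch that asks an Anthropic Claude 3 model on Amazon Bedrock to summarize a contract clause using the Converse API. The Region, model ID, and sample clause are illustrative assumptions, not part of a specific solution.

```python
import boto3

# Amazon Bedrock Runtime client (assumption: a Region where the model is enabled)
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Illustrative contract clause to summarize
clause = (
    "The Supplier shall indemnify the Customer against all losses arising from any "
    "breach of this Agreement, except to the extent caused by the Customer's negligence."
)

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID; use a model enabled in your account
    messages=[
        {
            "role": "user",
            "content": [{"text": f"Summarize this contract clause in one sentence:\n\n{clause}"}],
        }
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```

The same call pattern extends naturally to the summarization, drafting, and review use cases described earlier in this post.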

In upcoming posts, we will dive deeper into different architectural patterns that describe how to use AWS generative AI services to solve for these different use cases.

Conclusion

Generative AI solutions are empowering legal professionals to find documents and produce summaries with less effort, and allowing your business to standardize and modernize contract generation and revision. These solutions are not intended to replace legal experts, but to increase their productivity and the time they can spend practicing law.

We are excited about how legal professionals can build with generative AI on AWS. Start exploring our services and find out where generative AI could benefit your organization. Our mission is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI in a secure and scalable manner. This is just the beginning of what we believe will be the next wave of generative AI, powering new possibilities in legal tech.

About the Authors

Victor Fiss is a Sr. Solution Architect Leader at AWS, helping customers in their cloud journey from infrastructure to generative AI solutions at scale. In his free time, he enjoys hiking and playing with his family.

Vineet Kachhawaha is a Sr. Solutions Architect at AWS focusing on AI/ML and generative AI. He co-leads the AWS for Legal Tech team within AWS. He is passionate about working with enterprise customers and partners to design, deploy, and scale AI/ML applications to derive business value.

Pallavi Nargund is a Principal Solutions Architect at AWS. She is a generative AI lead for East – Greenfield. She leads the AWS for Legal Tech team. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Pallavi holds a Bachelor of Engineering from the University of Pune, India. She lives in Edison, New Jersey, with her husband, two girls, and a Labrador pup.

Deploy generative AI agents in your contact center for voice and chat using Amazon Connect, Amazon Lex, and Amazon Bedrock Knowledge Bases

This post is co-written with Vraj Shah and Chaitanya Hari from DoorDash.

DoorDash connects consumers with their favorite local businesses in more than 30 countries across the globe. Recently, the company faced a significant challenge in handling the high volume of calls from its contractor delivery workers, known as Dashers. With a user base of over 37 million active consumers and 2 million monthly active Dashers at the end of 2023, DoorDash recognized the need to reduce the burden on its live agents by providing a more efficient self-service experience for Dashers.

To address this challenge, the contact center team at DoorDash wanted to harness the power of generative AI to deploy a solution quickly, and at scale, while maintaining their high standards for issue resolution and customer satisfaction. Dashers, who generally prefer calling into support rather than texting while they’re on the road, require fast and reliable assistance, with minimal response latency. This low latency requirement became a critical factor in DoorDash’s quest for an effective, voice-enabled self-service solution.

Working with the AWS Generative AI Innovation Center, DoorDash built a solution in just 2 months that provides Dashers with a low-latency self-service voice experience for answering frequently asked questions, reducing the need for live agent assistance.

The solution uses Amazon Lex, a voice-enabled conversational AI service; Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case; and Amazon Bedrock Knowledge Bases, a fully managed service that connects large language models (LLMs) to your data sources. It’s a fully serverless architecture that uses Amazon OpenSearch Serverless, which can run petabyte-scale workloads, without you having to manage the underlying infrastructure.

The solution fields hundreds of thousands of calls per day, responding to Dashers with answers to their questions in 2.5 seconds or less. It also enables operational capabilities including automated testing, conversation analytics, monitoring and observability, and LLM hallucination prevention and detection.

“We wanted to empower Dashers to get help with their most common questions and issues as quickly and efficiently as possible, saving them time, effort, and increasing their trust in DoorDash’s self-service capabilities,”

– Chaitanya Hari, Contact Center Product Lead at DoorDash.

In this post, we show how you can deploy generative AI agents in your contact center using AWS services.

Solution overview

For this post, we use a fictional hotel chain example called Example Corp Hospitality Group. The following is a sample interaction between a customer and the LLM-powered automated agent:

Customer: Hi, we’re looking for a great place for a family vacation

Agent: Example Corp Family Getaways offers family-friendly hotels and resorts located near top attractions like theme parks, national parks, and historic sites. The resorts feature spacious accommodations, activities for all ages, and diverse dining options to create unforgettable family memories.

Customer: Where are they located?

Agent: Example Corp Family Getaways offers family-friendly resorts in a variety of popular vacation destinations, including Orlando, Florida; Anaheim, California; Williamsburg, Virginia; Pigeon Forge, Tennessee; San Antonio, Texas; Scottsdale, Arizona; Branson, Missouri; Wisconsin Dells, Wisconsin; and Hershey, Pennsylvania.

Customer: Pigeon Forge sounds interesting.

Agent: Our Pigeon Forge resort offers families a chance to experience the great outdoors while enjoying the excitement of Dollywood and other nearby attractions. Guests can enjoy our indoor water park, mini-golf course, and nightly s’mores roasts around the fire pit.

Customer: Do they have kids’ menus?

Agent: Example Corp Family Getaways resorts in Pigeon Forge, Tennessee feature family-friendly dining options, including dedicated kids’ menus with favorites like pizza, chicken nuggets, and mac and cheese.

You can deploy the solution in your own AWS account and try it out. The following diagram illustrates the solution architecture.

Solution architecture diagram

We will walk you through deploying and testing these major components of the solution:

  1. An AWS CloudFormation stack to set up an Amazon Bedrock knowledge base, where you store the content used by the solution to answer questions.
  2. A CloudFormation stack to create an Amazon Lex bot and an AWS Lambda fulfillment function, which implement the core Retrieval Augmented Generation (RAG) question answering capability.
  3. An optional CloudFormation stack to deploy a data pipeline to enable a conversation analytics dashboard.
  4. An optional CloudFormation stack to enable an asynchronous LLM hallucination detection feature.
  5. Optional Jupyter notebooks in Amazon SageMaker that provide an automated testing capability that compares generated answers to ground truth answers, providing pass/fail grades with explanations.

Everything you need is also provided as open source in our GitHub repo.

Prerequisites

You need to have an AWS account and an AWS Identity and Access Management (IAM) role and user with permissions to create and manage the necessary resources and components for this application. If you don’t have an AWS account, see How do I create and activate a new Amazon Web Services account?

This solution uses Amazon Bedrock LLMs to find answers to questions from your knowledge base. Before proceeding, if you have not previously done so, request access to at least the following Amazon Bedrock models:

  • Amazon Titan Embeddings G1 – Text
  • Cohere Embed English v3 and Cohere Embed Multilingual v3
  • Anthropic’s Claude 3 Haiku and Anthropic’s Claude 3 Sonnet
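Model access is granted on the Amazon Bedrock Model access page in the console. If you want to double-check the exact model IDs offered in your Region from code, a quick listing such as the following sketch can help (the Region is an assumption; listing a model does not by itself grant access to it):

```python
import boto3

# Amazon Bedrock control plane client (assumption: deploying in us-east-1)
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models offered in this Region and print the providers used by this solution
for model in bedrock.list_foundation_models()["modelSummaries"]:
    if model["providerName"] in ("Amazon", "Anthropic", "Cohere"):
        print(f'{model["providerName"]}: {model["modelId"]}')
```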

If you’ll be integrating with Amazon Connect, make sure you have an instance available in your account. If you don’t already have one, you can create one. If you plan to deploy the conversation analytics stack, you need Amazon QuickSight, so make sure you have enabled it in your AWS account. 

At the time of writing, this solution is available in the following AWS Regions: Asia Pacific (Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, London), US East (N. Virginia), and US West (Oregon).

Deploy the Amazon Bedrock knowledge base

You can use the provided CloudFormation stack to create the Amazon Bedrock knowledge base instances you need, using Amazon Simple Storage Service (Amazon S3) as a data source. Complete the following steps to set up your knowledge base:

  1. Sign in to your AWS account, then choose Launch Stack to deploy the CloudFormation template:

Launch Knowledge Base stack

  2. Provide a stack name, for example contact-center-kb.
  3. Provide the name for an existing S3 bucket, for example contact-center-kb-(your-account-number). This is where the content for the demo solution will be stored. Create this S3 bucket if you don’t already have one.
  4. Do not specify an S3 prefix.
  5. Choose an embedding model, such as amazon.titan-embed-text-v2:0.
  6. Choose the Fixed-size chunking strategy.
  7. For the maximum tokens per chunk entry, use 600 for the Amazon Titan embeddings model. (If you are using the Cohere embeddings model, use 512.) This represents about a full page of text.
  8. For the percentage overlap, use 10%.
  9. Leave the four entries for Index Details at their default values (index name, vector field name, metadata field name, and text field name).
  10. Choose Next.
  11. On the Configure stack options page, choose Next.
  12. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack will take about 10 minutes to deploy.
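The stack configures the knowledge base and its S3 data source for you. For reference, the fixed-size chunking settings above (600 tokens per chunk with 10% overlap) map to the Amazon Bedrock CreateDataSource API as in the following sketch; the knowledge base ID and bucket ARN are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

# Attach an S3 data source to an existing knowledge base using fixed-size chunking
# (the knowledge base ID and bucket ARN below are placeholders)
response = bedrock_agent.create_data_source(
    knowledgeBaseId="KBID123456",
    name="contact-center-kb-source",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::contact-center-kb-123456789012"},
    },
    vectorIngestionConfiguration={
        "chunkingConfiguration": {
            "chunkingStrategy": "FIXED_SIZE",
            "fixedSizeChunkingConfiguration": {
                "maxTokens": 600,         # about a full page of text for the Titan embeddings model
                "overlapPercentage": 10,  # matches the 10% overlap used by the stack
            },
        }
    },
)

print(response["dataSource"]["dataSourceId"])
```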

Upload the sample content and test your knowledge base

The demonstration sample for the solution includes an LLM-based hotel-bot that can answer questions about the fictional hotel chain Example Corp Hospitality Group. You need to load the content for this hotel chain into the S3 bucket that you specified for the knowledge base stack. You can find the S3 bucket used by the CloudFormation stack on the Outputs tab for the stack.

  1. Either using the AWS Command Line Interface (AWS CLI) or the AWS Management Console, upload the following folders from the content section of the GitHub repo:
    • corporate
    • family-getaways
    • luxury-suites
    • party-times
    • seaside-resorts
    • waypoint-inns

You can choose either the PDF versions or the Word document versions (Word versions recommended). When you’re done, the top level of your S3 bucket should contain six folders, each containing a single Word or PDF document.
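If you prefer to script the upload rather than use the console, the following sketch copies the six content folders to your bucket; the local path and bucket name are assumptions based on a local clone of the GitHub repo.

```python
import os
import boto3

s3 = boto3.client("s3")

bucket = "contact-center-kb-123456789012"  # the S3 bucket you specified for the knowledge base stack
folders = ["corporate", "family-getaways", "luxury-suites",
           "party-times", "seaside-resorts", "waypoint-inns"]

# Upload each document under its folder so the bucket mirrors the repo's content layout
for folder in folders:
    local_dir = os.path.join("content", folder)  # path within your local clone of the repo
    for file_name in os.listdir(local_dir):
        s3.upload_file(os.path.join(local_dir, file_name), bucket, f"{folder}/{file_name}")
        print(f"Uploaded {folder}/{file_name}")
```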

  2. On the Amazon Bedrock console, choose Knowledge bases in the navigation pane.
  3. Choose your new knowledge base to open it.

A message appears that says “One or more data sources have not been synced.”

  4. Select the data source and choose Sync.

The sync process should only take a minute or two.

After your data source has been synced, you can try some question answering on the Amazon Bedrock console. Make sure you have enabled all the models approved by your organization on the Amazon Bedrock Model access page.

Select an LLM model, such as Anthropic’s Claude 3 Haiku on Amazon Bedrock, and start asking questions! You might want to peruse the sample documents you uploaded for some ideas about questions to ask.

Knowledge base test example
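You can also run the same kind of test programmatically. The following sketch calls the RetrieveAndGenerate API with Anthropic’s Claude 3 Haiku; the knowledge base ID, Region, and question are placeholders.

```python
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Ask a question against the knowledge base and let the model generate a grounded answer
response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "Which Example Corp Family Getaways resorts have an indoor water park?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID123456",  # from the Outputs tab of the knowledge base stack
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

print(response["output"]["text"])
```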

Deploy the hallucination detection stack (optional)

If you want to use the optional asynchronous hallucination detection feature, deploy this stack. Otherwise, move on to the next section. You can use this CloudFormation stack for any RAG-based solution requiring asynchronous hallucination detection.

  1. Choose Launch Stack:

Launch Hallucination Detection stack

  2. Provide a stack name, for example contact-center-hallucination-detection.
  3. Specify an LLM to perform the hallucination detection. At the time of writing, there are seven LLMs that are recommended for hallucination detection. For the demo solution, choose the default (Claude V3 Sonnet).
  4. Optionally, create an AWS Key Management Service (AWS KMS) customer managed key (CMK) to encrypt the Amazon Simple Queue Service (Amazon SQS) queue and the Amazon CloudWatch Logs log group for the Lambda function (recommended for production).

There are two types of Amazon CloudWatch alarms in this stack:

  • ERROR alarms – For code issues with the Lambda function that does the hallucination detection work
  • WARNING alarms – For when the Lambda function actually detects a hallucination

Both alarm types are optional, but recommended.

  5. Choose yes to enable or no to disable the alarms.
  6. For the alarms that you enable, you can specify an optional email address or distribution list to receive email notifications about the alarms.
  7. Choose Next.
  8. On the Configure stack options page, choose Next.
  9. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack will take about a minute or two to deploy.

When the stack is complete, you can review the resources it creates on the Resources tab for the CloudFormation stack. In particular, review the Lambda function code.

If you entered email addresses for the alarm notifications, you should receive email requests asking you to confirm the subscriptions. Confirm them to receive email notifications about alarms that may occur.

Deploy the RAG solution stack

If you’re integrating with Amazon Connect, make sure you have an instance available in your account. If you don’t already have one, you can create one. Then complete the following steps to deploy the Amazon Lex bot and Lambda fulfillment function:

  1. Choose Launch Stack:

  2. Provide a stack name, for example contact-center-rag-solution.
  3. Provide a name for the Amazon Lex bot, for example hotel-bot.
  4. Specify the number of conversation turns to retain for context. This can be optimized for different use cases and datasets. For the hotel-bot demo, try the default of 4.
  5. Optionally, specify an existing CloudWatch Logs log group ARN for the Amazon Lex conversation logs. You’ll need this if you’re planning to deploy the conversation analytics stack. Create a log group if you don’t already have one.
  6. Optionally, enter a value for Lambda provisioned concurrency units for the Amazon Lex bot handler function. If set to a non-zero number, this will prevent Lambda cold starts and is recommended for production and for internal testing. For development, 0 or 1 is recommended.
  7. Optionally, select the option to create a KMS CMK to encrypt the CloudWatch Logs log groups for the Lambda functions (recommended for production).
  8. If you’re integrating with Amazon Connect, provide the Amazon Connect instance ARN, as well as the name for a new contact flow that the stack will create for you.
  9. Provide the knowledge base ID from the knowledge base stack you just created. You can find this on the Outputs tab of the knowledge base stack.
  10. Provide the S3 bucket used by the knowledge base stack (also referenced on the Outputs tab).
  11. If you created the hallucination detection stack, enter the SQS queue name. You can find this on the Outputs tab of the hallucination detection stack.
  12. If you opted for a KMS key for your hallucination detection stack, enter the KMS key ARN.
  13. Choose Next.
  14. On the Configure stack options page, choose Next.
  15. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack will take a few minutes to complete.

To try the RAG solution, navigate to the Amazon Lex console and open the hotel-bot bot. The bot has a single language section for the English language. Choose Intents in the navigation pane to check out the intents for this sample bot. They include the following:

  • Intents related to questions about the hotel chain and its various hotel brands – This includes Accommodations, Amenities, CorporateOverview, Locations, Parking, and more. These intents are routed to the RAG solution by Amazon Lex. Technically, intents like these could be omitted, allowing the FallbackIntent to handle requests of this nature. However, including these intents (and their sample utterances) provides Amazon Lex with information about the “language” of your solution domain, allowing it to better optimize its speech-to-text engine and improve speech transcription accuracy. In addition, including these intents is useful for conversation analytics.
  • SwitchBrand – This intent is designed to improve conversation flow by allowing the user to say things like “What about at your other hotels?” in the middle of a conversation.
  • Booking – This demonstrates an example of routing the caller to a live agent queue.
  • SpeakToAgent – This intent is for when a caller specifically requests a live agent.
  • Welcome, Goodbye, and Help – These conversation support intents are for starting and ending the conversation, or asking what the bot can do.
  • FallbackIntent – This is the standard intent for questions or requests that don’t match other intents. In this example solution, such requests are also routed to the RAG solution to allow the LLM to answer based on the content in the knowledge base (see the sketch following this list).
  • SelectKnowledgeBase and SelectLLM – These allow the user to direct the RAG solution to use a different knowledge base instance (if more than one is available) or a different LLM. These intents are designed for testing purposes, and should normally be included only in non-production deployments. You can test the RAG solution with any of the LLMs available on Amazon Bedrock. You can also switch to a different knowledge base or LLM mid-conversation, if desired.
  • ToggleLLMGuardrails and ToggleLLMContext – These allow the user to turn the prompt-based LLM guardrails off or on, and to disable or enable the retrieval of information from the knowledge base. These intents are designed for testing purposes, and should normally be included only in non-production environments. You can turn these settings off and on mid-conversation, if desired.
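The fulfillment Lambda function in the repo implements this routing for you. As a simplified, hypothetical sketch (not the repo’s actual code), an intent handler that passes topic intents and FallbackIntent requests to the RAG question answering path might look like the following; the intent names mirror the sample bot, and the knowledge base ID and model ARN are placeholders.

```python
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

# Intents answered from the knowledge base rather than by fixed bot responses
RAG_INTENTS = {"Accommodations", "Amenities", "CorporateOverview",
               "Locations", "Parking", "FallbackIntent"}

def lambda_handler(event, context):
    intent_name = event["sessionState"]["intent"]["name"]
    user_utterance = event.get("inputTranscript", "")

    if intent_name in RAG_INTENTS:
        # Route the utterance to the RAG solution (knowledge base ID is a placeholder)
        answer = bedrock_agent_runtime.retrieve_and_generate(
            input={"text": user_utterance},
            retrieveAndGenerateConfiguration={
                "type": "KNOWLEDGE_BASE",
                "knowledgeBaseConfiguration": {
                    "knowledgeBaseId": "KBID123456",
                    "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
                },
            },
        )["output"]["text"]
    else:
        answer = "Let me connect you with a live agent."

    # Return a Lex V2 response that closes the intent with the generated answer
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent_name, "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }
```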

You can choose Test on the Amazon Lex console to try the solution.

Amazon Lex test example

Try some sample conversations, for example:

  • Ask “We’re looking for a nice place for a family vacation” and the bot will respond “Example Corp Family Getaways offers family-friendly accommodations…”
  • Ask “Where are they located?” and the bot will respond “Example Corp Family Getaways has locations in…”
  • Ask “Tell me more about the one in Pigeon Forge” and the bot will respond “The Example Corp Family Getaways resort in Pigeon Forge, Tennessee is…”

You can refer to the sample documents you uploaded for some ideas about questions to ask.

If you deployed the hallucination detection stack, you can look at its assessment of the answers you got when you tested. From the hallucination detection stack details page, on the Resources tab, choose the HallucinationDetectionFunctionLogGroup entry. This opens the CloudWatch Logs log group for the Lambda hallucination detection function. You can inspect the log statements to observe the hallucination detection process in action, as shown in the following screenshot.

Hallucination detection example
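If you prefer to query the logs from code rather than browse them in the console, a sketch like the following pulls recent events from that log group; the log group name and the filter string are assumptions for illustration.

```python
import boto3

logs = boto3.client("logs")

# Log group created by the hallucination detection stack (placeholder name; check the stack's Resources tab)
log_group = "/aws/lambda/contact-center-hallucination-detection"

# Pull recent events and print any that mention hallucination assessments
for event in logs.filter_log_events(logGroupName=log_group, limit=50)["events"]:
    if "hallucination" in event["message"].lower():  # filter string is an assumption
        print(event["message"])
```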

If you’re integrating with Amazon Connect, there will be a new contact flow in the Amazon Connect instance you specified, as shown in the following screenshot.

Amazon Connect contact flow example

To test using voice, just claim a phone number, associate it with this contact flow, and give it a call!

Deploy the conversation analytics stack (optional)

This stack uses QuickSight for analytics, so make sure you have already enabled it in your AWS account before deploying this stack.

  1. Choose Launch Stack:

  2. Provide a stack name, for example contact-center-analytics.
  3. Provide the name (not the ARN) of the Amazon Lex conversation logs log group. This is the same CloudWatch Logs log group you used for the RAG solution CloudFormation stack.
  4. Choose an option for purging source log streams from the log group. For testing, choose no.
  5. Choose an option for redacting sensitive data from the conversation logs. For testing, choose no.
  6. Leave the personally identifiable information (PII) entity types and confidence score thresholds at their default values.
  7. Choose an option for allowing unredacted logs for the Lambda function in the data pipeline. For testing, choose yes.
  8. Select an option for creating a KMS CMK.

If you create a CMK, it will be used to encrypt the data in the S3 bucket that this stack creates, where the normalized conversation data is housed. This allows you to control which IAM principals are allowed to decrypt the data and view it. This setting is recommended for production.

  9. Select the options for enabling CloudWatch alarms for ERRORS and WARNINGS in the Amazon Lex data pipeline. It is recommended to enable these alarms.
  10. For the alarms that you enable, you can specify an optional email address or distribution list to receive email notifications about the alarms.
  11. Choose Next.
  12. On the Configure stack options page, choose Next.
  13. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack should take about 5 minutes to complete.

The following diagram illustrates the architecture of the stack.

As Amazon Lex writes conversation log entries to CloudWatch Logs (1), they are picked up by Amazon Data Firehose and streamed to an S3 bucket (2). Along the way, a Lambda transformation function (3) simplifies the JSON structure of the data to make it more user-friendly for querying purposes. The Lambda function can also redact sensitive data using Amazon Comprehend (4), and optionally purge the entries from the CloudWatch Logs log group as it consumes them.

On a scheduled basis (every 5 minutes), an AWS Glue crawler (5) inspects new data in the S3 bucket, and updates a data schema that is used by Amazon Athena (6) to provide a SQL interface to the data. This allows tools like QuickSight (7) to create near real-time dashboards, analytics, and visualizations of the data.
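Because Athena exposes the normalized conversation data through SQL, you can also query it directly from code. The following sketch runs a simple query against the lex_conversation_logs table; the database name, column name, and query results location are assumptions, so adjust them to match your deployment.

```python
import time
import boto3

athena = boto3.client("athena")

# Count conversation turns by intent (the "intent" column name is an assumption)
query = """
SELECT intent, COUNT(*) AS turns
FROM lex_conversation_logs
GROUP BY intent
ORDER BY turns DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "lex_conversation_logs_db"},            # placeholder database name
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},  # placeholder results bucket
)

# Poll until the query finishes, then print the result rows
query_id = execution["QueryExecutionId"]
state = "RUNNING"
while state in ("QUEUED", "RUNNING"):
    time.sleep(1)
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```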

Set up the QuickSight dashboard (optional)

Before you create the QuickSight dashboard, make sure to return to the Amazon Lex console and ask a few questions, in order to generate some data for the dashboard. It will take about 5 minutes for the pipeline to process this new conversation data and make it available to QuickSight.

To set up dashboards and visualizations in QuickSight, complete the following steps:

  1. On the QuickSight console, choose the user profile icon and choose Manage QuickSight.
  2. Under Security & permissions, choose Manage in the QuickSight access to AWS services section.
  3. Under Amazon S3, choose Select S3 buckets.
  4. Enable access to the S3 bucket created by the conversation analytics stack (it will have a name with a 12-character unique identifier prepended to lex-conversation-logs). You don’t need to enable write permissions.
  5. Choose Finish, then choose Save.
  6. Choose the QuickSight menu icon to return to the main page in QuickSight.
  7. In the navigation pane, choose Datasets.
  8. Choose New dataset.
  9. From the list of dataset sources, choose Athena.
  10. Enter a data source name (for example contact-center-analytics).
  11. Choose Create data source.
  12. In the Choose your table window, choose your database, select your lex_conversation_logs table, and choose Edit/Preview data.

Quicksight select database table example

This opens your new QuickSight dataset. You can review the various attributes available, and see some results from your testing.

Quicksight dataset example

For improved speed in displaying the data, you can select the SPICE option for Query mode, but that will mean you need to refresh SPICE (or set up an hourly auto-update schedule) when you want to see data updates based on additional testing.

  13. For now, leave the setting as Direct query.
  14. When you’re ready, choose PUBLISH & VISUALIZE.
  15. In the New sheet window, keep the defaults and choose CREATE.

This opens the analysis page, where you can start creating visualizations.

Quicksight analysis example

Automated testing notebooks (optional)

To try the automated testing capability, you need a SageMaker Jupyter notebook. Alternatively, you can run the notebooks locally in your integrated development environment (IDE) or other environment that supports Jupyter notebooks.

  1. On the SageMaker console, under Notebook in the navigation pane, choose Notebook instances.
  2. Choose Create notebook instance.
  3. Give your notebook a name, such as contact-center-rag-testing.
  4. To enable multi-threaded testing, it’s recommended to select a larger instance, such as ml.m5.2xlarge (which has 8 vCPUs) or ml.m5.4xlarge (which has 16 vCPUs). Don’t forget to stop them when they’re not in use.
  5. Keep the default setting for Platform identifier (Amazon Linux 2, Jupyter Lab 3).
  6. Under Additional configuration, increase the Volume size in GB setting to 50 GB.
  7. In the Permissions and encryption section, under IAM role, choose Create a new role in the drop down list (don’t use the role creation wizard).
  8. In the Create an IAM role window, you can specify any S3 buckets you want to provide access to (none are needed for this solution).
  9. Choose Create role.

Amazon Sagemaker create role example

  10. Choose Create notebook instance.

It will take several minutes for your notebook instance to become available. While it’s being created, you can update the IAM role to add some inline policies you’ll need for accessing Amazon Bedrock and Amazon Lex.

  11. On the Notebook instances page, open your notebook instance (for example, contact-center-rag-testing) and then choose the entry under IAM role ARN to open the role.
  12. Add the following inline policies (available in the notebooks/iam-roles folder in the GitHub repository):

You can revise these roles to limit resource access as needed.

  13. After your notebook instance has started, choose Open Jupyter to open the notebook.
  14. Upload the following to your notebook instance (if desired, you can zip the files locally, upload the zip archive, and then unzip it in SageMaker):
    1. bedrock_helpers.py – This script configures LLM instances for the notebooks.
    2. bedrock_utils – Make sure to upload all subfolders and files, and confirm that the folder structure is correct.
    3. run_tests.ipynb – This notebook runs a set of test cases.
    4. generate_ground_truths.ipynb – Given a set of questions, this notebook generates potential ground truth answers.
    5. test-runs – This folder should contain Excel workbooks.
  15. Open the run_tests.ipynb notebook.
  16. In the second cell, replace the bot_id and bot_alias_id values with the values for your Amazon Lex bot (you can find these on the Outputs tab of the RAG solution stack).
  17. After you update these values, choose Restart & Run All on the Kernel menu.

If you’re using an ml.m5.2xlarge instance type, it should take about a minute to run the 50 test cases in the test-runs/test-cases-claude-haiku-2024-09-02.xlsx workbook. When it’s complete, you should find a corresponding test-results workbook in the test-runs folder in your notebook.

Sample test results
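Under the hood, each test case exercises the bot the same way you can from your own code. The following is a minimal sketch that sends a single test utterance to the Lex V2 bot using the bot ID and alias ID from the RAG solution stack outputs; the IDs and the utterance are placeholders.

```python
import uuid
import boto3

lex_runtime = boto3.client("lexv2-runtime")

# Send one test utterance to the bot and print its responses
response = lex_runtime.recognize_text(
    botId="BOTID12345",           # from the Outputs tab of the RAG solution stack
    botAliasId="ALIASID123",      # from the Outputs tab of the RAG solution stack
    localeId="en_US",
    sessionId=str(uuid.uuid4()),  # a fresh session for each test conversation
    text="Which resorts have an indoor water park?",
)

for message in response.get("messages", []):
    print(message["content"])
```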

After a few minutes, you can also see the test results in your conversation analytics dashboard.

Quicksight test run example

Adapt the solution to your use case

You can adapt this solution to your specific use cases with minimal work:

  • Replace the Amazon Bedrock Knowledge Bases sample content with your content – Replace the content in the S3 bucket and organize it into a folder structure that makes sense for your use case. You can create a new knowledge base for your content.
  • Replace the intents in the Amazon Lex bot with intents for your use case – Modify the Amazon Lex bot definition to reflect the interactions you want to enable for your use case.
  • Modify the LLM prompts in the bedrock_utils code – In the Amazon Lex bot fulfillment Lambda function, review the LLM prompt definitions in the bedrock_utils folder. For example, provide a use case-specific definition for the role of the LLM-based agent.
  • Modify the bot handler code if necessary – In the Amazon Lex bot fulfillment Lambda function, review the code in the TopicIntentHandler.py function. For the knowledge base search, this code provides an example that uses the sample hotel brands as topics. You can replace this metadata search query with one appropriate for your use cases.
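As an illustration of that kind of metadata-filtered knowledge base search, the following sketch retrieves passages restricted to a single topic value. The filter attribute name and value are assumptions for illustration, not the repo’s exact query.

```python
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

# Retrieve passages from the knowledge base, filtered to one topic
# (the "topic" attribute and "family-getaways" value are illustrative placeholders)
response = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="KBID123456",
    retrievalQuery={"text": "What activities are available for kids?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "filter": {"equals": {"key": "topic", "value": "family-getaways"}},
        }
    },
)

for result in response["retrievalResults"]:
    print(result["content"]["text"][:200])
```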

Clean up

Congratulations! You have completed all the steps for setting up your voice-enabled contact center generative AI agent solution using AWS services.

When you no longer need the solution deployed in your AWS account, you can delete the CloudFormation stacks that you deployed, as well as the SageMaker notebook instance if you created one.

Conclusion

The contact center generative AI agent solution offers a scalable, cost-effective approach to automate Q&A conversations in your contact center, using AWS services like Amazon Bedrock, Amazon Bedrock Knowledge Bases, OpenSearch Serverless, and Amazon Lex.

The solution code is provided as open source—use it as a starting point for your own solution, and help us make it better by contributing back fixes and features through GitHub pull requests. Browse to the GitHub repository to explore the code, and check the CHANGELOG for the latest changes and the README for the latest documentation updates.

For expert assistance, the AWS Generative AI Innovation Center, AWS Professional Services, and our AWS Partners are here to help.


About the Authors

Vraj Shah is a Connect Developer at DoorDash.

Chaitanya Hari is a Voice/Contact Center Product Lead at DoorDash.

Marcelo Silva is a Principal Product Manager at Amazon Web Services, leading strategy and growth for Amazon Bedrock Knowledge Bases and Amazon Lex.

Adam Diesterhaft is a Sr. Pursuit Solutions Architect on the Amazon Connect team.

Brian Yost is a Principal Deep Learning Architect in the AWS Generative AI Innovation Center.
