Build generative AI agents with Amazon Bedrock, Amazon DynamoDB, Amazon Kendra, Amazon Lex, and LangChain

Build generative AI agents with Amazon Bedrock, Amazon DynamoDB, Amazon Kendra, Amazon Lex, and LangChain

Generative AI agents are capable of producing human-like responses and engaging in natural language conversations by orchestrating a chain of calls to foundation models (FMs) and other augmenting tools based on user input. Instead of only fulfilling predefined intents through a static decision tree, agents are autonomous within the context of their suite of available tools. Amazon Bedrock is a fully managed service that makes leading FMs from AI companies available through an API along with developer tooling to help build and scale generative AI applications.

In this post, we demonstrate how to build a generative AI financial services agent powered by Amazon Bedrock. The agent can assist users with finding their account information, completing a loan application, or answering natural language questions while also citing sources for the provided answers. This solution is intended to act as a launchpad for developers to create their own personalized conversational agents for various applications, such as virtual workers and customer support systems. Solution code and deployment assets can be found in the GitHub repository.

Amazon Lex supplies the natural language understanding (NLU) and natural language processing (NLP) interface for the open source LangChain conversational agent embedded within an AWS Amplify website. The agent is equipped with tools that include an Anthropic Claude 2.1 FM hosted on Amazon Bedrock and synthetic customer data stored on Amazon DynamoDB and Amazon Kendra to deliver the following capabilities:

  • Provide personalized responses – Query DynamoDB for customer account information, such as mortgage summary details, due balance, and next payment date
  • Access general knowledge – Harness the agent’s reasoning logic in tandem with the vast amounts of data used to pre-train the different FMs provided through Amazon Bedrock to produce replies for any customer prompt
  • Curate opinionated answers – Inform agent responses using an Amazon Kendra index configured with authoritative data sources: customer documents stored in Amazon Simple Storage Service (Amazon S3) and Amazon Kendra Web Crawler configured for the customer’s website

Solution overview

Demo recording

The following demo recording highlights agent functionality and technical implementation details.

Solution architecture

The following diagram illustrates the solution architecture.

Solution Architecture Overview

Diagram 1: Solution Architecture Overview

The agent’s response workflow includes the following steps:

  1. Users perform natural language dialog with the agent through their choice of web, SMS, or voice channels. The web channel includes an Amplify hosted website with an Amazon Lex embedded chatbot for a fictitious customer. SMS and voice channels can be optionally configured using Amazon Connect and messaging integrations for Amazon Lex. Each user request is processed by Amazon Lex to determine user intent through a process called intent recognition, which involves analyzing and interpreting the user’s input (text or speech) to understand the user’s intended action or purpose.
  2. Amazon Lex then invokes an AWS Lambda handler for user intent fulfillment. The Lambda function associated with the Amazon Lex chatbot contains the logic and business rules required to process the user’s intent. Lambda performs specific actions or retrieves information based on the user’s input, making decisions and generating appropriate responses.
  3. Lambda instruments the financial services agent logic as a LangChain conversational agent that can access customer-specific data stored on DynamoDB, curate opinionated responses using your documents and webpages indexed by Amazon Kendra, and provide general knowledge answers through the FM on Amazon Bedrock. Responses generated by Amazon Kendra include source attribution, demonstrating how you can provide additional contextual information to the agent through Retrieval Augmented Generation (RAG). RAG allows you to enhance your agent’s ability to generate more accurate and contextually relevant responses using your own data.

Agent architecture

The following diagram illustrates the agent architecture.

LangChain Conversational Agent Architecture

Diagram 2: LangChain Conversational Agent Architecture

The agent’s reasoning workflow includes the following steps:

  1. The LangChain conversational agent incorporates conversation memory so it can respond to multiple queries with contextual generation. This memory allows the agent to provide responses that take into account the context of the ongoing conversation. This is achieved through contextual generation, where the agent generates responses that are relevant and contextually appropriate based on the information it has remembered from the conversation. In simpler terms, the agent remembers what was said earlier and uses that information to respond to multiple questions in a way that makes sense in the ongoing discussion. Our agent uses LangChain’s DynamoDB chat message history class as a conversation memory buffer so it can recall past interactions and enhance the user experience with more meaningful, context-aware responses.
  2. The agent uses Anthropic Claude 2.1 on Amazon Bedrock to complete the desired task through a series of carefully self-generated text inputs known as prompts. The primary objective of prompt engineering is to elicit specific and accurate responses from the FM. Different prompt engineering techniques include:
    • Zero-shot – A single question is presented to the model without any additional clues. The model is expected to generate a response based solely on the given question.
    • Few-shot – A set of sample questions and their corresponding answers are included before the actual question. By exposing the model to these examples, it learns to respond in a similar manner.
    • Chain-of-thought – A specific style of few-shot prompting where the prompt is designed to contain a series of intermediate reasoning steps, guiding the model through a logical thought process, ultimately leading to the desired answer.

    Our agent utilizes chain-of-thought reasoning by running a set of actions upon receiving a request. Following each action, the agent enters the observation step, where it expresses a thought. If a final answer is not yet achieved, the agent iterates, selecting different actions to progress towards reaching the final answer. See the following example code:

Thought: Do I need to use a tool? Yes

Action: The action to take

Action Input: The input to the action

Observation: The result of the action
Thought: Do I need to use a tool? No

FSI Agent: [answer and source documents]
  1. As part of the agent’s different reasoning paths and self-evaluating choices to decide the next course of action, it has the ability to access synthetic customer data sources through an Amazon Kendra Index Retriever tool. Using Amazon Kendra, the agent performs contextual search across a wide range of content types, including documents, FAQs, knowledge bases, manuals, and websites. For more details on supported data sources, refer to Data sources. The agent has the power to use this tool to provide opinionated responses to user prompts that should be answered using an authoritative, customer-provided knowledge library, instead of the more general knowledge corpus used to pretrain the Amazon Bedrock FM.

Deployment guide

In the following sections, we discuss the key steps to deploy the solution, including pre-deployment and post-deployment.

Pre-deployment

Before you deploy the solution, you need to create your own forked version of the solution repository with a token-secured webhook to automate continuous deployment of your Amplify website. The Amplify configuration points to a GitHub source repository from which our website’s frontend is built.

Fork and clone generative-ai-amazon-bedrock-langchain-agent-example repository

  1. To control the source code that builds your Amplify website, follow the instructions in Fork a repository to fork the generative-ai-amazon-bedrock-langchain-agent-example repository. This creates a copy of the repository that is disconnected from the original code base, so you can make the appropriate modifications.
  2. Please note of your forked repository URL to use to clone the repository in the next step and to configure the GITHUB_PAT environment variable used in the solution deployment automation script.
  3. Clone your forked repository using the git clone command:
    git clone <YOUR-FORKED-REPOSITORY-URL>

Create a GitHub personal access token

The Amplify hosted website uses a GitHub personal access token (PAT) as the OAuth token for third-party source control. The OAuth token is used to create a webhook and a read-only deploy key using SSH cloning.

  1. To create your PAT, follow the instructions in Creating a personal access token (classic). You may prefer to use a GitHub app to access resources on behalf of an organization or for long-lived integrations.
  2. Take note of your PAT before closing your browser—you will use it to configure the GITHUB_PAT environment variable used in the solution deployment automation script. The script will publish your PAT to AWS Secrets Manager using AWS Command Line Interface (AWS CLI) commands and the secret name will be used as the GitHubTokenSecretName AWS CloudFormation parameter.

Deployment

The solution deployment automation script uses the parameterized CloudFormation template, GenAI-FSI-Agent.yml, to automate provisioning of following solution resources:

  • An Amplify website to simulate your front-end environment.
  • An Amazon Lex bot configured through a bot import deployment package.
  • Four DynamoDB tables:
    • UserPendingAccountsTable – Records pending transactions (for example, loan applications).
    • UserExistingAccountsTable – Contains user account information (for example, mortgage account summary).
    • ConversationIndexTable – Tracks the conversation state.
    • ConversationTable – Stores conversation history.
  • An S3 bucket that contains the Lambda agent handler, Lambda data loader, and Amazon Lex deployment packages, along with customer FAQ and mortgage application example documents.
  • Two Lambda functions:
    • Agent handler – Contains the LangChain conversational agent logic that can intelligently employ a variety of tools based on user input.
    • Data loader – Loads example customer account data into UserExistingAccountsTable and is invoked as a custom CloudFormation resource during stack creation.
  • A Lambda layer for Amazon Bedrock Boto3, LangChain, and pdfrw libraries. The layer supplies LangChain’s FM library with an Amazon Bedrock model as the underlying FM and provides pdfrw as an open source PDF library for creating and modifying PDF files.
  • An Amazon Kendra index that provides a searchable index of customer authoritative information, including documents, FAQs, knowledge bases, manuals, websites, and more.
  • Two Amazon Kendra data sources:
    • Amazon S3 – Hosts an example customer FAQ document.
    • Amazon Kendra Web Crawler – Configured with a root domain that emulates the customer-specific website (for example, <your-company>.com).
  • AWS Identity and Access Management (IAM) permissions for the preceding resources.

AWS CloudFormation prepopulates stack parameters with the default values provided in the template. To provide alternative input values, you can specify parameters as environment variables that are referenced in the `ParameterKey=<ParameterKey>,ParameterValue=<Value>` pairs in the following shell script’s `aws cloudformation create-stack` command.

  1. Before you run the shell script, navigate to your forked version of the generative-ai-amazon-bedrock-langchain-agent-example repository as your working directory and modify the shell script permissions to executable:
    # If not already forked, fork the remote repository (https://github.com/aws-samples/generative-ai-amazon-bedrock-langchain-agent-example) and change working directory to shell folder:
    cd generative-ai-amazon-bedrock-langchain-agent-example/shell/
    chmod u+x create-stack.sh

  2. Set your Amplify repository and GitHub PAT environment variables created during the pre-deployment steps:
    export AMPLIFY_REPOSITORY=<YOUR-FORKED-REPOSITORY-URL> # Forked repository URL from Pre-Deployment (Exclude '.git' from repository URL)
    export GITHUB_PAT=<YOUR-GITHUB-PAT> # GitHub PAT copied from Pre-Deployment
    export STACK_NAME=<YOUR-STACK-NAME> # Stack name must be lower case for S3 bucket naming convention
    export KENDRA_WEBCRAWLER_URL=<YOUR-WEBSITE-ROOT-DOMAIN> # Public or internal HTTPS website for Kendra to index via Web Crawler (e.g., https://www.<your-company>.com) - Please see https://docs.aws.amazon.com/kendra/latest/dg/data-source-web-crawler.html

  3. Finally, run the solution deployment automation script to deploy the solution’s resources, including the GenAI-FSI-Agent.yml CloudFormation stack:

source ./create-stack.sh

Solution Deployment Automation Script

The preceding source ./create-stack.sh shell command runs the following AWS CLI commands to deploy the solution stack:

export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export S3_ARTIFACT_BUCKET_NAME=$STACK_NAME-$ACCOUNT_ID
export DATA_LOADER_S3_KEY="agent/lambda/data-loader/loader_deployment_package.zip"
export LAMBDA_HANDLER_S3_KEY="agent/lambda/agent-handler/agent_deployment_package.zip"
export LEX_BOT_S3_KEY="agent/bot/lex.zip"

aws s3 mb s3://${S3_ARTIFACT_BUCKET_NAME} --region us-east-1
aws s3 cp ../agent/ s3://${S3_ARTIFACT_BUCKET_NAME}/agent/ --recursive --exclude ".DS_Store"

export BEDROCK_LANGCHAIN_LAYER_ARN=$(aws lambda publish-layer-version 
    --layer-name bedrock-langchain-pdfrw 
    --description "Bedrock LangChain pdfrw layer" 
    --license-info "MIT" 
    --content S3Bucket=${S3_ARTIFACT_BUCKET_NAME},S3Key=agent/lambda-layers/bedrock-langchain-pdfrw.zip 
    --compatible-runtimes python3.11 
    --query LayerVersionArn --output text)

export GITHUB_TOKEN_SECRET_NAME=$(aws secretsmanager create-secret --name $STACK_NAME-git-pat 
--secret-string $GITHUB_PAT --query Name --output text)

aws cloudformation create-stack 
--stack-name ${STACK_NAME} 
--template-body file://../cfn/GenAI-FSI-Agent.yml 
--parameters 
ParameterKey=S3ArtifactBucket,ParameterValue=${S3_ARTIFACT_BUCKET_NAME} 
ParameterKey=DataLoaderS3Key,ParameterValue=${DATA_LOADER_S3_KEY} 
ParameterKey=LambdaHandlerS3Key,ParameterValue=${LAMBDA_HANDLER_S3_KEY} 
ParameterKey=LexBotS3Key,ParameterValue=${LEX_BOT_S3_KEY} 
ParameterKey=GitHubTokenSecretName,ParameterValue=${GITHUB_TOKEN_SECRET_NAME} 
ParameterKey=KendraWebCrawlerUrl,ParameterValue=${KENDRA_WEBCRAWLER_URL} 
ParameterKey=BedrockLangChainPyPDFLayerArn,ParameterValue=${BEDROCK_LANGCHAIN_LAYER_ARN} 
ParameterKey=AmplifyRepository,ParameterValue=${AMPLIFY_REPOSITORY} 
--capabilities CAPABILITY_NAMED_IAM

aws cloudformation describe-stacks --stack-name $STACK_NAME --query "Stacks[0].StackStatus"
aws cloudformation wait stack-create-complete --stack-name $STACK_NAME

export LEX_BOT_ID=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`LexBotID`].OutputValue' --output text)

export LAMBDA_ARN=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`LambdaARN`].OutputValue' --output text)

aws lexv2-models update-bot-alias --bot-alias-id 'TSTALIASID' --bot-alias-name 'TestBotAlias' --bot-id $LEX_BOT_ID --bot-version 'DRAFT' --bot-alias-locale-settings "{"en_US":{"enabled":true,"codeHookSpecification":{"lambdaCodeHook":{"codeHookInterfaceVersion":"1.0","lambdaARN":"${LAMBDA_ARN}"}}}}"

aws lexv2-models build-bot-locale --bot-id $LEX_BOT_ID --bot-version "DRAFT" --locale-id "en_US"

export KENDRA_INDEX_ID=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`KendraIndexID`].OutputValue' --output text)

export KENDRA_S3_DATA_SOURCE_ID=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`KendraS3DataSourceID`].OutputValue' --output text)

export KENDRA_WEBCRAWLER_DATA_SOURCE_ID=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`KendraWebCrawlerDataSourceID`].OutputValue' --output text)

aws kendra start-data-source-sync-job --id $KENDRA_S3_DATA_SOURCE_ID --index-id $KENDRA_INDEX_ID

aws kendra start-data-source-sync-job --id $KENDRA_WEBCRAWLER_DATA_SOURCE_ID --index-id $KENDRA_INDEX_ID

export AMPLIFY_APP_ID=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`AmplifyAppID`].OutputValue' --output text)

export AMPLIFY_BRANCH=$(aws cloudformation describe-stacks 
    --stack-name $STACK_NAME 
    --query 'Stacks[0].Outputs[?OutputKey==`AmplifyBranch`].OutputValue' --output text)

aws amplify start-job --app-id $AMPLIFY_APP_ID --branch-name $AMPLIFY_BRANCH --job-type 'RELEASE'

Post-deployment

In this section, we discuss the post-deployment steps for launching a frontend application that is intended to emulate the customer’s Production application. The financial services agent will operate as an embedded assistant within the example web UI.

Launch a web UI for your chatbot

The Amazon Lex web UI, also known as the chatbot UI, allows you to quickly provision a comprehensive web client for Amazon Lex chatbots. The UI integrates with Amazon Lex to produce a JavaScript plugin that will incorporate an Amazon Lex-powered chat widget into your existing web application. In this case, we use the web UI to emulate an existing customer web application with an embedded Amazon Lex chatbot. Complete the following steps:

  1. Follow the instructions to deploy the Amazon Lex web UI CloudFormation stack.
  2. On the AWS CloudFormation console, navigate to the stack’s Outputs tab and locate the value for SnippetUrl.
CloudFormation Outputs Lex Web UI Snippet URL

Figure 1: Amazon CloudFormation Outputs Lex Web UI Snippet URL

  1. Copy the web UI Iframe snippet, which will resemble the format under Adding the ChatBot UI to your Website as an Iframe.
Lex Web UI Iframe Snippet

Figure 2: Lex Web UI Iframe Snippet

  1. Edit your forked version of the Amplify GitHub source repository by adding your web UI JavaScript plugin to the section labeled <-- Paste your Lex Web UI JavaScript plugin here --> for each of the HTML files under the front-end directory: index.html, contact.html, and about.html.
Lex Web UI Snippet Frontend

Figure 3: Lex Web UI Snippet Frontend

Amplify provides an automated build and release pipeline that triggers based on new commits to your forked repository and publishes the new version of your website to your Amplify domain. You can view the deployment status on the Amplify console.

AWS Amplify Pipeline Status

Figure 4: AWS Amplify Pipeline Status

Access the Amplify website

With your Amazon Lex web UI JavaScript plugin in place, you are now ready to launch your Amplify demo website.

  1. To access your website’s domain, navigate to the CloudFormation stack’s Outputs tab and locate the Amplify domain URL. Alternatively, use the following command:
    aws cloudformation describe-stacks 
        --stack-name $STACK_NAME 
        --query 'Stacks[0].Outputs[?OutputKey==`AmplifyDemoWebsite`].OutputValue' --output text

  2. After you access your Amplify domain URL, you can proceed with testing and validation.
AWS Amplify Frontend

Figure 5: AWS Amplify Frontend

Testing and validation

The following testing procedure aims to verify that the agent correctly identifies and understands user intents for accessing customer data (such as account information), fulfilling business workflows through predefined intents (such as completing a loan application), and answering general queries, such as the following sample prompts:

  1. Why should I use <your-company>?
  2. How competitive are their rates?
  3. Which type of mortgage should I use?
  4. What are current mortgage trends?
  5. How much do I need saved for a down payment?
  6. What other costs will I pay at closing?

Response accuracy is determined by evaluating the relevancy, coherency, and human-like nature of the answers generated by the Amazon Bedrock provided Anthropic Claude 2.1 FM. The source links provided with each response (for example, <your-company>.com based on the Amazon Kendra Web Crawler configuration) should also be confirmed as credible.

Provide personalized responses

Verify the agent successfully accesses and utilizes relevant customer information in DynamoDB to tailor user-specific responses.

Personalized Response

Figure 6: Personalized Response

Note that the use of PIN authentication within the agent is for demonstration purposes only and should not be used in any production implementation.

Curate opinionated answers

Validate that opinionated questions are met with credible answers by the agent correctly sourcing replies based on authoritative customer documents and webpages indexed by Amazon Kendra.

Opinionated Response

Figure 7: Opinionated RAG Response

Deliver contextual generation

Determine the agent’s ability to provide contextually relevant responses based on previous chat history.

Contextual Generation Response

Figure 8: Contextual Generation Response

Access general knowledge

Confirm the agent’s access to general knowledge information for non-customer-specific, non-opinionated queries that require accurate and coherent responses based on Amazon Bedrock FM training data and RAG.

General Knowledge Response

Figure 9: General Knowledge Response

Run predefined intents

Ensure the agent correctly interprets and conversationally fulfills user prompts that are intended to be routed to predefined intents, such as completing a loan application as part of a business workflow.

Pre-Defined Intent Response

Figure 10: Pre-Defined Intent Response

The following is the resultant loan application document completed through the conversational flow.

Resultant Loan Application

Figure 11: Resultant Loan Application

The multi-channel support functionality can be tested in conjunction with the preceding assessment measures across web, SMS, and voice channels. For more information about integrating the chatbot with other services, refer to Integrating an Amazon Lex V2 bot with Twilio SMS and Add an Amazon Lex bot to Amazon Connect.

Clean up

To avoid charges in your AWS account, clean up the solution’s provisioned resources.

  1. Revoke the GitHub personal access token. GitHub PATs are configured with an expiration value. If you want to ensure that your PAT can’t be used for programmatic access to your forked Amplify GitHub repository before it reaches its expiry, you can revoke the PAT by following the GitHub repo’s instructions.
  2. Delete the GenAI-FSI-Agent.yml CloudFormation stack and other solution resources using the solution deletion automation script. The following commands use the default stack name. If you customized the stack name, adjust the commands accordingly.# export STACK_NAME=<YOUR-STACK-NAME>
    ./delete-stack.sh

    Solution Deletion Automation Script

    The delete-stack.sh shell script deletes the resources that were originally provisioned using the solution deployment automation script, including the GenAI-FSI-Agent.yml CloudFormation stack.

    # cd generative-ai-amazon-bedrock-langchain-agent-example/shell/
    	# chmod u+x delete-stack.sh
    	# ./delete-stack.sh
    
    	echo "Deleting Kendra Data Source: $KENDRA_WEBCRAWLER_DATA_SOURCE_ID"
    
    	aws kendra delete-data-source --id $KENDRA_WEBCRAWLER_DATA_SOURCE_ID --index-id $KENDRA_INDEX_ID
    
    	echo "Emptying and Deleting S3 Bucket: $S3_ARTIFACT_BUCKET_NAME"
    
    	aws s3 rm s3://${S3_ARTIFACT_BUCKET_NAME} --recursive
    	aws s3 rb s3://${S3_ARTIFACT_BUCKET_NAME}
    
    	echo "Deleting CloudFormation Stack: $STACK_NAME"
    
    	aws cloudformation delete-stack --stack-name $STACK_NAME
    	aws cloudformation wait stack-delete-complete --stack-name $STACK_NAME
    
    	echo "Deleting Secrets Manager Secret: $GITHUB_TOKEN_SECRET_NAME"
    
    	aws secretsmanager delete-secret --secret-id $GITHUB_TOKEN_SECRET_NAME

Considerations

Although the solution in this post showcases the capabilities of a generative AI financial services agent powered by Amazon Bedrock, it is essential to recognize that this solution is not production-ready. Rather, it serves as an illustrative example for developers aiming to create personalized conversational agents for diverse applications like virtual workers and customer support systems. A developer’s path to production would iterate on this sample solution with the following considerations.

Security and privacy

Ensure data security and user privacy throughout the implementation process. Implement appropriate access controls and encryption mechanisms to protect sensitive information. Solutions like the generative AI financial services agent will benefit from data that isn’t yet available to the underlying FM, which often means you will want to use your own private data for the biggest jump in capability. Consider the following best practices:

  1. Keep it secret, keep it safe – You will want this data to stay completely protected, secure, and private during the generative process, and want control over how this data is shared and used.
  2. Establish usage guardrails – Understand how data is used by a service before making it available to your teams. Create and distribute the rules for what data can be used with what service. Make these clear to your teams so they can move quickly and prototype safely.
  3. Involve Legal, sooner rather than later – Have your Legal teams review the terms and conditions and service cards of the services you plan to use before you start running any sensitive data through them. Your Legal partners have never been more important than they are today.

As an example of how we are thinking about this at AWS with Amazon Bedrock: All data is encrypted and does not leave your VPC, and Amazon Bedrock makes a separate copy of the base FM that is accessible only to the customer, and fine tunes or trains this private copy of the model.

User acceptance testing

Conduct user acceptance testing (UAT) with real users to evaluate the performance, usability, and satisfaction of the generative AI financial services agent. Gather feedback and make necessary improvements based on user input.

Deployment and monitoring

Deploy the fully tested agent on AWS, and implement monitoring and logging to track its performance, identify issues, and optimize the system as needed. Lambda monitoring and troubleshooting features are enabled by default for the agent’s Lambda handler.

Maintenance and updates

Regularly update the agent with the latest FM versions and data to enhance its accuracy and effectiveness. Monitor customer-specific data in DynamoDB and synchronize your Amazon Kendra data source indexing as needed.

Conclusion

In this post, we delved into the exciting world of generative AI agents and their ability to facilitate human-like interactions through the orchestration of calls to FMs and other complementary tools. By following this guide, you can use Bedrock, LangChain, and existing customer resources to successfully implement, test, and validate a reliable agent that provides users with accurate and personalized financial assistance through natural language conversations.

In an upcoming post, we will demonstrate how the same functionality can be delivered using an alternative approach with Agents for Amazon Bedrock and Knowledge base for Amazon Bedrock. This fully AWS-managed implementation will further explore how to offer intelligent automation and data search capabilities through personalized agents that transform the way users interact with your applications, making interactions more natural, efficient, and effective.


About the author

Kyle T. Blocksom is a Sr. Solutions Architect with AWS based in Southern California. Kyle’s passion is to bring people together and leverage technology to deliver solutions that customers love. Outside of work, he enjoys surfing, eating, wrestling with his dog, and spoiling his niece and nephew.

Read More

Overcoming common contact center challenges with generative AI and Amazon SageMaker Canvas

Overcoming common contact center challenges with generative AI and Amazon SageMaker Canvas

Great customer experience provides a competitive edge and helps create brand differentiation. As per the Forrester report, The State Of Customer Obsession, 2022, being customer-first can make a sizable impact on an organization’s balance sheet, as organizations embracing this methodology are surpassing their peers in revenue growth. Despite contact centers being under constant pressure to do more with less while improving customer experiences, 80% of companies plan to increase their level of investment in Customer Experience (CX) to provide a differentiated customer experience. Rapid innovation and improvement in generative AI has captured our mind and attention and as per McKinsey & Company’s estimate, applying generative AI to customer care functions could increase productivity at a value ranging from 30–45% of current function costs.

Amazon SageMaker Canvas provides business analysts with a visual point-and-click interface that allows you to build models and generate accurate machine learning (ML) predictions without requiring any ML experience or coding. In October 2023, SageMaker Canvas announced support for foundation models among its ready-to-use models, powered by Amazon Bedrock and Amazon SageMaker JumpStart. This allows you to use natural language with a conversational chat interface to perform tasks such as creating novel content including narratives, reports, and blog posts; summarizing notes and articles; and answering questions from a centralized knowledge base—all without writing a single line of code.

A call center agent’s job is to handle inbound and outbound customer calls and provide support or resolve issues while fielding dozens of calls daily. Keeping up with this volume while giving customers immediate answers is challenging without time to research between calls. Typically, call scripts guide agents through calls and outline addressing issues. Well-written scripts improve compliance, reduce errors, and increase efficiency by helping agents quickly understand problems and solutions.

In this post, we explore how generative AI in SageMaker Canvas can help solve common challenges customers may face when dealing with contact centers. We show how to use SageMaker Canvas to create a new call script or improve an existing call script, and explore how generative AI can help with reviewing existing interactions to bring insights that are difficult to obtain from traditional tools. As part of this post, we provide the prompts used to solve the tasks and discuss architectures to integrate these results in your AWS Contact Center Intelligence (CCI) workflows.

Overview of solution

Generative AI foundation models can help create powerful call scripts in contact centers and enable organizations to do the following:

  • Create consistent customer experiences with a unified knowledge repository to handle customer queries
  • Reduce call handling time
  • Enhance support team productivity
  • Enable the support team with next best actions to eliminate errors and take the next best action

With SageMaker Canvas, you can choose from a larger selection of foundation models to create compelling call scripts. SageMaker Canvas also allows you to compare multiple models simultaneously, so a user can select the output that most fits their need for the specific task that they’re dealing with. To use generative AI-powered chatbots, the user first needs to provide a prompt, which is an instruction to tell the model what you intend to do.

In this post, we address four common use cases:

  • Creating new call scripts
  • Enhancing an existing call script
  • Automating post-call tasks
  • Post-call analytics

Throughout the post, we use large language models (LLMs) available in SageMaker Canvas powered by Amazon Bedrock. Specifically, we use Anthropic’s Claude 2 model, a powerful model with great performance for all kinds of natural language tasks. The examples are in English; however, Anthropic Claude 2 supports multiple languages. Refer to Anthropic Claude 2 to learn more. Finally, all of these results are reproducible with other Amazon Bedrock models, like Anthropic Claude Instant or Amazon Titan, as well as with SageMaker JumpStart models.

Prerequisites

For this post, make sure that you have set up an AWS account with appropriate resources and permissions. In particular, complete the following prerequisite steps:

  1. Deploy an Amazon SageMaker domain. For instructions, refer to Onboard to Amazon SageMaker Domain.
  2. Configure the permissions to set up and deploy SageMaker Canvas. For more details, refer to Setting Up and Managing Amazon SageMaker Canvas (for IT Administrators).
  3. Configure cross-origin resource sharing (CORS) policies for SageMaker Canvas. For more information, refer to Grant Your Users Permissions to Upload Local Files.
  4. Add the permissions to use foundation models in SageMaker Canvas. For instructions, refer to Use generative AI with foundation models.

Note that the services that SageMaker Canvas uses to solve generative AI tasks are available in SageMaker JumpStart and Amazon Bedrock. To use Amazon Bedrock, make sure you are using SageMaker Canvas in the Region where Amazon Bedrock is supported. Refer to Supported Regions to learn more.

Create a new call script

For this use case, a contact center analyst defines a call script with the help of one of the ready-to-use models available in SageMaker Canvas, entering an appropriate prompt, such as “Create a call script for an agent that helps customers with lost credit cards.” To implement this, after the organization’s cloud administrator grants single-sign access to the contact center analyst, complete the following steps:

  1. On the SageMaker console, choose Canvas in the navigation pane.
  2. Choose your domain and user profile and choose Open Canvas to open the SageMaker Canvas application.

sagemaker-canvas-from-console

  1. Navigate to the Ready-to-use models section and choose Generate, extract and summarize content to open the chat console.
  2. With the Anthropic Claude 2 model selected, enter your prompt “Create a call script for an agent that helps customers with lost credit cards” and press Enter.

canvas-chat

The script obtained through generative AI is included in a document (such as TXT, HTML, or PDF), and added to a knowledge base that will guide contact center agents in their interactions with customers.

15705-high-level-architecture

When using a cloud-based omnichannel contact center solution such as Amazon Connect, you can take advantage of AI/ML-powered features to improve customer satisfaction and agent efficiency. Amazon Connect Wisdom reduces the time agents spend searching for answers and enables quick resolution of customer issues by providing knowledge search and real-time recommendations while agents talk with customers. In this particular example, Amazon Connect Wisdom can synchronize with Amazon Simple Storage Service (Amazon S3) as a source of content for the knowledge base, thereby incorporating the call script generated with the help of SageMaker Canvas. For more information, refer to Amazon Connect Wisdom S3 Sync.

The following diagram illustrates this architecture.

15705-deep-dive-architecture

When the customer calls the contact center, and either they go through an interactive voice response (IVR) or specific keywords are detected concerning the purpose of the call (for example, “lost” and “credit card”), Amazon Connect Wisdom will provide suggestions on how to handle the interaction to the agent, including the relevant call script that was generated by SageMaker Canvas.

With SageMaker Canvas generative AI, contact center analysts save time in the creation of call scripts, and are able to quickly try new prompts to tweak the scripts creation.

Enhance an existing call script

As per the following survey, 78% of customers feel that their call center experience improves when the customer service agent doesn’t sound as though they are reading from a script. SageMaker Canvas can use generative AI help you analyze the existing call script and suggest improvements to improve the quality of call scripts. For example, you may want to improve the call script to include more compliance, or make your script sound more polite.

To do so, choose New chat and select Claude 2 as your model. You can use the sample transcript generated in the previous use case and the prompt “I want you to act as a Contact Center Quality Assurance Analyst and improve the below call transcript to make it compliant and sound more polite.”

canvas-chat-2

Automate post-call tasks

You can also use SageMaker Canvas generative AI to automate post-call work in call centers. Common use cases are call summarization, assistance in call logs completion, and personalized follow-up message creation. This can improve agent productivity and reduce the risk of errors, allowing them to focus on higher-value tasks such as customer engagement and relationship-building.

Choose New chat and select Claude 2 as your model. You can use the sample transcript generated in the previous use case and the prompt “Summarize the below Call transcript to highlight Customer issue, Agent actions, Call outcome and Customer sentiment.”

canvas-chat-3

When using Amazon Connect as the contact center solution, you can implement the call recording and transcription by enabling Amazon Connect Contact Lens, which brings other analytics features such as sentiment analysis and sensitive data redaction. It also has summarization by highlighting key sentences in the transcript and labeling the issues, outcomes, and action items.

Using SageMaker Canvas allows you to go one step further and from a single workspace select from the ready-to-use models to analyze the call transcript or generate a summary, and even compare the results to find the model that best fits the specific use-case. The following diagram illustrates this solution architecture.

15705-architecture-with-connect

Customer post-call analytics

Another area where contact centers can take advantage of SageMaker Canvas is to understand interactions between customer and agents. As per the 2022 NICE WEM Global Survey, 58% of call center agents say they benefit very little from company coaching sessions. Agents can use SageMaker Canvas generative AI for customer sentiment analysis to further understand what alternative best actions they could have taken to improve customer satisfaction.

We follow similar steps as in the previous use cases. Choose New chat and select Claude 2. You can use the sample transcript generated in the previous use case and the prompt “I want you to act as a Contact Center Supervisor and critique and suggest improvements to the agent behavior in the customer conversation.”

canvas-chat-4

Clean up

SageMaker Canvas will automatically shut down any SageMaker JumpStart models started under it after 2 hours of inactivity. Follow the instructions in this section to shut down these models sooner to save costs. Note that there is no need to shut down Amazon Bedrock models because they’re not deployed in your account.

  1. To shut down the SageMaker JumpStart model, you can choose from two methods:
    1. Choose New chat, and on the model drop-down menu, choose Start up another model. Then, on the Foundation models page, under Amazon SageMaker JumpStart models, choose the model (such as Falcon-40B-Instruct) and in the right pane, choose Shut down model.
    2. If you are comparing multiple models simultaneously, on the results comparison page, choose the SageMaker JumpStart model’s options menu (three dots), then choose Shut down model.
  2. Choose Log out in the left pane to log out of the SageMaker Canvas application to stop the consumption of SageMaker Canvas workspace instance hours. This will release all resources used by the workspace instance.

Conclusion

In this post, we analyzed how you can use SageMaker Canvas generative AI in contact centers to create hyper-personalized customer interactions, enhance contact center analysts and agents’ productivity, and bring insights that are hard to get from traditional tools. As illustrated by the different use-cases, SageMaker Canvas act as a single unified workspace, without needing to use different point products. With SageMaker Canvas generative AI, contact centers can improve customer satisfaction, reduce costs, and increase efficiency. SageMaker Canvas generative AI empowers you to generate new and innovative solutions that have the potential to transform the contact center industry. You can also use generative AI to identify trends and insights in customer interactions, helping managers optimize their operations and improve customer satisfaction. Additionally, you can use generative AI to produce training data for new agents, allowing them to learn from synthetic examples and improve their performance more quickly.

Learn more about SageMaker Canvas features and get started today to leverage visual, no-code machine learning capabilities.


About the Authors

Davide Gallitelli is a Senior Specialist Solutions Architect for AI/ML. He is based in Brussels and works closely with customers all around the globe that are looking to adopt Low-Code/No-Code Machine Learning technologies, and Generative AI. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.

Jose Rui Teixeira Nunes is a Solutions Architect at AWS, based in Brussels, Belgium. He currently helps European institutions and agencies on their cloud journey. He has over 20 years of expertise in information technology, with a strong focus on public sector organizations and communications solutions.

Anand Sharma is a Senior Partner Development Specialist for generative AI at AWS in Luxembourg with over 18 years of experience delivering innovative products and services in e-commerce, fintech, and finance. Prior to joining AWS, he worked at Amazon and led product management and business intelligence functions.

Read More

Llama Guard is now available in Amazon SageMaker JumpStart

Llama Guard is now available in Amazon SageMaker JumpStart

Today we are excited to announce that the Llama Guard model is now available for customers using Amazon SageMaker JumpStart. Llama Guard provides input and output safeguards in large language model (LLM) deployment. It’s one of the components under Purple Llama, Meta’s initiative featuring open trust and safety tools and evaluations to help developers build responsibly with AI models. Purple Llama brings together tools and evaluations to help the community build responsibly with generative AI models. The initial release includes a focus on cyber security and LLM input and output safeguards. Components within the Purple Llama project, including the Llama Guard model, are licensed permissively, enabling both research and commercial usage.

Now you can use the Llama Guard model within SageMaker JumpStart. SageMaker JumpStart is the machine learning (ML) hub of Amazon SageMaker that provides access to foundation models in addition to built-in algorithms and end-to-end solution templates to help you quickly get started with ML.

In this post, we walk through how to deploy the Llama Guard model and build responsible generative AI solutions.

Llama Guard model

Llama Guard is a new model from Meta that provides input and output guardrails for LLM deployments. Llama Guard is an openly available model that performs competitively on common open benchmarks and provides developers with a pretrained model to help defend against generating potentially risky outputs. This model has been trained on a mix of publicly available datasets to enable detection of common types of potentially risky or violating content that may be relevant to a number of developer use cases. Ultimately, the vision of the model is to enable developers to customize this model to support relevant use cases and to make it effortless to adopt best practices and improve the open ecosystem.

Llama Guard can be used as a supplemental tool for developers to integrate into their own mitigation strategies, such as for chatbots, content moderation, customer service, social media monitoring, and education. By passing user-generated content through Llama Guard before publishing or responding to it, developers can flag unsafe or inappropriate language and take action to maintain a safe and respectful environment.

Let’s explore how we can use the Llama Guard model in SageMaker JumpStart.

Foundation models in SageMaker

SageMaker JumpStart provides access to a range of models from popular model hubs, including Hugging Face, PyTorch Hub, and TensorFlow Hub, which you can use within your ML development workflow in SageMaker. Recent advances in ML have given rise to a new class of models known as foundation models, which are typically trained on billions of parameters and are adaptable to a wide category of use cases, such as text summarization, digital art generation, and language translation. Because these models are expensive to train, customers want to use existing pre-trained foundation models and fine-tune them as needed, rather than train these models themselves. SageMaker provides a curated list of models that you can choose from on the SageMaker console.

You can now find foundation models from different model providers within SageMaker JumpStart, enabling you to get started with foundation models quickly. You can find foundation models based on different tasks or model providers, and easily review model characteristics and usage terms. You can also try out these models using a test UI widget. When you want to use a foundation model at scale, you can do so easily without leaving SageMaker by using pre-built notebooks from model providers. Because the models are hosted and deployed on AWS, you can rest assured that your data, whether used for evaluating or using the model at scale, is never shared with third parties.

Let’s explore how we can use the Llama Guard model in SageMaker JumpStart.

Discover the Llama Guard model in SageMaker JumpStart

You can access Code Llama foundation models through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in Amazon SageMaker Studio.

SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.

In SageMaker Studio, you can access SageMaker JumpStart, which contains pre-trained models, notebooks, and prebuilt solutions, under Prebuilt and automated solutions.

On the SageMaker JumpStart landing page, you can find the Llama Guard model by choosing the Meta hub or searching for Llama Guard.

You can select from a variety of Llama model variants, including Llama Guard, Llama-2, and Code Llama.

You can choose the model card to view details about the model such as license, data used to train, and how to use. You will also find a Deploy option, which will take you to a landing page where you can test inference with an example payload.

Deploy the model with the SageMaker Python SDK

You can find the code showing the deployment of Llama Guard on Amazon JumpStart and an example of how to use the deployed model in this GitHub notebook.

In the following code, we specify the SageMaker model hub model ID and model version to use when deploying Llama Guard:

model_id = "meta-textgeneration-llama-guard-7b"
model_version = "1.*"

You can now deploy the model using SageMaker JumpStart. The following code uses the default instance ml.g5.2xlarge for the inference endpoint. You can deploy the model on other instance types by passing instance_type in the JumpStartModel class. The deployment might take a few minutes. For a successful deployment, you must manually change the accept_eula argument in the model’s deploy method to True.

from sagemaker.jumpstart.model import JumpStartModel

model = JumpStartModel(model_id=model_id, model_version=model_version)
accept_eula = False  # change to True to accept EULA for successful model deployment
try:
    predictor = model.deploy(accept_eula=accept_eula)
except Exception as e:
    print(e)

This model is deployed using the Text Generation Inference (TGI) deep learning container. Inference requests support many parameters, including the following:

  • max_length – The model generates text until the output length (which includes the input context length) reaches max_length. If specified, it must be a positive integer.
  • max_new_tokens – The model generates text until the output length (excluding the input context length) reaches max_new_tokens. If specified, it must be a positive integer.
  • num_beams – This indicates the number of beams used in the greedy search. If specified, it must be an integer greater than or equal to num_return_sequences.
  • no_repeat_ngram_size – The model ensures that a sequence of words of no_repeat_ngram_size is not repeated in the output sequence. If specified, it must be a positive integer greater than 1.
  • temperature – This parameter controls the randomness in the output. A higher temperature results in an output sequence with low-probability words, and a lower temperature results in an output sequence with high-probability words. If temperature is 0, it results in greedy decoding. If specified, it must be a positive float.
  • early_stopping – If True, text generation is finished when all beam hypotheses reach the end of the sentence token. If specified, it must be Boolean.
  • do_sample – If True, the model samples the next word as per the likelihood. If specified, it must be Boolean.
  • top_k – In each step of text generation, the model samples from only the top_k most likely words. If specified, it must be a positive integer.
  • top_p – In each step of text generation, the model samples from the smallest possible set of words with cumulative probability top_p. If specified, it must be a float between 0–1.
  • return_full_text – If True, the input text will be part of the output generated text. If specified, it must be Boolean. The default value is False.
  • stop – If specified, it must be a list of strings. Text generation stops if any one of the specified strings is generated.

Invoke a SageMaker endpoint

You may programmatically retrieve example payloads from the JumpStartModel object. This will help you quickly get started by observing pre-formatted instruction prompts that Llama Guard can ingest. See the following code:

example_payloads = model.retrieve_all_examples()

for payload in example_payloads:
    response = predictor.predict(payload.body)
    print("nInputn", payload.body, "nnOutputn", response[0]["generated_text"], "nn===============")

After you run the preceding example, you can see how your input and output would be formatted by Llama Guard:

Input
 {'inputs': "[INST] Task: Check if there is unsafe content in 'Agent' messages in conversations according our safety policy with the below categories.nn<BEGIN UNSAFE CONTENT CATEGORIES>nO1: Violence and Hate. nShould notn- Help people plan or engage in violence.n- Encourage people to engage in violence.n- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.n- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.nCann- Provide information on violence and discrimination.n- Discuss the topics of hate and violence, and can discuss historical events involving violence.nO2: Sexual Content.nShould notn- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.nCann- Discuss topics related to sexuality and sexual education.n- Provide general information regarding sexual activities.n- Provide guidance on topics related to sexual health.nO3: Criminal Planning.nShould notn- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.n- Encourage people to engage in criminal activities or condone criminal behavior.nCann- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.n- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.nO4: Guns and Illegal Weapons.nShould notn- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.n- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.nCann- Discuss firearms and the arguments for and against firearm ownership.nO5: Regulated or Controlled Substances.nShould notn- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.n- Assist or encourage people to create such substances when it is illegal to do so.nCann- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).n- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.nO6: Self-Harm.nShould notn- Encourage people to harm themselves.n- Romanticize or condone self-harm or suicide.n- Provide information regarding the methods of suicide or self-harm.n- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.nCann- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).nShouldn- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves.n<END UNSAFE CONTENT CATEGORIES>nn<BEGIN CONVERSATION>nnUser: I forgot how to kill a process in Linux, can you help?nnAgent: Sure! To kill a process in Linux, you can use the kill command followed by the process ID (PID) of the process you want to terminate.nn<END CONVERSATION>nnProvide your safety assessment for Agent in the above conversation:n- First line must read 'safe' or 'unsafe'.n- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]", 'parameters': {'max_new_tokens': 64, 'decoder_input_details': True, 'details': True}} 

Output
  safe

Similar to Llama-2, Llama Guard uses special tokens to indicate safety instructions to the model. In general, the payload should follow the below format:

<s>[INST] {user_prompt} [/INST]

User prompt shown as {user_prompt} above, can further include sections for content category definitions and conversations, which looks like the following:

<s>[INST] {task}

<BEGIN UNSAFE CONTENT CATEGORIES>
{unsafe_content_categories}
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>
{conversation}
<END CONVERSATION}

{instruction} [/INST]

In the next section, we discuss the recommended default values for the task, content category, and instruction definitions. The conversation should alternate between User and Agent text as follows:

User: {user_input_1}

Agent: {optional_agent_response_1}

User: {optional_user_input_2}

...

Moderate a conversation with Llama-2 Chat

You can now deploy a Llama-2 7B Chat model endpoint for conversational chat and then use Llama Guard to moderate input and output text coming from Llama-2 7B Chat.

We show you the example of the Llama-2 7B chat model’s input and output moderated through Llama Guard, but you may use Llama Guard for moderation with any LLM of your choice.

Deploy the model with the following code:

llm_model_id = "meta-textgeneration-llama-2-7b-f"
llm_model_version = "3.*"
model_llm = JumpStartModel(model_id=llm_model_id, model_version=llm_model_version)
predictor_llm = model_llm.deploy(accept_eula=True)

You can now define the Llama Guard task template. The unsafe content categories may be adjusted as desired for your specific use case. You can define in plain text the meaning of each content category, including which content should be flagged as unsafe and which content should be permitted as safe. See the following code:

TASK = "Task: Check if there is unsafe content in 'Agent' messages in conversations according our safety policy with the below categories."

INSTRUCTION = """
Provide your safety assessment for Agent in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories."""

UNSAFE_CONTENT_CATEGORIES = """
O1: Violence and Hate. 
Should not
- Help people plan or engage in violence.
- Encourage people to engage in violence.
- Express hateful, derogatory, or demeaning sentiments against people based on sensitive personal characteristics like their race, color, religion, national origin, sexual orientation, gender, gender identity, or disability.
- Encourage discrimination against people based on those characteristics and should not use slurs or create insults based on those characteristics.
Can
- Provide information on violence and discrimination.
- Discuss the topics of hate and violence, and can discuss historical events involving violence.
O2: Sexual Content.
Should not
- Engage in sexually explicit (i.e., erotic) conversations and should not encourage people to engage in sexual activities.
Can
- Discuss topics related to sexuality and sexual education.
- Provide general information regarding sexual activities.
- Provide guidance on topics related to sexual health.
O3: Criminal Planning.
Should not
- Help people plan or engage in criminal activities like theft, kidnapping, or financial crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in criminal activities or condone criminal behavior.
Can
- Help people understand how crimes are committed, provided that the response is not so detailed that it could be used to provide a roadmap for future crimes.
- Help people understand how systems work that are used to prevent crimes, provided those descriptions do not provide detailed information on how to defeat those systems.
O4: Guns and Illegal Weapons.
Should not
- Assist in the planning or execution of firearm or illegal weapons crimes, especially when the person indicates that they may act on the information provided to them.
- Encourage people to engage in firearm or illegal weapons crimes or condone such behavior.
Can
- Discuss firearms and the arguments for and against firearm ownership.
O5: Regulated or Controlled Substances.
Should not
- Encourage or assist people in the trafficking, consumption, or use of regulated or controlled substances, including but not limited to illegal drugs, tobacco, alcohol, and cannabis.
- Assist or encourage people to create such substances when it is illegal to do so.
Can
- Discuss the use of regulated or controlled substances, and can provide general information on those substances (ex: what their chemical composition is and what their effects are).
- Provide information on how those substances are manufactured, provided that the information is not so detailed as to provide step-by-step instructions for their creation.
O6: Self-Harm.
Should not
- Encourage people to harm themselves.
- Romanticize or condone self-harm or suicide.
- Provide information regarding the methods of suicide or self-harm.
- Provide any information or advice, other than information regarding appropriate health resources, to people who express or imply that they wish to harm themselves.
Can
- Provide general information regarding self-harm and suicide (ex: statistics, the mental health challenges faced by those who take these actions).
Should
- Provide information regarding appropriate health resources to people who imply or express the desire to harm themselves."""

Next, we define helper functions format_chat_messages and format_guard_messages to format the prompt for the chat model and for the Llama Guard model that required special tokens:

from itertools import cycle
from typing import Dict, List


def format_chat_messages(messages: List[Dict[str, str]]) -> List[str]:
    """Format messages for Llama-2 chat models.
    
    The model only supports 'system', 'user' and 'assistant' roles, starting with 'system', then 'user' and 
    alternating (u/a/u/a/u...). The last message must be from 'user'.
    """
    prompt: List[str] = []

    if messages[0]["role"] == "system":
        content = "".join(["<<SYS>>n", messages[0]["content"], "n<</SYS>>nn", messages[1]["content"]])
        messages = [{"role": messages[1]["role"], "content": content}] + messages[2:]

    for user, answer in zip(messages[::2], messages[1::2]):
        prompt.extend(["<s>", "[INST] ", (user["content"]).strip(), " [/INST] ", (answer["content"]).strip(), "</s>"])

    prompt.extend(["<s>", "[INST] ", (messages[-1]["content"]).strip(), " [/INST] "])

    return "".join(prompt)


def format_guard_messages(
    messages: List[Dict[str, str]],
    task: str = TASK,
    instruction: str = INSTRUCTION,
    unsafe_content_categories: str = UNSAFE_CONTENT_CATEGORIES,
) -> List[str]:
    """Format messages for Llama Guard models.
    
    The model only supports 'user' and 'assistant' roles, and must alternate (u/a/u/a/u...).
    """
    turn = cycle(["User", "Agent"])

    return "".join([
        "[INST] ",
        task,
        "nn<BEGIN UNSAFE CONTENT CATEGORIES>n",
        unsafe_content_categories,
        "n<END UNSAFE CONTENT CATEGORIES>nn<BEGIN CONVERSATION>",
        *[f"nn{next(turn)}: {message['content']}" for message in messages],
        "nn<END CONVERSATION>nn",
        instruction,
        " [/INST]"
    ])

You can then use these helper functions on an example message input prompt to run the example input through Llama Guard to determine if the message content is safe:

messages_input = [{"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"}]
payload_input_guard = {"inputs": format_guard_messages(messages_input)}

response_input_guard = predictor.predict(payload_input_guard)

assert response_input_guard[0]["generated_text"].strip() == "safe"
print(response_input_guard)

The following output indicates that the message is safe. You may notice that the prompt includes words that may be associated with violence, but, in this case, Llama Guard is able to understand the context with respect to the instructions and unsafe category definitions we provided earlier and determine that it’s a safe prompt and not related to violence.

[{'generated_text': ' safe'}]

Now that you have confirmed that the input text is determined to be safe with respect to your Llama Guard content categories, you can pass this payload to the deployed Llama-2 7B model to generate text:

payload_input_llm = {"inputs": format_chat_messages(messages_input), "parameters": {"max_new_tokens": 128}}

response_llm = predictor_llm.predict(payload_input_llm)

print(response_llm)

The following is the response from the model:

[{'generated_text': 'Of course! In Linux, you can use the `kill` command to terminate a process. Here are the basic syntax and options you can use:nn1. `kill <PID>` - This will kill the process with the specified process ID (PID). Replace `<PID>` with the actual process ID you want to kill.n2. `kill -9 <PID>` - This will kill the process with the specified PID immediately, without giving it a chance to clean up. This is the most forceful way to kill a process.n3. `kill -15 <PID>` -'}]

Finally, you may wish to confirm that the response text from the model is determined to contain safe content. Here, you extend the LLM output response to the input messages and run this whole conversation through Llama Guard to ensure the conversation is safe for your application:

messages_output = messages_input.copy()
messages_output.extend([{"role": "assistant", "content": response_llm[0]["generated_text"]}])
payload_output = {"inputs": format_guard_messages(messages_output)}

response_output_guard = predictor.predict(payload_output)

assert response_output_guard[0]["generated_text"].strip() == "safe"
print(response_output_guard)

You may see the following output, indicating that response from the chat model is safe:

[{'generated_text': ' safe'}]

Clean up

After you have tested the endpoints, make sure you delete the SageMaker inference endpoints and the model to avoid incurring charges.

Conclusion

In this post, we showed you how you can moderate inputs and outputs using Llama Guard and put guardrails for inputs and outputs from LLMs in SageMaker JumpStart.

As AI continues to advance, it’s critical to prioritize responsible development and deployment. Tools like Purple Llama’s CyberSecEval and Llama Guard are instrumental in fostering safe innovation, offering early risk identification and mitigation guidance for language models. These should be ingrained in the AI design process to harness its full potential of LLMs ethically from Day 1.

Try out Llama Guard and other foundation models in SageMaker JumpStart today and let us know your feedback!

This guidance is for informational purposes only. You should still perform your own independent assessment, and take measures to ensure that you comply with your own specific quality control practices and standards, and the local rules, laws, regulations, licenses, and terms of use that apply to you, your content, and the third-party model referenced in this guidance. AWS has no control or authority over the third-party model referenced in this guidance, and does not make any representations or warranties that the third-party model is secure, virus-free, operational, or compatible with your production environment and standards. AWS does not make any representations, warranties, or guarantees that any information in this guidance will result in a particular outcome or result.


About the authors

Dr. Kyle Ulrich is an Applied Scientist with the Amazon SageMaker built-in algorithms team. His research interests include scalable machine learning algorithms, computer vision, time series, Bayesian non-parametrics, and Gaussian processes. His PhD is from Duke University and he has published papers in NeurIPS, Cell, and Neuron.

Evan Kravitz is a software engineer at Amazon Web Services, working on SageMaker JumpStart. He is interested in the confluence of machine learning with cloud computing. Evan received his undergraduate degree from Cornell University and master’s degree from the University of California, Berkeley. In 2021, he presented a paper on adversarial neural networks at the ICLR conference. In his free time, Evan enjoys cooking, traveling, and going on runs in New York City.

Rachna Chadha is a Principal Solution Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Karl Albertsen leads product, engineering, and science for Amazon SageMaker Algorithms and JumpStart, SageMaker’s machine learning hub. He is passionate about applying machine learning to unlock business value.

Read More

Identify cybersecurity anomalies in your Amazon Security Lake data using Amazon SageMaker

Identify cybersecurity anomalies in your Amazon Security Lake data using Amazon SageMaker

Customers are faced with increasing security threats and vulnerabilities across infrastructure and application resources as their digital footprint has expanded and the business impact of those digital assets has grown. A common cybersecurity challenge has been two-fold:

  • Consuming logs from digital resources that come in different formats and schemas and automating the analysis of threat findings based on those logs.
  • Whether logs are coming from Amazon Web Services (AWS), other cloud providers, on-premises, or edge devices, customers need to centralize and standardize security data.

Furthermore, the analytics for identifying security threats must be capable of scaling and evolving to meet a changing landscape of threat actors, security vectors, and digital assets.

A novel approach to solve this complex security analytics scenario combines the ingestion and storage of security data using Amazon Security Lake and analyzing the security data with machine learning (ML) using Amazon SageMaker. Amazon Security Lake is a purpose-built service that automatically centralizes an organization’s security data from cloud and on-premises sources into a purpose-built data lake stored in your AWS account. Amazon Security Lake automates the central management of security data, normalizes logs from integrated AWS services and third-party services and manages the lifecycle of data with customizable retention and also automates storage tiering. Amazon Security Lake ingests log files in the Open Cybersecurity Schema Framework (OCSF) format, with support for partners such as Cisco Security, CrowdStrike, Palo Alto Networks, and OCSF logs from resources outside your AWS environment. This unified schema streamlines downstream consumption and analytics because the data follows a standardized schema and new sources can be added with minimal data pipeline changes. After the security log data is stored in Amazon Security Lake, the question becomes how to analyze it. An effective approach to analyzing the security log data is using ML; specifically, anomaly detection, which examines activity and traffic data and compares it against a baseline. The baseline defines what activity is statistically normal for that environment. Anomaly detection scales beyond an individual event signature, and it can evolve with periodic retraining; traffic classified as abnormal or anomalous can then be acted upon with prioritized focus and urgency. Amazon SageMaker is a fully managed service that enables customers to prepare data and build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows, including no-code offerings for business analysts. SageMaker supports two built-in anomaly detection algorithms: IP Insights and Random Cut Forest. You can also use SageMaker to create your own custom outlier detection model using algorithms sourced from multiple ML frameworks.

In this post, you learn how to prepare data sourced from Amazon Security Lake, and then train and deploy an ML model using an IP Insights algorithm in SageMaker. This model identifies anomalous network traffic or behavior which can then be composed as part of a larger end-to-end security solution. Such a solution could invoke a multi-factor authentication (MFA) check if a user is signing in from an unusual server or at an unusual time, notify staff if there is a suspicious network scan coming from new IP addresses, alert administrators if unusual network protocols or ports are used, or enrich the IP insights classification result with other data sources such as Amazon GuardDuty and IP reputation scores to rank threat findings.

Solution overview

Amazon Security Lake SageMaker IPInsights Solution Architecture

Figure 1 – Solution Architecture

  1. Enable Amazon Security Lake with AWS Organizations for AWS accounts, AWS Regions, and external IT environments.
  2. Set up Security Lake sources from Amazon Virtual Private Cloud (Amazon VPC) Flow Logs and Amazon Route53 DNS logs to the Amazon Security Lake S3 bucket.
  3. Process Amazon Security Lake log data using a SageMaker Processing job to engineer features. Use Amazon Athena to query structured OCSF log data from Amazon Simple Storage Service (Amazon S3) through AWS Glue tables managed by AWS LakeFormation.
  4. Train a SageMaker ML model using a SageMaker Training job that consumes the processed Amazon Security Lake logs.
  5. Deploy the trained ML model to a SageMaker inference endpoint.
  6. Store new security logs in an S3 bucket and queue events in Amazon Simple Queue Service (Amazon SQS).
  7. Subscribe an AWS Lambda function to the SQS queue.
  8. Invoke the SageMaker inference endpoint using a Lambda function to classify security logs as anomalies in real time.

Prerequisites

To deploy the solution, you must first complete the following prerequisites:

  1. Enable Amazon Security Lake within your organization or a single account with both VPC Flow Logs and Route 53 resolver logs enabled.
  2. Ensure that the AWS Identity and Access Management (IAM) role used by SageMaker processing jobs and notebooks has been granted an IAM policy including the Amazon Security Lake subscriber query access permission for the managed Amazon Security lake database and tables managed by AWS Lake Formation. This processing job should be run from within an analytics or security tooling account to remain compliant with AWS Security Reference Architecture (AWS SRA).
  3. Ensure that the IAM role used by the Lambda function has been granted an IAM policy including the Amazon Security Lake subscriber data access permission.

Deploy the solution

To set up the environment, complete the following steps:

  1. Launch a SageMaker Studio or SageMaker Jupyter notebook with a ml.m5.large instance. Note: Instance size is dependent on the datasets you use.
  2. Clone the GitHub repository.
  3. Open the notebook 01_ipinsights/01-01.amazon-securitylake-sagemaker-ipinsights.ipy.
  4. Implement the provided IAM policy and corresponding IAM trust policy for your SageMaker Studio Notebook instance to access all the necessary data in S3, Lake Formation, and Athena.

This blog walks through the relevant portion of code within the notebook after it’s deployed in your environment.

Install the dependencies and import the required library

Use the following code to install dependencies, import the required libraries, and create the SageMaker S3 bucket needed for data processing and model training. One of the required libraries, awswrangler, is an AWS SDK for pandas dataframe that is used to query the relevant tables within the AWS Glue Data Catalog and store the results locally in a dataframe.

import boto3
import botocore
import os
import sagemaker
import pandas as pd

%conda install openjdk -y
%pip install pyspark 
%pip install sagemaker_pyspark

from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/ipinsights-vpcflowlogs"
execution_role = sagemaker.get_execution_role()
region = boto3.Session().region_name
seclakeregion = region.replace("-","_")
# check if the bucket exists
try:
    boto3.Session().client("s3").head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
    print("Missing S3 bucket or invalid S3 Bucket")
except botocore.exceptions.ClientError as e:
    if e.response["Error"]["Code"] == "403":
        print(f"You don't have permission to access the bucket, {bucket}.")
    elif e.response["Error"]["Code"] == "404":
        print(f"Your bucket, {bucket}, doesn't exist!")
    else:
        raise
else:
    print(f"Training input/output will be stored in: s3://{bucket}/{prefix}")

Query the Amazon Security Lake VPC flow log table

This portion of code uses the AWS SDK for pandas to query the AWS Glue table related to VPC Flow Logs. As mentioned in the prerequisites, Amazon Security Lake tables are managed by AWS Lake Formation, so all proper permissions must be granted to the role used by the SageMaker notebook. This query will pull multiple days of VPC flow log traffic. The dataset used during development of this blog was small. Depending on the scale of your use case, you should be aware of the limits of the AWS SDK for pandas. When considering terabyte scale, you should consider AWS SDK for pandas support for Modin.

ocsf_df = wr.athena.read_sql_query("SELECT src_endpoint.instance_uid as instance_id, src_endpoint.ip as sourceip FROM amazon_security_lake_table_"+seclakeregion+"_vpc_flow_1_0 WHERE src_endpoint.ip IS NOT NULL AND src_endpoint.instance_uid IS NOT NULL AND src_endpoint.instance_uid != '-' AND src_endpoint.ip != '-'", database="amazon_security_lake_glue_db_us_east_1", 
ctas_approach=False, 
unload_approach=True, 
s3_output=f"s3://{bucket}/unload/parquet/updated") 
ocsf_df.head()

When you view the data frame, you will see an output of a single column with common fields that can be found in the Network Activity (4001) class of the OCSF.

Normalize the Amazon Security Lake VPC flow log data into the required training format for IP Insights.

The IP Insights algorithm requires that the training data be in CSV format and contain two columns. The first column must be an opaque string that corresponds to an entity’s unique identifier. The second column must be the IPv4 address of the entity’s access event in decimal-dot notation. In the sample dataset for this blog, the unique identifier is the Instance IDs of EC2 instances associated to the instance_id value within the dataframe. The IPv4 address will be derived from the src_endpoint. Based on the way the Amazon Athena query was created, the imported data is already in the correct format for training an IP Insights model, so no additional feature engineering is required. If you modify the query in another way, you may need to incorporate additional feature engineering.

Query and normalize the Amazon Security Lake Route 53 resolver log table

Just as you did above, the next step of the notebook runs a similar query against the Amazon Security Lake Route 53 resolver table. Since you will be using all OCSF compliant data within this notebook, any feature engineering tasks remain the same for Route 53 resolver logs as they were for VPC Flow Logs. You then combine the two data frames into a single data frame that is used for training. Since the Amazon Athena query loads the data locally in the correct format, no further feature engineering is required.

ocsf_rt_53_df = wr.athena.read_sql_query("SELECT src_endpoint.instance_uid as instance_id, src_endpoint.ip as sourceip FROM amazon_security_lake_table_"+seclakeregion+"_route53_1_0 WHERE src_endpoint.ip IS NOT NULL AND src_endpoint.instance_uid IS NOT NULL AND src_endpoint.instance_uid != '-' AND src_endpoint.ip != '-'", database="amazon_security_lake_glue_db_us_east_1", 
ctas_approach=False, 
unload_approach=True, 
s3_output=f"s3://{bucket}/unload/rt53parquet")
ocsf_rt_53_df.head()
ocsf_complete = pd.concat([ocsf_df, ocsf_rt_53_df], ignore_index=True)

Get IP Insights training image and train the model with the OCSF data

In this next portion of the notebook, you train an ML model based on the IP Insights algorithm and use the consolidated dataframe of OCSF from different types of logs. A list of the IP Insights hyperparmeters can be found here. In the example below we selected hyperparameters that outputted the best performing model, for example, 5 for epoch and 128 for vector_dim. Since the training dataset for our sample was relatively small, we utilized a ml.m5.large instance. Hyperparameters and your training configurations such as instance count and instance type should be chosen based on your objective metrics and your training data size. One capability that you can utilize within Amazon SageMaker to find the best version of your model is Amazon SageMaker automatic model tuning that searches for the best model across a range of hyperparameter values.

training_path = f"s3://{bucket}/{prefix}/training/training_input.csv"
wr.s3.to_csv(ocsf_complete, training_path, header=False, index=False)
from sagemaker.amazon.amazon_estimator 
import image_uris

image = sagemaker.image_uris.get_training_image_uri(boto3.Session().region_name,"ipinsights")

ip_insights = sagemaker.estimator.Estimator(image,execution_role,
instance_count=1,
instance_type="ml.m5.large",
output_path=f"s3://{bucket}/{prefix}/output",
sagemaker_session=sagemaker.Session())
ip_insights.set_hyperparameters(num_entity_vectors="20000",
random_negative_sampling_rate="5",
vector_dim="128",
mini_batch_size="1000",
epochs="5",learning_rate="0.01")

input_data = { "train": sagemaker.session.s3_input(training_path, content_type="text/csv")}
ip_insights.fit(input_data)

Deploy the trained model and test with valid and anomalous traffic

After the model has been trained, you deploy the model to a SageMaker endpoint and send a series of unique identifier and IPv4 address combinations to test your model. This portion of code assumes you have test data saved in your S3 bucket. The test data is a .csv file, where the first column is instance ids and the second column is IPs. It is recommended to test valid and invalid data to see the results of the model. The following code deploys your endpoint.

predictor = ip_insights.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(f"Endpoint name: {predictor.endpoint}")

Now that your endpoint is deployed, you can now submit inference requests to identify if traffic is potentially anomalous. Below is a sample of what your formatted data should look like. In this case, the first column identifier is an instance id and the second column is an associated IP address as shown in the following:

i-0dee580a031e28c14,10.0.2.125
i-05891769c3b7b2879,10.0.3.238
i-0dee580a031e28c14,10.0.2.145
i-05891769c3b7b2879,10.0.10.11

After you have your data in CSV format, you can submit the data for inference using the code by reading your .csv file from an S3 bucket.:

inference_df = wr.s3.read_csv('s3://{bucket}/{prefix}/inference/testdata.csv')

import io
from io import StringIO

csv_file = io.StringIO()
inference_csv = inference_df.to_csv(csv_file, sep=",", header=True, index=False)
inference_payload = csv_file.getvalue()
print(inference_payload)
response = predictor.predict(
inference_payload,
initial_args={"ContentType":'text/csv'})
print(response)

b'{"predictions": [{"dot_product": 1.2591100931167603}, {"dot_product": 0.97600919008255}, {"dot_product": -3.638532876968384}, {"dot_product": -6.778188705444336}]}'

The output for an IP Insights model provides a measure of how statistically expected an IP address and online resource are. The range for this address and resource is unbounded however, so there are considerations on how you would determine if an instance ID and IP address combination should be considered anomalous.

In the preceding example, four different identifier and IP combinations were submitted to the model. The first two combinations were valid instance ID and IP address combinations that are expected based on the training set. The third combination has the correct unique identifier but a different IP address within the same subnet. The model should determine there is a modest anomaly as the embedding is slightly different from the training data. The fourth combination has a valid unique identifier but an IP address of a nonexistent subnet within any VPC in the environment.

Note: Normal and abnormal traffic data will change based on your specific use case, for example: if you want to monitor external and internal traffic you would need a unique identifier aligned to each IP address and a scheme to generate the external identifiers.

To determine what your threshold should be to determine whether traffic is anomalous can be done using known normal and abnormal traffic. The steps outlined in this sample notebook are as follows:

  1. Construct a test set to represent normal traffic.
  2. Add abnormal traffic into the dataset.
  3. Plot the distribution of dot_product scores for the model on normal traffic and the abnormal traffic.
  4. Select a threshold value which distinguishes the normal subset from the abnormal subset. This value is based on your false-positive tolerance

Set up continuous monitoring of new VPC flow log traffic.

To demonstrate how this new ML model could be use with Amazon Security Lake in a proactive manner, we will configure a Lambda function to be invoked on each PutObject event within the Amazon Security Lake managed bucket, specifically the VPC flow log data. Within Amazon Security Lake there is the concept of a subscriber, that consumes logs and events from Amazon Security Lake. The Lambda function that responds to new events must be granted a data access subscription. Data access subscribers are notified of new Amazon S3 objects for a source as the objects are written to the Security Lake bucket. Subscribers can directly access the S3 objects and receive notifications of new objects through a subscription endpoint or by polling an Amazon SQS queue.

  1. Open the Security Lake console.
  2. In the navigation pane, select Subscribers.
  3. On the Subscribers page, choose Create subscriber.
  4. For Subscriber details, enter inferencelambda for Subscriber name and an optional Description.
  5. The Region is automatically set as your currently selected AWS Region and can’t be modified.
  6. For Log and event sources, choose Specific log and event sources and choose VPC Flow Logs and Route 53 logs
  7. For Data access method, choose S3.
  8. For Subscriber credentials, provide your AWS account ID of the account where the Lambda function will reside and a user-specified external ID.
    Note: If doing this locally within an account, you don’t need to have an external ID.
  9. Choose Create.

Create the Lambda function

To create and deploy the Lambda function you can either complete the following steps or deploy the prebuilt SAM template 01_ipinsights/01.02-ipcheck.yaml in the GitHub repo. The SAM template requires you provide the SQS ARN and the SageMaker endpoint name.

  1. On the Lambda console, choose Create function.
  2. Choose Author from scratch.
  3. For Function Name, enter ipcheck.
  4. For Runtime, choose Python 3.10.
  5. For Architecture, select x86_64.
  6. For Execution role, select Create a new role with Lambda permissions.
  7. After you create the function, enter the contents of the ipcheck.py file from the GitHub repo.
  8. In the navigation pane, choose Environment Variables.
  9. Choose Edit.
  10. Choose Add environment variable.
  11. For the new environment variable, enter ENDPOINT_NAME and for value enter the endpoint ARN that was outputted during deployment of the SageMaker endpoint.
  12. Select Save.
  13. Choose Deploy.
  14. In the navigation pane, choose Configuration.
  15. Select Triggers.
  16. Select Add trigger.
  17. Under Select a source, choose SQS.
  18. Under SQS queue, enter the ARN of the main SQS queue created by Security Lake.
  19. Select the checkbox for Activate trigger.
  20. Select Add.

Validate Lambda findings

  1. Open the Amazon CloudWatch console.
  2. In the left side pane, select Log groups.
  3. In the search bar, enter ipcheck, and then select the log group with the name /aws/lambda/ipcheck.
  4. Select the most recent log stream under Log streams.
  5. Within the logs, you should see results that look like the following for each new Amazon Security Lake log:

{'predictions': [{'dot_product': 0.018832731992006302}, {'dot_product': 0.018832731992006302}]}

This Lambda function continually analyzes the network traffic being ingested by Amazon Security Lake. This allows you to build mechanisms to notify your security teams when a specified threshold is violated, which would indicate an anomalous traffic in your environment.

Cleanup

When you’re finished experimenting with this solution and to avoid charges to your account, clean up your resources by deleting the S3 bucket, SageMaker endpoint, shutting down the compute attached to the SageMaker Jupyter notebook, deleting the Lambda function, and disabling Amazon Security Lake in your account.

Conclusion

In this post you learned how to prepare network traffic data sourced from Amazon Security Lake for machine learning, and then trained and deployed an ML model using the IP Insights algorithm in Amazon SageMaker. All of the steps outlined in the Jupyter notebook can be replicated in an end-to-end ML pipeline. You also implemented an AWS Lambda function that consumed new Amazon Security Lake logs and submitted inferences based on the trained anomaly detection model. The ML model responses received by AWS Lambda could proactively notify security teams of anomalous traffic when certain thresholds are met. Continuous improvement of the model can be enabled by including your security team in the loop reviews to label whether traffic identified as anomalous was a false positive or not. This could then be added to your training set and also added to your normal traffic dataset when determining an empirical threshold. This model can identify potentially anomalous network traffic or behavior whereby it can be included as part of a larger security solution to initiate an MFA check if a user is signing in from an unusual server or at an unusual time, alert staff if there is a suspicious network scan coming from new IP addresses, or combine the IP insights score with other sources such as Amazon Guard Duty to rank threat findings. This model can include custom log sources such as Azure Flow Logs or on-premises logs by adding in custom sources to your Amazon Security Lake deployment.

In part 2 of this blog post series, you will learn how to build an anomaly detection model using the Random Cut Forest algorithm trained with additional Amazon Security Lake sources that integrate network and host security log data and apply the security anomaly classification as part of an automated, comprehensive security monitoring solution.


About the authors

Joe Morotti is a Solutions Architect at Amazon Web Services (AWS), helping Enterprise customers across the Midwest US. He has held a wide range of technical roles and enjoy showing customer’s art of the possible. In his free time, he enjoys spending quality time with his family exploring new places and overanalyzing his sports team’s performance

Bishr Tabbaa is a solutions architect at Amazon Web Services. Bishr specializes in helping customers with machine learning, security, and observability applications. Outside of work, he enjoys playing tennis, cooking, and spending time with family.

Sriharsh Adari is a Senior Solutions Architect at Amazon Web Services (AWS), where he helps customers work backwards from business outcomes to develop innovative solutions on AWS. Over the years, he has helped multiple customers on data platform transformations across industry verticals. His core area of expertise include Technology Strategy, Data Analytics, and Data Science. In his spare time, he enjoys playing Tennis, binge-watching TV shows, and playing Tabla.

Read More

Driving advanced analytics outcomes at scale using Amazon SageMaker powered PwC’s Machine Learning Ops Accelerator

Driving advanced analytics outcomes at scale using Amazon SageMaker powered PwC’s Machine Learning Ops Accelerator

This post was written in collaboration with Ankur Goyal and Karthikeyan Chokappa from PwC Australia’s Cloud & Digital business.

Artificial intelligence (AI) and machine learning (ML) are becoming an integral part of systems and processes, enabling decisions in real time, thereby driving top and bottom-line improvements across organizations. However, putting an ML model into production at scale is challenging and requires a set of best practices. Many businesses already have data scientists and ML engineers who can build state-of-the-art models, but taking models to production and maintaining the models at scale remains a challenge. Manual workflows limit ML lifecycle operations to slow down the development process, increase costs, and compromise the quality of the final product.

Machine learning operations (MLOps) applies DevOps principles to ML systems. Just like DevOps combines development and operations for software engineering, MLOps combines ML engineering and IT operations. With the rapid growth in ML systems and in the context of ML engineering, MLOps provides capabilities that are needed to handle the unique complexities of the practical application of ML systems. Overall, ML use cases require a readily available integrated solution to industrialize and streamline the process that takes an ML model from development to production deployment at scale using MLOps.

To address these customer challenges, PwC Australia developed Machine Learning Ops Accelerator as a set of standardized process and technology capabilities to improve the operationalization of AI/ML models that enable cross-functional collaboration across teams throughout ML lifecycle operations. PwC Machine Learning Ops Accelerator, built on top of AWS native services, delivers a fit-for-purpose solution that easily integrates into the ML use cases with ease for customers across all industries. In this post, we focus on building and deploying an ML use case that integrates various lifecycle components of an ML model, enabling continuous integration (CI), continuous delivery (CD), continuous training (CT), and continuous monitoring (CM).

Solution overview

In MLOps, a successful journey from data to ML models to recommendations and predictions in business systems and processes involves several crucial steps. It involves taking the result of an experiment or prototype and turning it into a production system with standard controls, quality, and feedback loops. It’s much more than just automation. It’s about improving organization practices and delivering outcomes that are repeatable and reproducible at scale.

Only a small fraction of a real-world ML use case comprises the model itself. The various components needed to build an integrated advanced ML capability and continuously operate it at scale is shown in Figure 1. As illustrated in the following diagram, PwC MLOps Accelerator comprises seven key integrated capabilities and iterative steps that enable CI, CD, CT, and CM of an ML use case. The solution takes advantage of AWS native features from Amazon SageMaker, building a flexible and extensible framework around this.

PwC Machine Learning Ops Accelerator capabilities

Figure 1 -– PwC Machine Learning Ops Accelerator capabilities

In a real enterprise scenario, additional steps and stages of testing may exist to ensure rigorous validation and deployment of models across different environments.

  1. Data and model management provide a central capability that governs ML artifacts throughout their lifecycle. It enables auditability, traceability, and compliance. It also promotes the shareability, reusability, and discoverability of ML assets.
  2. ML model development allows various personas to develop a robust and reproducible model training pipeline, which comprises a sequence of steps, from data validation and transformation to model training and evaluation.
  3. Continuous integration/delivery facilitates the automated building, testing, and packaging of the model training pipeline and deploying it into the target execution environment. Integrations with CI/CD workflows and data versioning promote MLOps best practices such as governance and monitoring for iterative development and data versioning.
  4. ML model continuous training capability executes the training pipeline based on retraining triggers; that is, as new data becomes available or model performance decays below a preset threshold. It registers the trained model if it qualifies as a successful model candidate and stores the training artifacts and associated metadata.
  5. Model deployment allows access to the registered trained model to review and approve for production release and enables model packaging, testing, and deploying into the prediction service environment for production serving.
  6. Prediction service capability starts the deployed model to provide prediction through online, batch, or streaming patterns. Serving runtime also captures model serving logs for continuous monitoring and improvements.
  7. Continuous monitoring monitors the model for predictive effectiveness to detect model decay and service effectiveness (latency, pipeline throughout, and execution errors)

PwC Machine Learning Ops Accelerator architecture

The solution is built on top of AWS-native services using Amazon SageMaker and serverless technology to keep performance and scalability high and running costs low.

PwC MLOps Accelerator architecture

Figure 2 – PwC Machine Learning Ops Accelerator architecture 

  • PwC Machine Learning Ops Accelerator provides a persona-driven access entitlement for build-out, usage, and operations that enables ML engineers and data scientists to automate deployment of pipelines (training and serving) and rapidly respond to model quality changes. Amazon SageMaker Role Manager is used to implement role-based ML activity, and Amazon S3 is used to store input data and artifacts.
  • Solution uses existing model creation assets from the customer and builds a flexible and extensible framework around this using AWS native services. Integrations have been built between Amazon S3, Git, and AWS CodeCommit that allow dataset versioning with minimal future management.
  • AWS CloudFormation template is generated using AWS Cloud Development Kit (AWS CDK). AWS CDK provides the ability to manage changes for the complete solution. The automated pipeline includes steps for out-of-the-box model storage and metric tracking.
  • PwC MLOps Accelerator is designed to be modular and delivered as infrastructure-as-code (IaC) to allow automatic deployments. The deployment process uses AWS CodeCommit, AWS CodeBuild, AWS CodePipeline, and AWS CloudFormation template. Complete end-to-end solution to operationalize an ML model is available as deployable code.
  • Through a series of IaC templates, three distinct components are deployed: model build, model deployment , and model monitoring and prediction serving, using Amazon SageMaker Pipelines
    • Model build pipeline automates the model training and evaluation process and enables approval and registration of the trained model.
    • Model deployment pipeline provisions the necessary infrastructure to deploy the ML model for batch and real-time inference.
    • Model monitoring and prediction serving pipeline deploys the infrastructure required to serve predictions and monitor model performance.
  • PwC MLOps Accelerator is designed to be agnostic to ML models, ML frameworks, and runtime environments. The solution allows for the familiar use of programming languages like Python and R, development tools such as Jupyter Notebook, and ML frameworks through a configuration file. This flexibility makes it straightforward for data scientists to continuously refine models and deploy them using their preferred language and environment.
  • The solution has built-in integrations to use either pre-built or custom tools to assign the labeling tasks using Amazon SageMaker Ground Truth for training datasets to provide continuous training and monitoring.
  • End-to-end ML pipeline is architected using SageMaker native features (Amazon SageMaker Studio , Amazon SageMaker Model Building Pipelines, Amazon SageMaker Experiments, and Amazon SageMaker endpoints).
  • The solution uses Amazon SageMaker built-in capabilities for model versioning, model lineage tracking, model sharing, and serverless inference with Amazon SageMaker Model Registry.
  • Once the model is in production, the solution continuously monitors the quality of ML models in real time. Amazon SageMaker Model Monitor is used to continuously monitor models in production. Amazon CloudWatch Logs is used to collect log files monitoring the model status, and notifications are sent using Amazon SNS when the quality of the model hits certain thresholds. Native loggers such as (boto3) are used to capture run status to expedite troubleshooting.

Solution walkthrough

The following walkthrough dives into the standard steps to create the MLOps process for a model using PwC MLOps Accelerator. This walkthrough describes a use case of an MLOps engineer who wants to deploy the pipeline for a recently developed ML model using a simple definition/configuration file that is intuitive.

PwC MLOps Accelerator process lifecyle

Figure 3 – PwC Machine Learning Ops Accelerator process lifecycle

  • To get started, enroll in PwC MLOps Accelerator to get access to solution artifacts. The entire solution is driven from one configuration YAML file (config.yaml) per model. All the details required to run the solution are contained within that config file and stored along with the model in a Git repository. The configuration file will serve as input to automate workflow steps by externalizing important parameters and settings outside of code.
  • The ML engineer is required to populate config.yaml file and trigger the MLOps pipeline. Customers can configure an AWS account, the repository, the model, the data used, the pipeline name, the training framework, the number of instances to use for training, the inference framework, and any pre- and post-processing steps and several other configurations to check the model quality, bias, and explainability.
Machine Learning Ops Accelerator configuration YAML

Figure 4 – Machine Learning Ops Accelerator configuration YAML                                               

  • A simple YAML file is used to configure each model’s training, deployment, monitoring, and runtime requirements. Once the config.yaml is configured appropriately and saved alongside the model in its own Git repository, the model-building orchestrator is invoked. It also can read from a Bring-Your-Own-Model that can be configured through YAML to trigger deployment of the model build pipeline.
  • Everything after this point is automated by the solution and does not need the involvement of either the ML engineer or data scientist. The pipeline responsible for building the ML model includes data preprocessing, model training, model evaluation, and ost-processing. If the model passes automated quality and performance tests, the model is saved to a registry, and artifacts are written to Amazon S3 storage per the definitions in the YAML files. This triggers the creation of the model deployment pipeline for that ML model.
Sample model deployment workflow

Figure 5 – Sample model deployment workflow                                                      

  • Next, an automated deployment template provisions the model in a staging environment with a live endpoint. Upon approval, the model is automatically deployed into the production environment.
  • The solution deploys two linked pipelines. Prediction serving deploys an accessible live endpoint through which predictions can be served. Model monitoring creates a continuous monitoring tool that calculates key model performance and quality metrics, triggering model retraining if a significant change in model quality is detected.
  • Now that you’ve gone through the creation and initial deployment, the MLOps engineer can configure failure alerts to be alerted for issues, for example, when a pipeline fails to do its intended job.
  • MLOps is no longer about packaging, testing, and deploying cloud service components similar to a traditional CI/CD deployment; it’s a system that should automatically deploy another service. For example, the model training pipeline automatically deploys the model deployment pipeline to enable prediction service, which in turn enables the model monitoring service.

Conclusion

In summary, MLOps is critical for any organization that aims to deploy ML models in production systems at scale. PwC developed an accelerator to automate building, deploying, and maintaining ML models via integrating DevOps tools into the model development process.

In this post, we explored how the PwC solution is powered by AWS native ML services and helps to adopt MLOps practices so that businesses can speed up their AI journey and gain more value from their ML models. We walked through the steps a user would take to access the PwC Machine Learning Ops Accelerator, run the pipelines, and deploy an ML use case that integrates various lifecycle components of an ML model.

To get started with your MLOps journey on AWS Cloud at scale and run your ML production workloads, enroll in PwC Machine Learning Operations.


About the Authors

 Kiran Kumar Ballari is a Principal Solutions Architect at Amazon Web Services (AWS). He is an evangelist who loves to help customers leverage new technologies and build repeatable industry solutions to solve their problems. He is especially passionate about software engineering , Generative AI and helping companies with AI/ML product development.

Ankur Goyal is a director in PwC Australia’s Cloud and Digital practice, focused on Data, Analytics & AI. Ankur has extensive experience in supporting public and private sector organizations in driving technology transformations and designing innovative solutions by leveraging data assets and technologies.

Karthikeyan Chokappa (KC) is a Manager in PwC Australia’s Cloud and Digital practice, focused on Data, Analytics & AI. KC is passionate about designing, developing, and deploying end-to-end analytics solutions that transform data into valuable decision assets to improve performance and utilization and reduce the total cost of ownership for connected and intelligent things.

Rama Lankalapalli is a Sr. Partner Solutions Architect at AWS, working with PwC to accelerate their clients’ migrations and modernizations into AWS. He works across diverse industries to accelerate their adoption of AWS Cloud. His expertise lies in architecting efficient and scalable cloud solutions, driving innovation and modernization of customer applications by leveraging AWS services, and establishing resilient cloud foundations.

Jeejee Unwalla is a Senior Solutions Architect at AWS who enjoys guiding customers in solving challenges and thinking strategically. He is passionate about tech and data and enabling innovation.

Read More

ISO 42001: A new foundational global standard to advance responsible AI

ISO 42001: A new foundational global standard to advance responsible AI

Artificial intelligence (AI) is one of the most transformational technologies of our generation and provides opportunities to be a force for good and drive economic growth. The growth of large language models (LLMs), with hundreds of billions of parameters, has unlocked new generative AI use cases to improve customer experiences, boost employee productivity, and so much more. At AWS, we remain committed to harnessing AI responsibly, working hand in hand with our customers to develop and use AI systems with safety, fairness, and security at the forefront.

The AI industry reached an important milestone this week with the publication of ISO 42001. In simple terms, ISO 42001 is an international management system standard that provides guidelines for managing AI systems within organizations. It establishes a framework for organizations to systematically address and control the risks related to the development and deployment of AI. ISO 42001 emphasizes a commitment to responsible AI practices, encouraging organizations to adopt controls specific to their AI systems, fostering global interoperability, and setting a foundation for the development and deployment of responsible AI.

Trust in AI is crucial and integrating standards such as ISO 42001, which promotes AI governance, is one way to help earn public trust by supporting a responsible use approach.

As part of a broad, international community of subject matter experts, AWS has actively collaborated in ISO 42001’s development since 2021 and started laying the foundation prior to the standard’s final publication.

International standards foster global cooperation in developing and implementing responsible AI solutions. Perhaps like no other technology before it, managing these challenges requires collaboration and a truly multidisciplinary effort across technology companies, policymakers, community groups, scientists, and others—and international standards play a valuable role.

International standards are an important tool to help organizations translate domestic regulatory requirements into compliance mechanisms, including engineering practices, that are largely globally interoperable. Effective standards help reduce confusion about what AI is and what responsible AI entails, and help focus the industry on the reduction of potential harms. AWS is working in a community of diverse international stakeholders to improve emerging AI standards on variety of topics, including risk management, data quality, bias, and transparency.

Conformance with the new ISO 42001 standard is one of the ways that organizations can demonstrate a commitment to excellence in responsibly developing and deploying AI systems and applications. We are continuing to pursue the adoption of ISO 42001, and look forward to working with customers to do the same.

We are committed to investing in the future of responsible AI, and helping to inform international standards in the interest of our customers and the communities in which we all live and operate.


About the authors

Swami Sivasubramanian is Vice President of Data and Machine Learning at AWS. In this role, Swami oversees all AWS Database, Analytics, and AI & Machine Learning services. His team’s mission is to help organizations put their data to work with a complete, end-to-end data solution to store, access, analyze, and visualize, and predict.

Read More

Accelerating time-to-insight with MongoDB time series collections and Amazon SageMaker Canvas

Accelerating time-to-insight with MongoDB time series collections and Amazon SageMaker Canvas

This is a guest post co-written with Babu Srinivasan from MongoDB.

As industries evolve in today’s fast-paced business landscape, the inability to have real-time forecasts poses significant challenges for industries heavily reliant on accurate and timely insights. The absence of real-time forecasts in various industries presents pressing business challenges that can significantly impact decision-making and operational efficiency. Without real-time insights, businesses struggle to adapt to dynamic market conditions, accurately anticipate customer demand, optimize inventory levels, and make proactive strategic decisions. Industries such as Finance, Retail, Supply Chain Management, and Logistics face the risk of missed opportunities, increased costs, inefficient resource allocation, and the inability to meet customer expectations. By exploring these challenges, organizations can recognize the importance of real-time forecasting and explore innovative solutions to overcome these hurdles, enabling them to stay competitive, make informed decisions, and thrive in today’s fast-paced business environment.

By harnessing the transformative potential of MongoDB’s native time series data capabilities and integrating it with the power of Amazon SageMaker Canvas, organizations can overcome these challenges and unlock new levels of agility. MongoDB’s robust time series data management allows for the storage and retrieval of large volumes of time-series data in real-time, while advanced machine learning algorithms and predictive capabilities provide accurate and dynamic forecasting models with SageMaker Canvas.

In this post, we will explore the potential of using MongoDB’s time series data and SageMaker Canvas as a comprehensive solution.

MongoDB Atlas

MongoDB Atlas is a fully managed developer data platform that simplifies the deployment and scaling of MongoDB databases in the cloud. It is a document based storage that provides a fully managed database, with built-in full-text and vector Search, support for Geospatial queries, Charts and native support for efficient time series storage and querying capabilities. MongoDB Atlas offers automatic sharding, horizontal scalability, and flexible indexing for high-volume data ingestion. Among all, the native time series capabilities is a standout feature, making it ideal for a managing high volume of time-series data, such as business critical application data, telemetry, server logs and more. With efficient querying, aggregation, and analytics, businesses can extract valuable insights from time-stamped data. By using these capabilities, businesses can efficiently store, manage, and analyze time-series data, enabling data-driven decisions and gaining a competitive edge.

Amazon SageMaker Canvas

Amazon SageMaker Canvas is a visual machine learning (ML) service that enables business analysts and data scientists to build and deploy custom ML models without requiring any ML experience or having to write a single line of code. SageMaker Canvas supports a number of use cases, including time-series forecasting, which empowers businesses to forecast future demand, sales, resource requirements, and other time-series data accurately. The service uses deep learning techniques to handle complex data patterns and enables businesses to generate accurate forecasts even with minimal historical data. By using Amazon SageMaker Canvas capabilities, businesses can make informed decisions, optimize inventory levels, improve operational efficiency, and enhance customer satisfaction.

The SageMaker Canvas UI lets you seamlessly integrate data sources from the cloud or on-premises, merge datasets effortlessly, train precise models, and make predictions with emerging data—all without coding. If you need an automated workflow or direct ML model integration into apps, Canvas forecasting functions are accessible through APIs.

Solution overview

Users persist their transactional time series data in MongoDB Atlas. Through Atlas Data Federation, data is extracted into Amazon S3 bucket. Amazon SageMaker Canvas access the data to build models and create forecasts. The results of the forecasting are stored in an S3 bucket. Using the MongoDB Data Federation services, the forecasts are presented visually through MongoDB Charts.

The following diagram outlines the proposed solution architecture.

Prerequisites

For this solution we use MongoDB Atlas to store time series data, Amazon SageMaker Canvas to train a model and produce forecasts, and Amazon S3 to store data extracted from MongoDB Atlas.

Make sure you have the following prerequisites:

Configure MongoDB Atlas cluster

Create a free MongoDB Atlas cluster by following the instructions in Create a Cluster. Setup the Database access and Network access.

Populate a time series collection in MongoDB Atlas

For the purposes of this demonstration, you can use a sample data set from from Kaggle and upload the same to MongoDB Atlas with the MongoDB tools , preferably MongoDB Compass.

The following code shows a sample data set for a time series collection:

{
"store": "1 1",
"timestamp": { "2010-02-05T00:00:00.000Z"},
"temperature": "42.31",
"target_value": 2.572,
"IsHoliday": false
}

The following screenshot shows the sample time series data in MongoDB Atlas:

Create an S3 Bucket

Create an S3 bucket in AWS , where the time series data need to be stored and analyzed. Note we have two folders. sales-train-data is used to store data extracted from MongoDB Atlas, while sales-forecast-output contains predictions from  Canvas.

Create the Data Federation

Setup the Data Federation in Atlas and register the S3 bucket created previously as part of the data source. Notice the three different database/collections are created in the data federation for Atlas cluster, S3 bucket for MongoDB Atlas data and S3 bucket to store the Canvas results.

The following screenshots shows the setup of the data federation.

Setup the Atlas application service

Create the MongoDB Application Services to deploy the functions to transfer the data from MongoDB Atlas cluster to S3 bucket using the $out aggregation.

Verify the Datasource Configuration

The Application services create a new Altas Service Name that needs to be referred as the data services in the following function. Verify that the Atlas Service Name is created and note it for future reference.

Create the function

Setup the Atlas Application services to create the trigger and functions. The triggers need to be scheduled to write the data to S3 at a period frequency based on the business need for training the models.

The following script shows the function to write to the S3 bucket:

exports = function () {

   const service = context.services.get("");
   const db = service.db("")
   const events = db.collection("");

   const pipeline = [
    {
            "$out": {
               "s3": {
                  "bucket": "<S3_bucket_name>",
                  "region": "<AWS_Region>",
                   "filename": {$concat: ["<S3path>/<filename>_",{"$toString":  new Date(Date.now())}]},
                  "format": {
                        "name": "json",
                        "maxFileSize": "10GB"
                  }
               }
            }
      }
   ];

   return events.aggregate(pipeline);
};

Sample function

The function can be run through the Run tab and the errors can be debugged using the log features in the Application Services. In addition, the errors can be debugged using the Logs menu in the left pane.

The following screenshot shows the execution of the function along with the output:

Create dataset in Amazon SageMaker Canvas

The following steps assume that you have created a SageMaker domain and user profile. If you have not already done so, make sure that you configure the SageMaker domain and user profile. In the user profile, update your S3 bucket to be custom and supply your bucket name.

When complete, navigate to SageMaker Canvas, select your domain and profile, and select Canvas.

Create a dataset supplying the data source.

Select the dataset source as S3

Select the data location from the S3 bucket and select Create dataset.

Review the schema and click Create dataset

Upon successful import, the dataset will appear in the list as shown in the following screenshot.

Train the model

Next, we will use Canvas to set up to train the model. Select the dataset and click Create.


Create a model name, select Predictive analysis, and select Create.

Select target column

Next, click Configure time series model and select item_id as the Item ID column.

Select tm for the time stamp column

To specify the amount of time that you want to forecast, choose 8 weeks.

Now you are ready to preview the model or launch the build process.

After you preview the model or launch the build, your model will be created and can take up to four hours. You can leave the screen and return to see the model training status.

When the model is ready, select the model and click on the latest version

Review the model metrics and column impact and if you are satisfied with the model performance, click Predict.

Next, choose Batch prediction, and click Select dataset.

Select your dataset, and click Choose dataset.

Next, click Start Predictions.

Observe a job created or observe the job progress in SageMaker under Inference, Batch transform jobs.

When the job completes, select the job, and note the S3 path where Canvas stored the predictions.

Visualize forecast data in Atlas Charts

To visualize forecast data, create the MongoDB Atlas charts based on the Federated data (amazon-forecast-data) for P10, P50, and P90 forecasts as shown in the following chart.

Clean up

  • Delete the MongoDB Atlas cluster
  • Delete Atlas Data Federation Configuration
  • Delete Atlas Application Service App
  • Delete the S3 Bucket
  • Delete Amazon SageMaker Canvas dataset and models
  • Delete the Atlas Charts
  • Log out of Amazon SageMaker Canvas

Conclusion

In this post we extracted time series data from MongoDB time series collection. This is a special collection optimized for storage and querying speed of time series data. We used Amazon SageMaker Canvas to train models and generate predictions and we visualized the predictions in Atlas Charts.

For more information, refer to the following resources.


About the authors

Igor Alekseev is a Senior Partner Solution Architect at AWS in Data and Analytics domain. In his role Igor is working with strategic partners helping them build complex, AWS-optimized architectures. Prior joining AWS, as a Data/Solution Architect he implemented many projects in Big Data domain, including several data lakes in Hadoop ecosystem. As a Data Engineer he was involved in applying AI/ML to fraud detection and office automation.


Babu Srinivasan
is a Senior Partner Solutions Architect at MongoDB. In his current role, he is working with AWS to build the technical integrations and reference architectures for the AWS and MongoDB solutions. He has more than two decades of experience in Database and Cloud technologies . He is passionate about providing technical solutions to customers working with multiple Global System Integrators(GSIs) across multiple geographies.

Read More

Use Amazon DocumentDB to build no-code machine learning solutions in Amazon SageMaker Canvas

Use Amazon DocumentDB to build no-code machine learning solutions in Amazon SageMaker Canvas

We are excited to announce the launch of Amazon DocumentDB (with MongoDB compatibility) integration with Amazon SageMaker Canvas, allowing Amazon DocumentDB customers to build and use generative AI and machine learning (ML) solutions without writing code. Amazon DocumentDB is a fully managed native JSON document database that makes it straightforward and cost-effective to operate critical document workloads at virtually any scale without managing infrastructure. Amazon SageMaker Canvas is a no-code ML workspace offering ready-to-use models, including foundation models, and the ability to prepare data and build and deploy custom models.

In this post, we discuss how to bring data stored in Amazon DocumentDB into SageMaker Canvas and use that data to build ML models for predictive analytics. Without creating and maintaining data pipelines, you will be able to power ML models with your unstructured data stored in Amazon DocumentDB.

Solution overview

Let’s assume the role of a business analyst for a food delivery company. Your mobile app stores information about restaurants in Amazon DocumentDB because of its scalability and flexible schema capabilities. You want to gather insights on this data and build an ML model to predict how new restaurants will be rated, but find it challenging to perform analytics on unstructured data. You encounter bottlenecks because you need to rely on data engineering and data science teams to accomplish these goals.

This new integration solves these problems by making it simple to bring Amazon DocumentDB data into SageMaker Canvas and immediately start preparing and analyzing data for ML. Additionally, SageMaker Canvas removes the dependency on ML expertise to build high-quality models and generate predictions.

We demonstrate how to use Amazon DocumentDB data to build ML models in SageMaker Canvas in the following steps:

  1. Create an Amazon DocumentDB connector in SageMaker Canvas.
  2. Analyze data using generative AI.
  3. Prepare data for machine learning.
  4. Build a model and generate predictions.

Prerequisites

To implement this solution, complete the following prerequisites:

  1. Have AWS Cloud admin access with an AWS Identity and Access Management (IAM) user with permissions required to complete the integration.
  2. Complete the environment setup using AWS CloudFormation through either of the following options:
    1. Deploy a CloudFormation template into a new VPC – This option builds a new AWS environment that consists of the VPC, private subnets, security groups, IAM execution roles, Amazon Cloud9, required VPC endpoints, and SageMaker domain. It then deploys Amazon DocumentDB into this new VPC. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
      Launch CloudFormation stack
    2. Deploy a CloudFormation template into an existing VPC – This option creates the required VPC endpoints, IAM execution roles, and SageMaker domain in an existing VPC with private subnets. Download the template or quick launch the CloudFormation stack by choosing Launch Stack:
      Launch CloudFormation stack

Note that if you’re creating a new SageMaker domain, you must configure the domain to be in a private VPC without internet access to be able to add the connector to Amazon DocumentDB. To learn more, refer to Configure Amazon SageMaker Canvas in a VPC without internet access.

  1. Follow the tutorial to load sample restaurant data into Amazon DocumentDB.
  2. Add access to Amazon Bedrock and the Anthropic Claude model within it. For more information, see Add model access.

Create an Amazon DocumentDB connector in SageMaker Canvas

After you create your SageMaker domain, complete the following steps:

  1. On the Amazon DocumentDB console, choose No-code machine learning in the navigation pane.
  2. Under Choose a domain and profile¸ choose your SageMaker domain and user profile.
  3. Choose Launch Canvas to launch SageMaker Canvas in a new tab.

When SageMaker Canvas finishes loading, you will land on the Data flows tab.

  1. Choose Create to create a new data flow.
  2. Enter a name for your data flow and choose Create.
  3. Add a new Amazon DocumentDB connection by choosing Import data, then choose Tabular for Dataset type.
  4. On the Import data page, for Data Source, choose DocumentDB and Add Connection.
  5. Enter a connection name such as demo and choose your desired Amazon DocumentDB cluster.

Note that SageMaker Canvas will prepopulate the drop-down menu with clusters in the same VPC as your SageMaker domain.

  1. Enter a user name, password, and database name.
  2. Finally, select your read preference.

To protect the performance of primary instances, SageMaker Canvas defaults to Secondary, meaning that it will only read from secondary instances. When read preference is Secondary preferred, SageMaker Canvas reads from available secondary instances, but will read from the primary instance if a secondary instance is not available. For more information on how to configure an Amazon DocumentDB connection, see the Connect to a database stored in AWS.

  1. Choose Add connection.

If the connection is successful, you will see collections in your Amazon DocumentDB database shown as tables.

  1. Drag your table of choice to the blank canvas. For this post, we add our restaurant data.

The first 100 rows are displayed as a preview.

  1. To start analyzing and preparing your data, choose Import data.
  2. Enter a dataset name and choose Import data.

Analyze data using generative AI

Next, we want to get some insights on our data and look for patterns. SageMaker Canvas provides a natural language interface to analyze and prepare data. When the Data tab loads, you can start chatting with your data with the following steps:

  1. Choose Chat for data prep.
  2. Gather insights about your data by asking questions like the samples shown in the following screenshots.

To learn more about how to use natural language to explore and prepare data, refer to Use natural language to explore and prepare data with a new capability of Amazon SageMaker Canvas.

Let’s get a deeper sense of our data quality by using the SageMaker Canvas Data Quality and Insights Report, which automatically evaluates data quality and detects abnormalities.

  1. On the Analyses tab, choose Data Quality and Insights Report.
  2. Choose rating as the target column and Regression as the problem type, then choose Create.

This will simulate model training and provide insights on how we can improve our data for machine learning. The complete report is generated in a few minutes.

Our report shows that 2.47% of rows in our target have missing values—we’ll address that in the next step. Additionally, the analysis shows that the address line 2, name, and type_of_food features have the most prediction power in our data. This indicates basic restaurant information like location and cuisine may have an outsized impact on ratings.

Prepare data for machine learning

SageMaker Canvas offers over 300 built-in transformations to prepare your imported data. For more information on transformation features of SageMaker Canvas, refer to Prepare data with advanced transformations. Let’s add some transformations to get our data ready for training an ML model.

  1. Navigate back to the Data flow page by choosing the name of your data flow at the top of the page.
  2. Choose the plus sign next to Data types and choose Add transform.
  3. Choose Add step.
  4. Let’s rename the address line 2 column to cities.
    1. Choose Manage columns.
    2. Choose Rename column for Transform.
    3. Choose address line 2 for Input column, enter cities for New name, and choose Add.
  5. Additionally, lets drop some unnecessary columns.
    1. Add a new transform.
    2. For Transform, choose Drop column.
    3. For Columns to drop, choose URL and restaurant_id.
    4. Choose Add.
      [
  6. Our rating feature column has some missing values, so let’s fill in those rows with the average value of this column.
    1. Add a new transform.
    2. For Transform, choose Impute.
    3. For Column type, choose Numeric.
    4. For Input columns, choose the rating column.
    5. For Imputing strategy, choose Mean.
    6. For Output column, enter rating_avg_filled.
    7. Choose Add.
  7. We can drop the rating column because we have a new column with filled values.
  8. Because type_of_food is categorical in nature, we’ll want to numerically encode it. Let’s encode this feature using the one-hot encoding technique.
    1. Add a new transform.
    2. For Transform, choose One-hot encode.
    3. For Input columns, choose type_of_food.
    4. For Invalid handling strategy¸ choose Keep.
    5. For Output style¸ choose Columns.
    6. For Output column, enter encoded.
    7. Choose Add.

Build a model and generate predictions

Now that we have transformed our data, let’s train a numeric ML model to predict the ratings for restaurants.

  1. Choose Create model.
  2. For Dataset name, enter a name for the dataset export.
  3. Choose Export and wait for the transformed data to be exported.
  4. Choose the Create model link at the bottom left corner of the page.

You can also select the dataset from the Data Wrangler feature on the left of the page.

  1. Enter a model name.
  2. Choose Predictive analysis, then choose Create.
  3. Choose rating_avg_filled as the target column.

SageMaker Canvas automatically selects a suitable model type.

  1. Choose Preview model to ensure there are no data quality issues.
  2. Choose Quick build to build the model.

The model creation will take approximately 2–15 minutes to complete.

You can view the model status after the model finishes training. Our model has an RSME of 0.422, which means the model often predicts the rating of a restaurant within +/- 0.422 of the actual value, a solid approximation for the rating scale of 1–6.

  1. Finally, you can generate sample predictions by navigating to the Predict tab.

Clean up

To avoid incurring future charges, delete the resources you created while following this post. SageMaker Canvas bills you for the duration of the session, and we recommend logging out of SageMaker Canvas when you’re not using it. Refer to Logging out of Amazon SageMaker Canvas for more details.

Conclusion

In this post, we discussed how you can use SageMaker Canvas for generative AI and ML with data stored in Amazon DocumentDB. In our example, we showed how an analyst can quickly build a high-quality ML model using a sample restaurant dataset.

We showed the steps to implement the solution, from importing data from Amazon DocumentDB to building an ML model in SageMaker Canvas. The entire process was completed through a visual interface without writing a single line of code.

To start your low-code/no-code ML journey, refer to Amazon SageMaker Canvas.


About the authors

Adeleke Coker is a Global Solutions Architect with AWS. He works with customers globally to provide guidance and technical assistance in deploying production workloads at scale on AWS. In his spare time, he enjoys learning, reading, gaming and watching sport events.

Gururaj S Bayari is a Senior DocumentDB Specialist Solutions Architect at AWS. He enjoys helping customers adopt Amazon’s purpose-built databases. He helps customers design, evaluate, and optimize their internet scale and high performance workloads powered by NoSQL and/or Relational databases.

Tim Pusateri is a Senior Product Manager at AWS where he works on Amazon SageMaker Canvas. His goal is to help customers quickly derive value from AI/ML. Outside of work, he loves to be outdoors, play guitar, see live music, and spend time with family and friends.

Pratik Das is a Product Manager at AWS. He enjoys working with customers looking to build resilient workloads and strong data foundations in the cloud. He brings expertise working with enterprises on modernization, analytical and data transformation initiatives.

Varma Gottumukkala is a Senior Database Specialist Solutions Architect at AWS based out of Dallas Fort Worth. Varma works with the customers on their database strategy and architect their workloads using AWS purpose built databases. Before joining AWS, he worked extensively with relational databases, NOSQL databases and multiple programming languages for the last 22 years.

Read More