Configure Amazon Q Business with AWS IAM Identity Center trusted identity propagation

Amazon Q Business is a fully managed, permission-aware generative artificial intelligence (AI)-powered assistant built with enterprise-grade security and privacy features. Amazon Q Business can be configured to answer questions, provide summaries, generate content, and securely complete tasks based on your enterprise data. The native data source connectors provided by Amazon Q Business can seamlessly integrate and index content from multiple repositories into a unified index. Amazon Q Business uses AWS IAM Identity Center to record the workforce users you assign access to and their attributes, such as group associations. IAM Identity Center is used by many AWS managed applications, such as Amazon Q. You connect your existing source of identities to Identity Center once and can then assign users to any of these AWS services. Because Identity Center serves as the common reference for your users and groups, these AWS applications can give your users a consistent experience as they navigate AWS. For example, it enables user subscription management across Amazon Q offerings and consolidates Amazon Q billing from across multiple AWS accounts. Additionally, Q Business conversation APIs add a layer of privacy protection by using trusted identity propagation enabled by IAM Identity Center.

Amazon Q Business comes with rich API support to perform administrative tasks or to build an AI assistant with a customized user experience for your enterprise. With administrative APIs, you can automate creating Q Business applications, set up data source connectors, build custom document enrichment, and configure guardrails. With conversation APIs, you can chat with and manage conversations with the Q Business AI assistant. Trusted identity propagation provides authorization based on user context, which enhances the privacy controls of Amazon Q Business.

In this blog post, you will learn what trusted identity propagation is and why to use it, how to automate configuration of a trusted token issuer in AWS IAM Identity Center with the provided AWS CloudFormation templates, and which APIs to invoke from your application to facilitate calling Amazon Q Business identity-aware conversation APIs.

Why use trusted identity propagation?

Trusted identity propagation provides a mechanism that enables applications that authenticate outside of AWS to make requests on behalf of their users with the use of a trusted token issuer. Consider a client-server application that uses an external identity provider (IdP) to authenticate a user and provide access to an AWS resource that’s private to the user. For example, your web application might use Okta as an external IdP to authenticate a user to view their private conversations from Q Business. In this scenario, Q Business is unable to use the identity token generated by the third-party provider to provide direct access to the user’s private data, because there is no mechanism to trust an identity token issued by a third party.

To solve this, you can use IAM Identity Center to get the user identity from your external IdP into an AWS Identity and Access Management (IAM) role session, which allows you to authorize requests based on the human, their attributes, and their group memberships, rather than setting up fine-grained permissions in an IAM policy. You exchange the token issued by the external IdP for a token generated by Identity Center, which refers to the corresponding Identity Center user. The web application can then use the new token to initiate a request to Q Business for the private chat conversation. Because that token refers to the corresponding user in Identity Center, Q Business can authorize the requested access to the private conversation based on the user or their group membership as represented in Identity Center.

Some of the benefits of using trusted identity propagation are:

  • Prevents user impersonation and protects against unauthorized access to users’ private data through identity spoofing.
  • Facilitates auditability and fosters responsible use of resources, because Q Business automatically logs API invocations to AWS CloudTrail along with the user identifier.
  • Promotes software design principles rooted in user privacy.

Overview of trusted identity propagation deployment

The following figure is a model of a client-server architecture for trusted identity propagation.

To understand how your application can be integrated with IAM Identity Center for trusted identity propagation, consider the model client-server web application shown in the preceding figure. In this model architecture, the web browser represents the user interface to your application. This could be a web page rendered in a web browser, Slack, Microsoft Teams, or another application. The application server might be a web server running on Amazon Elastic Container Service (Amazon ECS), or a Slack or Microsoft Teams gateway implemented with AWS Lambda. Identity Center itself might be deployed in a delegated admin account (the Identity Account in the preceding figure), or it could be deployed in the same AWS account (the Application Account in the preceding figure) where the application server is deployed along with Amazon Q Business. Finally, you have an OAuth 2.0 OpenID Connect (OIDC) external IdP such as Okta, Ping One, Microsoft Entra ID, or Amazon Cognito for authenticating and authorizing users.

Deployment of trusted identity propagation involves five steps. As a best practice, we recommend that the security owner manages IAM Identity Center updates and the application owner manages application updates, providing clear separation of duties. The security owner is responsible for administering the Identity Center of an organization or account. The application owner is responsible for creating an application on AWS.

  1. The security owner adds the external OIDC IdP’s issuer URL as a trusted token issuer in the IAM Identity Center instance. It’s important that the issuer URL matches the iss claim attribute present in the JSON Web Token (JWT) identity token generated by the IdP after user authentication. This is configured once for a given issuer URL.
  2. The security owner creates a customer managed identity provider application in IAM Identity Center and explicitly configures the specific audience that a given trusted token issuer is authorized to use for token exchange with Identity Center. Because there could be more than one application (or audience) for which the external IdP authenticates users, explicitly specifying an audience helps prevent unauthorized applications from using the token exchange process. It’s important that the audience ID matches the aud claim attribute present in the JWT identity token generated by the IdP after user authentication.
  3. The security owner edits the application policy for the customer managed identity provider application created in the previous step to add or update the IAM execution role used by the application server or AWS Lambda. This helps prevent any unapproved users or applications from invoking the CreateTokenWithIAM API in Identity Center to initiate the token exchange.
  4. The application owner creates and adds an IAM policy to the application execution role to allow the application to invoke the CreateTokenWithIAM API on Identity Center to perform a token exchange and to create temporary credentials using AWS Security Token Service (AWS STS); a sketch of such a policy follows this list.
  5. The application owner creates an IAM role with a policy allowing access to the Q Business conversation APIs, for use with AWS STS to create temporary credentials to invoke Q Business APIs.
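
As an illustration of steps 4 and 5, the following is a minimal sketch (using boto3 and hypothetical ARNs) of the kind of add-on policy an application owner might attach to the application execution role. The CloudFormation templates discussed later in this post generate equivalent policies for you, and the exact actions and resources they use may differ.

import json

import boto3

iam = boto3.client("iam")

# Hypothetical ARNs -- replace with values from your own environment.
IDC_API_APP_ARN = "arn:aws:sso::111122223333:application/ssoins-EXAMPLE/apl-EXAMPLE"
QB_ASSUME_ROLE_ARN = "arn:aws:iam::111122223333:role/QBusinessSTSAssumeRole"

# Allow the application to exchange tokens against the Identity Center customer
# managed application (step 4) and to assume the Q Business access role (step 5).
addon_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowIdentityCenterTokenExchange",
            "Effect": "Allow",
            "Action": "sso-oauth:CreateTokenWithIAM",
            "Resource": IDC_API_APP_ARN,
        },
        {
            "Sid": "AllowAssumeQBusinessRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": QB_ASSUME_ROLE_ARN,
        },
    ],
}

iam.create_policy(
    PolicyName="QBusinessIdentityPropagationAddOn",
    PolicyDocument=json.dumps(addon_policy),
)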

You can use AWS CloudFormation templates, discussed later in this blog, to automate the preceding deployment steps. See the IAM Identity Center documentation for detailed step-by-step instructions on setting up trusted identity propagation. You can also use the AWS Command Line Interface (AWS CLI) setup process in Making authenticated Amazon Q Business API calls using IAM Identity Center.

Important: Choosing to add a trusted token issuer is a security decision that requires careful consideration. Only choose trusted token issuers that you trust to perform the following tasks:

  • Authenticate the user who is specified in the token. Control the audience claim, a claim you configure as the user identifier.
  • Generate a token that IAM Identity Center can exchange for an Identity Center-created token. Control the Identity Center customer managed application policy to add only IAM users, roles, and execution roles that can perform the exchange.

Authorization flow

For a typical web application, the trusted identity propagation process will involve five steps as shown in the following flow diagram.

  1. Sign-in and obtain an authorization code from the IdP.
  2. Use the authorization code and client secret to retrieve the ID token from the IdP.
  3. Exchange the IdP generated JWT ID token with the IAM Identity Center token that includes the AWS STS context identity.
  4. Use the STS context identity to obtain temporary access credentials from AWS STS.
  5. Use temporary access credentials to access Q Business APIs.

An end-to-end implementation of the identity propagation is available for reference in <project_home>/webapp/main.py of AWS Samples – main.py.
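The following is a condensed sketch of steps 3 through 5 using boto3. It assumes you already hold the JWT ID token from your IdP (steps 1 and 2); the Identity Center application ARN, role ARN, Q Business application ID, and Region are placeholders, and error handling is omitted. It illustrates the same flow as the sample, not the sample’s own implementation.

import base64
import json

import boto3

# Placeholders -- substitute the values from your own deployment.
IDC_API_APP_ARN = "arn:aws:sso::111122223333:application/ssoins-EXAMPLE/apl-EXAMPLE"
QB_ASSUME_ROLE_ARN = "arn:aws:iam::111122223333:role/QBusinessSTSAssumeRole"
QB_APPLICATION_ID = "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"


def chat_as_user(idp_id_token: str, question: str, region: str = "us-east-1") -> str:
    # Step 3: exchange the IdP-issued JWT ID token for an Identity Center token.
    sso_oidc = boto3.client("sso-oidc", region_name=region)
    idc_token = sso_oidc.create_token_with_iam(
        clientId=IDC_API_APP_ARN,
        grantType="urn:ietf:params:oauth:grant-type:jwt-bearer",
        assertion=idp_id_token,
    )["idToken"]

    # Pull the sts:identity_context claim out of the Identity Center token payload.
    payload_b64 = idc_token.split(".")[1]
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))
    identity_context = payload["sts:identity_context"]

    # Step 4: create identity-aware temporary credentials with AWS STS.
    sts = boto3.client("sts", region_name=region)
    credentials = sts.assume_role(
        RoleArn=QB_ASSUME_ROLE_ARN,
        RoleSessionName="qbusiness-chat-session",
        ProvidedContexts=[
            {
                "ProviderArn": "arn:aws:iam::aws:contextProvider/IdentityCenter",
                "ContextAssertion": identity_context,
            }
        ],
    )["Credentials"]

    # Step 5: call the Q Business conversation API as the propagated user.
    qbusiness = boto3.client(
        "qbusiness",
        region_name=region,
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )
    response = qbusiness.chat_sync(applicationId=QB_APPLICATION_ID, userMessage=question)
    return response["systemMessage"]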

Sample JWT tokens

In the preceding authorization flow, one of the key steps is step 3, where the JWT ID token from the OAuth IdP is exchanged with IAM Identity Center for an AWS identity-aware JWT token. Key attributes of the respective JWT tokens are explored in the following sections. An understanding of the tokens will help with troubleshooting authorization flow errors.

OpenID Connect JWT ID token

OIDC ID tokens take the form of a JWT, which is a JSON payload that’s signed with the private key of the issuer and can be parsed and verified by the application. In contrast to access tokens, ID tokens are intended to be understood by the OAuth client and include a handful of defined property names that provide information to the application. Important properties include aud, email, iss, and jti, which are used by IAM Identity Center to validate the token issuer, match the user directory, and issue a new Identity Center token. The following code sample shows a JWT identity token issued by an OIDC external IdP (such as Okta).

{
    'amr': ['pwd'],
    'at_hash': '3fMsKeFGoem************',
    'aud': '0oae4epmqqa************',
    'auth_time': 1715792363,
    'email': 'john_doe@******.com',
    'exp': 1715795964,
    'iat': 1715792364,
    'idp': '00oe36vc7kj7************',
    'iss': 'https://*******.okta.com/oauth2/default',
    'jti': 'ID.7l6jFX3KO9M7***********************',
    'name': 'John Doe',
    'nonce': 'SampleNonce',
    'preferred_username': 'john_doe@******.com',
    'sub': '00ue36ou4gCv************',
    'ver': 1
}

IAM Identity Center JWT token with identity context

A sample JWT token generated by CreateTokenWithIAM is shown in the following code sample. This token includes a property called sts:identity_context which allows you to create an identity-enhanced IAM role session using an AWS STS AssumeRole API. The enhanced STS session allows the receiving AWS service to authorize the IAM Identity Center user to perform an action and log the user identity to CloudTrail for auditing.

{
    'act':{
        'sub': 'arn:aws:sso::*********:trustedTokenIssuer/ssoins-*********/74******-7***-7***-d***-fd9*********'
    },
    'aud': 'BTHY************-c9Ed3V************',
    'auth_time': '2024-05-15T16:59:27Z',
    'aws:application_arn': 'arn:aws:sso::************:application/ssoins-************/apl-************',
    'aws:credential_id': 'AAAAAGZE9_8Y******_Zj******',
    'aws:identity_store_arn': 'arn:aws:identitystore::************:identitystore/d-**********',
    'aws:identity_store_id': 'd-**********',
    'aws:instance_account': '************',
    'aws:instance_arn': 'arn:aws:sso:::instance/ssoins-************',
    'exp': 1715795967,
    'iat': 1715792367,
    'iss': 'https://identitycenter.amazonaws.com/ssoins-************',
    'sts:audit_context': 'AQoJb3Jp*********************************Bg==',
    'sts:identity_context': 'AQoJb3Jp********************************************gY=',
    'sub': '34******-d***-7***-b***-e2*********'
}
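
When troubleshooting token exchange errors, it’s often enough to compare the claims above against your configuration. The following is a minimal sketch using the PyJWT library (an assumption; any JWT library works) to print the claims without verifying the signature.

import jwt  # PyJWT


def print_claims(token: str) -> None:
    # Signature verification is skipped here because we only want to inspect
    # claims such as iss, aud, and exp while setting up the trusted token issuer.
    claims = jwt.decode(token, options={"verify_signature": False})
    for key in ("iss", "aud", "sub", "exp"):
        print(f"{key}: {claims.get(key)}")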

Automate configuration of a trusted token issuer using AWS CloudFormation

A broad range of possibilities exists to integrate your application with Amazon Q Business using IAM Identity Center and your enterprise IdP. For all integration projects, Identity Center needs to be configured to use a trusted token issuer. The sample CloudFormation templates discussed in this post focus on helping you automate the core trusted token issuer setup. If you’re new to Amazon Q Business and don’t have all the inputs required to deploy the CloudFormation template, the prerequisites section includes links to resources that can help you get started. You can also follow the tutorial on Configuring sample web application with Okta included in the accompanying AWS Samples repository.

Note: The full source code of the solution using AWS CloudFormation templates and sample web application is available in AWS Samples Repository.

Prerequisites and considerations

Template for configuring AWS IAM Identity Center by a security owner

A security owner can use this CloudFormation template to automate configuration of the trusted token issuer in your IAM Identity Center. Deploy this stack in the AWS account where your Identity Center instance is located. This could be in the same AWS account where your application is deployed as a standalone or account instance, or can be in a delegated admin account managed as part of AWS Organizations.

  1. To launch the stack, choose:
    Launch Stack

You can download the latest version of the CloudFormation template from AWS Samples – TTI CFN.

The following figure shows the stack input for the template

  2. The stack creation requires four parameters:
  • AuthorizedAudiences: The authorized audience is an auto-generated UUID from the third-party IdP service, or a pseudo-ID configured by the administrator of the third-party IdP, that uniquely identifies the client (your application) for which the ID token is generated. The value must match the aud attribute value included in the JWT ID token generated by the third-party identity provider.
  • ClientAppExecutionArn: The Amazon Resource Name (ARN) of the IAM user, group, or execution role that’s used to run your application, which will invoke Identity Center for token exchange and AWS STS for generating temporary credentials. For example, this could be the execution role ARN of the Lambda function where your code is run.
  • IDCInstanceArn: The instance ARN of the IAM Identity Center instance used by your application.
  • TokenIssuerUrl: The URL of the trusted token issuer. The trusted token issuer is a third-party identity provider that will authenticate a user and generate an ID token for authorization purposes. The token URL must match the iss attribute value included in the JWT ID token generated by the third-party identity provider.

The following figure shows the output of the CloudFormation stack to configure a trusted token issuer with IAM Identity Center

The stack creation produces the following output:

  • IDCApiAppArn: The ARN for the IAM Identity Center custom application auth provider. You will use this application to call the Identity Center CreateTokenWithIAM API to exchange the third-party JWT ID token with the Identity Center token.

Validate the configuration

  1. From the AWS Management Console where your IAM Identity Center instance is located, go to the AWS IAM Identity Center console to verify if the trusted token issuer is configured properly.
  2. From the left navigation pane, choose Applications and then choose the Customer Managed tab to see a list of applications, as shown in the following figure. The newly created customer managed IdP application will have the same name as the CloudFormation stack. Choose the application name to open the application configuration page.
  3. On your application configuration page, as shown in the following figure, verify the following:
    1. User and group assignments are set to Do not require assignments.
    2. Trusted applications for identity propagation lists Amazon Q and includes the application scope qbusiness:conversations:access.
    3. Authentication with the trusted token issuer is set to configured.
  4. Next, to verify trusted token issuer configuration, choose Actions on the top right of the page and select Edit configurations from the drop-down menu.
  5. At the bottom of the page, expand Authentication with trusted token issuer and verify the following:
    1. Your issuer URL is selected by default.
    2. The audience ID (Aud claim) is configured properly for the issuer URL, as shown in the following figure.
  6. Next, expand Application credentials to verify that your application execution IAM role is listed.

Depending on your IAM Identity Center instance type, you might not be able to access the customer managed applications page in the console. In such cases, you can use the AWS CLI or SDK to view the configuration. Here is a list of useful AWS CLI commands: list-applications, list-application-access-scopes, get-application-assignment-configuration, describe-trusted-token-issuer, and list-application-grants.
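
The same checks can be scripted with the SDK. The following is a minimal sketch with boto3; the instance and application ARNs are placeholders to substitute with your own values.

import boto3

sso_admin = boto3.client("sso-admin")

IDC_INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # placeholder
IDC_API_APP_ARN = "arn:aws:sso::111122223333:application/ssoins-EXAMPLE/apl-EXAMPLE"  # placeholder

# Customer managed applications registered with the Identity Center instance.
for app in sso_admin.list_applications(InstanceArn=IDC_INSTANCE_ARN)["Applications"]:
    print(app["Name"], app["ApplicationArn"])

# Trusted token issuers configured for the instance.
for tti in sso_admin.list_trusted_token_issuers(InstanceArn=IDC_INSTANCE_ARN)["TrustedTokenIssuers"]:
    print(tti["Name"], tti["TrustedTokenIssuerArn"])

# Scopes and assignment configuration for the customer managed application.
print(sso_admin.list_application_access_scopes(ApplicationArn=IDC_API_APP_ARN)["Scopes"])
print(sso_admin.get_application_assignment_configuration(ApplicationArn=IDC_API_APP_ARN))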

Template for configuring your application by the application owner

To propagate user identities, your application will need to:

  • Invoke the IAM Identity Center instance to exchange a third-party JWT ID token and obtain an Identity Center ID token
  • Invoke AWS STS to generate a temporary credential with an IAM assumed role.

The application owner can use a CloudFormation template to generate the required IAM policy to attach to your application execution role, and to create the IAM role with the required Q Business chat API privileges for use with AWS STS to generate temporary credentials.

Remember to attach the generated add-on policy to your application’s IAM execution role to allow the application to invoke the Identity Center and AWS STS APIs.

  1. To launch the stack, choose:
    Launch Stack

You can download the latest version of the CloudFormation template from AWS Samples – App Roles CFN.

The following figure shows the CloudFormation stack configuration to install IAM roles and policies required for the application to propagate identities

  2. The stack creation takes four parameters, as shown in the preceding figure:
  • ClientAppExecutionArn: The ARN of an IAM user, group, or execution role that is used to run your application and will invoke IAM Identity Center for token exchange and AWS STS for generating temporary credentials. For example, this could be the execution role ARN of Lambda where your code is run.
  • IDCApiAppArn: ARN for the IAM Identity Center custom application auth provider. This will be created as part of the trusted token issuer configuration.
  • KMSKeyId: [Optional] The AWS Key Management Service (AWS KMS) key ID, if the Q Business application is encrypted with a customer managed encryption key.
  • QBApplicationID: Q Business application ID, which your application will use to invoke chat APIs. The STS assume role will be restricted to this application ID.

The following figure shows the output of the CloudFormation stack to install IAM roles and policies required for the application to propagate identities.

The stack creation produces the following outputs:

  • ClientAppExecutionAddOnPolicyArn: This is a customer managed IAM policy created with the required permissions for your application to invoke the IAM Identity Center CreateTokenWithIAM API and call the STS AssumeRole API to generate temporary credentials to call Q Business chat APIs. You can include this policy in your application IAM execution role to allow access for the APIs.
  • QBusinessSTSAssumeRoleArn: This IAM role will include the necessary permissions to call Q Business chat APIs, for use with the STS AssumeRole API call.

Validate the configuration

  1. From the AWS account where your application is deployed, open the AWS IAM console and verify that the IAM role for STS AssumeRole and the customer managed IAM policy for the application execution role are created.
    • To verify the IAM role for STS AssumeRole, obtain the role name from the QBusinessSTSAssumeRoleArn stack output value, choose Roles in the left panel of the IAM console, and enter the role name in the search bar, as shown in the following figure.
  2. Choose the link to the role to open the role and expand the inline policy to review the permissions, as shown in the following figure.
  3. To verify that the add-on IAM policy for the application execution role is created, obtain the IAM policy name from the ClientAppExecutionAddOnPolicyArn stack output value, go to Policies in the IAM console, and search for the policy, as shown in the following figure.
  4. Choose the link to the policy name to open the policy and review the permissions, as shown in the following figure.

Update the application for invoking the Q Business API with identity propagation

Most web applications using OAuth 2.0 with an IdP will have implemented a sign-in mechanism and will invoke the IdP’s token endpoint to retrieve a JWT ID token. However, before invoking Amazon Q Business APIs that require identity propagation, your application needs to be updated to include calls to CreateTokenWithIAM and AssumeRole to facilitate trusted identity propagation.

The CreateTokenWithIAM API enables exchanging the JWT ID token received from the OIDC IdP for an IAM Identity Center-generated JWT token. The newly generated token is then passed to the AssumeRole API to create identity-aware temporary security credentials that you can use to access AWS resources.

Note: Remember to add permissions to your IAM role and user policy to allow invoking these APIs. Alternatively, you can attach the sample policy referenced by ClientAppExecutionAddOnPolicyArn that was created by the CloudFormation template for configuring your application.

Sample access helper methods (get_oidc_id_token, get_idc_sts_id_context, and get_sts_credential) are available in <project_home>/src/qbapi_tools/access_helpers.py (AWS Samples – access_helpers.py). An end-to-end sample implementation of the complete sequence of steps, as depicted in the end-to-end authentication sequence, is provided in <project_home>/webapp/main.py (AWS Samples – main.py).

Restrictions and limitations

The following are some common limitations and restrictions that you might encounter while configuring trusted identity propagation, along with recommendations on how to mitigate them.

Group membership propagation

Enterprises typically manage group membership in their external IdP. However, when using trusted token propagation, the web identity token generated by the external IdP is exchanged with an ID token generated by IAM Identity Center. Thus, when invoking the Q Business API from an STS session enhanced with Identity Center identity context, only the group membership information available for the user in Identity Center is passed to the Q Business API, not the group membership from the external IdP. To mitigate this issue, it’s recommended that all relevant users and groups are synchronized to Identity Center from the external IdP using System for Cross-domain Identity Management (SCIM). For more information, see automatic provisioning (synchronization) of users and groups.

Caching credentials to prevent invalid grant types

You can use a web identity token only once with the CreateTokenWithIAM API. This is to prevent token replay attacks, where an attacker intercepts a JWT and reuses it multiple times, allowing them to bypass authentication and authorization controls. Because it isn’t practical to generate a new ID token for every Q Business API call, it’s recommended that the temporary credentials generated for a Q Business API session using AWS STS AssumeRole are cached and reused for subsequent API calls.
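
The following is a minimal sketch of such a cache, keyed by user and refreshed shortly before the credentials expire. The one-minute buffer and the mint_credentials callback are assumptions for illustration, not part of the sample code.

from datetime import datetime, timedelta, timezone

_credential_cache: dict = {}


def get_cached_credentials(user_id: str, mint_credentials) -> dict:
    # Reuse cached STS credentials until they are within one minute of expiring.
    cached = _credential_cache.get(user_id)
    if cached and cached["Expiration"] - timedelta(minutes=1) > datetime.now(timezone.utc):
        return cached
    # mint_credentials() runs the full ID token -> Identity Center token ->
    # AssumeRole exchange and returns the Credentials dict from AssumeRole.
    _credential_cache[user_id] = mint_credentials()
    return _credential_cache[user_id]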

Clean up

To avoid incurring additional charges, make sure you delete any resources created in this post.

  1. Follow the instructions in Deleting a stack on the AWS CloudFormation console to delete any CloudFormation stacks created using templates provided in this post.
  2. If you enabled an IAM Identity Center instance, follow the instructions to delete your IAM Identity Center instance.
  3. Ensure you unregister or delete any IdP services such as Okta, Entra ID, Ping Identity, or Amazon Cognito that you have created for this post.
  4. Finally, delete any sample code repositories you have cloned or downloaded, and any associated resources deployed as part of setting up the environment for running the samples in the code repository.

Conclusion

Trusted identity propagation is an important mechanism for securely integrating Amazon Q Business APIs into enterprise applications that use external IdPs. By implementing trusted identity propagation with AWS IAM Identity Center, organizations can confidently build AI-powered applications and tools using Amazon Q Business APIs, knowing that user identities are properly verified and protected throughout the process. This approach allows enterprises to harness the full potential of generative AI while maintaining the highest standards of security and privacy. To get started with Amazon Q Business, explore the Getting started guide. To learn more about how trusted token propagation works, see How to develop a user-facing data application with IAM Identity Center and S3 Access Grants.


About the Author

Rajesh Kumar Ravi is a Senior Solutions Architect at Amazon Web Services specializing in building generative AI solutions with Amazon Q Business, Amazon Bedrock, and Amazon Kendra. He is an accomplished technology leader with experience in building innovative AI products, nurturing the builder community, and contributing to the development of new ideas. He enjoys walking and loves to go on short hiking trips outside of work.

Read More

Enhance your media search experience using Amazon Q Business and Amazon Transcribe

In today’s digital landscape, the demand for audio and video content is skyrocketing. Organizations are increasingly using media to engage with their audiences in innovative ways. From product documentation in video format to podcasts replacing traditional blog posts, content creators are exploring diverse channels to reach a wider audience. The rise of virtual workplaces has also led to a surge in content captured through recorded meetings, calls, and voicemails. Additionally, contact centers generate a wealth of media content, including support calls, screen-share recordings, and post-call surveys.

We are excited to introduce Mediasearch Q Business, an open source solution powered by Amazon Q Business and Amazon Transcribe. Mediasearch Q Business builds on the Mediasearch solution powered by Amazon Kendra and enhances the search experience using Amazon Q Business. Mediasearch Q Business supercharges the way you consume media files by using them as part of the knowledge base used by Amazon Q Business to generate reliable answers to user questions. The solution also features an enhanced Amazon Q Business query application that allows users to play the relevant section of the original media files or YouTube videos directly from the search results page, providing a seamless and intuitive user experience.

Solution overview

Mediasearch Q Business is straightforward to install and try out.

The solution has two components, as illustrated in the following diagram:

  • A Mediasearch indexer that transcribes media files (audio and video) in an Amazon Simple Storage Service (Amazon S3) bucket or media from a YouTube playlist and ingests the transcriptions into either an Amazon Q Business native index (configured as part of the Amazon Q Business application) or an Amazon Kendra index
  • A Mediasearch finder, which provides a UI and makes API calls to the Amazon Q Business service APIs on behalf of the logged-in user. The responses from the API calls are displayed to the end user.

The Mediasearch indexer finds and transcribes audio and video files stored in an S3 bucket. The indexer can also index YouTube videos from a YouTube playlist as audio files and transcribe these audio files. It prepares the transcriptions by embedding time markers at the start of each sentence, and it indexes each prepared transcription in an Amazon Q Business native retriever or an Amazon Kendra retriever. The indexer runs the first time when you install it, and subsequently runs on an interval that you specify, maintaining the index to reflect any new, modified, or deleted files.

The Mediasearch finder is a web search client that you use to search for content in your Amazon Q Business application. Additionally, the Mediasearch finder includes in-line embedded media players in the search result, so you can see the relevant section of the transcript, and play the corresponding section from the original media (audio files and video files in your media bucket or a YouTube video) without navigating away from the search page.

In the sections that follow, we discuss the following topics:

  • How to deploy the solution to your AWS account
  • How to use it to index and search sample media files
  • How to use the solution with your own media files
  • How the solution works
  • The estimated costs involved
  • How to monitor usage and troubleshoot problems
  • Options to customize and tune the solution
  • How to uninstall and clean up when you’re done experimenting

Prerequisites

Make sure you have the following:

Deploy the Mediasearch Q Business solution

In this section, we walk through deploying the two solution components: the indexer and the finder. We use a CloudFormation stack to deploy the necessary resources in the us-east-1 AWS Region.

If you’re deploying the solution to another Region, follow the instructions in the README available in the Mediasearch Q Business GitHub repository.

Deploy the Mediasearch Q Business indexer component

To deploy the indexer component, complete the following steps:

  1. Choose Launch Stack.
  2. In the Identity center ARN and Retriever selection section, for IdentityCenterInstanceArn, enter the ARN for your IAM Identity Center instance.

You can find the ARN on the Settings page of the IAM Identity Center console. The ARN is a required field.

  3. Use default values for all other parameters. We will customize these values later to suit your specific requirements.
  4. Acknowledge that the stack might create IAM resources with custom names, then choose Create stack.

The indexer stack takes around 10 minutes to deploy. Wait for the indexer to finish deploying before you deploy the Mediasearch Q Business finder.

Deploy the Mediasearch Q Business finder component

The Mediasearch finder uses Amazon Cognito to authenticate users to the solution. For an authenticated user to interact with an Amazon Q Business application, you must configure an IAM Identity Center customer managed application that either supports SAML 2.0 or OAuth 2.0.

In this post, we create a customer managed application that supports OAuth 2.0, a secure way for applications to communicate and share user data without exposing passwords. We use a technique called trusted identity propagation, which allows the Mediasearch Q Business finder app to access the Amazon Q service securely without sharing passwords between the two identity providers (Amazon Cognito and IAM Identity Center in our example).

Instead of sharing passwords, trusted identity propagation uses tokens. Tokens are like digital certificates that prove who the user is and what they’re allowed to do. AWS managed applications that work with trusted identity propagation get tokens directly from IAM Identity Center. IAM Identity Center can also exchange identity tokens and access tokens from external authorization servers like Amazon Cognito. This lets an application authenticate users and obtain tokens outside of AWS (like with Amazon Cognito, Microsoft Entra ID, or Okta), exchange that token for an IAM Identity Center token, and then use the new token to request access to AWS services like Amazon Q Business.

For more information, see Using trusted identity propagation with customer managed applications.

When the IAM Identity Center instance is in the same account where you are deploying the Mediasearch Q Business solution, the finder stack allows you to automatically create the IAM Identity Center customer managed application as part of the stack deployment.

If you use the organization instance of IAM Identity Center enabled in your management account, then you will be deploying the Mediasearch Q Business finder stack in a different AWS account. In this case, follow the steps in the README to create an IAM Identity Center application manually.

To deploy the finder component and create the IAM Identity Center customer managed application, complete the following steps:

  1. Choose Launch Stack.
  2. For IdentityCenterInstanceArn, enter the ARN for the IAM Identity Center instance. This is the same value you used while deploying the indexer stack.
  3. For CreateIdentityCenterApplication, choose Yes to create the IAM Identity Center application for the Mediasearch finder application.
  4. Under Mediasearch Indexer parameters, enter the Amazon Q Business application ID that was created by the indexer stack. You can copy this from the QBusinessApplicationId output of the indexer stack.
  5. Select the retriever type that was used to deploy the Mediasearch indexer (if you deployed an Amazon Kendra index, select Kendra; otherwise, select Native).
  6. If you selected Kendra, enter the Amazon Kendra index ID that was used by the indexer stack.
  7. For MediaBucketNames, use the MediaBucketsUsed output from the indexer CloudFormation stack to allow the search page to access media files across YTMediaBucket and Mediabucket.
  8. Acknowledge that the stack might create IAM resources with custom names, then choose Create stack.

Configure user access to Amazon Q Business

To access the Mediasearch Q Business solution, add a user with an appropriate subscription to the Amazon Q Business application and to the IAM Identity Center customer managed application.

Add a user to the Amazon Q Business application

To start using the Amazon Q Business application, you can add users or groups to the Amazon Q Business application from your IAM Identity Center instance. Complete the following steps to add a user to the application:

  1. Access the Amazon Q Business application by choosing the link for QBusinessApplication in the indexer CloudFormation stack outputs.
  2. Under Groups and users, on the Users tab, choose Manage access and subscription.
  3. Choose Add groups and users.
  4. Choose Add existing users and groups.
  5. Search for an existing user, choose the user, and choose Assign.
  6. Select the added user and on the Change subscription menu, choose Update subscription tier.
  7. Select the appropriate subscription tier and choose Confirm.

For details of each Amazon Q subscription, refer to Amazon Q Business pricing.

Assign users to the IAM Identity Center customer managed application

Now you can assign users or groups to the IAM Identity Center customer managed application. Complete the following steps to add a user:

  1. From the outputs section of the finder CloudFormation stack, choose the URL for IdentityCenterApplicationConsoleURL to navigate to the customer managed application.
  2. Choose Assign users and groups.
  3. Select users and choose Assign users.

This concludes the user access configuration to the Mediasearch Q Business solution.

Test with the sample media files

When the Mediasearch indexer and finder stacks are deployed, the indexer should have completed processing the audio (mp3) files for the YouTube videos and sample media files (selected AWS Podcast episodes and AWS Knowledge Center videos). You can now run your first Mediasearch query.

  1. To log in to the Mediasearch finder application, choose the URL for MediasearchFinderURL in the stack outputs.

The Mediasearch finder application in your browser will show a splash page for Amazon Q Business.

  2. Choose Get Started to access the Amazon Cognito page.

To access Mediasearch Q Business, you need to log in to the application using a user ID in the Amazon Cognito user pool created by the finder stack. The email address in Amazon Cognito must match the email address for the user in IAM Identity Center. Alternatively, the Mediasearch solution allows you to create a user through the application.

  3. On the Create Account tab, enter your email (which matches the email address in IAM Identity Center), followed by a password and password confirmation, and choose Create Account.

Amazon Cognito will send an email with a confirmation code for email verification.

  4. Enter this confirmation code to complete your email verification.
  5. After email verification, you will be able to log in to the Mediasearch Q Business application.
  6. After you’re logged in, in the Enter a prompt box, write a query, such as “What is AWS Fargate?”

The query returns a response from Amazon Q Business based on the media (sample media files and YouTube audio sources) ingested into the index.


The response includes citations, with reference to sources. Users can verify their answer from Amazon Q Business by playing media files from their S3 buckets or YouTube starting at the time marker where the relevant information is found.

  7. Use the embedded video player to play the original video inline. Observe that the media playback starts at the relevant section of the video based on the time marker.
  8. To play the video full screen in a new browser tab, use the Full screen menu option in the player, or choose the media file hyperlink shown above the answer text.
  9. Choose (right-click) the video file hyperlink, copy the URL, and enter it into a text editor.

If the media is an audio file for a YouTube video, it looks something like the following:

https://www.youtube.com/watch?v=unFVfqj9cQ8&t=36.58s

If the media file is a non-YouTube audio file that resides in MediaBucket, the URL looks like the following:

https://mediasearchtest.s3.amazonaws.com/mediasamples/What_is_an_Interface_VPC_Endpoint_and_how_can_I_create_Interface_Endpoint_for_my_VPC_.mp4?AWSAccessKeyId=ASIAXMBGHMGZLSYWJHGD&Expires=1625526197&Signature=BYeOXOzT585ntoXLDoftkfS4dBU%3D&x-amz-security-token=.... #t=253.52

This is a presigned S3 URL that provides your browser with temporary read access to the media file referenced in the search result. Using presigned URLs means you don’t need to provide permanent public access to all of your indexed media files.
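
The following is a minimal sketch of how such a URL can be generated with boto3; the bucket, object key, and time marker are placeholders.

import boto3

s3 = boto3.client("s3")


def presigned_media_url(bucket: str, key: str, start_seconds: float, expires_in: int = 3600) -> str:
    # Create a temporary, read-only URL for the media object...
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )
    # ...and append a media fragment so playback starts at the time marker
    # recovered from the transcript.
    return f"{url}#t={start_seconds}"


print(presigned_media_url("my-media-bucket", "mediasamples/sample.mp4", 253.52))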

  10. Experiment with additional queries, such as “How has AWS helped customers in building MLOps platform?” or “How can I use Generative AI to improve customer experience?” or try your own questions.

Index and search your own media files

To index media files stored in your own S3 bucket, replace the MediaBucket and MediaFolderPrefix parameters with your own bucket name and prefix when you install or update the indexer component stack, and modify the MediaBucketName parameter with your own bucket name when you install or update the finder component stack. Additionally, you can replace the YouTube playlist (PlayListURL) with your own playlist URL and update the indexer stack.

  1. When creating a new MediaSearch indexer stack, you can choose to use either a native retriever or an Amazon Kendra retriever. You can make this selection using the RetrieverType parameter. When using the Amazon Kendra retriever, you can either let the indexer stack create an Amazon Kendra index or use an existing Amazon Kendra IndexId to add files stored in the new location. To deploy a new indexer, follow the steps from earlier in this post, but replace the defaults to specify the media bucket name and prefix for your own media files, or replace the YouTube playlist URL with your own playlist URL. Make sure that you comply with the YouTube Terms of Service.
  2. Alternatively, update an existing MediaSearch indexer stack to replace the previously indexed files with files from the new location or update the YouTube playlist URL or the number of videos to download from the playlist:
    1. Select the stack on the AWS CloudFormation console, choose Update, then Use current template, then Next.
    2. Modify the media bucket name and prefix parameter values as needed.
    3. Modify the YouTube Playlist URL and Number of YouTube Videos values as needed.
    4. Choose Next twice, select the acknowledgement check box, and choose Update stack.
  3. Update an existing MediaSearch finder stack to change bucket names or add additional bucket names to the MediaBucketNames parameter.

When the MediaSearch indexer stack is successfully created or updated, the indexer automatically finds, transcribes, and indexes the media files stored in your S3 bucket. When it’s complete, you can submit queries and find answers from the audio tracks of your own audio and video files.

You have the option to provide metadata for any or all of your media files. Use metadata to assign values to index attributes for sorting, filtering, and faceting your search results, or to specify access control lists to govern access to the files. Metadata files can be in the same S3 folder as your media files (default), or in a parallel folder structure specified by the optional indexer parameter MetadataFolderPrefix. For more information about how to create metadata files, see Amazon S3 document metadata.
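
For example, a metadata file for a media file named podcast_episode_001.mp3 could look like the following sketch; the attribute names and values are illustrative, and the exact fields available are described in the Amazon S3 document metadata documentation linked above.

import json

# Illustrative metadata; adjust attributes and the access control list to your needs.
metadata = {
    "Title": "AWS Podcast - Episode 001",
    "Attributes": {
        "_category": "podcast",
        "_created_at": "2024-05-15T00:00:00Z",
    },
    "AccessControlList": [
        {"Name": "john_doe@example.com", "Type": "USER", "Access": "ALLOW"}
    ],
}

# The metadata file is named after the media file with a .metadata.json suffix
# and is placed alongside it (or under the MetadataFolderPrefix location).
with open("podcast_episode_001.mp3.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)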

You can also provide customized transcription options for any or all of your media files. This allows you to take full advantage of Amazon Transcribe features such as custom vocabularies, automatic content redaction, and custom language models.

How the Mediasearch solution works

Let’s take a quick look at how the solution works, as illustrated in the following diagram.

The Mediasearch solution has an event-driven serverless computing architecture with the following steps:

  1. You provide an S3 bucket containing the audio and video files you want to index and search. This is also known as the MediaBucket. Leave this blank if you don’t want to index media from your MediaBucket.
  2. You also provide your YouTube playlist URL and the number of videos to index from the YouTube playlist. Make sure that you comply with the YouTube Terms of Service. The YTIndexer will index the latest files from the YouTube playlist. For example, if the number of videos is set to 5, then the YTIndexer will index the five latest videos in the playlist. Any YouTube video that was indexed previously is not indexed again.
  3. An AWS Lambda function fetches the YouTube videos from the playlist as audio (mp3 files) into the YTMediaBucket and also creates a metadata file in the MetadataFolderPrefix location with metadata for the YouTube video. The YouTube videoid along with the related metadata are recorded in an Amazon DynamoDB table (YTMediaDDBQueueTable).
  4. Amazon EventBridge generates events on a repeating interval (every 2 hours, every 6 hours, and so on). These events invoke the Lambda function S3CrawlLambdaFunction.
  5. An AWS Lambda function is invoked initially when the CloudFormation stack is first deployed, and then subsequently by the scheduled events from EventBridge. The S3CrawlLambdaFunction function crawls through the MediaBucket and the YTMediabucket and starts an Amazon Q Business index (or Amazon Kendra) data source sync job. The Lambda function lists all the supported media files (FLAC, MP3, MP4, Ogg, WebM, AMR, or WAV) and associated metadata and transcribe options stored in the user provided S3 bucket.
  6. Each new file is added to another DynamoDB tracking table and submitted to be transcribed by an Amazon Transcribe job. Any file that has been previously transcribed is submitted for transcription again only if it has been modified since it was previously transcribed, or if associated Amazon Transcribe options have been updated. The DynamoDB table is updated to reflect the transcription status and last modified timestamp of each file. Any tracked files that no longer exist in the S3 bucket are removed from the DynamoDB table and from the Amazon Q Business index (or Amazon Kendra index). If no new or updated files are discovered, the Amazon Q Business index (or Amazon Kendra) data source sync job is immediately stopped. The DynamoDB table holds a record for each media file with attributes to track transcription job names and status, and last modified timestamps.
  7. As each Amazon Transcribe job completes, EventBridge generates a job complete event, which invokes another Lambda function (S3JobCompletionLambdaFunction).
  8. The Lambda function processes the transcription job output, generating a modified transcription that has a time marker inserted at the start of each sentence (see the sketch following this list). This modified transcription is indexed in Amazon Q Business (or Amazon Kendra), and the job status for the file is updated in the DynamoDB table. When the last file has been transcribed and indexed, the Amazon Q Business (or Amazon Kendra) data source sync job is stopped.
  9. The index is populated and kept in sync with the transcriptions of all the media files in the S3 bucket monitored by the Mediasearch indexer component, integrated with any additional content from any other provisioned data sources. The media transcriptions are used by the Amazon Q Business application, which allows users to find content and answers to their questions.
  10. The sample finder client application enhances users’ search experience by embedding an inline media player with each source or citation that is based on a transcribed media file. The client uses the time markers embedded in the transcript to start media playback at the relevant section of the original media file.
  11. An Amazon Cognito user pool is used to authenticate users and is configured to exchange tokens from IAM Identity Center to support Amazon Q Business service calls.
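
As an illustration of step 8, the following sketch rebuilds a transcript from an Amazon Transcribe job output file and inserts a time marker at the start of each sentence. The [seconds] marker format is an assumption for illustration, not the exact convention used by the Mediasearch indexer.

import json


def add_sentence_time_markers(transcribe_output_json: str) -> str:
    # Walk the word-level items in the Transcribe output and prefix each new
    # sentence with the start time of its first word.
    items = json.loads(transcribe_output_json)["results"]["items"]
    parts, sentence_start = [], True
    for item in items:
        word = item["alternatives"][0]["content"]
        if item["type"] == "pronunciation":
            if sentence_start:
                parts.append(f"[{item['start_time']}]")
                sentence_start = False
            parts.append(word)
        else:  # punctuation items carry no start_time
            if parts:
                parts[-1] += word
            if word in (".", "?", "!"):
                sentence_start = True
    return " ".join(parts)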

Estimated costs

In addition to Amazon S3 costs associated with storing your media, the Mediasearch solution incurs usage costs from Amazon Q Business, Amazon Kendra (if using an Amazon Kendra index), Amazon Transcribe, and Amazon API Gateway. Additional minor costs are incurred by the other services mentioned after free tier allowances have been used. For more information, see the pricing pages for Amazon Q Business, Amazon Kendra, Amazon Transcribe, Lambda, DynamoDB, and EventBridge.

Monitor and troubleshoot

To see the details of each media file transcript job, navigate to the Transcription jobs page on the Amazon Transcribe console.

Each media file is transcribed only one time, unless the file is modified. Modified files are re-transcribed and re-indexed to reflect the changes.

Choose any transcription job to review the transcription and examine additional job details.

You can check the status of the data source sync by navigating to the Amazon Q Business application deployed by the indexer stack (choose the link on the indexer stack outputs page for QApplication). In the data source section, choose the custom data source and view the status of the sync job.

On the DynamoDB console, choose Tables in the navigation pane. Use your MediaSearch stack name as a filter to display the MediaSearch DynamoDB tables, and examine the items showing each indexed media file and corresponding status.

The table MediaSearch-Indexer-YTMediaDDBQueueTable has one record for each YouTube videoid that is downloaded as an audio (mp3) file along with the metadata for the video like author, view count, video title, and so on.

The table MediaSearch-Indexer-MediaDynamoTable has one record for each media file (including YouTube videos), and contains attributes with information about the file and its processing status.

On the Functions page of the Lambda console, use your indexer stack name as a filter to list the Lambda functions that are part of the solution:

  • The YouTubeVideoIndexer function indexes and downloads YouTube videos if the CloudFormation stack parameter PlayListURL is set to a valid YouTube playlist
  • The S3CrawlLambdaFunction function crawls the YTMediaBucket and the MediaBucket for media files and initiates the transcription jobs for the media files

When the transcription job is complete, a completion event invokes the S3JobCompletionLambdaFunction function, which ingests the transcription into the Amazon Q Business index (or Amazon Kendra index) with any related metadata.

Choose any of the functions to examine the function details, including environment variables, source code, and more. Choose Monitor and View logs in CloudWatch to examine the output of each function invocation and troubleshoot any issues.

On the Functions page of the Lambda console, use your finder stack name as a filter to list the Lambda functions that are part of the solution:

  • The BuildTriggerLambda function runs the build of the finder AWS Amplify application after cloning the AWS CodeCommit repository with the finder ReactJS code.
  • The IDCTokenCreateLambda function uses the authorization header that contains a JWT token from a successful authentication with Amazon Cognito to exchange bearer tokens from IAM Identity Center.
  • The IDCAppCreateLambda function creates an OAuth 2.0 IAM Identity Center application to exchange tokens from IAM Identity Center and a trusted token issuer for the Amazon Cognito user pool.
  • The UserConversationLambda function is called from API Gateway to list or delete Amazon Q Business conversations.
  • The UserPromptsLambda function is called from API Gateway to call the chat_sync API of the Amazon Q Business service.
  • The PreSignedURLCreateLambda function is called from API Gateway to create a presigned URL for S3 buckets. The presigned URL is used to play the media files residing on the Mediabucket that serves as the source for an Amazon Q Business response.

Choose any of the functions to examine the function details, including environment variables, source code, and more. Choose Monitor and View logs in CloudWatch to examine the output of each function invocation and troubleshoot any issues.

Customize and enhance the solution

You can fork the MediaSearch Q Business GitHub repository, enhance the code, and send us pull requests so we can incorporate and share your improvements.

The following are a few suggestions for features you might want to implement:

  • Enhance the indexer stack to allow the existing Amazon Q Business application IDs to be used
  • Extend your search sources to include other video streaming platforms relevant to your organization
  • Build Amazon CloudWatch metrics and dashboards to improve the manageability of MediaSearch

Clean up

When you’re finished experimenting with this solution, clean up your resources by using the AWS CloudFormation console to delete the indexer and finder stacks that you deployed. This deletes all the resources that were created by deploying the solution.

Preexisting Amazon Q Business applications or indexes or IAM Identity Center applications or trusted token issuers that were created manually aren’t deleted.

Conclusion

The combination of Amazon Q Business and Amazon Transcribe enables a scalable, cost-effective solution to surface insights from your media files. You can use the content of your media files to find accurate answers to your users’ questions, whether they’re from text documents or media files, and consume them in their native format. This solution enhances the overall experience of the previous Mediasearch solution by using the powerful generative artificial intelligence (AI) capabilities of Amazon Q Business.

The sample MediaSearch Q Business solution is provided as open source—use it as a starting point for your own solution, and help us make it better by contributing back fixes and features through GitHub pull requests. For expert assistance, AWS Professional Services and other Amazon partners are here to help.

We’d love to hear from you. Let us know what you think in the comments section, or use the issues forum in the MediaSearch Q Business GitHub repository.


About the Authors

Roshan Thomas is a Senior Solutions Architect at Amazon Web Services. He is based in Melbourne, Australia, and works closely with power and utilities customers to accelerate their journey in the cloud. He is passionate about technology and helping customers architect and build solutions on AWS.

Anup Dutta is a Solutions Architect with AWS based in Chennai, India. In his role at AWS, Anup works closely with startups to design and build cloud-centered solutions on AWS.

Bob StrahanBob Strahan is a Principal Solutions Architect in the AWS Language AI Services team.

Abhinav JawadekarAbhinav Jawadekar is a Principal Solutions Architect in the Amazon Q Business service team at AWS. Abhinav works with AWS customers and partners to help them build generative AI solutions on AWS.

Read More

Monks boosts processing speed by four times for real-time diffusion AI image generation using Amazon SageMaker and AWS Inferentia2

This post is co-written with Benjamin Moody from Monks.

Monks is the global, purely digital, unitary operating brand of S4Capital plc. With a legacy of innovation and specialized expertise, Monks combines an extraordinary range of global marketing and technology services to accelerate business possibilities and redefine how brands and businesses interact with the world. Its integration of systems and workflows delivers unfettered content production, scaled experiences, enterprise-grade technology and data science fueled by AI—managed by the industry’s best and most diverse digital talent—to help the world’s trailblazing companies outmaneuver and outpace their competition.

Monks leads the way in crafting cutting-edge brand experiences. We shape modern brands through innovative and forward-thinking solutions. As brand experience experts, we harness the synergy of strategy, creativity, and in-house production to deliver exceptional results. Tasked with using the latest advancements in AWS services and machine learning (ML) acceleration, our team embarked on an ambitious project to revolutionize real-time image generation. Specifically, we focused on using AWS Inferentia2 chips with Amazon SageMaker to enhance the performance and cost-efficiency of our image generation processes.

Initially, our setup faced significant challenges regarding scalability and cost management. The primary issues were maintaining consistent inference performance under varying loads while providing a generative experience for the end user. Traditional compute resources were not only costly but also failed to meet the low-latency requirements. This scenario prompted us to explore more advanced solutions from AWS that could offer high-performance computing and cost-effective scalability.

The adoption of AWS Inferentia2 chips and SageMaker asynchronous inference endpoints emerged as a promising solution. These technologies promised to address our core challenges by significantly enhancing processing speed (AWS Inferentia2 chips were four times faster in our initial benchmarks) and reducing costs through fully managed auto scaling inference endpoints.

In this post, we share how we used AWS Inferentia2 chips with SageMaker asynchronous inference to optimize the performance by four times and achieve a 60% reduction in cost per image for our real-time diffusion AI image generation.

Solution overview

The combination of SageMaker asynchronous inference with AWS Inferentia2 allowed us to efficiently handle requests that had large payloads and long processing times while maintaining low latency requirements. A prerequisite was to fine-tune the Stable Diffusion XL model with domain-specific images which were stored in Amazon Simple Storage Service (Amazon S3). For this, we used Amazon SageMaker JumpStart. For more details, refer to Fine-Tune a Model.

The solution workflow consists of the following components:

  • Endpoint creation – We created an asynchronous inference endpoint using our existing SageMaker models, using AWS Inferentia2 chips for higher price/performance.
  • Request handling – Requests were queued by SageMaker upon invocation. Users submitted their image generation requests, where the input payload was placed in Amazon S3. SageMaker then queued the request for processing.
  • Processing and output – After processing, the results were stored back in Amazon S3 in a specified output bucket. During periods of inactivity, SageMaker automatically scaled the instance count to zero, significantly reducing costs because charges only occurred when the endpoint was actively processing requests.
  • Notifications – Completion notifications were set up through Amazon Simple Notification Service (Amazon SNS), notifying users of success or errors.

The following diagram illustrates our solution architecture and process workflow.

Solution architecture

In the following sections, we discuss the key components of the solution in more detail.

SageMaker asynchronous endpoints

SageMaker asynchronous endpoints queue incoming requests to process them asynchronously, which is ideal for large inference payloads (up to 1 GB) or inference requests with long processing times (up to 60 minutes) that need to be processed as requests arrive. The ability to serve long-running requests enabled Monks to effectively serve their use case. Auto scaling the instance count to zero allows you to design cost-optimal inference in response to spiky traffic, so you only pay for when the instances are serving traffic. You can also scale the endpoint instance count to zero in the absence of outstanding requests and scale back up when new requests arrive.

To learn how to create a SageMaker asynchronous endpoint, attach auto scaling policies, and invoke an asynchronous endpoint, refer to Create an Asynchronous Inference Endpoint.
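
As an illustration of the request flow, the following minimal sketch shows how an asynchronous endpoint can be invoked from Python with Boto3. The endpoint name and S3 payload location are placeholders, not the actual resources used in this project.

import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")

# The request payload is staged in Amazon S3; the call returns immediately with
# a pointer to where the result will be written once processing completes.
response = sagemaker_runtime.invoke_endpoint_async(
    EndpointName="image-gen-async-endpoint",                    # placeholder endpoint name
    InputLocation="s3://my-bucket/requests/request-001.json",   # placeholder payload location
    ContentType="application/json",
    InvocationTimeoutSeconds=3600,
)

print(response["OutputLocation"])  # S3 URI where the result will land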

AWS Inferentia2 chips, which powered the SageMaker asynchronous endpoints, are AWS AI chips optimized to deliver high performance for deep learning inference applications at lowest cost. Integrated within SageMaker asynchronous inference endpoints, AWS Inferentia2 chips support scale-out distributed inference with ultra-high-speed connectivity between chips. This setup was ideal for deploying our large-scale generative AI model across multiple accelerators efficiently and cost-effectively.

In the context of our high-profile nationwide campaign, the use of asynchronous computing was key in managing peak and unexpected spikes in concurrent requests to our inference infrastructure, which was expected to be in the hundreds of concurrent requests per second. Asynchronous inference endpoints, like those provided by SageMaker, offer dynamic scalability and efficient task management.

The solution offered the following benefits:

  • Efficient handling of longer processing times – SageMaker asynchronous inference endpoints are perfect for scenarios where each request might involve substantial computational work. These fully managed endpoints queue incoming inference requests and process them asynchronously. This method was particularly advantageous in our application, because it allowed the system to manage fluctuating demand efficiently. The ability to process requests asynchronously makes sure our infrastructure can handle large unexpected spikes in traffic without causing delays in response times.
  • Cost-effective resource utilization – One of the most significant advantages of using asynchronous inference endpoints is their impact on cost management. These endpoints can automatically scale the compute resources down to zero in periods of inactivity, without the risk of dropping or losing requests as resources scale back up.

Custom scaling policies using Amazon CloudWatch metrics

SageMaker endpoint auto scaling behavior is defined through the use of a scaling policy, which helps us scale to multiple users using the application concurrently. This policy defines how and when to scale resources up or down to provide optimal performance and cost-efficiency.

SageMaker synchronous inference endpoints are typically scaled using the InvocationsPerInstance metric, which helps determine event triggers based on real-time demands. However, for SageMaker asynchronous endpoints, this metric isn’t available due to their asynchronous nature.

We encountered challenges with alternative metrics such as ApproximateBacklogSizePerInstance because they didn’t meet our real-time requirements. The inherent delay in these metrics resulted in unacceptable latency in our scaling processes.

Consequently, we sought a custom metric that could more accurately reflect the real-time load on our SageMaker instances.

Amazon CloudWatch custom metrics provide a powerful tool for monitoring and managing your applications and services in the AWS Cloud.
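
As a brief, hedged illustration (the application's actual producer runs in AWS Lambda on Node.js, per the dimensions below), the following Python sketch shows the equivalent CloudWatch call for publishing a custom request-count metric. The metric and dimension names mirror those used in the scaling policy later in this section and are assumptions about the real producer code.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

# Publish one datapoint per handled request; names mirror the scaling policy below.
cloudwatch.put_metric_data(
    Namespace="ImageGenAPI",
    MetricData=[
        {
            "MetricName": "NumberOfInferenceRequests",
            "Dimensions": [
                {"Name": "service", "Value": "ImageGenerator"},
                {"Name": "executionEnv", "Value": "AWS_Lambda_nodejs18.x"},
                {"Name": "region", "Value": "us-west-2"},
            ],
            "Value": 1,
            "Unit": "Count",
        }
    ],
)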

We had previously established a range of custom metrics to monitor various aspects of our infrastructure, including a particularly crucial one for tracking cache misses during image generation. Due to the nature of asynchronous endpoints, which don’t provide the InvocationsPerInstance metric, this custom cache miss metric became essential. It enabled us to gauge the number of requests contributing to the size of the endpoint queue. With this insight into the number of requests, one of our senior developers began to explore additional metrics available through CloudWatch to calculate the asynchronous endpoint capacity and utilization rate. We used the following calculations:

  • InferenceCapacity = (CPU utilization * 60 / InferenceTimeInSeconds) * InstanceGPUCount
  • Number of inference requests = (served from cache + cache misses)
  • Usage rate = (number of inference requests) / (InferenceCapacity)

The calculations included the following variables:

  • CPU utilization – Represents the average CPU utilization percentage of the SageMaker instances (CPUUtilization CloudWatch metric). It provides a snapshot of how heavily the instances’ CPU resources are currently being used.
  • InferenceCapacity – The total number of inference tasks that the system can process per minute, calculated based on the average CPU utilization and scaled by the number of GPUs available (inf2.48xlarge has 12 GPUs). This metric provides an estimate of the system’s throughput capability per minute.
    • Multiply by 60 / Divide by InferenceTimeInSeconds – This step effectively adjusts the CPUUtilization metric to reflect how it translates into jobs per minute, assuming each job takes 10 seconds. Therefore, (CPU utilization * 60) / 10 represents the theoretical maximum number of jobs that can be processed in one minute based on current or typical CPU utilization.
    • Multiply by 12 – Because the inf2.48xlarge instance has 12 GPUs, this multiplication provides a total capacity in terms of how many jobs all GPUs can handle collectively in 1 minute.
  • Number of inference requests (served from cache + cache misses) – We monitor the total number of inference requests processed, distinguishing between those served from cache and those requiring real-time processing due to cache misses. This helps us gauge the overall workload.
  • Usage rate (number of inference requests) / (InferenceCapacity) – This formula determines the rate of resource usage by comparing the number of operations that invoke new tasks (number of requests) to the total inference capacity (InferenceCapacity).

A higher InferenceCapacity value suggests that we have either scaled up our resources or that our instances are under-utilized. Conversely, a lower capacity value could indicate that we’re reaching our capacity limits and might need to scale out to maintain performance.

Our custom usage rate metric quantifies the usage rate of available SageMaker instance capacity. It’s a composite measure that factors in both the image generation tasks that were served from cache and those that resulted in a cache miss, relative to the total capacity metric. The usage rate is intended to provide insights into how much of the total provisioned SageMaker instance capacity is actively being used for image generation operations. It serves as a key indicator of operational efficiency and helps identify the workload’s operational demands.

We then used the usage rate metric as our auto scaling trigger metric. The use of this trigger in our auto scaling policy made sure SageMaker instances were neither over-provisioned nor under-provisioned. A high value for usage rate might indicate the need to scale up resources to maintain performance. A low value, on the other hand, could signal under-utilization, indicating a potential for cost optimization by scaling down resources.

We applied our custom metrics as triggers for a scaling policy:

# Metric math specification: m1 pulls the endpoint's CPUUtilization metric, m2 pulls
# the application's custom request-count metric, and e1 combines them into the
# utilization rate that the target tracking policy follows.
CustomizedMetricSpecification = {
    "Metrics": [
        {
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "MetricName": "CPUUtilization",
                    "Namespace": "/aws/sagemaker/Endpoints",
                    "Dimensions": [
                        { "Name": "EndpointName", "Value": endpoint_name },
                        { "Name": "VariantName", "Value": "AllTraffic" },
                    ]
                },
                "Stat": "SampleCount"
            },
            "ReturnData": False
        },
        {
            "Id": "m2",
            "MetricStat": {
                "Metric": {
                    "MetricName": "NumberOfInferenceRequests",
                    "Namespace": "ImageGenAPI",
                    "Dimensions": [
                        { "Name": "service", "Value": "ImageGenerator" },
                        { "Name": "executionEnv", "Value": "AWS_Lambda_nodejs18.x" },
                        { "Name": "region", "Value": "us-west-2" },
                    ]
                },
                "Stat": "SampleCount"
            },
            "ReturnData": False
        },
        {
            "Label": "utilization rate",
            "Id": "e1",
            # usage rate = requests / capacity, where capacity = CPU utilization * 60 / 10 seconds per job * 12 accelerators
            "Expression": "IF(m1 != 0, m2 / (m1 * 60 / 10 * 12))",
            "ReturnData": True
        }
    ]
}

import boto3

# Application Auto Scaling client; endpoint_name is assumed to be defined earlier,
# and the endpoint variant must already be registered as a scalable target
# (see the sketch that follows).
aas_client = boto3.client("application-autoscaling")

service_namespace = "sagemaker"
resource_id = f"endpoint/{endpoint_name}/variant/AllTraffic"
scalable_dimension = "sagemaker:variant:DesiredInstanceCount"

aas_client.put_scaling_policy(
    PolicyName=endpoint_name,
    PolicyType="TargetTrackingScaling",
    ServiceNamespace=service_namespace,
    ResourceId=resource_id,
    ScalableDimension=scalable_dimension,
    TargetTrackingScalingPolicyConfiguration={
        "CustomizedMetricSpecification": CustomizedMetricSpecification,
        "TargetValue": 0.75,        # track a 75% utilization rate
        "ScaleOutCooldown": 60,     # seconds before another scale-out
        "ScaleInCooldown": 120,     # seconds before another scale-in
        "DisableScaleIn": False,
    },
)
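
Note that put_scaling_policy assumes the endpoint variant was previously registered as a scalable target. The following minimal sketch shows that prerequisite step; the capacity limits are illustrative assumptions, with MinCapacity=0 enabling the scale-to-zero behavior described earlier.

# Register the endpoint variant as a scalable target before attaching the policy.
# MinCapacity=0 allows scaling to zero during idle periods; MaxCapacity is illustrative.
aas_client.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=f"endpoint/{endpoint_name}/variant/AllTraffic",
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=0,
    MaxCapacity=5,
)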

Deployment on AWS Inferentia2 chips

The integration of AWS Inferentia2 chips into our SageMaker inference endpoints not only resulted in a four-times increase in inference performance for our finely-tuned Stable Diffusion XL model, but also significantly enhanced cost-efficiency. Specifically, SageMaker instances powered by these chips reduced our deployment costs by 60% compared to other comparable instances on AWS. This substantial reduction in cost, coupled with improved performance, underscores the value of using AWS Inferentia2 for intensive computational tasks such as real-time diffusion AI image generation.

Given the importance of swift response times for our specific use case, we established an acceptance criterion of single-digit-second latency.

SageMaker instances equipped with AWS Inferentia2 chips successfully optimized our infrastructure to deliver image generation in just 9.7 seconds. This enhancement not only met our performance requirements at a low cost, but also provided a seamless and engaging user experience owing to the high availability of Inferentia2 chips.

The effort to integrate with the Neuron SDK also proved highly beneficial. The optimized model not only met our performance criteria, but also enhanced the overall efficiency of our inference processes.
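
The compilation details for our Stable Diffusion XL pipeline are beyond the scope of this post, but the following toy sketch illustrates the general Neuron compilation flow with torch_neuronx. The module, input shapes, and file name are placeholders, not our production model.

import torch
import torch_neuronx

# Toy module standing in for a model component (in practice, diffusion pipelines
# are compiled component by component, such as the UNet and text encoders).
model = torch.nn.Sequential(
    torch.nn.Linear(128, 128),
    torch.nn.ReLU(),
).eval()
example_input = torch.rand(1, 128)

# Trace and compile the module for AWS Inferentia2 (Neuron), then save the
# compiled artifact for loading at inference time.
neuron_model = torch_neuronx.trace(model, example_input)
torch.jit.save(neuron_model, "model_neuron.pt")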

Results and benefits

The implementation of SageMaker asynchronous inference endpoints significantly enhanced our architecture’s ability to handle varying traffic loads and optimize resource utilization, leading to marked improvements in performance and cost-efficiency:

  • Inference performance – The AWS Inferentia2 setup processed an average of 27,796 images per instance per hour, giving us a 2x improvement in throughput over comparable accelerated compute instances.
  • Inference savings – In addition to performance enhancements, the AWS Inferentia2 configurations achieved a 60% reduction in cost per image compared to the original estimation. The cost for processing each image with AWS Inferentia2 was $0.000425. Although the initial requirement to compile models for the AWS Inferentia2 chips introduced an additional time investment, the substantial throughput gains and significant cost reductions justified this effort. For demanding workloads that necessitate high throughput without compromising budget constraints, AWS Inferentia2 instances are certainly worthy of consideration.
  • Smoothing out traffic spikes – We effectively smoothed out spikes in traffic to provide a continual real-time experience for end-users. As shown in the following figure, SageMaker asynchronous endpoint auto scaling and the managed queue prevented significant drift from our goal of single-digit-second latency per image generation.

Image generation request latency

  • Scheduled scaling to manage demand – We can scale up and back down on a schedule to cover more predictable traffic demands, reducing inference costs while still meeting demand. The following figure illustrates the impact of auto scaling reacting to unexpected demand as well as scaling up and down on a schedule.

Utilization rate

Conclusion

In this post, we discussed the potential benefits of applying SageMaker and AWS Inferentia2 chips within a production-ready generative AI application. SageMaker fully managed asynchronous endpoints provide an application time to react to both unexpected and predictable demand in a structured manner, even for high-demand applications such as image-based generative AI. Despite the learning curve involved in compiling the Stable Diffusion XL model to run on AWS Inferentia2 chips, using AWS Inferentia2 allowed us to achieve our demanding low-latency inference requirements, providing an excellent user experience, all while remaining cost-efficient.

To learn more about SageMaker deployment options for your generative AI use cases, refer to the blog series Model hosting patterns in Amazon SageMaker. You can get started with hosting a Stable Diffusion model with SageMaker and AWS Inferentia2 by using the following example.

Discover how Monks serves as a comprehensive digital partner by integrating a wide array of solutions. These encompass media, data, social platforms, studio production, brand strategy, and cutting-edge technology. Through this integration, Monks enables efficient content creation, scalable experiences, and AI-driven data insights, all powered by top-tier industry talent.


About the Authors

Benjamin Moody is a Senior Solutions Architect at Monks. He focuses on designing and managing high-performance, robust, and secure architectures, utilizing a broad range of AWS services. Ben is particularly adept at handling projects with complex requirements, including those involving generative AI at scale. Outside of work, he enjoys snowboarding and traveling.

Karan Jain is a Senior Machine Learning Specialist at AWS, where he leads the worldwide Go-To-Market strategy for Amazon SageMaker Inference. He helps customers accelerate their generative AI and ML journey on AWS by providing guidance on deployment, cost-optimization, and GTM strategy. He has led product, marketing, and business development efforts across industries for over 10 years, and is passionate about mapping complex service features to customer solutions.

Raghu Ramesha is a Senior Gen AI/ML Specialist Solutions Architect with AWS. He focuses on helping enterprise customers build and deploy AI/ML production workloads to Amazon SageMaker at scale. He specializes in generative AI, machine learning, and computer vision domains, and holds a master’s degree in Computer Science from UT Dallas. In his free time, he enjoys traveling and photography.

Rupinder Grewal is a Senior Gen AI/ML Specialist Solutions Architect with AWS. He currently focuses on model serving and MLOps on SageMaker. Prior to this role, he worked as a Machine Learning Engineer building and hosting models. Outside of work, he enjoys playing tennis and biking on mountain trails.

Parag Srivastava is a Senior Solutions Architect at AWS, where he has been helping customers in successfully applying generative AI to real-life business scenarios. During his professional career, he has been extensively involved in complex digital transformation projects. He is also passionate about building innovative solutions around geospatial aspects of addresses.

Read More

Implement web crawling in Knowledge Bases for Amazon Bedrock

Implement web crawling in Knowledge Bases for Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

With Amazon Bedrock, you can experiment with and evaluate top FMs for various use cases. It allows you to privately customize them with your enterprise data using techniques like Retrieval Augmented Generation (RAG), and build agents that run tasks using your enterprise systems and data sources. Knowledge Bases for Amazon Bedrock enables you to aggregate data sources into a repository of information. With knowledge bases, you can effortlessly build an application that takes advantage of RAG.

Accessing up-to-date and comprehensive information from various websites is crucial for many AI applications in order to have accurate and relevant data. Customers using Knowledge Bases for Amazon Bedrock want to extend the capability to crawl and index their public-facing websites. By integrating web crawlers into the knowledge base, you can gather and utilize this web data efficiently. In this post, we explore how to achieve this seamlessly.

Web crawler for knowledge bases

With a web crawler data source in the knowledge base, you can create a generative AI web application for your end-users based on the website data you crawl using either the AWS Management Console or the API. The default crawling behavior of the web connector starts by fetching the provided seed URLs and then traversing all child links within the same top primary domain (TPD) and having the same or deeper URL path.

The current considerations are that the URL can’t require any authentication, it can’t be an IP address for its host, and its scheme has to start with either http:// or https://. Additionally, the web connector will fetch supported non-HTML files such as PDFs, text files, markdown files, and CSVs referenced in the crawled pages regardless of their URL, as long as they aren’t explicitly excluded. If multiple seed URLs are provided, the web connector will crawl a URL if it fits any seed URL’s TPD and path. You can have up to 10 source URLs, which the knowledge base uses as starting points for crawling.

However, the web connector doesn’t traverse pages across different domains by default, although it will still retrieve supported non-HTML files. This keeps the crawling process within the specified boundaries, maintaining focus and relevance to the targeted data sources.

Understanding the sync scope

When setting up a knowledge base with web crawl functionality, you can choose from different sync types to control which webpages are included. The following examples show the paths that will be crawled for a given source URL under each sync scope (https://example.com is used for illustration purposes):

  • Default – Source URL: https://example.com/products. Example paths crawled: https://example.com/products, https://example.com/products/product1, https://example.com/products/product, and https://example.com/products/discounts. The crawl stays on the same host and the same initial path as the source URL.
  • Host only – Source URL: https://example.com/sellers. Example paths crawled: https://example.com/, https://example.com/products, https://example.com/sellers, and https://example.com/delivery. The crawl stays on the same host as the source URL.
  • Subdomains – Source URL: https://example.com. Example paths crawled: https://blog.example.com, https://blog.example.com/posts/post1, https://discovery.example.com, and https://transport.example.com. The crawl covers any subdomain of the primary domain of the source URL.

You can set the maximum throttling for crawling speed to control the maximum crawl rate. Higher values will reduce the sync time. However, the crawling job will always adhere to the domain’s robots.txt file if one is present, respecting standard robots.txt directives like ‘Allow’, ‘Disallow’, and crawl rate.

You can further refine the scope of URLs to crawl by using inclusion and exclusion filters. These filters are regular expression (regex) patterns applied to each URL. If a URL matches any exclusion filter, it will be ignored. Conversely, if inclusion filters are set, the crawler will only process URLs that match at least one inclusion filter and are still within the scope. For example, to exclude URLs ending in .pdf, you can use the regex ^.*\.pdf$. To include only URLs containing the word “products,” you can use the regex .*products.*.

Solution overview

In the following sections, we walk through the steps to create a knowledge base with a web crawler and test it. We also show how to create a knowledge base with a specific embedding model and an Amazon OpenSearch Service vector collection as a vector database, and discuss how to monitor your web crawler.

Prerequisites

Make sure you have permission to crawl the URLs you intend to use, and adhere to the Amazon Acceptable Use Policy. Also make sure any bot detection features are turned off for those URLs. A web crawler in a knowledge base uses the user-agent bedrockbot when crawling webpages.

Create a knowledge base with a web crawler

Complete the following steps to implement a web crawler in your knowledge base:

  1. On the Amazon Bedrock console, in the navigation pane, choose Knowledge bases.
  2. Choose Create knowledge base.
  3. On the Provide knowledge base details page, set up the following configurations:
    1. Provide a name for your knowledge base.
    2. In the IAM permissions section, select Create and use a new service role.
    3. In the Choose data source section, select Web Crawler as the data source.
    4. Choose Next.
  4. On the Configure data source page, set up the following configurations:
    1. Under Source URLs, enter https://www.aboutamazon.com/news/amazon-offices.
    2. For Sync scope, select Host only.
    3. For Include patterns, enter ^https?://www.aboutamazon.com/news/amazon-offices/.*$.
    4. For Exclude patterns, enter .*plants.* (we don’t want any page with a URL containing the word “plants”).
    5. For Content chunking and parsing, choose Default.
    6. Choose Next.
  5. On the Select embeddings model and configure vector store page, set up the following configurations:
    1. In the Embeddings model section, choose Titan Text Embeddings v2.
    2. For Vector dimensions, enter 1024.
    3. For Vector database, choose Quick create a new vector store.
    4. Choose Next.
  6. Review the details and choose Create knowledge base.

In the preceding instructions, the combination of Include patterns and Host only sync scope is used to demonstrate the use of the include pattern for web crawling. The same results can be achieved with the default sync scope, as we learned in the previous section of this post.

Create knowledge base web crawler

You can use the Quick create vector store option when creating the knowledge base to create an Amazon OpenSearch Serverless vector search collection. With this option, a public vector search collection and vector index is set up for you with the required fields and necessary configurations. Additionally, Knowledge Bases for Amazon Bedrock manages the end-to-end ingestion and query workflows.

Test the knowledge base

Let’s go over the steps to test the knowledge base with a web crawler as the data source:

  1. On the Amazon Bedrock console, navigate to the knowledge base that you created.
  2. Under Data source, select the data source name and choose Sync. It could take several minutes to hours to sync, depending on the size of your data.
  3. When the sync job is complete, in the right panel, under Test knowledge base, choose Select model and select the model of your choice.
  4. Enter one of the following prompts and observe the response from the model:
    1. How do I tour the Seattle Amazon offices?
    2. Provide me with some information about Amazon’s HQ2.
    3. What is it like in Amazon’s New York office?

As shown in the following screenshot, citations are returned within the response, referencing the source webpages. The value of x-amz-bedrock-kb-source-uri is a webpage link, which helps you verify the response accuracy.

knowledge base web crawler testing

Create a knowledge base using the AWS SDK

This following code uses the AWS SDK for Python (Boto3) to create a knowledge base in Amazon Bedrock with a specific embedding model and OpenSearch Service vector collection as a vector database:

import boto3

client = boto3.client('bedrock-agent')

response = client.create_knowledge_base(
    name='workshop-aoss-knowledge-base',
    roleArn='your-role-arn',
    knowledgeBaseConfiguration={
        'type': 'VECTOR',
        'vectorKnowledgeBaseConfiguration': {
            'embeddingModelArn': 'arn:aws:bedrock:your-region::foundation-model/amazon.titan-embed-text-v2:0'
        }
    },
    storageConfiguration={
        'type': 'OPENSEARCH_SERVERLESS',
        'opensearchServerlessConfiguration': {
            'collectionArn': 'your-opensearch-collection-arn',
            'vectorIndexName': 'blog_index',
            'fieldMapping': {
                'vectorField': 'documentid',
                'textField': 'data',
                'metadataField': 'metadata'
            }
        }
    }
)

The following Python code uses Boto3 to create a web crawler data source for an Amazon Bedrock knowledge base, specifying URL seeds, crawling limits, and inclusion and exclusion filters:

import boto3

client = boto3.client('bedrock-agent', region_name='us-east-1')

knowledge_base_id = 'knowledge-base-id'

response = client.create_data_source(
    knowledgeBaseId=knowledge_base_id,
    name='example',
    description='test description',
    dataSourceConfiguration={
        'type': 'WEB',
        'webConfiguration': {
            'sourceConfiguration': {
                'urlConfiguration': {
                    'seedUrls': [
                        {'url': 'https://example.com/'}
                    ]
                }
            },
            'crawlerConfiguration': {
                'crawlerLimits': {
                    'rateLimit': 300
                },
                'inclusionFilters': [
                    '.*products.*'
                ],
                'exclusionFilters': [
                    r'.*\.pdf$'
                ],
                'scope': 'HOST_ONLY'
            }
        }
    }
)
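
After the data source is created, you can start a crawl (sync) and query the knowledge base programmatically. The following sketch assumes the knowledge_base_id and create_data_source response from the previous snippets; the question text and model ARN are illustrative placeholders that you should adjust for your account and Region.

import boto3

bedrock_agent = boto3.client('bedrock-agent', region_name='us-east-1')
bedrock_agent_runtime = boto3.client('bedrock-agent-runtime', region_name='us-east-1')

# Start crawling and indexing the web data source (equivalent to choosing Sync on the console)
data_source_id = response['dataSource']['dataSourceId']
sync_job = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=knowledge_base_id,
    dataSourceId=data_source_id
)

# Once the ingestion job completes, query the knowledge base with RAG
answer = bedrock_agent_runtime.retrieve_and_generate(
    input={'text': "Provide me with some information about Amazon's HQ2."},
    retrieveAndGenerateConfiguration={
        'type': 'KNOWLEDGE_BASE',
        'knowledgeBaseConfiguration': {
            'knowledgeBaseId': knowledge_base_id,
            'modelArn': 'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0'
        }
    }
)
print(answer['output']['text'])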

Monitoring

You can track the status of an ongoing web crawl in your Amazon CloudWatch logs, which should report the URLs being visited and whether they are successfully retrieved, skipped, or failed. The following screenshot shows the CloudWatch logs for the crawl job.

knowledge base cloudwatch monitoring

Clean up

To clean up your resources, complete the following steps:

  1. Delete the knowledge base:
    1. On the Amazon Bedrock console, choose Knowledge bases under Orchestration in the navigation pane.
    2. Choose the knowledge base you created.
    3. Take note of the AWS Identity and Access Management (IAM) service role name in the knowledge base overview.
    4. In the Vector database section, take note of the OpenSearch Serverless collection ARN.
    5. Choose Delete, then enter delete to confirm.
  2. Delete the vector database:
    1. On the OpenSearch Service console, choose Collections under Serverless in the navigation pane.
    2. Enter the collection ARN you saved in the search bar.
    3. Select the collection and choose Delete.
    4. Enter confirm in the confirmation prompt, then choose Delete.
  3. Delete the IAM service role:
    1. On the IAM console, choose Roles in the navigation pane.
    2. Search for the role name you noted earlier.
    3. Select the role and choose Delete.
    4. Enter the role name in the confirmation prompt and delete the role.

Conclusion

In this post, we showcased how Knowledge Bases for Amazon Bedrock now supports the web data source, enabling you to index public webpages. This feature allows you to efficiently crawl and index websites, so your knowledge base includes diverse and relevant information from the web. By taking advantage of the infrastructure of Amazon Bedrock, you can enhance the accuracy and effectiveness of your generative AI applications with up-to-date and comprehensive data.

For pricing information, see Amazon Bedrock pricing. To get started using Knowledge Bases for Amazon Bedrock, refer to Create a knowledge base. For deep-dive technical content, refer to Crawl web pages for your Amazon Bedrock knowledge base. To learn how our Builder communities are using Amazon Bedrock in their solutions, visit our community.aws website.


About the Authors

Hardik Vasa is a Senior Solutions Architect at AWS. He focuses on Generative AI and Serverless technologies, helping customers make the best use of AWS services. Hardik shares his knowledge at various conferences and workshops. In his free time, he enjoys learning about new tech, playing video games, and spending time with his family.

Malini Chatterjee is a Senior Solutions Architect at AWS. She provides guidance to AWS customers on their workloads across a variety of AWS technologies. She brings a breadth of expertise in Data Analytics and Machine Learning. Prior to joining AWS, she was architecting data solutions in financial industries. She is very passionate about semi-classical dancing and performs in community events. She loves traveling and spending time with her family.

Read More

Intuit uses Amazon Bedrock and Anthropic’s Claude to explain taxes in TurboTax to millions of consumer tax filers

Intuit uses Amazon Bedrock and Anthropic’s Claude to explain taxes in TurboTax to millions of consumer tax filers

Intuit is committed to providing its customers innovative solutions that simplify complex financial processes. Tax filing can be a challenge, with its ever-changing regulations and intricate nuances. That’s why the company empowers millions of individuals and small businesses to comprehend tax-related information effortlessly and file with full confidence that their taxes are done right.

For the 2024 tax season, Intuit set out to raise the bar with generative AI, using Anthropic’s advanced language model Claude in Amazon Bedrock—underpinned by Intuit’s proprietary tax engine—to provide individual tax filers with simple-to-understand contextual explanations of tax calculations, backed by real-time accuracy checks.

In this blog post, we discuss the journey of developing a solution that benefited millions of TurboTax customers in 2024.

The challenge

Taxes, with their complicated regulations and nuances, can be a labyrinth for even the most seasoned filers. The tax code includes more than 15,000 federal and state tax forms for individual and business tax filers in the U.S. It is estimated that Americans spend 8.9 billion hours every year doing their taxes.

To streamline and simplify the tax filing experience, Intuit’s AI/GenAI-powered TurboTax products guide consumers through the process. One challenge is to explain complex calculations in a simple-to-understand manner so taxpayers can confidently file their taxes, and seamlessly connect to a human expert whenever needed. According to Nhung Ho, vice president of AI at Intuit, “With Intuit Assist for TurboTax, we wanted to answer every customer’s question about how they arrived at their final tax outcome, and we had to do it in clear, concise language, so they have peace of mind before they file.”

The solution

Applying its years of domain expertise, robust data set, and proprietary tax knowledge engine, Intuit worked closely with Anthropic and Amazon Web Services to further boost filer confidence by integrating Claude via Amazon Bedrock into its AI financial assistant, Intuit Assist for TurboTax. During federal tax reviews, where customers see a summary of their return, the combined work of Intuit, Anthropic, and AWS provides simple explanations of tax calculations. Altogether, helping users better understand how their tax result is calculated gives them assurance that their taxes were filed correctly. The following video shows examples of tax explanations.

Implementing Claude in Amazon Bedrock: a collaborative effort

In June 2023, Intuit announced its proprietary generative AI operating system (GenOS), which runs on AWS infrastructure and empowers the company’s developers to design, build, and deploy breakthrough generative AI experiences. GenOS serves as the primary paved path for rolling out generative AI applications or capabilities in production across the company.

Last fall, Intuit began experimenting with Anthropic’s Claude via Amazon Bedrock.

“After a successful partnership with Amazon SageMaker for its ML capabilities, Intuit looked forward to working with Amazon Bedrock as a managed service to simplify the deployment and management of LLMs,” explained Nhung.

Each year, tax filing is a seasonal process between January 1 and October 15, so the ability to scale rapidly to help meet the needs of millions of Intuit customers during this period was a critical success factor for Intuit’s tax explanations use case with Anthropic Claude in Amazon Bedrock.

“Amazon Bedrock offered Intuit the latency, scalability, and reliability to introduce AI-powered tax explanations to its customers,” Nhung added. “This allowed Intuit to deliver valuable generative AI experiences to its users.”

The company took advantage of AWS elasticity to acquire resources as they needed them, and to release resources when no longer needed. Provisioned throughput for Amazon Bedrock enabled Intuit to achieve the scalability and latency needed to serve millions of customers, beginning in January 2024. Intuit also implemented a multi-region setup to provide resiliency needed for such a critical application.

Additionally, a private connection between TurboTax Virtual Private Cloud (VPC) and Amazon Bedrock made sure that user data was appropriately protected.

“Intuit takes great pains to protect user data with our anti-fraud technology. It is important that user data remain secure. Anthropic’s Claude LLM, managed by Amazon Bedrock, provides that capability,” Nhung explained.

Conclusion

By using Amazon Bedrock to integrate Anthropic’s Claude into its tax preparation software, Intuit realized the following benefits:

  • Simplified Tax Explanations: By demystifying tax complexities, Intuit instilled confidence in users, empowering them to navigate the tax filing process with greater ease and assurance.
  • Simplified Management: The managed experience of running Anthropic’s Claude on Amazon Bedrock made it simple for Intuit to scale securely.

For the 2024 tax season, Intuit’s innovative use of Anthropic’s Claude in Amazon Bedrock is helping demystify the complexities of tax filing. By harnessing the power of advanced language models, the company is redefining the way people understand and engage with tax-related information. Through personalized explanations, tailored guidance, and a commitment to continuous improvement, Intuit is paving the way for a “done for you” future, where the hard work of tax preparation is done on its customers’ behalf, with a seamless path to human tax and bookkeeping experts whenever needed.

As the company moves forward, it remains dedicated to using cutting-edge generative AI technologies to enhance its solutions and provide its customers with the tools they need to achieve financial success. The successful integration of Amazon Bedrock in the tax domain has opened up new opportunities for Intuit to leverage advanced language models in other areas of financial management, solidifying its position as a trailblazer in fintech.


About the Author

Shivanshu Upadhyay is a Principal Solutions Architect in the AWS Industries group. In this role, he helps the most advanced adopters of AWS transform their industry by effectively using data and AI.

Read More

Build generative AI–powered Salesforce applications with Amazon Bedrock

Build generative AI–powered Salesforce applications with Amazon Bedrock

This post is co-authored by Daryl Martis and Darvish Shadravan from Salesforce.

This is the fourth post in a series discussing the integration of Salesforce Data Cloud and Amazon SageMaker.

In Part 1 and Part 2, we show how Salesforce Data Cloud and Einstein Studio integration with SageMaker allows businesses to access their Salesforce data securely using SageMaker’s tools to build, train, and deploy models to endpoints hosted on SageMaker. SageMaker endpoints can be registered with Salesforce Data Cloud to activate predictions in Salesforce. In Part 3, we demonstrate how business analysts and citizen data scientists can create machine learning (ML) models, without code, in Amazon SageMaker Canvas and deploy trained models for integration with Salesforce Einstein Studio to create powerful business applications.

In this post, we show how native integrations between Salesforce and Amazon Web Services (AWS) enable you to Bring Your Own Large Language Models (BYO LLMs) from your AWS account to power generative artificial intelligence (AI) applications in Salesforce. Requests and responses between Salesforce and Amazon Bedrock pass through the Einstein Trust Layer, which promotes responsible AI use across Salesforce.

We demonstrate BYO LLM integration by using Anthropic’s Claude model on Amazon Bedrock to summarize a list of open service cases and opportunities on an account record page, as shown in the following figure.

Partner quote

“We continue to expand on our strong collaboration with AWS with our BYO LLM integration with Amazon Bedrock, empowering our customers with more model choices and allowing them to create AI-powered features and Copilots customized for their specific business needs. Our open and flexible AI environment, grounded with customer data, positions us well to be leaders in AI-driven solutions in the CRM space.”

–Kaushal Kurapati, Senior Vice President of Product for AI at Salesforce

Amazon Bedrock

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can quickly experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don’t have to manage infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.

Salesforce Data Cloud and Einstein Model Builder

Salesforce Data Cloud is a data platform that unifies your company’s data, giving every team a 360-degree view of the customer to drive automation and analytics, personalize engagement, and power trusted AI. Data Cloud creates a holistic customer view by turning volumes of disconnected data into a single, trusted model that’s simple to access and understand. With data harmonized within Salesforce Data Cloud, customers can put their data to work to build predictions and generative AI–powered business processes across sales, support, and marketing.

With Einstein Model Builder, customers can build their own models using Salesforce’s low-code model builder experience or integrate their own custom-built models into the Salesforce platform. Einstein Model Builder’s BYO LLM experience provides the capability to register custom generative AI models from external environments such as Amazon Bedrock and Salesforce Data Cloud.

Once custom Amazon Bedrock models are registered in Einstein Model Builder, models are connected through the Einstein Trust Layer, a robust set of features and guardrails that protect the privacy and security of data, improve the safety and accuracy of AI results, and promote the responsible use of AI across Salesforce. Registered models can then be used in Prompt Builder, a newly launched, low-code prompt engineering tool that allows Salesforce admins to build, test, and fine-tune trusted AI prompts that can be used across the Salesforce platform. These prompts can be integrated with Salesforce capabilities such as Flows and Invocable Actions and Apex.

Solution overview

With the Salesforce Einstein Model Builder BYO LLM feature, you can invoke Amazon Bedrock models in your AWS account. At the time of this writing, Salesforce supports Anthropic Claude 3 models on Amazon Bedrock for BYO LLM. For this post, we use the Anthropic Claude 3 Sonnet model. To learn more about inference with Claude 3, refer to Anthropic Claude models in the Amazon Bedrock documentation.

For your implementation, you may use the model of your choice. Refer to Bring Your Own Large Language Model in Einstein 1 Studio for models supported with Salesforce Einstein Model Builder.

The following image shows a high-level architecture of how you can integrate the LLM from your AWS account into the Salesforce Prompt Builder.

In this post, we show how to build generative AI–powered Salesforce applications with Amazon Bedrock. The following are the high-level steps involved:

  1. Grant Amazon Bedrock invoke model permission to an AWS Identity and Access Management (IAM) user
  2. Register the Amazon Bedrock model in Salesforce Einstein Model Builder
  3. Integrate the prompt template with the field in the Lightning App Builder

Prerequisites

Before deploying this solution, make sure you meet the following prerequisites:

  1. Have access to Salesforce Data Cloud and meet the requirements for using BYO LLM.
  2. Have Amazon Bedrock set up. If this is the first time you are accessing Anthropic Claude models on Amazon Bedrock, you need to request access. You need to have sufficient permissions to request access to models through the console. To request model access, sign in to the Amazon Bedrock console and select Model access at the bottom of the left navigation pane.

Solution walkthrough

To build generative AI–powered Salesforce applications with Amazon Bedrock, implement the following steps.

Grant Amazon Bedrock invoke model permission to an IAM User

Salesforce Einstein Studio requires an access key and a secret to access the Amazon Bedrock API. Follow the instructions to set up an IAM user and access keys. The IAM user must have Amazon Bedrock invoke model permission to access the model. Complete the following steps (a programmatic sketch of an equivalent inline policy follows the list):

  1. On the IAM console, select Users in the navigation panel. On the right side of the console, choose Add permissions and Create inline policy.
  2. On the Specify permissions screen, in the Service dropdown menu, select Bedrock.
  3. Under Actions allowed, enter “invoke.” Under Read, select InvokeModel. Select All under Resources. Choose Next.
  4. On the Review and create screen, under Policy name, enter BedrockInvokeModelPolicy. Choose Create policy.
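
If you prefer to script this step, the following hedged sketch creates an equivalent inline policy with Boto3. The user name is a placeholder, and you may want to scope Resource down to specific model ARNs instead of using a wildcard.

import json
import boto3

iam = boto3.client('iam')

# Inline policy granting only bedrock:InvokeModel, mirroring the console steps above.
iam.put_user_policy(
    UserName='einstein-studio-user',  # placeholder IAM user name
    PolicyName='BedrockInvokeModelPolicy',
    PolicyDocument=json.dumps({
        'Version': '2012-10-17',
        'Statement': [
            {
                'Effect': 'Allow',
                'Action': 'bedrock:InvokeModel',
                'Resource': '*'
            }
        ]
    })
)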

Register Amazon Bedrock model in Einstein Model Builder

  1. On the Salesforce Data Cloud console, under the Einstein Studio tab, choose Add Foundation Model.
  2. Choose Connect to Amazon Bedrock.
  3. For Endpoint information, enter the endpoint name, your AWS account Access Key, and your Secret Key. Enter the Region and Model information. Choose Connect.
  4. Now, create the configuration for the model endpoint you created in the previous steps. Provide inference parameters such as temperature to control how deterministic the LLM’s responses are. Enter a sample prompt to verify the response.
  5. Next, you can save this new model configuration. Enter the name for the saved LLM model and choose Create Model.
  6. After the model creation is successful, choose Close and proceed to create the prompt template.
  7. Select the Model name to open the Model configuration.
  8. Select Create Prompt Template to launch the prompt builder.
  9. Select Field Generation as the prompt template type, enter a template name, set Object to Account, and set Object Field to PB Case and Oppty Summary. This will associate the template with a custom field on the account record object to summarize the cases.

For this demo, a rich text field named PB Case and Oppty Summary was created and added to the Salesforce Account page layout according to the Add a Field Generation Prompt Template to a Lightning Record Page instructions.

  1. Provide the prompt and input variables or objects for data grounding and select the model. Refer to Prompt Builder to learn more.

Integrate prompt template with the field in the Lightning App builder

  1. On the Salesforce console, use the search bar to find Lightning App Builder. Build or edit an existing page to integrate the prompt template with the field as shown in the following screenshot. Refer to Add a Field Generation Prompt Template to a Lightning Record Page for detailed instructions.
  2. Navigate to the Account page and choose the PB Case and Oppty Summary field (enabled for chat completion) to launch the Einstein generative AI assistant and summarize the account case data.

Cleanup

Complete the following steps to clean up your resources.

  1. Delete the IAM user
  2. Delete the foundation model in Einstein Studio

Amazon Bedrock offers on-demand inference pricing, so there are no additional costs from a continued model subscription. To remove model access, refer to the steps in Remove model access.

Conclusion

In this post, we demonstrated how to use your own LLM in Amazon Bedrock to power Salesforce applications. We used summarization of open service cases on an account object as an example to showcase the implementation steps.

Amazon Bedrock is a fully managed service that makes high-performing FMs from leading AI companies and Amazon available for your use through a unified API. You can choose from a wide range of FMs to find the model that is best suited for your use case.

Salesforce Einstein Model Builder lets you register your Amazon Bedrock model and use it in Prompt Builder to create prompts grounded in your data. These prompts can then be integrated with Salesforce capabilities such as Flows and Invocable Actions and Apex. You can then build custom generative AI applications with Claude 3 that are grounded in the Salesforce user experience. Amazon Bedrock requests from Salesforce pass through the Einstein Trust Layer, which provides responsible AI use with features such as dynamic grounding, zero data retention, and toxicity detection while maintaining safety and security standards.

AWS and Salesforce are excited for our mutual customers to harness this integration and build generative AI–powered applications. To learn more and start building, refer to the following resources.


About the Authors

Daryl Martis is the Director of Product for Einstein Studio at Salesforce Data Cloud. He has over 10 years of experience in planning, building, launching, and managing world-class solutions for enterprise customers, including AI/ML and cloud solutions. He has previously worked in the financial services industry in New York City. Follow him on LinkedIn.

Darvish Shadravan is a Director of Product Management in the AI Cloud at Salesforce. He focuses on building AI/ML features for CRM, and is the product owner for the Bring Your Own LLM feature. You can connect with him on LinkedIn.

Rachna Chadha is a Principal Solutions Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Ravi Bhattiprolu is a Sr. Partner Solutions Architect at AWS. Ravi works with strategic partners Salesforce and Tableau to deliver innovative and well-architected products and solutions that help joint customers realize their business objectives.

Ife Stewart is a Principal Solutions Architect in the Strategic ISV segment at AWS. She has been engaged with Salesforce Data Cloud over the last 2 years to help build integrated customer experiences across Salesforce and AWS. Ife has over 10 years of experience in technology. She is an advocate for diversity and inclusion in the technology field.

Mike Patterson is a Senior Customer Solutions Manager in the Strategic ISV segment at AWS. He has partnered with Salesforce Data Cloud to align business objectives with innovative AWS solutions to achieve impactful customer experiences. In Mike’s spare time, he enjoys spending time with his family, sports, and outdoor activities.

Dharmendra Kumar Rai (DK Rai) is a Sr. Data Architect, Data Lake & AI/ML, serving strategic customers. He works closely with customers to understand how AWS can help them solve problems, especially in the AI/ML and analytics space. DK has many years of experience in building data-intensive solutions across a range of industry verticals, including high-tech, FinTech, insurance, and consumer-facing applications.

Read More

Transition your Amazon Forecast usage to Amazon SageMaker Canvas

Transition your Amazon Forecast usage to Amazon SageMaker Canvas

Amazon Forecast is a fully managed service that uses statistical and machine learning (ML) algorithms to deliver highly accurate time series forecasts. Launched in August 2019, Forecast predates Amazon SageMaker Canvas, a popular low-code no-code AWS tool for building, customizing, and deploying ML models, including time series forecasting models.

With SageMaker Canvas, you get faster model building, cost-effective predictions, advanced features such as a model leaderboard and algorithm selection, and enhanced transparency. You can also either use the SageMaker Canvas UI, which provides a visual interface for building and deploying models without needing to write any code or have any ML expertise, or use its automated machine learning (AutoML) APIs for programmatic interactions.

In this post, we provide an overview of the benefits SageMaker Canvas offers and details on how Forecast users can transition their use cases to SageMaker Canvas.

Benefits of SageMaker Canvas

Forecast customers have been seeking greater transparency, lower costs, faster training, and enhanced controls for building time series ML models. In response to this feedback, we have made next-generation time series forecasting capabilities available in SageMaker Canvas, which already offers a robust platform for preparing data and building and deploying ML models. With the addition of forecasting, you can now access end-to-end ML capabilities for a broad set of model types—including regression, multi-class classification, computer vision (CV), natural language processing (NLP), and generative artificial intelligence (AI)—within the unified user-friendly platform of SageMaker Canvas.

SageMaker Canvas offers up to 50% faster model building performance and up to 45% quicker predictions on average for time series models compared to Forecast across various benchmark datasets. Generating predictions is significantly more cost-effective than Forecast, because costs are based solely on the Amazon SageMaker compute resources used. SageMaker Canvas also provides excellent model transparency by offering direct access to trained models, which you can deploy at your chosen location, along with numerous model insight reports, including access to validation data, model- and item-level performance metrics, and hyperparameters employed during training.

SageMaker Canvas includes the key capabilities found in Forecast, including the ability to train an ensemble of forecasting models using both statistical and neural network algorithms. It creates the best model for your dataset by generating base models for each algorithm, evaluating their performance, and then combining the top-performing models into an ensemble. This approach leverages the strengths of different models to produce more accurate and robust forecasts. You have the flexibility to select one or several algorithms for model creation, along with the capability to evaluate the impact of model features on prediction accuracy. SageMaker Canvas simplifies your data preparation with automated solutions for filling in missing values, making your forecasting efforts as seamless as possible. It facilitates an out-of-the-box integration of external information, such as country-specific holidays, through simple UI options or API configurations. You can also take advantage of its data flow feature to connect with external data providers’ APIs to import data, such as weather information. Furthermore, you can conduct what-if analyses directly in the SageMaker Canvas UI to explore how various scenarios might affect your outcomes.

We will continue to innovate and deliver cutting-edge, industry-leading forecasting capabilities through SageMaker Canvas by lowering latency, reducing training and prediction costs, and improving accuracy. This includes expanding the range of forecasting algorithms we support and incorporating new advanced algorithms to further enhance the model building and prediction experience.

Transitioning from Forecast to SageMaker Canvas

Today, we’re releasing a transition package comprising two resources to help you transition your usage from Forecast to SageMaker Canvas. The first component includes a workshop to get hands-on experience with the SageMaker Canvas UI and APIs and to learn how to transition your usage from Forecast to SageMaker Canvas. We also provide a Jupyter notebook that shows how to transform your existing Forecast training datasets to the SageMaker Canvas format.

Before we learn how to build forecast models in SageMaker Canvas using your Forecast input datasets, let’s understand some key differences between Forecast and SageMaker Canvas:

  • Dataset types – Forecast uses multiple datasets – target time series, related time series (optional), and item metadata (optional). In contrast, SageMaker Canvas requires only one dataset, eliminating the need for managing multiple datasets.
  • Model invocation – SageMaker Canvas allows you to invoke the model for a single dataset or a batch of datasets using the UI as well as the APIs. Unlike Forecast, which requires you to first create a forecast and then query it, you simply use the UI or API to invoke the endpoint where the model is deployed to generate forecasts. The SageMaker Canvas UI also gives you the option to deploy the model for inference on SageMaker real-time endpoints. With just a few clicks, you can receive an HTTPS endpoint that can be invoked from within your application to generate forecasts.

In the following sections, we discuss the high-level steps for transforming your data, building a model, and deploying a model using SageMaker Canvas using either the UI or APIs.

Build and deploy a model using the SageMaker Canvas UI

We recommend reorganizing your data sources to directly create a single dataset for use with SageMaker Canvas. Refer to Time Series Forecasts in Amazon SageMaker Canvas for guidance on structuring your input dataset to build a forecasting model in SageMaker Canvas. However, if you prefer to continue using multiple datasets as you do in Forecast, you have the following options to merge them into a single dataset supported by SageMaker Canvas:

  • SageMaker Canvas UI – Use the SageMaker Canvas UI to join the target time series, related time series, and item metadata datasets into one dataset. The following screenshot shows an example dataflow created in SageMaker Canvas to merge the three datasets into one SageMaker Canvas dataset.
  • Python script – Use a Python script to merge the datasets. For sample code and hands-on experience in transforming multiple Forecast datasets into one dataset for SageMaker Canvas, refer to this workshop.

When the dataset is ready, use the SageMaker Canvas UI, available on the SageMaker console, to load the dataset into the SageMaker Canvas application, which uses AutoML to train, build, and deploy the model for inference. The workshop shows how to merge your datasets and build the forecasting model.

After the model is built, there are multiple ways to generate and consume forecasts:

  • Make an in-app prediction – You can generate forecasts using the SageMaker Canvas UI and export them to Amazon QuickSight using built-in integration or download the prediction file to your local desktop. You can also access the generated predictions from the Amazon Simple Storage Service (Amazon S3) storage location where SageMaker Canvas is configured to store model artifacts, datasets, and other application data. Refer to Configure your Amazon S3 storage to learn more about the Amazon S3 storage location used by SageMaker Canvas.
  • Deploy the model to a SageMaker endpoint – You can deploy the model to SageMaker real-time endpoints directly from the SageMaker Canvas UI. These endpoints can be queried by developers in their applications with a few lines of code. You can update the code in your existing application to invoke the deployed model. Refer to the workshop for more details.
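
If you deploy the model to a real-time endpoint, your application code can call it with the SageMaker runtime SDK. The following minimal sketch assumes a hypothetical endpoint name and a CSV payload; the exact input format depends on the columns your Canvas model was trained on, so treat it as an illustration rather than a definitive schema.

import boto3

# Create a SageMaker runtime client to call the deployed endpoint
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="canvas-forecast-endpoint",    # assumption: name of the endpoint deployed from Canvas
    ContentType="text/csv",
    Body="item_001,2024-07-01,120,store_12\n",  # assumption: columns must match your training dataset
)
print(response["Body"].read().decode("utf-8"))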

Build and deploy a model using the SageMaker Canvas (Autopilot) APIs

You can use the sample code provided in the notebook in the GitHub repo to process your datasets, including target time series data, related time series data, and item metadata, into a single dataset needed by SageMaker Canvas APIs.

Next, use the SageMaker AutoML API for time series forecasting to process the data, train the ML model, and deploy the model programmatically. Refer to the sample notebook in the GitHub repo for a detailed implementation on how to train a time series model and produce predictions using the model.
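
At a high level, launching the AutoML time series job from code looks like the following sketch. The job name, S3 locations, column names, forecast settings, and IAM role ARN are placeholders you would replace with your own values; this is an illustration of the create_auto_ml_job_v2 call rather than a complete solution.

import boto3

sagemaker_client = boto3.client("sagemaker", region_name="us-east-1")

# Launch an AutoML V2 job configured for time series forecasting
sagemaker_client.create_auto_ml_job_v2(
    AutoMLJobName="canvas-ts-forecast-job",                      # placeholder job name
    AutoMLJobInputDataConfig=[
        {
            "ChannelType": "training",
            "ContentType": "text/csv;header=present",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://your-bucket/canvas-dataset/",  # placeholder input location
                }
            },
        }
    ],
    OutputDataConfig={"S3OutputPath": "s3://your-bucket/canvas-output/"},
    AutoMLProblemTypeConfig={
        "TimeSeriesForecastingJobConfig": {
            "ForecastFrequency": "D",     # daily forecasts
            "ForecastHorizon": 14,        # predict 14 future periods
            "TimeSeriesConfig": {
                "TargetAttributeName": "demand",        # placeholder column names
                "TimestampAttributeName": "timestamp",
                "ItemIdentifierAttributeName": "item_id",
            },
        }
    },
    RoleArn="arn:aws:iam::111122223333:role/YourSageMakerExecutionRole",  # placeholder role
)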

Refer to the workshop for more hands-on experience.

Conclusion

In this post, we outlined steps to transition from Forecast and build time series ML models in SageMaker Canvas, and provided a data transformation notebook and prescriptive guidance through a workshop. After the transition, you can benefit from a more accessible UI, cost-effectiveness, and higher transparency of the underlying AutoML API in SageMaker Canvas, democratizing time series forecasting within your organization and saving time and resources on model training and deployment.

SageMaker Canvas can be accessed from the SageMaker console. Time series forecasting with Canvas is available in all regions where SageMaker Canvas is available. For more information about AWS Region availability, see AWS Services by Region.


About the Authors

Nirmal Kumar is Sr. Product Manager for the Amazon SageMaker service. Committed to broadening access to AI/ML, he steers the development of no-code and low-code ML solutions. Outside work, he enjoys travelling and reading non-fiction.

Dan Sinnreich is a Sr. Product Manager for Amazon SageMaker, focused on expanding no-code / low-code services. He is dedicated to making ML and generative AI more accessible and applying them to solve challenging problems. Outside of work, he can be found playing hockey, scuba diving, and reading science fiction.

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since a very young age, starting to code at the age of 7. He started learning AI/ML in his later years of university and has been in love with it ever since.

Biswanath Hore is a Solutions Architect at Amazon Web Services. He works with customers early in their AWS journey, helping them adopt cloud solutions to address their business needs. He is passionate about Machine Learning and, outside of work, loves spending time with his family.

Connect Amazon Q Business to Microsoft SharePoint Online using least privilege access controls

Amazon Q Business is the generative artificial intelligence (AI) assistant that empowers employees with your company’s knowledge and data. Microsoft SharePoint Online is used by many organizations as a secure place to store, organize, share, and access their internal data. With generative AI, employees can get answers to their questions, summarize content, or generate insights from data stored in SharePoint Online. Using Amazon Q Business Connectors, you can connect SharePoint Online data to an Amazon Q Business application and start gaining insights from your data quickly.

This post demonstrates how to use Amazon Q Business with SharePoint Online as the data source to provide answers, generate summaries, and present insights using least privilege access controls and best practices recommended by the Microsoft SharePoint Dev Support Team.

Solution overview

In this post, we walk you through the process of setting up an Amazon Q Business application that connects to your SharePoint Online sites using an out-of-the-box Amazon Q Business Connector and configuring it using the Sites.Selected application permission scope. The Sites.Selected permission is important because many organizations implement policies that prevent granting read access on all sites (Sites.Read.All) or full control (Sites.FullControl.All) to any connector.

The solution approach respects users’ existing identities, roles, and permissions by enabling identity crawling and access control lists (ACLs) on the Amazon Q Business connector for SharePoint Online using secure credentials facilitated through AWS Secrets Manager. If a user doesn’t have permissions to access certain data without Amazon Q Business, then they can’t access it using Amazon Q Business either. Only the data the user has access to is used to support the user query.

Prerequisites

The following are the prerequisites necessary to deploy the solution:

  • An AWS account with an AWS Identity and Access Management (IAM) role and user with permissions to create and manage the necessary resources and components for the application. If you don’t have an AWS account, see How do I create and activate a new Amazon Web Services account?
  • An Amazon Q Business application. If you haven’t set one up yet, see Creating an Amazon Q Business application environment.
  • A Microsoft account and a SharePoint Online subscription to create and publish the application using the steps outlined in this post. If you don’t have this, check with your organization admins to create sandboxes for you to experiment in, or create a new account and trial subscription as needed to complete the steps.
  • An application in Microsoft Entra ID with Sites.FullControl application-level permissions, along with its client ID and client secret. This application won’t be used by the Amazon Q Business connector, but it’s needed to grant Sites.Selected permissions exclusively to the target application.

Register a new app in the Microsoft Azure portal

Complete the following steps to register a new app in the Microsoft Azure portal:

  1. Log in to the Azure Portal with your Microsoft account.
  2. Choose New registration.
    1. For Name, provide the name for your application. For this post, we use the name TargetApp. The Amazon Q Business application uses TargetApp to connect to the SharePoint Online site to crawl and index the data.
    2. For Who can use this application or access this API, choose Accounts in this organizational directory only (<Tenant name> only – Single tenant).
    3. Choose Register.
  3. Note down the application (client) ID and the directory (tenant) ID on the Overview page. You’ll need them later when asked for TargetApp-ClientId and TenantId.
  4. Choose API permissions under Manage in the navigation pane.
  5. Choose Add a permission to allow the application to read data in your organization’s directory about the signed-in user.
    1. Choose Microsoft Graph.
    2. Choose Delegated permissions.
    3. Choose User.Read.All from the User section.
    4. Choose GroupMember.Read.All from the GroupMember section.
    5. Choose Sites.Selected from the Sites section.
    6. Choose Add permissions.
  6. On the options menu (three dots) for the original User.Read – Delegated permission, choose Remove permission.
  7. Confirm the removal of the User.Read – Delegated permission.
  8. Choose Grant admin consent for Default Directory.

Registering an App and setting permissions

  1. Choose Certificates & secrets in the navigation pane.
  2. Choose New client secret.
    1. For Description, enter a description.
    2. Choose a value for Expires. Note that in production, you’ll need to manually rotate your secret before it expires.
    3. Choose Add.
    4. Note down the value for your new secret. You’ll need it later when asked for your client secret (TargetApp-ClientSecret).
  3. Optionally, choose Owners to add any additional owners for the application. Owners will be able to manage permissions of the Azure AD application (TargetApp).

Use the Graph API to grant permissions to the application on the SharePoint Online site

In this step, you define which of your SharePoint Online sites will be granted access to TargetApp. The Amazon Q Business application uses TargetApp to connect to the SharePoint Online site to crawl and index the data.

For this post, we use Postman, a platform for using APIs, to grant permissions. To grant permissions to a specific SharePoint Online site, you need to have another Azure AD application, which we refer to as AdminApp, with Sites.FullControl.All permissions.

If you don’t have the prerequisite AdminApp, follow the previous steps to register AdminApp and for Application Permissions, grant Sites.FullControl.All permissions. As mentioned in the prerequisites, AdminApp will be used only to grant SharePoint Online sites access permissions to TargetApp.

We use the ClientId and ClientSecret values of AdminApp from the Azure AD application to get an AccessToken value.

  1. Create a POST request in Postman with the URL https://login.microsoftonline.com/{TenantId}/oauth2/v2.0/token.
  2. In the body of the request, choose x-www-form-urlencoded and set the following key-value pairs:
    1. Set client_id to AdminApp-ClientId.
    2. Set client_secret to AdminApp-ClientSecret.
    3. Set grant_type to client_credentials.
    4. Set scope to https://graph.microsoft.com/.default.

Get access token

  1. Choose Send.
  2. From the returned response, copy the value of access_token. You need it in a later step when asked for the bearer token.
  3. Use the value of access_token from the previous step to grant permissions to TargetApp.
    1. Get the SiteId of the SharePoint Online site by visiting your site URL (for example, https://<yourcompany>.sharepoint.com/sites/{SiteName}) in a browser. You need to log in to the site by providing valid credentials to access the site.
    2. Edit the URL in the browser address bar to append /_api/site/id at the end of {SiteName} to get the SiteId. You need this SiteId in the next step.

Getting site id

  1. Create another POST request in Postman using the URL https://graph.microsoft.com/v1.0/sites/{SiteId}/permissions. Replace {SiteId} in the URL of the request with the SiteId from the previous step.

You can repeat this step for each site you want to include in the Amazon Q Business SharePoint Online connector.

  1. Choose Bearer Token for Type on the Authorization tab.
  2. Enter the value of access_token from earlier for Token.

Grant permissions to target app

  1. For the payload, select raw and enter the following JSON code (replace the <<TargetApp-ClientId>> and <<TargetApp-Name>> values):
{
    "roles": [
        "fullcontrol"
    ],
    "grantedToIdentities": [
        {
            "application": {
                "id": "<<TargetApp-clientId>>",
                "displayName": "<<TargeApp-Name>>"
            }
        }
    ]
}

Complete granting access

  1. Choose Send to complete the process of granting SharePoint Online sites access to the TargetApp Azure AD application.
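
If you prefer to script these Graph API calls instead of using Postman, the following Python sketch performs the same token request and permission grant using the requests library. The tenant ID, client IDs, client secret, site ID, and display name shown are placeholders you supply.

import requests

tenant_id = "<TenantId>"
admin_client_id = "<AdminApp-ClientId>"
admin_client_secret = "<AdminApp-ClientSecret>"
site_id = "<SiteId>"
target_app_client_id = "<TargetApp-ClientId>"
target_app_name = "<TargetApp-Name>"

# Step 1: Get an access token for AdminApp using the client credentials flow
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
    data={
        "client_id": admin_client_id,
        "client_secret": admin_client_secret,
        "grant_type": "client_credentials",
        "scope": "https://graph.microsoft.com/.default",
    },
)
access_token = token_response.json()["access_token"]

# Step 2: Grant TargetApp full control on the specific SharePoint Online site
grant_response = requests.post(
    f"https://graph.microsoft.com/v1.0/sites/{site_id}/permissions",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "roles": ["fullcontrol"],
        "grantedToIdentities": [
            {"application": {"id": target_app_client_id, "displayName": target_app_name}}
        ],
    },
)
print(grant_response.status_code, grant_response.json())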

Configure the Amazon Q Business SharePoint Online connector

Complete the following steps to configure the Amazon Q Business application’s SharePoint Online connector:

  1. On the Amazon Q Business console, choose Add Data source.
  2. Search for and choose SharePoint.
  3. Give it a name and description (optional).
  4. Choose SharePoint Online for Hosting method under Source settings.
  5. Provide the full URL for the SharePoint site that you want to include in crawling and indexing for Site URLs specific to your SharePoint repository.
    1. If the full URL of the site is https://<yourcompany>.sharepoint.com/sites/anycompany, use <yourcompany> as the value for Domain.
  6. Choose OAuth 2.0 authentication for Authentication method.
  7. For TenantId, provide the directory (tenant) ID that you noted down earlier.

The SharePoint connector needs credentials to connect to the SharePoint Online site using the Microsoft Graph API. To facilitate this, create a new Secrets Manager secret. These credentials will not be used in any access logs for the SharePoint Online site.

  1. Choose Create and add a new secret.
  2. Enter a name for the secret.
  3. Enter the user name and password of a SiteCollection administrator on the sites included in the Amazon Q repository.
  4. Enter your client ID and client secret that you got from registering TargetApp in the previous steps.
  5. Choose Save.

Create Secret

  1. Choose Create a new service role to create an IAM role, and enter a name for the role.
  2. For Sync scope, choose Select entities and choose All (or specify the combination of items to sync).
  3. Choose a sync option based on your needs (on demand or at a frequency of your choice). For this post, we choose on-demand.
  4. Choose Add data source.
  5. After the data source is created, choose Sync now to start the crawling and indexing.

Test the solution

To test the solution, you can add users and groups, assign subscriptions, and test user and group access within your Amazon Q Business application.

Clean up

If you’re only experimenting using the steps in this post, delete your application from the Azure Portal and delete the Amazon Q application from the Amazon Q console to avoid incurring costs.

Conclusion

In this post, we discussed how to configure the Amazon Q Business SharePoint Online connector using least privilege access controls that work with site-level least privileges to crawl and index SharePoint Online site content securely. We also demonstrated how to retain and apply ACLs while responding to user conversations.

Organizations can now use their existing SharePoint Online data to gain better insights, generate summaries, and get answers to natural language queries in a conversational way using Amazon Q Business. By connecting SharePoint Online as a data source, employees can interact with the organization’s knowledge and data stored in SharePoint using natural language, making it effortless to find relevant information, extract key points, and derive valuable insights. This can significantly improve productivity, decision-making, and knowledge sharing within the organization.

Try out the solution in this post, and leave your feedback and questions in the comments section.


About the Authors

Surendar Gajavelli is a Sr. Solutions Architect based out of Nashville, TN. He is a passionate technology enthusiast who enjoys working with customers and helping them build innovative solutions.

Abhi Patlolla is a Sr. Solutions Architect based out of the NYC region, helping customers in their cloud transformation, AI/ML, and data initiatives. He is a strategic and technical leader, advising executives and engineers on cloud strategies to foster innovation and positive impact.

Improve the productivity of your customer support and project management teams using Amazon Q Business and Atlassian Jira

Effective customer support and project management are critical aspects of providing effective customer relationship management. Atlassian Jira, a platform for issue tracking and project management functions for software projects, has become an indispensable part of many organizations’ workflows to ensure success of the customer and the product. However, extracting valuable insights from the vast amount of data stored in Jira often requires manual efforts and building specialized tooling. Users such as support engineers, project managers, and product managers need to be able to ask questions about a project, issue, or customer in order to provide excellence in their support for customers’ needs. Generative AI provides the ability to take relevant information from a data source and provide well-constructed answers back to the user.

Building a generative AI-based conversational application that is integrated with the data sources containing the relevant enterprise content requires time, money, and people. You first need to build connectors to the data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. You also need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take action using the data and expertise found in your company’s information repositories, code, and enterprise systems (such as Jira, among others). Amazon Q provides out-of-the-box native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well-written answers. A data source connector is a component of Amazon Q that helps integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including Atlassian Jira, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more, and helps you create your generative AI solution with minimal configuration. For a full list of Amazon Q Business supported data source connectors, see Amazon Q Business connectors.

In this post, we walk you through configuring and integrating Amazon Q Business with Jira to enable your support, project management, product management, leadership, and other teams to quickly get accurate answers to their questions related to the content in Jira projects, issues, and more.

Find accurate answers from content in Jira using Amazon Q Business

After you integrate Amazon Q Business with Jira, users can ask questions that are answered from the descriptions and content of Jira documents. This enables the following use cases:

  • Natural language search – Users can search for tasks, issues, or other project-related information using conversational language, making it straightforward to find the desired data without having to remember specific keywords or filters
  • Summarization – Users can request a concise summary of all issues, tasks, or other entities matching their search query, allowing them to quickly grasp the key points without having to sift through individual document descriptions manually
  • Query clarification – If a user’s query is ambiguous or lacks sufficient context, Amazon Q Business can engage in a dialogue to clarify the intent, so the user receives the most relevant and accurate results

Overview of Jira connector for Amazon Q Business

To crawl and index contents in Jira, you can configure the Amazon Q Business Jira connector as a data source in your Amazon Q business application. When you connect Amazon Q Business to a data source and initiate the sync process, Amazon Q Business crawls and indexes documents from the data source into its index.

Types of documents

In Amazon Q Business, a document is a unit of data. Let’s look at what is considered a document in the context of the Amazon Q Business Jira connector. A document is a collection of information that consists of a title, the content (or body), metadata (data about the document), and access control list (ACL) information to make sure answers are provided only from documents that the user has access to.

The Amazon Q Business Jira connector supports crawling of the following entities in Jira:

  • Projects – Each project is considered a single document
  • Issues – Each issue is considered a single document
  • Comments – Each comment is considered a single document
  • Attachments – Each attachment is considered a single document
  • Worklogs – Each worklog is considered a single document

Additionally, Jira users can create custom objects and custom metadata fields. Amazon Q supports the crawling and indexing of these custom objects and custom metadata.

The Amazon Q Business Jira connector also supports indexing a rich set of metadata from the various entities in Jira and provides the ability to map these source metadata fields to Amazon Q index fields. These field mappings allow you to map Jira field names to Amazon Q index field names. There are three types of metadata fields that Amazon Q connectors support:

  • Default fields – These are required with each document, such as the title, creation date, author, and so on.
  • Optional fields – These are provided by the data source. The administrator can optionally choose one or more of these fields if they contain important and relevant information to obtain accurate answers.
  • Custom metadata fields – These are fields created in the data source in addition to what the data source already provides.

Refer to Jira data source connector field mappings for more information.

Authentication

Before you index the content from Jira, you need to establish a secure connection between the Amazon Q Business connector for Jira with your Jira cloud instance. To establish a secure connection, you need to authenticate with the data source. You can authenticate Amazon Q Business to Jira using basic authentication with a Jira ID and Jira API token.

To authenticate using basic authentication, you create a secret using AWS Secrets Manager with your Jira ID and Jira API token. If you use the AWS Management Console, you can choose to create a new secret or use an existing one. If you use the API, you must provide the Amazon Resource Name (ARN) of an existing secret when you use the CreateDataSource operation.

Refer to Manage API tokens for your Atlassian account for more information on creating and managing API tokens in Jira.
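
If you configure the data source through the API rather than the console, you can also create the secret programmatically. The following sketch is illustrative: the secret name is a placeholder, and the JSON key names are assumptions, so verify the exact secret schema expected by the Jira connector in the Amazon Q Business documentation.

import json
import boto3

secretsmanager = boto3.client("secretsmanager", region_name="us-east-1")

# Store the Jira ID (account email) and API token for the connector to use (key names are assumptions)
response = secretsmanager.create_secret(
    Name="qbusiness-jira-connector-secret",        # placeholder secret name
    SecretString=json.dumps(
        {
            "jiraId": "user@example.com",          # the Jira account email
            "jiraCredential": "<your-jira-api-token>",
        }
    ),
)
print(response["ARN"])  # pass this ARN when calling the CreateDataSource operation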

Secure querying with ACL crawling, identity crawling, and user store

Secure querying is a critical feature that makes sure users receive answers only from documents they’re authorized to access. Amazon Q Business implements this security measure through a two-step process. First, it indexes ACLs associated with each document. This indexing is vital for data security, because any document without an ACL is treated as public. Second, when a query is made, the system considers both the user’s credentials (typically their email address) and the query content. This dual-check mechanism means that the results are not only relevant to the query but also confined to documents the user has permission to view. By using ACLs and user authentication, Amazon Q Business maintains a robust barrier against unauthorized data access while delivering pertinent information to users.

If you need to index documents without ACLs, you must make sure they’re explicitly marked as public in your data source. Refer to Allow anonymous access to projects to enable public access to documents. Refer to How Amazon Q business connector crawls Jira ACLs for more information about crawling Jira ACLs.
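
To illustrate, a conversation request made through the Amazon Q Business API carries the user’s identity, so the response draws only on documents that user can access. The following minimal sketch assumes identity-aware credentials for the end user are already configured in the session and uses a placeholder application ID; confirm parameter and response field names against the current qbusiness SDK documentation.

import boto3

# Assumption: this session uses identity-aware credentials for the end user,
# so Amazon Q Business scopes results to documents that user is allowed to see.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.chat_sync(
    applicationId="<your-q-business-application-id>",   # placeholder application ID
    userMessage="Summarize the open launch issues in the IT software management project.",
)

print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    # Citations point only to Jira documents the calling user has access to
    print(source.get("title"), source.get("url"))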

Solution overview

In this post, we walk through the steps to configure a Jira connector for an Amazon Q Business application. We use an existing Amazon Q application and configure the Jira connector to sync data from specific Jira projects and issue types, map relevant Jira fields to the Amazon Q index, initiate the data sync, and then query the ingested Jira data using Amazon Q’s web experience.

As part of querying Jira documents using the Amazon Q Business application, we demonstrate how to ask natural language questions about your Jira issues, projects, and other issue types and get back relevant results and insights using Amazon Q Business.

Prerequisites

You should have the following:

Configure the Jira connector for an Amazon Q Business application

Complete the following steps to configure the connector:

  1. On the Amazon Q Business console, choose Applications in the navigation pane.
  2. Select the application that you want to add the Jira connector to.
  3. On the Actions menu, choose Edit.

Edit Amazon Q application

  1.  On the Update application page, leave all values as default and choose Update.

Update Amazon Q application

  1. On the Update retriever page, leave all values as default and choose Next.

Update the retriever

  1. On the Connect data sources page, on the All tab, search for Jira in the search field.
  2. Choose the plus sign on the Jira connector.

Add Jira connector

  1. In the Name and description section, enter a name and description.
  2. In the Source section, enter your company’s Jira account URL in the format https://yourcompany.atlassian.net/.

Enter Jira domain

  1. In the Authentication section, choose Create and add new secret.
  2. Enter a name for your Secrets Manager secret.
  3. For Jira ID, enter the user name for the API token.
  4. For Password/Token, enter the API token details.
  5. Choose Save.

See Manage API tokens for your Atlassian account for details on how to create an API token.

Save Jira authentication

  1. In the IAM role section, for IAM role, choose Create a new service role (recommended).

Create IAM role

  1. In the Sync Scope section, you can select All projects or Only specific projects.
  2. By default, the Jira connector indexes all content from the projects. Optionally, you can choose to sync only specific Jira entities by selecting the appropriate options under Additional configuration.

Select sync scope

  1. In the Sync mode section, choose New, modified, or deleted content sync.

Select sync mode

  1. In the Sync run schedule section, choose your desired frequency. For this post, we choose Run on demand.
  2. Choose Add data source and wait for the data source to be created.

Select run schedule

After the data source is created, you’re redirected to the Connect data sources page to add more data sources as needed.

  1. For this walkthrough, choose Next.
  2. On the Update groups and users page, choose Add groups and users.

The users and groups that you add in this section are from the AWS IAM Identity Center users and groups set up by your administrator.

Add users to application

  1. In the Add or assign users and groups pop-up, select Assign existing users and groups to add existing users configured in your connected IAM Identity Center. Optionally, if you have permissions to add users, you can select Add new users.
  2. Choose Next.

Assign existing users

  1. In the Assign users and groups pop-up, search for users by user display name or groups by group name.
  2. Choose the users or groups you want to add and choose Assign.

This closes the pop-up. The groups and users that you added should now be available on the Groups or Users tabs.

Search for Users

For each group or user entry, an Amazon Q Business subscription tier needs to be assigned.

  1. To enable a subscription for a group, on the Update groups and users page, choose the Groups tab (if individual users need to be assigned a subscription, choose the Users tab).
  2. Under the Current subscription column, choose Choose subscription and choose a subscription (Q Business Lite or Q Business Pro).
  3. Choose Update application to complete adding and setting up the Jira data connector for Amazon Q Business.

Assign a subscription

Configure Jira field mappings

To help you structure data for retrieval and chat filtering, Amazon Q Business crawls data source document attributes or metadata and maps them to fields in your Amazon Q index. Amazon Q has reserved fields that it uses when querying your application. When possible, Amazon Q automatically maps these built-in fields to attributes in your data source.

If a built-in field doesn’t have a default mapping, or if you want to map additional index fields, use the custom field mappings to specify how a data source attribute maps to your Amazon Q application.

  1. On the Amazon Q Business console, choose your application.
  2. Under Data sources, select your data source.
  3. On the Actions menu, choose Edit.

Edit data source

  1. In the Field mappings section, select the required fields to crawl under Projects, Issues, and any other issue types that are available, and then save your changes.

When selecting all items, make sure you navigate through each page by choosing the page numbers and selecting Select All on every page to include all mapped items.

Edit field mapping

The Jira connector setup for Amazon Q is now complete. To test the connectivity to Jira and initiate the data synchronization, choose Sync now. The initial sync process may take several minutes to complete.

When the sync is complete, on the Sync history tab, you can see the sync status along with a summary of how many total items were added, deleted, modified, and failed during the sync process.

Query Jira data using the Amazon Q web experience

Now that the data synchronization is complete, you can start exploring insights from Amazon Q. In the newly created Amazon Q application, choose Customize web experience to open a new tab with a preview of the UI and options to customize as per your needs.

You can customize the Title, Subtitle, and Welcome message fields according to your needs, which will be reflected in the UI.

Configure web experience

For this walkthrough, we use the defaults and choose View web experience to be redirected to the login page for the Amazon Q application.

Log in to the application using the credentials of a user who was added to the Amazon Q application. After the login is successful, you’re redirected to the Amazon Q assistant UI, where you can ask questions using natural language and get insights from your Jira index.

Login to Amazon Q application

The Jira data source connected to this Amazon Q application has a sample IT software management project with tasks related to the project launch and related issues. We demonstrate how the Amazon Q application lets you ask questions on issues within this project using natural language and receive responses and insights for those queries.

Let’s begin by asking Amazon Q to provide a list of the top three challenges encountered during the project launch. The following screenshot displays the response, listing the top three documents associated with launch issues. The response also includes Sources, which contain links to all the matching documents. Choosing any of those links will redirect you to the corresponding Jira page with the relevant issue or task.

Query launch related issues

For the second query, we ask Amazon Q if there were any website-related issues. The following screenshot displays the response, which includes a summary of website-related issues along with corresponding Jira ticket links.

Query website issues

Frequently asked questions

In this section, we provide guidance to frequently asked questions.

Amazon Q Business is unable to answer your questions

If you get the response “Sorry, I could not find relevant information to complete your request,” this may be due to a few reasons:

  • No permissions – ACLs applied to your account don’t allow you to query certain data sources. If this is the case, reach out to your application administrator to make sure your ACLs are configured to access the data sources.
  • Data connector sync failed – Your data connector may have failed to sync information from the source to the Amazon Q Business application. Verify the data connector’s sync run schedule and sync history to confirm the sync is successful.
  • Empty or private Jira projects – Private or empty projects aren’t crawled during the sync run.

If none of these reasons apply to your use case, open a support case and work with your technical account manager to get this resolved.

How to generate responses from authoritative data sources

If you want Amazon Q Business to only generate responses from authoritative data sources, you can configure this using the Amazon Q Business application global controls under Admin controls and guardrails.

  1. Log in to the Amazon Q Business console as an Amazon Q Business application administrator.
  2. Navigate to the application and choose Admin controls and guardrails in the navigation pane.
  3. Choose Edit in the Global controls section to set these options.

For more information, refer to Admin controls and guardrails in Amazon Q Business.

Admin Controls & Guardrails

Amazon Q Business responds using old (stale) data even though your data source is updated

Each Amazon Q Business data connector can be configured with a unique sync run schedule frequency. Verifying the sync status and sync schedule frequency for your data connector reveals when the last sync ran successfully. It could be that your data connector’s sync run schedule is either set to sync at a scheduled time of day, week, or month. If it’s set to run on demand, the sync has to be manually invoked. When the sync run is complete, verify the sync history to make sure the run has successfully synced all new issues. Refer to Sync run schedule for more information about each option.

Check run schedule

Check sync history

Clean up

To prevent incurring additional costs, it’s essential to clean up and remove any resources created during the implementation of this solution. Specifically, you should delete the Amazon Q application, which will consequently remove the associated index and data connectors. However, any IAM roles and secrets created during the Amazon Q application setup process need to be removed separately. Failing to clean up these resources may result in ongoing charges, so it’s crucial to take the necessary steps to completely remove all components related to this solution.

Complete the following steps to delete the Amazon Q application, secret, and IAM role:

  1. On the Amazon Q Business console, select the application that you created.
  2. On the Actions menu, choose Delete and confirm the deletion.

Delete Amazon Q application

  1. On the Secrets Manager console, select the secret that was created for the Jira connector.
  2. On the Actions menu, choose Delete.
  3. Select the waiting period as 7 days and choose Schedule deletion.

Schedule Secrets deletion

  1. On the IAM console, select the role that was created during the Amazon Q application creation.
  2. Choose Delete and confirm the deletion.

Conclusion

The Amazon Q Jira connector allows organizations to seamlessly integrate their Jira projects, issues, and data into the powerful generative AI capabilities of Amazon Q. By following the steps outlined in this post, you can quickly configure the Jira connector as a data source for Amazon Q and initiate synchronization of your Jira information. The native field mapping options enable you to customize exactly which Jira data to include in the Amazon Q index.

Amazon Q can serve as a powerful assistant capable of providing rich insights and summaries about your Jira projects and issues from natural language queries. The Jira plugin further extends this functionality by allowing users to create new Jira issues from within the AI assistant interface.

The Amazon Q Jira integration represents a valuable tool for software teams to gain AI-driven visibility into their development workflows and pain points. By bridging Jira’s industry-leading project management with Amazon’s cutting-edge generative AI, teams can drive productivity, make better informed decisions, and unlock deeper insights into their software operations. As generative AI continues advancing, integrations like this will become critical for organizations aiming to deliver streamlined, data-driven software development lifecycles.

To learn more about the Amazon Q connector for Jira, refer to Connecting Jira to Amazon Q Business.


About the Authors

Praveen Chamarthi is a Senior AI/ML Specialist with Amazon Web Services. He is passionate about AI/ML and all things AWS. He helps customers across the Americas scale, innovate, and operate ML workloads efficiently on AWS. In his spare time, Praveen loves to read and enjoys sci-fi movies.

Srikanth Reddy is a Senior AI/ML Specialist with Amazon Web Services. He is responsible for providing deep, domain specific expertise to enterprise customers, helping them leverage AWS’s AI and ML capabilities to their fullest potential.

Ge Jiang is a Software Development Engineer Manager in the Amazon Q and Amazon Kendra organization of Amazon Web Services. She is responsible for the design and development of features for the Amazon Q and Amazon Kendra connectors.

Vijai Gandikota is a Principal Product Manager in the Amazon Q and Amazon Kendra organization of Amazon Web Services. He is responsible for the Amazon Q and Amazon Kendra connectors, ingestion, security, and other aspects of the Amazon Q and Amazon Kendra services.

Amazon SageMaker inference launches faster auto scaling for generative AI models

Today, we are excited to announce a new capability in Amazon SageMaker inference that can help you reduce the time it takes for your generative artificial intelligence (AI) models to scale automatically. You can now use sub-minute metrics and significantly reduce overall scaling latency for generative AI models. With this enhancement, you can improve the responsiveness of your generative AI applications as demand fluctuates.

The rise of foundation models (FMs) and large language models (LLMs) has brought new challenges to generative AI inference deployment. These advanced models often take seconds to process, while sometimes handling only a limited number of concurrent requests. This creates a critical need for rapid detection and auto scaling to maintain business continuity. Organizations implementing generative AI seek comprehensive solutions that address multiple concerns: reducing infrastructure costs, minimizing latency, and maximizing throughput to meet the demands of these sophisticated models. However, they prefer to focus on solving business problems rather than doing the undifferentiated heavy lifting to build complex inference platforms from the ground up.

SageMaker provides industry-leading capabilities to address these inference challenges. It offers endpoints for generative AI inference that reduce FM deployment costs by 50% on average and latency by 20% on average by optimizing the use of accelerators. The SageMaker inference optimization toolkit, a fully managed model optimization feature in SageMaker, can deliver up to two times higher throughput while reducing costs by approximately 50% for generative AI performance on SageMaker. Besides optimization, SageMaker inference also provides streaming support for LLMs, enabling you to stream tokens in real time rather than waiting for the entire response. This allows for lower perceived latency and more responsive generative AI experiences, which are crucial for use cases like conversational AI assistants. Lastly, SageMaker inference provides the ability to deploy a single model or multiple models using SageMaker inference components on the same endpoint using advanced routing strategies to effectively load balance to the underlying instances backing an endpoint.

Faster auto scaling metrics

To optimize real-time inference workloads, SageMaker employs Application Auto Scaling. This feature dynamically adjusts the number of instances in use and the quantity of model copies deployed, responding to real-time changes in demand. When in-flight requests surpass a predefined threshold, auto scaling increases the available instances and deploys additional model copies to meet the heightened demand. Similarly, as the number of in-flight requests decreases, the system automatically removes unnecessary instances and model copies, effectively reducing costs. This adaptive scaling makes sure resources are optimally utilized, balancing performance needs with cost considerations in real time.

With today’s launch, SageMaker real-time endpoints now emit two new sub-minute Amazon CloudWatch metrics: ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy. ConcurrentRequestsPerModel is the metric used for SageMaker real-time endpoints; ConcurrentRequestsPerCopy is used when SageMaker real-time inference components are used.

These metrics provide a more direct and accurate representation of the load on the system by tracking the actual concurrency or the number of simultaneous requests being handled by the containers (in-flight requests), including the requests queued inside the containers. The concurrency-based target tracking and step scaling policies focus on monitoring these new metrics. When the concurrency levels increase, the auto scaling mechanism can respond by scaling out the deployment, adding more container copies or instances to handle the increased workload. By taking advantage of these high-resolution metrics, you can now achieve significantly faster auto scaling, reducing detection time and improving the overall scale-out time of generative AI models. You can use these new metrics for endpoints created with accelerator instances like AWS Trainium, AWS Inferentia, and NVIDIA GPUs.
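
For example, you can inspect the new high-resolution metric directly in CloudWatch. The following sketch assumes the metric is published in the AWS/SageMaker namespace with the usual EndpointName and VariantName dimensions and uses a placeholder endpoint name; verify the exact dimensions for your endpoint in the CloudWatch console.

import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Pull the last 10 minutes of the concurrency metric at 10-second resolution
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SageMaker",                                      # assumption: standard endpoint metric namespace
    MetricName="ConcurrentRequestsPerModel",
    Dimensions=[
        {"Name": "EndpointName", "Value": "llama3-8b-endpoint"},    # placeholder endpoint name
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(minutes=10),
    EndTime=datetime.datetime.utcnow(),
    Period=10,
    Statistics=["Maximum"],
)
print(sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]))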

In addition, you can enable streaming responses back to the client on models deployed on SageMaker. Many current solutions track a session or concurrency metric only until the first token is sent to the client and then mark the target instance as available. SageMaker can track a request until the last token is streamed to the client instead of until the first token. This way, client requests can be directed to instances and GPUs that are less busy, avoiding hotspots. Additionally, tracking concurrency helps you make sure requests that are in flight and queued are treated alike when alerting on the need for auto scaling. With this capability, you can make sure your model deployment scales proactively, accommodating fluctuations in request volumes and maintaining optimal performance by minimizing queuing delays.

In this post, we detail how the new ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy CloudWatch metrics work, explain why you should use them, and walk you through the process of implementing them for your workloads. These new metrics allow you to scale your LLM deployments more effectively, providing optimal performance and cost-efficiency as the demand for your models fluctuates.

Components of auto scaling

The following figure illustrates a typical scenario of how a SageMaker real-time inference endpoint scales out to handle an increase in concurrent requests. This demonstrates the automated and responsive nature of scaling in SageMaker. In this example, we walk through the key steps that occur when the inference traffic to a SageMaker real-time endpoint starts to increase and concurrency to the model deployed on every instance goes up. We show how the system monitors the traffic, invokes an auto scaling action, provisions new instances, and ultimately load balances the requests across the scaled-out resources. Understanding this scaling process is crucial for making sure your generative AI models can handle fluctuations in demand and provide a seamless experience for your customers. By the end of this walkthrough, you’ll have a clear picture of how SageMaker real-time inference endpoints can automatically scale to meet your application’s needs.

Let’s dive into the details of this scaling scenario using the provided figure.

The key steps are as follows:

  1. Increased inference traffic (t0) – At some point, the traffic to the SageMaker real-time inference endpoint starts to increase, indicating a potential need for additional resources. The increase in traffic leads to a higher number of concurrent requests required for each model copy or instance.
  2. CloudWatch alarm monitoring (t0 → t1) – An auto scaling policy uses CloudWatch to monitor the metric, sampling it over a few data points within a predefined time frame. This makes sure the increased traffic is a sustained change in demand, not a temporary spike.
  3. Auto scaling trigger (t1) – If the metric crosses the predefined threshold, the CloudWatch alarm goes into an InAlarm state, invoking an auto scaling action to scale up the resources.
  4. New instance provisioning and container startup (t1 → t2) – During the scale-up action, new instances are provisioned if required. The model server and container are started on the new instances. When the instance provisioning is complete, the model container initialization process begins. After the server successfully starts and passes the health checks, the instances are registered with the endpoint, enabling them to serve incoming traffic requests.
  5. Load balancing (t2) – After the container health checks pass and the container reports as healthy, the new instances are ready to serve inference requests. All requests are now automatically load balanced between the two instances using the pre-built routing strategies in SageMaker.

This approach allows the SageMaker real-time inference endpoint to react quickly and handle the increased traffic with minimal impact to the clients.

Application Auto Scaling supports target tracking and step scaling policies. Each has its own logic to handle scale-in and scale-out:

  • Target tracking works to scale out by adding capacity to reduce the difference between the metric value (ConcurrentRequestsPerModel/Copy) and the target value set. When the metric (ConcurrentRequestsPerModel/Copy) is below the target value, Application Auto Scaling scales in by removing capacity.
  • Step scaling scales capacity using a set of adjustments, known as step adjustments. The size of the adjustment varies based on the magnitude of the alarm breach on the metric (ConcurrentRequestsPerModel/Copy).

By using these new metrics, auto scaling can now be invoked and scale out significantly faster compared to the older SageMakerVariantInvocationsPerInstance predefined metric type. This decrease in the time to measure and invoke a scale-out allows you to react to increased demand significantly faster than before (under 1 minute). This works especially well for generative AI models, which are typically concurrency-bound and can take many seconds to complete each inference request.

Using the new high-resolution metrics allows you to greatly decrease the time it takes to scale up an endpoint using Application Auto Scaling. These high-resolution metrics are emitted at 10-second intervals, allowing scale-out procedures to be invoked faster. For models with fewer than 10 billion parameters, this can be a significant percentage of the time it takes for an end-to-end scaling event. For larger model deployments, this can shorten the time before a new copy of your FM or LLM is ready to serve traffic by up to 5 minutes.

Get started with faster auto scaling

Getting started with using the metrics is straightforward. You can use the following steps to create a new scaling policy to benefit from faster auto scaling. In this example, we deploy a Meta Llama 3 model that has 8 billion parameters on a G5 instance type, which uses NVIDIA A10G GPUs. In this example, the model can fit entirely on a single GPU and we can use auto scaling to scale up the number of inference components and G5 instances based on our traffic. The full notebook can be found on the GitHub for SageMaker Single Model Endpoints and SageMaker with inference components.

  1. After you create your SageMaker endpoint, you define a new auto scaling target for Application Auto Scaling. In the following code block, you set as_min_capacity and as_max_capacity to the minimum and maximum number of instances you want to set for your endpoint, respectively. If you’re using inference components (shown later), you can use instance auto scaling and skip this step.
    autoscaling_client = boto3.client("application-autoscaling", region_name=region)
    
    # Register scalable target
    scalable_target = autoscaling_client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=as_min_capacity,
        MaxCapacity=as_max_capacity,  # Replace with your desired maximum instances
    )

  2. After you create your new scalable target, you can define your policy. You can choose between using a target tracking policy or step scaling policy. In the following target tracking policy, we have set TargetValue to 5. This means we’re asking auto scaling to scale up if the number of concurrent requests per model is equal to or greater than five.
    # Create Target Tracking Scaling Policy
    target_tracking_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerEndpointScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 5.0,  # Scaling triggers when endpoint receives 5 ConcurrentRequestsPerModel
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantConcurrentRequestsPerModelHighResolution"
            },
            "ScaleInCooldown": 180,  # Cooldown period after scale-in activity
            "ScaleOutCooldown": 180,  # Cooldown period after scale-out activity
        },
    )

If you would like to configure a step scaling policy, refer to the following notebook.

That’s it! Traffic now invoking your endpoint will be monitored with concurrency tracked and evaluated against the policy you specified. Your endpoint will scale up and down based on the minimum and maximum values you provided. In the preceding example, we set a cooldown period for scaling in and out to 180 seconds, but you can change this based on what works best for your workload.
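
As an alternative to target tracking, you can define a step scaling policy and attach a high-resolution CloudWatch alarm on the concurrency metric that invokes it. The following is a sketch only: the step thresholds, alarm settings, and metric dimensions are illustrative assumptions, and it reuses the autoscaling_client, resource_id, and region values from the earlier snippets.

import boto3

# Step scaling policy: add capacity in proportion to how far the alarm breaches the threshold
step_policy = autoscaling_client.put_scaling_policy(
    PolicyName="SageMakerEndpointStepScalingPolicy",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="StepScaling",
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",
        "MetricAggregationType": "Maximum",
        "Cooldown": 180,
        "StepAdjustments": [
            # Breach of 0-5 above the threshold adds 1 instance; 5 or more adds 2 instances
            {"MetricIntervalLowerBound": 0, "MetricIntervalUpperBound": 5, "ScalingAdjustment": 1},
            {"MetricIntervalLowerBound": 5, "ScalingAdjustment": 2},
        ],
    },
)

# High-resolution CloudWatch alarm on the concurrency metric that invokes the step scaling policy
cloudwatch = boto3.client("cloudwatch", region_name=region)
cloudwatch.put_metric_alarm(
    AlarmName="ConcurrentRequestsPerModel-ScaleOut",              # placeholder alarm name
    Namespace="AWS/SageMaker",                                    # assumption: standard endpoint metric namespace
    MetricName="ConcurrentRequestsPerModel",
    Dimensions=[
        {"Name": "EndpointName", "Value": "llama3-8b-endpoint"},  # placeholder endpoint name
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Maximum",
    Period=10,
    EvaluationPeriods=3,
    Threshold=5.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[step_policy["PolicyARN"]],
)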

SageMaker inference components

If you’re using inference components to deploy multiple generative AI models on a SageMaker endpoint, you can complete the following steps:

  1. After you create your SageMaker endpoint and inference components, you define a new auto scaling target for Application Auto Scaling:
    autoscaling_client = boto3.client("application-autoscaling", region_name=region)
    
    # Register scalable target
    scalable_target = autoscaling_client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
        MinCapacity=as_min_capacity,
        MaxCapacity=as_max_capacity,  # Replace with your desired maximum instances
    )

  2. After you create your new scalable target, you can define your policy. In the following code, we set TargetValue to 5. By doing so, we’re asking auto scaling to scale up if the number of concurrent requests per model is equal to or greater than five.
    # Create Target Tracking Scaling Policy
    target_tracking_policy_response = autoscaling_client.put_scaling_policy(
        PolicyName="SageMakerInferenceComponentScalingPolicy",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 5.0,  # Scaling triggers when endpoint receives 5 ConcurrentRequestsPerCopy
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerInferenceComponentConcurrentRequestsPerCopyHighResolution"
            },
            "ScaleInCooldown": 180,  # Cooldown period after scale-in activity
            "ScaleOutCooldown": 180,  # Cooldown period after scale-out activity
        },
    )

You can use the new concurrency-based target tracking auto scaling policies in tandem with existing invocation-based target tracking policies. When a container experiences a crash or failure, the resulting requests are typically short-lived and may be responded to with error messages. In such scenarios, the concurrency-based auto scaling policy can detect the sudden drop in concurrent requests, potentially causing an unintentional scale-in of the container fleet. However, the invocation-based policy can act as a safeguard, avoiding the scale-in if there is still sufficient traffic being directed to the remaining containers. With this hybrid approach, container-based applications can achieve a more efficient and adaptive scaling behavior. The balance between concurrency-based and invocation-based policies allows the system to respond appropriately to various operational conditions, such as container failures, sudden spikes in traffic, or gradual changes in workload patterns. This enables the container infrastructure to scale up and down more effectively, optimizing resource utilization and providing reliable application performance.
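
For example, a companion invocation-based target tracking policy can be attached to the same scalable target using the existing predefined metric type. This sketch reuses the autoscaling_client and resource_id values from the earlier snippets; the target value is illustrative.

# Companion invocation-based target tracking policy (works alongside the concurrency-based policy)
invocation_policy_response = autoscaling_client.put_scaling_policy(
    PolicyName="SageMakerEndpointInvocationsScalingPolicy",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 100.0,  # illustrative: target invocations per instance per minute
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 180,
        "ScaleOutCooldown": 180,
    },
)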

Sample runs and results

With the new metrics, we have observed improvements in the time required to invoke scale-out events. To test the effectiveness of this solution, we completed some sample runs with Meta Llama models (Llama 2 7B and Llama 3 8B). Prior to this feature, detecting the need for auto scaling could take over 6 minutes, but with this new feature, we were able to reduce that time to less than 45 seconds. For generative AI models such as Meta Llama 2 7B and Llama 3 8B, we have been able to reduce the overall end-to-end scale-out time by approximately 40%.

The following figures illustrate the results of sample runs for Meta Llama 3 8B.

The following figures illustrate the results of sample runs for Meta Llama 2 7B.

As a best practice, it’s important to optimize your container, model artifacts, and bootstrapping processes to be as efficient as possible. Doing so can help minimize deployment times and improve the responsiveness of AI services.

Conclusion

In this post, we detailed how the ConcurrentRequestsPerModel and ConcurrentRequestsPerCopy metrics work, explained why you should use them, and walked you through the process of implementing them for your workloads. We encourage you to try out these new metrics and evaluate whether they improve your FM and LLM workloads on SageMaker endpoints. You can find the notebooks on GitHub.

Special thanks to our partners from Application Auto Scaling for making this launch happen: Ankur Sethi, Vasanth Kumararajan, Jaysinh Parmar, Mona Zhao, Miranda Liu, Fatih Tekin, and Martin Wang.


About the Authors

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Praveen Chamarthi is a Senior AI/ML Specialist with Amazon Web Services. He is passionate about AI/ML and all things AWS. He helps customers across the Americas scale, innovate, and operate ML workloads efficiently on AWS. In his spare time, Praveen loves to read and enjoys sci-fi movies.

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting food, and spending time with friends and families.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Kunal Shah is a software development engineer at Amazon Web Services (AWS) with 7+ years of industry experience. His passion lies in deploying machine learning (ML) models for inference, and he is driven by a strong desire to learn and contribute to the development of AI-powered tools that can create real-world impact. Beyond his professional pursuits, he enjoys watching historical movies, traveling and adventure sports.

Marc Karp is an ML Architect with the Amazon SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.
