Scale ML workflows with Amazon SageMaker Studio and Amazon SageMaker HyperPod

Scaling machine learning (ML) workflows from initial prototypes to large-scale production deployment can be a daunting task, but the integration of Amazon SageMaker Studio and Amazon SageMaker HyperPod offers a streamlined solution to this challenge. As teams progress from proof of concept to production-ready models, they often struggle with efficiently managing growing infrastructure and storage needs. This integration addresses these hurdles by providing data scientists and ML engineers with a comprehensive environment that supports the entire ML lifecycle, from development to deployment at scale.

In this post, we walk you through the process of scaling your ML workloads using SageMaker Studio and SageMaker HyperPod.

Solution overview

Implementing the solution consists of the following high-level steps:

  1. Set up your environment and the permissions to access SageMaker HyperPod clusters in SageMaker Studio.
  2. Create a JupyterLab space and mount an Amazon FSx for Lustre file system to your space. This eliminates the need for data migration or code changes as you scale. This also mitigates potential reproducibility issues that often arise from data discrepancies across different stages of model development.
  3. You can now use SageMaker Studio to discover the SageMaker HyperPod clusters, and view cluster details and metrics. When you have access to multiple clusters, this information can help you compare the specifications of each cluster, current utilization, and queue status of the clusters to identify the one that meets the requirements of your specific ML task.
  4. We use a sample notebook to show how to connect to the cluster and run a Meta Llama 2 training job with PyTorch FSDP on your Slurm cluster.
  5. After you submit the long-running job to the cluster, you can monitor the tasks directly through the SageMaker Studio UI. This can help you get real-time insights into your distributed workflows and allow you to quickly identify bottlenecks, optimize resource utilization, and improve overall workflow efficiency.

This integrated approach not only streamlines the transition from prototype to large-scale training but also enhances overall productivity by maintaining a familiar development experience even as you scale up to production-level workloads.

Prerequisites

Complete the following prerequisite steps:

  1. Create a SageMaker HyperPod Slurm cluster. For instructions, refer to the Amazon SageMaker HyperPod workshop or Tutorial for getting started with SageMaker HyperPod.
  2. Make sure you have the latest version of the AWS Command Line Interface (AWS CLI).
  3. Create a user in the Slurm head node or login node with a UID greater than 10000. Refer to Multi-User for instructions to create a user.
  4. Tag the SageMaker HyperPod cluster with the key hyperpod-cluster-filesystem and set the value to the ID of the FSx for Lustre file system associated with the cluster. This tag is needed for Amazon SageMaker Studio to mount the FSx for Lustre file system onto JupyterLab and Code Editor spaces. Use the following code snippet to add a tag to an existing SageMaker HyperPod cluster:
    aws sagemaker add-tags --resource-arn <cluster_ARN> \
    --tags Key=hyperpod-cluster-filesystem,Value=<fsx_id>
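You can confirm that the tag is in place by listing the tags on the cluster (this reuses the same cluster ARN placeholder):

    aws sagemaker list-tags --resource-arn <cluster_ARN>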

Set up your permissions

In the following sections, we outline the steps to create an Amazon SageMaker domain, create a user, set up a SageMaker Studio space, and connect to the SageMaker HyperPod cluster. By the end of these steps, you should be able to connect to a SageMaker HyperPod Slurm cluster and run a sample training workload. To follow the setup instructions, you need to have admin privileges. Complete the following steps:

  1. Create a new AWS Identity and Access Management (IAM) execution role with AmazonSageMakerFullAccess attached to the role. Also attach the following JSON policy to the role, which enables SageMaker Studio to access the SageMaker HyperPod cluster. Make sure the trust relationship on the role allows the sagemaker.amazonaws.com service to assume this role.
{
    "Version": "2012-10-17",            
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssm:StartSession",
                "ssm:TerminateSession"
            ],
            "Resource": "*"    
        }
{
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateCluster",
                "sagemaker:ListClusters"
            ],
            "Resource": "*"    
        },
        {
            "Effect": "Allow",
            "Action": [
                "sagemaker:DescribeCluster",
                "sagemaker:DescribeClusterNode",
                "sagemaker:ListClusterNodes",
                "sagemaker:UpdateCluster",
                "sagemaker:UpdateClusterSoftware"
            ],
            "Resource": "arn:aws:sagemaker:region:account-id:cluster/*"    
        }
    ]
}
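
If you prefer the AWS CLI over the console, the following is a minimal sketch of creating this role. The role name, inline policy name, and file names are placeholders, and hyperpod_access.json is assumed to contain the policy JSON shown above:

# Trust policy that lets SageMaker assume the role
cat > trust_policy.json << EOL
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "sagemaker.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOL

aws iam create-role \
    --role-name <studio-hyperpod-role> \
    --assume-role-policy-document file://trust_policy.json

aws iam attach-role-policy \
    --role-name <studio-hyperpod-role> \
    --policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

# hyperpod_access.json holds the policy JSON shown above
aws iam put-role-policy \
    --role-name <studio-hyperpod-role> \
    --policy-name <hyperpod-access> \
    --policy-document file://hyperpod_access.json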
  2. In order to use the role you created to access the SageMaker HyperPod cluster head or login node using AWS Systems Manager, you need to add a tag to this IAM role, where Tag Key = “SSMSessionRunAs” and Tag Value = “<posix user>”. The POSIX user is the user that is set up on the Slurm head node. Systems Manager uses this user to start sessions on the head node.
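For example, you can add the tag with the AWS CLI (the role name and POSIX user are placeholders):

    aws iam tag-role \
    --role-name <studio-hyperpod-role> \
    --tags Key=SSMSessionRunAs,Value=<posix user>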
  3. When you activate Run As support, it prevents Session Manager from starting sessions using the ssm-user account on a managed node. To enable Run As in Session Manager, complete the following steps:
    1. On the Session Manager console, choose Preferences, then choose Edit.
    2. Don’t specify a user name. The user name is picked up from the SSMSessionRunAs tag you added to the role earlier.
    3. In the Linux shell profile section, enter /bin/bash.
    4. Choose Save.
  4. Create a new SageMaker Studio domain with the execution role created earlier, along with the other parameters required to access the SageMaker HyperPod cluster. Use the following script to create the domain, replacing the export variables accordingly. Here, VPC_ID and SUBNET_ID are the same as the SageMaker HyperPod cluster’s VPC and subnet, EXECUTION_ROLE_ARN is the role you created earlier, and FILE_SYSTEM_ID and FILE_SYSTEM_PATH are the FSx for Lustre file system ID and the directory on it to mount.
export DOMAIN_NAME=<domain name>
export VPC_ID=vpc_id-for_hp_cluster
export SUBNET_ID=private_subnet_id
export EXECUTION_ROLE_ARN=execution_role_arn
export FILE_SYSTEM_ID=<fsx_id>
export FILE_SYSTEM_PATH=<fsx_file_system_path>
export USER_UID=10000
export USER_GID=1001
export REGION=us-east-2

cat > user_settings.json << EOL
{
    "ExecutionRole": "$EXECUTION_ROLE_ARN",
    "CustomPosixUserConfig":
    {
        "Uid": $USER_UID,
        "Gid": $USER_GID
    },
    "CustomFileSystemConfigs":
    [
        {
            "FSxLustreFileSystemConfig":
            {
                "FileSystemId": "$FILE_SYSTEM_ID",
                "FileSystemPath": "$FILE_SYSTEM_PATH"
            }
        }
    ]
}
EOL

aws sagemaker create-domain \
--domain-name $DOMAIN_NAME \
--vpc-id $VPC_ID \
--subnet-ids $SUBNET_ID \
--auth-mode IAM \
--default-user-settings file://user_settings.json \
--region $REGION

The UID and GID in the preceding configuration are set to the defaults of 10000 and 1001; you can override them to match the user created in Slurm, and this UID/GID is used to grant permissions on the FSx for Lustre file system. Also, setting this at the domain level gives each user the same UID. To give each user a separate UID, consider setting CustomPosixUserConfig while creating the user profile.

  5. After you create the domain, attach the SecurityGroupIdForInboundNfs security group created as part of domain creation to all ENIs of the FSx for Lustre volume:
    1. Locate the Amazon Elastic File System (Amazon EFS) file system associated with the domain and the corresponding security group attached to it. You can find the EFS file system on the Amazon EFS console; it’s tagged with the domain ID, as shown in the following screenshot.
    2. Collect the corresponding security group, which is named inbound-nfs-<domain-id> and can be found on the Network tab.
    3. On the FSx for Lustre console, locate the file system and view its attached elastic network interfaces (ENIs) on the Amazon EC2 console. Alternatively, you can find the ENIs using the AWS CLI by calling the fsx:DescribeFileSystems API.
    4. Attach the domain’s SecurityGroupIdForInboundNfs security group to each ENI.

Alternatively, you can use the following script to automatically find and attach the security group to the ENIs associated with the FSx for Lustre volume. Replace the REGION, DOMAIN_ID, and FSX_ID values accordingly.

#!/bin/bash

export REGION=us-east-2
export DOMAIN_ID=d-xxxxx
export FSX_ID=fs-xxx

export EFS_ID=$(aws sagemaker describe-domain --domain-id $DOMAIN_ID --region $REGION --query 'HomeEfsFileSystemId' --output text)
export MOUNT_TARGET_ID=$(aws efs describe-mount-targets --file-system-id $EFS_ID --region $REGION --query 'MountTargets[0].MountTargetId' --output text)
export EFS_SG=$(aws efs describe-mount-target-security-groups --mount-target-id $MOUNT_TARGET_ID --query 'SecurityGroups[0]' --output text)
echo "security group associated with the Domain $EFS_SG"

echo "Adding security group to FSxL file system ENI's"
# Get the network interface IDs associated with the FSx file system
NETWORK_INTERFACE_IDS=$(aws fsx describe-file-systems --file-system-ids $FILE_SYSTEM_ID --query "FileSystems[0].NetworkInterfaceIds" --output text)
# Iterate through each network interface and attach the security group
for ENI_ID in $NETWORK_INTERFACE_IDS; do
aws ec2 modify-network-interface-attribute --network-interface-id $ENI_ID --groups $EFS_SG
echo "Attached security group $EFS_SG to network interface $ENI_ID"
done

Without this step, application creation will fail with an error.
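
To confirm that the security group is attached, you can inspect one of the ENIs (this quick check reuses the variables from the preceding script):

aws ec2 describe-network-interfaces \
    --network-interface-ids $ENI_ID \
    --query 'NetworkInterfaces[0].Groups' \
    --region $REGION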

  6. After you create the domain, you can use it to create a user profile. Replace the DOMAIN_ID value with the one created in the previous step.
export DOMAIN_ID=d-xxx
export USER_PROFILE_NAME=test
export REGION=us-east-2

aws sagemaker create-user-profile \
--domain-id $DOMAIN_ID \
--user-profile-name $USER_PROFILE_NAME \
--region $REGION
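
As noted earlier, to give each user a distinct UID instead of the domain-level default, you can instead create the user profile with CustomPosixUserConfig set at the profile level. The following is a minimal sketch; the UID and GID values are examples and should match the user created on the Slurm head node:

aws sagemaker create-user-profile \
--domain-id $DOMAIN_ID \
--user-profile-name $USER_PROFILE_NAME \
--user-settings '{"CustomPosixUserConfig":{"Uid":10001,"Gid":1001}}' \
--region $REGION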

Create a JupyterLab space and mount the FSx for Lustre file system

Create a space using the FSx for Lustre file system with the following code:

export SPACE_NAME=hyperpod-space
export DOMAIN_ID=d-xxx
export USER_PROFILE_NAME=test
export FILE_SYSTEM_ID=fs-xxx
export REGION=us-east-2

aws sagemaker create-space --domain-id $DOMAIN_ID \
--space-name $SPACE_NAME \
--space-settings "AppType=JupyterLab,CustomFileSystems=[{FSxLustreFileSystem={FileSystemId=$FILE_SYSTEM_ID}}]" \
--ownership-settings OwnerUserProfileName=$USER_PROFILE_NAME \
--space-sharing-settings SharingType=Private \
--region $REGION

Create an application using the space with the following code:

export SPACE_NAME=hyperpod-space
export DOMAIN_ID=d-xxx
export APP_NAME=test-app
export INSTANCE_TYPE=ml.t3.medium
export REGION=us-east-2
export IMAGE_ARN=arn:aws:sagemaker:us-east-2:081975978581:image/sagemaker-distribution-cpu

aws sagemaker create-app --space-name $SPACE_NAME \
--resource-spec "{\"InstanceType\":\"$INSTANCE_TYPE\",\"SageMakerImageArn\":\"$IMAGE_ARN\"}" \
--domain-id $DOMAIN_ID --app-type JupyterLab --app-name $APP_NAME --region $REGION
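
App creation takes a few minutes. You can poll the app status until it reaches InService (a quick check that reuses the variables above):

aws sagemaker describe-app \
--domain-id $DOMAIN_ID \
--space-name $SPACE_NAME \
--app-type JupyterLab \
--app-name $APP_NAME \
--query 'Status' \
--region $REGION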

Discover clusters in SageMaker Studio

You should now have everything ready to access the SageMaker HyperPod cluster using SageMaker Studio. Complete the following steps:

  1. On the SageMaker console, choose Admin configurations, Domains.
  2. Locate the user profile you created and launch SageMaker Studio.
  3. Under Compute in the navigation pane, choose HyperPod clusters.

Here you can view the SageMaker HyperPod clusters available in the account.

  4. Identify the right cluster for your training workload by looking at the cluster details and the cluster hardware metrics.

You can also preview the cluster by choosing the arrow icon.

You can also go to the Settings and Details tabs to find more information about the cluster.
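
The same details are also available from the AWS CLI if you want to script the comparison across clusters (the cluster name and Region are placeholders):

# List the SageMaker HyperPod clusters in the account
aws sagemaker list-clusters --region <REGION>

# Inspect a specific cluster and its nodes
aws sagemaker describe-cluster --cluster-name <cluster_name> --region <REGION>
aws sagemaker list-cluster-nodes --cluster-name <cluster_name> --region <REGION>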

Work in SageMaker Studio and connect to the cluster

You can launch either JupyterLab or Code Editor, both of which mount the cluster’s FSx for Lustre volume for development and debugging.

  1. In SageMaker Studio, under Get started, choose JupyterLab.
  2. Choose a space that has the FSx for Lustre file system mounted to get a consistent, reproducible environment.

The Cluster Filesystem column identifies which space has the cluster file system mounted.

This should launch JupyterLab with the FSx for Lustre volume mounted. By default, you should see the getting started notebook in your home folder, which has step-by-step instructions to run a Meta Llama 2 training job with PyTorch FSDP on the Slurm cluster. This example notebook demonstrates how you can use SageMaker Studio notebooks to transition from prototyping your training script to scaling up your workloads across multiple instances in the cluster environment. Additionally, you should see the FSx for Lustre file system you mounted to your JupyterLab space under /home/sagemaker-user/custom-file-systems/fsx_lustre.
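
The notebook walks through the exact steps. As a rough illustration of the kind of Slurm submission it builds up to (the script path, node count, GPUs per node, and log locations are illustrative assumptions rather than the notebook’s exact contents), a multi-node PyTorch FSDP job could be submitted with a batch script like the following:

#!/bin/bash
#SBATCH --job-name=llama2-fsdp       # illustrative job name
#SBATCH --nodes=4                    # number of cluster nodes to use (assumption)
#SBATCH --ntasks-per-node=1          # one torchrun launcher per node
#SBATCH --output=logs/%x_%j.out      # Slurm log location (assumption)

# Hypothetical training script on the shared FSx for Lustre volume
TRAIN_SCRIPT=/fsx/llama2-fsdp/train.py

# Use the first node in the allocation as the rendezvous host
MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)

# Launch one torchrun per node; torchrun spawns one worker per GPU
srun torchrun \
    --nnodes "$SLURM_NNODES" \
    --nproc_per_node 8 \
    --rdzv_id "$SLURM_JOB_ID" \
    --rdzv_backend c10d \
    --rdzv_endpoint "${MASTER_ADDR}:29500" \
    "$TRAIN_SCRIPT"

You would submit the script with sbatch from the cluster’s head node and can then monitor it with squeue or, as described next, from the SageMaker Studio UI.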

Monitor the tasks on SageMaker Studio

You can go to SageMaker Studio and choose the cluster to view a list of tasks currently in the Slurm queue.

You can choose a task to get additional task details such as the scheduling and job state, resource usage details, and job submission and limits.

You can also perform actions such as release, requeue, suspend, and hold on these Slurm tasks using the UI.
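
If you prefer the terminal, the same queue information and actions are available through standard Slurm commands on the cluster’s head node (the job ID is a placeholder):

squeue                      # list tasks currently in the Slurm queue
scontrol show job 42        # scheduling details, job state, and resource usage
scontrol hold 42            # hold a pending job
scontrol release 42         # release a held job
scontrol requeue 42         # requeue a job
scontrol suspend 42         # suspend a running job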

Clean up

Complete the following steps to clean up your resources:

  1. Delete the space:
aws --region <REGION> sagemaker delete-space \
--domain-id <DomainId> \
--space-name <SpaceName>
  2. Delete the user profile:
aws --region <REGION> sagemaker delete-user-profile \
--domain-id <DomainId> \
--user-profile-name <UserProfileName>
  3. Delete the domain. To retain the EFS volume, specify HomeEfsFileSystem=Retain.
aws --region <REGION> sagemaker delete-domain \
--domain-id <DomainId> \
--retention-policy HomeEfsFileSystem=Delete
  4. Delete the SageMaker HyperPod cluster.
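For example (the cluster name and Region are placeholders):
aws --region <REGION> sagemaker delete-cluster \
--cluster-name <ClusterName>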
  5. Delete the IAM role you created.
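Before the role can be deleted, detach the managed policy and delete the inline policy; the role and policy names below are the placeholders used earlier:
aws iam detach-role-policy \
--role-name <studio-hyperpod-role> \
--policy-arn arn:aws:iam::aws:policy/AmazonSageMakerFullAccess
aws iam delete-role-policy \
--role-name <studio-hyperpod-role> \
--policy-name <hyperpod-access>
aws iam delete-role --role-name <studio-hyperpod-role>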

Conclusion

In this post, we explored an approach to streamline your ML workflows using SageMaker Studio. We demonstrated how you can seamlessly transition from prototyping your training script within SageMaker Studio to scaling up your workload across multiple instances in a cluster environment. We also explained how to mount the cluster FSx for Lustre volume to your SageMaker Studio spaces to get a consistent reproducible environment.

This approach not only streamlines your development process but also allows you to initiate long-running jobs on the clusters and conveniently monitor their progress directly from SageMaker Studio.

We encourage you to try this out and share your feedback in the comments section.

Special thanks to Durga Sury (Sr. ML SA), Monidipa Chakraborty (Sr. SDE), and Sumedha Swamy (Sr. Manager PMT) for their support in the launch of this post.


About the Authors

Arun Kumar Lokanatha is a Senior ML Solutions Architect with the Amazon SageMaker team. He specializes in large language model training workloads, helping customers build LLM workloads using SageMaker HyperPod, SageMaker training jobs, and SageMaker distributed training. Outside of work, he enjoys running, hiking, and cooking.

Pooja Karadgi is a Senior Technical Product Manager at Amazon Web Services. At AWS, she is a part of the Amazon SageMaker Studio team and helps build products that cater to the needs of administrators and data scientists. She began her career as a software engineer before making the transition to product management. Outside of work, she enjoys crafting travel planners in spreadsheets, in true MBA fashion. Given the time she invests in creating these planners, it’s clear that she has a deep love for traveling, alongside a strong passion for hiking.

Read More

NVIDIA NIM on AWS Supercharges AI Inference

Generative AI is rapidly transforming industries, driving demand for secure, high-performance inference solutions to scale increasingly complex models efficiently and cost-effectively.

Expanding its collaboration with NVIDIA, Amazon Web Services (AWS) revealed today at its annual AWS re:Invent conference that it has extended NVIDIA NIM microservices across key AWS AI services to support faster AI inference and lower latency for generative AI applications.

NVIDIA NIM microservices are now available directly from the AWS Marketplace, as well as Amazon Bedrock Marketplace and Amazon SageMaker JumpStart, making it even easier for developers to deploy NVIDIA-optimized inference for commonly used models at scale.

NVIDIA NIM, part of the NVIDIA AI Enterprise software platform available in the AWS Marketplace, provides developers with a set of easy-to-use microservices designed for secure, reliable deployment of high-performance, enterprise-grade AI model inference across clouds, data centers and workstations.

These prebuilt containers are built on robust inference engines, such as NVIDIA Triton Inference Server, NVIDIA TensorRT, NVIDIA TensorRT-LLM and PyTorch, and support a broad spectrum of AI models — from open-source community ones to NVIDIA AI Foundation models and custom ones.

NIM microservices can be deployed across various AWS services, including Amazon Elastic Compute Cloud (EC2), Amazon Elastic Kubernetes Service (EKS) and Amazon SageMaker.

Developers can preview over 100 NIM microservices built from commonly used models and model families, including Meta’s Llama 3, Mistral AI’s Mistral and Mixtral, NVIDIA’s Nemotron, Stability AI’s SDXL and many more on the NVIDIA API catalog. The most commonly used ones are available for self-hosting to deploy on AWS services and are optimized to run on NVIDIA accelerated computing instances on AWS.

NIM microservices now available directly from AWS include:

  • NVIDIA Nemotron-4, available in Amazon Bedrock Marketplace, Amazon SageMaker JumpStart and AWS Marketplace. This is a cutting-edge LLM designed to generate diverse synthetic data that closely mimics real-world data, enhancing the performance and robustness of custom LLMs across various domains.
  • Llama 3.1 8B-Instruct, available on AWS Marketplace. This 8-billion-parameter multilingual large language model is pretrained and instruction-tuned for language understanding, reasoning and text-generation use cases.
  • Llama 3.1 70B-Instruct, available on AWS Marketplace. This 70-billion-parameter pretrained, instruction-tuned model is optimized for multilingual dialogue.
  • Mixtral 8x7B Instruct v0.1, available on AWS Marketplace. This high-quality sparse mixture of experts model with open weights can follow instructions, complete requests and generate creative text formats.

NIM on AWS for Everyone

Customers and partners across industries are tapping NIM on AWS to get to market faster, maintain security and control of their generative AI applications and data, and lower costs.

SoftServe, an IT consulting and digital services provider, has developed six generative AI solutions fully deployed on AWS and accelerated by NVIDIA NIM and AWS services. The solutions, available on AWS Marketplace, include SoftServe Gen AI Drug Discovery, SoftServe Gen AI Industrial Assistant, Digital Concierge, Multimodal RAG System, Content Creator and Speech Recognition Platform.

They’re all based on NVIDIA AI Blueprints, comprehensive reference workflows that accelerate AI application development and deployment and feature NVIDIA acceleration libraries, software development kits and NIM microservices for AI agents, digital twins and more.

Start Now With NIM on AWS

Developers can deploy NVIDIA NIM microservices on AWS according to their unique needs and requirements. By doing so, developers and enterprises can achieve high-performance AI with NVIDIA-optimized inference containers across various AWS services.

Visit the NVIDIA API catalog to try out over 100 different NIM-optimized models, and request either a developer license or 90-day NVIDIA AI Enterprise trial license to get started deploying the microservices on AWS services. Developers can also explore NIM microservices in the AWS Marketplace, Amazon Bedrock Marketplace or Amazon SageMaker JumpStart.

See notice regarding software product information.

Read More

Introducing Amazon Kendra GenAI Index – Enhanced semantic search and retrieval capabilities

Amazon Kendra is an intelligent enterprise search service that helps you search across different content repositories with built-in connectors. AWS customers use Amazon Kendra with large language models (LLMs) to quickly create secure, generative AI–powered conversational experiences on top of your enterprise content.

As enterprises adopt generative AI, many are developing intelligent assistants powered by Retrieval Augmented Generation (RAG) to take advantage of information and knowledge from their enterprise data repositories. This approach combines a retriever with an LLM to generate responses. A retriever is responsible for finding relevant documents based on the user query. Customers seek to build comprehensive generative AI systems that use this approach with their choice of index, LLMs, and other components. The combination of retrievers and LLMs offers powerful capabilities, but organizations face significant challenges in building effective retrieval systems.

The core challenge lies in developing data pipelines that can handle diverse data sources, the multitude of data entities in each data source, their metadata and access control information, while maintaining accuracy. This requires implementing information extraction models, optimizing text processing, and balancing sparse and dense retrieval methods. These diverse data sources come with their own ways of encapsulating entities of information. These entities can be documents in Amazon Simple Storage Service (Amazon S3), HTML pages in a web server, accounts in Salesforce, or incidents in ServiceNow. Each data source can have multiple ways to authenticate, such as OAuth 2.0 (for example, client credentials flow or refresh token flow), Network Level Trust Manager (NTLM), basic authentication, and others.

Entities also come with access control information for each entity, such as the user email and groups that are authorized to access the entity. The data source administrators and users also add a multitude of metadata fields to each entity that contain critical information about the entity, such as created date or author. Organizations must also fine-tune technical parameters, including embedding models, dimensionality, and nearest neighbor algorithms for optimal performance. This complexity often requires significant expertise and resources, making it difficult for many organizations to implement effective retrieval systems for their generative AI solutions.

Amazon Bedrock Knowledge Bases provides managed workflows for RAG pipelines with customizable features for chunking, parsing, and embedding. However, customers seek a more streamlined experience with pre-optimized parameters and simplified data source integration. They also want the ability to reuse indexed content across their generative AI solutions.

Amazon Q Business is a fully managed, generative AI–powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, human resources (HR), and benefits help desks.

Amazon Q Business also helps streamline tasks and accelerate problem solving. You can use Amazon Q Business to create and share task automation applications or perform routine actions like submitting time-off requests and sending meeting invites. However, Amazon Q Business customers who have already made investments in Amazon Kendra for their enterprise search needs are seeking ways to get RAG-based, enhanced semantic search against their Amazon Kendra index and save on cost and time.

Amazon Kendra GenAI Index is a new index in Amazon Kendra designed for RAG and intelligent search to help enterprises build digital assistants and intelligent search experiences more efficiently and effectively. This index offers high retrieval accuracy, using advanced semantic models and the latest information retrieval technologies. It can be integrated with Amazon Bedrock Knowledge Bases and other Amazon Bedrock tools to create RAG-powered digital assistants, or it can be used with Amazon Q Business for a fully managed digital assistant solution.

Amazon Kendra GenAI Index addresses common challenges in building retrievers for generative AI assistants, including data ingestion, model selection, and integration with various generative AI tools. Its features include a managed retriever with high semantic accuracy, a hybrid index combining vector and keyword search, pre-optimized parameters, connectors to a variety of enterprise data sources, and metadata-based user permissions filtering.

A single Amazon Kendra GenAI Index can be used across multiple Amazon Q Business applications and Amazon Bedrock Knowledge Bases, benefiting from features such as relevance tuning, document enrichment, and metadata filtering. This new offering joins our existing Amazon Kendra Developer and Enterprise editions, providing customers with more options to meet their specific search needs. This index will support most of the popular features (with some exceptions listed later in this post) such as connectors, user context filtering, metadata support, relevance tuning, and others that customers love to use in Amazon Kendra.

Benefits

Amazon Kendra GenAI Index offers a managed retriever solution that delivers high semantic accuracy for RAG while enabling organizations to use their Amazon Web Services (AWS) generative AI investments across multiple services through built-in integration with Amazon Bedrock Knowledge Bases and Amazon Q Business without needing to rebuild indexes for different applications. Amazon Kendra GenAI Index also supports connectors to 43 enterprise sources such as SharePoint, OneDrive, Google Drive, Salesforce, and others with integrated metadata-based user permissions filtering, reducing the burden of building custom connectors.

Because Amazon Kendra GenAI Index is a managed RAG option within Amazon Bedrock Knowledge Bases, customers can build generative AI assistants using Amazon Bedrock tooling such as agents and prompt flows. Organizations can select their preferred language models, customize prompts, and manage costs through pay-per-token pricing.

For those seeking a fully managed experience, Amazon Kendra GenAI Index integrates seamlessly with Amazon Q Business, removing the complexity of LLM selection and prompt engineering. Customers can also use a single Amazon Kendra GenAI Index that serves multiple Amazon Q Business applications and Amazon Bedrock Knowledge Bases. As a result, they can index one time and reuse that indexed content across use cases. Additionally, features such as relevance tuning, document enrichment, and metadata filtering enable businesses to optimize content relevance for their specific needs.

Enhanced semantic understanding

Amazon Kendra GenAI Index incorporates significant upgrades to the underlying search and retrieval technologies, along with improved semantic models. These enhancements provide higher accuracy in the retrieval API, making it especially valuable for RAG applications. It offers high accuracy out-of-the-box for search and retrieval use cases, powered by the latest information retrieval technologies, semantic embedding, and reranker models tested across a variety of datasets. The high retrieval accuracy is provided through its hybrid indexing system, which combines vector and keyword search using advanced semantic relevance models with pre-optimized parameters.

Optimized resource management

The Amazon Kendra GenAI Index introduces smaller index units, leading to improved capacity utilization. This optimization enables organizations to manage their search infrastructure more efficiently while maintaining high performance levels. The streamlined architecture reduces operational overhead and allows for more flexible scaling based on actual usage patterns.

Single index seamless integration with AWS services

Amazon Kendra GenAI Index enables organizations to use a single index across the AWS generative AI stack without having to rebuild indexes. Through deep integration with both Amazon Q Business and Amazon Bedrock Knowledge Bases, organizations can choose between a fully managed experience or a customizable approach. The Amazon Q Business integration provides a streamlined path for building generative AI assistants, and Amazon Bedrock Knowledge Bases offers greater control over prompt customization, model selection, and orchestration with pay-per-token pricing. This flexibility allows organizations to adapt their implementation as needs evolve, protecting their investment in content indexing.

How to create and use the Amazon Kendra GenAI Index

As mentioned, you have the option to use Amazon Kendra GenAI Index as a standalone index for search use cases using Amazon Kendra. You also have the option to use the new Amazon Kendra GenAI Index as a retriever for Amazon Q Business and as part of Amazon Bedrock Knowledge Bases.

Option 1: Use Amazon Kendra GenAI Index within Amazon Kendra standalone

The steps to create an Amazon Kendra GenAI index are similar to Creating an index, as described in the Amazon Kendra Developer Guide.

To get started with Amazon Kendra GenAI Index:

  1. On the Amazon Kendra console, choose Create index.
  2. Select GenAI edition as your index type and choose Next, as shown in the following screenshot.
  3. Choose the defaults under Configure user access control and choose Next, as shown in the following screenshot.
  4. Choose the defaults under Review and create and choose Create, as shown in the following screenshot.
  5. You can validate the edition type by selecting the created index from the list of indexes and choosing the Settings tab, which shows the Edition type.

  6. Your index is now ready to add data sources. In the left navigation pane, choose Data sources, then choose Add data source, as shown in the following screenshot.
  7. Choose Select sample dataset (Amazon S3 data source).
  8. Add a Data source name and keep the defaults. Choose Add data source, as shown in the following screenshot.
  9. It takes a few seconds to propagate the AWS Identity and Access Management (IAM) role. When it’s done, sync the data source by choosing Sync now, or wait for the sync to start automatically.
  10. After crawling and indexing are done, the Status column in Sync history shows Completed. Confirm Total items scanned.
  11. Check the search results against the newly created Amazon Kendra GenAI index. Select the newly created index and choose Search indexed content, which opens a user interface for searching.
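
You can also issue the same kind of query programmatically. The following is a minimal sketch using the Amazon Kendra Retrieve API from the AWS CLI; the index ID, query text, and Region are placeholders:

aws kendra retrieve \
--index-id <index-id> \
--query-text "<your question>" \
--region <region>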

The following image shows a comparison of the results for the same query against a non-GenAI index. You can observe that the semantic relevancy increased, surfacing the result as an Amazon Kendra suggested answer. The number of output tokens also increased, providing more context and relevance.

You can also visit the Amazon Kendra Developer Guide to learn how to add data sources to your index by using one of the available data sources or adding a document directly to batch upload.

Option 2: Use Amazon Kendra GenAI Index as a retriever with Amazon Q Business

One of the main benefits of the Amazon Kendra GenAI Index is the usability of the index across multiple AWS services. In Amazon Q Business, administrators can now use the same Amazon Kendra GenAI index created in the previous steps to attach to an application.

To create an Amazon Q Business application, refer to Creating an Amazon Q Business application environment in the Amazon Q User Guide.

  1. When the Amazon Q Business application is ready, in the left navigation pane, select Data sources, then choose Add an index, as shown in the following screenshot.
  2. Select Use an existing Amazon Kendra index. Under Select an index, choose the newly created GenAI index.

NOTE: After adding the Amazon Kendra index as a retriever in your Amazon Q Business application, you can manage the index and add documents and data sources through the Amazon Kendra GenAI Index console.

  3. After the index is attached, open the web experience link. In the left navigation pane, select Amazon Q Business. Under Web experience settings, choose the Deployed URL, as shown in the following screenshot, to interact with the Amazon Q Business AI assistant.
  4. In the Amazon Q Business web chat, pose the same question as in the previous steps. This query uses the same Amazon Kendra GenAI index created in Amazon Kendra.

Option 3: Use Amazon Kendra GenAI Index with Amazon Bedrock Knowledge Bases

Similar to Option 2, you can seamlessly use Amazon Kendra GenAI Index as a data source with Amazon Bedrock Knowledge Bases.

To create an Amazon Bedrock knowledge base, refer to Build a knowledge base by connecting to a data source in the Amazon Bedrock User Guide.

  1. On the Amazon Bedrock console, choose Knowledge Bases, as shown in the following screenshot.
  2. You will be presented with the knowledge base creation screen for Amazon Kendra GenAI Index. Enter the details and select the Amazon Kendra GenAI index you created from the options.
  3. After your knowledge base is created, you can validate that the Retrieval Augmented Generation (RAG) type is listed as Kendra GenAI Index. To manage data sources, choose Add. The Amazon Kendra console opens, where you can manage all data sources for the index.
  4. After the knowledge base is created, select it to test the query.

Conclusion

Amazon Kendra GenAI Index represents a significant advancement in enterprise search and retrieval capabilities, offering organizations a streamlined path to implementing effective RAG solutions. Whether organizations choose to use it as a standalone search solution, integrate it with Amazon Q Business, or use it through Amazon Bedrock Knowledge Bases, Amazon Kendra GenAI Index provides the flexibility and efficiency needed to make enterprise content more accessible and actionable.

To learn more about Amazon Kendra, visit the Amazon Kendra Documentation.

Pricing and availability

For information about the AWS Regions in which Amazon Kendra GenAI Index is available, refer to the Amazon Kendra endpoints and quotas page. For detailed pricing information, visit the Amazon Kendra Pricing page.


About the Authors

Krishna Mudda is Senior Manager of Gen AI World Wide Specialist Solution Architects with in Amazon Q Business team.

Marcel Pividal is a Senior AI Services SA in the Worldwide Specialist Organization, bringing over 22 years of expertise in transforming complex business challenges into innovative technological solutions. As a thought leader in generative AI implementation, he specializes in developing secure, compliant AI architectures for enterprise-scale deployments across multiple industries.

Nikhil Shetty is Senior Product Manager of Amazon Kendra.

Aakash Upadhyay is a Senior Software Engineer at AWS, specializing in building scalable NLP and Generative AI cloud services. Over the past six years, he has contributed to the development and enhancement of products like Amazon Translate, Kendra, and Q-Business.

Vijai Gandikota is a Principal Product Manager on the Amazon Q and Amazon Kendra team of Amazon Web Services. He is responsible for Region expansion, language support, guardrails, ingestion, security, and other aspects of Amazon Q and Amazon Kendra.

Kristy Lin is a Software Development Engineer with Amazon Bedrock Knowledge Bases, helping customers build scalable RAG applications.

Read More

Research Focus: Week of December 2, 2024

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Adaptive Security, Erasures, and Network Assumptions in Communication-Local MPC

n-party Multi-Party Computation (MPC) is a cryptographic protocol technique that allows separate parties to securely compute a function on their joint data while keeping their inputs private. To build such a protocol, most works require all pairs of participating parties to be able to securely and reliably communicate with each other. Recently, the problem of Communication-Local (CL) MPC has been explored where this assumption is modelled more realistically – e.g. by only requiring that participating parties can securely and reliably communicate with a few other participating parties (as for example in networks like blockchains). However, few solutions exist that guarantee adaptive security—resilience to dynamic corruption of parties—and most rely on strong assumptions about party actions.

In a recent paper: Adaptive Security, Erasures, and Network Assumptions in Communication-Local MPC, researchers from Microsoft and external collaborators revisit assumptions made in earlier work. The authors conclude that for secure, adaptive CL-MPC, some previously assumed capabilities (like secure erasure and multisend) can be bypassed under certain conditions; however, fully reducing all-to-all to all-to-one communication remains unachievable in CL settings without some minimal assumptions. They propose a new SOS-RMT protocol, enabling more efficient CL-MPC under specific feasibility bounds and additional cryptographic assumptions.


Cuttlefish: A Fair, Predictable Execution Environment for Cloud-hosted Financial Exchanges

Low-latency algorithmic trading is driving efficiency in modern financial markets by promoting accurate/timely pricing of securities, higher liquidity, and lower trade costs for investors. The goal is to process incoming market data and issue trades as quickly as possible to take advantage of ephemeral market-making and arbitrage opportunities. Interest in cloud-hosted financial exchanges is growing, as they promise a cost-effective platform more accessible to market participants, among other benefits.

Unfortunately, one of the major roadblocks in cloud environments is ensuring equal network and compute conditions despite unpredictable network latencies and non-deterministic computation times.

In a recent preprint: Cuttlefish: A Fair, Predictable Execution Environment for Cloud-hosted Financial Exchanges, researchers from Microsoft and external collaborators present a fair-by-design algorithmic trading platform that can run in cloud environments. Cuttlefish aims to apply efficient and robust mapping of real operations to a novel formulation of ‘virtual time’. This allows Cuttlefish to push fairness to the extreme, regardless of the underlying network communication and computation hardware. The researchers’ implementation and evaluation validate the practicality of Cuttlefish and shows its operational efficiency on public cloud platforms. This paper builds on previous work: Rethinking Cloud-hosted Financial Exchanges for Response Time Fairness and DBO: Fairness for Cloud-Hosted Financial Exchanges. 



LLM2CLIP: Powerful language model unlocks richer visual representation

CLIP is a prominent multimodal foundational model, aligning visual and textual signals into a shared feature space. It supports various tasks, including zero-shot classification, detection, segmentation, and cross-modal retrieval, significantly influencing the entire multimodal domain. As a feature extractor, it has become dominant in cross-modal representation tasks such as image understanding, video understanding, and text-to-image/video generation. However, rapid advancements in large language models (LLMs) are continually pushing the boundaries of language comprehension and generation. Can the capabilities of LLMs be harnessed to further improve multimodal representation learning?

In a recent article: LLM2CLIP: Powerful Language Model Unlock Richer Visual Representation, researchers from Microsoft and external collaborators propose LLM2CLIP, a novel approach to unlock CLIP’s potential, focusing on fundamental optimizations of promising foundation models. By fine-tuning the LLM in the caption space with contrastive learning, they extract its textual capabilities into the output embeddings, significantly improving the output layer’s textual discriminability. The researchers then design a training process where the fine-tuned LLM acts as a powerful teacher for CLIP’s visual encoder. The LLM’s presence allows them to incorporate longer and more complex captions without being restricted by CLIP’s text encoder’s context window and ability limitations. Their experiments demonstrate that this approach brings substantial improvements in cross-modal tasks.


LORASC: Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning

Foundation models, which are large-scale models pre-trained on extensive datasets and subsequently adapted for specific downstream tasks, have become integral to contemporary machine learning frameworks. Fine-tuning these models is essential, yet full parameter fine-tuning often encounters significant memory and computational bottlenecks. Parameter-efficient finetuning (PEFT) techniques aim to minimize the number of trainable parameters to reduce training costs and improve training stability. Among these techniques, Low-Rank Adaptation (LoRA) is highly efficient, although limitations in its expressiveness and generalization have been noted.

In a recent paper: LORASC: Expressive and Generalizable Low-rank Adaptation for Large Models via Slow Cascaded Learning, researchers from Microsoft and external collaborators present an innovative technique designed to enhance LoRA’s expressiveness and generalization capabilities while preserving its training efficiency. Their cascaded learning strategy enables a mixture-of-low-rank adaptation, thereby increasing the model’s ability to capture complex patterns. They also introduce a slow-fast update mechanism and cascading noisy tuning to bolster generalization. Their extensive experiments on various language and vision datasets, as well as robustness benchmarks, show that the proposed method significantly outperforms existing baselines, while also mitigating overfitting, enhancing model stability, and improving out-of-distribution (OOD) robustness.

Microsoft Research in the news


Can AI spot the next bomb cyclone far in advance? Microsoft hopes so 

Seattle Times | November 23, 2024

Microsoft claims that Aurora, a deep-learning model that’s constantly being trained, can produce weather forecasts much faster than — and with accuracy that meets or exceeds — traditional forecasting models.


How Microsoft’s next-gen BitNet architecture is turbocharging LLM efficiency  

VentureBeat | November 13, 2024

One-bit large language models (LLMs) have emerged as a promising approach to making generative AI more accessible and affordable. In a new paper, Microsoft researchers introduce BitNet a4.8, a new technique that further improves the efficiency of one-bit LLMs without sacrificing their performance.


2024 Ellison Cliffe Lecture: AI in science and medicine with Christopher Bishop 

Royal Society of Medicine | November 13, 2024

Christopher Bishop, Technical Fellow and Director of Microsoft Research AI for Science, discusses the extraordinary advances in the deep learning technology that underpins the AI revolution, including crucial progress in the fields of scientific discovery and medicine. This recent speech at the Royal Society of Medicine includes current examples of AI’s impact in materials design, drug discovery, and healthcare.


Read More

Building Generative AI and ML solutions faster with AI apps from AWS partners using Amazon SageMaker

Organizations of every size and across every industry are looking to use generative AI to fundamentally transform the business landscape with reimagined customer experiences, increased employee productivity, new levels of creativity, and optimized business processes. A recent study by Telecom Advisory Services, a globally recognized research and consulting firm that specializes in economic impact studies, shows that cloud-enabled AI will add more than $1 trillion to global GDP from 2024 to 2030.

Organizations are looking to accelerate the process of building new AI solutions. They use fully managed services such as Amazon SageMaker AI to build, train, and deploy generative AI models. Oftentimes, they also want to integrate their choice of purpose-built AI development tools to build their models on SageMaker AI.

However, the process of identifying appropriate applications is complex and demanding, requiring significant effort to make sure that the selected application meets an organization’s specific business needs. Deploying, upgrading, managing, and scaling the selected application also demands considerable time and effort. To adhere to rigorous security and compliance protocols, organizations also need their data to stay within the confines of their security boundaries without the need to store it in a software as a service (SaaS) provider-owned infrastructure.

This increases the time it takes for customers to go from data to insights. Our customers want a simple and secure way to find the best applications, integrate the selected applications into their machine learning (ML) and generative AI development environment, and manage and scale their AI projects.

Introducing Amazon SageMaker partner AI apps

Today, we’re excited to announce that AI apps from AWS Partners are now available in SageMaker. You can now find, deploy, and use these AI apps privately and securely, all without leaving SageMaker AI, so you can develop performant AI models faster.

Industry-leading app providers

The first group of partners and applications—shown in the following figure—that we’re including are Comet and its model experiment tracking application, Deepchecks and its large language model (LLM) quality and evaluation application, Fiddler and its model observability application, and Lakera and its AI security application.

Managed and secure

These applications are fully managed by SageMaker AI, so customers don’t have to worry about provisioning, scaling, and maintaining the underlying infrastructure. SageMaker AI makes sure that sensitive data stays completely within each customer’s SageMaker environment and will never be shared with a third party.

Available in SageMaker AI and SageMaker Unified Studio (preview)

Data scientists and ML engineers can access these applications from Amazon SageMaker AI (formerly known as Amazon SageMaker) and from SageMaker Unified Studio. This capability enables data scientists and ML engineers to seamlessly access the tools they require, enhancing their productivity and accelerating the development and deployment of AI products. It also empowers data scientists and ML engineers to do more with their models by collaborating seamlessly with their colleagues in data and analytics teams.

Seamless workflow integration

Direct integration with SageMaker AI provides a smooth user experience, from model building and deployment to ongoing production monitoring, all within your SageMaker development environment. For example, a data scientist can run experiments in their SageMaker Studio or SageMaker Unified Studio Jupyter notebook and then use the Comet ML app for visualizing and comparing those experiments.

Streamlined access

Use AWS credits for partner apps without navigating lengthy procurement or approval processes, accelerating adoption and scaling of AI observability.

Application deep dive

The integration of these AI apps within SageMaker Studio enables you to build AI models and solutions without leaving your SageMaker development environment. Let’s take a look at the initial group of apps launched at re:Invent 2024.

Comet

Comet provides an end-to-end model evaluation solution for AI developers with best-in-class tooling for experiment tracking and model production monitoring. Comet has been trusted by enterprise customers and academic teams since 2017. Within SageMaker Studio, Notebooks and Pipelines, data scientists, ML engineers, and AI researchers can use Comet’s robust tracking and monitoring capabilities to oversee model lifecycles from training through production, bringing transparency and reproducibility to ML workflows.

You can access the Comet UI directly from SageMaker Studio and SageMaker Unified Studio without the need to provide additional credentials. The app infrastructure is deployed, managed, and supported by AWS, providing a holistic experience and seamless integration. This means each Comet deployment through SageMaker AI is securely isolated and provisioned automatically. You can seamlessly integrate Comet’s advanced tools without altering your existing SageMaker AI workflows. To learn more, visit Comet.

Deepchecks

Deepchecks specializes in LLM evaluation. Their validation capabilities include automatic scoring, version comparison, and auto-calculated metrics for properties such as relevance, coverage, and grounded-in-context. These capabilities enable organizations to rigorously test, monitor, and improve their LLM applications while maintaining complete data sovereignty.

Deepchecks’s state-of-the-art automatic scoring capabilities for LLM applications, paired with the infrastructure and purpose-built tools provided by SageMaker AI for each step of the ML and FM lifecycle, makes it possible for AI teams to improve their models’ quality and compliance.

Starting today, organizations using AWS can immediately work with Deepchecks’s LLM evaluation tools in their environment, minimizing security and privacy concerns because data remains fully contained within their AWS environments. This integration also removes the overhead of onboarding a third-party vendor, because legal and procurement aspects are streamlined by AWS. To learn more, visit Deepchecks.

Fiddler AI

The Fiddler AI Observability solution allows data science, engineering, and line-of-business teams to validate, monitor, analyze, and improve ML models deployed on SageMaker AI.

With Fiddler’s advanced capabilities, users can track model performance, monitor for data drift and integrity, and receive alerts for immediate diagnostics and root cause analysis. This proactive approach allows teams to quickly resolve issues, continuously improving model reliability and performance. To learn more, visit Fiddler.

Lakera

Lakera partners with enterprises and high-growth technology companies to unlock their generative AI transformation. Lakera’s application Lakera Guard provides real-time visibility, protection, and control for generative AI applications. By protecting sensitive data, mitigating prompt attacks, and creating guardrails, Lakera Guard makes sure that your generative AI always interacts as expected.

Starting today, you can set up a dedicated instance of Lakera Guard within SageMaker AI that ensures data privacy and delivers low-latency performance, with the flexibility to scale alongside your generative AI application’s evolving needs. To learn more, visit Lakera.

See how customers are using partner apps

“The AI/ML team at Natwest Group leverages SageMaker and Comet to rapidly develop customer solutions, from swift fraud detection to in-depth analysis of customer interactions. With Comet now a SageMaker partner app, we streamline our tech and enhance our developers’ workflow, improving experiment tracking and model monitoring. This leads to better results and experiences for our customers.”
– Greig Cowan, Head of AI and Data Science, NatWest Group.

“Amazon SageMaker plays a pivotal role in the development and operation of Ping Identity’s homegrown AI and ML infrastructure. The SageMaker partner AI apps capability will enable us to deliver faster, more effective ML-powered functionality to our customers as a private, fully managed service, supporting our strict security and privacy requirements while reducing operational overhead.”
– Ran Wasserman, Principal Architect, Ping Identity.

Start building with AI apps from AWS partners

Amazon SageMaker AI provides access to a highly curated selection of apps from industry-leading providers that are designed and certified to run natively and privately on SageMaker AI. Data scientists and developers can quickly find, deploy, and use these applications within SageMaker AI and the new unified studio to accelerate their ML and generative AI model building journey.

You can access all available SageMaker partner AI apps directly from SageMaker AI and SageMaker Unified Studio. Click through to view a specific app’s functionality, licensing terms, and estimated costs for deployment. After subscribing, you can configure the infrastructure that your app will run on by selecting a deployment tier and additional configuration parameters. After the app finishes the provisioning process, you will be able to assign access to your users, who will find the app ready to use in their SageMaker Studio and SageMaker Unified Studio environments.


About the authors

Gwen Chen is a Senior Generative AI Product Marketing Manager at AWS. She started working on AI products in 2018. Gwen has launched an NLP-powered app building product, MLOps, generative AI-powered assistants for data integration and model building, and inference capabilities. Gwen graduated from a dual master degree program of science and business with Duke and UNC Kenan-Flagler. Gwen likes listening to podcasts, skiing, and dancing.

Naufal Mir is a Senior Generative AI/ML Specialist Solutions Architect at AWS. He focuses on helping customers build, train, deploy, and migrate ML workloads to SageMaker. He previously worked at financial services institutes developing and operating systems at scale. He enjoys ultra-endurance running and cycling.

Kunal Jha is a Senior Product Manager at AWS. He is focused on building Amazon SageMaker Studio as the IDE of choice for all ML development steps. In his spare time, Kunal enjoys skiing, scuba diving and exploring the Pacific Northwest. You can find him on LinkedIn.

Eric Peña is a Senior Technical Product Manager in the AWS Artificial Intelligence Platforms team, working on Amazon SageMaker Interactive Machine Learning. He currently focuses on IDE integrations on SageMaker Studio. He holds an MBA degree from MIT Sloan and outside of work enjoys playing basketball and football.

Arkaprava De is a manager leading the SageMaker Studio Apps team at AWS. He has been at Amazon for over 9 years and is currently working on improving the Amazon SageMaker Studio IDE experience. You can find him on LinkedIn.

Zuoyuan Huang is a Software Development Manager at AWS. He has been at Amazon for over 5 years, and has been focusing on building SageMaker Studio apps and IDE experience. You can find him on LinkedIn.

Read More

MarS: A unified financial market simulation engine in the era of generative foundation models

Introduction

Generative foundation models have transformed various domains, creating new paradigms for content generation. Integrating these models with domain-specific data enables industry-specific applications. Microsoft Research has used this approach to develop the large market model (LMM) and the Financial Market Simulation Engine (MarS) for the financial domain. These innovations have the potential to empower financial researchers to customize generative models for diverse scenarios, establishing a new paradigm for applying generative models to downstream tasks in financial markets. This integration may provide enhanced efficiency, more accurate insights, and significant advancements in the financial domain. 

Applying generative models to financial markets

In recent years, generative foundation models have achieved notable success in fields like natural language processing and media generation. Their rise has sparked a new wave of research and industrial adoption, reshaping production processes across industries. These models excel due to three essential elements: a large volume of high-quality training data; effective tokenization and serialization of core information (such as semantic information in text); and an auto-regressive training approach that models data comprehensively, enabling implicit reasoning. 

Building on years of AI applications across industries, Microsoft researchers recognized that combining generative models with domain-specific data could lead to impactful solutions, particularly in finance. The financial market is a prime example, notably for its vast amount of order data, which are characterized by three key features: 

  • Fine granularity: Orders, as the atomic data in the financial market, provide a comprehensive and detailed representation of the real market. Combined with matching rules, order data can reproduce the entire market operation process.
  • Large scale: Electronic trading has resulted in the accumulation of massive trade-order data across global exchanges.
  • Well-structured: The structured nature of order data makes it ideal for tokenization and sequential modeling.

These characteristics position order flow data as a critical foundation for generative modeling in financial markets. To this end, Microsoft Research developed LMM and MarS, which financial researchers can use to customize generative models for various applications, thus fostering a new paradigm of generative solutions for all downstream tasks in finance. This has the potential to advance efficiency and insight generation in the financial industry.

Figure 1: Illustration of stock market and orders

Tokenization of order flow information

Order flow data is vital for generative models in finance, reflecting real-time interactions among market participants. It offers two types of value: 

  • Fine-grained market feedback: Each order, especially large ones, may influence others’ decisions, providing a micro-level view of pricing behavior. 
  • Macroscopic market dynamics: Collective interactions shape trading dynamics over time, capturing the evolution and resolution of conflicts between market forces. 

Researchers at Microsoft developed LMM by modeling both individual orders and entire order sets over time. This two-tiered approach captures both fine-grained feedback and macro-level dynamics of competition. Figure 2 shows the tokenization techniques for these models, enabling high-fidelity simulations of complex market dynamics. 

Figure 2: Tokenization for individual orders (top; type, price, volume, and interval fields) and batch orders (bottom)
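
To make the individual-order tokenization concrete, here is a minimal sketch of how the type, price, volume, and interval fields of one order could be mapped to discrete token IDs via per-field binning. The bin edges, vocabulary layout, and field definitions are illustrative assumptions; the post does not specify the actual LMM tokenizer.

    import numpy as np

    # Hypothetical vocabulary for each field; these bins are illustrative only.
    PRICE_BINS = np.linspace(-0.01, 0.01, 128)    # price offset from mid-price, as a return
    VOLUME_BINS = np.geomspace(1, 100_000, 64)    # order size in shares
    INTERVAL_BINS = np.geomspace(1e-3, 60.0, 32)  # inter-arrival time in seconds
    ORDER_TYPES = {"limit_buy": 0, "limit_sell": 1, "market_buy": 2, "market_sell": 3, "cancel": 4}

    def tokenize_order(order_type: str, price_offset: float, volume: float, interval: float) -> list[int]:
        """Map one order's (type, price, volume, interval) fields to discrete token IDs."""
        type_id = ORDER_TYPES[order_type]
        price_id = int(np.digitize(price_offset, PRICE_BINS))
        volume_id = int(np.digitize(volume, VOLUME_BINS))
        interval_id = int(np.digitize(interval, INTERVAL_BINS))
        # Offset each field into a disjoint ID range so a single flat vocabulary can be used.
        offsets = np.cumsum([0, len(ORDER_TYPES), len(PRICE_BINS) + 1, len(VOLUME_BINS) + 1])
        fields = [type_id, price_id, volume_id, interval_id]
        return [int(f + off) for f, off in zip(fields, offsets)]

    # Example: a limit buy 0.2% below mid-price, 500 shares, 0.8 s after the previous order.
    print(tokenize_order("limit_buy", -0.002, 500, 0.8))

An order batch, by contrast, would serialize a whole window of such orders (for example, a grid of volumes per price level and time step) so the model can also learn macro-level dynamics.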

Scaling law of the large market model: Unlocking the potential of financial data

The effectiveness of generative models improves significantly with larger training datasets and model parameters. Researchers at Microsoft used the two tokenization strategies to design models based on the Transformer architecture and tested them across varying data scales. Figure 3 illustrates the scaling behavior of both the order and order-batch models trained on historical trading data: validation loss decreases steadily as model size and the number of training tokens grow. This scaling enhances the models' ability to generate order flows with a deep understanding of market intricacies, enabling more accurate time-series modeling.

Figure 3: Scaling curves of the order model (2M–1.02B parameters) and order-batch model (150M–3B parameters); validation loss decreases as the number of training tokens increases
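
Scaling behavior like that in Figure 3 is commonly summarized by fitting a saturating power law to (training tokens, validation loss) pairs. The sketch below does exactly that with SciPy; the functional form and every numeric value are illustrative assumptions, not figures reported by the researchers.

    import numpy as np
    from scipy.optimize import curve_fit

    def scaling_law(tokens, a, b, irreducible):
        """Saturating power law: loss falls as a power of training tokens toward a floor."""
        return a * tokens ** (-b) + irreducible

    # Hypothetical (training tokens, validation loss) pairs read off a curve like Figure 3.
    tokens = np.array([1e8, 3e8, 1e9, 3e9, 1e10, 3e10])
    loss = np.array([2.10, 1.85, 1.62, 1.47, 1.36, 1.30])

    params, _ = curve_fit(scaling_law, tokens, loss, p0=[10.0, 0.1, 1.0], maxfev=10_000)
    a, b, floor = params
    print(f"fit: loss ≈ {a:.2f} * tokens^(-{b:.3f}) + {floor:.2f}")
    # Extrapolate to a larger training budget (purely illustrative).
    print("predicted loss at 1e11 tokens:", scaling_law(1e11, *params))

A fit like this makes it easy to ask how much additional data or model capacity would be needed to reach a target validation loss.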


MarS based on LMM

A customizable generative model for financial scenarios

Generative models, once trained, can be easily adapted for a range of downstream tasks, often outperforming traditional models tailored for specific scenarios. Building on the development of LMM, researchers further analyzed the needs of various financial scenarios and designed MarS as a versatile financial market simulation engine. MarS not only serves as a general-purpose simulation tool but also introduces a novel framework for applying generative models across diverse financial tasks, from market prediction and risk assessment to trading strategy optimization. 

Figure 4: Framework of MarS. Order-level historical market data and current market and environment data feed the large market model, which generates order sequences that are matched into simulated market trajectories for downstream applications.

Constructing a unified paradigm for prediction and detection tasks 

Traditional financial prediction solutions often require the development of specialized algorithms, which must be frequently adjusted, consuming time and resources. LMM’s capacity to model financial markets in depth allows for periodic updates based on the latest data. MarS creates a virtual exchange to match order flows generated by LMM, simulating trades and deriving simulated market trajectories (see the top right of Figure 4). This approach can effectively address common prediction and detection tasks in financial scenarios, introducing innovative solutions within the generative model framework. 
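
To illustrate the idea of a virtual exchange that matches generated order flow into a simulated trajectory, here is a deliberately tiny sketch: a toy limit order book that matches a stream of limit orders and records the mid-price after each one. It is a toy under stated assumptions, not the MarS matching engine; real exchanges add full price-time priority, cancellations, auctions, and more.

    import heapq
    from dataclasses import dataclass

    @dataclass
    class Order:
        side: str     # "buy" or "sell"
        price: float
        volume: int

    class ToyExchange:
        """Matches a stream of limit orders and tracks the mid-price (illustrative only)."""

        def __init__(self):
            self.bids: list[tuple[float, float, int]] = []  # (-price, price, volume), max-heap
            self.asks: list[tuple[float, float, int]] = []  # (price, price, volume), min-heap

        def submit(self, order: Order) -> None:
            opp = self.asks if order.side == "buy" else self.bids
            # Cross against the best opposite quote while prices overlap.
            while order.volume > 0 and opp:
                _, best_price, best_vol = opp[0]
                crosses = order.price >= best_price if order.side == "buy" else order.price <= best_price
                if not crosses:
                    break
                traded = min(order.volume, best_vol)
                order.volume -= traded
                key, _, _ = heapq.heappop(opp)
                if traded < best_vol:
                    heapq.heappush(opp, (key, best_price, best_vol - traded))
            if order.volume > 0:  # rest the remainder in the book
                key = -order.price if order.side == "buy" else order.price
                book = self.bids if order.side == "buy" else self.asks
                heapq.heappush(book, (key, order.price, order.volume))

        def mid_price(self) -> float | None:
            if self.bids and self.asks:
                return (self.bids[0][1] + self.asks[0][1]) / 2
            return None

    # Replay a (generated) order stream through the virtual exchange.
    exchange = ToyExchange()
    trajectory = []
    for order in [Order("sell", 100.2, 300), Order("buy", 100.0, 200),
                  Order("buy", 100.3, 150), Order("sell", 100.1, 400)]:
        exchange.submit(order)
        trajectory.append(exchange.mid_price())
    print(trajectory)  # simulated mid-price path, e.g. [None, 100.1, 100.1, 100.05]

In MarS, the orders fed into such an exchange come from the LMM rather than a fixed list, so the same matching step turns generated order flow into simulated market trajectories.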

Applications in prediction tasks

Prediction tasks, vital in finance, involve estimating future market metrics. Traditional models require modifications with any change in prediction targets. MarS addresses this by continuously generating future order flows from recent data, which are matched in a virtual exchange, allowing for the simulation of potential future market trajectories. This provides a significant advancement in forecasting capabilities.

Figure 5 demonstrates the use of MarS in forecasting stock-price movements, where its performance significantly outperforms traditional benchmark algorithms. Taking the Order Model (1.02B) as an example, its accuracy exceeds that of DeepLOB by approximately 13.5% (0.662/0.583 − 1) at a 1-minute horizon, widening to 22.4% (0.579/0.473 − 1) at a 5-minute horizon. This widening gap suggests that the Order Model maintains its predictive accuracy more effectively over longer horizons, highlighting its superior generalization compared to the baseline, especially as the prediction task becomes more challenging over extended timeframes. This provides an attractive solution for prediction tasks in financial markets, while also highlighting LMM's capability in modeling stock market dynamics.

Figure 5: Predicting stock price trends with MarS. Prediction accuracy over 1- to 5-minute horizons for DeepLOB, Order Model (0.22B), and Order Model (1.02B); accuracy decreases with horizon, with the Order Model (1.02B) highest and DeepLOB lowest.
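
The prediction recipe is simulation-based: sample many plausible order-flow continuations, replay each through the virtual exchange, and aggregate the resulting trajectories. The sketch below captures that shape with a hypothetical simulate_midprice_path callable standing in for the generate-and-match pipeline; it is not the published MarS interface.

    import numpy as np

    def predict_trend(simulate_midprice_path, n_rollouts=32, horizon_steps=300):
        """Monte Carlo estimate of P(price up) and the expected return over the horizon.

        `simulate_midprice_path` is a hypothetical callable that generates one order-flow
        continuation, replays it through a virtual exchange, and returns the resulting
        mid-price series (a 1-D array of length `horizon_steps`).
        """
        paths = np.stack([simulate_midprice_path(horizon_steps) for _ in range(n_rollouts)])
        returns = paths[:, -1] / paths[:, 0] - 1.0
        return {
            "p_up": float(np.mean(returns > 0)),         # fraction of rollouts ending higher
            "expected_return": float(np.mean(returns)),  # mean simulated return
            "return_std": float(np.std(returns)),        # dispersion = simulation-based uncertainty
        }

    # Usage with a stand-in random-walk sampler (purely illustrative):
    rng = np.random.default_rng(0)
    toy_sampler = lambda n: 100.0 * np.exp(np.cumsum(rng.normal(0, 1e-4, n)))
    print(predict_trend(toy_sampler))

Because the prediction target is derived from simulated trajectories rather than baked into the model, changing the target (direction, volatility, spread) only changes the aggregation step, not the generative model itself.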

Applications in detection tasks

For regulators, detecting systemic risks or market abuse is critical for market stability. LMM models typical market patterns, enabling the identification of anomalies by comparing real market trajectories with those generated by MarS. Figure 6 shows the differences in the distribution of the spread (the difference between the best buy and sell prices, which reflects asset liquidity) between simulated and real market trajectories during a confirmed malicious market manipulation incident. This comparison can uncover subtle deviations indicative of unusual activities, offering regulators effective tools for monitoring market integrity.

Figure 6: Spread distribution similarity between replayed (real) and simulated market trajectories around a market manipulation incident: 0.870 pre-manipulation, 0.835 during the manipulation period, and 0.873 post-manipulation
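
A minimal version of that comparison is sketched below: compute the spread distribution from a window of real (replayed) data and from simulations of the same window, then score their similarity. The histogram-intersection metric and the synthetic sample data are assumptions; the post does not state which similarity statistic was used.

    import numpy as np

    def spread_distribution_similarity(real_spreads, simulated_spreads, n_bins=50):
        """Histogram-intersection similarity between two spread distributions (1.0 = identical)."""
        lo = min(real_spreads.min(), simulated_spreads.min())
        hi = max(real_spreads.max(), simulated_spreads.max())
        bins = np.linspace(lo, hi, n_bins + 1)
        p, _ = np.histogram(real_spreads, bins=bins)
        q, _ = np.histogram(simulated_spreads, bins=bins)
        p = p / p.sum()
        q = q / q.sum()
        return float(np.minimum(p, q).sum())

    # Flag a window as anomalous when real behavior diverges from the simulated "normal" baseline.
    rng = np.random.default_rng(1)
    normal_real = rng.gamma(2.0, 0.005, 5_000)        # stand-in spread samples (in price units)
    normal_sim = rng.gamma(2.0, 0.005, 5_000)
    manipulated_real = rng.gamma(2.0, 0.009, 5_000)   # wider spreads during manipulation
    print("normal window:      ", spread_distribution_similarity(normal_real, normal_sim))
    print("manipulation window:", spread_distribution_similarity(manipulated_real, normal_sim))

A sustained drop in similarity for a given window, relative to its neighbors, is the kind of deviation a regulator could use as a trigger for closer inspection.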

Defining new FinTech scenarios 

Generative models can create tailored content from simple descriptions. In MarS, a mechanism generates specific order flows from natural language descriptions of market conditions. To address extreme conditions, researchers developed a control signal system using a hierarchical diffusion model to generate high-fidelity signals during rare events, such as stock market crashes and circuit breakers. This capability transforms broad descriptions into precise order flow controls. 

By integrating controlled order generation with real-time feedback, MarS creates a unified framework for prediction and detection tasks, redefining financial research, applications, and market understanding. Key applications include “What If” analyses and training environments for reinforcement learning algorithms in realistic market conditions. 

“What If” analysis for financial research

The question “What would happen if different sizes of trading orders were executed under different market conditions?” is crucial for understanding market behavior. Traditional methods, relying on real orders, experience, and assumptions, are costly and slow. Generative models provide a breakthrough solution.

Figure 7 illustrates how MarS can simulate market impact: the top left shows how buy orders affect asset price trajectories, while the top right presents market impact curves of different strategies, matching traditional patterns. Researchers also used MarS to generate large-scale simulated data, constructing a market impact model using ordinary differential equations (ODEs). The bottom left of Figure 7 shows the derived impact formula, and the bottom right demonstrates its interpretability. These advancements highlight MarS’s potential to enhance “What If” research through deep market modeling.

Figure 7: Sample research results for the market impact of orders using MarS: simulated vs. replayed mid-price paths (top left), market impact over time for different agent types (top right), auto-correlation of market impact decay for the learned ODE, base ODE, and synthetic sequences (bottom left), and interaction weights of the learned ODE across features and log-transformed time (bottom right)
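
In code, a "What If" experiment reduces to running the simulator twice per random seed, once as a plain replay and once with an extra order schedule injected, and averaging the difference between the two price paths. The run_simulation interface and the toy stand-in below are hypothetical; they show the shape of the experiment, not the MarS API.

    import numpy as np

    def market_impact_curve(run_simulation, base_orders, injected_orders, n_seeds=16):
        """Average per-step mid-price difference between simulations with and without
        an injected buy-order schedule (a simple 'What If' experiment)."""
        diffs = []
        for seed in range(n_seeds):
            # run_simulation is assumed to merge the combined order schedule by timestamp.
            baseline = run_simulation(base_orders, seed=seed)                    # replay-only path
            impacted = run_simulation(base_orders + injected_orders, seed=seed)  # path with our orders
            diffs.append(np.asarray(impacted) - np.asarray(baseline))
        return np.mean(diffs, axis=0)  # impact in price units at each simulation step

    # Stand-in simulator: random walk plus a drift proportional to injected volume (illustrative).
    def toy_run_simulation(orders, seed=0, steps=100):
        rng = np.random.default_rng(seed)
        injected_volume = sum(o.get("injected", 0) for o in orders)
        drift = 1e-6 * injected_volume
        return 100.0 + np.cumsum(rng.normal(drift, 0.01, steps))

    base = [{"injected": 0}] * 50
    injected = [{"injected": 1_000}] * 5
    curve = market_impact_curve(toy_run_simulation, base, injected)
    print("impact after 100 steps:", curve[-1])

Sweeping the injected order size and timing over such runs is what produces impact curves like those in the top right of Figure 7.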

Training environments for reinforcement learning in financial markets

Reinforcement learning (RL) algorithms require controlled environments for testing and optimization. In financial markets, an agent's actions manifest as changes in order flow, which in turn move the market; if a simulation cannot reflect these impacts accurately, an RL algorithm trained in it may fail in real-world scenarios.

MarS provides high-fidelity generation and real-time feedback, creating a comprehensive environment for RL in finance. Figure 8 shows the training process of trading agents, highlighting significant improvements in performance over time and demonstrating MarS’s effectiveness as an RL training ground. 

Figure 8: Performance (price advantage vs. training step) of reinforcement learning trading agents trained in MarS. During training, the agent’s performance improved significantly, showcasing MarS’s ability to aid in developing robust reinforcement learning algorithms for real market conditions.
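
One common way to wire a simulator like MarS into RL is to expose it through a Gymnasium-style environment so off-the-shelf agents can interact with it. The sketch below does that around a stand-in price process; the observation, action, and reward definitions (an order-execution task) and the toy dynamics are assumptions, not the setup used by the researchers.

    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class ToyMarketEnv(gym.Env):
        """Gymnasium-style wrapper around a (hypothetical) market simulator.

        Action: fraction of the remaining parent order to execute this step.
        Observation: [remaining inventory fraction, return vs. arrival, time remaining].
        Reward: negative execution cost vs. the arrival price (illustrative).
        """
        def __init__(self, horizon=50, parent_volume=10_000):
            super().__init__()
            self.horizon, self.parent_volume = horizon, parent_volume
            self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
            self.observation_space = spaces.Box(-np.inf, np.inf, shape=(3,), dtype=np.float32)

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.t, self.remaining = 0, float(self.parent_volume)
            self.price, self.arrival_price = 100.0, 100.0
            return self._obs(), {}

        def step(self, action):
            frac = float(np.clip(action[0], 0.0, 1.0))
            executed = frac * self.remaining
            # Stand-in dynamics: noise plus impact proportional to executed volume.
            self.price += self.np_random.normal(0, 0.02) + 1e-5 * executed
            self.remaining -= executed
            self.t += 1
            reward = -executed * (self.price - self.arrival_price)  # cost vs. arrival price
            terminated = self.remaining <= 1e-6
            truncated = self.t >= self.horizon
            return self._obs(), reward, terminated, truncated, {}

        def _obs(self):
            return np.array([self.remaining / self.parent_volume,
                             self.price / self.arrival_price - 1.0,
                             1.0 - self.t / self.horizon], dtype=np.float32)

    # Random-policy smoke test.
    env = ToyMarketEnv()
    obs, _ = env.reset(seed=0)
    total = 0.0
    for _ in range(env.horizon):
        obs, reward, terminated, truncated, _ = env.step(env.action_space.sample())
        total += reward
        if terminated or truncated:
            break
    print("episode reward:", total)

Replacing the stand-in dynamics with LMM-generated order flow matched in a virtual exchange is what would make the environment reflect the agent's own market impact, which is the property the researchers highlight.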

Disclaimer: The research mentioned in this article, conducted by Microsoft Research, focuses on scientific exploration, aiming to advance knowledge and provide theoretical and technological support for research and applications in the financial field. All studies adhere to Microsoft’s responsible AI guidelines, ensuring principles such as fairness, inclusiveness, reliability and safety, transparency, privacy, and accountability are maintained. The technologies and methods discussed are still under research and development, not yet forming any commercial products or services, nor constituting any financial solutions. Readers are advised to consult certified financial professionals before making any financial decisions. 

The post MarS: A unified financial market simulation engine in the era of generative foundation models appeared first on Microsoft Research.
