Accelerate your ML lifecycle using the new and improved Amazon SageMaker Python SDK – Part 2: ModelBuilder

In Part 1 of this series, we introduced the newly launched ModelTrainer class on the Amazon SageMaker Python SDK and its benefits, and showed you how to fine-tune a Meta Llama 3.1 8B model on a custom dataset. In this post, we look at the enhancements to the ModelBuilder class, which lets you seamlessly deploy a model from ModelTrainer to a SageMaker endpoint, and provides a single interface for multiple deployment configurations.

In November 2023, we launched the ModelBuilder class (see Package and deploy models faster with new tools and guided workflows in Amazon SageMaker and Package and deploy classical ML and LLMs easily with Amazon SageMaker, part 1: PySDK Improvements), which reduced the complexity of the initial setup for creating a SageMaker endpoint, such as creating an endpoint configuration, choosing the container, and handling serialization and deserialization, and helps you create a deployable model in a single step. The recent update enhances the usability of the ModelBuilder class for a wide range of use cases, particularly in the rapidly evolving field of generative AI. In this post, we deep dive into the enhancements made to the ModelBuilder class, and show you how to seamlessly deploy the fine-tuned model from Part 1 to a SageMaker endpoint.

Improvements to the ModelBuilder class

We’ve made the following usability improvements to the ModelBuilder class:

  • Seamless transition from training to inference – ModelBuilder now integrates directly with SageMaker training interfaces to make sure that the correct file path to the latest trained model artifact is automatically computed, simplifying the workflow from model training to deployment.
  • Unified inference interface – Previously, the SageMaker SDK offered separate interfaces and workflows for different types of inference, such as real-time, batch, serverless, and asynchronous inference. To simplify the model deployment process and provide a consistent experience, we have enhanced ModelBuilder to serve as a unified interface that supports multiple inference types.
  • Ease of development, testing, and production handoff – ModelBuilder now supports local mode testing, so you can debug and test your processing and inference scripts locally, even without a container, for faster iteration. It also adds a new function that returns the latest container image for a given framework, so you don’t have to update your code each time a new LMI release comes out.
  • Customizable inference preprocessing and postprocessing – ModelBuilder now allows you to customize preprocessing and postprocessing steps for inference. By enabling scripts to filter content and remove personally identifiable information (PII), this integration streamlines the deployment process, encapsulating the necessary steps within the model configuration for better management and deployment of models with specific inference requirements.
  • Benchmarking support – The new benchmarking support in ModelBuilder empowers you to evaluate deployment options—like endpoints and containers—based on key performance metrics such as latency and cost. With the introduction of a Benchmarking API, you can test scenarios and make informed decisions, optimizing your models for peak performance before production. This enhances efficiency and provides cost-effective deployments.

In the following sections, we discuss these improvements in more detail and demonstrate how to customize, test, and deploy your model.

Seamless deployment from ModelTrainer class

ModelBuilder integrates seamlessly with the ModelTrainer class; you can simply pass the ModelTrainer object that was used for training the model directly to ModelBuilder in the model parameter. In addition to the ModelTrainer, ModelBuilder also supports the Estimator class and the result of the SageMaker Core TrainingJob.create() function, and automatically parses the model artifacts to create a SageMaker Model object. With resource chaining, you can build and deploy the model as shown in the following example. If you followed Part 1 of this series to fine-tune a Meta Llama 3.1 8B model, you can pass the model_trainer object as follows:

from sagemaker.serve.builder.model_builder import ModelBuilder

# set the container URI for the inference image
image_uri = "763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-tgi-inference:2.3.0-tgi2.2.0-gpu-py310-cu121-ubuntu22.04-v2.0"

model_builder = ModelBuilder(
    model=model_trainer,  # ModelTrainer object passed to ModelBuilder directly
    role_arn=role,
    image_uri=image_uri,
    inference_spec=inf_spec,  # custom InferenceSpec, defined in the next section
    instance_type="ml.g5.2xlarge"
)

# build and deploy the model
model_builder.build().deploy()

Customize the model using InferenceSpec

The InferenceSpec class allows you to customize the model by providing custom logic to load and invoke the model, and specify any preprocessing logic or postprocessing logic as needed. For SageMaker endpoints, preprocessing and postprocessing scripts are often used as part of the inference pipeline to handle tasks that are required before and after the data is sent to the model for predictions, especially in the case of complex workflows or non-standard models. The following example shows how you can specify the custom logic using InferenceSpec:

import json

from sagemaker.serve.spec.inference_spec import InferenceSpec

class CustomerInferenceSpec(InferenceSpec):
    def load(self, model_dir):
        from transformers import AutoModel
        # HF_TEI_MODEL is the Hugging Face model ID of the model to load
        return AutoModel.from_pretrained(HF_TEI_MODEL, trust_remote_code=True)

    def invoke(self, x, model):
        return model.encode(x)

    def preprocess(self, input_data):
        return json.loads(input_data)["inputs"]

    def postprocess(self, predictions):
        assert predictions is not None
        return predictions

Test using local and in-process modes

Deploying a trained model to a SageMaker endpoint involves creating a SageMaker model and configuring the endpoint. This includes the inference script, any serialization or deserialization required, the model artifact location in Amazon Simple Storage Service (Amazon S3), the container image URI, the right instance type and count, and more. Machine learning (ML) practitioners need to iterate over these settings before finally deploying the endpoint to SageMaker for inference. ModelBuilder offers two modes for quick prototyping:

  • In-process mode – Inference runs directly within the same Python process. This is highly useful for quickly testing the inference logic provided through InferenceSpec and provides immediate feedback during experimentation.
  • Local mode – The model is deployed and run as a local container. This is achieved by setting the mode to LOCAL_CONTAINER when you build the model. This is helpful for mimicking the same environment as the SageMaker endpoint. Refer to the following notebook for an example.

The following code is an example of running inference in process mode, with a custom InferenceSpec:

from sagemaker.serve.spec.inference_spec import InferenceSpec
from transformers import pipeline
from sagemaker.serve import Mode
from sagemaker.serve.builder.schema_builder import SchemaBuilder
from sagemaker.serve.builder.model_builder import ModelBuilder

value: str = "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:"
schema = SchemaBuilder(value,
            {"generated_text": "Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron: Hi, Daniel. I was just thinking about how magnificent giraffes are and how they should be worshiped by all.\nDaniel: You and I think alike, Girafatron. I think all animals should be worshipped! But I guess that could be a bit impractical...\nGirafatron: That's true. But the giraffe is just such an amazing creature and should always be respected!\nDaniel: Yes! And the way you go on about giraffes, I could tell you really love them.\nGirafatron: I'm obsessed with them, and I'm glad to hear you noticed!\nDaniel: I'"})

# custom inference spec with hugging face pipeline
class MyInferenceSpec(InferenceSpec):
    def load(self, model_dir: str):
        ...
    def invoke(self, input, model):
        ...
    def preprocess(self, input_data):
        ...
    def postprocess(self, predictions):
        ...
        
inf_spec = MyInferenceSpec()

# Build ModelBuilder object in IN_PROCESS mode
builder = ModelBuilder(inference_spec=inf_spec,
                       mode=Mode.IN_PROCESS,
                       schema_builder=schema
                      )
                      
# Build and deploy the model
model = builder.build()
predictor=model.deploy()

# make predictions
predictor.predict("How are you today?")

As a next step, you can test the model in local container mode, as shown in the following code, by adding the image_uri. When you include the image_uri, you also need to provide the model_server argument.

image_uri = '763104351884.dkr.ecr.us-west-2.amazonaws.com/huggingface-pytorch-inference:2.0.0-transformers4.28.1-gpu-py310-cu118-ubuntu20.04'

builder = ModelBuilder(inference_spec=inf_spec,
                       mode=Mode.LOCAL_CONTAINER,  # you can change it to Mode.SAGEMAKER_ENDPOINT for endpoint deployment
                       schema_builder=schema,
                       image_uri=image_uri,
                       model_server=ModelServer.TORCHSERVE
                      )

model = builder.build()                      
predictor = model.deploy()

predictor.predict("How are you today?")

Deploy the model

When testing is complete, you can deploy the model to a real-time endpoint for predictions by updating the mode to Mode.SAGEMAKER_ENDPOINT and providing an instance type and count:

sm_predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    mode=Mode.SAGEMAKER_ENDPOINT,
    role=execution_role,
)

sm_predictor.predict("How is the weather?")

In addition to real-time inference, SageMaker supports serverless inference, asynchronous inference, and batch inference modes for deployment. You can also use InferenceComponents to abstract your models and assign CPU, GPU, accelerators, and scaling policies per model. To learn more, see Reduce model deployment costs by 50% on average using the latest features of Amazon SageMaker.

After you have the ModelBuilder object, you can deploy to any of these options simply by adding the corresponding inference configurations when deploying the model. By default, if the mode is not provided, the model is deployed to a real-time endpoint. The following are examples of other configurations:

  • Deploy to a serverless endpoint:

from sagemaker.serverless.serverless_inference_config import ServerlessInferenceConfig

predictor = model_builder.deploy(
    endpoint_name="serverless-endpoint",
    inference_config=ServerlessInferenceConfig(memory_size_in_mb=2048))

  • Deploy to an asynchronous endpoint:

from sagemaker.async_inference.async_inference_config import AsyncInferenceConfig
from sagemaker.s3_utils import s3_path_join

predictor = model_builder.deploy(
    endpoint_name="async-endpoint",
    inference_config=AsyncInferenceConfig(
        output_path=s3_path_join("s3://", bucket, "async_inference/output")))

  • Run a batch transform job:

from sagemaker.batch_inference.batch_transform_inference_config import BatchTransformInferenceConfig

transformer = model_builder.deploy(
    endpoint_name="batch-transform-job",
    inference_config=BatchTransformInferenceConfig(
        instance_count=1,
        instance_type='ml.m5.large',
        output_path=s3_path_join("s3://", bucket, "batch_inference/output"),
        test_data_s3_path=s3_test_path
    ))
print(transformer)

  • Deploy a multi-model endpoint using InferenceComponent:
from sagemaker.compute_resource_requirements.resource_requirements import ResourceRequirements

predictor = model_builder.deploy(
    endpoint_name="multi-model-endpoint",
    inference_config=ResourceRequirements(
        requests={
            "num_cpus": 0.5,
            "memory": 512,
            "copies": 2,
        },
        limits={},
))

Clean up

If you created any endpoints while following this post, you will incur charges while they are up and running. As a best practice, delete any endpoints that are no longer required, either using the AWS Management Console or using the following code:

predictor.delete_model() 
predictor.delete_endpoint()

Conclusion

In this two-part series, we introduced the ModelTrainer and the ModelBuilder enhancements in the SageMaker Python SDK. Both classes aim to reduce the complexity and cognitive overhead for data scientists, providing you with a straightforward and intuitive interface to train and deploy models, both locally on your SageMaker notebooks and to remote SageMaker endpoints.

We encourage you to try out the SageMaker SDK enhancements (SageMaker Core, ModelTrainer, and ModelBuilder) by referring to the SDK documentation and sample notebooks on the GitHub repo, and let us know your feedback in the comments!


About the Authors

Durga Sury is a Senior Solutions Architect on the Amazon SageMaker team. Over the past 5 years, she has worked with multiple enterprise customers to set up a secure, scalable AI/ML platform built on SageMaker.

Shweta Singh is a Senior Product Manager in the Amazon SageMaker Machine Learning (ML) platform team at AWS, leading SageMaker Python SDK. She has worked in several product roles in Amazon for over 5 years. She has a Bachelor of Science degree in Computer Engineering and a Masters of Science in Financial Engineering, both from New York University.

Read More

Accelerate your ML lifecycle using the new and improved Amazon SageMaker Python SDK – Part 1: ModelTrainer

Amazon SageMaker has redesigned its Python SDK to provide a unified object-oriented interface that makes it straightforward to interact with SageMaker services. The new SDK is designed with a tiered user experience in mind, where the new lower-level SDK (SageMaker Core) provides access to the full breadth of SageMaker features and configurations, allowing for greater flexibility and control for ML engineers. The higher-level abstracted layer is designed for data scientists with limited AWS expertise, offering a simplified interface that hides complex infrastructure details.

In this two-part series, we introduce the abstracted layer of the SageMaker Python SDK that allows you to train and deploy machine learning (ML) models by using the new ModelTrainer and the improved ModelBuilder classes.

In this post, we focus on the ModelTrainer class for simplifying the training experience. The ModelTrainer class provides significant improvements over the current Estimator class, which are discussed in detail in this post. We show you how to use the ModelTrainer class to train your ML models, which includes executing distributed training using a custom script or container. In Part 2, we show you how to build a model and deploy to a SageMaker endpoint using the improved ModelBuilder class.

Benefits of the ModelTrainer class

The new ModelTrainer class has been designed to address usability challenges associated with the Estimator class. Moving forward, ModelTrainer will be the preferred approach for model training, bringing significant enhancements that greatly improve the user experience. This evolution marks a step towards achieving a best-in-class developer experience for model training. The following are the key benefits:

  • Improved intuitiveness – The ModelTrainer class reduces complexity by consolidating configurations into just a few core parameters. This streamlining minimizes cognitive overload, allowing users to focus on model training rather than configuration intricacies. Additionally, it employs intuitive config classes for straightforward platform interactions.
  • Simplified script mode and BYOC – Transitioning from local development to cloud training is now seamless. The ModelTrainer automatically maps source code, data paths, and parameter specifications to the remote execution environment, eliminating the need for special handshakes or complex setup processes.
  • Simplified distributed training – The ModelTrainer class provides enhanced flexibility for users to specify custom commands and distributed training strategies, allowing you to directly provide the exact command you want to run in your container through the command parameter in the SourceCode class. This approach decouples distributed training strategies from the training toolkit and framework-specific estimators.
  • Improved hyperparameter contracts – The ModelTrainer class passes the training job’s hyperparameters as a single environment variable, allowing you to load them using a single SM_HPS variable.

To further explain each of these benefits, we demonstrate with examples in the following sections, and finally show you how to set up and run distributed training for the Meta Llama 3.1 8B model using the new ModelTrainer class.

Launch a training job using the ModelTrainer class

The ModelTrainer class simplifies the experience by letting you customize the training job, including providing a custom script, directly providing a command to run the training job, supporting local mode, and much more. However, you can spin up a SageMaker training job in script mode by providing minimal parameters—the SourceCode and the training image URI.

The following example illustrates how you can launch a training job with your own custom script by providing just the script and the training image URI (in this case, PyTorch), and an optional requirements file. Additional parameters such as the instance type and instance size are automatically set by the SDK to preset defaults, and parameters such as the AWS Identity and Access Management (IAM) role and SageMaker session are automatically detected from the current session and user’s credentials. Admins and users can also overwrite the defaults using the SDK defaults configuration file. For the detailed list of pre-set values, refer to the SDK documentation.

from sagemaker.modules.train import ModelTrainer
from sagemaker.modules.configs import SourceCode, InputData

# image URI for the training job
pytorch_image = "763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training:2.0.0-cpu-py310"
# you can find all available images here
# https://docs.aws.amazon.com/sagemaker/latest/dg-ecr-paths/sagemaker-algo-docker-registry-paths.html

# define the script to be run
source_code = SourceCode(
    source_dir="basic-script-mode",
    requirements="requirements.txt",
    entry_script="custom_script.py",
)

# define the ModelTrainer
model_trainer = ModelTrainer(
    training_image=pytorch_image,
    source_code=source_code,
    base_job_name="script-mode",
)

# pass the input data
input_data = InputData(
    channel_name="train",
    data_source=training_input_path,  #s3 path where training data is stored
)

# start the training job
model_trainer.train(input_data_config=[input_data], wait=False)

With purpose-built configurations, you can now reuse these objects to create multiple training jobs with different hyperparameters, for example, without having to redefine all the parameters, as shown in the following example.
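The following is a minimal sketch of this reuse pattern. It builds on the pytorch_image, source_code, and input_data objects defined earlier; the hyperparameter names and values are illustrative only.

# Reuse the same SourceCode and training image to launch jobs that differ
# only in their hyperparameters (illustrative values)
for lr in [1e-5, 3e-5]:
    trainer = ModelTrainer(
        training_image=pytorch_image,
        source_code=source_code,
        base_job_name=f"script-mode-lr-{lr}",
        hyperparameters={"learning_rate": lr, "epochs": 2},
    )
    trainer.train(input_data_config=[input_data], wait=False)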

Run the job locally for experimentation

To run the preceding training job locally, you can simply set the training_mode parameter as shown in the following code:

from sagemaker.modules.train.model_trainer import Mode

...
model_trainer = ModelTrainer(
    training_image=pytorch_image,
    source_code=source_code,
    base_job_name="script-mode-local",
    training_mode=Mode.LOCAL_CONTAINER,
)
model_trainer.train()

The training job runs locally because training_mode is set to Mode.LOCAL_CONTAINER. If not explicitly set, the ModelTrainer runs a remote SageMaker training job by default. This behavior can also be enforced by changing the value to Mode.SAGEMAKER_TRAINING_JOB. For a full list of the available configs, including compute and networking, refer to the SDK documentation.

Read hyperparameters in your custom script

The ModelTrainer supports multiple ways to read the hyperparameters that are passed to a training job. In addition to the existing support to read the hyperparameters as command line arguments in your custom script, ModelTrainer also supports reading the hyperparameters as individual environment variables, prefixed with SM_HP_<hyperparameter-key>, or as a single environment variable dictionary, SM_HPS.

Suppose the following hyperparameters are passed to the training job:

hyperparams = {
    "learning_rate": 1e-5,
    "epochs": 2,
}

model_trainer = ModelTrainer(
    ...
    hyperparameters=hyperparams,
    ...
)

You have the following options:

  • Option 1 – Load the hyperparameters into a single JSON dictionary using the SM_HPS environment variable in your custom script:
import json
import os

def main():
    hyperparams = json.loads(os.environ["SM_HPS"])
    learning_rate = hyperparams.get("learning_rate")
    epochs = hyperparams.get("epochs", 1)
    ...
  • Option 2 – Read the hyperparameters as individual environment variables, prefixed by SM_HP_ as shown in the following code (you need to explicitly specify the correct input type for these variables):
import os

def main():
    learning_rate = float(os.environ.get("SM_HP_LEARNING_RATE", 3e-5))
    epochs = int(os.environ.get("SM_HP_EPOCHS", 1))
    ...
  • Option 3 – Read the hyperparameters as command line arguments using argparse:
import argparse

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning_rate", type=float, default=3e-5)
    parser.add_argument("--epochs", type=int, default=1)

    args = parser.parse_args()

    learning_rate = args.learning_rate
    epochs = args.epochs

Run distributed training jobs

SageMaker supports distributed training to support training for deep learning tasks such as natural language processing and computer vision, to run secure and scalable data parallel and model parallel jobs. This is usually achieved by providing the right set of parameters when using an Estimator. For example, to use torchrun, you would define the distribution parameter in the PyTorch Estimator and set it to "torch_distributed": {"enabled": True}.
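For reference, the following is a minimal sketch of that Estimator-based setup; the entry point script, IAM role ARN, and data path are placeholders.

from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="fine_tune.py",                               # placeholder training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",      # placeholder IAM role
    image_uri="763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training:2.2.0-gpu-py310",
    instance_count=1,
    instance_type="ml.g5.12xlarge",
    distribution={"torch_distributed": {"enabled": True}},    # launches the script with torchrun
)
estimator.fit({"dataset": "s3://your-bucket/your-prefix"}, wait=False)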

The ModelTrainer class provides enhanced flexibility for users to specify custom commands directly through the command parameter in the SourceCode class, and supports torchrun, torchrun smp, and the MPI strategies. This capability is particularly useful when you need to launch a job with a custom launcher command that is not supported by the training toolkit.

In the following example, we show how to fine-tune the Meta Llama 3.1 8B model using the default launcher script with Torchrun on a custom dataset that’s preprocessed and saved in an Amazon Simple Storage Service (Amazon S3) location:

from sagemaker.modules.train import ModelTrainer
from sagemaker.modules.distributed import Torchrun
from sagemaker.modules.configs import Compute, SourceCode, InputData

# provide  image URI - update the URI if you're in a different region
pytorch_image = "763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training:2.2.0-gpu-py310"

# Define the source code configuration for the distributed training job
source_code = SourceCode(
    source_dir="distributed-training-scripts",    
    requirements="requirements.txt",  
    entry_script="fine_tune.py",
)

torchrun = Torchrun()

hyperparameters = {
    ...
}

# Compute configuration for the training job
compute = Compute(
    instance_count=1,
    instance_type="ml.g5.12xlarge",
    volume_size_in_gb=96,
    keep_alive_period_in_seconds=3600,
)


# Initialize the ModelTrainer with the specified configurations
model_trainer = ModelTrainer(
    training_image=pytorch_image,  
    source_code=source_code,
    compute=compute,
    distributed_runner=torchrun,
    hyperparameters=hyperparameters,
)

# pass the input data
input_data = InputData(
    channel_name="dataset",
    data_source="s3://your-bucket/your-prefix",  # this is the s3 path where processed data is stored
)

# Start the training job
model_trainer.train(input_data_config=[input_data], wait=False)

If you wanted to customize your torchrun launcher script, you can also directly provide the commands using the command parameter:

# Define the source code configuration for the distributed training job
source_code = SourceCode(
    source_dir="distributed-training-scripts",    
    requirements="requirements.txt",    
    # Custom command for distributed training launcher script
    command="torchrun --nnodes 1 
            --nproc_per_node 4 
            --master_addr algo-1 
            --master_port 7777 
            fine_tune_llama.py"
)


# Initialize the ModelTrainer with the specified configurations
model_trainer = ModelTrainer(
    training_image=pytorch_image,  
    source_code=source_code,
    compute=compute,
)

# Start the training job
model_trainer.train(...)

For more examples and end-to-end ML workflows using the SageMaker ModelTrainer, refer to the GitHub repo.

Conclusion

The newly launched SageMaker ModelTrainer class simplifies the user experience by reducing the number of parameters, introducing intuitive configurations, and supporting complex setups like bringing your own container and running distributed training. Data scientists can also seamlessly transition from local training to remote training and training on multiple nodes using the ModelTrainer.

We encourage you to try out the ModelTrainer class by referring to the SDK documentation and sample notebooks on the GitHub repo. The ModelTrainer class is available from the SageMaker SDK v2.x onwards, at no additional charge. In Part 2 of this series, we show you how to build a model and deploy to a SageMaker endpoint using the improved ModelBuilder class.


About the Authors

Durga Sury is a Senior Solutions Architect on the Amazon SageMaker team. Over the past 5 years, she has worked with multiple enterprise customers to set up a secure, scalable AI/ML platform built on SageMaker.

Shweta Singh is a Senior Product Manager in the Amazon SageMaker Machine Learning (ML) platform team at AWS, leading SageMaker Python SDK. She has worked in several product roles in Amazon for over 5 years. She has a Bachelor of Science degree in Computer Engineering and a Masters of Science in Financial Engineering, both from New York University.

Read More

Amazon Q Apps supports customization and governance of generative AI-powered apps

We are excited to announce new features for Amazon Q Apps, a capability within Amazon Q Business that allows you to create generative AI-powered apps based on your organization’s data. These features enable the creation of more powerful apps while giving you more governance control. They enhance app customization options that let business users tailor solutions to their specific individual or organizational requirements. We have introduced new governance features for administrators to endorse user-created apps with app verification, and to organize app libraries with customizable label categories that reflect their organizations. App creators can now share apps privately and build data collection apps that can collate inputs across multiple users. These additions are designed to improve how companies use generative AI in their daily operations by focusing on admin controls and capabilities that unlock new use cases.

In this post, we examine how these features enhance the capabilities of Amazon Q Apps. We explore the new customization options, detailing how these advancements make Amazon Q Apps more accessible and applicable to a wider range of enterprise customers. We focus on key features such as custom labels, verified apps, private sharing, and data collection apps (preview).

Endorse quality apps and customize labels in the app library

To help with discoverability of published Amazon Q Apps and address questions about quality of user-created apps, we have launched verified apps. Verified apps are endorsed by admins, indicating they have undergone approval based on your company’s standards. Admins can endorse published Amazon Q Apps by updating their status from Default to Verified directly on the Amazon Q Business console. Admins can work closely with their business stakeholders to determine the criteria for verifying apps, based on their organization’s specific needs and policies. This admin-led labeling capability is a reactive approach to endorsing published apps, without gating the publishing process for app creators.

When users access the library, they will see a distinct blue checkmark icon on any apps that have been marked as Verified by admins (as shown in the following screenshot). Additionally, verified apps are automatically surfaced to the top of the app list within each category, making them easily discoverable. To learn more about verifying apps, refer to Understanding and managing Verified Amazon Q Apps.

Verified apps in Amazon Q Apps library

The next feature we discuss is custom labels. Admins can create custom category labels for app users to organize and classify apps in the library to reflect their team functions or organizational structure. This feature enables admins to create and manage these labels on the Amazon Q Business console, and end-users can use them at app creation and to discover relevant apps in the library. Admins can update the category labels at any time to tailor towards specific business needs depending on their use cases. For example, admins that manage Amazon Q Business app environments for marketing organizations might add labels like Product Marketing, PR, Ads, or Sales solely for the users on the marketing team to use (see the following screenshot).

Custom labels in Amazon Q Business console for Amazon Q Apps

Users on the marketing team who create apps can use the custom labels to slot their app in the right category, which will help other users discover apps in the library based on their focus area (as shown in the following screenshot). To learn more about custom labels, see Custom labels for Amazon Q Apps.

Custom labels in Amazon Q Apps library

Share your apps with select users

App creators can now use advanced sharing options to create more granular controls over apps and facilitate collaboration within their organizations. With private sharing, you have the option to share an app with select individuals or with all app users (which was previously possible). Sharing of any extent will still display the app in the library, but with private sharing, it will only be visible to app users with whom it has been shared. This means the library continues to be the place where users discover apps that they have access to. This feature unlocks the ability to enable apps only to the intended audience and helps reduce “noise” in the library from apps that aren’t necessarily relevant for all users. App creators have the ability to test updates before they are ready to publish changes, helping make sure app iterations and refinements aren’t shared before they are ready to widely publish the revised version.

To share an app with specific users, creators can add each user using their full email address (see the following screenshot). Users are only added after the email address match is found, making sure creators don’t unknowingly give access to someone who doesn’t have access to that Amazon Q Business app environment. To learn more about private sharing, see Sharing Amazon Q Apps.

Private sharing in Amazon Q Apps

Unlock new use cases with data collection

The last feature we share in this post is data collection apps (preview), a new capability that allows you to record inputs provided by other app users, resulting in a new genre of Amazon Q Apps such as team surveys and project retrospectives. This enhancement enables you to collate data across multiple users within your organization, further enhancing the collaborative quality of Amazon Q Apps for various business needs. These apps can further use generative AI to analyze the collected data, identify common themes, summarize ideas, and provide actionable insights.

After publishing a data collection app to the library, creators can share the unique link to invite their colleagues to participate. You must share the unique link to get submissions for your specific data collection. When app users open the data collection app from the library, it triggers a fresh data collection with its own unique shareable link, for which they are the designated owner. As the owner of a data collection, you can start new rounds and manage controls to start and stop accepting new data submissions, as well as reveal or hide the collected data. To learn more about data collection apps, see Data collection in Amazon Q Apps.

Amazon Q Apps data collection app

Conclusion

In this post, we discussed how these new features for Amazon Q Apps in Amazon Q Business make generative AI more customizable and governable for enterprise users. From custom labels and verified apps to private sharing and data collection capabilities, these innovations enable organizations to create, manage, and share AI-powered apps that align with their specific business needs while maintaining appropriate controls.

For more information, see Creating purpose-built Amazon Q Apps.


About the Author

Tiffany Myers is a Product Manager at AWS, where she leads bringing in new capabilities while maintaining the simplicity of Amazon Q Business and Amazon Q Apps, drawing inspiration from the adaptive intelligence of amphibians in nature to help customers transform and evolve their businesses through generative AI.

Read More

Answer questions from tables embedded in documents with Amazon Q Business

Amazon Q Business is a generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. A large portion of that information is found in text narratives stored in various document formats such as PDFs, Word files, and HTML pages. Some information is also stored in tables (such as price or product specification tables) embedded in those same document types, CSVs, or spreadsheets. Although Amazon Q Business can provide accurate answers from narrative text, getting answers from these tables requires special handling of more structured information.

On November 21, 2024, Amazon Q Business launched support for tabular search, which you can use to extract answers from tables embedded in documents ingested in Amazon Q Business. Tabular search is a built-in feature in Amazon Q Business that works seamlessly across many domains, with no setup required from admin or end users.

In this post, we ingest different types of documents that have tables and show you how Amazon Q Business responds to questions related to the data in the tables.

Prerequisites

To follow along with this walkthrough, you need to have the following prerequisites in place:

  • An AWS Account where you can follow the instructions in this post.
  • At least one Amazon Q Business user is required. For information, refer to Amazon Q Business pricing.
  • Cross-Region inference enabled on the Amazon Q Business application.
  • Amazon Q Business applications created on or after November 21, 2024, automatically benefit from the new capability. If your application was created before this date, you need to reingest your content to update its index.

Overview of tabular search

Tabular search extends Amazon Q Business capabilities to find answers beyond text paragraphs, analyzing tables embedded in enterprise documents so you can get answers to a wide range of queries, including factual lookup from tables.

With tabular search in Amazon Q Business, you can ask questions such as, “what’s the credit card with the lowest APR and no annual fees?” or “which credit cards offer travel insurance?” where the answers may be found in a product-comparison table, inside a marketing PDF stored in an internal repository, or on a website.

This feature supports a wide range of file formats, including PDF, Word documents, CSV files, Excel spreadsheets, HTML, and SmartSheet (via SmartSheet connector). Notably, tabular search can also extract data from tables represented as images within PDFs and retrieve information from single or multiple cells. Additionally, it can perform aggregations on numerical data, providing users with valuable insights.

Ingest documents in Amazon Q Business

To create an Amazon Q Business application, retriever, and index to pull data in real time during a conversation, follow the steps under the Create and configure your Amazon Q application section in the AWS Machine Learning Blog post, Discover insights from Amazon S3 with Amazon Q S3 connector.

For this post, we use The World’s Billionaires, which lists the world’s top 10 billionaires from 1987 through 2024 in a tabular format. You can download this data as a PDF from Wikipedia using the Tools menu. Upload the PDF to an Amazon Simple Storage Service (Amazon S3) bucket and use it as a data source in your Amazon Q Business application.
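If you prefer to script the upload, the following is a minimal sketch using boto3; the bucket name and object key are placeholders.

import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="The_Worlds_Billionaires.pdf",        # the PDF downloaded from Wikipedia
    Bucket="your-qbusiness-data-source-bucket",    # placeholder bucket name
    Key="billionaires/The_Worlds_Billionaires.pdf",
)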

Run queries with Amazon Q

You can start asking questions to Amazon Q using the Web experience URL, which can be found on the Applications page, as shown in the following screenshot.

Suppose we want to know the ratio of men to women who appeared on the Forbes 2024 list of the world’s billionaires. As you can tell from the following screenshot of The World’s Billionaires PDF, there were 383 women and 2398 men.

To use Amazon Q Business to elicit that information from the PDF, enter the following in the web experience chatbot:

“In 2024, what is the ratio of men to women who appeared in the Forbes 2024 billionaire’s list?”

Amazon Q Business supplies the answer, as shown in the following screenshot.

The following screenshot shows the list of the top 10 billionaires from 2009.

We enter “How many of the top 10 billionaires in 2009 were from countries outside the United States?”

Amazon Q Business provides an answer, as shown in the following screenshot.

Next, to demonstrate how Amazon Q Business can pull data from a CSV file, we used the example of crime statistics found here.

We enter the question, “How many incidents of crime were reported in Hollywood?”

Amazon Q Business provides the answer, as shown in the following screenshot.

Metadata boosting

To improve the accuracy of responses from an Amazon Q Business application with CSV files, you can add metadata to documents in an S3 bucket by using a metadata file. Metadata is additional information about a document that describes it further, which improves retrieval accuracy for context-poor document formats, for example a CSV file with cryptic column names. Additional fields, such as the document’s title and the date and time it was created, can also be useful if you want to search titles or retrieve documents from a certain time period.

You can do this by following Enable document attributes for search in Amazon Q Business.

Additional details about metadata boosting can be found at Configuring document attributes for boosting in Amazon Q Business in the Amazon Q User Guide.
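As a rough illustration only, a metadata file for the crime statistics CSV could be generated along the following lines. The file naming convention and every field name shown here are assumptions; refer to the documentation linked above for the authoritative schema.

import json

# Hypothetical metadata for the crime statistics CSV; the ".metadata.json" naming
# convention and the field names are assumptions -- consult the Amazon Q Business
# documentation linked above for the authoritative schema.
metadata = {
    "Title": "Reported crime incidents by area",
    "Attributes": {
        "department": "LAPD",
        "dataset_year": "2024",
    },
}

with open("crime_data.csv.metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)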

Clean up

To avoid incurring future charges and to clean out unused roles and policies, delete the resources you created: the Amazon Q application, data sources, and corresponding IAM roles.

To delete the Amazon Q application, follow these steps:

  1. On the Amazon Q console, choose Applications and then select your application.
  2. On the Actions drop-down menu, choose Delete.
  3. To confirm deletion, enter delete in the field and choose Delete. Wait until you get the confirmation message; the process can take up to 15 minutes.
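If you prefer to script the cleanup instead, the application can also be deleted programmatically. The following is a minimal sketch assuming the boto3 qbusiness client; the application ID is a placeholder.

import boto3

qbusiness = boto3.client("qbusiness")

# Placeholder application ID -- copy the actual ID from the Applications page
qbusiness.delete_application(applicationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111")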

To delete the S3 bucket created in Prepare your S3 bucket as a data source, follow these steps:

  1. Follow the instructions in Emptying a bucket
  2. Follow the steps in Deleting a bucket

To delete the IAM Identity center instance you created as part of the prerequisites, follow the steps at Delete your IAM Identity Center instance.

Conclusion

By following this post, you can ingest different types of documents that contain tables in them. Then, you can ask Amazon Q questions related to information in the table and have Amazon Q provide you answers in natural language.

To learn about metadata search, refer to Configuring metadata controls in Amazon Q Business.

For S3 data source setup refer to Set up Amazon Q Business application with S3 data source.


About the author

Jiten Dedhia is a Sr. AI/ML Solutions Architect with over 20 years of experience in the software industry. He has helped Fortune 500 companies with their AI/ML and generative AI needs.

Sapna Maheshwari is a Sr. Solutions Architect at AWS, with a passion for designing impactful tech solutions. She is an engaging speaker who enjoys sharing her insights at conferences.

Read More

Human-AI Collaboration in Physical Tasks

TL;DR: At SmashLab, we’re creating an intelligent assistant that uses the sensors in a smartwatch to support physical tasks such as cooking and DIY. This blog post explores how we use less intrusive scene understanding (compared to cameras) to enable helpful, context-aware interactions that support users in executing tasks in their daily lives.

Thinking about AI assistants for tasks beyond just the digital world? Every day, we perform many tasks, including cooking, crafting, and medical self-care (like the COVID-19 self-test kit), which involve a series of discrete steps. Accurately executing all the steps can be difficult; when we try a new recipe, for example, we might have questions at any step and might make mistakes by skipping important steps or doing them in the wrong order.

This project, Procedural Interaction from Sensing Module (PrISM), aims to support users in executing these kinds of tasks through dialogue-based interactions. By using sensors such as a camera, wearable devices like a smartwatch, and privacy-preserving ambient sensors like a Doppler Radar, an assistant can infer the user’s context (what they are doing within the task) and provide contextually situated help.

Overview of the PrISM framework: multimodal sensing, user state tracking, context-aware interactions, and co-adaptation to achieve the shared goal.

To achieve human-like assistance, we must consider many things: how does the agent understand the user’s context? How should it respond to the user’s spontaneous questions? When should it decide to intervene proactively? And most importantly, how do both human users and AI assistants evolve together through everyday interactions?

While different sensing platforms (e.g., cameras, LiDAR, Doppler Radars, etc.) can be used in our framework, we focus on a smartwatch-based assistant in the following. The smartwatch is chosen for its ubiquity, minimal privacy concerns compared to camera-based systems, and capability for monitoring a user across various daily activities.

Tracking User Actions with Multimodal Sensing

PrISM-Tracker uses a transition graph to improve frame-level multimodal Human Activity Recognition within procedural tasks.

Human Activity Recognition (HAR) is a technique to identify user activity contexts from sensors. For example, a smartwatch has motion and audio sensors that can detect different daily activities, such as hand washing and chopping vegetables [1]. However, out of the box, state-of-the-art HAR struggles with noisy data and less-expressive actions that are often part of daily life tasks.

PrISM-Tracker (IMWUT’22) [2] improves tracking by adding state transition information, that is, how users transition from one step to another and how long they usually spend at each step. The tracker uses an extended version of the Viterbi algorithm [3] to stabilize the frame-by-frame HAR prediction.

The latte-making task consists of 19 steps. PrISM-Tracker (right) improves the raw classifier’s tracking accuracy (left) with an extended version of the Viterbi algorithm.

As shown in the above figure, PrISM-Tracker improves the accuracy of frame-by-frame tracking. Still, the overall accuracy is around 50-60%, highlighting the challenge of using just a smartwatch to precisely track the procedure state at the frame level. Nevertheless, we can develop helpful interactions out of this imperfect sensing.
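To make the idea concrete, the following is a generic, minimal sketch of Viterbi smoothing over frame-level HAR probabilities using a step-transition matrix. It is illustrative only and omits PrISM-Tracker’s extensions, such as modeling how long users typically spend at each step.

import numpy as np

def viterbi_smooth(frame_probs, transition, prior):
    """Return the most likely step sequence given per-frame step probabilities.

    frame_probs: (T, S) array of classifier probabilities per frame
    transition:  (S, S) array where transition[i, j] = P(step j | step i)
    prior:       (S,) array of initial step probabilities
    """
    T, S = frame_probs.shape
    log_delta = np.log(prior + 1e-12) + np.log(frame_probs[0] + 1e-12)
    backptr = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(transition + 1e-12)
        backptr[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(frame_probs[t] + 1e-12)
    # Trace back the most likely path from the final frame
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]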

Responding to Ambiguous User Queries

Demo of PrISM-Q&A in a latte-making scenario (1:06-)

Voice assistants (like Siri and Amazon Alexa), capable of answering user queries during various physical tasks, have shown promise in guiding users through complex procedures. However, users often find it challenging to articulate their queries precisely, especially when unfamiliar with the specific vocabulary. Our PrISM-Q&A (IMWUT’24) [4] can resolve such issues with context derived from PrISM-Tracker.

Overview of how PrISM-Q&A processes user queries in real-time

When a question is posed, sensed contextual information is supplied to Large Language Models (LLMs) as part of the prompt context used to generate a response, even in the case of inherently vague questions like “What should I do next with this?” and “Did I miss any step?” Our studies demonstrated improved accuracy in question answering and preferred user experience compared to existing voice assistants in multiple tasks: cooking, latte-making, and skin care.

Because PrISM-Tracker can make mistakes, the output of PrISM-Q&A may also be incorrect. Thus, if the assistant uses the context information, the assistant first characterizes its current understanding of the context in the response to avoid confusing the user, for instance, “If you are washing your hands, then the next step is cutting vegetables.” This way, it tries to help users identify the error and quickly correct it interactively to get the desired answer.
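As a rough sketch of this prompt-assembly idea (not the actual PrISM-Q&A implementation), the tracked step estimate can be prepended to the user’s question before it is sent to an LLM; all names and values below are illustrative.

def build_prompt(question, tracked_step, confidence, recent_steps):
    """Assemble an LLM prompt that grounds an ambiguous question in sensed context."""
    context = (
        f"The user appears to be on the step '{tracked_step}' "
        f"(confidence {confidence:.0%}). "
        f"Recently completed steps: {', '.join(recent_steps)}."
    )
    instruction = (
        "If you rely on the tracked step, state that assumption in your answer "
        "so the user can correct it if it is wrong."
    )
    return f"{context}\n{instruction}\nUser question: {question}"

prompt = build_prompt(
    question="What should I do next with this?",
    tracked_step="steam the milk",
    confidence=0.55,
    recent_steps=["grind the beans", "pull the espresso shot"],
)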

Intervening with Users Proactively to Prevent Errors

Demo of PrISM-Observer in a cooking scenario (3:38-)

Next, we extended the assistant’s capability by incorporating proactive intervention to prevent errors. Technical challenges include noise in sensing data and uncertainties in user behavior, especially since users are allowed flexibility in the order of steps to complete tasks. To address these challenges, PrISM-Observer (UIST’24) [5] employs a stochastic model to try to account for uncertainties and determine the optimal timing for delivering reminders in real time.

PrISM-Observer continuously models the remaining time to the target step, which involves two uncertainties: the current step and the user’s future transition behavior.

Crucially, the assistant does not impose a rigid, predefined step-by-step sequence; instead, it monitors user behavior and intervenes proactively when necessary. This approach balances user autonomy and proactive guidance, enabling individuals to perform essential tasks safely and accurately.
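As a toy sketch of the timing decision (not the paper’s stochastic model), one could combine the belief over the current step with the expected time from each step to the target step, and trigger a reminder once the estimate drops below a chosen lead time; every name and number below is illustrative.

def expected_time_to_target(step_belief, expected_path_time, lead_time=30.0):
    """Estimate seconds remaining until the target step and decide whether to remind.

    step_belief:        dict mapping step name -> probability the user is on it
    expected_path_time: dict mapping step name -> expected seconds from that step
                        to the target step (derived from the transition graph)
    """
    estimate = sum(p * expected_path_time[step] for step, p in step_belief.items())
    return estimate, estimate <= lead_time

eta, remind_now = expected_time_to_target(
    step_belief={"chop vegetables": 0.6, "wash hands": 0.4},
    expected_path_time={"chop vegetables": 45.0, "wash hands": 120.0},
)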

Future Directions

Our assistant system has just been rolled out, and plenty of future work is still on the horizon.

Minimizing the data collection effort

To train the underlying human activity recognition model on the smartwatch and build a transition graph, we currently conduct 10 to 20 sessions of the task, each annotated with step labels. Employing a zero-shot multimodal activity recognition model and refining step granularity are essential for scaling the assistant to handle various daily tasks.

Co-adaptation of the user and AI assistant

In the health application, our assistants and users learn from each other over time through daily interactions to achieve a shared goal.

As future work, we’re excited to deploy our assistants in healthcare settings to support everyday care for post-operative skin cancer patients and individuals with dementia.

Mackay [6] introduced the idea of a human-computer partnership, where humans and intelligent agents collaborate to outperform either working alone. Also, reciprocal co-adaptation [7] refers to where both the user and the system adapt to and affect the others’ behavior to achieve certain goals. Inspired by these ideas, we’re actively exploring ways to fine-tune our assistant through interactions after deployment. This helps the assistant improve context understanding and find a comfortable control balance by exploring the mixed-initiative interaction design [8].

Conclusion

There are many open questions when it comes to perfecting assistants for physical tasks. Understanding user context accurately during these tasks is particularly challenging due to factors like sensor noise. Through our PrISM project, we aim to overcome these challenges by designing interventions and developing human-AI collaboration strategies. Our goal is to create helpful and reliable interactions, even in the face of imperfect sensing.

Our code and datasets are available on GitHub. We are actively working in this exciting research field. If you are interested, please contact Riku Arakawa (HCII Ph.D. student).

Acknowledgments

The author thanks every collaborator in the project. The development of the PrISM assistant for health applications is in collaboration with University Hospitals of Cleveland Department of Dermatology and Fraunhofer Portugal AICOS.

References

[1] Mollyn, V., Ahuja, K., Verma, D., Harrison, C., & Goel, M. (2022). SAMoSA: Sensing activities with motion and subsampled audio. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(3), 1–19.

[2] Arakawa, R., Yakura, H., Mollyn, V., Nie, S., Russell, E., DeMeo, D. P., … & Goel, M. (2023). PrISM-Tracker: A framework for multimodal procedure tracking using wearable sensors and state transition information with user-driven handling of errors and uncertainty. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 6(4), 1–27.

[3] Forney, G. D. (1973). The Viterbi algorithm. Proceedings of the IEEE, 61(3), 268–278.

[4] Arakawa, R., Lehman, J. F., & Goel, M. (2024). PrISM-Q&A: Step-aware voice assistant on a smartwatch enabled by multimodal procedure tracking and large language models. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(4), 1–26.

[5] Arakawa, R., Yakura, H., & Goel, M. (2024, October). PrISM-Observer: Intervention agent to help users perform everyday procedures sensed using a smartwatch. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (pp. 1–16).

[6] Mackay, W. E. (2023, November). Creating human-computer partnerships. In International Conference on Computer-Human Interaction Research and Applications (pp. 3–17). Cham: Springer Nature Switzerland.

[7] Beaudouin-Lafon, M., Bødker, S., & Mackay, W. E. (2021). Generative theories of interaction. ACM Transactions on Computer-Human Interaction (TOCHI), 28(6), 1–54.

[8] Allen, J. E., Guinn, C. I., & Horvitz, E. (1999). Mixed-initiative interaction. IEEE Intelligent Systems and their Applications, 14(5), 14–23.

Read More

Ready Player Fun: GFN Thursday Brings Six New Adventures to the Cloud

From heart-pounding action games to remastered classics, there’s something for everyone this GFN Thursday.

Six new titles join the cloud this week, starting with The Thing: Remastered. Face the horrors of the Antarctic as the game oozes onto GeForce NOW. Nightdive Studios’ revival of the cult-classic 2002 survival-horror game came to the cloud as a surprise at the PC Gaming Show last week. Since then, GeForce NOW members have been able to experience all the bone-chilling action in the sequel to the title based on Universal Pictures’ genre-defining 1982 film.

And don’t miss out on the limited-time GeForce NOW holiday sale, which offers 50% off the first month of a new Ultimate or Performance membership. The 25% off Day Pass sale ends today — take advantage of the offer to experience 24 hours of cloud gaming with all the benefits of Ultimate or Performance membership.

It’s Alive!

The Thing: Remastered on GeForce NOW
Freeze enemies, not frame rates.

The Thing: Remastered brings the 2002 third-person shooter into the modern era with stunning visual upgrades, including improved character models, textures and animations, all meticulously crafted to enhance the game’s already-tense atmosphere.

Playing as Captain J.F. Blake, leader of a U.S. governmental rescue team, navigate the blood-curdling aftermath of the events depicted in the original film. Trust is a precious commodity as members command their squad through 11 terrifying levels, never knowing who might harbor the alien within. The remaster introduces enhanced lighting and atmospheric effects that make the desolate research facility more immersive and frightening than ever.

With an Ultimate or Performance membership, stream this blood-curdling experience in all its remastered glory without the need for high-end hardware. GeForce NOW streams from powerful GeForce RTX-powered servers in the cloud, rendering every shadow, every flicker of doubt in teammates’ eyes and every grotesque transformation with crystal-clear fidelity.

The Performance tier now offers up to 1440p resolution, allowing members to immerse themselves in the game’s oppressive atmosphere with even greater clarity. Ultimate members can experience the paranoia-inducing gameplay at up to 4K resolution and 120 frames per second, making every heart-pounding moment feel more real than ever.

Feast on This

Dive into the depths of a gothic vampire saga, slide through feudal Japan and flip burgers at breakneck speed with GeForce NOW and the power of the cloud. Grab a controller and rally the gaming squad to stream these mouth-watering additions.

Legacy of Kain Soul Reaver 1&2 Remastered on GeForce NOW
Time to rise again.

The highly anticipated Legacy of Kain Soul Reaver 1&2 Remastered from Aspyr and Crystal Dynamics breathes new life into the classic vampire saga genre. These beloved titles have been meticulously overhauled to offer stunning visuals and improved controls. Join the epic conflict of Kain and Raziel in the gothic world of Nosgoth and traverse between the Spectral and Material Realms to solve puzzles, reveal new paths and defeat foes.

The Spirit of the Samurai on GeForce NOW
Defend the forbidden village.

The Spirit of the Samurai from Digital Mind Games and Kwalee brings a blend of Souls and Metroidvania elements to feudal Japan. This stop-motion inspired 2D action-adventure game offers three playable characters and intense combat with legendary Japanese weapons, all set against a backdrop of mythological landscapes.

Fast Food Simulator on GeForce NOW
The ice cream machine actually works.

Or take on the chaotic world of fast-food management with Fast Food Simulator, a multiplayer simulation game from No Ceiling Games. Take orders, make burgers and increase earnings by dealing with customers. Play solo or co-op with up to four players and take on unexpected and bizarre events that can occur at any moment.

Shift between realms in Legacy of Kain at up to 4K 120 fps with an Ultimate membership, slice through The Spirit of the Samurai’s mythical landscapes in stunning 1440p with RTX ON with a Performance membership or manage a fast-food empire with silky-smooth gameplay. With extended sessions and priority access, members will have plenty of time to master these diverse worlds.

Play On

Diablo Immortal on GeForce NOW
Evil never sleeps.

Diablo Immortal — the action-packed role-playing game from Blizzard Entertainment, set in the dark fantasy world of Sanctuary — bridges the stories of Diablo II and Diablo III. Choose from a variety of classes, each offering unique playstyles and devastating abilities, to battle through diverse zones and randomly generated rifts, and uncover the mystery of the shattered Worldstone while facing off against hordes of demonic enemies.

Since its launch, the game has offered frequent updates, including two new character classes, new zones, gear, competitive events and more demonic stories to experience. With its immersive storytelling, intricate character customization and endless replayability, Diablo Immortal provides members with a rich, hellish adventure to stream from the cloud across devices.

Look for the following games available to stream in the cloud this week:

  • Indiana Jones and the Great Circle (New release on Steam and Xbox, available on the Microsoft Store and PC Game Pass, Dec. 8)
  • Fast Food Simulator (New release on Steam, Dec. 10)
  • Legacy of Kain Soul Reaver 1&2 Remastered (New release on Steam, Dec. 10)
  • The Spirit of the Samurai (New release on Steam, Dec. 12)
  • Diablo Immortal (Battle.net)
  • The Lord of the Rings: Return to Moria (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Driving Mobility Forward, Vay Brings Advanced Automotive Solutions to Roads With NVIDIA DRIVE AGX

Vay, a Berlin-based provider of automotive-grade remote driving (teledriving) technology, is offering an alternative approach to autonomous driving.

Through the company’s app, a user can hail a car, and a professionally trained teledriver will remotely drive the vehicle to the customer’s location. Once the car arrives, the user manually drives it.

After completing their trip, the user pulls over to a safe location away from traffic, exits the car and ends the rental in the app. There’s no need to park the vehicle, as the teledriver will handle the parking or drive the car to the next customer.

This system offers sustainable, door-to-door mobility, with the unique advantage of having a human driver remotely controlling the vehicle in real time.

Vay’s technology is built on the NVIDIA DRIVE AGX centralized compute platform, running the NVIDIA DriveOS operating system for safe, AI-defined autonomous vehicles.

These technologies enable Vay’s fleets to process large volumes of camera and other vehicle data over the air. DRIVE AGX’s real-time, low-latency video streaming capabilities provide enhanced situational awareness for teledrivers, while its automotive-grade design ensures reliability in any driving condition.

“By combining Vay’s innovative remote driving capabilities with the advanced AI and computing power of NVIDIA DRIVE AGX, we’re setting a new standard for remotely driven vehicles,” said Justin Spratt, chief business officer at Vay. “This collaboration helps us bring safe, reliable and accessible driverless options to the market and provides an adaptable solution that can be deployed in real-world environments now — not years from now.”

High-Quality Video Stream

Vay’s advanced technology stack includes NVIDIA DRIVE AGX software that’s optimized for latency and processing power. By harnessing NVIDIA GPUs specifically designed for autonomous driving, the company’s teledriving system can process and transmit high-definition video feeds in real time, delivering critical situational awareness to the teledriver, even in complex environments. In the event of an emergency, the vehicle can safely bring itself to a complete stop.

“Working with NVIDIA, Vay is setting a new standard in driverless technology,” said Bogdan Djukic, cofounder and vice president of engineering, teledrive experience and autonomy at Vay. “We are proud to not only accelerate the deployment of remotely driven and autonomous vehicles but also to expand the boundaries of what’s possible in urban transportation, logistics and beyond — transforming mobility for both businesses and communities.”

Reshaping Mobility With Teledriving

Vay’s technology enables professionally trained teledrivers to remotely drive vehicles from specialized teledrive stations equipped with industry-standard controls, such as a steering wheel and pedals.

The company’s teledrivers are totally immersed in the drive — road traffic sounds, such as those from emergency vehicles and other warning signals, are transmitted via microphones to the operator’s headphones. Camera sensors reproduce the car’s surroundings and transmit them to the screens of the teledrive station with minimum latency. The vehicles can operate at speeds of up to 26 mph.

Vay’s technology effectively addresses complex edge cases with human supervision, enhancing safety while significantly reducing costs and development challenges.

Vay is a member of NVIDIA Inception, a program that nurtures AI startups with go-to-market support, expertise and technology. Last year, Vay became the first and only company in Europe to teledrive a vehicle on public streets without a safety driver.

Since January, Vay has been operating its commercial services in Las Vegas. The startup recently secured a partnership with Bayanat, a provider of AI-powered geospatial solutions, and is working with Ush and Poppy, Belgium-based car-sharing companies, as well as Peugeot, a French automaker.

In October, Vay announced a $35 million investment from the European Investment Bank, which will help it roll out its technology across Europe and expand its development team.

Learn more about the NVIDIA DRIVE platform.

How AWS sales uses Amazon Q Business for customer engagement

Earlier this year, we published the first in a series of posts about how AWS is transforming our seller and customer journeys using generative AI. In addition to planning considerations when building an AI application from the ground up, it focused on our Account Summaries use case, which allows account teams to quickly understand the state of a customer account, including recent trends in service usage, opportunity pipeline, and recommendations to help customers maximize the value they receive from AWS.

In the same spirit of using generative AI to equip our sales teams to most effectively meet customer needs, this post reviews how we’ve delivered an internally-facing conversational sales assistant using Amazon Q Business. We discuss how our sales teams are using it today, compare the benefits of Amazon Q Business as a managed service to the do-it-yourself option, review the data sources available and high-level technical design, and talk about some of our future plans.

Introducing Field Advisor

In April 2024, we launched our AI sales assistant, which we call Field Advisor, making it available to AWS employees in the Sales, Marketing, and Global Services organization, powered by Amazon Q Business. Since that time, thousands of active users have asked hundreds of thousands of questions through Field Advisor, which we have embedded in our customer relationship management (CRM) system, as well as through a Slack application. The following screenshot shows an example of an interaction with Field Advisor.

Field Advisor serves four primary use cases:

  • AWS-specific knowledge search – With Amazon Q Business, we’ve made internal data sources as well as public AWS content available in Field Advisor’s index. This enables sales teams to interact with our internal sales enablement collateral, including sales plays and first-call decks, as well as customer references, customer- and field-facing incentive programs, and content on the AWS website, including blog posts and service documentation.
  • Document upload – When users need to provide context of their own, the chatbot supports uploading multiple documents during a conversation. We’ve seen our sales teams use this capability to do things like consolidate meeting notes from multiple team members, analyze business reports, and develop account strategies. For example, an account manager can upload a document representing their customer’s account plan and use the assistant to help identify new opportunities with the customer (see the API sketch after this list).
  • General productivity – Amazon Q Business specializes in Retrieval Augmented Generation (RAG) over enterprise and domain-specific datasets, and can also perform general knowledge retrieval and content generation tasks. Our sales, marketing, and operations teams use Field Advisor to brainstorm new ideas, as well as generate personalized outreach that they can use with their customers and stakeholders.
  • Notifications and recommendations – To complement the conversational capabilities provided by Amazon Q, we’ve built a mechanism that allows us to deliver alerts, notifications, and recommendations to our field team members. These push-based notifications are available in our assistant’s Slack application, and we’re planning to make them available in our web experience as well. Example notifications we deliver include field-wide alerts in support of AWS summits like AWS re:Invent, reminders to generate an account summary when there’s an upcoming customer meeting, AI-driven insights around customer service usage and business data, and cutting-edge use cases like autonomous prospecting, which we’ll talk more about in an upcoming post.
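
The post itself doesn’t include API code, but as a rough illustration of how the knowledge search and document upload flows can be exercised programmatically, here is a minimal sketch using the Amazon Q Business ChatSync API through boto3. The application ID, file name, and prompt are hypothetical placeholders, and the exact request fields should be confirmed against the current Amazon Q Business API reference.

import boto3

# Minimal sketch: ask a question with an uploaded document as conversation context.
# Application ID, file name, and prompt are hypothetical placeholders.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

with open("account-plan.docx", "rb") as f:
    attachment = {"name": "account-plan.docx", "data": f.read()}

response = qbusiness.chat_sync(
    applicationId="<your-q-business-application-id>",
    userMessage="Summarize this account plan and suggest three new opportunities.",
    attachments=[attachment],
)

print(response["systemMessage"])  # generated answer
for source in response.get("sourceAttributions", []):
    # citations back to the connected knowledge base, when available
    print(source.get("title"), source.get("url"))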

Based on an internal survey, our field teams estimate that roughly a third of their time is spent preparing for their customer conversations, and another 20% (or more) is spent on administrative tasks. This time adds up individually, but also collectively at the team and organizational level. Using our AI assistant built on Amazon Q, team members are saving hours of time each week. Not only that, but our sales teams devise action plans that they otherwise might have missed without AI assistance.

Here’s a sampling of what some of our more active users had to say about their experience with Field Advisor:

“I use Field Advisor to review executive briefing documents, summarize meetings and outline actions, as well as analyze dense information into key points with prompts. Field Advisor continues to enable me to work smarter, not harder.” – Sales Director

“When I prepare for onsite customer meetings, I define which advisory packages to offer to the customer. We work backward from the customer’s business objectives, so I download an annual report from the customer website, upload it in Field Advisor, ask about the key business and tech objectives, and get a lot of valuable insights. I then use Field Advisor to brainstorm ideas on how to best position AWS services. Summarizing the business objectives alone saves me between 4–8 hours per customer, and we have around five customer meetings to prepare for per team member per month.” – AWS Professional Services, EMEA

“I benefit from getting notifications through Field Advisor that I would otherwise not be aware of. My customer’s Savings Plans were expiring, and the notification helped me kick off a conversation with them at the right time. I asked Field Advisor to improve the content and message of an email I needed to send their executive team, and it only took me a minute. Thank you!” – Startup Account Manager, North America

Amazon Q Business underpins this experience, reducing the time and effort it takes for internal teams to have productive conversations with their customers that drive them toward the best possible outcomes on AWS.

The rest of this post explores how we’ve built our AI assistant for sales teams using Amazon Q Business, and highlights some of our future plans.

Putting Amazon Q Business into action

We started our journey in building this sales assistant before Amazon Q Business was available as a fully managed service. AWS provides the primitives needed for building new generative AI applications from the ground up: services like Amazon Bedrock to provide access to several leading foundation models, several managed vector database options for semantic search, and patterns for using Amazon Simple Storage Service (Amazon S3) as a data lake to host knowledge bases that can be used for RAG. This approach works well for teams like ours with builders experienced in these technologies, as well as for teams who need deep control over every component of the tech stack to meet their business objectives.
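
As a concrete, purely illustrative example of the kind of primitive the do-it-yourself path starts from, the following sketch calls a foundation model directly through the Amazon Bedrock runtime Converse API. The model ID and prompt are placeholders, and retrieval, ingestion pipelines, and guardrails would all still have to be built and operated around a call like this.

import boto3

# Illustrative only: a direct foundation model call via the Bedrock runtime
# Converse API. Model ID and prompt are placeholders; prompt construction,
# semantic search, and data pipelines are separate components in this approach.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Draft a short summary of this account's recent service usage: ..."}],
    }],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])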

When Amazon Q Business became generally available in April 2024, we quickly saw an opportunity to simplify our architecture, because the service was designed to meet the needs of our use case: a conversational assistant that could tap into our vast (sales) domain-specific knowledge bases. By moving our core infrastructure to Amazon Q, we no longer needed to choose and optimize a large language model (LLM), manage Amazon Bedrock agents, maintain a vector database and semantic search implementation, or build custom pipelines for data ingestion and management. In just a few weeks, we were able to cut over to Amazon Q and significantly reduce the complexity of our service architecture and operations. Not only that, we expected this move to pay dividends, and it has: the Amazon Q Business service team has continued to add new features (like automatic personalization) and enhance performance and result accuracy.

The following diagram illustrates Field Advisor’s high-level architecture:

Architecture of AWS Field Advisor using Amazon Q Business

Solution overview

We built Field Advisor using the built-in capabilities of Amazon Q Business. This includes how we configured the data sources that make up our knowledge base, document indexing and relevancy tuning, security (authentication, authorization, and guardrails), and the Amazon Q Business APIs for conversation management and custom plugins. We deliver our chatbot experience through a custom web frontend, as well as through a Slack application.

Data management

As mentioned earlier in this post, our initial knowledge base comprises all of our internal sales enablement materials, as well as publicly available content including the AWS website, blog posts, and service documentation. Amazon Q Business provides a number of out-of-the-box connectors to popular data sources like relational databases, content management systems, and collaboration tools. In our case, where we have several applications built in-house, as well as third-party software backed by Amazon S3, we make heavy use of the Amazon Q connector for Amazon S3, as well as custom connectors we’ve written. Using the service’s built-in source connectors standardizes and simplifies the work needed to maintain data quality and manage the overall data lifecycle. Amazon Q gives us a templatized way to filter source documents when generating responses on a particular topic, making it straightforward for the application to produce a higher-quality response. Not only that, but each time Amazon Q provides an answer using the knowledge base we’ve connected, it automatically cites its sources, enabling our sellers to verify the authenticity of the information. Previously, we had to build and maintain custom logic to handle these tasks.
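
For readers who want a feel for what connector setup looks like, here is a hedged sketch of registering an S3 bucket as a data source with the CreateDataSource API. The identifiers are placeholders and the configuration document is only an approximation; the exact schema is connector-specific and should be taken from the Amazon Q Business S3 connector documentation.

import boto3

# Hedged sketch: register an S3 bucket as an Amazon Q Business data source.
# Identifiers are placeholders; the `configuration` document shape is approximate.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

response = qbusiness.create_data_source(
    applicationId="<application-id>",
    indexId="<index-id>",
    displayName="sales-enablement-s3",
    roleArn="arn:aws:iam::<account-id>:role/<q-business-data-source-role>",
    syncSchedule="cron(0 6 * * ? *)",  # example: sync once a day
    configuration={  # approximate; see the S3 connector schema for required fields
        "type": "S3",
        "syncMode": "FULL_CRAWL",
        "connectionConfiguration": {
            "repositoryEndpointMetadata": {"BucketName": "<sales-enablement-bucket>"}
        },
        # inclusion/exclusion patterns and field mappings omitted for brevity
    },
)
print(response["dataSourceId"])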

Security

Amazon Q Business provides capabilities for authentication, authorization, and access control out of the box. For authentication, we use AWS IAM Identity Center for enterprise single sign-on (SSO), backed by our internal identity provider, Amazon Federate. After a one-time identity management setup that governs access to our sales assistant application, Amazon Q is aware of the users and roles across our sales teams, making it effortless for our users to access Field Advisor across multiple delivery channels, like the web experience embedded in our CRM as well as the Slack application.
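
The one-time setup described above roughly corresponds to creating the Amazon Q Business application with an IAM Identity Center instance attached. The following is a hedged sketch with placeholder ARNs and names, not our actual configuration.

import boto3

# Hedged sketch of the one-time application setup with IAM Identity Center.
# ARNs and names are placeholders.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

app = qbusiness.create_application(
    displayName="field-advisor",
    roleArn="arn:aws:iam::<account-id>:role/<q-business-service-role>",
    identityCenterInstanceArn="arn:aws:sso:::instance/ssoins-<example>",
    description="Conversational sales assistant for field teams",
)
print(app["applicationId"])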

Also, with our multi-tenant AI application serving thousands of users across multiple sales teams, it’s critical that end-users are only interacting with data and insights that they should be seeing. Like any large organization, we have information firewalls between teams that help us properly safeguard customer information and adhere to privacy and compliance rules. Amazon Q Business provides the mechanisms for protecting each individual document in its knowledge base, simplifying the work required to make sure we’re respecting permissions on the underlying content that’s accessible to a generative AI application. This way, when a user asks a question of the tool, the answer will be generated using only information that the user is permitted to access.
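
One way document-level permissions can be expressed is by attaching an access control list when documents are ingested. The sketch below uses the BatchPutDocument API with an illustrative principal; the identifiers and field shapes are approximations to be checked against the API reference, not our production setup.

import boto3

# Hedged sketch: ingest a document with a document-level ACL so answers are
# only generated from content the asking user is allowed to see.
# Identifiers, the principal, and field shapes are illustrative approximations.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

qbusiness.batch_put_document(
    applicationId="<application-id>",
    indexId="<index-id>",
    documents=[{
        "id": "account-plan-example-001",
        "title": "Example customer account plan",
        "contentType": "PLAIN_TEXT",
        "content": {"blob": b"Confidential account plan contents..."},
        "accessConfiguration": {
            "accessControls": [{
                "principals": [{
                    "user": {
                        "id": "seller@example.com",  # placeholder principal
                        "access": "ALLOW",
                        "membershipType": "INDEX",
                    }
                }]
            }]
        },
    }],
)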

Web experience

As noted earlier, we built a custom web frontend rather than using the Amazon Q built-in web experience. The Amazon Q experience works great, with features like conversation history, sample quick prompts, and Amazon Q Apps. Amazon Q Business makes these features available through the service API, allowing for a customized look and feel on the frontend. We chose this path to have a more fluid integration with our other field-facing tools, control over branding, and sales-specific contextual hints that we’ve built into the experience. As an example, we’re planning to use Amazon Q Apps as the foundation for an integrated prompt library that is personalized for each user and field-facing role.
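
To give a sense of what a custom frontend builds on, the following hedged sketch lists a user’s conversations and continues one of them through the service API; all identifiers are placeholders.

import boto3

# Hedged sketch of the conversation-management APIs a custom frontend can build on.
# All identifiers are placeholders.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")
app_id = "<application-id>"

# Populate a history sidebar with the user's past conversations.
for conv in qbusiness.list_conversations(applicationId=app_id)["conversations"]:
    print(conv["conversationId"], conv.get("title"))

# Continue an existing conversation by passing its ID and the last system message ID.
reply = qbusiness.chat_sync(
    applicationId=app_id,
    conversationId="<conversation-id>",
    parentMessageId="<previous-system-message-id>",
    userMessage="Turn that summary into three talking points for a first call.",
)
print(reply["systemMessage"])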

A look at what’s to come

Field Advisor has seen early success, but it’s still just the beginning, or Day 1 as we like to say here at Amazon. We’re continuing to bring more generative AI to our field-facing teams and field support functions across the board. With Amazon Q Business, we no longer need to manage each of the infrastructure components required to deliver a secure, scalable conversational assistant; instead, we can focus on the data, insights, and experience that benefit our salesforce and help them make our customers successful on AWS. As Amazon Q Business adds features, capabilities, and improvements (which we often have the privilege of testing in early access), we automatically reap the benefits.

The team that built this sales assistant has been focused on developing deeper integration with our CRM, which will be launching soon. This will enable teams across all roles to ask detailed questions about their customer and partner accounts, territories, leads and contacts, and sales pipeline. With an Amazon Q custom plugin that uses an internal natural language to SQL (NL2SQL) library, the same one that powers generative SQL capabilities across some AWS database services like Amazon Redshift, we will provide the ability to aggregate and slice and dice the opportunity pipeline and trends in product consumption conversationally. Finally, a common request we get is to use the assistant to generate more hyper-personalized customer-facing collateral; think of a first-call deck about AWS products and solutions that’s specific to an individual customer, localized in their language, that draws from the latest available service options, competitive intelligence, and the customer’s existing usage in the AWS Cloud.
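
While the CRM integration itself isn’t public, the general mechanism for exposing an internal API to Amazon Q Business is a custom plugin described by an OpenAPI schema. The sketch below is hypothetical: the endpoint, schema location, and auth configuration are stand-ins rather than the actual internal setup.

import boto3

# Hypothetical sketch: expose an internal query API to Amazon Q Business as a
# custom plugin described by an OpenAPI schema. Endpoint, schema location, and
# auth configuration are stand-ins, not the actual internal setup.
qbusiness = boto3.client("qbusiness", region_name="us-east-1")

plugin = qbusiness.create_plugin(
    applicationId="<application-id>",
    displayName="crm-pipeline-queries",
    type="CUSTOM",
    serverUrl="https://crm-query.example.internal",  # hypothetical NL2SQL-backed service
    authConfiguration={"noAuthConfiguration": {}},  # placeholder; a real setup would use OAuth
    customPluginConfiguration={
        "description": "Answer questions about opportunity pipeline and product consumption",
        "apiSchemaType": "OPEN_API_V3",
        "apiSchema": {"s3": {"bucket": "<schema-bucket>", "key": "crm-query-openapi.yaml"}},
    },
)
print(plugin["pluginId"])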

Conclusion

In this post, we reviewed how we’ve made a generative AI assistant available to AWS sales teams, powered by Amazon Q Business. As new capabilities land and usage continues to grow, we’re excited to see how our field teams use this, along with other AI solutions, to help customers maximize their value on the AWS Cloud.

The next post in this series will dive deeper into another recent generative AI use case and how we applied this to autonomous sales prospecting. Stay tuned for more, and reach out to us with any questions about how you can drive growth with AI at your business.


About the authors

Joe Travaglini is a Principal Product Manager on the AWS Field Experiences (AFX) team who focuses on helping the AWS salesforce deliver value to AWS customers through generative AI. Prior to AFX, Joe led the product management function for Amazon Elastic File System, Amazon ElastiCache, and Amazon MemoryDB.

Jonathan Garcia is a Sr. Software Development Manager based in Seattle with over a decade of experience at AWS. He has worked on a variety of products, including data visualization tools and mobile applications. He is passionate about serverless technologies, mobile development, leveraging Generative AI, and architecting innovative high-impact solutions. Outside of work, he enjoys golfing, biking, and exploring the outdoors.

Umesh Mohan is a Software Engineering Manager at AWS, where he has been leading a team of talented engineers for over three years. With more than 15 years of experience in building data warehousing products and software applications, he is now focusing on the use of generative AI to drive smarter and more impactful solutions. Outside of work, he enjoys spending time with his family and playing tennis.
