Announcing PyTorch Docathon 2023



We are excited to announce the first ever PyTorch Docathon! The Docathon is a hackathon-style event focused on improving the documentation by enlisting the help of the community. Documentation is a crucial aspect of any technology and by improving the documentation, we can make it easier for users to get started with PyTorch, help them understand how to use its features effectively, and ultimately accelerate research to production in the field of machine learning.

WHY PARTICIPATE

Low Barrier to Entry

Many open-source projects require extensive knowledge of the codebase and prior contributions before you can participate in any sort of hackathon event. The Docathon, on the other hand, is designed for newcomers. We do expect familiarity with Python and basic knowledge of PyTorch and ML, but don't fret: some tasks are related to website issues and won't require even that.

Tangible Results

One of the best things about the Docathon is that you can see the results of your efforts in real time. Improving documentation can have a huge impact on a project’s usability and accessibility and you’ll be able to see those improvements firsthand. Plus having tangible results can be a great motivator to keep contributing.

Collaborative Environment

The Docathon is a collaborative event which means you’ll have the opportunity to work with other contributors and PyTorch maintainers on improving the documentation. This can be a great way to learn from others, share ideas, and build connections.

Learning Opportunities

Finally, even if you are not an expert in PyTorch, the Docathon can be a great learning experience. You’ll have the opportunity to explore the PyTorch modules and test some of the tutorials on your machine as well as in the CI.

EVENT DETAILS

  • May 31: Kick-off
  • May 31 – June 11: Submissions and Feedback
  • June 12 – June 13: Final Reviews
  • June 15: Winner Announcements

Details for the Docathon will be announced at the kick-off stream on May 31.

Please register to join this year’s event: RSVP

Read More

Picture Perfect: AV1 Streaming Dazzles on GeForce RTX 40 Series GPUs With OBS Studio 29.1 Launch and YouTube Support


AV1, the next-generation video codec, is expanding its reach with today’s release of OBS Studio 29.1. This latest software update adds support for AV1 streaming to YouTube over Enhanced RTMP.

All GeForce RTX 40 Series GPUs — including laptop GPUs and the recently launched GeForce RTX 4070 — support real-time AV1 hardware encoding, providing 40% more efficient encoding on average than H.264 and delivering higher quality than competing GPUs.

AV1 vs. H.264 encode efficiency based on BD-SNR.

This reduces the upload bandwidth needed to stream, a common limitation from streaming services and internet service providers. At higher resolutions, AV1 encoding is even more efficient. For example, AV1 enables streaming 4K at 60 frames per second with 10 Mbps upload bandwidth — down from 20 Mbps with H.264 — making 4K60 streaming available to a wider audience.

AV1 — The New Standard

As a founding member of the Alliance for Open Media, NVIDIA has worked closely with industry titans in developing the AV1 codec. This work was driven by gamers and online content creators who pushed the boundaries of old formats defined roughly 20 years ago. The previous standard for livestreaming, H.264, usually maxed out at 1080p and 60 fps at the commonly used bitrates of 6-8 Mbps, and often produced blocky, grainy images.

AV1’s increased efficiency enables streaming higher-quality images, allowing creators to stream at higher resolutions with smoother frame rates. Even in network-limited environments, streamers can now reap the benefits of high-quality video shared with their audience.

Support for AV1 on YouTube comes through the recent update to RTMP. The enhanced protocol also adds support for HEVC streaming, bringing new formats to users on the existing low-latency protocol they use for H.264 streaming. Enhanced RTMP ingestion has been released as a beta feature on YouTube.

Learn how to configure OBS Studio for streaming AV1 with GeForce RTX 40 Series GPUs in the OBS setup guide.

Better Streams With NVENC, NVIDIA Broadcast

GeForce RTX 40 Series GPUs usher in a new era of high-quality streaming with AV1 encoding support on the eighth-generation NVENC. A boon to streamers, NVENC offloads compute-intensive encoding tasks from the CPU to dedicated hardware on the GPU.

Comparison of 4K image quality in AV1 livestream¹.

Designed to support the rigors of professional content creators, NVENC preserves video quality with a higher accuracy than competitive encoders. GeForce RTX users can stream higher-quality images at the same bitrate as competitive products or encode at a lower bitrate while maintaining a similar picture quality.

NVIDIA Broadcast, part of the exclusive NVIDIA Studio suite of software, transforms any room into a home studio. Livestreams, voice chats and video calls look and sound better with powerful AI effects like eye contact, noise and room echo removal, virtual background and more.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

 

1. Source: 4K60 AV1 encoded video with AMD 7900 XT, GeForce RTX 4080 and Intel Arc 770 with OBS Studio default settings at 12 Mbps.

Read More

Hosting ML Models on Amazon SageMaker using Triton: XGBoost, LightGBM, and Treelite Models


One of the most popular models available today is XGBoost. With the ability to solve various problems such as classification and regression, XGBoost has become a popular option that also falls into the category of tree-based models. In this post, we dive deep to see how Amazon SageMaker can serve these models using NVIDIA Triton Inference Server. Real-time inference workloads can have varying requirements and service level agreements (SLAs) for latency and throughput, which can be met using SageMaker real-time endpoints.

SageMaker provides single model endpoints, which allow you to deploy a single machine learning (ML) model behind a logical endpoint. For other use cases, you can choose to manage cost and performance using multi-model endpoints, which allow you to specify multiple models to host behind a logical endpoint. Regardless of the option that you choose, SageMaker endpoints provide a scalable mechanism for even the most demanding enterprise customers while providing value in a plethora of features, including shadow variants, auto scaling, and native integration with Amazon CloudWatch (for more information, refer to CloudWatch Metrics for Multi-Model Endpoint Deployments).

Triton supports various backends as engines to support the running and serving of various ML models for inference. For any Triton deployment, it’s crucial to know how the backend behavior impacts your workloads and what to expect so that you can be successful. In this post, we help you understand the Forest Inference Library (FIL) backend, which is supported by Triton on SageMaker, so that you can make an informed decision for your workloads and get the best performance and cost optimization possible.

Deep dive into the FIL backend

Triton supports the FIL backend to serve tree models, such as XGBoost, LightGBM, scikit-learn Random Forest, RAPIDS cuML Random Forest, and any other model supported by Treelite. These models have long been used for solving problems such as classification or regression. Although these types of models have traditionally run on CPUs, the popularity of these models and inference demands have led to various techniques to increase inference performance. The FIL backend utilizes many of these techniques by using cuML constructs and is built on C++ and the CUDA core library to optimize inference performance on GPU accelerators.

The FIL backend uses cuML's libraries to use CPU or GPU cores to accelerate inference. To use these processors, data is referenced from host memory (for example, NumPy arrays) or GPU arrays (cuDF, Numba, CuPy, or any library that supports the __cuda_array_interface__ API). After the data is staged in memory, the FIL backend can run processing across all the available CPU or GPU cores.

The FIL backend threads can communicate with each other without utilizing shared memory of the host, but in ensemble workloads, host memory should be considered. The following diagram shows an ensemble scheduler runtime architecture where you have the ability to fine-tune the memory areas, including CPU addressable shared memory that is used for inter-process communication between Triton (C++) and the Python process (Python backend) for exchanging tensors (input/output) with the FIL backend.

Triton Inference Server provides configurable options for developers to tune their workloads and optimize model performance. The configuration dynamic_batching allows Triton to hold client-side requests and batch them on the server side in order to efficiently use FIL’s parallel computation to inference the entire batch together. The option max_queue_delay_microseconds offers a fail-safe control of how long Triton waits to form a batch.
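
For illustration, a minimal config.pbtxt fragment enabling both options might look like the following sketch (the 100-microsecond delay is an arbitrary example value, not a tuned recommendation):

dynamic_batching {
  max_queue_delay_microseconds: 100
}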

There are a number of other FIL-specific options available that impact performance and behavior. We suggest starting with storage_type. When running the backend on GPU, FIL creates a new memory/data structure that represents the trees, which can impact performance and memory footprint. This is configurable via the configuration parameter storage_type, which has the options dense, sparse, and auto. Choosing the dense option consumes more GPU memory and doesn't always result in better performance, so it's best to check. In contrast, the sparse option consumes less GPU memory and can perform as well as or better than dense. Choosing auto causes the model to default to dense unless doing so would consume significantly more GPU memory than sparse.

When it comes to model performance, you might consider emphasizing the threads_per_tree option. One thing that you may observe in real-world scenarios is that threads_per_tree can have a bigger impact on throughput than any other parameter. Setting it to any power of 2 from 1–32 is valid. The optimal value is hard to predict for this parameter, but when the server is expected to deal with higher load or process larger batch sizes, it tends to benefit from a larger value than when it's processing a few rows at a time.

Another parameter to be aware of is algo, which is also available if you’re running on GPU. This parameter determines the algorithm that’s used to process the inference requests. The options supported for this are ALGO_AUTO, NAIVE, TREE_REORG, and BATCH_TREE_REORG. These options determine how nodes within a tree are organized and can also result in performance gains. The ALGO_AUTO option defaults to NAIVE for sparse storage and BATCH_TREE_REORG for dense storage.

Lastly, FIL comes with a Shapley explainer, which can be activated using the treeshap_output parameter. However, keep in mind that Shapley outputs hurt performance due to their size.
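
As an illustration, these FIL-specific options are passed through the parameters section of config.pbtxt as string values; the following fragment is a sketch with example settings (not tuned recommendations):

parameters [
  {
    key: "storage_type"
    value: { string_value: "AUTO" }
  },
  {
    key: "threads_per_tree"
    value: { string_value: "4" }
  },
  {
    key: "algo"
    value: { string_value: "ALGO_AUTO" }
  },
  {
    key: "treeshap_output"
    value: { string_value: "false" }
  }
]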

Model format

There is currently no standard file format to store forest-based models; every framework tends to define its own format. In order to support multiple input file formats, FIL imports data using the open-source Treelite library. This enables FIL to support models trained in popular frameworks, such as XGBoost and LightGBM. Note that the format of the model that you’re providing must be set in the model_type configuration value specified in the config.pbtxt file.

Config.pbtxt

Each model in a model repository must include a model configuration that provides the required and optional information about the model. Typically, this configuration is provided in a config.pbtxt file specified as ModelConfig protobuf. To learn more about the config settings, refer to Model Configuration. The following are some of the model configuration parameters:

  • max_batch_size – This determines the maximum batch size that can be passed to this model. In general, the only limit on the size of batches passed to a FIL backend is the memory available with which to process them. For GPU runs, the available memory is determined by the size of Triton’s CUDA memory pool, which can be set via a command line argument when starting the server.
  • input – Options in this section tell Triton the number of features to expect for each input sample.
  • output – Options in this section tell Triton how many output values there will be for each sample. If the predict_proba option is set to true, then a probability value will be returned for each class. Otherwise, a single value will be returned, indicating the class predicted for the given sample.
  • instance_group – This determines how many instances of this model will be created and whether they will use GPU or CPU.
  • model_type – This string indicates what format the model is in (xgboost_json in this example, but xgboost, lightgbm, and tl_checkpoint are valid formats as well).
  • predict_proba – If set to true, probability values will be returned for each class rather than just a class prediction.
  • output_class – This is set to true for classification models and false for regression models.
  • threshold – This is a score threshold for determining classification. When output_class is set to true, this must be provided, although it won’t be used if predict_proba is also set to true.
  • storage_type – In general, using AUTO for this setting should meet most use cases. If AUTO storage is selected, FIL will load the model using either a sparse or dense representation based on the approximate size of the model. In some cases, you may want to explicitly set this to SPARSE in order to reduce the memory footprint of large models.

Triton Inference Server on SageMaker

SageMaker allows you to deploy both single model and multi-model endpoints with NVIDIA Triton Inference Server. The following figure shows the Triton Inference Server high-level architecture. The model repository is a file system-based repository of the models that Triton will make available for inferencing. Inference requests arrive at the server and are routed to the appropriate per-model scheduler. Triton implements multiple scheduling and batching algorithms that can be configured on a model-by-model basis. Each model’s scheduler optionally performs batching of inference requests and then passes the requests to the backend corresponding to the model type. The backend performs inferencing using the inputs provided in the batched requests to produce the requested outputs. The outputs are then returned.

When configuring your auto scaling groups for SageMaker endpoints, you may want to consider SageMakerVariantInvocationsPerInstance as the primary criteria to determine the scaling characteristics of your auto scaling group. In addition, depending on whether your models are running on GPU or CPU, you may also consider using CPUUtilization or GPUUtilization as additional criteria. Note that for single model endpoints, because the models deployed are all the same, it’s fairly straightforward to set proper policies to meet your SLAs. For multi-model endpoints, we recommend deploying similar models behind a given endpoint to have more steady predictable performance. In use cases where models of varying sizes and requirements are used, you may want to separate those workloads across multiple multi-model endpoints or spend some time fine-tuning your auto scaling group policy to obtain the best cost and performance balance.
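
As a sketch of how such a policy can be attached with the AWS SDK for Python (Boto3), the following registers a target-tracking policy on SageMakerVariantInvocationsPerInstance; the endpoint name, variant name, capacity bounds, and target value are illustrative assumptions to adjust for your own SLAs:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Illustrative endpoint/variant; replace with your own resources.
resource_id = "endpoint/my-triton-endpoint/variant/AllTraffic"

# Register the endpoint variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Scale on the average number of invocations per instance.
autoscaling.put_scaling_policy(
    PolicyName="fil-invocations-per-instance",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # illustrative target; tune to your latency SLA
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)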

For a list of NVIDIA Triton Deep Learning Containers (DLCs) supported by SageMaker inference, refer to Available Deep Learning Containers Images.

SageMaker notebook walkthrough

ML applications are complex and can often require data preprocessing. In this notebook, we dive into how to deploy a tree-based ML model like XGBoost using the FIL backend in Triton on a SageMaker multi-model endpoint. We also cover how to implement a Python-based data preprocessing inference pipeline for your model using the ensemble feature in Triton. This will allow us to send in the raw data from the client side and have both data preprocessing and model inference happen in a Triton SageMaker endpoint for optimal inference performance.

Triton model ensemble feature

Triton Inference Server greatly simplifies the deployment of AI models at scale in production. Triton Inference Server comes with a convenient solution that simplifies building preprocessing and postprocessing pipelines. The Triton Inference Server platform provides the ensemble scheduler, which is responsible for pipelining models participating in the inference process while ensuring efficiency and optimizing throughput. Using ensemble models can avoid the overhead of transferring intermediate tensors and minimize the number of requests that must be sent to Triton.

In this notebook, we show how to use the ensemble feature for building a pipeline of data preprocessing with XGBoost model inference, and you can extrapolate from it to add custom postprocessing to the pipeline.

Set up the environment

We begin by setting up the required environment. We install the dependencies required to package our model pipeline and run inferences using Triton Inference Server. We also define the AWS Identity and Access Management (IAM) role that will give SageMaker access to the model artifacts and the NVIDIA Triton Amazon Elastic Container Registry (Amazon ECR) image. See the following code:

import boto3
import sagemaker
from sagemaker import get_execution_role
import pandas as pd
import numpy as np
import subprocess
sess = boto3.Session()
sm = sess.client("sagemaker")
##NOTE :Replace with your S3 bucket name
default_bucket="" 
sagemaker_session = sagemaker.Session(default_bucket=default_bucket)

##NOTE : Make sure to have SageMakerFullAccess permission to the below IAM Role
role = get_execution_role()
client = boto3.client("sagemaker-runtime")
s3_bucket = sagemaker_session.default_bucket()

##NOTE : Latest SageMaker DLCs can be found here, please change region and account ids accordingly - https://github.com/aws/deep-learning-containers/blob/master/available_images.md

# The following values are defined earlier in the notebook; shown here for
# completeness (the us-east-1 entry is an example - verify it against the list above).
region = sess.region_name
base = "amazonaws.com.cn" if region.startswith("cn-") else "amazonaws.com"
account_id_map = {"us-east-1": "785573368785"}

triton_image_uri = (
    "{account_id}.dkr.ecr.{region}.{base}/sagemaker-tritonserver:23.02-py3".format(
        account_id=account_id_map[region], region=region, base=base
    )
)

Create a Conda environment for preprocessing dependencies

The Python backend in Triton requires us to use a Conda environment for any additional dependencies. In this case, we use the Python backend to preprocess the raw data before feeding it into the XGBoost model that is running in the FIL backend. Even though we originally used RAPIDS cuDF and cuML to do the data preprocessing, here we use Pandas and scikit-learn as preprocessing dependencies during inference. We do this for three reasons:

  • We show how to create a Conda environment for your dependencies and how to package it in the format expected by Triton’s Python backend.
  • By showing the preprocessing model running in the Python backend on the CPU while the XGBoost runs on the GPU in the FIL backend, we illustrate how each model in Triton’s ensemble pipeline can run on a different framework backend as well as different hardware configurations.
  • It highlights how the RAPIDS libraries (cuDF, cuML) are compatible with their CPU counterparts (Pandas, scikit-learn). For example, we can show how LabelEncoders created in cuML can be used in scikit-learn and vice versa.

We follow the instructions from the Triton documentation for packaging preprocessing dependencies (scikit-learn and Pandas) to be used in the Python backend as a Conda environment TAR file. The bash script create_prep_env.sh creates the Conda environment TAR file, then we move it into the preprocessing model directory. See the following code:

#!/bin/bash

conda create -y -n preprocessing_env python=3.8
source /opt/conda/etc/profile.d/conda.sh
conda activate preprocessing_env
export PYTHONNOUSERSITE=True
conda install -y -c conda-forge pandas scikit-learn
pip install conda-pack
conda-pack

After we run the preceding script, it generates preprocessing_env.tar.gz, which we copy to the preprocessing directory:

!cp preprocessing_env.tar.gz model_cpu_repository/preprocessing/
!cp preprocessing_env.tar.gz model_gpu_repository/preprocessinggpu/

Set up preprocessing with the Triton Python backend

For preprocessing, we use Triton’s Python backend to perform tabular data preprocessing (categorical encoding) during inference for raw data requests coming into the server. For more information about the preprocessing that was done during training, refer to the training notebook.

The Python backend enables preprocessing, postprocessing, and any other custom logic to be implemented in Python and served with Triton. Using Triton on SageMaker requires us to first set up a model repository folder containing the models we want to serve. We have already set up a Python data preprocessing model called preprocessing in model_cpu_repository (and preprocessinggpu in model_gpu_repository).

Triton has specific requirements for model repository layout. Within the top-level model repository directory, each model has its own subdirectory containing the information for the corresponding model. Each model directory in Triton must have at least one numeric subdirectory representing a version of the model. The value 1 represents version 1 of our Python preprocessing model. Each model is run by a specific backend, so within each version subdirectory there must be the model artifact required by that backend. For this example, we use the Python backend, which requires the Python file you’re serving to be called model.py, and the file needs to implement certain functions. If we were using a PyTorch backend, a model.pt file would be required, and so on. For more details on naming conventions for model files, refer to Model Files.

The model.py Python file we use here implements all the tabular data preprocessing logic to convert raw data into features that can be fed into our XGBoost model.
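
As a rough sketch of the shape of such a model.py (the Triton Python backend API is real, but the placeholder feature engineering below stands in for the notebook's actual encoding logic):

import json

import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Parse the model config so we can look up the declared output datatype.
        self.model_config = json.loads(args["model_config"])
        output_config = pb_utils.get_output_config_by_name(self.model_config, "OUTPUT")
        self.output_dtype = pb_utils.triton_string_to_numpy(output_config["data_type"])

    def execute(self, requests):
        responses = []
        for request in requests:
            # Read one raw input column; the real model.py reads all 14 raw columns
            # and applies the categorical encoders that were fit during training.
            amount = pb_utils.get_input_tensor_by_name(request, "Amount").as_numpy()

            # Placeholder preprocessing: emit the 15 features the FIL model expects.
            batch_size = amount.shape[0]
            features = np.zeros((batch_size, 15), dtype=self.output_dtype)

            out_tensor = pb_utils.Tensor("OUTPUT", features)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses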

Every Triton model must also provide a config.pbtxt file describing the model configuration. To learn more about the config settings, refer to Model Configuration. Our config.pbtxt file specifies the backend as python and all the input columns for raw data along with preprocessed output, which consists of 15 features. We also specify we want to run this Python preprocessing model on the CPU. See the following code:

name: "preprocessing"
backend: "python"
max_batch_size: 882352
input [
    {
        name: "User"
        data_type: TYPE_FP32
        dims: [ 1 ]
    },
    {
        name: "Card"
        data_type: TYPE_FP32
        dims: [ 1 ]
    },
    {
        name: "Year"
        data_type: TYPE_FP32
        dims: [ 1 ]
    },
    {
        name: "Month"
        data_type: TYPE_FP32
        dims: [ 1 ]
    },
    {
        name: "Day"
        data_type: TYPE_FP32
        dims: [ 1 ]
    },
    {
        name: "Time"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Amount"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Use Chip"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Merchant Name"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Merchant City"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Merchant State"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Zip"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "MCC"
        data_type: TYPE_STRING
        dims: [ 1 ]
    },
    {
        name: "Errors?"
        data_type: TYPE_STRING
        dims: [ 1 ]
    }
    
]
output [
    {
        name: "OUTPUT"
        data_type: TYPE_FP32
        dims: [ 15 ]
    }
]

instance_group [
    {
        count: 1
        kind: KIND_CPU
    }
]
parameters: {
  key: "EXECUTION_ENV_PATH",
  value: {string_value: "$$TRITON_MODEL_DIRECTORY/preprocessing_env.tar.gz"}
}

Set up a tree-based ML model for the FIL backend

Next, we set up the model directory for a tree-based ML model like XGBoost, which will be using the FIL backend.

The expected layouts for model_cpu_repository and model_gpu_repository are similar to the one we showed earlier.

Here, fil is the name of the model. We can give it a different name, such as xgboost, if we want to. 1 is the version subdirectory, which contains the model artifact; in this case, it's the xgboost.json model that we saved. Let's create this expected layout:

# move saved xgboost model into the fil model directories
!mkdir -p model_cpu_repository/fil/1
!mkdir -p model_gpu_repository/filgpu/1
!cp xgboost.json model_cpu_repository/fil/1/
!cp xgboost.json model_gpu_repository/filgpu/1/

We need to have the configuration file config.pbtxt describing the model configuration for the tree-based ML model, so that the FIL backend in Triton can understand how to serve it. For more information, refer to the latest generic Triton configuration options and the configuration options specific to the FIL backend. We focus on just a few of the most common and relevant options in this example.

Create config.pbtxt for model_cpu_repository:

import os

USE_GPU = False
FIL_MODEL_DIR = "./model_cpu_repository/fil"

# Maximum size in bytes for input and output arrays. If you are
# using Triton 21.11 or higher, all memory allocations will make
# use of Triton's memory pool, which has a default size of
# 67_108_864 bytes
MAX_MEMORY_BYTES = 60_000_000
NUM_FEATURES = 15
NUM_CLASSES = 2
bytes_per_sample = (NUM_FEATURES + NUM_CLASSES) * 4
max_batch_size = MAX_MEMORY_BYTES // bytes_per_sample

IS_CLASSIFIER = True
model_format = "xgboost_json"

# Select deployment hardware (GPU or CPU)
if USE_GPU:
    instance_kind = "KIND_GPU"
else:
    instance_kind = "KIND_CPU"

# whether the model is doing classification or regression
if IS_CLASSIFIER:
    classifier_string = "true"
else:
    classifier_string = "false"

# whether to predict probabilities or not
predict_proba = False

if predict_proba:
    predict_proba_string = "true"
else:
    predict_proba_string = "false"

config_text = f"""backend: "fil"
max_batch_size: {max_batch_size}
input [                                 
 {{  
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ {NUM_FEATURES} ]                    
  }} 
]
output [
 {{
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }}
]
instance_group [{{ kind: {instance_kind} }}]
parameters [
  {{
    key: "model_type"
    value: {{ string_value: "{model_format}" }}
  }},
  {{
    key: "predict_proba"
    value: {{ string_value: "{predict_proba_string}" }}
  }},
  {{
    key: "output_class"
    value: {{ string_value: "{classifier_string}" }}
  }},
  {{
    key: "threshold"
    value: {{ string_value: "0.5" }}
  }},
  {{
    key: "storage_type"
    value: {{ string_value: "AUTO" }}
  }}
]

dynamic_batching {{}}"""

config_path = os.path.join(FIL_MODEL_DIR, "config.pbtxt")
with open(config_path, "w") as file_:
    file_.write(config_text)

Similarly, set up config.pbtxt for model_gpu_repository (note the differences are USE_GPU = True and FIL_MODEL_DIR pointing to the GPU repository):

USE_GPU = True
FIL_MODEL_DIR = "./model_gpu_repository/filgpu"

# Maximum size in bytes for input and output arrays. If you are
# using Triton 21.11 or higher, all memory allocations will make
# use of Triton's memory pool, which has a default size of
# 67_108_864 bytes
MAX_MEMORY_BYTES = 60_000_000
NUM_FEATURES = 15
NUM_CLASSES = 2
bytes_per_sample = (NUM_FEATURES + NUM_CLASSES) * 4
max_batch_size = MAX_MEMORY_BYTES // bytes_per_sample

IS_CLASSIFIER = True
model_format = "xgboost_json"

# Select deployment hardware (GPU or CPU)
if USE_GPU:
    instance_kind = "KIND_GPU"
else:
    instance_kind = "KIND_CPU"

# whether the model is doing classification or regression
if IS_CLASSIFIER:
    classifier_string = "true"
else:
    classifier_string = "false"

# whether to predict probabilities or not
predict_proba = False

if predict_proba:
    predict_proba_string = "true"
else:
    predict_proba_string = "false"

config_text = f"""backend: "fil"
max_batch_size: {max_batch_size}
input [                                 
 {{  
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ {NUM_FEATURES} ]                    
  }} 
]
output [
 {{
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }}
]
instance_group [{{ kind: {instance_kind} }}]
parameters [
  {{
    key: "model_type"
    value: {{ string_value: "{model_format}" }}
  }},
  {{
    key: "predict_proba"
    value: {{ string_value: "{predict_proba_string}" }}
  }},
  {{
    key: "output_class"
    value: {{ string_value: "{classifier_string}" }}
  }},
  {{
    key: "threshold"
    value: {{ string_value: "0.5" }}
  }},
  {{
    key: "storage_type"
    value: {{ string_value: "AUTO" }}
  }}
]

dynamic_batching {{}}"""

config_path = os.path.join(FIL_MODEL_DIR, "config.pbtxt")
with open(config_path, "w") as file_:
    file_.write(config_text)

Set up an inference pipeline of the data preprocessing Python backend and FIL backend using ensembles

Now we’re ready to set up the inference pipeline for data preprocessing and tree-based model inference using an ensemble model. An ensemble model represents a pipeline of one or more models and the connection of input and output tensors between those models. Here we use the ensemble model to build a pipeline of data preprocessing in the Python backend followed by XGBoost in the FIL backend.

The expected layout for the ensemble model directory is similar to the ones we showed previously:

# create model version directory for ensemble CPU model
!mkdir -p model_cpu_repository/ensemble/1
# create model version directory for ensemble GPU model
!mkdir -p model_gpu_repository/ensemble/1

We created the ensemble model’s config.pbtxt following the guidance in Ensemble Models. Importantly, we need to set up the ensemble scheduler in config.pbtxt, which specifies the data flow between models within the ensemble. The ensemble scheduler collects the output tensors in each step, and provides them as input tensors for other steps according to the specification.
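
A minimal sketch of generating such an ensemble config.pbtxt in Python, in the same style as the FIL configs above, might look like the following; the intermediate tensor name preprocessed_data, the step names, and the output directory are assumptions that must match your repository layout:

import os

ENSEMBLE_DIR = "./model_cpu_repository/ensemble"
RAW_COLUMNS = [
    ("User", "TYPE_FP32"), ("Card", "TYPE_FP32"), ("Year", "TYPE_FP32"),
    ("Month", "TYPE_FP32"), ("Day", "TYPE_FP32"), ("Time", "TYPE_STRING"),
    ("Amount", "TYPE_STRING"), ("Use Chip", "TYPE_STRING"),
    ("Merchant Name", "TYPE_STRING"), ("Merchant City", "TYPE_STRING"),
    ("Merchant State", "TYPE_STRING"), ("Zip", "TYPE_STRING"),
    ("MCC", "TYPE_STRING"), ("Errors?", "TYPE_STRING"),
]

# The ensemble exposes the same raw inputs as the preprocessing model.
inputs = ",\n".join(
    f'  {{ name: "{n}" data_type: {t} dims: [ 1 ] }}' for n, t in RAW_COLUMNS
)
# Map every raw column straight through to the preprocessing step.
input_map = "\n".join(
    f'      input_map {{ key: "{n}" value: "{n}" }}' for n, _ in RAW_COLUMNS
)

config_text = f"""name: "ensemble"
platform: "ensemble"
max_batch_size: 882352
input [
{inputs}
]
output [
  {{ name: "output__0" data_type: TYPE_FP32 dims: [ 1 ] }}
]
ensemble_scheduling {{
  step [
    {{
      model_name: "preprocessing"
      model_version: -1
{input_map}
      output_map {{ key: "OUTPUT" value: "preprocessed_data" }}
    }},
    {{
      model_name: "fil"
      model_version: -1
      input_map {{ key: "input__0" value: "preprocessed_data" }}
      output_map {{ key: "output__0" value: "output__0" }}
    }}
  ]
}}"""

with open(os.path.join(ENSEMBLE_DIR, "config.pbtxt"), "w") as f:
    f.write(config_text)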

Package the model repository and upload to Amazon S3

Finally, we end up with the following model repository directory structure, containing a Python preprocessing model and its dependencies along with the XGBoost FIL model and the model ensemble.

We package the directory and its contents as model.tar.gz for uploading to Amazon Simple Storage Service (Amazon S3). We have two options in this example: using a CPU-based instance or a GPU-based instance. A GPU-based instance is more suitable when you need higher processing power and want to use CUDA cores.

Create and upload the model package for a CPU-based instance (optimized for CPU) with the following code:

!tar --exclude='.ipynb_checkpoints' -czvf model-cpu.tar.gz -C model_cpu_repository .

model_uri_cpu = sagemaker_session.upload_data(
    path="model-cpu.tar.gz", key_prefix="triton-fil-mme-ensemble"
)

Create and upload the model package for a GPU-based instance (optimized for GPU) with the following code:

!tar --exclude='.ipynb_checkpoints' -czvf model-gpu.tar.gz -C model_gpu_repository .

model_uri_gpu = sagemaker_session.upload_data(
    path="model-gpu.tar.gz", key_prefix="triton-fil-mme-ensemble"
)

Create a SageMaker endpoint

We now have the model artifacts stored in an S3 bucket. In this step, we can also provide the additional environment variable SAGEMAKER_TRITON_DEFAULT_MODEL_NAME, which specifies the name of the model to be loaded by Triton. The value of this key should match the folder name in the model package uploaded to Amazon S3. This variable is optional in the case of a single model. In the case of ensemble models, this key has to be specified for Triton to start up in SageMaker.

Additionally, you can set SAGEMAKER_TRITON_BUFFER_MANAGER_THREAD_COUNT and SAGEMAKER_TRITON_THREAD_COUNT for optimizing the thread counts.

import time

# Set the primary path for where all the models are stored on the S3 bucket
model_location = f"s3://{s3_bucket}/triton-fil-mme-ensemble/"
# user_profile is assumed to be defined earlier in the notebook
sm_model_name = f"{user_profile}" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

container = {
    "Image": triton_image_uri,
    "ModelDataUrl": model_location,
    "Mode": "MultiModel",
    "Environment": {
        "SAGEMAKER_TRITON_DEFAULT_MODEL_NAME": "ensemble",
        # "SAGEMAKER_TRITON_DEFAULT_MODEL_NAME": model_uri.rsplit('/')[-2],
        # "SAGEMAKER_TRITON_LOG_VERBOSE": "true",
        # "SAGEMAKER_TRITON_SHM_DEFAULT_BYTE_SIZE": "20000000",
        # "SAGEMAKER_TRITON_SHM_GROWTH_BYTE_SIZE": "1048576"
    },
}

create_model_response = sm.create_model(
    ModelName=sm_model_name, ExecutionRoleArn=role, PrimaryContainer=container
)

We use the preceding model to create an endpoint configuration, where we can specify the type and number of instances we want in the endpoint.

endpoint_config_name = f"{user_profile}" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())

create_endpoint_config_response = sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "InstanceType": "ml.g4dn.xlarge",
            "InitialVariantWeight": 1,
            "InitialInstanceCount": 1,
            "ModelName": sm_model_name,
            "VariantName": "AllTraffic",
        }
    ],
)

We use this endpoint configuration to create a SageMaker endpoint and wait for the deployment to finish. With SageMaker MMEs, we have the option to host multiple ensemble models by repeating this process, but we stick with one deployment for this example:

endpoint_name = f"{studio_user_profile_output}-lab1-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
create_endpoint_response = sm.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)

The status will change to InService when the deployment is successful.
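
For reference, a minimal way to block until that happens, using the boto3 SageMaker client (sm) created earlier, is the built-in waiter:

waiter = sm.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
print(sm.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"])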

Invoke your model hosted on the SageMaker endpoint

After the endpoint is running, we can use some sample raw data to perform inference using JSON as the payload format. For the inference request format, Triton uses the KFServing community standard inference protocols. See the following code:

import json

data_infer = pd.read_csv("data_infer.csv")
STR_COLUMNS = [
    "Time",
    "Amount",
    "Zip",
    "MCC",
    "Merchant Name",
    "Use Chip",
    "Merchant City",
    "Merchant State",
    "Errors?",
]

batch_size = len(data_infer)

payload = {}
payload["inputs"] = []
data_dict = {}
for col_name in data_infer.columns:
    data_dict[col_name] = {}
    data_dict[col_name]["name"] = col_name
    if col_name in STR_COLUMNS:
        data_dict[col_name]["data"] = data_infer[col_name].astype(str).tolist()
        data_dict[col_name]["datatype"] = "BYTES"
    else:
        data_dict[col_name]["data"] = data_infer[col_name].astype("float32").tolist()
        data_dict[col_name]["datatype"] = "FP32"
    data_dict[col_name]["shape"] = [batch_size, 1]
    payload["inputs"].append(data_dict[col_name])
# Invoke the endpoint
# Change the TargetModel to either CPU or GPU
response = client.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",
    Body=json.dumps(payload),
    TargetModel="model-cpu.tar.gz",
)

#Read the results
response_body = json.loads(response["Body"].read().decode("utf8"))
predictions = response_body["outputs"][0]["data"]

CLASS_LABELS = ["NOT FRAUD", "FRAUD"]
predictions = [CLASS_LABELS[int(idx)] for idx in predictions]
print(predictions)

The notebook referred to in this post can be found in the GitHub repository.

Best practices

In addition to the options to fine-tune the settings of the FIL backend we mentioned earlier, data scientists can also ensure that the input data for the backend is optimized for processing by the engine. Whenever possible, provide input data in row-major format to the GPU array. Other formats require internal conversion, which takes up cycles and decreases performance.

Due to the way FIL data structures are maintained in GPU memory, be mindful of the tree depth. The deeper the tree depth, the larger your GPU memory footprint will be.

Use the instance_group_count parameter to add worker processes and increase the throughput of the FIL backend, which will result in larger CPU and GPU memory consumption. In addition, consider SageMaker-specific variables that are available to increase the throughput, such as HTTP threads, HTTP buffer size, batch size, and max delay.
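
For example, a config.pbtxt fragment that runs two copies of the model on GPU might look like this sketch (the count of 2 is illustrative):

instance_group [
  {
    count: 2
    kind: KIND_GPU
  }
]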

Conclusion

In this post, we dove deep into the FIL backend that Triton Inference Server supports on SageMaker. This backend provides both CPU and GPU acceleration of your tree-based models, such as the popular XGBoost algorithm. There are many options to consider to get the best performance for inference, such as batch sizes, data input formats, and other factors that can be tuned to meet your needs. SageMaker allows you to use this capability with single and multi-model endpoints to balance performance and cost savings.

We encourage you to take the information in this post and see if SageMaker can meet your hosting needs to serve tree-based models, meeting your requirements for cost reduction and workload performance.

The notebook referenced in this post can be found in the SageMaker examples GitHub repository. Furthermore, you can find the latest documentation on the FIL backend on GitHub.


About the Authors

Raghu Ramesha is a Senior ML Solutions Architect with the Amazon SageMaker Service team. He focuses on helping customers build, deploy, and migrate ML production workloads to SageMaker at scale. He specializes in machine learning, AI, and computer vision domains, and holds a master’s degree in Computer Science from UT Dallas. In his free time, he enjoys traveling and photography.

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences,  and staying up to date with the latest technology trends.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including NLP and computer vision domains. He helps customers achieve high-performance model inference on Amazon SageMaker.

Jiahong Liu is a Solution Architect on the Cloud Service Provider team at NVIDIA. He assists clients in adopting machine learning and AI solutions that leverage NVIDIA accelerated computing to address their training and inference challenges. In his leisure time, he enjoys origami, DIY projects, and playing basketball.

Kshitiz Gupta is a Solutions Architect at NVIDIA. He enjoys educating cloud customers about the GPU AI technologies NVIDIA has to offer and assisting them with accelerating their machine learning and deep learning applications. Outside of work, he enjoys running, hiking and wildlife watching.

Read More

Latest NVIDIA Graphics Research Advances Generative AI’s Next Frontier


NVIDIA today introduced a wave of cutting-edge AI research that will enable developers and artists to bring their ideas to life — whether still or moving, in 2D or 3D, hyperrealistic or fantastical.

Around 20 NVIDIA Research papers advancing generative AI and neural graphics — including collaborations with over a dozen universities in the U.S., Europe and Israel — are headed to SIGGRAPH 2023, the premier computer graphics conference, taking place Aug. 6-10 in Los Angeles.

The papers include generative AI models that turn text into personalized images; inverse rendering tools that transform still images into 3D objects; neural physics models that use AI to simulate complex 3D elements with stunning realism; and neural rendering models that unlock new capabilities for generating real-time, AI-powered visual details.

Innovations by NVIDIA researchers are regularly shared with developers on GitHub and incorporated into products, including the NVIDIA Omniverse platform for building and operating metaverse applications and NVIDIA Picasso, a recently announced foundry for custom generative AI models for visual design. Years of NVIDIA graphics research helped bring film-style rendering to games, like the recently released Cyberpunk 2077 Ray Tracing: Overdrive Mode, the world’s first path-traced AAA title.

The research advancements presented this year at SIGGRAPH will help developers and enterprises rapidly generate synthetic data to populate virtual worlds for robotics and autonomous vehicle training. They’ll also enable creators in art, architecture, graphic design, game development and film to more quickly produce high-quality visuals for storyboarding, previsualization and even production.

AI With a Personal Touch: Customized Text-to-Image Models

Generative AI models that transform text into images are powerful tools to create concept art or storyboards for films, video games and 3D virtual worlds. Text-to-image AI tools can turn a prompt like “children’s toys” into nearly infinite visuals a creator can use for inspiration — generating images of stuffed animals, blocks or puzzles.

However, artists may have a particular subject in mind. A creative director for a toy brand, for example, could be planning an ad campaign around a new teddy bear and want to visualize the toy in different situations, such as a teddy bear tea party. To enable this level of specificity in the output of a generative AI model, researchers from Tel Aviv University and NVIDIA have two SIGGRAPH papers that enable users to provide image examples that the model quickly learns from.

One paper describes a technique that needs a single example image to customize its output, accelerating the personalization process from minutes to roughly 11 seconds on a single NVIDIA A100 Tensor Core GPU, more than 60x faster than previous personalization approaches.

A second paper introduces a highly compact model called Perfusion, which takes a handful of concept images to allow users to combine multiple personalized elements — such as a specific teddy bear and teapot — into a single AI-generated visual:

Examples of a generative AI model personalizing text-to-image output based on user-provided images

Serving in 3D: Advances in Inverse Rendering and Character Creation 

Once a creator comes up with concept art for a virtual world, the next step is to render the environment and populate it with 3D objects and characters. NVIDIA Research is inventing AI techniques to accelerate this time-consuming process by automatically transforming 2D images and videos into 3D representations that creators can import into graphics applications for further editing.

A third paper created with researchers at the University of California, San Diego, discusses tech that can generate and render a photorealistic 3D head-and-shoulders model based on a single 2D portrait — a major breakthrough that makes 3D avatar creation and 3D video conferencing accessible with AI. The method runs in real time on a consumer desktop, and can generate a photorealistic or stylized 3D telepresence using only conventional webcams or smartphone cameras.

A fourth project, a collaboration with Stanford University, brings lifelike motion to 3D characters. The researchers created an AI system that can learn a range of tennis skills from 2D video recordings of real tennis matches and apply this motion to 3D characters. The simulated tennis players can accurately hit the ball to target positions on a virtual court, and even play extended rallies with other characters.

Beyond the test case of tennis, this SIGGRAPH paper addresses the difficult challenge of producing 3D characters that can perform diverse skills with realistic movement — without the use of expensive motion-capture data.

 

Not a Hair Out of Place: Neural Physics Enables Realistic Simulations

Once a 3D character is generated, artists can layer in realistic details such as hair — a complex, computationally expensive challenge for animators.

Humans have an average of 100,000 hairs on their heads, with each reacting dynamically to an individual’s motion and the surrounding environment. Traditionally, creators have used physics formulas to calculate hair movement, simplifying or approximating its motion based on the resources available. That’s why virtual characters in a big-budget film sport much more detailed heads of hair than real-time video game avatars.

A fifth paper showcases a method that can simulate tens of thousands of hairs in high resolution and in real time using neural physics, an AI technique that teaches a neural network to predict how an object would move in the real world.

The team’s novel approach for accurate simulation of full-scale hair is specifically optimized for modern GPUs. It offers significant performance leaps compared to state-of-the-art, CPU-based solvers, reducing simulation times from multiple days to merely hours — while also boosting the quality of hair simulations possible in real time. This technique finally enables both accurate and interactive physically based hair grooming.

Neural Rendering Brings Film-Quality Detail to Real-Time Graphics 

After an environment is filled with animated 3D objects and characters, real-time rendering simulates the physics of light reflecting through the virtual scene. Recent NVIDIA research shows how AI models for textures, materials and volumes can deliver film-quality, photorealistic visuals in real time for video games and digital twins.

NVIDIA invented programmable shading over two decades ago, enabling developers to customize the graphics pipeline. In these latest neural rendering inventions, researchers extend programmable shading code with AI models that run deep inside NVIDIA’s real-time graphics pipelines.

In a sixth SIGGRAPH paper, NVIDIA will present neural texture compression that delivers up to 16x more texture detail without taking additional GPU memory. Neural texture compression can substantially increase the realism of 3D scenes, as seen in the image below, which demonstrates how neural-compressed textures (right) capture sharper detail than previous formats, where the text remains blurry (center).

Three-pane image showing a page of text, a zoomed-in version with blurred text, and a zoomed-in version with clear text.
Neural texture compression (right) provides up to 16x more texture detail than previous texture formats without using additional GPU memory.

A related paper announced last year is now available in early access as NeuralVDB, an AI-enabled data compression technique that decreases by 100x the memory needed to represent volumetric data — like smoke, fire, clouds and water.

NVIDIA also released today more details about neural materials research that was shown in the most recent NVIDIA GTC keynote. The paper describes an AI system that learns how light reflects from photoreal, many-layered materials, reducing the complexity of these assets down to small neural networks that run in real time, enabling up to 10x faster shading.

The level of realism can be seen in this neural-rendered teapot, which accurately represents the ceramic, the imperfect clear-coat glaze, fingerprints, smudges and even dust.

Rendered close-up images of a ceramic blue teapot with gold handle
The neural material model learns how light reflects from the many-layered, photoreal reference materials.

More Generative AI and Graphics Research

These are just the highlights — read more about all the NVIDIA papers at SIGGRAPH. NVIDIA will also present six courses, four talks and two Emerging Technology demos at the conference, with topics including path tracing, telepresence and diffusion models for generative AI.

NVIDIA Research has hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.

Read More

AI self-play for algorithm design


A self-play pipeline for a language model (LM) to improve itself in a fully automatic manner. First, the LM generates novel puzzles based on a training set of handwritten puzzles. Then, the LM attempts to solve each of these puzzles 100 times. In Step 3, the computer (specifically a Python interpreter) filters the candidate solutions for correctness. Finally, the LM is improved by further training on these verified correct solutions to synthetic puzzles, and the process repeats. This process leads to significant improvements as measured on held-out test puzzles, which were also handwritten.

Efficient algorithms are crucial for many purposes, including reducing energy consumption in digital devices. While humans outperform AI systems at designing such algorithms, we show how to improve AI programming abilities using self-play, a technique that has helped AI systems dominate in games such as chess and Go.

Designing fast and accurate algorithms requires high-level abstract reasoning, which remains difficult for AI systems. Our approach involves having the AI design and solve its own programming challenges, enabling practice on millions of artificial challenges and exploration of problem types not found in public repositories. We detail our work in a new paper, “Language Models Can Teach Themselves to Program Better,” which we’re presenting at the 2023 International Conference on Learning Representations (ICLR).


The key challenge and our solution

How can an AI system generate novel algorithmic programming problems without knowing the solution?

Our approach uses programming puzzles introduced by Microsoft Research in 2021. These puzzles—known in complexity theory as the class of “NP” decision problems—are easy to check for correctness (no hidden answer key) but often difficult to solve. In this way, they’re like a Rubik’s cube, where it’s trivial to recognize a solution but hard to find one. Three examples are illustrated below: a novel string challenge and the classic Towers of Hanoi and factoring problems. Programming puzzles can range from trivial to major open problems in algorithms and mathematics, and solving them requires all the major algorithmic techniques, such as dynamic programming and greedy algorithms. However, each puzzle checks only a single input, unlike standard algorithm problems, which require a solution that scales efficiently for all inputs and are therefore much harder to test.

Programming puzzle examples

Can computers generate valuable, novel challenges?

Surprisingly, language models such as Codex and GPT-Neo can indeed create novel puzzles when prompted to generate “more like these” from a set of example puzzles without solutions. You may wonder what makes a challenge good. Instead of focusing on how interesting a challenge is, we prioritize useful challenges. Our evaluation has the language model generate, solve, and train on its own puzzles; then we assess whether the training improved its performance on a hidden test set of puzzles. (By now, solutions to our puzzles may have leaked into AI training sets, but with the help of champion competitive programmers, we have created a secret test set that remains unpublished, which can be used for uncontaminated evaluation.) In our experiments with small- to medium-sized language models—with a few billion parameters, much fewer than the latest GPT models—self-training more than doubled success rates.

Risks and limitations

This research was conducted prior to GPT-4’s release. While we believe similar techniques may help GPT-4 self-improve in programming, this is an active area of research as we better understand the capabilities and limitations of these models. One key limitation of puzzles is that solutions might only work for the specific instance provided. However, this limitation also serves as an advantage in terms of human-AI alignment. Unlike other AI challenges with inherent ambiguities that could lead to unintended consequences if objectives are imprecisely defined (for example, an AI-designed math-tutor app that may become unintentionally addictive), our programming puzzles encompass exactly those standalone problems that can be perfectly verified for meeting a precise objective. As there remains a risk that any work that substantially advances AI programming capabilities can be used in other systems and with unintended consequences, we continue to encourage taking great care before deploying systems with artificially generated code.

Examples of programming puzzles for AI self-play

Each puzzle is specified by a short Python program that checks a possible answer. Each solution is a Python program that outputs an answer in a limited amount of time.

Example 1: Towers of Hanoi

A Towers of Hanoi puzzle in three steps: the first a picture with the puzzle’s seven disks on the first tower, the second a picture with the disks split among the three towers, and the third a picture of all the disks on the last tower.

The goal of the well-known Towers of Hanoi puzzle is to move all the disks from the first tower to the last tower, one by one, without ever putting a bigger disk on top of a smaller disk. It’s easy to check that a solution is correct but hard to find a correct solution. Even though the number of steps required to solve it is exponential in the number of disks, there’s a solution in the form of a short program that is often used to teach recursion. The clever solution program that outputs the moves is easier to find than the sequence of moves itself. Here are the programming puzzle and solution:
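
The original post shows the puzzle and solution as an image; the following is a hedged sketch of what they might look like in this format, where sat checks a proposed list of moves and sol produces one with the classic recursion (the disk count and peg numbering are illustrative):

from typing import List


def sat(moves: List[List[int]], num_disks: int = 8) -> bool:
    """Check that moves (a list of [source_peg, target_peg] pairs) transfers all
    disks from peg 0 to peg 2 without placing a bigger disk on a smaller one."""
    state = [list(range(num_disks, 0, -1)), [], []]  # peg 0 starts with all disks
    for src, dst in moves:
        disk = state[src].pop()
        assert not state[dst] or state[dst][-1] > disk, "illegal move"
        state[dst].append(disk)
    return state[2] == list(range(num_disks, 0, -1))


def sol(num_disks: int = 8) -> List[List[int]]:
    """The classic recursive solution, returned as a list of moves."""
    def hanoi(n, src, aux, dst):
        if n == 0:
            return []
        return hanoi(n - 1, src, dst, aux) + [[src, dst]] + hanoi(n - 1, aux, src, dst)
    return hanoi(num_disks, 0, 1, 2)


assert sat(sol())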

Example 2: String challenge

This concise puzzle perplexes AI systems, although humans find it simple. The puzzle requires a string with 1,000 “A” characters but no two consecutive A’s. Most programmers devise solutions like “ABABAB …” (1,000 times), generated by a compact Python solution like the one sketched below. In contrast, AI systems usually need multiple attempts. Fortunately, AI systems can easily verify their attempts by running the checking program. This puzzle exemplifies a straightforward, unique problem specifically created for our dataset.
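
A hedged sketch of this puzzle in the same format (the post shows the actual code as an image, so the exact wording may differ):

def sat(s: str) -> bool:
    """The string must contain 1,000 'A' characters but no two consecutive A's."""
    return s.count("A") == 1000 and "AA" not in s


def sol() -> str:
    return "AB" * 1000


assert sat(sol())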

Example 3: Integer factorization

Another classic example is integer factorization. This puzzle requires a factor of a relatively small number, so it can be solved quickly by a simple loop (a sketch follows this paragraph). However, our dataset also contains factoring challenges like the 309-digit RSA Factoring Challenge number, which was published in 1991 along with a $100,000 prize. The 309-digit number was never factored, and the challenge has since ended.
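
A hedged sketch of a small factoring puzzle in the same format (the target value below is illustrative: the Fermat number 2^32 + 1, whose smallest factor 641 is found quickly by trial division):

def sat(d: int, n: int = 4294967297) -> bool:
    """d must be a non-trivial factor of n."""
    return 1 < d < n and n % d == 0


def sol(n: int = 4294967297) -> int:
    # Simple trial division works because n is relatively small.
    return next(d for d in range(2, n) if n % d == 0)


assert sat(sol())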

The post AI self-play for algorithm design appeared first on Microsoft Research.

Read More

Bring your own ML model into Amazon SageMaker Canvas and generate accurate predictions


Machine learning (ML) helps organizations generate revenue, reduce costs, mitigate risk, drive efficiencies, and improve quality by optimizing core business functions across multiple business units such as marketing, manufacturing, operations, sales, finance, and customer service. With AWS ML, organizations can accelerate the value creation from months to days. Amazon SageMaker Canvas is a visual, point-and-click service that allows business analysts to generate accurate ML predictions without writing a single line of code or requiring ML expertise. You can use models to make predictions interactively and for batch scoring on bulk datasets.

In this post, we showcase architectural patterns on how business teams can use ML models built anywhere by generating predictions in Canvas and achieve effective business outcomes.

This integration of model development and sharing creates a tighter collaboration between business and data science teams and lowers time to value. Business teams can use existing models built by their data scientists or other departments to solve a business problem instead of rebuilding models in separate environments.

Finally, business analysts can import shared models into Canvas and generate predictions before deploying to production with just a few clicks.

Solution overview

The following figure describes three different architecture patterns to demonstrate how data scientists can share models with business analysts, who can then directly generate predictions from those models in the visual interface of Canvas:

Prerequisites

To train and build your model using SageMaker and bring your model into Canvas, complete the following prerequisites:

  1. If you don’t already have a SageMaker domain and Studio user, set up and onboard a Studio user to a SageMaker domain.
  2. Enable and set up Canvas base permissions for your users and grant users permissions to collaborate with Studio.
  3. You must have a trained model from Autopilot, JumpStart, or the model registry. For any model that you’ve built outside of SageMaker, you must register your model in the model registry before importing it into Canvas.

Now let’s assume the role of a data scientist who is looking to train, build, deploy, and share ML models with a business analyst for each of these three architectural patterns.

Use Autopilot and Canvas

Autopilot automates key tasks of an automatic ML (AutoML) process like exploring data, selecting the relevant algorithm for the problem type, and then training and tuning it. All of this can be achieved while allowing you to maintain full control and visibility on the dataset. Autopilot automatically explores different solutions to find the best model, and users can either iterate on the ML model or directly deploy the model to production with one click.

In this example, we use a customer churn synthetic dataset from the telecom domain and are tasked with identifying customers that are potentially at risk of churning. Complete the following steps to use Autopilot AutoML to build, train, deploy, and share an ML model with a business analyst:

  1. Download the dataset, upload it to an Amazon Simple Storage Service (Amazon S3) bucket, and make a note of the S3 URI.
  2. On the Studio console, choose AutoML in the navigation pane.
  3. Choose Create AutoML experiment.
  4. Specify the experiment name (for this post, Telecom-Customer-Churn-AutoPilot), S3 data input, and output location.
  5. Set the target column as churn.
  6. In the deployment settings, you can enable the auto deploy option to create an endpoint that deploys your best model and runs inference on the endpoint.

For more information, refer to Create an Amazon SageMaker Autopilot experiment.

  7. Choose your experiment, then select your best model and choose Share model.
  8. Add a Canvas user and choose Share to share the model.

(Note: You can’t share a model with the same Canvas user name that you use for the Studio login. For example, Studio user-A can’t share a model with Canvas user-A, but user-A can share a model with user-B, so choose different users for model sharing.)

For more information, refer to Studio users: Share a model to SageMaker Canvas.
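If you prefer working from code instead of the Studio console, the training part of these steps can also be expressed with the SageMaker Python SDK. The following is a minimal sketch, not the exact flow from this post: the S3 paths and candidate limit are placeholders, and the model-sharing step is still done from the Studio console.

import sagemaker
from sagemaker.automl.automl import AutoML

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # or an explicit IAM role ARN

# Launch an Autopilot (AutoML) job on the churn dataset; S3 paths are placeholders.
automl = AutoML(
    role=role,
    target_attribute_name="churn",
    output_path="s3://<your-bucket>/autopilot-output/",
    max_candidates=10,
    sagemaker_session=session,
)
automl.fit(
    inputs="s3://<your-bucket>/churn/train.csv",
    job_name="telecom-customer-churn-autopilot",
    wait=False,
)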

Use JumpStart and Canvas

JumpStart is an ML hub that provides pre-trained, open-source models for a wide range of ML use cases like fraud detection, credit risk prediction, and product defect detection. You can deploy more than 300 pre-trained models for tabular, vision, text, and audio data.

For this post, we use a LightGBM regression pre-trained model from JumpStart. We train the model on a custom dataset and share the model with a Canvas user (business analyst). The pre-trained model can be deployed to an endpoint for inference. JumpStart provides an example notebook to access the model after it is deployed.

In this example, we use the abalone dataset. The dataset contains eight physical measurements, such as length, diameter, and height, that are used to predict the age of abalone (a regression problem).

  1. Download the abalone dataset from Kaggle.
  2. Create an S3 bucket and upload the train, validation, and custom header datasets.
  3. On the Studio console, under SageMaker JumpStart in the navigation pane, choose Models, notebooks, solutions.
  4. Under Tabular Models, choose LightGBM Regression.
  5. Under Train Model, specify the S3 URIs for the training, validation, and column header datasets.
  6. Choose Train.
  7. In the navigation pane, choose Launched JumpStart assets.
  8. On the Training jobs tab, choose your training job.
  9. On the Share menu, choose Share to Canvas.
  10. Choose the Canvas users to share with, specify the model details, and choose Share.

For more information, refer to Studio users: Share a model to SageMaker Canvas.

Use SageMaker model registry and Canvas

With SageMaker model registry, you can catalog models for production, manage model versions, associate metadata, manage the approval status of a model, deploy models to production, and automate model deployment with CI/CD.
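For a model built outside of SageMaker (see the prerequisites), registration typically means creating a model package group and then registering a model version into it. The following boto3 sketch illustrates the idea; the group name, container image URI, and artifact path are placeholders:

import boto3

sm_client = boto3.client("sagemaker")

# Create a model package group once; model versions are then registered into it.
sm_client.create_model_package_group(
    ModelPackageGroupName="abalone-models",
    ModelPackageGroupDescription="Regression models that predict abalone age",
)

# Register a model version with its inference container and artifacts.
sm_client.create_model_package(
    ModelPackageGroupName="abalone-models",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [
            {
                "Image": "<inference-image-uri>",
                "ModelDataUrl": "s3://<your-bucket>/abalone/model.tar.gz",
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)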

Let’s assume the role of a data scientist. For this example, you’re building an end-to-end ML project that includes data preparation, model training, model hosting, model registry, and model sharing with a business analyst. Optionally, for data preparation and preprocessing or postprocessing steps, you can use Amazon SageMaker Data Wrangler and an Amazon SageMaker Processing job. In this example, we use the abalone dataset downloaded from LIBSVM. The target variable is the age of abalone.

  1. In Studio, clone the GitHub repo.
  2. Complete the steps listed in the README file.
  3. On the Studio console, under Models in the navigation pane, choose Model registry.
  4. Choose the model sklearn-reg-ablone.
  5. Share model version 1 from the model registry to Canvas.
  6. Choose the Canvas users to share with, specify the model details, and choose Share.

For instructions, refer to the Model Registry section in Studio users: Share a model to SageMaker Canvas.

Manage shared models

After you share the model using any of the preceding methods, you can go to the Models section in Studio and review all shared models. In the following screenshot, we see three different models shared by a Studio user (data scientist) with different Canvas users (business teams).

Import a shared model and make predictions with Canvas

Let’s assume the role of a business analyst and log in to Canvas with your Canvas user.

When a data scientist or Studio user shares a model with a Canvas user, you receive a notification within the Canvas application that a Studio user has shared a model with you. In the Canvas application, the notification is similar to the following screenshot.

You can choose View update to see the shared model, or you can go to the Models page in the Canvas application to discover all the models that have been shared with you. The model import from Studio can take up to 20 minutes.

After importing the model, you can view its metrics and generate real-time predictions with what-if analysis or batch predictions.

Considerations

Keep in mind the following when sharing models with Canvas:

  • You store training and validation datasets in Amazon S3, and the S3 URIs are passed to Canvas with AWS Identity and Access Management (IAM) permissions.
  • Provide the target column to Canvas; otherwise, the first column is used by default.
  • For Canvas to parse inference data, the endpoint must accept either text/csv or application/json as the content type.
  • Canvas doesn’t support multiple container or inference pipelines.
  • A data schema is provided to Canvas if no headers are provided in the training and validation datasets. By default, the JumpStart platform doesn’t provide headers in the training and validation datasets.
  • With JumpStart, the training job needs to be complete before you can share it with Canvas.

Refer to Limitations and troubleshooting to help you troubleshoot any issues you encounter when sharing models.

Clean up

To avoid incurring future charges, delete or shut down the resources you created while following this post. Refer to Logging out of Amazon SageMaker Canvas for more details. Shut down the individual resources, including notebooks, terminals, kernels, apps, and instances. For more information, refer to Shut Down Resources. Delete the model version, the SageMaker endpoint and its resources, the Autopilot experiment resources, and the S3 bucket.

Conclusion

Studio allows data scientists to share ML models with business analysts in a few simple steps. Business analysts can benefit from ML models already built by data scientists to solve business problems instead of creating a new model in Canvas. However, it might be difficult to use these models outside the environments in which they are built due to technical requirements and manual processes to import models. This often forces users to rebuild ML models, resulting in the duplication of effort and additional time and resources. Canvas removes these limitations so you can generate predictions in Canvas with models that you have trained anywhere. By using the three patterns illustrated in this post, you can register ML models in the SageMaker model registry, which is a metadata store for ML models, and import them into Canvas. Business analysts can then analyze and generate predictions from any model in Canvas.

To learn more about using SageMaker services, check out the following resources:

If you have questions or suggestions, leave a comment.


About the authors

Aman Sharma is a Senior Solutions Architect with AWS. He works with start-ups, small and medium businesses, and enterprise customers across the APJ region, bringing more than 19 years of experience in consulting, architecting, and solutioning. He is passionate about democratizing AI and ML and helping customers design their data and ML strategies. Outside work, he likes to explore nature and wildlife.

Zichen Nie is a Senior Software Engineer at AWS SageMaker who led the Bring Your Own Model to SageMaker Canvas project last year. She has been working at Amazon for more than 7 years and has experience in both Amazon Supply Chain Optimization and AWS AI services. She enjoys Barre workouts and music after work.

Read More

Self-Supervised Temporal Analysis of Spatiotemporal Data

*=Equal Contributors
There exists a correlation between geospatial activity temporal patterns and type of land use. A novel self-supervised approach is proposed to survey the landscape based on activity time series: the time series signal is transformed to the frequency domain and compressed into embeddings by a contractive autoencoder, which preserves the cyclic temporal patterns observed in the time series. The embeddings are input to a segmentation neural network for binary classification. Experiments show that the temporal embeddings are effective in classifying residential areas and commercial areas. Apple Machine Learning Research

Question answering using Retrieval Augmented Generation with foundation models in Amazon SageMaker JumpStart

Question answering using Retrieval Augmented Generation with foundation models in Amazon SageMaker JumpStart

Today, we announce the availability of sample notebooks that demonstrate question answering tasks using a Retrieval Augmented Generation (RAG)-based approach with large language models (LLMs) in Amazon SageMaker JumpStart. Text generation using RAG with LLMs enables you to generate domain-specific text outputs by supplying specific external data as part of the context fed to LLMs.

JumpStart is a machine learning (ML) hub that can help you accelerate your ML journey. JumpStart provides many pre-trained language models called foundation models that can help you perform tasks such as article summarization, question answering, conversation generation, and image generation.

In this post, we describe RAG and its advantages, and demonstrate how to quickly get started by using a sample notebook to solve a question answering task using a RAG implementation with LLMs in JumpStart. We demonstrate two approaches:

  • How to solve the problem with the open-sourced LangChain library and Amazon SageMaker endpoints in a few lines of code
  • How to use the SageMaker KNN algorithm to perform semantic search on large-scale data using SageMaker endpoints

LLMs and constraints

LLMs are trained on large amounts of unstructured data and are great at general text generation. LLMs can store factual knowledge by training their parameters on a large corpus of natural language data.

There are a few limitations of using off-the-shelf pre-trained LLMs:

  • They’re usually trained offline, making the model agnostic to the latest information (for example, a chatbot trained on data from 2011–2018 has no information about COVID-19).
  • They make predictions by only looking up information stored in their parameters, leading to inferior interpretability.
  • They’re mostly trained on general domain corpora, making them less effective on domain-specific tasks. There are scenarios when you want models to generate text based on specific data rather than generic data. For example, a health insurance company may want their question answering bot to answer questions using the latest information stored in their enterprise document repository or database, so the answers are accurate and reflect their unique business rules.

Currently, there are two popular ways to reference specific data in LLMs:

  • Insert data as context in the model prompt as a way to provide the information that the model can use while creating the result
  • Fine-tune the model by providing a file with prompt and completion pairs

The challenge of the context-based approach is that models come with limited context size, and including all the documents as context may not fit into the allowed context size of the model. Depending on the model used, there may also be additional cost for larger context.

For the fine-tuning approach, generating correctly formatted information is time consuming and involves cost. In addition, if the external data used for fine-tuning changes frequently, frequent fine-tuning and retraining are needed to create accurate results. Frequent training impacts speed to market and adds to the overall solution cost.

To demonstrate these constraints, we used an LLM Flan T5 XXL model and asked the following question:

question = "Which instances can I use with Managed Spot Training in SageMaker?"

We get the following response:

"""For model: huggingface-text2text-flan-t5-xxl, the generated output is: 
the Managed Spot Training is a subscriptions product available for the following instances: Data Science Virtual Machine (DSVM), DSVM High, and DSVM Low.
"""

As you can see, the response is not accurate. The correct answer is that Managed Spot Training is supported on all SageMaker instances.

We tried the same question but with additional context passed along with the question:

question + context + prompt = """
Answer based on context:

Managed Spot Training can be used with all instances supported in Amazon SageMaker. Managed Spot Training is supported in all AWS Regions where Amazon SageMaker is currently available.

Which instances can I use with Managed Spot Training in SageMaker?
"""

We got the following response this time:

"""For model: huggingface-text2text-flan-t5-xxl, the generated output is: 
instances supported in Amazon SageMaker
"""

The response is better but still not accurate. However, in real production use cases, users may send various queries, and to provide accurate responses, you might have to include all or most of the available information as part of a static context. With that approach, we may hit the context size limitation, because even information that isn’t relevant to the question asked is sent as part of the context. This is where you can use the RAG-based approach to create scalable and accurate responses for a user’s queries.

Retrieval Augmented Generation

To solve the constraints we discussed, we can use Retrieval Augmented Generation (RAG) with LLMs. RAG retrieves data from outside the language model (non-parametric) and augments the prompts by adding the relevant retrieved data to the context. RAG models were introduced by Lewis et al. in 2020 as models in which the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.

In RAG, the external data can come from multiple data sources, such as a document repository, databases, or APIs. The first step is to convert the documents and the user query into a format in which they can be compared, so that a relevancy search can be performed. To make the formats comparable, the document collection (knowledge library) and the user-submitted query are converted to numerical representations using embedding language models. The embeddings are essentially numerical representations of concepts in text. Next, based on the embedding of the user query, the relevant text is identified in the document collection by a similarity search in the embedding space. Then the relevant text found by the search is appended to the prompt provided by the user, adding it to the context. The prompt is now sent to the LLM, and because the context contains relevant external data along with the original prompt, the model output is relevant and accurate.
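To make the retrieval step concrete, the following toy sketch performs the similarity search with plain NumPy cosine similarity; the implementations later in this post use FAISS and the SageMaker KNN algorithm instead:

import numpy as np

def top_k_documents(query_embedding, doc_embeddings, k=3):
    """Return the indexes of the k documents most similar to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    d = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity between the query and each document
    return np.argsort(-scores)[:k]

The documents selected this way are appended to the user's prompt as context before the LLM is called.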

To maintain up-to-date information for the reference documents, you can asynchronously update the documents and update embedding representation of the documents. This way, the updated documents will be used to generate answers for future questions to provide accurate responses.

The following diagram shows the conceptual flow of using RAG with LLMs.

In this post, we demonstrate how to implement a question answering application with the following steps:

  1. Generate an embedding for each document in the knowledge library with a SageMaker GPT-J-6B embedding model.
  2. Identify the top K most relevant documents based on the user query.
    1. For your query, generate the embedding of the query using the same embedding model.
    2. Search the indexes of the top K most relevant documents in the embedding space using an in-memory FAISS search.
    3. Use the indexes to retrieve the corresponding documents.
  3. Use the retrieved relevant documents as context with the prompt and question, and send them to the SageMaker LLM to generate the response.

We demonstrate the following approaches:

  • How to solve a question answering task with SageMaker LLMs and embedding endpoints and the open-sourced library LangChain in a few lines of code. In particular, we use two SageMaker endpoints for the LLM (Flan T5 XXL) and embedding model (GPT-J 6B), and the vector database used is in-memory FAISS. For more details, see the GitHub repo.
  • If the in-memory FAISS doesn’t fit into your large dataset, we provide you with a SageMaker KNN algorithm to perform the semantic search, which also uses FAISS as the underlying searching algorithm. For details, see the GitHub repo.

The following diagram depicts the solution architecture.

JumpStart RAG-based implementation notebook with LangChain

LangChain is an open-source framework for developing applications powered by language models. LangChain provides a generic interface for many different LLMs. It also makes it easier for developers to chain various LLMs together and build powerful applications. LangChain provides a standard interface for memory and a collection of memory implementations to persist the state between calls of agents or chains.

LangChain has many other utility features that can add to developer productivity. These features include a prompt template that helps customize prompts using variables in the prompt template, agents to build end-to-end applications, indexes for search and retrieval steps of the chain, and much more. To further explore LangChain capabilities, refer to the LangChain documentation.

Create the LLM model

As a first step, deploy the JumpStart LLM model of your choice. In this demo, we use a JumpStart Flan T5 XXL model endpoint. For deployment instructions, refer to Zero-shot prompting for the Flan-T5 foundation model in Amazon SageMaker JumpStart. Based on your use case, you can also deploy other instruction-tuned models like Flan T5 UL2 or BloomZ 7B1. For details, see the example notebook.
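If you prefer deploying from code, newer versions of the SageMaker Python SDK provide a JumpStartModel class. The following is a minimal sketch using the same model ID that appears in this post, not the exact deployment code from the referenced tutorial:

from sagemaker.jumpstart.model import JumpStartModel

# Deploy the Flan T5 XXL JumpStart model to a real-time endpoint.
llm_model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xxl")
llm_predictor = llm_model.deploy()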

To use the SageMaker LLM endpoint with LangChain, we use langchain.llms.sagemaker_endpoint.SagemakerEndpoint, which abstracts the SageMaker LLM endpoint. We need to perform a transformation for the request and response payload as shown in the following code for the LangChain SageMaker integration. Note that you may need to adjust the code in ContentHandler based on the content_type and accepts format of the LLM model that you choose to use.

import json

# ContentHandlerBase and SagemakerEndpoint come from the LangChain version used in the notebook.
from langchain.llms.sagemaker_endpoint import ContentHandlerBase, SagemakerEndpoint

class ContentHandler(ContentHandlerBase):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs={}) -> bytes:
        # Wrap the prompt and any generation parameters in the JSON payload the endpoint expects.
        input_str = json.dumps({"text_inputs": prompt, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # Extract the generated text from the endpoint's JSON response.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["generated_texts"][0]

content_handler = ContentHandler()

# _MODEL_CONFIG_, aws_region, and parameters are defined earlier in the notebook.
sm_llm = SagemakerEndpoint(
    endpoint_name=_MODEL_CONFIG_["huggingface-text2text-flan-t5-xxl"]["endpoint_name"],
    region_name=aws_region,
    model_kwargs=parameters,
    content_handler=content_handler,
)
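Once constructed, the wrapper can be used like any other LangChain LLM. A minimal usage sketch, reusing the sample question from earlier:

# The SagemakerEndpoint wrapper is callable and returns the generated text as a string.
print(sm_llm("Which instances can I use with Managed Spot Training in SageMaker?"))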

Create the embedding model

Next, we need to get our embedding model ready. We deploy the GPT-J 6B model as the embedding model. If you’re using a JumpStart embedding model, you need to customize the LangChain SageMaker endpoint embedding class and transform the model request and response to integrate with LangChain. For a detailed implementation, refer to the GitHub repo.

# SagemakerEndpointEmbeddingsJumpStart is the customized LangChain embedding class from the GitHub repo;
# it transforms requests and responses for the GPT-J 6B embedding endpoint.
embeddings = SagemakerEndpointEmbeddingsJumpStart(
    endpoint_name=_MODEL_CONFIG_["huggingface-textembedding-gpt-j-6b"]["endpoint_name"],
    region_name=aws_region,
    content_handler=content_handler,
)
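Assuming the customized class follows the standard LangChain Embeddings interface, it can be exercised directly; a minimal sketch:

# embed_query returns the embedding vector for a single piece of text.
query_vector = embeddings.embed_query("Which instances can I use with Managed Spot Training in SageMaker?")
print(len(query_vector))  # dimensionality of the GPT-J 6B embedding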

Load domain-specific documents using the LangChain document loader and create an index

We use the CSVLoader package in LangChain to load CSV-formatted documents into the document loader:

from langchain.document_loaders import CSVLoader

loader = CSVLoader(file_path="rag_data/processed_data.csv")
documents = loader.load()

Next, we use TextSplitter to preprocess data for embedding purposes and use the SageMaker embedding model GPT-J 6B to create the embeddings. We store the embeddings in a FAISS vector store to create an index. We use this index to find relevant documents that are semantically similar to the user’s query.

The following code shows how all these steps are performed by the VectorstoreIndexCreator class in just a few lines of code in LangChain, creating a concise implementation of question answering with RAG:

from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

# Split the documents, embed them with the SageMaker GPT-J 6B endpoint, and index them in FAISS.
index_creator = VectorstoreIndexCreator(
    vectorstore_cls=FAISS,
    embedding=embeddings,
    text_splitter=CharacterTextSplitter(chunk_size=300, chunk_overlap=0),
)
index = index_creator.from_loaders([loader])

Use the index to search for relevant context and pass it to the LLM model

Next, use the query method on the created index and pass the user’s question and the SageMaker endpoint LLM. LangChain selects the top four closest documents (K=4) and passes the relevant context extracted from the documents to generate an accurate response. See the following code:

index.query(question=question, llm=sm_llm)

We get the following response for the query using the RAG-based approach with Flan T5 XXL:

"""For model: huggingface-text2text-flan-t5-xxl, the generated output is: 
Managed Spot Training can be used with all instances supported in Amazon SageMaker
"""

The response looks more accurate than the responses we got with the approaches demonstrated earlier, which used either no context or a static context that may not always be relevant.

Alternate approach to implement RAG with more customization using SageMaker and LangChain

In this section, we show you another approach to implement RAG using SageMaker and LangChain. This approach offers the flexibility to configure the top K parameter for a relevancy search in the documents. It also lets you use the LangChain feature of prompt templates, which allow you to easily parameterize the prompt creation instead of hard coding the prompts.

In the following code, we explicitly use FAISS to generate an embedding for each document in the knowledge library with the SageMaker GPT-J-6B embedding model. Then we identify the top K (K=3) most relevant documents based on the user query.

# Build a FAISS index over the document embeddings, then retrieve the K=3 closest matches for the query.
docsearch = FAISS.from_documents(documents, embeddings)
docs = docsearch.similarity_search(question, k=3)

Next, we use a prompt template and chain it with the SageMaker LLM:

from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate

prompt_template = """Answer based on context:\n\n{context}\n\n{question}"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
chain = load_qa_chain(llm=sm_llm, prompt=PROMPT)

We send the top three (K=3) relevant documents we found as context to the prompt by using a LangChain chain:

result = chain({"input_documents": docs, "question": question}, return_only_outputs=True)["output_text"]

With this approach of RAG implementation, we were able to take advantage of the additional flexibility of LangChain prompt templates and customize the number of documents searched for a relevancy match using the top K hyperparameter.

JumpStart RAG-based implementation notebook with SageMaker KNN

In this section, we implement the RAG-based approach using the KNN algorithm to find relevant documents and create an enhanced context. In this approach, we don’t use LangChain, but we use the same dataset (the Amazon SageMaker FAQs) as knowledge documents and the same models, the GPT-J-6B embedding model and the Flan T5 XXL LLM, as in the previous LangChain approach.

If you have a large dataset, the SageMaker KNN algorithm may provide you with an effective semantic search. The SageMaker KNN algorithm also uses FAISS as the underlying search algorithm. The notebook for this solution can be found on GitHub.

First, we deploy the LLM Flan T5 XXL and GPT-J 6B embedding models in the same way as in the previous section. For each record in the knowledge database, we generate an embedding vector using the GPT-J embedding model.

Next, we use a SageMaker KNN training job to index the embedding of the knowledge data. The underlying algorithm used to index the data is FAISS. We want to find the top five most relevant documents, so we set the TOP_K variable to 5. We create the estimator for the KNN algorithm, run the training job, and deploy the KNN model to find indexes of the top five documents matching the query. See the following code:

import boto3
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri

def trained_estimator_from_hyperparams(s3_train_data, hyperparams, output_path):
    """
    Create a KNN estimator with the given hyperparameters, fit it to the training data,
    and return the fitted estimator.
    """
    # Set up the estimator with the built-in KNN algorithm image.
    # aws_role and sess are the IAM role and SageMaker session defined earlier in the notebook.
    knn = sagemaker.estimator.Estimator(
        get_image_uri(boto3.Session().region_name, "knn"),
        aws_role,
        instance_count=1,
        instance_type="ml.m5.2xlarge",
        output_path=output_path,
        sagemaker_session=sess,
    )
    knn.set_hyperparameters(**hyperparams)

    # Train the model. fit_input contains the location of the training data.
    fit_input = {"train": s3_train_data}
    knn.fit(fit_input)
    return knn

# train_features, TOP_K, bucket, prefix, and s3_train_data are defined earlier in the notebook.
hyperparams = {
    "feature_dim": train_features.shape[1],
    "k": TOP_K,
    "sample_size": train_features.shape[0],
    "predictor_type": "classifier",
}
output_path = f"s3://{bucket}/{prefix}/default_example/output"
knn_estimator = trained_estimator_from_hyperparams(s3_train_data, hyperparams, output_path)
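After the training job completes, the estimator can be hosted on a real-time endpoint so that query embeddings can be sent to it. A minimal deployment sketch; the instance type is illustrative:

# Deploy the trained KNN model; the resulting endpoint is queried with embedding vectors.
knn_predictor = knn_estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
)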

Next, we create an embedding representation of the query using the GPT-J-6B embedding model that we used for creating an embedding of the knowledge library documents:

# query_endpoint_with_json_payload and parse_response_text_embed are helper functions defined in
# the notebook for calling the GPT-J 6B embedding endpoint and parsing its response.
query_response = query_endpoint_with_json_payload(question, endpoint_name_embed, content_type="application/x-text")
question_embedding = parse_response_text_embed(query_response)

Then we use the KNN endpoint and pass the embedding of the query to get the indexes of the top K most relevant documents. We use the indexes to retrieve the corresponding textual documents. Next, we concatenate the documents, ensuring that the maximum allowed context length is not exceeded.
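The following is a sketch of such a concatenation step; the length budget, variable names, and character-based counting are illustrative, and the notebook's own helper may differ:

MAX_SECTION_LEN = 500  # maximum context length budget (illustrative; the notebook may count tokens)

def build_context(documents, max_len=MAX_SECTION_LEN):
    """Concatenate retrieved documents until the length budget would be exceeded."""
    selected, total = [], 0
    for doc in documents:
        if total + len(doc) > max_len:
            break
        selected.append(doc)
        total += len(doc)
    return "\n".join(selected)

context = build_context(retrieved_documents)  # retrieved_documents: texts fetched via the KNN indexes

The notebook reports the selected sections for our query as follows: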

"""With maximum sequence length 500, selected top 4 document sections: 
  Managed Spot Training can be used with all instances supported in Amazon SageMaker.
  Managed Spot Training is supported in all AWS Regions where Amazon SageMaker is currently available.
  The difference between Savings Plans for Amazon SageMaker and Savings Plans for EC2 is in the services they 
  include. 
  SageMaker Savings Plans apply only to SageMaker ML Instance usage.
  There are no fixed limits to the size of the dataset you can use for training models with Amazon SageMaker.
"""

Now we come to our final step, in which we combine the query, the prompt, and the context containing text from the relevant documents, and pass it all to the Flan T5 XXL text generation LLM to generate the answer.
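A sketch of this final call using the SageMaker runtime client follows; the endpoint name variable is illustrative, and the payload format matches the Flan T5 XXL content handler shown earlier in this post:

import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"text_inputs": f"Answer based on context:\n\n{context}\n\n{question}"}
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name_llm,  # illustrative variable holding the Flan T5 XXL endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
answer = json.loads(response["Body"].read())["generated_texts"][0]
print(answer)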

We get the following response for the query using a RAG-based approach with Flan T5 XXL:

"""
For model: huggingface-text2text-flan-t5-xxl, the generated output is: 

Managed Spot Training can be used with all instances supported in Amazon SageMaker
"""

Clean up

Make sure to delete the endpoints that we created in this notebook when you’re not using them, to avoid incurring recurring costs.

Conclusion

In this post, we demonstrated the implementation of a RAG-based approach with LLMs for question answering tasks using two approaches: LangChain and the built-in KNN algorithm. The RAG-based approach improves the accuracy of the text generated by Flan T5 XXL by dynamically providing relevant context retrieved by searching a list of documents.

You can use these notebooks in SageMaker as is, or you can customize them to your needs. To customize, you can use your own set of documents in the knowledge library, use other relevancy search implementations like OpenSearch, and use other embedding models and text generation LLMs available on JumpStart.

We look forward to seeing what you build on JumpStart using a RAG-based approach!


About the authors

Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.

Rachna Chadha is a Principal Solutions Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that the ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Dr. Kyle Ulrich is an Applied Scientist with the Amazon SageMaker built-in algorithms team. His research interests include scalable machine learning algorithms, computer vision, time series, Bayesian non-parametrics, and Gaussian processes. His PhD is from Duke University and he has published papers in NeurIPS, Cell, and Neuron.

Hemant Singh is a Machine Learning Engineer with experience in Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He got his master’s degree from the Courant Institute of Mathematical Sciences and his B.Tech from IIT Delhi. He has experience working on a diverse range of machine learning problems within the domains of natural language processing, computer vision, and time series analysis.

Manas Dadarkar is a Software Development Manager owning the engineering of the Amazon Forecast service. He is passionate about the applications of machine learning and making ML technologies easily available for everyone to adopt and deploy to production. Outside of work, he has multiple interests including travelling, reading and spending time with friends and family.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from the University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Read More

Considerations for Distribution Shift Robustness in Health

*=Equal Contributors
This paper was accepted at the workshop “Trustworthy Machine Learning for Healthcare Workshop” at the conference ICLR 2023.
When analyzing robustness of predictive models under distribution shift, many works focus on tackling generalization in the presence of spurious correlations. In this case, one typically makes use of covariates or environment indicators to enforce independencies in learned models to guarantee generalization under various distribution shifts. In this work, we analyze a class of distribution shifts, where such independencies are not desirable, as… Apple Machine Learning Research