Research over the past few years has shown that machine learning (ML) models are vulnerable to adversarial inputs, where an adversary crafts inputs that strategically alter the model's output, whether in image classification, speech recognition, or fraud detection. For example, imagine you have deployed a model that identifies your employees based on images of their faces. As demonstrated in the whitepaper Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition, malicious employees can apply subtle but carefully designed modifications to their images and fool the model into authenticating them as other employees. Obviously, such adversarial inputs, especially if there is a significant number of them, can have a devastating business impact.
Ideally, we want to detect each time an adversarial input is sent to the model, so that we can quantify how adversarial inputs impact the model and the business. To this end, a wide class of methods analyzes individual model inputs to check for adversarial behavior. However, active research in adversarial ML has led to increasingly sophisticated adversarial inputs, many of which are known to render such detection ineffective. The reason for this shortcoming is that it's difficult to draw conclusions from an individual input as to whether it's adversarial or not. To address this, a more recent class of methods focuses on distributional-level checks by analyzing multiple inputs at a time. The key idea behind these methods is that considering multiple inputs at a time enables more powerful statistical analysis that isn't possible with individual inputs. However, in the face of a determined adversary with deep knowledge of the model, even these advanced detection methods can fail.
We can defeat even these determined adversaries, however, by providing the defense methods with additional information. Specifically, instead of analyzing just the model inputs, analyzing the latent representations collected from the intermediate layers of a deep neural network significantly strengthens the defense.
In this post, we walk you through how to detect adversarial inputs using Amazon SageMaker Model Monitor and Amazon SageMaker Debugger for an image classification model hosted on Amazon SageMaker.
To reproduce the different steps and results listed in this post, clone the repository detecting-adversarial-samples-using-sagemaker into your Amazon SageMaker notebook instance and run the notebook.
Detecting adversarial inputs
We show you how to detect adversarial inputs using the representations collected from a deep neural network. The following four images show the original training image on the left (taken from the Tiny ImageNet dataset) and three images produced by the Projected Gradient Descent (PGD) attack [1] with different perturbation parameters ϵ. The model used here was ResNet18. The ϵ parameter defines the amount of adversarial noise added to the images. The original image (left) is correctly predicted as class 67 (goose). The adversarially modified images 2, 3, and 4 are incorrectly predicted as class 51 (mantis) by the ResNet18 model. We can also see that images generated with small ϵ are perceptually indistinguishable from the original input image.
Next, we create a set of normal and adversarial images and use t-Distributed Stochastic Neighbor Embedding (t-SNE [2]) to visually compare their distributions. t-SNE is a dimensionality reduction method that maps high-dimensional data into a 2- or 3-dimensional space. Each data point in the following image represents an input image. Orange data points represent the normal inputs taken from the test set, and blue data points indicate the corresponding adversarial images generated with an ϵ of 0.003. If normal and adversarial inputs were distinguishable, we would expect separate clusters in the t-SNE visualization. Because both belong to the same cluster, a detection technique that focuses solely on changes in the model input distribution can't distinguish these inputs.
Let's take a closer look at the representations produced by different layers in the ResNet18 model. ResNet18 consists of 18 layers; in the following image, we visualize the t-SNE embeddings of the representations from six of these layers.
As the preceding figure shows, natural and adversarial inputs become more distinguishable for deeper layers of the ResNet18 model.
Based on these observations, we use a statistical method that measures distinguishability with hypothesis testing. The method consists of a two-sample test using maximum mean discrepancy (MMD). MMD is a kernel-based metric for measuring the similarity between the two distributions that generated the data. A two-sample test takes two sets of inputs, each drawn from one of two distributions, and determines whether these distributions are the same. In our case, we compare the distribution of inputs observed in the training data with the distribution of inputs received during inference.
Our method uses these inputs to estimate the p-value using MMD. If the p-value is smaller than a user-specified significance threshold (5% in our case), we conclude that the two distributions are different. The threshold tunes the trade-off between false positives and false negatives. A higher threshold, such as 10%, decreases the false negative rate (there are fewer cases where the two distributions were different but the test failed to indicate it). However, it also results in more false positives (the test indicates the distributions are different even when that isn't the case). On the other hand, a lower threshold, such as 1%, results in fewer false positives but more false negatives.
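The following is a minimal sketch of such a two-sample test, assuming an RBF kernel and a permutation test to estimate the p-value; the helpers mmd_statistic and mmd_two_sample_test are illustrative and not part of this post's repository:
import numpy as np

def mmd_statistic(X, Y, bandwidth=1.0):
    """Biased MMD^2 estimate between samples X and Y (2-D arrays) using an RBF kernel."""
    def rbf(A, B):
        # pairwise squared Euclidean distances, then Gaussian kernel
        d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * bandwidth**2))
    m, n = len(X), len(Y)
    return rbf(X, X).sum() / (m * m) + rbf(Y, Y).sum() / (n * n) - 2 * rbf(X, Y).sum() / (m * n)

def mmd_two_sample_test(X, Y, num_permutations=500, alpha=0.05):
    """Permutation test: p-value is the fraction of shuffled splits with MMD >= observed."""
    observed = mmd_statistic(X, Y)
    pooled = np.vstack([X, Y])
    count = 0
    for _ in range(num_permutations):
        perm = np.random.permutation(len(pooled))
        X_p, Y_p = pooled[perm[:len(X)]], pooled[perm[len(X):]]
        if mmd_statistic(X_p, Y_p) >= observed:
            count += 1
    p_value = (count + 1) / (num_permutations + 1)
    return p_value, p_value < alpha  # True means the distributions are judged different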
Instead of applying this method solely to the raw model inputs (images), we use the latent representations produced by the intermediate layers of our model. To account for the test's probabilistic nature, we apply the hypothesis test 100 times, each time on 100 randomly selected natural inputs and 100 randomly selected adversarial inputs. Then we report the detection rate as the percentage of tests that resulted in a detection event according to our 5% significance threshold. A higher detection rate is a stronger indication that the two distributions are different. This procedure gives us the following detection rates:
- Layer 1: 3%
- Layer 4: 7%
- Layer 8: 84%
- Layer 12: 95%
- Layer 14: 100%
- Layer 15: 100%
In the initial layers, the detection rate is rather low (less than 10%), but increases to 100% in the deeper layers. Using the statistical test, the method can confidently detect adversarial inputs in deeper layers. It is often sufficient to simply use the representations generated by the penultimate layer (the last layer before the classification layer in a model). For more sophisticated adversarial inputs, it’s useful to use representations from other layers and aggregate the detection rates.
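To make the procedure concrete, here is a minimal sketch of how a per-layer detection rate could be computed by repeating the test on random subsamples; it reuses the illustrative mmd_two_sample_test helper from the earlier sketch and is not the exact code used in this post:
import numpy as np

def detection_rate(natural_reprs, adversarial_reprs, num_trials=100, sample_size=100, alpha=0.05):
    """Repeat the MMD two-sample test on random subsamples of layer representations
    (2-D NumPy arrays) and report the percentage of trials that reject the null hypothesis."""
    detections = 0
    for _ in range(num_trials):
        nat_idx = np.random.choice(len(natural_reprs), sample_size, replace=False)
        adv_idx = np.random.choice(len(adversarial_reprs), sample_size, replace=False)
        _, rejected = mmd_two_sample_test(natural_reprs[nat_idx], adversarial_reprs[adv_idx], alpha=alpha)
        detections += int(rejected)
    return 100.0 * detections / num_trials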
Solution overview
In the previous section, we saw how to detect adversarial inputs using representations from the penultimate layer. Next, we show how to automate these tests on SageMaker by using Model Monitor and Debugger. For this example, we first train a ResNet18 image classification model on the Tiny ImageNet dataset. Next, we deploy the model on SageMaker and create a custom Model Monitor schedule that runs the statistical test. Afterwards, we run inference with normal and adversarial inputs to see how effective the method is.
Capture tensors using Debugger
During model training, we use Debugger to capture representations generated by the penultimate layer, which are used later on to derive information about the distribution of normal inputs. Debugger is a feature of SageMaker that enables you to capture and analyze information such as model parameters, gradients, and activations during model training. These parameter, gradient, and activation tensors are uploaded to Amazon Simple Storage Service (Amazon S3) while the training is in progress. You can configure rules that analyze these for issues such as overfitting and vanishing gradients. For our use case, we only want to capture the penultimate layer of the model (.*avgpool_output) and the model outputs (predictions). We specify a Debugger hook configuration that defines a regular expression for the layer representations to be collected. We also specify a save_interval that instructs Debugger to collect this data during the validation phase every 100 forward passes. See the following code:
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig

debugger_hook_config = DebuggerHookConfig(
    collection_configs=[
        CollectionConfig(
            name="custom_collection",
            parameters={
                "include_regex": ".*avgpool_output|.*ResNet_output",
                "eval.save_interval": "100",
            },
        )
    ]
)
Run SageMaker training
We pass the Debugger configuration into the SageMaker estimator and start the training:
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()

pytorch_estimator = PyTorch(
    entry_point='train.py',
    source_dir='code',
    role=role,
    instance_type='ml.p3.2xlarge',
    instance_count=1,
    framework_version='1.8',
    py_version='py3',
    hyperparameters={
        'epochs': 25,
        'learning_rate': 0.001,
    },
    debugger_hook_config=debugger_hook_config,
)

pytorch_estimator.fit()
Deploy an image classification model
After the model training is complete, we deploy the model as an endpoint on SageMaker. We specify an inference script that defines the model_fn and transform_fn functions. These functions specify how the model is loaded and how incoming data needs to be preprocessed to perform the model inference. For our use case, we enable Debugger to capture relevant data during inference. In the model_fn function, we specify a Debugger hook and a save_config that specifies that for each inference request, the model inputs (images), the model outputs (predictions), and the penultimate layer (.*avgpool_output) are recorded. We then register the hook on the model. See the following code:
import os

import smdebug.pytorch as smd

def model_fn(model_dir):
    # create model
    model = create_and_load_model(model_dir)

    # hook configuration: where Debugger uploads the tensors collected during inference
    tensors_output_s3uri = os.environ.get('tensors_output')

    # capture layers for every inference request
    save_config = smd.SaveConfig(mode_save_configs={
        smd.modes.PREDICT: smd.SaveConfigMode(save_interval=1),
    })

    # configure the Debugger hook to record model inputs, outputs, and the penultimate layer
    hook = smd.Hook(
        tensors_output_s3uri,
        save_config=save_config,
        include_regex='.*avgpool_output|.*ResNet_output_0|.*ResNet_input',
    )

    # register hook
    hook.register_module(model)

    # set mode
    hook.set_mode(smd.modes.PREDICT)
    return model
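The inference script also needs a transform_fn, which isn't shown in this post. The following is a minimal sketch, assuming the endpoint receives a JSON-serialized image batch; the preprocessing details are assumptions rather than the repository's exact code. Because the Debugger hook is already registered on the model, the tensors are captured automatically during the forward pass:
import json

import numpy as np
import torch

def transform_fn(model, request_body, content_type, accept):
    # deserialize the incoming payload (assumed to be a JSON-encoded image or image batch)
    data = np.asarray(json.loads(request_body), dtype=np.float32)
    tensor = torch.from_numpy(data)
    if tensor.dim() == 3:
        # add a batch dimension if a single image was sent
        tensor = tensor.unsqueeze(0)

    # forward pass; the Debugger hook registered in model_fn records the inputs,
    # the penultimate layer, and the outputs for this request
    with torch.no_grad():
        output = model(tensor)

    # return the predictions as JSON
    return json.dumps(output.cpu().numpy().tolist())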
Now we deploy the model, which we can do from the notebook in two ways. We can either call pytorch_estimator.deploy() or create a PyTorch model that points to the model artifact files in Amazon S3 that were created by the SageMaker training job. In this post, we do the latter, which allows us to pass environment variables into the Docker container that is created and deployed by SageMaker. We need the environment variable tensors_output to tell the script where to upload the tensors collected by SageMaker Debugger during inference. See the following code:
from sagemaker.pytorch import PyTorchModel

sagemaker_model = PyTorchModel(
    model_data=pytorch_estimator.model_data,
    role=role,
    source_dir='code',
    entry_point='inference.py',
    env={
        'tensors_output': f's3://{sagemaker_session.default_bucket()}/data_capture/inference',
    },
    framework_version='1.8',
    py_version='py3',
)
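The deploy call that follows passes a data_capture_config, which isn't defined elsewhere in this post. A minimal sketch, assuming you want Model Monitor's standard data capture with full sampling (the destination path is an assumption):
from sagemaker.model_monitor import DataCaptureConfig

# capture request and response payloads for every invocation (assumed settings)
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri=f's3://{sagemaker_session.default_bucket()}/data_capture/endpoint',
)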
Next, we deploy the predictor on an ml.m5.xlarge instance type:
predictor = sagemaker_model.deploy(
    initial_instance_count=1,
    instance_type='ml.m5.xlarge',
    data_capture_config=data_capture_config,
    deserializer=sagemaker.deserializers.JSONDeserializer(),
)
Create a custom Model Monitor schedule
When the endpoint is up and running, we create a customized Model Monitor schedule. This is a SageMaker processing job that runs on a periodic interval (such as hourly or daily) and analyzes the inference data. Model Monitor provides a pre-configured container that analyzes and detects data drift. In our case, we want to customize it to fetch the Debugger data and run the MMD two-sample test on the retrieved layer representations.
To customize it, we first define the Model Monitor object, which specifies on which instance type these jobs are going to run and the location of our custom Model Monitor container:
from sagemaker.model_monitor import ModelMonitor

monitor = ModelMonitor(
    base_job_name='ladis-monitor',
    role=role,
    image_uri=processing_repository_uri,
    instance_count=1,
    instance_type='ml.m5.large',
    env={
        'training_data': f'{pytorch_estimator.latest_job_debugger_artifacts_path()}',
        'inference_data': f's3://{sagemaker_session.default_bucket()}/data_capture/inference',
    },
)
We want to run this job on an hourly basis, so we specify CronExpressionGenerator.hourly() and the output locations where analysis results are uploaded. For that, we need to define a ProcessingOutput for the SageMaker processing output:
from sagemaker.model_monitor import CronExpressionGenerator, MonitoringOutput
from sagemaker.processing import ProcessingInput, ProcessingOutput

# inputs and outputs for the scheduled monitoring job
destination = f's3://{sagemaker_session.default_bucket()}/data_capture/results'
processing_output = ProcessingOutput(
    output_name='result',
    source='/opt/ml/processing/results',
    destination=destination,
)
output = MonitoringOutput(source=processing_output.source, destination=processing_output.destination)

# create schedule
monitor.create_monitoring_schedule(
    output=output,
    endpoint_input=predictor.endpoint_name,
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
Let's look closer at what our custom Model Monitor container is running. We create an evaluation script, which loads the data captured by Debugger. We also create a trial object, which enables us to access, query, and filter the data that Debugger saved. With the trial object, we can iterate over the steps saved during the inference and training phases via trial.steps(mode).
First, we fetch the model outputs (trial.tensor("ResNet_output_0")) as well as the penultimate layer (trial.tensor_names(regex=".*avgpool_output")). We do this for the validation phase of training (modes.EVAL) and for the inference phase (modes.PREDICT). The tensors from the validation phase serve as an estimate of the normal distribution, against which we compare the distribution of the inference data. We created a class called LADIS (Detecting Adversarial Input Distributions via Layerwise Statistics), which provides the relevant functionality to perform the two-sample test. It takes the lists of tensors from the inference and validation phases and runs the two-sample test, returning a detection rate between 0 and 100%. The higher the value, the more likely it is that the inference data follows a different distribution. Furthermore, we compute a score for each sample that indicates how likely the sample is to be adversarial, and we record the top 100 samples so that users can inspect them further. See the following code:
from collections import defaultdict

import numpy as np
from smdebug import modes
from smdebug.trials import create_trial

import LADIS
import sample_selection

val_predictions, val_pen_layer = [], defaultdict(list)
inference_predictions, inference_pen_layer = [], defaultdict(list)

# access tensors saved during training
trial = create_trial("s3://xxx/training/debug-output/")

# iterate over validation steps saved by Debugger during training
for step in trial.steps(mode=modes.EVAL):
    # get model outputs
    tensor = trial.tensor("ResNet_output_0").value(step, mode=modes.EVAL)
    prediction = np.argmax(tensor)
    val_predictions.append(prediction)

    # get outputs from the penultimate layer
    for layer in trial.tensor_names(regex=".*avgpool_output"):
        tensor = trial.tensor(layer).value(step, mode=modes.EVAL)
        val_pen_layer[layer].append(tensor)

# access tensors saved during inference
trial = create_trial("s3://xxx/data_capture/inference/")

# iterate over inference steps saved by Debugger
for step in trial.steps(mode=modes.PREDICT):
    # get model outputs
    tensor = trial.tensor("ResNet_output_0").value(step, mode=modes.PREDICT)
    prediction = np.argmax(tensor)
    inference_predictions.append(prediction)

    # get the penultimate layer
    for layer in trial.tensor_names(regex=".*avgpool_output"):
        tensor = trial.tensor(layer).value(step, mode=modes.PREDICT)
        inference_pen_layer[layer].append(tensor)

# create LADIS object
ladis = LADIS.LADIS(val_pen_layer, val_predictions,
                    inference_pen_layer, inference_predictions)

# run MMD test
detection_rate = ladis.get_detection_rate(layers=[0], combine=True)

# determine how much each sample contributes to the detection
stats = []
for index in range(len(inference_pen_layer['avgpool_output_0'])):
    stats.append(sample_selection.compute_ME_stat(val_pen_layer['avgpool_output_0'],
                                                  inference_pen_layer['avgpool_output_0'],
                                                  inference_pen_layer['avgpool_output_0'][index]))

# find the top 100 samples that were the most impactful for detection
samples = sorted(stats)[:100]
Test against adversarial inputs
Now that our custom Model Monitor schedule has been deployed, we can produce some inference results.
First, we run with data from the holdout set and then with adversarial inputs:
from torchvision import datasets

test_dataset = datasets.CIFAR10('data/cifar10', train=False, download=True, transform=None)

# run the inference loop over the holdout dataset
for index, (image, label) in enumerate(zip(test_dataset.data, test_dataset.targets)):
    # predict
    result = predictor.predict(image)
We can then check the Model Monitor display in Amazon SageMaker Studio or use Amazon CloudWatch logs to see if an issue was found.
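For example, you can query the monitoring schedule and its most recent execution from the notebook. The following is a minimal sketch using the SageMaker Python SDK; the exact fields available in the describe responses may vary:
# check the schedule status and the most recent monitoring execution
schedule_status = monitor.describe_schedule()['MonitoringScheduleStatus']
print(f"Schedule status: {schedule_status}")

executions = monitor.list_executions()
if executions:
    latest = executions[-1].describe()
    print(f"Last execution status: {latest['ProcessingJobStatus']}")
    print(f"Exit message: {latest.get('ExitMessage')}")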
Next, we send adversarial inputs to the model hosted on SageMaker. We use the test data of the Tiny ImageNet dataset and apply the PGD attack, which introduces perturbations at the pixel level so that the model no longer predicts the correct classes. In the following images, the left column shows two original test images, the middle column shows their adversarially perturbed versions, and the right column shows the difference between the two images.
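For intuition, the following is a minimal sketch of an L-infinity PGD attack in PyTorch; the step size, number of steps, and normalization are assumptions and may differ from the attack configuration used in this post's repository:
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, epsilon=0.003, alpha=0.001, steps=10):
    """L-infinity PGD: repeatedly step along the sign of the gradient of the loss
    and project back into the epsilon-ball around the original images."""
    originals = images.clone().detach()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        # project back into the epsilon-ball and the valid pixel range
        adv = originals + torch.clamp(adv - originals, -epsilon, epsilon)
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()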
Now we can check the Model Monitor status and see that some of the inference images were drawn from a different distribution.
Results and user action
The custom Model Monitor job determines a score for each inference request, which indicates how likely the sample is to be adversarial according to the MMD test. These scores are gathered for all inference requests. Each score, together with the corresponding Debugger step number, is recorded in a JSON file and uploaded to Amazon S3. After the Model Monitor job is complete, we download the JSON file, retrieve the step numbers, and use Debugger to retrieve the corresponding model inputs for these steps. This allows us to inspect the images that were detected as adversarial.
The following code block plots the first two images that have been identified as the most likely to be adversarial:
import numpy as np
from smdebug import modes
from smdebug.trials import create_trial

# access inference data
trial = create_trial(f"s3://{sagemaker_session.default_bucket()}/data_capture/inference")
steps = trial.steps(mode=modes.PREDICT)

# load the constraint_violations.json file generated by the custom Model Monitor job
results = monitor.latest_monitoring_constraint_violations().body_dict

for index in range(2):
    # get results: step and score
    step = results['violations'][index]['description']['Step']
    score = round(results['violations'][index]['description']['Score'], 3)

    # get input image
    image = trial.tensor('ResNet_input_0').value(step, mode=modes.PREDICT)[0, :, :, :]

    # get predicted class
    predicted = np.argmax(trial.tensor('ResNet_output_0').value(step, mode=modes.PREDICT))

    # visualize image
    plot_image(image, predicted)
In our example test run, we get the following output. The jellyfish image was incorrectly predicted as an orange, and the camel image as a panda. Obviously, the model failed on these inputs and didn't even predict a similar class, such as goldfish or horse. For comparison, we also show the corresponding natural samples from the test set on the right side. We can observe that the perturbations introduced by the attacker are very visible in the background of both images.
The custom Model Monitor job publishes the detection rate to CloudWatch, so we can investigate how this rate changes over time. A significant change between two data points may indicate that an adversary was trying to fool the model during a specific time frame. Additionally, you can plot the number of inference requests processed by each Model Monitor job and the baseline detection rate, which is computed over the validation dataset. The baseline rate is usually close to 0 and only serves as a comparison metric.
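Publishing such a metric from the monitoring container amounts to a single API call. The following is a minimal sketch using boto3; the namespace and metric name are illustrative choices, not the ones used by the post's container:
import boto3

cloudwatch = boto3.client('cloudwatch')

# publish the detection rate computed by the LADIS test (names are illustrative)
cloudwatch.put_metric_data(
    Namespace='LADIS/Monitoring',
    MetricData=[{
        'MetricName': 'DetectionRate',
        'Value': detection_rate,
        'Unit': 'Percent',
    }],
)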
The following screenshot shows the metrics generated by our test runs, which ran three Model Monitoring jobs over 3 hours. Each job processes approximately 200–300 inference requests at a time. The detection rate is 100% between 5:00 PM and 6:00 PM, and drops afterwards.
Furthermore, we can also inspect the distributions of representations generated by the intermediate layers of the model. With Debugger, we can access the data from the validation phase of the training job and the tensors from the inference phase, and use t-SNE to visualize their distribution for certain predicted classes. See the following code:
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.manifold import TSNE

# compute t-SNE embeddings of the penultimate-layer representations
tsne = TSNE(n_components=2, verbose=1, perplexity=40, n_iter=300)
embedding = tsne.fit_transform(np.concatenate((val_penultimate_layer, inference_penultimate_layer)))

# plot results
plt.figure(figsize=(10, 5))
sns.scatterplot(x=embedding[:, 0], y=embedding[:, 1], hue=labels, alpha=0.6,
                palette=sns.color_palette(None, len(np.unique(labels))), legend="full")
In our test case, we get the following t-SNE visualization for the second image class. We can observe that the adversarial samples are clustered differently than the natural ones.
Summary
In this post, we showed how to use a two-sample test using maximum mean discrepancy to detect adversarial inputs. We demonstrated how you can deploy such detection mechanisms using Debugger and Model Monitor. This workflow allows you to monitor your models hosted on SageMaker at scale and detect adversarial inputs automatically. To learn more about it, check out our GitHub repo.
References
[1] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.
[2] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008. URL http://www.jmlr.org/papers/v9/vandermaaten08a.html.
About the Authors
Nathalie Rauschmayr is a Senior Applied Scientist at AWS, where she helps customers develop deep learning applications.
Yigitcan Kaya is a fifth year PhD student at University of Maryland and an applied scientist intern at AWS, working on security of machine learning and applications of machine learning for security.
Bilal Zafar is an Applied Scientist at AWS, working on Fairness, Explainability and Security in Machine Learning.
Sergul Aydore is a Senior Applied Scientist at AWS, working on Privacy and Security in Machine Learning.