Improve governance of your machine learning models with Amazon SageMaker

As companies increasingly adopt machine learning (ML) for their mainstream enterprise applications, more of their business decisions are influenced by ML models. As a result, simplified access control and enhanced transparency across all your ML models make it easier to validate that your models are performing well and to take action when they are not.

In this post, we explore how companies can improve visibility into their models with centralized dashboards and detailed documentation of their models using two new features: SageMaker Model Cards and the SageMaker Model Dashboard. Both these features are available at no additional charge to SageMaker customers.

Overview of model governance

Model governance is a framework that gives systematic visibility into model development, validation, and usage. Model governance is applicable across the end-to-end ML workflow, starting from identifying the ML use case to ongoing monitoring of a deployed model through alerts, reports, and dashboards. A well-implemented model governance framework should minimize the number of interfaces required to view, track, and manage lifecycle tasks to make it easier to monitor the ML lifecycle at scale.

Today, organizations invest significant technical expertise in building tooling to automate large portions of their governance and auditability workflows. For example, model builders need to proactively record model specifications such as a model's intended use, its risk rating, and the performance criteria it should be measured against. They also need to record observations on model behavior and document the reasoning behind key decisions, such as the objective function the model was optimized against.

It’s common for companies to use tools like Excel or email to capture and share such model information for use in production approvals. But as the scale of ML development increases, information can easily be lost or misplaced, and keeping track of these details quickly becomes infeasible. Furthermore, after these models are deployed, you might need to stitch together data from various sources to gain end-to-end visibility into all your models, endpoints, monitoring history, and lineage. Without such a view, you can easily lose track of your models and may not be aware when you need to take action on them. This issue is intensified in highly regulated industries, where regulations require you to keep such measures in place.

As the volume of models starts to scale, managing custom tooling can become a challenge and gives organizations less time to focus on core business needs. In the following sections, we explore how SageMaker Model Cards and the SageMaker Model Dashboard can help you scale your governance efforts.

SageMaker Model Cards

Model cards enable you to standardize how models are documented, thereby providing visibility into the lifecycle of a model from design and building through training and evaluation. Model cards are intended to be a single source of truth for business and technical metadata about the model that can reliably be used for auditing and documentation purposes. They provide a factsheet of the model that is important for model governance.

Model cards allow users to author and store decisions such as why an objective function was chosen for optimization, and details such as intended usage and risk rating. You can also attach and review evaluation results, and jot down observations for future reference.

For models trained on SageMaker, Model cards can discover and auto-populate details such as the training job, training datasets, model artifacts, and inference environment, thereby accelerating the process of creating the cards. With the SageMaker Python SDK, you can seamlessly update the Model card with evaluation metrics.

Model cards provide model risk managers, data scientists, and ML engineers the ability to perform the following tasks:

  • Document model requirements such as risk rating, intended usage, limitations, and expected performance
  • Auto-populate Model cards for SageMaker trained models
  • Bring your own info (BYOI) for non-SageMaker models
  • Upload and share model and data evaluation results
  • Define and capture custom information
  • Capture Model card status (draft, pending review, or approved for production)
  • Access the Model card hub from the AWS Management Console
  • Create, edit, view, export, clone, and delete Model cards
  • Trigger workflows using Amazon EventBridge integration for Model card status change events
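For example, a status change to pending review could route the Model card into an approval workflow. The following EventBridge rule pattern is a sketch; the detail-type and detail fields shown are assumptions, so verify the exact event structure in the EventBridge documentation for SageMaker events:

{
    "source": ["aws.sagemaker"],
    "detail-type": ["SageMaker Model Card State Change"],
    "detail": {
        "ModelCardStatus": ["PendingReview"]
    }
}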

Create SageMaker Model Cards using the console

You can easily create Model cards using the SageMaker console. Here you can see all the existing Model cards and create new ones as needed.

When creating a Model card, you can document critical model information such as who built the model, why it was developed, how it is performing for independent evaluations, and any observations that need to be considered prior to using the model for a business application.

To create a Model card on the console, complete the following steps:

  1. Enter model overview details.
  2. Enter training details (auto-populated if the model was trained on SageMaker).
  3. Upload evaluation results.
  4. Add additional details such as recommendations and ethical considerations.

After you create the Model card, you can choose a version to view it.

The following screenshot shows the details of our Model card.

You can also export the Model card to be shared as a PDF.

Create and explore SageMaker Model Cards through the SageMaker Python SDK

Interacting with Model cards isn’t limited to the console. You can also use the SageMaker Python SDK to create and explore Model cards. The SageMaker Python SDK allows data scientists and ML engineers to easily interact with SageMaker components. The following code snippets showcase the process to create a Model card using the newly added SageMaker Python SDK functionality.

Make sure you have the latest version of the SageMaker Python SDK installed:

$ pip install --upgrade "sagemaker>=2"

Once you have trained and deployed a model using SageMaker, you can use the information from the SageMaker model and the training job to automatically populate information into the Model card.

Using the SageMaker Python SDK and passing the SageMaker model name, we can automatically collect basic model information. Information such as the SageMaker model ARN, training environment, and model output Amazon Simple Storage Service (Amazon S3) location is all automatically populated. We can add other model facts, such as description, problem type, algorithm type, model creator, and owner. See the following code, which also includes the imports from the sagemaker.model_card module that are used by the snippets in this section:

import sagemaker
from sagemaker.model_card import (ModelCard, ModelCardStatusEnum, ModelOverview,
    ObjectiveFunction, Function, ObjectiveFunctionEnum, FacetEnum, TrainingDetails,
    IntendedUses, RiskRatingEnum, Metric, MetricGroup, MetricTypeEnum,
    EvaluationJob, AdditionalInformation)

sagemaker_session = sagemaker.Session()
model_name = "<YOUR-SAGEMAKER-MODEL-NAME>"  # name of an existing SageMaker model

model_overview = ModelOverview.from_name(
    model_name=model_name,
    sagemaker_session=sagemaker_session,
    model_description="This is a simple binary classification model used for Model Card demo",
    problem_type="Binary Classification",
    algorithm_type="Logistic Regression",
    model_creator="DEMO-ModelCard",
    model_owner="DEMO-ModelCard",
)
print(model_overview.model_id) # Provides us with the SageMaker Model ARN
print(model_overview.inference_environment.container_image) # Provides us with the SageMaker inference container URI
print(model_overview.model_artifact) # Provides us with the S3 location of the model artifacts

We can also automatically collect basic training information, such as the training job ARN, training environment, and training metrics. Additional training details can be added, like the training objective function and observations. See the following code:

objective_function = ObjectiveFunction(
    function=Function(
        function=ObjectiveFunctionEnum.MINIMIZE,
        facet=FacetEnum.LOSS,
    ),
    notes="This is a example objective function.",
)
training_details = TrainingDetails.from_model_overview(
    model_overview=model_overview,
    sagemaker_session=sagemaker_session,
    objective_function=objective_function,
    training_observations="Additional training observations could be put here."
)

print(training_details.training_job_details.training_arn) # Provides us with the SageMaker training job ARN
print(training_details.training_job_details.training_environment.container_image) # Provides us with the SageMaker training container URI
print([{"name": i.name, "value": i.value} for i in training_details.training_job_details.training_metrics]) # Provides us with the SageMaker Training Job metrics

If we have evaluation metrics available, we can add those to the Model card as well:

my_metric_group = MetricGroup(
    name="binary classification metrics",
    metric_data=[Metric(name="accuracy", type=MetricTypeEnum.NUMBER, value=0.5)]
)
evaluation_details = [
    EvaluationJob(
        name="Example evaluation job",
        evaluation_observation="Evaluation observations.",
        datasets=["s3://path/to/evaluation/data"],
        metric_groups=[my_metric_group],
    )
]

We can also add additional information about the model that can help with model governance:

intended_uses = IntendedUses(
    purpose_of_model="Test Model Card.",
    intended_uses="Not used except this test.",
    factors_affecting_model_efficiency="No.",
    risk_rating=RiskRatingEnum.LOW,
    explanations_for_risk_rating="Just an example.",
)
additional_information = AdditionalInformation(
    ethical_considerations="You model ethical consideration.",
    caveats_and_recommendations="Your model's caveats and recommendations.",
    custom_details={"custom details1": "details value"},
)

After we have provided all the details we require, we can create the Model card using the preceding configuration:

model_card_name = "sample-notebook-model-card"
my_card = ModelCard(
    name=model_card_name,
    status=ModelCardStatusEnum.DRAFT,
    model_overview=model_overview,
    training_details=training_details,
    intended_uses=intended_uses,
    evaluation_details=evaluation_details,
    additional_information=additional_information,
    sagemaker_session=sagemaker_session,
)
my_card.create()

The SageMaker SDK also provides the ability to update, load, list, export, and delete a Model card.
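For example, the following sketch (assuming the card, session, and bucket placeholders from earlier) loads the card by name, moves it to pending review, exports it as a PDF, and deletes it:

# Load an existing Model card by name
my_card = ModelCard.load(name=model_card_name, sagemaker_session=sagemaker_session)

# Move the card to pending review and persist the change
my_card.status = ModelCardStatusEnum.PENDING_REVIEW
my_card.update()

# Export the card as a PDF to an S3 location of your choice
my_card.export_pdf(s3_output_path="s3://<YOUR-BUCKET>/model-card-exports")

# Delete the card when it's no longer needed
my_card.delete()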

To learn more about Model cards, refer to the developer guide and follow this example notebook to get started.

SageMaker Model Dashboard

The Model dashboard is a centralized repository of all models that have been created in the account. Models are typically created by training on SageMaker, but you can also bring models trained elsewhere and host them on SageMaker.

The Model dashboard provides a single interface for IT administrators, model risk managers, or business leaders to view all deployed models and how they’re performing. You can view your endpoints, batch transform jobs, and monitoring jobs to get insights into model performance. Organizations can dive deep to identify which models have missing or inactive monitors and add them using SageMaker APIs to ensure all models are being checked for data drift, model drift, bias drift, and feature attribution drift.

The following screenshot shows an example of the Model dashboard.

The Model dashboard provides an overview of all your models, what their risk rating is, and how those models are performing in production. It does this by pulling information from across SageMaker. The performance monitoring information is captured through Amazon SageMaker Model Monitor, and you can also see information on models invoked for batch predictions through SageMaker batch transform jobs. Lineage information such as how the model was trained, data used, and more is captured, and information from Model cards is pulled as well.

Model Monitor monitors the quality of SageMaker models used in production for batch inference or real-time endpoints. You can set up continuous monitoring or scheduled monitors via SageMaker APIs, and edit the alert settings through the Model dashboard. You can set alerts that notify you when there are deviations in the model quality. Early and proactive detection of these deviations enables you to take corrective actions, such as retraining models, auditing upstream systems, or fixing quality issues without having to monitor models manually or build additional tooling. The Model dashboard gives you quick insight into which models are being monitored and how they are performing. For more information on Model Monitor, visit Monitor models for data and model quality, bias, and explainability.

When you choose a model in the Model dashboard, you can get deeper insights into the model, such as the Model card (if one exists), model lineage, details about the endpoint the model has been deployed to, and the monitoring schedule for the model.

This view allows you to create a Model card if needed. The monitoring schedule can be activated, deactivated, or edited as well through the Model dashboard.

For models that don’t have a monitoring schedule, you can set this up by enabling Model Monitor for the endpoint the model has been deployed to. Through the alert details and status, you will be notified of models that are showing data drift, model drift, bias drift, or feature attribution drift, depending on which monitors you set up.

Let’s look at an example workflow of how to set up model monitoring; a code sketch follows the list. The key steps of this process are:

  1. Capture data sent to the endpoint (or batch transform job).
  2. Establish a baseline (for each of the monitoring types).
  3. Create a Model Monitor schedule to compare the live predictions against the baseline to report violations and trigger alerts.
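The following is a minimal sketch of these steps for data quality monitoring using the SageMaker Python SDK; the role ARN, endpoint name, and S3 paths are placeholders:

from sagemaker.model_monitor import (CronExpressionGenerator, DataCaptureConfig,
    DefaultModelMonitor)
from sagemaker.model_monitor.dataset_format import DatasetFormat

# 1. Capture requests and responses; pass this config to model.deploy(...)
data_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,
    destination_s3_uri="s3://<YOUR-BUCKET>/data-capture",
)

# 2. Establish a baseline from the training dataset
monitor = DefaultModelMonitor(
    role="<YOUR-SAGEMAKER-EXECUTION-ROLE-ARN>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)
monitor.suggest_baseline(
    baseline_dataset="s3://<YOUR-BUCKET>/train/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://<YOUR-BUCKET>/baseline",
)

# 3. Compare live predictions against the baseline every hour and report violations
monitor.create_monitoring_schedule(
    monitor_schedule_name="demo-data-quality-schedule",
    endpoint_input="<YOUR-ENDPOINT-NAME>",
    output_s3_uri="s3://<YOUR-BUCKET>/monitoring-reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)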

Based on the alerts, you can take actions like rolling back the endpoint to a previous version or retraining the model with new data. While doing this, it may be necessary to trace how the model was trained, which can be done by visualizing the model’s lineage.

The Model dashboard offers a rich set of information regarding the overall model ecosystem in an account, in addition to the ability to drill into the specific details of a model. To learn more about the Model dashboard, refer to the developer guide.

Conclusion

Model governance is complex and often involves lots of customized needs specific to an organization or an industry. This could be based on the regulatory requirements your organization needs to comply with, the types of personas present in the organization, and the types of models being used. There’s no one-size-fits-all approach to governance, and it’s important to have the right tools available so that a robust governance process can be put into place.

With the purpose-built ML governance tools in SageMaker, organizations can implement the right mechanisms to improve control and visibility over ML projects for their specific use cases. Give Model cards and the Model dashboard a try, and leave your comments with questions and feedback. To learn more about Model cards and the Model dashboard, refer to the developer guide.


About the authors

Kirit Thadaka is an ML Solutions Architect working in the SageMaker Service SA team. Prior to joining AWS, Kirit worked in early-stage AI startups followed by some time consulting in various roles in AI research, MLOps, and technical leadership.

Marc Karp is an ML Architect with the SageMaker Service team. He focuses on helping customers design, deploy, and manage ML workloads at scale. In his spare time, he enjoys traveling and exploring new places.

Raghu Ramesha is an ML Solutions Architect with the Amazon SageMaker Service team. He focuses on helping customers build, deploy, and migrate ML production workloads to SageMaker at scale. He specializes in machine learning, AI, and computer vision domains, and holds a master’s degree in Computer Science from UT Dallas. In his free time, he enjoys traveling and photography.

Ram Vittal is an ML Specialist Solutions Architect at AWS. He has over 20 years of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure and scalable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys tennis, photography, and action movies.

Sahil Saini is an ISV Solutions Architect at Amazon Web Services. He works with the product and engineering teams of AWS strategic customers to help them with technology solutions using AWS services for AI/ML, containers, HPC, and IoT. He has helped set up AI/ML platforms for enterprise customers.


Define customized permissions in minutes with Amazon SageMaker Role Manager

Administrators of machine learning (ML) workloads are focused on ensuring that users operate in the most secure manner, striving towards a principle of least privilege design. They have a wide variety of personas to account for, each with their own unique sets of needs, and building the right sets of permissions policies to meet those needs can sometimes be an inhibitor to agility. In this post, we look at how to use Amazon SageMaker Role Manager to quickly build out a set of persona-based roles that can be further customized to your specific requirements in minutes, right on the Amazon SageMaker console.

Role Manager offers predefined personas and ML activities combined with a wizard to streamline your permission generation process, allowing your ML practitioners to perform their responsibilities with the minimal necessary permissions. If you require additional customization, SageMaker Role Manager allows you to specify networking and encryption permissions for Amazon Virtual Private Cloud (Amazon VPC) resources and AWS Key Management Service (AWS KMS) encryption keys, and attach your custom policies.

In this post, you walk through how to use SageMaker Role Manager to create a data scientist role for accessing Amazon SageMaker Studio, while maintaining a set of minimal permissions to perform their necessary activities.

Solution overview

In this walkthrough, you perform all the steps to grant permissions to an ML administrator, create a service role for accessing required dependencies for building and training models, and create execution roles for users to assume inside of Studio to perform their tasks. If your ML practitioners access SageMaker via the AWS Management Console, you can create the permissions to allow access or grant access through IAM Identity Center (Successor to AWS Single Sign-On).

Personas

A persona is an entity that needs to perform a set of ML activities and uses a role to grant them permissions. SageMaker Role Manager provides you with a set of predefined persona templates for common use cases, or you can build your own custom persona.

There are several personas currently supported, including:

  • Data scientist – A persona that performs ML activities from within a SageMaker environment. They’re permitted to process Amazon Simple Storage Service (Amazon S3) data, perform experiments, and produce models.
  • MLOps – A persona that deals with operational activities from within a SageMaker environment. They’re permitted to manage models, endpoints, and pipelines, and audit resources.
  • SageMaker compute role – A persona used by SageMaker compute resources such as jobs and endpoints. They’re permitted to access Amazon S3 resources, Amazon Elastic Container Registry (Amazon ECR) repositories, Amazon CloudWatch, and other services for ML computation.
  • Custom role settings – This persona has no pre-selected settings or default options. It offers complete customization starting with empty settings.

For a comprehensive list of personas and additional details, refer to the persona reference of the SageMaker Role Manager Developer Guide.

ML activities

ML activities are predefined sets of permissions tailored to common ML tasks. Personas are composed of one or more ML activities to grant permissions.

For example, the data scientist persona uses the following ML activities:

  • Run Studio Applications – Permissions to operate within a Studio environment. Required for domain and user-profile execution roles.
  • Manage Experiments – Permissions to manage experiments and trials.
  • Manage ML Jobs – Permissions to audit, query lineage, and visualize experiments.
  • Manage Models – Permissions to manage SageMaker jobs across their lifecycles.
  • Manage Pipelines – Permissions to manage SageMaker pipelines and pipeline executions.
  • S3 Bucket Access – Permissions to perform operations on specified buckets.

There are many more ML activities available than the ones that are listed here. To see the full list along with template policy details, refer to the ML Activity reference of the SageMaker Role Manager Developer Guide.

The following figure demonstrates the entire scope of this post, where you first create a service execution role to allow users to PassRole for access to underlying services and then create a user execution role to grant permissions for your ML practitioners to perform their required ML activities.

Prerequisites

You need to ensure that you have a role for your ML administrator to create and manage personas, as well as the AWS Identity and Access Management (IAM) permissions for those users.

An example IAM policy for an ML administrator may look like the following code. Note that the following policy locks down Studio domain creation to VPC only. Although this is a best practice for controlling network access, you need to remove the LockDownStudioDomainCreateToVPC statement if your implementation doesn’t use a VPC-based Studio domain.

{
    "Version": "2012-10-17",
    "Statement":
    [
        {
            "Sid": "LockDownStudioDomainCreateToVPC",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:CreateDomain"
            ],
            "Resource":
            [
                "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:domain/*"
            ],
            "Condition":
            {
                "StringEquals":
                {
                    "sagemaker:AppNetworkAccessType": "VpcOnly"
                }
            }
        },
        {
            "Sid": "StudioUserProfilePerm",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:CreateUserProfile"
            ],
            "Resource":
            [
                "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:user-profile/*"
            ]
        },
        {
            "Sid": "AllowFileSystemPermissions",
            "Effect": "Allow",
            "Action":
            [
                "elasticfilesystem:CreateFileSystem"
            ],
            "Resource": "arn:aws:elasticfilesystem:<REGION>:<ACCOUNT-ID>:file-system/*"
        },
        {
            "Sid": "KMSPermissionsForSageMaker",
            "Effect": "Allow",
            "Action":
            [
                "kms:CreateGrant",
                "kms:Decrypt",
                "kms:DescribeKey",
                "kms:Encrypt",
                "kms:GenerateDataKey",
                "kms:RetireGrant",
                "kms:ReEncryptTo",
                "kms:ListGrants",
                "kms:RevokeGrant",
                "kms:GenerateDataKeyWithoutPlainText"
            ],
            "Resource":
            [
                "arn:aws:kms:<REGION>:<ACCOUNT-ID>:key/<KMS-KEY-ID>"
            ]
        },
        {
            "Sid": "AmazonSageMakerPresignedUrlPolicy",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:CreatePresignedDomainUrl"
            ],
            "Resource":
            [
                "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:user-profile/*"
            ]
        },
        {
            "Sid": "AllowRolePerm",
            "Effect": "Allow",
            "Action":
            [
                "iam:PassRole",
                "iam:GetRole"
            ],
            "Resource":
            [
                "arn:aws:iam::<ACCOUNT-ID>:role/*"
            ]
        },
        {
            "Sid": "ListExecutionRoles",
            "Effect": "Allow",
            "Action":
            [
                "iam:ListRoles"
            ],
            "Resource":
            [
                "arn:aws:iam::<ACCOUNT-ID>:role/*"
            ]
        },
        {
            "Sid": "SageMakerApiListDomain",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:ListDomains"
            ],
            "Resource": "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:domain/*"
        },
        {
            "Sid": "VpcConfigurationForCreateForms",
            "Effect": "Allow",
            "Action":
            [
                "ec2:DescribeVpcs",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups"
            ],
            "Resource": "*"
        },
        {
            "Sid": "KmsKeysForCreateForms",
            "Effect": "Allow",
            "Action":
            [
                "kms:DescribeKey",
                "kms:ListAliases"
            ],
            "Resource":
            [
                "arn:aws:kms:<REGION>:<ACCOUNT-ID>:key/*"
            ]
        },
        {
            "Sid": "KmsKeysForCreateForms2",
            "Effect": "Allow",
            "Action":
            [
                "kms:ListAliases"
            ],
            "Resource":
            [
                "*"
            ]
        },
        {
            "Sid": "StudioReadAccess",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:ListDomains",
                "sagemaker:ListApps",
                "sagemaker:DescribeDomain",
                "sagemaker:DescribeUserProfile",
                "sagemaker:ListUserProfiles",
                "sagemaker:EnableSagemakerServicecatalogPortfolio",
                "sagemaker:GetSagemakerServicecatalogPortfolioStatus"
            ],
            "Resource": "*"
        },
        {
            "Sid": "SageMakerProjectsSC",
            "Effect": "Allow",
            "Action":
            [
                "servicecatalog:AcceptPortfolioShare",
                "servicecatalog:ListAcceptedPortfolioShares",
                "servicecatalog:Describe*",
                "servicecatalog:List*",
                "servicecatalog:ScanProvisionedProducts",
                "servicecatalog:SearchProducts",
                "servicecatalog:SearchProvisionedProducts",
                "cloudformation:GetTemplateSummary",
                "servicecatalog:ProvisionProduct",
                "cloudformation:ListStackResources",
                "servicecatalog:AssociatePrincipalWithPortfolio"
            ],
            "Resource": "*"
        },
        {
            "Action":
            [
                "s3:CreateBucket",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload",
                "s3:GetBucketCors",
                "s3:PutBucketCors",
                "s3:GetBucketAcl",
                "s3:PutObjectAcl"
            ],
            "Effect": "Allow",
            "Resource":
            [
                "arn:aws:s3:::<S3-BUCKET-NAME>",
                "arn:aws:s3:::<S3-BUCKET-NAME>/*"
            ]
        }
    ]
}

Create a service role for passing to jobs and endpoints

When creating roles for your ML practitioners to perform activities in SageMaker, they need to pass permissions to a service role that has access to manage the underlying infrastructure. This service role can be reused, and doesn’t need to be created for every use case. In this section, you create a service role and then reference it when you create your other personas via PassRole. If you already have an appropriate service role, you can use it instead of creating another one.

  1. On the SageMaker console, choose Getting Started in the navigation bar.
  2. Under Configure role, choose Create a role.

  3. For Role name suffix, give your role a name, which becomes the suffix of the IAM role name created for you. For this post, we enter SageMaker-demoComputeRole.
  4. Choose SageMaker Compute Role as your persona.
  5. Optionally, configure the networking and encryption settings to use your desired resources.
  6. Choose Next.

    In the Configure ML activities section, you can see that the ML activity for Access Required AWS Services is already preselected for the SageMaker Compute Role persona.
    Because the Access Required AWS Services ML activity is selected, further options appear.
  7. Enter the appropriate S3 bucket ARNs and Amazon ECR ARNs that this service role will be able to access.
    You can add multiple values by choosing Add in each section.
  8. After you have filled in the required values, choose Next.
  9. In the Add additional policies & tags section, choose any other policies your service role might need.
  10. Choose Next.
  11. In the Review role section, verify that your configuration is correct, then choose Submit.
    The last thing you need to do for the service role is note down the role ARN so you can use it later in your data scientist persona role creation process.
  12. To view the role in IAM, choose Go to Role in the success banner or alternatively search for the name you gave your service role persona on the IAM console.
  13. On the IAM console, note the role’s ARN in the ARN section.

You enter this ARN later when creating your other persona-based roles.
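If you use the AWS CLI, you can also retrieve the ARN directly. The role name below is a placeholder; copy the full generated name from the console, because Role Manager may prepend a prefix to the suffix you chose:

$ aws iam get-role --role-name <YOUR-SERVICE-ROLE-NAME> --query 'Role.Arn' --output text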

Create an execution role for data scientists

Now that you have created the base service roles for your other personas to use, you can create your role for data scientists.

  1. On the SageMaker console, choose Getting Started in the navigation bar.
  2. Under Configure role, choose Create a role.
  3. For Role name suffix, give your role a name, for example, SageMaker-dataScientistRole.
    Note that this resulting name needs to be unique across your existing roles, or persona creation will fail.
  4. Optionally, add a description.
  5. Choose a base persona template to give your persona a baseline set of ML activities. In this example, we choose Data Scientist.
  6. Optionally, in the Network setup section, specify the specific VPC subnets and security groups that the persona can access for resources that support them.
  7. In the Encryption setup, you can optionally choose one or more data encryption and volume encryption keys for services that support encryption at rest.
  8. After you have completed customizing your persona, choose Next.

    In the Configure ML activities section, one or more ML activities are pre-selected based on your baseline persona template.
  9. In this section, you can add or remove additional ML activities to tailor this role to your specific use case.

    Certain ML activities require additional information to complete the role setup. For example, selecting the S3 Bucket Access ML activity requires you to specify a list of S3 buckets to grant access to. Other ML activities may require a PassRoles entry to allow this persona to pass its permissions to a service role to perform actions on behalf of the persona. In our example, the Manage ML Jobs ML activity requires a PassRoles entry.
  10. Enter the role ARN for the service role you created earlier.
    You can add multiple entries by choosing Add, which creates an array of the specified values in the resulting role.
  11. After you have selected all the appropriate ML activities and supplied the necessary values, choose Next.
  12. In the Add additional policies section, choose any other policies your execution role might need. You can also add tags to your execution role.
  13. Choose Next.
  14. In the Review Role section, verify that the persona configuration details are accurate, then choose Submit.

View and add final customizations to your new role

After submitting your persona, you can go to the IAM console and see the resulting role and policies that were created for you, as well as make further modifications. To get to the new role in IAM, choose Go to role in the success banner.

On the IAM console, you can view your newly created role along with the attached policies that map the ML activities you selected in Role Manager. You can change the existing policies here by selecting the policy and editing the document. This role can also be recreated via Infrastructure as Code (IaC) by simply taking the contents of the policy documents and inserting them into your existing solution.
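As an illustration, a minimal AWS CloudFormation sketch of such a role might look like the following; the single statement inside the inline policy is a placeholder to replace with the statements copied from the generated policies:

{
    "Resources": {
        "DataScientistExecutionRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": "SageMaker-dataScientistRole",
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": { "Service": "sagemaker.amazonaws.com" },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                },
                "Policies": [
                    {
                        "PolicyName": "MLActivityPolicies",
                        "PolicyDocument": {
                            "Version": "2012-10-17",
                            "Statement": [
                                {
                                    "Sid": "ReplaceWithGeneratedStatements",
                                    "Effect": "Allow",
                                    "Action": ["sagemaker:ListModels"],
                                    "Resource": "*"
                                }
                            ]
                        }
                    }
                ]
            }
        }
    }
}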

Link the new role to a user

In order for your users to access Studio, they need to be associated with the user execution role you created (in this example, based on the data scientist persona). The method of associating the user with the role varies based on the authentication method you set up for your Studio domain, either IAM or IAM Identity Center. You can find the authentication method under the Domain section in the Studio Control Panel, as shown in the following screenshots.

Depending on your authentication method, proceed to the appropriate subsection.

Access Studio via IAM

Note that if you’re using the IAM Identity Center integration with Studio, the IAM role in this section isn’t necessary. Proceed to the next section.

SageMaker Role Manager creates execution roles for access to AWS services. To allow your data scientists to assume their given persona via the console, they require a console role to get to the Studio environment.

The following example role gives the necessary permissions to allow a data scientist to access the console and assume their persona’s role inside of Studio:

{
    "Version": "2012-10-17",
    "Statement":
    [
        {
            "Sid": "DescribeCurrentDomain",
            "Effect": "Allow",
            "Action": "sagemaker:DescribeDomain",
            "Resource": "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:domain/<STUDIO-DOMAIN-ID>"
        },
        {
            "Sid": "RemoveErrorMessagesFromConsole",
            "Effect": "Allow",
            "Action":
            [
                "servicecatalog:ListAcceptedPortfolioShares",
                "sagemaker:GetSagemakerServicecatalogPortfolioStatus",
                "sagemaker:ListModels",
                "sagemaker:ListTrainingJobs",
                "servicecatalog:ListPrincipalsForPortfolio",
                "sagemaker:ListNotebookInstances",
                "sagemaker:ListEndpoints"
            ],
            "Resource": "*"
        },
        {
            "Sid": "RequiredForAccess",
            "Effect": "Allow",
            "Action":
            [
                "sagemaker:ListDomains",
                "sagemaker:ListUserProfiles"
            ],
            "Resource": "*"
        },
        {
            "Sid": "CreatePresignedURLForAccessToDomain",
            "Effect": "Allow",
            "Action": "sagemaker:CreatePresignedDomainUrl",
            "Resource": "arn:aws:sagemaker:<REGION>:<ACCOUNT-ID>:user-profile/<STUDIO-DOMAIN-ID>/<PERSONA_NAME>"
        }
    ]
}

The statement labeled RemoveErrorMessagesFromConsole can be removed without affecting the ability to get into Studio, but removing it will result in API error messages in the console UI.

Sometimes administrators give access to the console for ML practitioners to debug issues with their Studio environment. In this scenario, you want to grant additional permissions to view CloudWatch and AWS CloudTrail logs.

The following code is an example of a read-only CloudWatch Logs access policy:

{
"Version": "2012-10-17",
    "Statement": [
        {
        "Action": [
                "logs:Describe*",
                "logs:Get*",
                "logs:List*",
                "logs:StartQuery",
                "logs:StopQuery",
                "logs:TestMetricFilter",
                "logs:FilterLogEvents"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

For additional information on CloudWatch Logs policies, refer to Customer managed policy examples.

The following code is an example read-only CloudTrail access policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudtrail:Get*",
                "cloudtrail:Describe*",
                "cloudtrail:List*",
                "cloudtrail:LookupEvents"
            ],
            "Resource": "*"
        }
    ]
}

For more details and example policies, refer to Identity and Access Management for AWS CloudTrail.

  1. In the Studio Control Panel, choose Add User to create your new data scientist user.
  2. For Name, give your user a name.
  3. For Default execution role, choose the persona role that you created earlier.
  4. Choose Next.
  5. Choose the appropriate Jupyter Lab version, and whether to enable Amazon SageMaker JumpStart and SageMaker project templates.
  6. Choose Next.
  7. This post assumes you’re not using RStudio, so choose Next again to skip RStudio configuration.
  8. Choose whether to enable Amazon SageMaker Canvas support, and additionally whether to allow for time series forecasting in Canvas.
  9. Choose Submit.
    You can now see your new data science user in the Studio Control Panel.
  10. To test this user, on the Launch app menu, choose Studio.
    This redirects you to the Studio console as the selected user with their persona’s permissions.

Access Studio via IAM Identity Center

Assigning IAM Identity Center users to execution roles requires them to first exist in the IAM Identity Center directory. If they don’t exist, contact your identity administrator or refer to Manage identities in IAM Identity Center for instructions.

Note that in order to use the IAM Identity Center authentication method, its directory and your Studio domain must be in the same AWS Region.

  1. To assign IAM Identity Center users to your Studio domain, choose Assign users and Groups in the Studio Control Panel.
  2. Select your data scientist user, then choose Assign users and groups.
  3. After the user has been added to the Studio Control panel, choose the user to open the user details screen.
  4. On the User details page, choose Edit.
  5. On the Edit user profile page, under General settings, change the Default execution role to match the user execution role you created for your data scientists.
  6. Choose Next.
  7. Choose Next through the rest of the settings pages, then choose Submit to save your changes.

Now, when your data scientist logs into the IAM Identity Center portal, they will see a tile for this Studio domain. Choosing that tile logs them in to Studio with the user execution role you assigned to them.

Test your new persona

After you’re logged in to Studio, you can use the following example notebook to validate the permissions that you granted to your data science user.

You can observe that the data scientist user can only perform the actions in the notebook that their role has been permitted. For example:

  • The user is blocked from running jobs without VPC or AWS KMS configuration, if the role was customized to require them
  • The user can only access Amazon S3 resources if the role included the S3 Bucket Access ML activity
  • The user can only deploy endpoints if the role included the corresponding ML activity

Clean up

To clean up the resources you created in this walkthrough, complete the following steps:

  1. Remove the mapping of your new role to your users:
    1. If using Studio with IAM, delete any new Studio users you created.
    2. If using Studio with IAM Identity Center, detach the created execution role from your Studio users.
  2. On the IAM console, find your user execution role and delete it.
  3. On the IAM console, find your service role and delete it.
  4. If you created a new role for an ML administrator:
    1. Log out of your account as the ML administrator role, and back in as another administrator that has IAM permissions.
    2. Delete the ML administrator role that you created.

Conclusion

Until recently, in order to build out SageMaker roles with customized permissions, you had to start from scratch. With the new SageMaker Role Manager, you can use the combination of personas, pre-built ML activities, and custom policies to quickly generate customized roles in minutes. This allows your ML practitioners to start working in SageMaker faster.

To learn more about how to use SageMaker Role Manager, refer to the SageMaker Role Manager Developer Guide.


About the authors

Giuseppe Zappia is a Senior Solutions Architect at AWS, with over 20 years of experience in full stack software development, distributed systems design, and cloud architecture. In his spare time, he enjoys playing video games, programming, watching sports, and building things.

Ram Vittal is a Principal ML Solutions Architect at AWS. He has over 20 years of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure and scalable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he enjoys riding his motorcycle, playing tennis, and photography.

Arvind Sowmyan is a Senior Software Development Engineer on the SageMaker Model Governance team where he specializes in building scalable webservices with a focus on enterprise security. Prior to this, he worked on the Training Jobs platform where he was a part of the SageMaker launch team. In his spare time, he enjoys illustrating comics, exploring virtual reality and tinkering with large language models.

Ozan Eken is a Senior Product Manager at Amazon Web Services. He is passionate about building governance products in Machine Learning for enterprise customers. Outside of work, he likes exploring different outdoor activities and watching soccer.


Build an agronomic data platform with Amazon SageMaker geospatial capabilities

The world is at increasing risk of global food shortage as a consequence of geopolitical conflict, supply chain disruptions, and climate change. Simultaneously, there’s an increase in overall demand from population growth and shifting diets that focus on nutrient- and protein-rich food. To meet the excess demand, farmers need to maximize crop yield and effectively manage operations at scale, using precision farming technology to stay ahead.

Historically, farmers have relied on inherited knowledge, trial and error, and non-prescriptive agronomic advice to make decisions. Key decisions include what crops to plant, how much fertilizer to apply, how to control pests, and when to harvest. However, with an increasing demand for food and the need to maximize harvest yield, farmers need more information in addition to inherited knowledge. Innovative technologies like remote sensing, IoT, and robotics have the potential to help farmers move past legacy decision-making. Data-driven decisions fueled by near-real-time insights can enable farmers to close the gap on increased food demand.

Although farmers have traditionally collected data manually from their operations by recording equipment and yield data or taking notes of field observations, builders of agronomic data platforms on AWS help farmers work with their trusted agronomic advisors use that data at scale. Small fields and operations more easily allow a farmer to see the entire field to look for issues affecting the crop. However, scouting each field on a frequent basis for large fields and farms is not feasible, and successful risk mitigation requires an integrated agronomic data platform that can bring insights at scale. These platforms help farmers make sense of their data by integrating information from multiple sources for use in visualization and analytics applications. Geospatial data, including satellite imagery, soil data, weather, and topography data, are layered together with data collected by agricultural equipment during planting, nutrient application, and harvest operations. By unlocking insights through enhanced geospatial data analytics, advanced data visualizations, and automation of workflows via AWS technology, farmers can identify specific areas of their fields and crops that are experiencing an issue and take action to protect their crops and operations. These timely insights help farmers better work with their trusted agronomists to produce more, reduce their environmental footprint, improve their profitability, and keep their land productive for generations to come.

Although farmers have traditionally collected data manually from their operations by recording equipment and yield data or taking notes of field observations, builders of agronomic data platforms on AWS help farmers and their trusted agronomic advisors use that data at scale. Small fields and operations more easily allow a farmer to see the entire field to look for issues affecting the crop. However, scouting each field on a frequent basis is not feasible for large fields and farms, and successful risk mitigation requires an integrated agronomic data platform that can deliver insights at scale. These platforms help farmers make sense of their data by integrating information from multiple sources for use in visualization and analytics applications.

Geospatial data, including satellite imagery, soil data, weather, and topography data, are layered together with data collected by agricultural equipment during planting, nutrient application, and harvest operations. By unlocking insights through enhanced geospatial data analytics, advanced data visualizations, and automation of workflows via AWS technology, farmers can identify specific areas of their fields and crops that are experiencing an issue and take action to protect their crops and operations. These timely insights help farmers work better with their trusted agronomists to produce more, reduce their environmental footprint, improve their profitability, and keep their land productive for generations to come.

In this post, we look at how you can integrate the predictions generated by Amazon SageMaker geospatial capabilities into the user interface of an agronomic data platform. Furthermore, we discuss how software development teams are adding advanced machine learning (ML)-driven insights, including remote sensing algorithms, cloud masking (automatically detecting clouds within satellite imagery), and automated image processing pipelines, to their agronomic data platforms. Together, these additions help agronomists, software developers, ML engineers, data scientists, and remote sensing teams provide scalable, valuable decision-making support systems to farmers. This post also provides an example end-to-end notebook and GitHub repository that demonstrates SageMaker geospatial capabilities, including ML-based farm field segmentation and pre-trained geospatial models for agriculture.

Adding geospatial insights and predictions into agronomic data platforms

Established mathematical and agronomic models combined with satellite imagery enable visualization of the health and status of a crop by satellite image, pixel by pixel, over time. However, these established models require access to satellite imagery that is not obstructed by clouds or other atmospheric interference that reduces the quality of the image. Without identifying and removing clouds from each processed image, predictions and insights will have significant inaccuracies and agronomic data platforms will lose the trust of the farmer. Because agronomic data platform providers commonly serve customers comprising thousands of farm fields across varying geographies, agronomic data platforms require computer vision and an automated system to analyze, identify, and filter out clouds or other atmospheric issues within each satellite image before further processing or providing analytics to customers.

Developing, testing, and improving ML computer vision models that detect clouds and atmospheric issues in satellite imagery presents challenges for builders of agronomic data platforms. First, building data pipelines to ingest satellite imagery requires time, software development resources, and IT infrastructure. Each satellite imagery provider can differ greatly from each other. Satellites frequently collect imagery at different spatial resolutions; resolutions can range from many meters per pixel to very high-resolution imagery measured in centimeters per pixel. Additionally, each satellite may collect imagery with different multi-spectral bands. Some bands have been thoroughly tested and show strong correlation with plant development and health indicators, and other bands can be irrelevant for agriculture. Satellite constellations revisit the same spot on earth at different rates. Small constellations may revisit a field every week or more, and larger constellations may revisit the same area multiple times per day. These differences in satellite images and frequencies also lead to differences in API capabilities and features. Combined, these differences mean agronomic data platforms may need to maintain multiple data pipelines with complex ingestion methodologies.

Second, after the imagery is ingested and made available to remote sensing teams, data scientists, and agronomists, these teams must engage in a time-consuming process of accessing, processing, and labeling each region within each image as cloudy. With thousands of fields spread across varying geographies, and multiple satellite images per field, the labeling process can take a significant amount of time and must be continually trained to account for business expansion, new customer fields, or new sources of imagery.

Integrated access to Sentinel satellite imagery and data for ML

By using SageMaker geospatial capabilities for remote sensing ML model development, and by consuming satellite imagery from the public Amazon Simple Storage Service (Amazon S3) bucket conveniently available through AWS Data Exchange, builders of agronomic data platforms on AWS can achieve their goals faster and more easily. The S3 bucket always has the most up-to-date satellite imagery from Sentinel-1 and Sentinel-2, because Open Data Exchange and the Amazon Sustainability Data Initiative provide automated, built-in access to satellite imagery.

The following diagram illustrates this architecture.


SageMaker geospatial capabilities include built-in pre-trained deep neural network models such as land use classification and cloud masking, with an integrated catalog of geospatial data sources including satellite imagery, maps, and location data from AWS and third parties. With an integrated geospatial data catalog, SageMaker geospatial customers have easier access to satellite imagery and other geospatial datasets that remove the burden of developing complex data ingestion pipelines. This integrated data catalog can accelerate your own model building and the processing and enrichment of large-scale geospatial datasets with purpose-built operations such as time statistics, resampling, mosaicing, and reverse geocoding. The ability to easily ingest imagery from Amazon S3 and use SageMaker geospatial pre-trained ML models that automatically identify clouds and score each Sentinel-2 satellite image removes the need to engage remote sensing, agronomy, and data science teams to ingest, process, and manually label thousands of satellite images with cloudy regions.

SageMaker geospatial capabilities support the ability to define an area of interest (AOI) and a time of interest (TOI), search within the Open Data Exchange S3 bucket archive for images with a geospatial intersect that meets the request, and return true color images, Normalized Difference Vegetation Index (NDVI), cloud detection and scores, and land cover. NDVI is a common index used with satellite imagery to understand the health of crops by visualizing measurements of the amount of chlorophyll and photosynthetic activity via a newly processed and color-coded image.
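The following is a minimal sketch of such a search using the sagemaker-geospatial boto3 client; the collection ARN and coordinates are placeholders, and the asset key in the last line is an assumption that varies by collection:

import boto3
from datetime import datetime

geo = boto3.client("sagemaker-geospatial")

response = geo.search_raster_data_collection(
    # Discover collection ARNs with geo.list_raster_data_collections()
    Arn="<SENTINEL-2-RASTER-DATA-COLLECTION-ARN>",
    RasterDataCollectionQuery={
        # Area of interest: a polygon around the farm field (longitude/latitude pairs)
        "AreaOfInterest": {
            "AreaOfInterestGeometry": {
                "PolygonGeometry": {
                    "Coordinates": [[
                        [-90.20, 38.60], [-90.10, 38.60], [-90.10, 38.70],
                        [-90.20, 38.70], [-90.20, 38.60],
                    ]]
                }
            }
        },
        # Time of interest: one growing season
        "TimeRangeFilter": {
            "StartTime": datetime(2022, 5, 1),
            "EndTime": datetime(2022, 9, 30),
        },
        # Keep only scenes with low cloud cover
        "PropertyFilters": {
            "Properties": [{"Property": {"EoCloudCover": {"LowerBound": 0, "UpperBound": 20}}}],
            "LogicalOperator": "AND",
        },
    },
)
for item in response["Items"]:
    print(item["Assets"]["visual"]["Href"])  # asset keys differ by collection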

Users of SageMaker geospatial capabilities can use the pre-built NDVI index or develop their own. SageMaker geospatial capabilities make it easier for data scientists and ML engineers to build, train, and deploy ML models faster and at scale using geospatial data and with less effort than before.
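As a concrete illustration, NDVI is computed per pixel as (NIR - Red) / (NIR + Red); for Sentinel-2, those are bands B08 and B04. The following sketch assumes the two bands have been downloaded as GeoTIFF files readable with rasterio:

import numpy as np
import rasterio

# Read the red (B04) and near-infrared (B08) Sentinel-2 bands
with rasterio.open("B04.tif") as red_src, rasterio.open("B08.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")

# NDVI = (NIR - Red) / (NIR + Red), guarding against division by zero
ndvi = np.where((nir + red) == 0, 0, (nir - red) / (nir + red))
print(f"NDVI range: {ndvi.min():.2f} to {ndvi.max():.2f}")  # healthy vegetation trends toward +1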

Farmers and agronomists need fast access to insights in the field and at home

Promptly delivering processed imagery and insights to farmers and stakeholders is important for agribusinesses and decision-making in the field. Identifying areas of poor crop health across each field during critical windows of time allows the farmer to mitigate risks by applying fertilizers, herbicides, and pesticides where needed, and even to identify areas for potential crop insurance claims. It is common for agronomic data platforms to comprise a suite of applications, including web applications and mobile applications. These applications provide intuitive user interfaces that help farmers and their trusted stakeholders securely review each of their fields and images while at home, in the office, or standing in the field itself. These web and mobile applications, however, need to consume and quickly display processed imagery and agronomic insights via APIs.

Amazon API Gateway makes it easy for developers to create, publish, maintain, monitor, and secure RESTful and WebSocket APIs at scale. With API Gateway, API access and authorization are integrated with AWS Identity and Access Management (IAM), and it offers native OIDC and OAuth2 support, as well as Amazon Cognito integration. Amazon Cognito is a cost-effective customer identity and access management (CIAM) service supporting a secure identity store with federation options that can scale to millions of users.

Raw, unprocessed satellite imagery can be very large, in some instances hundreds of megabytes or even gigabytes per image. Because many agricultural areas of the world have poor or no cellular connectivity, it’s important to process and serve imagery and insights in smaller formats and in ways that limit required bandwidth. Therefore, by using AWS Lambda to deploy a tile server, smaller sized GeoTIFFs, JPEGs, or other imagery formats can be returned based on the current map view being displayed to a user, as opposed to much larger file sizes and types that decrease performance. By combining a tile server deployed through Lambda functions with API Gateway to manage requests for web and mobile applications, farmers and their trusted stakeholders can consume imagery and geospatial data from one or hundreds of fields at once, with reduced latency, and achieve an optimal user experience.

SageMaker geospatial capabilities can be accessed via an intuitive user interface that enables you to gain easy access to a rich catalog of geospatial data, transform and enrich data, train or use purpose-build models, deploy models for predictions, and visualize and explore data on integrated maps and satellite images. To read more about the SageMaker geospatial user experience, refer to How Xarvio accelerated pipelines of spatial data for digital farming with Amazon SageMaker geospatial capabilities.

Agronomic data platforms provide several layers of data and insights at scale

The following example user interface demonstrates how a builder of agronomic data platforms may integrate insights delivered by SageMaker geospatial capabilities.


This example user interface depicts common geospatial data overlays consumed by farmers and agricultural stakeholders. Here, the consumer has selected three separate data overlays. The first is the underlying Sentinel-2 natural color satellite image, taken in October 2020 and made available via the integrated SageMaker geospatial data catalog. This image was filtered using the SageMaker geospatial pre-trained model that identifies cloud cover. The second data overlay is a set of field boundaries, depicted with a white outline. A field boundary is commonly a polygon of latitude and longitude coordinates that reflects the natural topography of a farm field, or an operational boundary differentiating between crop plans. The third data overlay is processed imagery data in the form of the Normalized Difference Vegetation Index (NDVI). The NDVI imagery is overlaid on the respective field boundary, and an NDVI color classification chart is depicted on the left side of the page.

The following image depicts the results using a SageMaker pre-trained model that identifies cloud cover.


In this image, the model identifies clouds within the satellite image and applies a yellow mask over each cloud within the image. By removing masked pixels (clouds) from further image processing, downstream analytics and products have improved accuracy and provide value to farmers and their trusted advisors.

In areas of poor cellular coverage, reducing latency improves the user experience

To address the need for low latency when evaluating geospatial data and remote sensing imagery, you can use Amazon ElastiCache to cache processed images retrieved from tile requests made via Lambda. By storing the requested imagery into a cache memory, latency is further reduced and there is no need to re-process imagery requests. This can improve application performance and reduce pressure on databases. Because Amazon ElastiCache supports many configuration options for caching strategies, cross-region replication, and auto scaling, agronomic data platform providers can scale up quickly based upon application needs, and continue to achieve cost efficiency by paying for only what is needed.
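A minimal sketch of this caching pattern inside a Lambda tile handler follows; the Redis endpoint environment variable, the tile key scheme, and the render_tile helper are hypothetical placeholders:

import base64
import os

import redis  # redis-py, packaged with the Lambda deployment artifact

cache = redis.Redis(host=os.environ["ELASTICACHE_ENDPOINT"], port=6379)

def handler(event, context):
    # Tile key derived from the requested zoom/x/y path parameters (hypothetical scheme)
    p = event["pathParameters"]
    key = f"tile:{p['z']}:{p['x']}:{p['y']}"

    png = cache.get(key)
    if png is None:
        # render_tile is a hypothetical helper that produces the tile from source imagery
        png = render_tile(p["z"], p["x"], p["y"])
        cache.setex(key, 3600, png)  # cache the rendered tile for one hour

    # Return the PNG through API Gateway as a base64-encoded binary response
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "image/png"},
        "body": base64.b64encode(png).decode("utf-8"),
        "isBase64Encoded": True,
    }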

Conclusion

This post focused on geospatial data processing, implementing ML-enabled remote sensing insights, and ways to streamline and simplify the development and enhancement of agronomic data platforms on AWS. It illustrated several methods and services that builders of agronomic data platforms on AWS services can use to achieve their goals, including SageMaker, Lambda, Amazon S3, Open Data Exchange, and ElastiCache.

To follow an end-to-end example notebook that demonstrates SageMaker geospatial capabilities, access the example notebook available in the following GitHub repository. You can review how to identify agricultural fields through ML segmentation models, or explore the preexisting SageMaker geospatial models and the bring your own model (BYOM) functionality on geospatial tasks such as land use and land cover classification. The end-to-end example notebook is discussed in detail in the companion post How Xarvio accelerated pipelines of spatial data for digital farming with Amazon SageMaker Geospatial.

Please contact us to learn more about how the agricultural industry is solving important problems related to global food supply, traceability, and sustainability initiatives by using the AWS Cloud.


About the authors

Will Conrad is the Head of Solutions for the Agriculture Industry at AWS. He is passionate about helping customers use technology to improve the livelihoods of farmers, the environmental impact of agriculture, and the consumer experience for people who eat food. In his spare time, he fixes things, plays golf, and takes orders from his four children.

Bishesh Adhikari is a Machine Learning Prototyping Architect at the AWS Prototyping team. He works with AWS customers to build solutions on various AI & Machine Learning use-cases to accelerate their journey to production. In his free time, he enjoys hiking, travelling, and spending time with family and friends.

Priyanka Mahankali has been a Guidance Solutions Architect at AWS for more than 5 years, building cross-industry solutions, including technology for global agriculture customers. She is passionate about bringing cutting-edge use cases to the forefront and helping customers build strategic solutions on AWS.

Ron Osborne is AWS Global Technology Lead for Agriculture – WWSO and a Senior Solution Architect. Ron is focused on helping AWS agribusiness customers and partners develop and deploy secure, scalable, resilient, elastic, and cost-effective solutions. Ron is a cosmology enthusiast, an established innovator within ag-tech, and is passionate about positioning customers and partners for business transformation and sustainable success.

Read More

Separate lines of business or teams with multiple Amazon SageMaker domains

Separate lines of business or teams with multiple Amazon SageMaker domains

Amazon SageMaker Studio is a fully integrated development environment (IDE) for machine learning (ML) that enables data scientists and developers to perform every step of the ML workflow, from preparing data to building, training, tuning, and deploying models.

To access SageMaker Studio, Amazon SageMaker Canvas, or other Amazon ML environments like RStudio on Amazon SageMaker, you must first provision a SageMaker domain. A SageMaker domain includes an associated Amazon Elastic File System (Amazon EFS) volume; a list of authorized users; and a variety of security, application, policy, and Amazon Virtual Private Cloud (Amazon VPC) configurations.

Administrators can now provision multiple SageMaker domains in order to separate different lines of business or teams within a single AWS account. This creates a logical separation between the users, files storage, and configuration settings for various groups in your organization. As an example, your organization may want to separate your financial line of business from the sustainability research division, as shown in the following multi-domain console.

domains

Creating multiple SageMaker domains also allows you to granularly set domain-level configurations such as VPC configurations in order to permit public internet access for some groups’ research, while enforcing that traffic goes through a specified VPC for business units with greater restriction.

Automated tagging

In addition to separating users, file storage, and domain configurations, administrators can also separate SageMaker resources that are created within their domain. By default, SageMaker now automatically tags new SageMaker resources such as training jobs, processing jobs, experiments, pipelines, and model registry entries with their respective sagemaker:domain-arn. SageMaker also tags the resource with the sagemaker:user-profile-arn or sagemaker:space-arn to designate the resource creation at an even more granular level.

Cost allocation

Administrators can use automated tagging to easily monitor costs associated with their line of business, teams, individual users, or individual business problems by using tools such as AWS Budgets and AWS Cost Explorer. As an example, an administrator can attach a cost allocation tag for the sagemaker:domain-arn tag.
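If you prefer to automate this step, the tag can also be activated programmatically through the Cost Explorer API. The following is a minimal sketch; it assumes the tag already exists on resources and that the caller has billing permissions:

import boto3

ce = boto3.client('ce')  # AWS Cost Explorer API

# Activate the domain ARN tag as a cost allocation tag
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[
        {'TagKey': 'sagemaker:domain-arn', 'Status': 'Active'},
    ]
)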

cost allocation tags

This allows them to utilize Cost Explorer to visualize the notebook spend for a given domain.

AWS cost management

Domain-level resource isolation

Administrators can attach AWS Identity and Access Management (IAM) policies that ensure a domain’s user can only create and open SageMaker resources that are originating from their respective domain. The following code is an example of such a policy:

{
    "Version": "2012-10-17",
    "Statement":
    [
        {
            "Sid": "CreateRequireDomainTag",
            "Effect": "Allow",
            "Action":
            [
                "SageMaker:Create*",
                "SageMaker:Update*"
            ],
            "Resource": "*",
            "Condition":
            {
                "ForAllValues:StringEquals":
                {
                    "aws:TagKeys":
                    [
                        "sagemaker:domain-arn"
                    ]
                }
            }
        },
        {
            "Sid": "ResourceAccessRequireDomainTag",
            "Effect": "Allow",
            "Action":
            [
                "SageMaker:Update*",
                "SageMaker:Delete*",
                "SageMaker:Describe*"
            ],
            "Resource": "*",
            "Condition":
            {
                "StringEquals":
                {
                    "aws:ResourceTag/sagemaker:domain-arn": "arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:domain/<DOMAIN_ID>"
                }
            }
        }
    ]
}

For more information, see Multiple domains overview.

Backfilling existing resources with domain tags

Since the launch of the multi-domain capability, new resources are automatically tagged with aws:ResourceTag/sagemaker:domain-arn. However, if you want to update existing resources to facilitate resource isolation, administrators can use the add-tags SageMaker API call in a script. The following example shows how to tag all existing experiments with a domain ARN:

REGION=<REGION> # Replace with your Region
domain_arn="arn:aws:sagemaker:<REGION>:<ACCOUNT_ID>:domain/<DOMAIN_ID>"
experiments=$(aws --region $REGION \
    sagemaker list-experiments)
for row in $(echo "${experiments}" | jq -r '.ExperimentSummaries[] | @base64'); do
    _jq() {
        echo ${row} | base64 --decode | jq -r ${1}
    }

    exp_name=$(_jq '.ExperimentName')
    exp_arn=$(_jq '.ExperimentArn')

    echo "Tagging resource name: $exp_name and arn: $exp_arn with sagemaker:domain-arn=$domain_arn"
    aws sagemaker \
        add-tags \
        --resource-arn "$exp_arn" \
        --tags "Key=sagemaker:domain-arn,Value=$domain_arn" \
        --region $REGION
    echo "Tagging done for: $exp_name"
    sleep 1 # Avoid throttling the AddTags API
done

You can verify that any individual resource was correctly tagged with the following code sample:

aws sagemaker \
    list-tags \
    --resource-arn <SAGEMAKER-RESOURCE-ARN> \
    --region <REGION>

Solution overview

In this section, we outline how you can set up multiple SageMaker domains in your own AWS account. You can either use the AWS Command Line Interface (AWS CLI) or the SageMaker console. Refer to Onboard to Amazon SageMaker Domain for the most up-to-date instructions on creating a domain.

Create a domain using the AWS CLI

There are no necessary API changes from the previous aws sagemaker create-domain CLI call, but there is now support for --default-space-settings if you intend to use shared spaces in SageMaker Studio. For more information, see shared spaces in Amazon SageMaker Studio.

Create a new domain with your specified configurations using aws sagemaker create-domain, and then you’re ready to populate it with users.
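If you prefer to script domain creation from Python rather than the shell, the equivalent boto3 call is sketched below; the domain name, role ARN, VPC ID, and subnet IDs are placeholders to replace with your own values:

import boto3

sm_client = boto3.client('sagemaker', region_name='<REGION>')

# Minimal sketch; add DefaultSpaceSettings if you plan to use shared spaces
response = sm_client.create_domain(
    DomainName='team-finance',  # hypothetical domain name
    AuthMode='IAM',
    DefaultUserSettings={
        'ExecutionRole': 'arn:aws:iam::<ACCOUNT_ID>:role/<SAGEMAKER_EXECUTION_ROLE>'
    },
    VpcId='<VPC_ID>',
    SubnetIds=['<SUBNET_ID>'],
)
print(response['DomainArn'])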

Create a domain using the SageMaker console

On the updated SageMaker console, you can administer your domains via the new option called SageMaker Domains in the navigation pane.

Here you’ll be presented with options to open existing domains or create a new one using the graphical interface.

create domain

Conclusion

Utilizing multiple SageMaker domains provides flexibility to meet your organizational needs. Whether you need to isolate users and their business groups, or you want to run separate domains due to configuration differences, we encourage you to stand up multiple SageMaker domains within a single AWS account!


About the Authors

Sean Morgan is an AI/ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his spare time, Sean is an active open-source contributor/maintainer and is the special interest group lead for TensorFlow Add-ons.

Arkaprava De is a Senior Software Engineer at AWS. He has been at Amazon for over 7 years and is currently working on improving the Amazon SageMaker Studio IDE experience. You can find him on LinkedIn.

Kunal Jha is a Senior Product Manager at AWS. He is focused on building Amazon SageMaker Studio as the IDE of choice for all ML development steps. In his spare time, Kunal enjoys skiing and exploring the Pacific Northwest. You can find him on LinkedIn.

Han Zhang is a Senior Software Engineer at Amazon Web Services. She is part of the launch team for Amazon SageMaker Notebooks and Amazon SageMaker Studio, and has been focusing on building secure machine learning environments for customers. In her spare time, she enjoys hiking and skiing in the Pacific Northwest.

Read More

Operationalize your Amazon SageMaker Studio notebooks as scheduled notebook jobs

Operationalize your Amazon SageMaker Studio notebooks as scheduled notebook jobs

Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In addition to the interactive ML experience, data workers also seek solutions to run notebooks as ephemeral jobs without the need to refactor code as Python modules or learn DevOps tools and best practices to automate their deployment infrastructure. Some common use cases for doing this include:

  • Regularly running model inference to generate reports
  • Scaling up a feature engineering step after having tested in Studio against a subset of data on a small instance
  • Retraining and deploying models on some cadence
  • Analyzing your team’s Amazon SageMaker usage on a regular cadence

Previously, when data scientists wanted to take the code they built interactively in notebooks and run it as batch jobs, they faced a steep learning curve with Amazon SageMaker Pipelines, AWS Lambda, Amazon EventBridge, or other solutions that are difficult to set up, use, and manage.

With SageMaker notebook jobs, you can now run your notebooks as is or in a parameterized fashion with just a few simple clicks from the SageMaker Studio or SageMaker Studio Lab interface. You can run these notebooks on a schedule or immediately. There’s no need for the end-user to modify their existing notebook code. When the job is complete, you can view the populated notebook cells, including any visualizations!

In this post, we share how to operationalize your SageMaker Studio notebooks as scheduled notebook jobs.

Solution overview

The following diagram illustrates our solution architecture. We utilize the pre-installed SageMaker extension to run notebooks as a job immediately or on a schedule.

In the following sections, we walk through the steps to create a notebook, parameterize cells, customize additional options, and schedule your job. We also include a sample use case.

Prerequisites

To use SageMaker notebook jobs, you need to be running a JupyterLab 3 JupyterServer app within Studio. For more information on how to upgrade to JupyterLab 3, refer to View and update the JupyterLab version of an app from the console. Be sure to Shut down and Update SageMaker Studio in order to pick up the latest updates.

To create job definitions that run notebooks on a schedule, you may need to add additional permissions to your SageMaker execution role.

First, add a trust relationship to your SageMaker execution role that allows events.amazonaws.com to assume your role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "events.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

Additionally, you may need to create and attach an inline policy to your execution role. The following policy is supplementary to the very permissive AmazonSageMakerFullAccess policy. For a complete and minimal set of permissions, see Install Policies and Permissions.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "events:TagResource",
                "events:DeleteRule",
                "events:PutTargets",
                "events:DescribeRule",
                "events:PutRule",
                "events:RemoveTargets",
                "events:DisableRule",
                "events:EnableRule"
            ],
            "Resource": "*",
            "Condition": {
              "StringEquals": {
                "aws:ResourceTag/sagemaker:is-scheduling-notebook-job": "true"
              }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::*:role/*",
            "Condition": {
                "StringLike": {
                    "iam:PassedToService": "events.amazonaws.com"
                }
            }
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "sagemaker:ListTags",
            "Resource": "arn:aws:sagemaker:*:*:user-profile/*/*"
        }
    ]
}

Create a notebook job

To operationalize your notebook as a SageMaker notebook job, choose the Create a notebook job icon.

Alternatively, you can choose (right-click) your notebook on the file system and choose Create Notebook Job.

In the Create job section, simply choose the right instance type for your scheduled job based on your workload: standard instances, compute optimized instances, or accelerated computing instances that contain GPUs. You can choose any of the instances available for SageMaker training jobs. For the complete list of instances available, refer to Amazon SageMaker Pricing.

When a job is complete, you can view the output notebook file with its populated cells, as well as the underlying logs from the job runs.

Parameterize cells

When moving a notebook to a production workflow, it’s important to be able to reuse the same notebook with different sets of parameters for modularity. For example, you may want to parameterize the dataset location or the hyperparameters of your model so that you can reuse the same notebook for many distinct model trainings. SageMaker notebook jobs support this through cell tags. Simply choose the double gear icon in the right pane and choose Add Tag. Then label the tag as parameters.

By default, the notebook job run uses the parameter values specified in the notebook, but alternatively, you can modify these as a configuration for your notebook job.
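For illustration, a cell tagged with parameters might look like the following minimal sketch; number_rf_estimators is the variable reused in the example later in this post, and its default value here is arbitrary:

# This cell carries the "parameters" cell tag. The values below are
# defaults; the notebook job configuration can override them at runtime.
number_rf_estimators = 100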

Configure additional options

When creating a notebook job, you can expand the Additional options section in order to customize your job definition. Studio will automatically detect the image or kernel you’re using in your notebook and pre-select it for you. Ensure that you have validated this selection.

You can also specify environment variables or startup scripts to customize your notebook run environment. For the full list of configurations, see Additional Options.

Schedule your job

To schedule your job, choose Run on a schedule and set an appropriate interval and time. Then you can choose the Notebook Jobs tab that is visible after choosing the home icon. After the notebook is loaded, choose the Notebook Job Definitions tab to pause or remove your schedule.

Example use case

For our example, we showcase an end-to-end ML workflow that prepares data from a ground truth source, trains a refreshed model from that time period, and then runs inference on the most recent data to generate actionable insights. In practice, you might run a complete end-to-end workflow, or simply operationalize one step of your workflow. You can schedule an AWS Glue interactive session for daily data preparation, or run a batch inference job that generates graphical results directly in your output notebook.

The full notebook for this example can be found in our SageMaker Examples GitHub repository. The use case assumes that we’re a telecommunications company that is looking to schedule a notebook that predicts probable customer churn based on a model trained with the most recent data we have available.

To start, we gather the most recently available customer data and perform some preprocessing on it:

import pandas as pd
from synthetic_data import generate_data

# generate_data and process_data are helpers provided in the example notebook
previous_two_weeks_data = generate_data(5000, label_known=True)
todays_data = generate_data(300, label_known=False)

processed_prior_data = process_data(previous_two_weeks_data, label_known=True)
processed_todays_data = process_data(todays_data, label_known=False)

We train our refreshed model on this updated training data in order to make accurate predictions on todays_data:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, confusion_matrix, ConfusionMatrixDisplay

y = np.ravel(processed_prior_data[["Churn"]])
x = processed_prior_data.drop(["Churn"], axis=1)

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

# number_rf_estimators is set in a cell tagged "parameters" and can be overridden per job
clf = RandomForestClassifier(n_estimators=int(number_rf_estimators), criterion="gini")
clf.fit(x_train, y_train)

Because we’re going to schedule this notebook as a daily report, we want to capture how well our refreshed model performs on our validation set so that we can be confident in its future predictions. The results in the following screenshot are from our scheduled inference report.

Lastly, we capture the predicted results for today’s data in a database so that actions can be taken based on the results of this model.

Once you understand the notebook, feel free to run it as an ephemeral job using the Run now option described earlier, or test out the scheduling functionality.

Clean up

If you followed along with our example, be sure to pause or delete your notebook job’s schedule to avoid incurring ongoing charges.

Conclusion

Bringing notebooks to production with SageMaker notebook jobs vastly simplifies the undifferentiated heavy lifting required by data workers. Whether you’re scheduling end-to-end ML workflows or a piece of the puzzle, we encourage you to put some notebooks in production using SageMaker Studio or SageMaker Studio Lab! To learn more, see Notebook-based Workflows.


About the authors

Sean Morgan is a Senior ML Solutions Architect at AWS. He has experience in the semiconductor and academic research fields, and uses his experience to help customers reach their goals on AWS. In his free time, Sean is an active open-source contributor/maintainer and is the special interest group lead for TensorFlow Addons.

Sumedha Swamy is a Principal Product Manager at Amazon Web Services. He leads the SageMaker Studio team to build it into the IDE of choice for interactive data science and data engineering workflows. He has spent the past 15 years building customer-obsessed consumer and enterprise products using machine learning. In his free time, he likes photographing the amazing geology of the American Southwest.

Edward Sun is a Senior SDE working for SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions and simplifying the customer experience to integrate SageMaker Studio with popular technologies in the data engineering and ML ecosystem. In his spare time, Edward is a big fan of camping, hiking, and fishing, and enjoys spending time with his family.

Read More

How xarvio Digital Farming Solutions accelerates its development with Amazon SageMaker geospatial capabilities

How xarvio Digital Farming Solutions accelerates its development with Amazon SageMaker geospatial capabilities

This is a guest post co-written by Julian Blau, Data Scientist at xarvio Digital Farming Solutions; BASF Digital Farming GmbH, and Antonio Rodriguez, AI/ML Specialist Solutions Architect at AWS

xarvio Digital Farming Solutions is a brand from BASF Digital Farming GmbH, which is part of BASF Agricultural Solutions division. xarvio Digital Farming Solutions offers precision digital farming products to help farmers optimize crop production. Available globally, xarvio products use machine learning (ML), image recognition technology, and advanced crop and disease models, in combination with data from satellites and weather station devices, to deliver accurate and timely agronomic recommendations to manage the needs of individual fields. xarvio products are tailored to local farming conditions, can monitor growth stages, and recognize diseases and pests. They increase efficiency, save time, reduce risks, and provide higher reliability for planning and decision-making—all while contributing to sustainable agriculture.

We work with different geospatial data, including satellite imagery of the areas where our users’ fields are located, for some of our use cases. Therefore, we use and process hundreds of large image files daily. Initially, we had to invest a lot of manual work and effort to ingest, process, and analyze this data using third-party tools, open-source libraries, or general-purpose cloud services. In some instances, this could take up to 2 months for us to build the pipelines for each specific project. Now, by utilizing the geospatial capabilities of Amazon SageMaker, we have reduced this time to just 1–2 weeks.

This time-saving is the result of automating the geospatial data pipelines to deliver our use cases more efficiently, along with using built-in reusable components for speeding up and improving similar projects in other geographical areas, while applying the same proven steps for other use cases based on similar data.

In this post, we go through an example use case to describe some of the techniques we commonly use, and show how implementing these using SageMaker geospatial functionalities in combination with other SageMaker features delivers measurable benefits. We also include code examples so you can adapt these to your own specific use cases.

Overview of solution

A typical remote sensing project for developing new solutions requires a step-by-step analysis of imagery taken by optical satellites such as Sentinel or Landsat, in combination with other data, including weather forecasts or specific field properties. The satellite images provide us with valuable information used in our digital farming solutions to help our users accomplish various tasks:

  • Detecting diseases early in their fields
  • Planning the right nutrition and treatments to be applied
  • Getting insights on weather and water for planning irrigation
  • Predicting crop yield
  • Performing other crop management tasks

To achieve these goals, our analyses typically require preprocessing of the satellite images with different techniques that are common in the geospatial domain.

To demonstrate the capabilities of SageMaker geospatial, we experimented with identifying agricultural fields through ML segmentation models. Additionally, we explored the preexisting SageMaker geospatial models and the bring your own model (BYOM) functionality on geospatial tasks such as land use and land cover classification, or crop classification, often requiring panoptic or semantic segmentation techniques as additional steps in the process.

In the following sections, we go through some examples of how to perform these steps with SageMaker geospatial capabilities. You can also follow these in the end-to-end example notebook available in the following GitHub repository.

As previously mentioned, we selected the land cover classification use case, which consists of identifying the type of physical coverage on a given geographical area of the earth’s surface, organized into a set of classes including vegetation, water, or snow. This high-resolution classification allows us to detect the details of the location of the fields and their surroundings with high accuracy, which can later be chained with other analyses such as change detection in crop classification.

Client setup

First, let’s assume we have users with crops being cultivated in a given geographical area that we can identify within a polygon of geospatial coordinates. For this post, we define an example area over Germany. We can also define a given time range, for example in the first months of 2022. See the following code:

### Coordinates for the polygon of your area of interest...
coordinates = [
    [9.181602157004177, 53.14038825707946],
    [9.181602157004177, 52.30629767547948],
    [10.587520893823973, 52.30629767547948],
    [10.587520893823973, 53.14038825707946],
    [9.181602157004177, 53.14038825707946],
]
### Time-range of interest...
time_start = "2022-01-01T12:00:00Z"
time_end = "2022-05-01T12:00:00Z"

In our example, we work with the SageMaker geospatial SDK through programmatic or code interaction, because we’re interested in building code pipelines that can be automated with the different steps required in our process. Note that you could also work with a UI through the graphical extensions provided with SageMaker geospatial in Amazon SageMaker Studio if you prefer this approach, as shown in the following screenshots. To access the geospatial Studio UI, open the SageMaker Studio Launcher and choose Manage Geospatial resources. You can find more details in the documentation to Get Started with Amazon SageMaker Geospatial Capabilities.

Geospatial UI launcher

Geospatial UI main

Geospatial UI list of jobs

Here you can graphically create, monitor, and visualize the results of the Earth Observation jobs (EOJs) that you run with SageMaker geospatial features.

Back to our example, the first step for interacting with the SageMaker geospatial SDK is to set up the client. We can do this by establishing a session with the botocore library:

import botocore

region = 'us-east-1'  # Replace with your Region

session = botocore.session.get_session()
gsClient = session.create_client(
    service_name='sagemaker-geospatial',
    region_name=region)

From this point on, we can use the client for running any EOJs of interest.

Obtaining data

For this use case, we start by collecting satellite imagery for our given geographical area. Depending on the location of interest, there might be more or less frequent coverage by the available satellites, which have their imagery organized in what are usually referred to as raster collections.

With the geospatial capabilities of SageMaker, you have direct access to high-quality data sources for obtaining the geospatial data directly, including those from AWS Data Exchange and the Registry of Open Data on AWS, among others. We can run the following command to list the raster collections already provided by SageMaker:

list_raster_data_collections_resp = gsClient.list_raster_data_collections()

This returns the details for the different raster collections available, including the Landsat C2L2 Surface Reflectance (SR), Landsat C2L2 Surface Temperature (ST), or the Sentinel 2A & 2B. Conveniently, Level 2A imagery is already optimized into Cloud-Optimized GeoTIFFs (COGs). See the following code:

…
{'Name': 'Sentinel 2 L2A COGs',
  'Arn': 'arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8',
  'Type': 'PUBLIC',
  'Description': 'Sentinel-2a and Sentinel-2b imagery, processed to Level 2A (Surface Reflectance) and converted to Cloud-Optimized GeoTIFFs'
…

Let’s take this last one for our example, by setting our data_collection_arn parameter to the Sentinel 2 L2A COGs’ collection ARN.
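In code, this is a single assignment using the ARN returned by the list call above:

# ARN of the Sentinel 2 L2A COGs collection, as returned by list_raster_data_collections
data_collection_arn = 'arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8'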

We can also search the available imagery for a given geographical location by passing the coordinates of a polygon we defined as our area of interest (AOI). This allows you to visualize the image tiles available that cover the polygon you submit for the specified AOI, including the Amazon Simple Storage Service (Amazon S3) URIs for these images. Note that satellite imagery is typically provided in different bands according to the wavelength of the observation; we discuss this more later in the post.

response = gsClient.search_raster_data_collection(**eoj_input_config, Arn=data_collection_arn)

The preceding code returns the S3 URIs for the different image tiles available, which you can visualize directly with any GeoTIFF-compatible library such as rasterio. For example, let’s visualize two of the True Color Image (TCI) tiles.

…
'visual': {'Href': 'https://sentinel-cogs.s3.us-west-2.amazonaws.com/sentinel-s2-l2a-cogs/32/U/NC/2022/3/S2A_32UNC_20220325_0_L2A/TCI.tif'},
…

True Color Image 1, True Color Image 2
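As a minimal sketch of this visualization step, assuming rasterio is installed and can read remote COGs over HTTPS, you could render one of the returned tiles as follows:

import rasterio
from rasterio.plot import show

tci_url = ('https://sentinel-cogs.s3.us-west-2.amazonaws.com/sentinel-s2-l2a-cogs/'
           '32/U/NC/2022/3/S2A_32UNC_20220325_0_L2A/TCI.tif')

with rasterio.open(tci_url) as src:
    # Read a decimated overview of the RGB bands to keep memory usage low
    rgb = src.read(out_shape=(src.count, src.height // 10, src.width // 10))
    show(rgb)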

Processing techniques

Some of the most common preprocessing techniques that we apply include cloud removal, geo mosaic, temporal statistics, band math, or stacking. All of these processes can now be done directly through the use of EOJs in SageMaker, without the need to perform manual coding or using complex and expensive third-party tools. This makes it 50% faster to build our data processing pipelines. With SageMaker geospatial capabilities, we can run these processes over different input types. For example:

  • Directly run a query for any of the raster collections included with the service through the RasterDataCollectionQuery parameter
  • Pass imagery stored in Amazon S3 as an input through the DataSourceConfig parameter
  • Simply chain the results of a previous EOJ through the PreviousEarthObservationJobArn parameter

This flexibility allows you to build any kind of processing pipeline you need.
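For instance, chaining the output of one job into the next is a one-line change to the input configuration; the following sketch assumes cr_eoj_arn holds the ARN of the cloud removal job shown later in this post:

# Reuse a previous EOJ's output as the input of the next job
eoj_input_config = {
    'PreviousEarthObservationJobArn': cr_eoj_arn
}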

The following diagram illustrates the processes we cover in our example.

Geospatial Processing tasks

In our example, we use a raster data collection query as input, for which we pass the coordinates of our AOI and time range of interest. We also specify a percentage of maximum cloud coverage of 2%, because we want clear and noise-free observations of our geographical area. See the following code:

eoj_input_config = {
    "RasterDataCollectionQuery": {
        "RasterDataCollectionArn": data_collection_arn,
        "AreaOfInterest": {
            "AreaOfInterestGeometry": {"PolygonGeometry": {"Coordinates": [coordinates]}}
        },
        "TimeRangeFilter": {"StartTime": time_start, "EndTime": time_end},
        "PropertyFilters": {
            "Properties": [
                {"Property": {"EoCloudCover": {"LowerBound": 0, "UpperBound": 2}}}
            ]
        },
    }
}

For more information on supported query syntax, refer to Create an Earth Observation Job.

Cloud gap removal

Satellite observations are often less useful due to high cloud coverage. Cloud gap filling or cloud removal is the process of replacing the cloudy pixels from the images, which can be done with different methods to prepare the data for further processing steps.

With SageMaker geospatial capabilities, we can achieve this by specifying a CloudRemovalConfig parameter in the configuration of our job.

eoj_config =  {
    'CloudRemovalConfig': {
        'AlgorithmName': 'INTERPOLATION',
        'InterpolationValue': '-9999'
    }
}

Note that we’re using an interpolation algorithm with a fixed value in our example, but other configurations are supported, as explained in the Create an Earth Observation Job documentation. Interpolation estimates a replacement value for each cloudy pixel by considering the surrounding pixels.

We can now run our EOJ with our input and job configurations:

response = gsClient.start_earth_observation_job(
    Name =  'cloudremovaljob',
    ExecutionRoleArn = role,
    InputConfig = eoj_input_config,
    JobConfig = eoj_config,
)

This job takes a few minutes to complete depending on the input area and processing parameters.
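Because EOJs run asynchronously, you typically poll the job status before exporting the results. A minimal polling sketch, reusing the response from start_earth_observation_job above, could be:

import time

cr_eoj_arn = response['Arn']  # ARN of the cloud removal EOJ started above

# Poll until the EOJ reaches a terminal state
status = gsClient.get_earth_observation_job(Arn=cr_eoj_arn)['Status']
while status not in ('COMPLETED', 'FAILED'):
    time.sleep(30)
    status = gsClient.get_earth_observation_job(Arn=cr_eoj_arn)['Status']
print(f'EOJ finished with status: {status}')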

When it’s complete, the results of the EOJ are stored in a service-owned location, from where we can either export the results to Amazon S3, or chain these as input for another EOJ. In our example, we export the results to Amazon S3 by running the following code:

response = gsClient.export_earth_observation_job(
    Arn = cr_eoj_arn,
    ExecutionRoleArn = role,
    OutputConfig = {
        'S3Data': {
            'S3Uri': f's3://{bucket}/{prefix}/cloud_removal/',
            'KmsKeyId': ''
        }
    }
)

Now we’re able to visualize the resulting imagery stored in our specified Amazon S3 location for the individual spectral bands. For example, let’s inspect two of the blue band images returned.

Alternatively, you can also check the results of the EOJ graphically by using the geospatial extensions available in Studio, as shown in the following screenshots.

Cloud Removal UI 1   Cloud Removal UI 2

Temporal statistics

Because satellites continuously orbit Earth, images of a given geographical area of interest are taken at specific time frames with a specific temporal frequency, such as daily, every 5 days, or every 2 weeks, depending on the satellite. The temporal statistics process enables us to combine observations taken at different times to produce an aggregated view, such as a yearly mean, or the mean of all observations in a specific time range, for the given area.

With SageMaker geospatial capabilities, we can do this by setting the TemporalStatisticsConfig parameter. In our example, we obtain the yearly mean aggregation for the Near Infrared (NIR) band, because this band can reveal vegetation density differences below the top of the canopies:

eoj_config =  {
    'TemporalStatisticsConfig': {
        'GroupBy': 'YEARLY',
        'Statistics': ['MEAN'],
        'TargetBands': ['nir']
    }
}

After a few minutes running an EOJ with this config, we can export the results to Amazon S3 to obtain imagery like the following examples, in which we can observe the different vegetation densities represented with different color intensities. Note the EOJ can produce multiple images as tiles, depending on the satellite data available for the time range and coordinates specified.

Temporal Statistics 1, Temporal Statistics 2

Band math

Earth observation satellites are designed to detect light in different wavelengths, some of which are invisible to the human eye. Each range contains specific bands of the light spectrum at different wavelengths, which, combined with arithmetic, can produce images with rich information about field characteristics such as vegetation health, temperature, or presence of clouds, among many others. This is performed in a process commonly called band math or band arithmetic.

With SageMaker geospatial capabilities, we can run this by setting the BandMathConfig parameter. For example, let’s obtain the moisture index images by running the following code:

eoj_config =  {
    'BandMathConfig': {
        'CustomIndices': {
            'Operations': [
                {
                    'Name': 'moisture',
                    'Equation': '(B8A - B11) / (B8A + B11)'
                }
            ]
        }
    }
}

After a few minutes running an EOJ with this config, we can export the results and obtain images, such as the following two examples.

Moisture index 1, Moisture index 2, Moisture index legend

Stacking

Similar to band math, the process of combining bands together to produce composite images from the original bands is called stacking. For example, we could stack the red, blue, and green light bands of a satellite image to produce the true color image of the AOI.

With SageMaker geospatial capabilities, we can do this by setting the StackConfig parameter. Let’s stack the RGB bands as per the previous example with the following command:

eoj_config =  {
    'StackConfig': {
        'OutputResolution': {
            'Predefined': 'HIGHEST'
        },
        'TargetBands': ['red', 'green', 'blue']
    }
}

After a few minutes running an EOJ with this config, we can export the results and obtain images.

Stacking TCI 1, Stacking TCI 2

Semantic segmentation models

As part of our work, we commonly use ML models to run inferences over the preprocessed imagery, such as detecting cloudy areas or classifying the type of land in each area of the images.

With SageMaker geospatial capabilities, you can do this by relying on the built-in segmentation models.

For our example, let’s use the land cover segmentation model by specifying the LandCoverSegmentationConfig parameter. This runs inferences on the input by using the built-in model, without the need to train or host any infrastructure in SageMaker:

response = gsClient.start_earth_observation_job(
    Name =  'landcovermodeljob',
    ExecutionRoleArn = role,
    InputConfig = eoj_input_config,
    JobConfig = {
        'LandCoverSegmentationConfig': {},
    },
)

After a few minutes running a job with this config, we can export the results and obtain images.

Land Cover 1, Land Cover 2, Land Cover 3, Land Cover 4

In the preceding examples, each pixel in the images corresponds to a land type class, as shown in the following legend.

Land Cover legend

This allows us to directly identify the specific types of areas in the scene such as vegetation or water, providing valuable insights for additional analyses.

Bring your own model with SageMaker

If the state-of-the-art geospatial models provided with SageMaker aren’t enough for our use case, we can also chain the results of any of the preprocessing steps shown so far with any custom model onboarded to SageMaker for inference, as explained in this SageMaker Script Mode example. We can do this with any of the inference modes supported in SageMaker, including synchronous with real-time SageMaker endpoints, asynchronous with SageMaker asynchronous endpoints, batch or offline with SageMaker batch transforms, and serverless with SageMaker serverless inference. You can check further details about these modes in the Deploy Models for Inference documentation. The following diagram illustrates the workflow at a high level.

Inference flow options

For our example, let’s assume we have onboarded two models for performing a land cover classification and crop type classification.

We just have to point towards our trained model artifact, in our example a PyTorch model, similar to the following code:

import sagemaker
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    name=model_name, ### Set a model name
    model_data=MODEL_S3_PATH, ### Location of the custom model in S3
    role=role,
    entry_point='inference.py', ### Your inference entry-point script
    source_dir='code', ### Folder with any dependencies
    image_uri=image_uri, ### URI for your AWS DLC or custom container
    env={
        'TS_MAX_REQUEST_SIZE': '100000000',
        'TS_MAX_RESPONSE_SIZE': '100000000',
        'TS_DEFAULT_RESPONSE_TIMEOUT': '1000',
    }, ### Optional – Set environment variables for max size and timeout
)

predictor = model.deploy(
    initial_instance_count = 1, ### Your number of instances
    instance_type = 'ml.g4dn.8xlarge', ### Your instance type
    async_inference_config=sagemaker.async_inference.AsyncInferenceConfig(
        output_path=f"s3://{bucket}/{prefix}/output",
        max_concurrent_invocations_per_instance=2,
    ), ### Optional – Async config if using SageMaker Async Endpoints
)

predictor.predict(data) ### Your images for inference

This allows you to obtain the resulting images after inference, depending on the model you’re using.

In our example, when running a custom land cover segmentation, the model produces images similar to the following, where we compare the input and prediction images with its corresponding legend.

Land Cover Segmentation 1, Land Cover Segmentation 2, Land Cover Segmentation legend

The following is another example of a crop classification model, where we show the comparison of the original vs. resulting panoptic and semantic segmentation results, with its corresponding legend.

Crop Classification

Automating geospatial pipelines

Finally, we can also automate the previous steps by building geospatial data processing and inference pipelines with Amazon SageMaker Pipelines. We simply chain each preprocessing step required through the use of Lambda Steps and Callback Steps in Pipelines. For example, you could also add a final inference step using a Transform Step, or directly through another combination of Lambda Steps and Callback Steps, for running an EOJ with one of the built-in semantic segmentation models in SageMaker geospatial features.

Note we’re using Lambda Steps and Callback Steps in Pipelines because the EOJs are asynchronous, so this type of step allows us to monitor the run of the processing job and resume the pipeline when it’s complete through messages in an Amazon Simple Queue Service (Amazon SQS) queue.
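The following is a simplified sketch of that pattern with the SageMaker Python SDK; the Lambda function ARN, queue URL, and payload keys are hypothetical placeholders for your own resources:

from sagemaker.lambda_helper import Lambda
from sagemaker.workflow.lambda_step import LambdaStep, LambdaOutput, LambdaOutputTypeEnum
from sagemaker.workflow.callback_step import CallbackStep, CallbackOutput, CallbackOutputTypeEnum

# Lambda function (hypothetical) that calls start_earth_observation_job
start_eoj_step = LambdaStep(
    name='StartCloudRemovalEOJ',
    lambda_func=Lambda(function_arn='arn:aws:lambda:<REGION>:<ACCOUNT_ID>:function:start-eoj'),
    inputs={'eoj_name': 'cloudremovaljob'},
    outputs=[LambdaOutput(output_name='eoj_arn', output_type=LambdaOutputTypeEnum.String)],
)

# The pipeline pauses here until a completion message for the EOJ
# arrives in the Amazon SQS queue monitored by this step
wait_for_eoj_step = CallbackStep(
    name='WaitForCloudRemovalEOJ',
    sqs_queue_url='https://sqs.<REGION>.amazonaws.com/<ACCOUNT_ID>/eoj-callback-queue',
    inputs={'eoj_arn': start_eoj_step.properties.Outputs['eoj_arn']},
    outputs=[CallbackOutput(output_name='eoj_status', output_type=CallbackOutputTypeEnum.String)],
)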

Geospatial Pipeline

You can check the notebook in the GitHub repository for a detailed example of this code.

Now we can visualize the diagram of our geospatial pipeline through Studio and monitor the runs in Pipelines, as shown in the following screenshot.

Geospatial Pipeline UI

Conclusion

In this post, we presented a summary of the processes we implemented with SageMaker geospatial capabilities for building geospatial data pipelines for our advanced products from xarvio Digital Farming Solutions. Using SageMaker geospatial increased the efficiency of our geospatial work by more than 50%, through the use of pre-built APIs that accelerate and simplify our preprocessing and modeling steps for ML.

As a next step, we’re onboarding more models from our catalog to SageMaker to continue the automation of our solution pipelines, and will continue utilizing more geospatial features of SageMaker as the service evolves.

We encourage you to try SageMaker geospatial capabilities by adapting the end-to-end example notebook provided in this post, and learning more about the service in What is Amazon SageMaker Geospatial Capabilities?.


About the Authors

Julian Blau is a Data Scientist at BASF Digital Farming GmbH, located in Cologne, Germany. He develops digital solutions for agriculture, addressing the needs of BASF’s global customer base by using geospatial data and machine learning. Outside work, he enjoys traveling and being outdoors with friends and family.

Antonio Rodriguez is an Artificial Intelligence and Machine Learning Specialist Solutions Architect in Amazon Web Services, based out of Spain. He helps companies of all sizes solve their challenges through innovation, and creates new business opportunities with AWS Cloud and AI/ML services. Apart from work, he loves to spend time with his family and play sports with his friends.

Read More

Protecting Consumers and Promoting Innovation – AI Regulation and Building Trust in Responsible AI

Protecting Consumers and Promoting Innovation – AI Regulation and Building Trust in Responsible AI

Artificial intelligence (AI) is one of the most transformational technologies of our generation and provides huge opportunities to be a force for good and drive economic growth. It can help scientists cure terminal diseases, engineers build inconceivable structures, and farmers yield more crops. AI allows us to make sense of our world as never before—and build products and services to address some of our most challenging problems, like climate change and responding to humanitarian disasters. AI is also helping industries innovate and overcome more commonplace challenges. Manufacturers are deploying AI to avoid equipment downtime through predictive maintenance and to streamline their logistics and distribution channels through supply chain optimization. Airlines are taking advantage of AI technologies to enhance the customer booking experience, assist with crew scheduling, and transport passengers with greater fuel efficiency by simulating routes based on distance, aircraft weight, and weather.

While the benefits of AI are already plain to see and improving our lives each day, unlocking AI’s full potential will require building greater confidence among consumers. That means earning public trust that AI will be used responsibly and in a manner that is consistent with the rule of law, human rights, and the values of equity, privacy, and fairness.

Understanding the important need for public trust, we work closely with policymakers across the country and around the world as they assess whether existing consumer protections remain fit-for-purpose in an AI era. An important baseline for any regulation must be to differentiate between high-risk AI applications and those that pose low-to-no risk. The great majority of AI applications fall in the latter category, and their widespread adoption provides opportunities for immense productivity gains and, ultimately, improvements in human well-being. If we are to inspire public confidence in the overwhelmingly good, businesses must demonstrate they can confidently mitigate the potential risks of high-risk AI. The public should be confident that these sorts of high-risk systems are safe, fair, appropriately transparent, privacy protective, and subject to appropriate oversight.

At AWS, we recognize that we are well positioned to deliver on this vision and are proud to support our customers as they invent, build, and deploy AI systems to solve real-world problems. As AWS offers the broadest and deepest set of AI services and the supporting cloud infrastructure, we are committed to developing fair and accurate AI services and providing customers with the tools and guidance needed to build applications responsibly. We recognize that responsible AI is the shared responsibility of all organizations that develop and deploy AI systems.

We are committed to providing tools and resources to aid customers using our AI and machine learning (ML) services. Earlier this year, we launched our Responsible Use of Machine Learning guide, providing considerations and recommendations for responsibly using ML across all phases of the ML lifecycle. In addition, at our 2020 AWS re:Invent conference, we rolled out Amazon SageMaker Clarify, a service that provides developers with greater insights into their data and models, helping them understand why an ML model made a specific prediction and also whether the predictions were impacted by bias. Additional resources, access to AI/ML experts, and education and training can also be found on our Responsible use of artificial intelligence and machine learning page.

We continue to expand efforts to provide guidance and support to customers and the broader community in the responsible use space. This week at our re:Invent 2022 conference, we announced the launch of AWS AI Service Cards, a new transparency resource to help customers better understand our AWS AI services. The AI Service Cards deliver a form of responsible AI documentation that provides customers with a single place to find information.

Each AI Service Card covers key topics to help you better understand the service or its features, including intended use cases and limitations, responsible AI design considerations, and guidance on deployment and performance optimization. The content of the AI Service Cards addresses a broad audience of customers, technologists, researchers, and other stakeholders who seek to better understand key considerations in the responsible design and use of an AI service.

Conversations among policymakers regarding AI regulations continue as the technologies become more established. AWS is focused on not only offering the best-in-class tools and services to provide for the responsible development and deployment of AI services, but also continuing our engagement with lawmakers to promote strong consumer protections while encouraging the fast pace of innovation.


About the Author

Nicole Foster is Director of AWS Global AI/ML and Canada Public Policy at Amazon, where she leads the direction and strategy of artificial intelligence public policy for Amazon Web Services (AWS) around the world as well as the company’s public policy efforts in support of the AWS business in Canada. In this role, she focuses on issues related to emerging technology, digital modernization, cloud computing, cyber security, data protection and privacy, government procurement, economic development, skilled immigration, workforce development, and renewable energy policy.

Read More

Stability AI builds foundation models on Amazon SageMaker

Stability AI builds foundation models on Amazon SageMaker

We’re thrilled to announce that Stability AI has selected AWS as its preferred cloud provider to power its state-of-the-art AI models for image, language, audio, video, and 3D content generation. Stability AI is a community-driven, open-source artificial intelligence (AI) company developing breakthrough technologies. With Amazon SageMaker, Stability AI will build AI models on compute clusters with thousands of GPU or AWS Trainium chips, reducing training time and cost by 58%. Stability AI will also collaborate with AWS to enable students, researchers, startups, and enterprises around the world to use its open-source tools and models.

“Our mission at Stability AI is to build the foundation to activate humanity’s potential through AI. AWS has been an integral partner in scaling our open-source foundation models across modalities, and we are delighted to bring these to SageMaker to enable tens of thousands of developers and millions of users to take advantage of them. We look forward to seeing the amazing things built on these models and helping our customers customize and scale their models and solutions.”

-Emad Mostaque, Founder and CEO of Stability AI.

Generative AI models and Stable Diffusion

Generative AI models can create text, images, audio, video, code, and more from simple text instructions. For example, I created the following image by giving this text prompt to the model: “Four people riding a bicycle in the Swiss Alps, renaissance painting, epic breathtaking nature scene, diffused light.” I used a Jupyter notebook in Amazon SageMaker Studio to generate this image with Stable Diffusion.

Stability AI also announced a distilled stable diffusion model, which can generate coherent images up to ten times faster than before. This latest open-source release also introduces models to upscale an image’s resolution and infer depth information to generate new images. The following images show an example of how you can use the new depth2img model to generate new images while preserving the depth and coherence of the original image.

We’re excited by the potential of these generative AI models and by what our customers will create. From inpainting to textual inversion to modifiers, the community continues to innovate and build better open-source models and tools in generative AI.

Training foundation models at scale with SageMaker

Foundation models—large models that are adaptable to a variety of downstream tasks in domains such as language, image, audio, video—are hard to train because they require a high-performance compute cluster with thousands of GPU or Trainium chips, along with software to efficiently utilize the cluster.

Stability AI picked AWS as its preferred cloud provider to provision one of the largest-ever clusters of GPUs in the public cloud. Using SageMaker’s managed infrastructure and optimization libraries, Stability AI is able to make its model training more resilient and performant. For example, with models such as GPT NeoX, Stability AI was able to reduce training time and cost by 58% using SageMaker and its model parallel library. These optimizations and performance improvements apply to models with tens or hundreds of billions of parameters.

Get started with Stable Diffusion

Stable Diffusion 2.0 is available today on Amazon SageMaker JumpStart. JumpStart is the machine learning (ML) hub of SageMaker that provides hundreds of built-in algorithms, pre-trained models, and end-to-end solution templates to help you quickly get started with ML.

Get started today with Stable Diffusion 2.0.


About the authors

Aditya Bindal is a Principal Product Manager for AWS Deep Learning. He works on software and tools to make large-scale training and inference easier for customers. In his spare time, he enjoys spending time with his daughter, playing tennis, reading historical fiction, and traveling.

Read More

Launch Amazon SageMaker Autopilot experiments directly from within Amazon SageMaker Pipelines to easily automate MLOps workflows

Launch Amazon SageMaker Autopilot experiments directly from within Amazon SageMaker Pipelines to easily automate MLOps workflows

Amazon SageMaker Autopilot, a low-code machine learning (ML) service that automatically builds, trains, and tunes the best ML models based on tabular data, is now integrated with Amazon SageMaker Pipelines, the first purpose-built continuous integration and continuous delivery (CI/CD) service for ML. This enables the automation of an end-to-end flow of building ML models using Autopilot and integrating models into subsequent CI/CD steps.

Previously, to launch an Autopilot experiment within Pipelines, you had to build a model-building workflow by writing custom integration code with Pipelines Lambda or Processing steps. For more information, see Move Amazon SageMaker Autopilot ML models from experimentation to production using Amazon SageMaker Pipelines.

With the support for Autopilot as a native step within Pipelines, you can now add an automated training step (AutoMLStep) in Pipelines and invoke an Autopilot experiment with ensembling training mode. For example, if you’re building a training and evaluation ML workflow for a fraud detection use case with Pipelines, you can now launch an Autopilot experiment using the AutoML step, which automatically runs multiple trials to find the best model on a given input dataset. After the best model is created using the Model step, its performance can be evaluated on test data using the Transform step and a Processing step for a custom evaluation script within Pipelines. Eventually, the model can be registered into the SageMaker model registry using the Model step in combination with a Condition step.

In this post, we show how to create an end-to-end ML workflow to train and evaluate a SageMaker generated ML model using the newly launched AutoML step in Pipelines and register it with the SageMaker model registry. The ML model with the best performance can be deployed to a SageMaker endpoint.

Dataset overview

We use the publicly available UCI Adult 1994 Census Income dataset to predict if a person has an annual income of greater than $50,000 per year. This is a binary classification problem; the options for the income target variable are either <=50K or >50K.

The dataset contains 32,561 rows for training and validation and 16,281 rows for testing with 15 columns each. This includes demographic information about individuals and class as the target column indicating the income class.

Column Name Description
age Continuous
workclass Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked
fnlwgt Continuous
education Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool
education-num Continuous
marital-status Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse
occupation Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces
relationship Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried
race White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black
sex Female, Male
capital-gain Continuous
capital-loss Continuous
hours-per-week Continuous
native-country United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands
class Income class, either <=50K or >50K
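To make the later training step concrete, the following is a minimal, assumed sketch of staging the training and validation data in Amazon S3. The local file name is hypothetical; s3_train_val is the variable the AutoML step consumes later.

import sagemaker

session = sagemaker.Session()

# Upload the combined training/validation CSV (hypothetical local file)
# to the default SageMaker bucket; Autopilot reads it from this S3 prefix
s3_train_val = session.upload_data(
    path="train_val.csv",
    key_prefix="autopilot/input",
)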

Solution overview

We use Pipelines to orchestrate different pipeline steps required to train an Autopilot model. We create and run an Autopilot experiment as part of an AutoML step as described in this tutorial.

The following steps are required for this end-to-end Autopilot training process:

  • Create and monitor an Autopilot training job using the AutoMLStep.
  • Create a SageMaker model using ModelStep. This step fetches the best model’s metadata and artifacts rendered by Autopilot in the previous step.
  • Evaluate the trained Autopilot model on a test dataset using TransformStep.
  • Compare the output from the previously run TransformStep with the actual target labels using ProcessingStep.
  • Register the ML model in the SageMaker model registry using ModelStep if the evaluation metric obtained in the previous step exceeds a predefined threshold, controlled by a ConditionStep.
  • Deploy the ML model as a SageMaker endpoint for testing purposes.

Architecture

The following architecture diagram illustrates the pipeline steps needed to implement a reproducible, automated, and scalable SageMaker Autopilot training pipeline. The data files are read from an S3 bucket, and the pipeline steps are run sequentially.

Walkthrough

This post provides a detailed explanation of the pipeline steps. We review the code and discuss the components of each step. To deploy the solution, refer to the example notebook, which provides step-by-step instructions for implementing an Autopilot MLOps workflow using Pipelines.

Prerequisites

Complete the prerequisites described in the example notebook. When the dataset is ready to use, set up Pipelines to establish a repeatable process that automatically builds and trains ML models using Autopilot. We use the SageMaker SDK to programmatically define, run, and track an end-to-end ML training pipeline, as sketched in the following code.
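The following sketch shows one way to set up the sessions, role, and pipeline parameters that the subsequent snippets reference. The parameter names match the code in this post, but the default values here are assumptions; adapt them to your environment.

import sagemaker
from sagemaker import get_execution_role
from sagemaker.workflow.parameters import (
    ParameterFloat,
    ParameterInteger,
    ParameterString,
)
from sagemaker.workflow.pipeline_context import PipelineSession

# Sessions and role used by all subsequent pipeline steps
execution_role = get_execution_role()
pipeline_session = PipelineSession()

# Pipeline parameters referenced by the steps below (default values assumed)
instance_count = ParameterInteger(name="InstanceCount", default_value=1)
instance_type = ParameterString(name="InstanceType", default_value="ml.m5.xlarge")
max_automl_runtime = ParameterInteger(name="MaxAutoMLRuntime", default_value=3600)
model_approval_status = ParameterString(
    name="ModelApprovalStatus", default_value="PendingManualApproval"
)
model_package_group_name = ParameterString(
    name="ModelPackageGroupName", default_value="AutoMLModelPackageGroup"
)
model_registration_metric_threshold = ParameterFloat(
    name="ModelRegistrationMetricThreshold", default_value=0.8
)
s3_bucket = ParameterString(
    name="S3Bucket", default_value=sagemaker.Session().default_bucket()
)
target_attribute_name = ParameterString(name="TargetAttributeName", default_value="class")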

Pipeline steps

In the following sections, we go through the different steps in the SageMaker pipeline, including AutoML training, model creation, batch inference, evaluation, and conditional registration of the best model. The following diagram illustrates the entire pipeline flow.

AutoML training step

An AutoML object defines the Autopilot training job and is added to the SageMaker pipeline using the AutoMLStep class, as shown in the following code. The ensembling training mode must be specified, but other parameters can be adjusted as needed. For example, instead of letting the AutoML job automatically infer the ML problem type and objective metric, you can set them explicitly by passing the problem_type and job_objective parameters to the AutoML object.

from sagemaker.automl.automl import AutoML, AutoMLInput
from sagemaker.workflow.automl_step import AutoMLStep

# Define the Autopilot job: target column, runtime budget, and ensembling mode
automl = AutoML(
    role=execution_role,
    target_attribute_name=target_attribute_name,
    sagemaker_session=pipeline_session,
    total_job_runtime_in_seconds=max_automl_runtime,
    mode="ENSEMBLING",
)

# Point the job at the training/validation data in Amazon S3
train_args = automl.fit(
    inputs=[
        AutoMLInput(
            inputs=s3_train_val,
            target_attribute_name=target_attribute_name,
            channel_type="training",
        )
    ]
)

# Wrap the Autopilot job as a native pipeline step
step_auto_ml_training = AutoMLStep(
    name="AutoMLTrainingStep",
    step_args=train_args,
)

Model creation step

The AutoML step takes care of generating various ML model candidates, combining them, and obtaining the best ML model. Model artifacts and metadata are automatically stored and can be obtained by calling the get_best_auto_ml_model() method on the AutoML training step. These can then be used to create a SageMaker model as part of the Model step:

from sagemaker.workflow.model_step import ModelStep

# Fetch the best candidate's artifacts and metadata from the AutoML step
best_auto_ml_model = step_auto_ml_training.get_best_auto_ml_model(
    execution_role, sagemaker_session=pipeline_session
)

# Create a SageMaker model from the best candidate
step_args_create_model = best_auto_ml_model.create(instance_type=instance_type)
step_create_model = ModelStep(name="ModelCreationStep", step_args=step_args_create_model)

Batch transform and evaluation steps

We use a Transformer object to run batch inference on the test dataset; the resulting predictions are then used for evaluation. The output predictions are compared to the ground truth labels using a scikit-learn metrics function, and we evaluate our results based on the F1 score. The performance metrics are saved to a JSON file, which is referenced when registering the model in the subsequent step.
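The following is a minimal sketch of these two steps under a few assumptions: the test data location s3_test, the evaluation script name evaluation.py, and the exact inputs to the evaluation script are placeholders that the example notebook defines in full.

from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.transformer import Transformer
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.steps import ProcessingStep, TransformStep

# Batch inference on the test set with the model created in the previous step
transformer = Transformer(
    model_name=step_create_model.properties.ModelName,
    instance_count=1,
    instance_type=instance_type,
    sagemaker_session=pipeline_session,
)
step_batch_transform = TransformStep(
    name="BatchTransformStep",
    step_args=transformer.transform(data=s3_test, content_type="text/csv"),
)

# The metrics JSON written by evaluation.py is exposed via a property file
evaluation_report = PropertyFile(
    name="EvaluationReport",
    output_name="evaluation_metrics",
    path="evaluation_metrics.json",
)

sklearn_processor = SKLearnProcessor(
    framework_version="1.0-1",
    instance_type=instance_type,
    instance_count=1,
    role=execution_role,
    sagemaker_session=pipeline_session,
)

# evaluation.py would load predictions and labels, compute
# sklearn.metrics.f1_score, and write the JSON report
step_evaluation = ProcessingStep(
    name="ModelEvaluationStep",
    processor=sklearn_processor,
    inputs=[
        ProcessingInput(
            source=step_batch_transform.properties.TransformOutput.S3OutputPath,
            destination="/opt/ml/processing/input/predictions",
        ),
        # a second ProcessingInput would supply the ground truth labels
    ],
    outputs=[
        ProcessingOutput(
            output_name="evaluation_metrics",
            source="/opt/ml/processing/evaluation",
        ),
    ],
    code="evaluation.py",
    property_files=[evaluation_report],
)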

Conditional registration steps

In this step, we register our new Autopilot model in the SageMaker model registry if it exceeds the predefined evaluation metric threshold.
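A sketch of this conditional logic could look like the following. It assumes the evaluation_report property file from the previous step and a metrics JSON layout of classification_metrics.f1.value; the instance types listed for registration are also assumptions, so adapt both to your evaluation script and deployment targets.

from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.functions import JsonGet

# Register the best model into a model package group
step_args_register = best_auto_ml_model.register(
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name=model_package_group_name,
    approval_status=model_approval_status,
)
step_register_model = ModelStep(name="ModelRegistrationStep", step_args=step_args_register)

# Only register if the F1 score in the evaluation report clears the threshold
step_conditional_registration = ConditionStep(
    name="ConditionalRegistrationStep",
    conditions=[
        ConditionGreaterThanOrEqualTo(
            left=JsonGet(
                step_name=step_evaluation.name,
                property_file=evaluation_report,
                json_path="classification_metrics.f1.value",
            ),
            right=model_registration_metric_threshold,
        )
    ],
    if_steps=[step_register_model],
    else_steps=[],  # you could add a FailStep here instead
)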

Create and run the pipeline

After we define the steps, we combine them into a SageMaker pipeline:

from sagemaker.workflow.pipeline import Pipeline

# Assemble the parameters and steps into a single pipeline definition
pipeline = Pipeline(
    name="AutoMLTrainingPipeline",
    parameters=[
        instance_count,
        instance_type,
        max_automl_runtime,
        model_approval_status,
        model_package_group_name,
        model_registration_metric_threshold,
        s3_bucket,
        target_attribute_name,
    ],
    steps=[
        step_auto_ml_training,
        step_create_model,
        step_batch_transform,
        step_evaluation,
        step_conditional_registration,
    ],
    sagemaker_session=pipeline_session,
)

The steps run in sequential order: the pipeline uses Autopilot for training, then performs model evaluation and conditional model registration.
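With the definition in place, creating and running the pipeline takes a couple of SDK calls; for example:

# Create or update the pipeline in SageMaker, then start a run
pipeline.upsert(role_arn=execution_role)
execution = pipeline.start()

# Optionally block until the run completes and list the step statuses
execution.wait(delay=60, max_attempts=120)
print(execution.list_steps())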

You can view the new model by navigating to the model registry on the Studio console and opening AutoMLModelPackageGroup. Choose any version of a training job to view the objective metrics on the Model quality tab.

You can view the explainability report on the Explainability tab to understand your model’s predictions.

To view the underlying Autopilot experiment for all the models created in AutoMLStep, navigate to the AutoML page and choose the job name.

Deploy the model

After we have manually reviewed the ML model’s performance, we can deploy our newly created model to a SageMaker endpoint. For this, we can run the cells in the notebook that create the model endpoint using the model configuration saved in the SageMaker model registry.
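If you want to script this outside the notebook, a minimal sketch using boto3 might look like the following; the model package ARN, model name, and endpoint names are hypothetical placeholders.

import boto3

sm_client = boto3.client("sagemaker")

# Hypothetical ARN of the approved package in the model registry
model_package_arn = (
    "arn:aws:sagemaker:us-east-1:111122223333:model-package/AutoMLModelPackageGroup/1"
)

# Create a model from the registered model package
sm_client.create_model(
    ModelName="autopilot-best-model",
    ExecutionRoleArn=execution_role,
    Containers=[{"ModelPackageName": model_package_arn}],
)

# Create an endpoint configuration and the endpoint itself
sm_client.create_endpoint_config(
    EndpointConfigName="autopilot-endpoint-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "autopilot-best-model",
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
        }
    ],
)
sm_client.create_endpoint(
    EndpointName="autopilot-endpoint",
    EndpointConfigName="autopilot-endpoint-config",
)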

Note that this deployment flow is shown for demonstration purposes; for production ML inference, it’s recommended to follow a more robust CI/CD pipeline. For more information, refer to Building, automating, managing, and scaling ML workflows using Amazon SageMaker Pipelines.

Summary

This post describes an easy-to-use ML pipeline approach to automatically train tabular ML models (AutoML) using Autopilot, Pipelines, and Studio. AutoML improves ML practitioners’ efficiency, accelerating the path from ML experimentation to production without the need for extensive ML expertise. We outline the respective pipeline steps needed for ML model creation, evaluation, and registration. Get started by trying the example notebook to train and deploy your own custom AutoML models.

For more information on Autopilot and Pipelines, refer to Automate model development with Amazon SageMaker Autopilot and Amazon SageMaker Pipelines.

Special thanks to everyone who contributed to the launch: Shenghua Yue, John He, Ao Guo, Xinlu Tu, Tian Qin, Yanda Hu, Zhankui Lu, and Dewen Qi.


About the authors

Janisha Anand is a Senior Product Manager in the SageMaker Low/No Code ML team, which includes SageMaker Autopilot. She enjoys coffee, staying active, and spending time with her family.

Marcelo Aberle is an ML Engineer at AWS AI. He helps Amazon ML Solutions Lab customers build scalable ML and MLOps systems and frameworks. In his spare time, he enjoys hiking and cycling in the San Francisco Bay Area.

Geremy Cohen is a Solutions Architect with AWS where he helps customers build cutting-edge, cloud-based solutions. In his spare time, he enjoys short walks on the beach, exploring the Bay Area with his family, fixing things around the house, breaking things around the house, and BBQing.

Shenghua Yue is a Software Development Engineer at Amazon SageMaker. She focuses on building ML tools and products for customers. Outside of work, she enjoys the outdoors, yoga, and hiking.


AI21 Jurassic-1 foundation model is now available on Amazon SageMaker


Today we are excited to announce that AI21 Jurassic-1 (J1) foundation models are available for customers using Amazon SageMaker. Jurassic-1 models are highly versatile, capable of human-like text generation as well as solving complex tasks such as question answering and text classification. You can easily try out this model and use it with Amazon SageMaker JumpStart. JumpStart is the machine learning (ML) hub of SageMaker that provides access to foundation models in addition to built-in algorithms and end-to-end solution templates to help you quickly get started with ML.

In this post, we walk through how to use the Jurassic-1 Grande model in SageMaker.

Foundation models in SageMaker

JumpStart provides access to a range of models from popular model hubs including Hugging Face, PyTorch Hub, and TensorFlow Hub, which you can use within your ML development workflow in SageMaker. Recent advances in ML have given rise to a new class of models known as foundation models, which are typically trained on billions of parameters and are adaptable to a wide category of use cases, such as text summarization, generating digital art, and language translation. Because these models are expensive to train, customers want to use existing pre-trained foundation models and fine-tune them as needed, rather than train these models themselves. SageMaker provides a curated list of models that you can choose from on the SageMaker console.

You can now find foundation models from different model providers within JumpStart, enabling you to get started with foundation models quickly. You can find foundation models based on different tasks or model providers, and easily review model characteristics and usage terms. You can also try out these models using a test UI widget. When you want to use a foundation model at scale, you can do so easily without leaving SageMaker by using pre-built notebooks from model providers. Because the models are hosted and deployed on AWS, you can rest assured that your data, whether used for evaluating or using the model at scale, is never shared with third parties.

Jurassic-1 foundation model

Jurassic-1 is the first generation in a series of large language models trained and made widely accessible by AI21 Labs. For a complete description of Jurassic-1, including benchmarks and quantitative comparisons with other models, refer to AI21’s technical paper. All J1 models were trained on a massive corpus of English text, making them highly versatile general-purpose text generators, capable of composing human-like text and solving complex tasks such as question answering and text classification. J1 can be applied to virtually any language task by crafting a suitable prompt that contains a description of the task and a few examples, a process commonly known as prompt engineering. Popular use cases include generating marketing copy, powering chatbots, and assisting creative writing.

“We are building world-class foundation models for text and want to help our customers innovate with the latest Jurassic-1 models. Amazon SageMaker offers the deepest and broadest set of ML services, and we’re excited to collaborate with Amazon SageMaker so that customers will be able to use these foundation models on SageMaker within their development environment. Now customers can rapidly innovate, lower time-to-value, and drive efficiency in their businesses.”

-Ori Goshen, co-CEO of AI21 Labs.

Walkthrough

Let’s take a tour of the J1-Grande model in SageMaker. You can try out the experience in three simple steps:

  1. Choose the Jurassic-1 model on the SageMaker console.
  2. Evaluate the model using a test widget.
  3. Use a notebook associated with the foundation model to deploy it in your environment.

Let’s expand each step in detail.

Choose the Jurassic-1 model on the SageMaker console

The first step is to log in to the AWS Management Console for Amazon SageMaker and request access to the list of foundation models in the foundation model category under JumpStart.

After your account is allowlisted, you can see a list of models on this page. You can quickly search for the Jurassic-1 Grande model from the same view.

Evaluate the Jurassic-1 Grande model with a test widget

On the Jurassic-1 Grande listing, choose View Model. You will see a description of the model and the tasks that you can perform. Read through the EULA for the model before proceeding.

Let’s first try out the model for text summarization. Choose Try out model.

You’re taken to a page in a separate browser tab where you can enter sample prompts for the J1-Grande model and view the output.

The following example generates a summary about a restaurant based on reviews.

Note that foundation models and their output are from the model provider, and AWS is not responsible for the content or accuracy therein.

The model output may vary depending on the settings and the prompt. You can generate text from the model using simple instructions, but by providing the model with more examples in the prompt, just as a human would, it can produce completions that are more aligned with your intentions. The best way to guide the model is to provide several examples of input/output pairs in the prompt. This establishes a pattern for the model to mimic. Then add the input for a query example and let the model complete it with an appropriate generation.
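For example, a hypothetical few-shot prompt for the restaurant review summarization task might look like the following, with the final Summary: left open for the model to complete:

Review: The pasta was fresh and the staff were friendly, but the wait was long.
Summary: Great food and service, slow seating.

Review: Portions were generous, but the music made conversation difficult.
Summary: Hearty portions, noisy atmosphere.

Review: The brunch menu was inventive and the coffee was excellent. We'll be back!
Summary: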

After you have played with the model, it’s time to use the notebook and deploy it as an endpoint in your environment.

Deploy the foundation model from a notebook

Go back to the model listing shown earlier and choose View notebook. You should see the Jurassic-1 Grande Jupyter notebook with the walkthrough to deploy the model.

Let’s use this notebook from Amazon SageMaker Studio. Open Studio and pull in the notebook using the Git repo URL https://github.com/AI21Labs/SageMaker.git.

The notebook example uses both the Boto3 SDK and the AI21 SDK to deploy and interact with the endpoint.

Note that this example uses an ml.g5.12xlarge instance. If your AWS account’s default limit for this GPU instance type is 0, you need to request a limit increase.

Let’s create the endpoint using SageMaker inference. First we set the necessary variables, then we deploy the model from the model package:

from sagemaker import ModelPackage

model_name = "j1-grande"
content_type = "application/json"
real_time_inference_instance_type = "ml.g5.12xlarge"

# Create a deployable model from the model package
# (role, model_package_arn, and sagemaker_session are defined earlier in the notebook)
model = ModelPackage(
    role=role, model_package_arn=model_package_arn, sagemaker_session=sagemaker_session
)

# Deploy the model to a real-time endpoint
predictor = model.deploy(
    1,
    real_time_inference_instance_type,
    endpoint_name=model_name,
    model_data_download_timeout=3600,
    container_startup_health_check_timeout=600,
)

After the endpoint is deployed, you can run inference queries against the model.

You can think of Jurassic-1 Grande as a smart auto-completion algorithm: it’s very good at latching on to hints and patterns expressed in plain English, and generating text that follows the same patterns. After the model is deployed, you can interact with the deployed endpoint using the following code snippet:

import ai21  # AI21 SDK; see the notebook for installation instructions

# Run a short completion request against the deployed endpoint
response = ai21.Completion.execute(sm_endpoint="j1-grande",
                                   prompt="To be or",
                                   maxTokens=4,
                                   temperature=0,
                                   numResults=1)

print(response['completions'][0]['data']['text'])

The notebook also contains a walkthrough on how you can run inference queries with the AI21 SDK.

The following video walks through the workflow.

Clean up

After you have tested the endpoint, make sure you delete the SageMaker inference endpoint and the model to avoid incurring charges.
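A minimal cleanup sketch follows, assuming the model and sagemaker_session objects from the deployment code and that the endpoint config shares the endpoint’s name (the SDK’s default behavior):

# Delete the endpoint, its config, and the model to avoid ongoing charges
sagemaker_session.delete_endpoint(model_name)
sagemaker_session.delete_endpoint_config(model_name)
model.delete_model()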

Conclusion

In this post, we showed how you can test and use AI21’s Jurassic-1 Grande model on Amazon SageMaker. Request access, try out the foundation model in SageMaker today, and let us know your feedback!


About the authors

Karthik Bharathy is the product leader for the Amazon SageMaker team with over a decade of product management, product strategy, execution, and launch experience.

Tomer Asida is an algorithm team lead at AI21 Labs, where he heads the algorithm development efforts for the AI21 Studio developer platform, including the Jurassic-1 models and associated APIs.
