Easily customize your notifications while using Amazon Lookout for Metrics

We’re excited to announce that you can now add filters to alerts and edit existing alerts in Amazon Lookout for Metrics. With this launch, you can add filters to your alert configuration so that you’re only notified about the anomalies that matter most to you. You can also modify existing alerts as your notification needs evolve.

Lookout for Metrics uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy. The service also makes it easier to diagnose the root cause of anomalies such as unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user signups, and many more. Lookout for Metrics goes beyond simple anomaly detection. It allows developers to set up autonomous monitoring for important metrics to detect anomalies and identify their root cause in a matter of a few clicks, using the same technology Amazon uses internally to detect anomalies in its metrics—all with no ML experience required.

Alerts are an optional feature that lets you set up notifications for anomalies in your datasets, delivered through Amazon Simple Notification Service (Amazon SNS) and AWS Lambda functions. Previously, when you set up an alert, you were notified about all detected anomalies above the severity score you selected, which made it challenging to quickly identify the anomalies most relevant to your business. Now, with filters and edits in the alert system, different business units within your organization can specify the types of alerts they receive. Your developers can receive alerts for anomalies related to the service they’re developing, while your business analysts and business managers can track anomalies related to the state of the business, such as a location that is underperforming. For example, you may set up an alert to get notified when there is a spike or drop in your revenue, but you may only be interested in a specific store location and a particular product. The filtering capability allows you to be alerted only when a revenue anomaly fits the criteria you have set.

Solution overview

In this post, we demonstrate how to create an alert with filters and how the configured filters result in notifications only for anomalies that match the filter criteria. The alert filters are based on the metrics and dimensions that are present in the dataset definition for the anomaly detector. The solution enables you to use alert filters to get targeted notifications for anomalies detected in your data. The following diagram illustrates the solution architecture.

Provision resources with AWS CloudFormation

You can use the provided AWS CloudFormation stack to set up resources for the walkthrough. The stack contains resources that continuously generate live data and publish it to Amazon S3, create a detector (named TestAlertFilters), and add a dataset (named AlertFiltersDataset) to the detector. Complete the following steps:

  1. Choose Launch Stack:
  2. Choose Next.
  3. Enter a stack name (for example, L4MAlertFiltersStack).
  4. Enter the values for the detector (TestAlertFilters) and dataset (AlertFiltersDataset).
  5. Choose Next.
  6. Leave the settings for Configure stack options at their defaults and choose Next.
  7. Select the acknowledgement check box and choose Create stack.
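
If you prefer to create the stack programmatically instead of through the console, the following is a minimal Boto3 sketch of the same step. The template URL and the parameter keys (DetectorName, DatasetName) are placeholders for this walkthrough; use the template and parameter names from the Launch Stack link.

import boto3

cloudformation = boto3.client("cloudformation")

# Placeholder template URL and parameter keys; substitute the values
# from the Launch Stack template you deployed
cloudformation.create_stack(
    StackName="L4MAlertFiltersStack",
    TemplateURL="https://<YOUR-BUCKET>.s3.amazonaws.com/<TEMPLATE-FILE>.yaml",
    Parameters=[
        {"ParameterKey": "DetectorName", "ParameterValue": "TestAlertFilters"},
        {"ParameterKey": "DatasetName", "ParameterValue": "AlertFiltersDataset"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM resources
)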

Activate the detector created by the CloudFormation template

To set up your detector, complete the following steps:

  1. On the Lookout for Metrics console, choose Detectors in the navigation pane.
  2. Select the detector TestAlertFilters and choose View details.
  3. To activate the detector, you can either choose Activate at the top or choose Activate detector under How it works.
  4. Choose Activate to confirm that you want to activate the detector for continuous detection.

A confirmation message shows that the detector is activating. Activation can take up to 1 hour to complete. In the meantime, we can proceed with alert configuration.
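
If you prefer to activate the detector programmatically, activation is a single Boto3 call. The following is a minimal sketch; the detector ARN is a placeholder that you can copy from the detector details page.

import boto3

lookoutmetrics = boto3.client("lookoutmetrics")

# Placeholder ARN of the TestAlertFilters detector
detector_arn = "arn:aws:lookoutmetrics:<REGION>:<ACCOUNT-ID>:AnomalyDetector:TestAlertFilters"

# Start continuous detection
lookoutmetrics.activate_anomaly_detector(AnomalyDetectorArn=detector_arn)

# Optionally poll the detector status until it becomes ACTIVE
status = lookoutmetrics.describe_anomaly_detector(AnomalyDetectorArn=detector_arn)["Status"]
print(status)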

Configure your alert

We now configure an alert to get notifications for anomalies detected by the detector. Alert filters are optional, and you can select up to five measures and five dimensions when adding filters. In this post, we walk through creating an alert with filters. Complete the following steps:

  1. On your detector details page, choose Add alerts.
  2. Confirm your alert name.
    Lookout for Metrics populates the configuration fields with the metrics and dimensions supplied during dataset creation. In this release, the Severity score field is optional, whereas it was previously required. By default, the alert starts with a severity score of 70, which you can change or remove.
  3. To add a measure, choose Add criteria and choose Measure.
  4. For Measure EQUALS, choose the revenue measure.
  5. Choose Add criteria again and choose Dimension.

    You can choose up to five dimension filters. For this post, we configure two.
  6. For Dimension, choose the marketplace dimension.
  7. For Equals, add the values US and CA.
  8. Add category as your second dimension with the values fashion and jewellery.
  9. For Severity score, enter 20.
  10. For Channel, choose Amazon SNS.
  11. Choose your SNS topic (for this post, we use the SNS topic to which we already subscribed our email to receive the alert notifications).
  12. Choose your format (for this post, we choose Long Text).
  13. Under Service access, select Use an existing service role and choose your role.
  14. Choose Add alert.

    A message appears when the alert is created successfully.
  15. Select the alert and choose View details.

You can review the alert filters and other details. The Filter criteria section explains how the configured filters are used to filter anomalies before alert notifications are published.
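
You can also create the same alert programmatically. The following Boto3 sketch mirrors the console configuration above (revenue measure, marketplace and category dimension filters, severity score of 20, and an SNS channel with long text format); the detector, role, and topic ARNs are placeholders.

import boto3

lookoutmetrics = boto3.client("lookoutmetrics")

lookoutmetrics.create_alert(
    AlertName="testRevenueForFashionOrJewelleryInUSOrCA",
    AnomalyDetectorArn="arn:aws:lookoutmetrics:<REGION>:<ACCOUNT-ID>:AnomalyDetector:TestAlertFilters",
    AlertSensitivityThreshold=20,  # severity score
    AlertFilters={
        "MetricList": ["revenue"],
        "DimensionFilterList": [
            {"DimensionName": "marketplace", "DimensionValueList": ["US", "CA"]},
            {"DimensionName": "category", "DimensionValueList": ["fashion", "jewellery"]},
        ],
    },
    Action={
        "SNSConfiguration": {
            "RoleArn": "arn:aws:iam::<ACCOUNT-ID>:role/<SERVICE-ROLE>",
            "SnsTopicArn": "arn:aws:sns:<REGION>:<ACCOUNT-ID>:<TOPIC-NAME>",
            "SnsFormat": "LONG_TEXT",
        }
    },
)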

If you want to modify the alert configuration, select the alert on the Alerts page and choose Edit.

Alternatively, you can open the alert details page and choose Edit.

You’re redirected to the Edit page, where you can modify the alert configuration as required. You can change the same settings you configured when you created the alert, but you can’t change the alert name.
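
Edits are also available through the UpdateAlert API. The following Boto3 sketch lowers the severity score and narrows the filters on the existing alert; the alert ARN is a placeholder.

import boto3

lookoutmetrics = boto3.client("lookoutmetrics")

# Pass the configuration fields you want to update (the alert name can't be changed)
lookoutmetrics.update_alert(
    AlertArn="arn:aws:lookoutmetrics:<REGION>:<ACCOUNT-ID>:Alert:testRevenueForFashionOrJewelleryInUSOrCA",
    AlertSensitivityThreshold=10,
    AlertFilters={
        "MetricList": ["revenue"],
        "DimensionFilterList": [
            {"DimensionName": "marketplace", "DimensionValueList": ["US", "CA"]}
        ],
    },
)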

Review and analyze the results

When Lookout for Metrics detects anomalies in your data, it sends a notification if alerts were configured on that detector. If the anomaly group details match the filter criteria (measure filter, dimension filter, and severity score) of the alert, a notification is published.

For this example, we created two alerts on the detector, testAlertWithNoFilters and testRevenueForFashionOrJewelleryInUSOrCA, and injected anomalies in our data. We also enabled email subscription on the SNS topic used for alert notification publishing. The following screenshots show the details for each alert.

The following is an example of an anomaly notification for testRevenueForFashionOrJewelleryInUSOrCA:

{
"Type" : "Notification",
 "MessageId" : "0b0a7bfe-d029-5f4f-b706-20f644793c3d",
 "TopicArn" : "arn:aws:sns:us-west-2:488415817882:filterAlertsDemoTopic",
 "Message" : "[Amazon LookoutForMetrics] The anomaly detector TestAlertFilters detected 
             an anomaly in revenue with a severity score of 77.3 on May 25, 2022 at 8:05 PM.
             \nAnomalous graphs were detected for the following:\n
             \nrevenue for: jewellery, thirdParty, CA, regular, priority\n
             \nrevenue for: electronics, self, MX, premium, overnight\n
             \nrevenue for: electronics, self, US, regular, overnight\n
             \nTo view the anomaly, visit the Lookout for Metrics console at: 
             https://us-west-2.console.aws.amazon.com/lookoutmetrics/home?region=us-west-2#arn:aws:lookoutmetrics:us-west-2:488415817882:AnomalyDetector:TestAlertFilters/anomalies/anomaly/bd0a07e1-c520-46bd-aaa3-dcc00583d707
             \nTo modify settings for this alert: https://us-west-2.console.aws.amazon.com/lookoutmetrics/home?region=us-west-2#arn:aws:lookoutmetrics:us-west-2:488415817882:AnomalyDetector:TestAlertFilters/alerts/alertDetails/arn:aws:lookoutmetrics:us-west-2:488415817882:Alert:testRevenueForFashionOrJewelleryInUSOrCA",
 "Timestamp" : "2022-05-25T20:31:12.330Z",
 "SignatureVersion" : "1",
 "Signature" : "pFDZj3TwLrL9rqjkRiVgbWjcrPhxz5PDV485d6NroLXWhrviX7sUEQqOIL5j8YYd0SFBjFEkrZKZ27RSbd+33sRhJ52mmd1eR23cZQP68+iIVdpeWubcPgGnqxoOa3APE1WZr4SmVK/bgJAjX1RXn0rKZvPzwDkxPD2fZB4gnbqPJ8GBw/1dxU5qfJzRpkqc87d1gpvQIwMpb5uUROuPZEQVyaR/By0BTsflkE2Sz2mOeZQkMaXz3q9dwX/qDxyR9q6gNviMagGtOLwtb6StN8/PUYlvK9fCBcJnJxg0bdmMtnXiXWdl1O7J50Wqj4Tkl8amph97UlVAnComoe649g==",
 "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-7ff5318490ec183fbaddaa2a969abfda.pem",
 "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:488415817882:filterAlertsDemoTopic:8f24ae74-b160-44c7-8bc9-96a30e27d365"
}

The following is an example of an anomaly notification for testAlertWithNoFilters:

{
 "Type" : "Notification",
 "MessageId" : "fcc70263-f2c1-52ed-81ec-596b8c399b67",
 "TopicArn" : "arn:aws:sns:us-west-2:488415817882:filterAlertsDemoTopic",
 "Message" : "[Amazon LookoutForMetrics] The anomaly detector TestAlertFilters detected 
             an anomaly in revenue with a severity score of 77.59 on May 25, 2022 at 6:35 PM.
             \nAnomalous graphs were detected for the following:\n
             \nrevenue for: jewellery, self, UK, regular, overnight\n
             \nrevenue for: jewellery, thirdParty, JP, premium, overnight\n
             \nrevenue for: electronics, thirdParty, DE, premium, priority\n
             \nTo view the anomaly, visit the Lookout for Metrics console at: 
             https://us-west-2.console.aws.amazon.com/lookoutmetrics/home?region=us-west-2#arn:aws:lookoutmetrics:us-west-2:488415817882:AnomalyDetector:TestAlertFilters/anomalies/anomaly/194c87f4-3312-420c-8920-12fbfc9b1700
             \nTo modify settings for this alert: https://us-west-2.console.aws.amazon.com/lookoutmetrics/home?region=us-west-2#arn:aws:lookoutmetrics:us-west-2:488415817882:AnomalyDetector:TestAlertFilters/alerts/alertDetails/arn:aws:lookoutmetrics:us-west-2:488415817882:Alert:testAlertWithNoFilters",
 "Timestamp" : "2022-05-25T19:00:08.374Z",
 "SignatureVersion" : "1",
 "Signature" : "e4+BHo4eh8wNbfQMaR3L8MWY2wkpqxoxKKrj2h/QROQHvhcnYfucYchjfppgjM8LNIF7Oo4QfuP6qcLj9DlghiMZ80qpzHyAH6vmIDfSjK7Bz23i8rnIMyKJIVRFN8z69YlC9vfsp3MayWyyMJcskeVJ1bzsdkDIeA5gkT1le8yh/9nhbsgwm+bowNjsnl+/sFwk6QZJlplYB27sOqegrm73nH/CrmTe4FcPtekCRysSECwMLKazPJqR1uiGagnWfUeyTptRg9rVQVQJJdmOUwlv8vodR96s52btAegpY4iZZLUJ87vs1PwOwVfTTIHf+pdnwPUuFupzejUEudP7sQ==",
 "SigningCertURL" : "https://sns.us-west-2.amazonaws.com/SimpleNotificationService-7ff5318490ec183fbaddaa2a969abfda.pem",
 "UnsubscribeURL" : "https://sns.us-west-2.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-west-2:488415817882:filterAlertsDemoTopic:8f24ae74-b160-44c7-8bc9-96a30e27d365"
}

We didn’t receive a notification for this second anomaly through the testRevenueForFashionOrJewelleryInUSOrCA alert because the anomaly group details don’t match its filter criteria for the marketplace dimension. For our filter criteria on the revenue measure, the marketplace dimension must equal US or CA and the category dimension must equal fashion or jewellery, with a severity threshold of 20.

Although the detected anomaly matches the filter criteria for the measure, severity score, and category dimension, it doesn’t match the criteria for the marketplace dimension, so the alert wasn’t published.

Based on the notifications we received, we can confirm that Lookout for Metrics detected the anomalies and that notifications were published according to the alert filters.

Clean up

After you complete your testing, you can delete the CloudFormation stack. Deleting the stack cleans up all the resources created for this test. To delete the stack, open the AWS CloudFormation console, select the stack L4MAlertFiltersStack, and choose Delete.

Deletion of the stack doesn’t delete the S3 bucket created by the template because it’s not empty; you have to delete it manually.

Conclusion

You can now easily customize your notification experience by adding filters and editing existing alerts to reduce noise and focus on the metrics that matter the most to your business.

To learn more about this capability, see Working with Alerts. You can use this capability in all Regions where Lookout for Metrics is publicly available. For more information about Region availability, see AWS Regional Services.


About the Authors

Alex Kim is a Sr. Product Manager for AWS AI Services. His mission is to deliver AI/ML solutions to all customers who can benefit from it. In his free time, he enjoys all types of sports and discovering new places to eat.

Utkarsh Dubey is a Software Development Engineer in the Lookout for Metrics team. His interests lie in building scalable distributed systems. In his spare time, he enjoys traveling and catching up with friends.


A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin

JIDU Auto sees a brilliant future ahead for intelligent electric vehicles.

The EV startup, backed by tech titan Baidu, took the wraps off the Robo-01 concept vehicle last week during its virtual ROBODAY event. The robot-inspired, software-defined vehicle features cutting-edge AI capabilities powered by the high-performance NVIDIA DRIVE Orin compute platform.

The sleek compact SUV provides a glimpse of JIDU’s upcoming lineup. It’s capable of Level 4 autonomous driving, safely operating at highway speeds and on busy urban roads, as well as performing driverless valet parking.

The Robo-01 also showcases a myriad of design innovations, including a retractable yoke steering wheel that folds under the dashboard during autonomous driving mode, as well as lidar sensors that extend and retract from the hood. It also features human-like interaction between passengers and the vehicle’s in-cabin AI, using perception and voice recognition.

JIDU is slated to launch a limited production version of the robocar later this year.

Continuous Innovation

A defining feature of the Robo-01 concept is its ability to improve by adding new intelligent capabilities throughout the life of the vehicle.

These updates are delivered over the air, which requires a software-defined vehicle architecture built on high-performance AI compute. The Robo-01 has two NVIDIA DRIVE Orin systems-on-a-chip (SoCs) at the core of its centralized computer system, which provide ample compute for autonomous driving and AI features, with headroom to add new capabilities.

DRIVE Orin is a highly advanced autonomous vehicle processor. This supercomputer on a chip is capable of delivering up to 254 trillion operations per second (TOPS) to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while meeting systematic safety standards such as ISO 26262 ASIL-D.

The two DRIVE Orin SoCs at the center of JIDU vehicles will deliver more than 500 TOPS of performance to achieve the redundancy and diversity necessary for autonomous operation and in-cabin AI features.

Even More in Store

JIDU will begin taking orders in 2023 for the production version of the Robo-01, with deliveries scheduled for 2024.

The automaker plans to unveil the design of its second production model at this year’s Guangzhou Auto Show in November.

Jam-packed with intelligent features and room to add even more, the Robo-01 shows the incredible possibilities that future electric vehicles can achieve with a centralized, software-defined AI architecture.

The post A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin appeared first on NVIDIA Blog.


Use a pre-signed URL to provide your business analysts with secure access to Amazon SageMaker Canvas

Agility and security have historically been two aspects of IT that are of paramount importance for any company. With the simplification of access to advanced IT technologies thanks to low-code and no-code (LCNC) tools, an even larger number of people must be able to access resources without impacting security. For many companies, the solution has been to develop a company web portal that simplifies access to cloud applications and resources by redirecting to or embedding applications, so that employees have a single point of access to the services they use most.

In this post, we suggest an architecture for a company with an existing web portal to generate a pre-signed URL that redirects to Amazon SageMaker Canvas, a visual point-and-click interface for business analysts to build machine learning (ML) models and generate accurate predictions without writing code or having any previous ML experience, all without having to log in via the AWS Management Console.

Solution overview

The solution architecture is composed of three main parts:

  • The company web portal, with its own system for authentication of users and other resources.
  • An AWS Lambda function, responsible for calling the Amazon SageMaker SDK. This function is called via its function URL, a simple way to assign an HTTP(S) endpoint directly to the Lambda function, without the need for a REST API.
  • The Canvas app.

The following diagram illustrates the solution workflow.

The flow has four steps:

  1. The business analyst accesses the company portal, (optionally) authenticates, then chooses to generate a Canvas URL.
  2. The Lambda function receives information about the user from the company portal, and uses it to call SageMaker via an AWS SDK to generate a presigned Canvas URL. For this post, we use the AWS SDK for Python (Boto3).
  3. The generated URL is sent back to the business analyst through the company portal.
  4. The business analyst can then choose that link to access Canvas directly, without having to access the console.

Prerequisites

Before you implement the solution architecture, make sure that you have correctly onboarded to an Amazon SageMaker Studio domain using AWS Identity and Access Management (IAM). For instructions, refer to Onboard to Amazon SageMaker Domain Using IAM. IAM as the authentication method is a strict requirement, because the CreatePresignedDomainUrl API requires it and won’t work with AWS Single Sign-On authentication for your domain. Also, make sure you have created at least one user profile for your Studio domain.
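
To verify that your domain uses IAM authentication, you can check its AuthMode with Boto3; this is a minimal sketch, and the domain ID is a placeholder.

import boto3

sagemaker = boto3.client("sagemaker")

# Placeholder Studio domain ID (for example, d-xxxxxxxxxxxx)
domain = sagemaker.describe_domain(DomainId="<YOUR-DOMAIN-ID>")

# CreatePresignedDomainUrl requires IAM authentication, so this should print "IAM", not "SSO"
print(domain["AuthMode"])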

Deploy the solution

The first step is to create the Lambda function.

  1. On the Lambda console, choose Create function.
  2. For Name, enter a name (for this post, canvas-presignedURL).
  3. For Runtime, choose Python 3.9.
  4. For Architecture, select your preferred architecture (for this post, we select arm64).
  5. Under Permissions, expand Change default execution role.
  6. Select Create a new role with basic Lambda permissions.
    We change the Lambda permissions in a later step.
  7. Under Advanced settings, select Enable function URL.
  8. For Auth type, select NONE.
    For this post, we don’t provide authentication details to our requests. However, this isn’t a best practice and it’s not advised for production workloads. We suggest using IAM authentication for your Lambda function, or another method for authentication and authorization such as Amazon Cognito.
  9. If your domain runs in a VPC, select Enable VPC to access those private resources.
  10. Choose Create function.
    Function creation takes a few seconds to complete. You can now set up the permissions to run SageMaker calls.
  11. On the Configuration tab, choose Permissions in the left pane.
  12. Choose your role name.

    You’re redirected to the IAM console.
  13. Choose Add permissions.
  14. Choose Create inline policy.
  15. For Service, choose SageMaker.
  16. For Actions, choose CreatePresignedDomainUrl.
  17. For Resources, select Any in this account.
  18. Choose Review.
  19. Enter a name for the policy (for this post, CanvasPresignedURLsFromLambda).
  20. Choose Create policy.
    The policy is now created and assigned to the role. You can close the IAM console tab and return to the Lambda console. Now it’s time to change our code base to call SageMaker. We use the Boto3 call create_presigned_domain_url.
  21. On the Code tab, replace the code inside the lambda_function.py file with the following:
    import json
    import boto3
    
    sagemaker = boto3.client('sagemaker')
    SESSION_EXPIRATION_IN_SECONDS = 8*60*60 # the session will be valid for 8 hours
    URL_TIME_TO_LIVE_IN_SECONDS = 60 # the URL is only valid for 60 seconds
    
    def lambda_handler(event, context):
        
        # Parse the event body
        body = json.loads(event['body'])
        
        # Pass the domain ID and user profile name as part of the request
        domain_id = body['domain_id']
        user_profile_name = body['user_profile_name']
        
        # Call the service to create the URL
        response = sagemaker.create_presigned_domain_url(
            DomainId=domain_id,
            UserProfileName=user_profile_name,
            SessionExpirationDurationInSeconds=SESSION_EXPIRATION_IN_SECONDS,
            ExpiresInSeconds=URL_TIME_TO_LIVE_IN_SECONDS
        )
        studio_url = response['AuthorizedUrl']
        
        # Add the redirect to Canvas
        canvas_url = studio_url + '&redirect=Canvas'
        
        # Return to the app
        return {
            'statusCode': 200,
            'body': json.dumps(canvas_url)
        }

    The preceding code consists of three main steps:

    • Parsing the body of the request and retrieving the Studio domain ID and user profile name
    • Calling the API with this information
    • Adding the redirection to Canvas and returning the result

    Now that the function is ready, let’s test it.

  22. Choose Deploy, then choose Test.
  23. In the test event configuration, provide the following event JSON, substituting the correct values:
    {
      "body": "{"domain_id": "<YOUR-DOMAIN-ID>","user_profile_name": "<YOUR-USER-PROFILE>"}"
    }

  24. Save the test event and choose Test again.

Your result should now be available in the body of your response.

You can now test this with your HTTP request tool of choice, such as cURL or Postman, before integrating it into your existing company web portal. The following screenshot shows a Postman POST request to the Lambda function URL created in the previous steps, along with the response payload containing the pre-signed URL.
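
If you prefer to test from code, the following Python sketch sends the same request with the requests library; the function URL, domain ID, and user profile name are placeholders.

import json

import requests

# Placeholder function URL; copy yours from the Lambda console
function_url = "https://<YOUR-FUNCTION-URL-ID>.lambda-url.<REGION>.on.aws/"

payload = {"domain_id": "<YOUR-DOMAIN-ID>", "user_profile_name": "<YOUR-USER-PROFILE>"}
response = requests.post(function_url, data=json.dumps(payload))

# The function returns the presigned Canvas URL as a JSON-encoded string
canvas_url = response.json()
print(canvas_url)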

The following screenshot shows an example of a (simplified) company web portal that, upon login, generates a pre-signed URL to access Amazon SageMaker Canvas.

Conclusion

In this post, we discussed a solution to help business analysts experience no-code ML via Canvas in a secured and unified way through their company web portal, without the need to allow access via the console. We used a Lambda function to generate a presigned URL, which the business analyst can use directly in their browser.

To make this solution production-ready, we suggest considering how to implement authentication and authorization, either via IAM authentication of Lambda functions with function URLs, or more advanced solutions based on Amazon API Gateway, such as API Gateway Lambda authorizers. For more information, refer to Security and auth model for Lambda function URLs.

If you haven’t built your company web portal yet, you might want to check out AWS Amplify Studio, a visual development environment that lets developers easily build and ship complete web and mobile apps in hours instead of weeks. With Amplify Studio, you can quickly build an app backend, create rich user interface (UI) components, and connect a UI to the backend with minimal coding.

To learn more about Canvas, check out Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts.


About the Author

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since very young, starting to code at the age of 7. He started learning AI/ML in his later years of university, and has fallen in love with it since then.


Enable business analysts to access Amazon SageMaker Canvas without using the AWS Management Console with AWS SSO

IT has evolved in recent years: thanks to low-code and no-code (LCNC) technologies, an increasing number of people with varying backgrounds require access to tools and platforms that were previously the prerogative of more tech-savvy individuals in the company, such as engineers or developers.

Among those LCNC technologies, we recently announced Amazon SageMaker Canvas, a visual point-and-click interface for business analysts to build machine learning (ML) models and generate accurate predictions without writing code or having any previous ML experience.

To enable agility for those new users while ensuring security of the environments, many companies have chosen to adopt single sign-on technology, such as AWS Single Sign-On. AWS SSO is a cloud-based single sign-on service that makes it easy to centrally manage SSO access to all your AWS accounts and cloud applications. It includes a user portal where end-users can find and access all their assigned AWS accounts and cloud applications in one place, including custom applications that support Security Assertion Markup Language (SAML) 2.0.

In this post, we walk you through the necessary steps to configure Canvas as a custom SAML 2.0 application in AWS SSO, so that your business analysts can seamlessly access Canvas with their credentials from AWS SSO or other existing identity providers (IdPs), without the need to do so via the AWS Management Console.

Solution overview

To establish a connection from AWS SSO to the Amazon SageMaker Studio domain app, you must complete the following steps:

  1. Create a user profile in Studio for every AWS SSO user that should access Canvas.
  2. Create a custom SAML 2.0 application in AWS SSO and assign it to the users.
  3. Create the necessary AWS Identity and Access Management (IAM) SAML provider and AWS SSO role.
  4. Map the necessary information from AWS SSO to the SageMaker domain via attribute mappings.
  5. Access the Canvas application from AWS SSO.

Prerequisites

To connect Canvas to AWS SSO, you must have the following prerequisites set up:

  • AWS SSO enabled in your AWS account, with the users you want to onboard already created
  • An Amazon SageMaker Studio domain

Create a Studio domain user profile

In a Studio domain, every user has their own user profile. Studio apps such as the Studio IDE, RStudio, and Canvas are created by these user profiles and are bound to the user profile that created them.

For AWS SSO to access the Canvas app for a given user profile, you have to map the user profile name to the user name in AWS SSO. This way, the AWS SSO user name—and therefore the user profile name—can be passed automatically by AWS SSO to Canvas.

In this post, we assume that AWS SSO users are already available, created during the prerequisites of onboarding to AWS SSO. You need a user profile for each AWS SSO user that you want to onboard to your Studio domain and therefore to Canvas.

To retrieve this information, navigate to the Users page on the AWS SSO console. Here you can see the user name of your user, in our case davide-gallitelli.

With this information, you can now go to your Studio domain and create a new user profile called exactly davide-gallitelli.

If you have another IdP, you can use any information provided by it to name your user profile, as long as it’s unique for your domain. Just make sure you map it correctly according to AWS SSO attribute mapping.
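
If you have many AWS SSO users to onboard, you can also create the Studio user profiles programmatically. The following Boto3 sketch assumes an IAM-authenticated domain and uses placeholder values for the domain ID and user names.

import boto3

sagemaker = boto3.client("sagemaker")

# One Studio user profile per AWS SSO user name (placeholder values)
sso_user_names = ["davide-gallitelli"]

for user_name in sso_user_names:
    sagemaker.create_user_profile(
        DomainId="<YOUR-DOMAIN-ID>",
        UserProfileName=user_name,
    )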

Create the custom SAML 2.0 application in AWS SSO

The next step is to create a custom SAML 2.0 application in AWS SSO.

  1. On the AWS SSO console, choose Applications in the navigation pane.
  2. Choose Add a new application.
  3. Choose Add a custom SAML 2.0 application.
  4. Download the AWS SSO SAML metadata file, which you use during IAM configuration.
  5. For Display name, enter a name, such as SageMaker Canvas followed by your Region.
  6. For Description, enter an optional description.
  7. For Application start URL, leave as is.
  8. For Relay state, enter https://YOUR-REGION.console.aws.amazon.com/sagemaker/home?region=YOUR-REGION#/studio/canvas/open/YOUR-STUDIO-DOMAIN-ID.
  9. For Session duration, choose your session duration. We suggest 8 hours.
    The Session duration value represents the amount of time you want the user session to last before authentication is required again. One hour is the most secure, whereas a longer duration means fewer re-authentications. We choose 8 hours in this case, equivalent to one work day.
  10. For Application ACS URL, enter https://signin.aws.amazon.com/saml.
  11. For Application SAML audience, enter urn:amazon:webservices.
    After your settings are saved, your application configuration should look similar to the following screenshot.
    You can now assign your users to this application, so that the application appears in their AWS SSO portal after login.
  12. On the Assigned users tab, choose Assign users.
  13. Choose your users.

Optionally, if you want to enable many data scientists and business analysts in your company to use Canvas, the fastest and easiest way is to use AWS SSO groups. To do so, create two AWS SSO groups, for example business-analysts and data-scientists, assign users to the groups according to their roles, and then give both groups access to the application.

Configure your IAM SAML provider and AWS SSO role

To configure your IAM SAML provider, complete the following steps:

  1. On the IAM console, choose Identity providers in the navigation pane.
  2. Choose Add provider.
  3. For Provider type, select SAML.
  4. For Provider name, enter a name, such as AWS_SSO_Canvas.
  5. Upload the metadata document you downloaded earlier.
  6. Note the ARN to use in a later step.

    We also need to create a new role for AWS SSO to use to access the application.
  7. On the IAM console, choose Roles in the navigation pane.
  8. Choose Create role.
  9. For Trusted entity type, select SAML 2.0 federation.
  10. For SAML 2.0-based provider, choose the provider you created (AWS_SSO_Canvas).
  11. Don’t select either of the two SAML 2.0 access methods.
  12. For Attribute, choose SAML:sub_type.
  13. For Value, enter persistent.
  14. Choose Next.

    We need to give AWS SSO permission to create a Studio domain presigned URL, which is required to perform the redirect to Canvas.
  15. On the Permissions policies page, choose Create policy.
  16. On the Create policy tab, choose JSON and enter the following code:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "sagemaker:CreatePresignedDomainUrlWithPrincipalTag",
                    "sagemaker:CreatePresignedDomainUrl"
                ],
                "Resource": "*"
            }
        ]
    }

  17. Choose Next:Tags and provide tags if needed.
  18. Choose Next:Review.
  19. Name the policy, for example CanvasSSOPresignedURL.
  20. Choose Create policy.
  21. Return to the Add permissions page and search for the policy you created.
  22. Select the policy, then choose Next.
  23. Name the role, for example AWS_SSO_Canvas_Role, and provide an optional description.
  24. On the review page, edit the trust policy to match the following code:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "<ARN OF THE SAML PROVIDER FROM IAM>"
                },
                "Action": [
                    "sts:AssumeRoleWithSAML",
                    "sts:SetSourceIdentity",
                    "sts:TagSession"
                ],
                "Condition": {
                    "StringEquals": {
                        "SAML:sub_type": "persistent",
                        "SAML:aud": "https://signin.aws.amazon.com/saml"
                    }
                }
            }
        ]
    }

  25. Save the changes, then choose Create role.
  26. Note the ARN of this role as well, to use in the following section.

Configure the attribute mappings in AWS SSO

The final step is to configure the attribute mappings. The attributes you map here become part of the SAML assertion that is sent to the application. You can choose which user attributes in your application map to corresponding user attributes in your connected directory. For more information, refer to Attribute mappings.

  1. On the AWS SSO console, navigate to the application you created.
  2. On the Attribute mappings tab, configure the following mappings:
    • Subject maps to ${user:email}
    • https://aws.amazon.com/SAML/Attributes/RoleSessionName maps to ${user:email}
    • https://aws.amazon.com/SAML/Attributes/PrincipalTag:SageMakerStudioUserProfileName maps to ${user:subject}
    • https://aws.amazon.com/SAML/Attributes/Role maps to <ARN OF THE SAML PROVIDER FROM IAM>, <ARN OF THE CANVAS SSO ROLE FROM IAM>
  3. Choose Save changes.

You’re done!

Access the Canvas application from AWS SSO

On the AWS SSO console, note down the user portal URL. We suggest you log out of your AWS account first, or open an incognito browser window. Navigate to the user portal URL, log in with the credentials you set for the AWS SSO user, then choose your Canvas application.

You’re automatically redirected to the Canvas application.

Conclusion

In this post, we discussed a solution to enable business analysts to experience no-code ML via Canvas in a secured and unified way through a single sign-on portal. To do this, we configured Canvas as a custom SAML 2.0 application within AWS SSO. Business analysts are now one click away from using Canvas and solving new challenges with no-code ML. This provides the security needed by cloud engineering and security teams, while allowing for the agility and independence of business analyst teams. A similar process can be replicated with any IdP by reproducing these steps and adapting them to your specific SSO solution.

To learn more about Canvas, check out Announcing Amazon SageMaker Canvas – a Visual, No Code Machine Learning Capability for Business Analysts. Canvas also enables easy collaboration with data science teams. To learn more, see Build, Share, Deploy: how business analysts and data scientists achieve faster time-to-market using no-code ML and Amazon SageMaker Canvas. For IT administrators, we suggest checking out Setting up and managing Amazon SageMaker Canvas (for IT administrators).


About the Author

Davide Gallitelli is a Specialist Solutions Architect for AI/ML in the EMEA region. He is based in Brussels and works closely with customers throughout Benelux. He has been a developer since very young, starting to code at the age of 7. He started learning AI/ML in his later years of university, and has fallen in love with it since then.


Scanned Objects by Google Research: A Dataset of 3D-Scanned Common Household Items

Many recent advances in computer vision and robotics rely on deep learning, but training deep learning models requires a wide variety of data to generalize to new scenarios. Historically, deep learning for computer vision has relied on datasets with millions of items that were gathered by web scraping, examples of which include ImageNet, Open Images, YouTube-8M, and COCO. However, the process of creating these datasets can be labor-intensive, and can still exhibit labeling errors that can distort the perception of progress. Furthermore, this strategy does not readily generalize to arbitrary three-dimensional shapes or real-world robotic data.

Real-world robotic data collection is very useful, but difficult to scale and challenging to label.

Simulating robots and environments using tools such as Gazebo, MuJoCo, and Unity can mitigate many of the inherent limitations in these datasets. However, simulation is only an approximation of reality — handcrafted models built from polygons and primitives often correspond poorly to real objects. Even if a scene is built directly from a 3D scan of a real environment, the movable objects in that scan will act like fixed background scenery and will not respond the way real-world objects would. Due to these challenges, there are few large libraries with high-quality models of 3D objects that can be incorporated into physical and visual simulations to provide the variety needed for deep learning.

In “Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items”, presented at ICRA 2022, we describe our efforts to address this need by creating the Scanned Objects dataset, a curated collection of over 1000 3D-scanned common household items. The Scanned Objects dataset is usable in tools that read Simulation Description Format (SDF) models, including the Gazebo and PyBullet robotics simulators. Scanned Objects is hosted on Open Robotics, an open-source hosting environment for models compatible with the Gazebo simulator.
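
As a quick illustration of how the models can be used, the following PyBullet sketch loads a scanned model into a simple scene; the local path to a downloaded and extracted model is a placeholder.

import pybullet as p
import pybullet_data

p.connect(p.DIRECT)  # use p.GUI for an interactive window
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.8)
p.loadURDF("plane.urdf")  # ground plane from pybullet_data

# Placeholder path to an extracted Scanned Objects model
body_ids = p.loadSDF("/path/to/scanned_object/model.sdf")
print(body_ids)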

History
Robotics researchers within Google began scanning objects in 2011, creating high-fidelity 3D models of common household objects to help robots recognize and grasp things in their environments. However, it became apparent that 3D models have many uses beyond object recognition and robotic grasping, including scene construction for physical simulations and 3D object visualization for end-user applications. Therefore, this Scanned Objects project was expanded to bring 3D experiences to Google at scale, collecting a large number of 3D scans of household objects through a process that is more efficient and cost effective than traditional commercial-grade product photography.

Scanned Objects was an end-to-end effort, involving innovations at nearly every stage of the process, including curation of objects at scale for 3D scanning, the development of novel 3D scanning hardware, efficient 3D scanning software, fast 3D rendering software for quality assurance, and specialized frontends for web and mobile viewers. We also executed human-computer interaction studies to create effective experiences for interacting with 3D objects.

Objects that were acquired for scanning.

These object models proved useful in 3D visualizations for Everyday Robots, which used the models to bridge the sim-to-real gap for training, work later published as RetinaGAN and RL-CycleGAN. Building on these earlier 3D scanning efforts, in 2019 we began preparing an external version of the Scanned Objects dataset and transforming the previous set of 3D images into graspable 3D models.

Object Scanning
To create high-quality models, we built a scanning rig to capture images of an object from multiple directions under controlled and carefully calibrated conditions. The system consists of two machine vision cameras for shape detection, a DSLR camera for high-quality HDR color frame extraction, and a computer-controlled projector for pattern recognition. The scanning rig uses a structured light technique that infers a 3D shape from camera images with patterns of light that are projected onto an object.

The scanning rig used to capture 3D models.
A shoe being scanned (left). Images are captured from several directions with different patterns of light and color. A shadow passing over an object (right) illustrates how a 3D shape can be captured with an off-axis view of a shadow edge.

Simulation Model Conversion
The early internal scanned models used protocol buffer metadata, high-resolution visuals, and formats that were not suitable for simulation. For some objects, physical properties, such as mass, were captured by weighing the objects at scanning time, but surface properties, such as friction or deformation, were not represented.

So, following data collection, we built an automated pipeline to solve these issues and enable the use of scanned models in simulation systems. The automated pipeline filters out invalid or duplicate objects, automatically assigns object names using text descriptions of the objects, and eliminates object mesh scans that do not meet simulation requirements. Next, the pipeline estimates simulation properties (e.g., mass and moment of inertia) from shape and volume, constructs collision volumes, and downscales the model to a usable size. Finally, the pipeline converts each model to SDF format, creates thumbnail images, and packages the model for use in simulation systems.

The pipeline filters models that are not suitable for simulation, generates collision volumes, computes physical properties, downsamples meshes, generates thumbnails, and packages them all for use in simulation systems.
A collection of Scanned Object models rendered in Blender.

The output of this pipeline is a simulation model in an appropriate format with a name, mass, friction, inertia, and collision information, along with searchable metadata in a public interface compatible with our open-source hosting on Open Robotics’ Gazebo.

The output objects are represented as SDF models that refer to Wavefront OBJ meshes averaging 1.4 MB per model. Textures for these models are in PNG format and average 11.2 MB. Together, these provide high-resolution shape and texture.

Impact
The Scanned Objects dataset contains 1030 scanned objects and their associated metadata, totaling 13 GB, licensed under the CC-BY 4.0 License. Because these models are scanned rather than modeled by hand, they realistically reflect real object properties, not idealized recreations, reducing the difficulty of transferring learning from simulation to the real world.

Input views (left) and reconstructed shape and texture from two novel views on the right (figure from Differentiable Stereopsis).
Visualized action scoring predictions over three real-world 3D scans from the Replica dataset and Scanned Objects (figure from Where2Act).

The Scanned Objects dataset has already been used in over 25 papers across as many projects, spanning computer vision, computer graphics, robot manipulation, robot navigation, and 3D shape processing. Most projects used the dataset to provide synthetic training data for learning algorithms. For example, the Scanned Objects dataset was used in Kubric, an open-sourced generator of scalable datasets for use in over a dozen vision tasks, and in LAX-RAY, a system for searching shelves with lateral access X-rays to automate the mechanical search for occluded objects on shelves.

Unsupervised 3D keypoints on real-world data (figure from KeypointDeformer).

We hope that the Scanned Objects dataset will be used by more robotics and simulation researchers in the future, and that the example set by this dataset will inspire other owners of 3D model repositories to make them available for researchers everywhere. If you would like to try it yourself, head to Gazebo and start browsing!

Acknowledgments
The authors thank the Scanned Objects team, including Peter Anderson-Sprecher, J.J. Blumenkranz, James Bruce, Ken Conley, Katie Dektar, Charles DuHadway, Anthony Francis, Chaitanya Gharpure, Topraj Gurung, Kristy Headley, Ryan Hickman, John Isidoro, Sumit Jain, Brandon Kinman, Greg Kline, Mach Kobayashi, Nate Koenig, Kai Kohlhoff, James Kuffner, Thor Lewis, Mike Licitra, Lexi Martin, Julian (Mac) Mason, Rus Maxham, Pascal Muetschard, Kannan Pashupathy, Barbara Petit, Arshan Poursohi, Jared Russell, Matt Seegmiller, John Sheu, Joe Taylor, Josh Weaver, and Tommy McHugh.

Special thanks go to Krista Reyman for organizing this project, helping write the paper, and editing this blogpost, James Bruce for the scanning pipeline design and Pascal Muetschard for maintaining the database of object models.


The Data Center’s Traffic Cop: AI Clears Digital Gridlock

Gal Dalal wants to ease the commute for those who work from home — or the office.

The senior research scientist at NVIDIA, who is part of a 10-person lab in Israel, is using AI to reduce congestion on computer networks.

For laptop jockeys, a spinning circle of death — or worse, a frozen cursor — is as bad as a sea of red lights on the highway. Like rush hour, it’s caused by a flood of travelers angling to get somewhere fast, crowding and sometimes colliding on the way.

AI at the Intersection

Networks use congestion control to manage digital traffic. It’s basically a set of rules embedded into network adapters and switches, but as the number of users on a network grows, their conflicts can become too complex to anticipate.

AI promises to be a better traffic cop because it can see and respond to patterns as they develop. That’s why Dalal is among many researchers around the world looking for ways to make networks smarter with reinforcement learning, a type of AI that rewards models when they find good solutions.

But until now, no one’s come up with a practical approach for several reasons.

Racing the Clock

Networks need to be both fast and fair so no request gets left behind. That’s a tough balancing act when no one driver on the digital road can see the entire, ever-changing map of other drivers and their intended destinations.

And it’s a race against the clock. To be effective, networks need to respond to situations in about a microsecond, which is one-millionth of a second.

To smooth traffic, the NVIDIA team created new reinforcement learning techniques inspired by state-of-the-art computer game AI and adapted them to the networking problem.

Part of their breakthrough, described in a 2021 paper, was coming up with an algorithm and a corresponding reward function for a balanced network based only on local information available to individual network streams. The algorithm enabled the team to create, train and run an AI model on their NVIDIA DGX system.

A Wow Factor

Dalal recalls the meeting where a fellow Nvidian, Chen Tessler, showed the first chart plotting the model’s results on a simulated InfiniBand data center network.

“We were like, wow, ok, it works very nicely,” said Dalal, who wrote his Ph.D. thesis on reinforcement learning at Technion, Israel’s prestigious technical university.

“What was especially gratifying was we trained the model on just 32 network flows, and it nicely generalized what it learned to manage more than 8,000 flows with all sorts of intricate situations, so the machine was doing a much better job than preset rules,” he added.

Reinforcement learning (purple) outperformed all rule-based congestion control algorithms in NVIDIA’s tests.

In fact, the algorithm delivered at least 1.5x better throughput and 4x lower latency than the best rule-based technique.

Since the paper’s release, the work’s won praise as a real-world application that shows the potential of reinforcement learning.

Processing AI in the Network

The next big step, still a work in progress, is to design a version of the AI model that can run at microsecond speeds using the limited compute and memory resources in the network. Dalal described two paths forward.

His team is collaborating with the engineers designing NVIDIA BlueField DPUs to optimize the AI models for future hardware. BlueField DPUs aim to run an expanding set of communications jobs inside the network, offloading tasks from overburdened CPUs.

Separately, Dalal’s team is distilling the essence of its AI model into a machine learning technique called boosting trees, a series of yes/no decisions that’s nearly as smart but much simpler to run. The team aims to present its work later this year in a form that could be immediately adopted to ease network traffic.

A Timely Traffic Solution

To date, Dalal has applied reinforcement learning to everything from autonomous vehicles to data center cooling and chip design. When NVIDIA acquired Mellanox in April 2020, the NVIDIA Israel researcher started collaborating with his new colleagues in the nearby networking group.

“It made sense to apply our AI algorithms to the work of their congestion control teams, and now, two years later, the research is more mature,” he said.

It’s good timing. Recent reports of double-digit increases in Israel’s car traffic since pre-pandemic times could encourage more people to work from home, driving up network congestion.

Luckily, an AI traffic cop is on the way.

The post The Data Center’s Traffic Cop: AI Clears Digital Gridlock appeared first on NVIDIA Blog.
