AWS Finance and Global Business Services builds an automated contract-processing platform using Amazon Textract and Amazon Comprehend

Processing incoming documents such as contracts and agreements is often an arduous task. The typical workflow for reviewing signed contracts involves loading, reading, and extracting contractual terms from agreements, which requires hours of intensive manual effort.

At AWS Finance and Global Business Services (AWS FGBS), this process typically takes more than 150 employee hours per month. Often, multiple analysts manually input key contractual data into an Excel workbook in batches of a hundred contracts at a time.

Recently, one of the AWS FGBS teams responsible for analyzing contract agreements set out to implement an automated workflow to process incoming documents. The goal was to free specialized accounting resources from routine and tedious manual labor so that they have more time to perform value-added financial analysis.

As a result, the team built a solution that consistently parses and stores important contractual data from an entire contract in under a minute with high fidelity and security. Now an automated process only requires a single analyst working 30 hours a month to maintain and run the platform. This is a 5x reduction in processing time and significantly improves operational productivity.

This application was made possible by two machine learning (ML)-powered AWS-managed services: Amazon Textract, which enables efficient document ingestion, and Amazon Comprehend, which provides downstream text processing that enables the extraction of key terms.

The following post presents an overview of this solution, a deep dive into the architecture, and a summary of the design choices made.

Contract workflow

The following diagram illustrates the architecture of the solution:

The AWS FGBS team, with help from the AWS Machine Learning (ML) Professional Services (ProServe) team, created an automated, durable, and scalable contract-processing platform. Incoming contract data is stored in an Amazon Simple Storage Service (Amazon S3) data lake. The solution uses Amazon Textract to convert the contracts into text form and Amazon Comprehend for text analysis and term extraction. Critical terms and other metadata extracted from the contracts are stored in Amazon DynamoDB, a database designed to accept key-value and document data types. Accounting users access the data via a custom web user interface hosted in Amazon S3 and delivered through Amazon CloudFront, where they can perform key user actions such as error checking, data validation, and custom term entry. They can also generate reports using a Tableau server hosted on Amazon AppStream 2.0, a fully managed application streaming service. You can launch and host this end-to-end contract-processing platform in an AWS production environment using an AWS CloudFormation template.

Building an efficient and scalable document-ingestion engine

The contracts that the AWS FGBS team encounters often include high levels of sophistication, which has historically required human review to parse and extract relevant information. The format of these contracts has changed over time, varying in length and complexity. Adding to the challenge, the documents not only contain free text, but also tables and forms that hold important contextual information.

To meet these needs, the solution uses Amazon Textract as the document-ingestion engine. Amazon Textract is a powerful service built on top of a set of pre-trained ML computer vision models tuned to perform Optical Character Recognition (OCR) by detecting text and numbers from a rendering of a document, such as an image or PDF. Amazon Textract takes this further by recognizing tables and forms so that contextual information for each word is preserved. The team was interested in extracting important key terms from each contract, but not every contract contained the same set of terms. For example, many contracts hold a main table that has the name of a term on the left-hand side, and the value of the term on the right-hand side.  The solution can use the result from the Form Extraction feature of Amazon Textract to construct a key-value pair that links the name and the value of a contract term.

The following diagram illustrates the architecture for batch processing in Amazon Textract:

A pipeline processes incoming contracts using the asynchronous Amazon Textract APIs, orchestrated with Amazon Simple Queue Service (Amazon SQS), a distributed message queuing service. The AWS Lambda function StartDocumentTextAnalysis, triggered when new files are deposited into the contract data lake in Amazon S3, initiates the Amazon Textract processing jobs. Because contracts are loaded in batches, and the asynchronous API can accommodate PDF files natively without first converting them to an image file format, the design choice was made to build an asynchronous process to improve scalability. The solution uses the DocumentAnalysis APIs instead of the DocumentDetection APIs so that recognition of tables and forms is enabled.
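
The following is a minimal sketch of what such a Lambda handler could look like; the SNS topic and role ARNs are placeholders, and the team's actual StartDocumentTextAnalysis function may differ.

import boto3

textract = boto3.client("textract")

# Placeholder ARNs for illustration only; not part of the original solution
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:textract-job-status"
TEXTRACT_ROLE_ARN = "arn:aws:iam::123456789012:role/TextractPublishToSNSRole"

def lambda_handler(event, context):
    """Start an asynchronous Textract analysis job for each new contract in S3."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # DocumentAnalysis (not DocumentDetection) so tables and forms are recognized
        response = textract.start_document_analysis(
            DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
            FeatureTypes=["TABLES", "FORMS"],
            NotificationChannel={
                "SNSTopicArn": SNS_TOPIC_ARN,
                "RoleArn": TEXTRACT_ROLE_ARN,
            },
        )
        print(f"Started Textract job {response['JobId']} for s3://{bucket}/{key}")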

When a DocumentAnalysis job is complete, a JobID is returned and input into a second queue, where it is used to retrieve the completed output from Amazon Textract in the form of a JSON object. The response contains both the extracted text and document metadata within a large nested data structure. The GetDocumentTextAnalysis function handles the retrieval and storage of JSON outputs into an S3 bucket to await post-processing and term extraction.
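
As an illustration only (not the team's exact code), retrieving and storing a completed job's output might look like the following; get_document_analysis is paginated, so the sketch follows NextToken until all blocks are collected.

import json
import boto3

textract = boto3.client("textract")
s3 = boto3.client("s3")

def fetch_textract_output(job_id, output_bucket, output_key):
    """Collect all pages of a completed Textract analysis job and store them in S3."""
    blocks, next_token = [], None
    while True:
        kwargs = {"JobId": job_id, "MaxResults": 1000}
        if next_token:
            kwargs["NextToken"] = next_token
        response = textract.get_document_analysis(**kwargs)
        blocks.extend(response["Blocks"])
        next_token = response.get("NextToken")
        if not next_token:
            break

    s3.put_object(
        Bucket=output_bucket,
        Key=output_key,
        Body=json.dumps({"Blocks": blocks}),
    )
    return blocks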

Extracting standard and non-standard terms with Amazon Comprehend

The following diagram illustrates the architecture of key term extraction:

After the solution has deposited the processed contract output from Amazon Textract into an S3 bucket, a term extraction pipeline begins. The primary worker for this pipeline is the ExtractTermsFromTextractOutput function, which encodes the intelligence behind the term extraction.

There are a few key actions performed at this step:

  1. The contract is broken up into sections and basic attributes are extracted, such as the contract title and contract ID number.
  2. The standard contract terms are identified as key-value pairs. Using the table and form relationships uncovered in the Amazon Textract output, the solution can find the right term and value from tables containing key terms, while taking additional steps to convert the term value into the correct data format, such as parsing dates (see the sketch after this list).
  3. There is a group of non-standard terms that can either be within a table or embedded within the free text in certain sections of the contract. The team used the Amazon Comprehend custom classification model to identify these sections of interest.
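
The following sketch shows the general pattern for assembling key-value pairs from the KEY_VALUE_SET blocks in a Textract response. It is a simplified illustration of that pattern, not the team's ExtractTermsFromTextractOutput function.

def get_text(block, block_map):
    """Concatenate the WORD children of a block into a single string."""
    words = []
    for rel in block.get("Relationships", []):
        if rel["Type"] == "CHILD":
            for child_id in rel["Ids"]:
                child = block_map[child_id]
                if child["BlockType"] == "WORD":
                    words.append(child["Text"])
    return " ".join(words)

def extract_key_value_pairs(blocks):
    """Build {term name: term value} pairs from Textract form output."""
    block_map = {b["Id"]: b for b in blocks}
    pairs = {}
    for block in blocks:
        if block["BlockType"] == "KEY_VALUE_SET" and "KEY" in block.get("EntityTypes", []):
            key_text = get_text(block, block_map)
            value_text = ""
            for rel in block.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for value_id in rel["Ids"]:
                        value_text = get_text(block_map[value_id], block_map)
            if key_text:
                pairs[key_text] = value_text
    return pairs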

To produce the proper training data for the model, subject matter experts on the AWS FGBS team annotated a large historical set of contracts to identify examples of standard contract sections (that don’t contain any special terms) and non-standard contract sections (that contain special terms and language). An annotation platform hosted on an Amazon SageMaker instance displays individual contract sections that were split up previously using the ExtractTermsFromTextractOutput function so that a user could label the sections accordingly.

A final custom text classification model was trained to perform paragraph section classification to identify non-standard sections with an F1 score over 85% and hosted using Amazon Comprehend. One key benefit of using Amazon Comprehend for text classification is the straightforward format requirements of the training data, because the service takes care of text preprocessing steps such as feature engineering and accounting for class imbalance. These are typical considerations that need to be addressed in custom text models.
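
For reference, training and hosting a custom classifier with Amazon Comprehend takes only a few API calls. The following sketch shows their general shape; the bucket, role, and resource names are placeholders rather than the team's actual resources.

import boto3

comprehend = boto3.client("comprehend")

# Placeholder resources for illustration only
TRAINING_DATA_S3_URI = "s3://example-bucket/comprehend/sections-training.csv"
DATA_ACCESS_ROLE_ARN = "arn:aws:iam::123456789012:role/ComprehendDataAccessRole"

# Train a custom classifier on labeled contract sections
# (each CSV row: label,section text)
training_job = comprehend.create_document_classifier(
    DocumentClassifierName="contract-section-classifier",
    DataAccessRoleArn=DATA_ACCESS_ROLE_ARN,
    InputDataConfig={"S3Uri": TRAINING_DATA_S3_URI},
    LanguageCode="en",
)

# Once training finishes, host the model behind a real-time endpoint
endpoint = comprehend.create_endpoint(
    EndpointName="contract-section-endpoint",
    ModelArn=training_job["DocumentClassifierArn"],
    DesiredInferenceUnits=1,
)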

After dividing contracts into sections, a subset of the paragraphs is passed to the custom classification model endpoint from the ExtractTermsFromTextractOutput function to detect the presence of non-standard terms. The final list of contract sections that were checked and sections that were flagged as non-standard is recorded in the DynamoDB table. This mechanism notifies an accountant to examine sections of interest that require human review to interpret non-standard contract language and pick out any terms that are worth recording.
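
A call from a function like ExtractTermsFromTextractOutput to the hosted classifier could look like the following sketch; the endpoint ARN, label name, and decision threshold are illustrative assumptions.

import boto3

comprehend = boto3.client("comprehend")

# Hypothetical endpoint ARN and decision threshold
ENDPOINT_ARN = "arn:aws:comprehend:us-east-1:123456789012:document-classifier-endpoint/contract-section-endpoint"
NON_STANDARD_THRESHOLD = 0.5

def is_non_standard(section_text):
    """Return True if the custom classifier flags the section as non-standard."""
    response = comprehend.classify_document(
        Text=section_text,
        EndpointArn=ENDPOINT_ARN,
    )
    scores = {c["Name"]: c["Score"] for c in response["Classes"]}
    return scores.get("NON_STANDARD", 0.0) > NON_STANDARD_THRESHOLD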

After these three stages are complete (breaking down contract sections, extracting standard terms, and classifying non-standard sections using a custom model), a nested dictionary is created with key-value pairs of terms. Error checking is also built into this processing stage. For each contract, an error file is generated that specifies exactly which term extraction caused the error and the error type. This file is deposited into the S3 bucket where contract terms are stored so that users can trace back any failed extractions for a single contract to its individual term. A special error alert phrase is incorporated into the error logging to simplify searching through Amazon CloudWatch logs to locate points where extraction errors occurred. Ultimately, a JSON file containing up to 100 or more terms is generated for each contract and pushed into an intermediate S3 bucket.

The extracted data from each contract is sent to a DynamoDB database for storage and retrieval. DynamoDB serves as a durable data store for two reasons:

  • This key-value and document database is more flexible than a relational database because it can accept data structures such as large string values and nested key-value pairs without requiring a pre-defined schema, so new terms can easily be added to the database as contracts evolve.
  • This fully managed database delivers single-digit millisecond performance at scale so the custom front end and reporting can be integrated on top of this service.

Additionally, DynamoDB supports continuous backups and point-in-time recovery, which enables restoring a table to any single second in the prior 35 days. This feature was crucial in earning trust from the team; it protects their critical data against accidental deletions or edits and provides business continuity for essential downstream tasks such as financial reporting. An important check to perform during data upload is the DynamoDB item-size limit, which is 400 KB. With longer contracts and long lists of extracted terms, some contract outputs exceed this limit, so a check is performed in the term extraction function to break up larger entries into multiple JSON objects.
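
One simple way to guard against this limit is to check the serialized size before writing and split the term dictionary when necessary. The following is a rough sketch of that idea (the ContractTerms table name and key layout are assumptions, not the team's implementation).

import json
import boto3

MAX_ITEM_BYTES = 380_000  # stay safely below the 400 KB DynamoDB item limit

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ContractTerms")  # hypothetical table name

def put_contract_terms(contract_id, terms):
    """Write extracted terms, splitting across multiple items if they are too large."""
    serialized = json.dumps(terms)
    if len(serialized.encode("utf-8")) <= MAX_ITEM_BYTES:
        table.put_item(Item={"contract_id": contract_id, "part": 1, "terms": terms})
        return

    # Split the term dictionary roughly in half and store each half as its own item
    keys = list(terms)
    midpoint = len(keys) // 2
    for part, subset in enumerate([keys[:midpoint], keys[midpoint:]], start=1):
        table.put_item(
            Item={
                "contract_id": contract_id,
                "part": part,
                "terms": {k: terms[k] for k in subset},
            }
        )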

Secure data access and validation through a custom web UI

The following diagram illustrates the custom web user interface (UI) and reporting architecture:

To effectively interact with the extracted contract data, the team built a custom web UI. The web application, built using the Angular v8 framework, is hosted in Amazon S3 and delivered through Amazon CloudFront, a cloud content delivery network. When the Amazon CloudFront URL is accessed, the user is first authenticated and authorized against a user pool with strict permissions allowing them to view and interact with this UI. After validation, the session information is saved to make sure the user stays logged in for only a set amount of time.

On the landing page, the user can navigate to the three key user scenarios displayed as links on the navigation panel:

  • Search for Records – View and edit a record for a contract
  • View Record History – View the record edit history for a contract
  • Appstream Reports – Open the reporting dashboard hosted in Appstream

To view a certain record, the user is prompted to search for the file using the specific customer name or file name. In the former, the search returns the latest records for each contract with the same customer name. In the latter, the search only returns the latest record for a specific contract with that file name.

After the user identifies the right contract to examine or edit, the user can choose the file name and go to the Edit Record page. This displays the full list of terms for a contract and allows the user to edit, add to, or delete the information extracted from the contract. The latest record is retrieved with form fields for each term, which allows the user to choose the value to validate the data, and edit the data if errors are identified. The user can also add new fields by choosing Add New Field and entering a key-value pair for the custom term.

After updating the entry with edited or new fields, the user can choose the Update Item button. This triggers the data for the new record being passed from the frontend via Amazon API Gateway to the PostFromDynamoDB function using a POST method, generating a JSON file that is pushed to the S3 bucket holding all the extracted term data. This file triggers the same UpdateDynamoDB function that pushed the original term data to the DynamoDB table after the first Amazon Textract processing run.
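
A minimal sketch of such a handler is shown below; the bucket name and key layout are assumptions, not the team's actual PostFromDynamoDB implementation.

import json
import boto3

s3 = boto3.client("s3")
TERMS_BUCKET = "contract-extracted-terms"  # placeholder bucket name

def lambda_handler(event, context):
    """Receive an edited record from API Gateway and push it to S3 as a new JSON version."""
    record = json.loads(event["body"])
    key = f"{record['file_name']}/{record['timestamp']}.json"  # assumed key layout
    s3.put_object(Bucket=TERMS_BUCKET, Key=key, Body=json.dumps(record))
    return {"statusCode": 200, "body": json.dumps({"status": "ok"})}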

After verification, the user can choose Delete Record to delete all versions of the contract consistently across the DynamoDB table and the S3 buckets that store the extracted contract data. This is initiated using the DELETE method, triggering the DeleteFromDynamoDB function that expunges the data. During contract data validation and editing, every update to a field creates a new data record that automatically logs a timestamp and user identity, based on the login profile, to ensure fine-grained tracking of the edit history.

To take advantage of the edit tracking, the user can search for a contract and view the entire collection of records for a single contract, ordered by timestamp. This allows the user to compare edits made across time, including any custom fields that were added, and link those edits back to the editor identity.

To make sure the team could ultimately realize the business value from this solution, they added a reporting dashboard pipeline to the architecture. Amazon AppStream 2.0, a fully managed application streaming service, hosts an instance of the reporting application. AWS Glue crawlers build a schema for the data residing in Amazon S3, and Amazon Athena queries and loads the data into the Amazon AppStream instance and into the reporting application.

Given the sensitivity of this data for Amazon, the team implemented several security considerations so they could safely interact with the data without unauthorized third-party access. An Amazon Cognito authorizer secures the API Gateway. During user login, Amazon internal single sign-on with two-factor authentication is verified against an Amazon Cognito user pool that only contains relevant team members.

To view record history, the user can search for a contract and examine the entire collection of records for a single contract to review any erroneous or suspicious edits and trace back to the exact time of edit and identity of editor.

The team also took additional application-hardening steps to secure the architecture and front end. AWS roles and policies are narrowly scoped according to the principle of least privilege. For storage locations such as Amazon S3 and DynamoDB, access is locked down: server-side encryption covers encryption at rest, all public access is blocked, and versioning and logging are enabled. To cover encryption in transit, VPC interface endpoints powered by AWS PrivateLink connect AWS services so data is transmitted over private AWS endpoints rather than the public internet. In all the Lambda functions, temporary data handling best practices are followed by using the Python tempfile library to address time-of-check to time-of-use (TOCTOU) attacks, where attackers may pre-emptively place files at predictable locations. Because Lambda is a serverless service, all data stored in a temp directory is deleted after the Lambda function finishes, unless the data has been pushed to another storage location, such as Amazon S3.
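
For example, a Lambda function can keep intermediate files inside a securely created temporary directory instead of writing to predictable paths. The following sketch illustrates the pattern (the event shape is assumed).

import os
import tempfile
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    """Process a contract using a securely created, non-predictable temp path."""
    bucket = event["bucket"]  # illustrative event shape
    key = event["key"]
    with tempfile.TemporaryDirectory() as workdir:
        local_path = os.path.join(workdir, "contract.pdf")
        s3.download_file(bucket, key, local_path)
        # ... process the file ...
    # The directory and its contents are removed when the context manager exits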

Customer data privacy is a top concern for the AWS FGBS team and AWS service teams. Although incorporating additional data to further train the ML models behind AWS services like Amazon Textract and Amazon Comprehend is crucial to improving performance, you always retain the option to withhold your data. To prevent contract data processed by Amazon Textract and Amazon Comprehend from being stored for model retraining, the AWS FGBS team requested a customer data opt-out by creating a customer support ticket.

To verify the integrity of the application given the sensitivity of the data processed, both internal security reviews and penetration testing from external vendors were performed. This included static code review, dynamic fuzz testing, form validation, script injection and cross-site scripting (XSS) testing, and other items from the OWASP top 10 web application security risks. To address distributed denial-of-service (DDoS) attacks, throttling limits were set on the API Gateway, and CloudWatch alarms were set for a threshold invocation limit.

After security requirements were integrated into the application, the entire solution was codified into an AWS CloudFormation template so it can be launched into the team’s production account. The CloudFormation templates consist of nested stacks that delineate key components of the application’s infrastructure (front-end versus back-end services). A continuous deployment pipeline was set up using AWS CodeBuild to build and deploy any change to the application code or infrastructure. Development and production environments were created in two separate AWS accounts, with deployment into these environments managed by environment variables set in the respective accounts.

Conclusion

The application is now live and being tested with hundreds of contracts every month. Most importantly, the business is beginning to realize the time and cost savings of automating this previously routine business process.

The AWS ProServe team is working with the AWS FGBS team on a second phase of the project to further enhance the solution. To improve the accuracy of term extraction, they’re exploring pretrained and custom Amazon Comprehend NER models, and they will implement a retraining pipeline so the platform’s intelligence can improve over time with new data. They’re also considering Amazon Augmented AI (Amazon A2I), a new human-review capability, to aid in reviewing low-confidence Amazon Textract outputs and generating new training data for model improvement.

As exciting new ML services are being launched by AWS, the AWS FGBS team hopes that solutions like these will replace legacy accounting practices as they continue to achieve their goals of modernizing their business operations.

Get started today! Explore your use case with the services mentioned in this post and many others on the AWS Management Console.

 


About the Authors

Han Man is a Senior Data Scientist with AWS Professional Services. He has a PhD in engineering from Northwestern University and has several years of experience as a management consultant advising clients in manufacturing, financial services, and energy. Today he is passionately working with customers from a variety of industries to develop and implement machine learning & AI solutions on AWS. He enjoys following the NBA and playing basketball in his spare time.

 

AWS Finance and Global Business Services Team

Carly Huang is a Senior Financial Analyst at Amazon Accounting supporting AWS revenue. She holds a Bachelor of Business Administration from Simon Fraser University and is a Chartered Professional Accountant (CPA) with several years of experience working as an auditor focusing on technology and manufacturing clients. She is excited to find new ways using AWS services to improve and simplify existing accounting processes. In her free time, she enjoys traveling and running.

 

 

Shonoy Agrawaal is a Senior Manager at Amazon Accounting supporting AWS revenue. He holds a Bachelor of Business Administration from the University of Washington and is a Certified Public Accountant with several years of experience working as an auditor focused on retail and financial services clients. Today, he supports the AWS business with accounting and financial reporting matters. In his free time, he enjoys traveling and spending time with family and friends.

 

 

AWS ProServe Team

Nithin Reddy Cheruku is a Sr. AI/ML architect with AWS Professional Services, helping customers digitally transform by leveraging emerging technologies. He likes to solve business and community problems in innovative ways. Outside of work, Nithin likes to play cricket and ping pong.

 

 

Huzaifa Zainuddin is a Cloud Infrastructure Architect with AWS Professional Services. He has several years of experience working in a variety of different technical roles. He currently works with customers to help design their infrastructure as well as deploy and scale applications on AWS. When he is not helping customers, he enjoys grilling, traveling, and playing the occasional video game.

 

 

Ananya Koduri is an Application Cloud Architect with the AWS Professional Services West Coast Applications team. With a Master’s degree in Computer Science, she has been a consultant in the tech industry for 5 years, with varied clients in the government, mining, and education sectors. In her current role, she works closely with clients to implement architectures on AWS. In her spare time she enjoys long hikes and is a professional classical dancer.

 

 

Vivek Lakshmanan is a Data & Machine Learning Engineer at Amazon Web Services. He has a Master’s degree in Software Engineering with a specialization in Data Science from San Jose State University. Vivek is excited about applying cutting-edge technologies and building AI/ML solutions for customers in the cloud. He is passionate about statistics, NLP, and model explainability in AI/ML. In his spare time, he enjoys playing cricket and taking unplanned road trips.


Meet Olivia: The first NTTS voice in Australian English for Amazon Polly

Amazon Polly is launching a new Australian English voice, Olivia. Amazon Polly turns text into lifelike speech, allowing you to build speech-enabled products. Building upon the existing Australian English Standard voices, Nicole and Russell, Olivia is the first Australian English voice in Amazon Polly powered by the Neural Text-to-Speech (NTTS) technology.

The NTTS voices in Amazon Polly are designed to create a great user experience that is engaging and similar to listening to a real person. While building Amazon Polly voices, our goal is not only to offer a natural, human-like sound, but also to give end-users a sense of personality.

As the first NTTS Australian English voice for Amazon Polly, Olivia reaches a new level of expressiveness that brings text to life. Olivia’s sweet Australian accent is warm and welcoming, striking the right balance between casualness and professionalism. Olivia’s bright personality and friendly tone provide an engaging user experience, which makes the voice suitable for a wide variety of use cases such as voice bots, contact center IVRs, training videos, audiobooks, or long-form reading.

 

You can use Olivia via the Amazon Polly console, the AWS Command Line Interface (AWS CLI), or AWS SDK. The feature is available across all AWS Regions supporting NTTS. For more information, see What Is Amazon Polly? For the full list of available voices, see Voices in Amazon Polly, or log in to the Amazon Polly console to try it out for yourself! To get more control over the speech output, try SSML tags and tailor the voice to your needs.
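
For example, with the AWS SDK for Python (Boto3), a request could look like the following sketch; Olivia is an NTTS voice, so the neural engine is required.

import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Engine="neural",          # Olivia is an NTTS voice, so the neural engine is required
    VoiceId="Olivia",
    OutputFormat="mp3",
    Text="Welcome to Amazon Polly. My name is Olivia.",
)

# Save the returned audio stream to a local file
with open("olivia.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())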

 


About the Author

Sarah Schopper is a Language Engineer for English Text-to-Speech. At work, she delights customers with new voices for Amazon Polly. In her spare time, she enjoys playing board games and experimenting with new cooking ingredients.


Configuring Amazon SageMaker Studio for teams and groups with complete resource isolation

Amazon SageMaker is a fully managed service that provides every machine learning (ML) developer and data scientist with the ability to build, train, and deploy ML models quickly. Amazon SageMaker Studio is a web-based, integrated development environment (IDE) for ML that lets you build, train, debug, deploy, and monitor your ML models. Amazon SageMaker Studio provides all the tools you need to take your models from experimentation to production while boosting your productivity. You can write code, track experiments, visualize data, and perform debugging and monitoring within a single, integrated visual interface.

This post outlines how to configure access control for teams or groups within Amazon SageMaker Studio using attribute-based access control (ABAC). ABAC is a powerful approach that you can use to configure Studio so that different ML and data science teams have complete isolation of team resources.

We provide guidance on how to configure Amazon SageMaker Studio access for both AWS Identity and Access Management (IAM) and AWS Single Sign-On (AWS SSO) authentication methods. This post helps you set up IAM policies for users and roles using ABAC principles. To demonstrate the configuration, we set up two teams, as shown in the following diagram, and showcase two use cases:

  • Use case 1 – Only User A1 can access their studio environment; User A2 can’t access User A1’s environment, and vice versa
  • Use case 2 – Team B users cannot access artifacts (experiments, etc.) created by Team A members

You can configure policies according to your needs. You can even include a project tag in case you want to further restrict user access by projects within a team. The approach is very flexible and scalable.

Authentication

Amazon SageMaker Studio supports the following authentication methods for onboarding users. When setting up Studio, you can pick an authentication method that you use for all your users:

  • IAM – Includes the following:
    • IAM users – Users managed in IAM
    • AWS account federation – Users managed in an external identity provider (IdP)
  • AWS SSO – Users managed in an external IdP federated using AWS SSO

Data science user personas

The following describes two different personas that interact with Amazon SageMaker Studio resources and the level of access they need to fulfill their duties. We use these personas as a high-level requirement to model IAM roles and policies to establish desired controls based on resource ownership at the team and user level.

User Persona: Admin User

Permissions:

  • Create, modify, and delete any IAM resource.
  • Create Amazon SageMaker Studio user profiles with a tag.
  • Sign in to the Amazon SageMaker console.
  • Read and describe Amazon SageMaker resources.

User Persona: Data Scientists or Developers

Permissions:

  • Launch an Amazon SageMaker Studio IDE assigned to a specific IAM or AWS SSO user.
  • Create Amazon SageMaker resources with necessary tags. For this post, we use the team tag.
  • Update, delete, and run resources created with a specific tag.
  • Sign in to the Amazon SageMaker console if an IAM user.
  • Read and describe Amazon SageMaker resources.

Solution overview

We use the preceding requirements to model roles and permissions required to establish controls. The following flow diagram outlines the different configuration steps:

Applying your policy to the admin user

You should apply the following policy to the admin user who creates Studio user profiles. This policy requires the admin to include the studiouserid tag. You could use a different name for the tag if need be. The Studio console doesn’t allow you to add tags when creating user profiles, so we use the AWS Command Line Interface (AWS CLI).

For admin users managed in IAM, attach the following policy to the user. For admin users managed in an external IdP, add the following policy to the role that the user assumes upon federation. The following policy enforces the studiouserid tag to be present when the sagemaker:CreateUserProfile action is invoked.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateSageMakerStudioUserProfilePolicy",
            "Effect": "Allow",
            "Action": "sagemaker:CreateUserProfile",
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringEquals": {
                    "aws:TagKeys": [
                        "studiouserid"
                    ]
                }
            }
        }
    ]
}

AWS SSO doesn’t require this policy; it performs the identity check.

Assigning the policy to Studio users

The following policy limits Studio access to the respective users by requiring the resource tag to match the user name for the sagemaker:CreatePresignedDomainUrl action. When a user tries to access the Amazon SageMaker Studio launch URL, this check is performed.

For IAM users, attach the following policy to the user. Use the user name for the studiouserid tag value.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AmazonSageMakerPresignedUrlPolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreatePresignedDomainUrl"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "sagemaker:ResourceTag/studiouserid": "${aws:username}" 
                }
            }
        }
    ]
}

For AWS account federation, attach the following policy to the role that the user assumes after federation:

{
   "Version": "2012-10-17",
   "Statement": [
       {
           "Sid": "AmazonSageMakerPresignedUrlPolicy",
           "Effect": "Allow",
           "Action": [
                "sagemaker:CreatePresignedDomainUrl"
           ],
           "Resource": "*",
           "Condition": {
                  "StringEquals": {
                      "sagemaker:ResourceTag/studiouserid": "${aws:PrincipalTag/studiouserid}"
                 }
            }
      }
  ]
}

Add the following statement to the trust relationship of that role. This statement defines the allowed transitive tag.

"Statement": [
     {
        --Existing statements
      },
      {
      "Sid": "IdentifyTransitiveTags",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<account id>:saml-provider/<identity provider>"
      },
      "Action": "sts:TagSession",
      "Condition": {
        "ForAllValues:StringEquals": {
          "sts:TransitiveTagKeys": [
            "studiouserid"
          ]
        }
      }
  ]

For users managed in AWS SSO, this policy is not required. AWS SSO performs the identity check.

Creating roles for the teams

To create roles for your teams, you must first create the policies. For simplicity, we use the same policies for both teams. In most cases, you just need one set of policies for all teams, but you have the flexibility to create different policies for different teams. In the second step, you create a role for each team, attach the policies, and tag the roles with appropriate team tags.

Creating the policies

Create the following policies. For this post, we split them into three policies for more readability, but you can create them according to your needs.

Policy 1: Amazon SageMaker read-only access

The following policy gives privileges to List and Describe Amazon SageMaker resources. You can customize this policy according to your needs.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AmazonSageMakerDescribeReadyOnlyPolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:Describe*",
                "sagemaker:GetSearchSuggestions"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerListOnlyPolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:List*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerUIandMetricsOnlyPolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:*App",
                "sagemaker:Search",
                "sagemaker:RenderUiTemplate",
                "sagemaker:BatchGetMetrics"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerEC2ReadOnlyPolicy",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeRouteTables",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeSubnets",
                "ec2:DescribeVpcEndpoints",
                "ec2:DescribeVpcs"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerIAMReadOnlyPolicy",
            "Effect": "Allow",
            "Action": [
                "iam:ListRoles"
            ],
            "Resource": "*"
        }
    ]
}

Policy 2: Amazon SageMaker access for supporting services

The following policy gives privileges to create, read, update, and delete access to Amazon Simple Storage Service (Amazon S3), Amazon Elastic Container Registry (Amazon ECR), and Amazon CloudWatch, and read access to AWS Key Management Service (AWS KMS). You can customize this policy according to your needs.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AmazonSageMakerCRUDAccessS3Policy",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:CreateBucket",
                "s3:ListBucket",
                "s3:PutBucketCORS",
                "s3:ListAllMyBuckets",
                "s3:GetBucketCORS",
                "s3:GetBucketLocation"
            ],
            "Resource": "<S3 BucketName>"
        },
        {
            "Sid": "AmazonSageMakerReadOnlyAccessKMSPolicy",
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ListAliases"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerCRUDAccessECRPolicy",
            "Effect": "Allow",
            "Action": [
                "ecr:Set*",
                "ecr:CompleteLayerUpload",
                "ecr:Batch*",
                "ecr:Upload*",
                "ecr:InitiateLayerUpload",
                "ecr:Put*",
                "ecr:Describe*",
                "ecr:CreateRepository",
                "ecr:Get*",
                "ecr:StartImageScan"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerCRUDAccessCloudWatchPolicy",
            "Effect": "Allow",
            "Action": [
                "cloudwatch:Put*",
                "cloudwatch:Get*",
                "cloudwatch:List*",
                "cloudwatch:DescribeAlarms",
                "logs:Put*",
                "logs:Get*",
                "logs:List*",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:ListLogDeliveries",
                "logs:Describe*",
                "logs:CreateLogDelivery",
                "logs:PutResourcePolicy",
                "logs:UpdateLogDelivery"
            ],
            "Resource": "*"
        }
    ]
}

Policy 3: Amazon SageMaker Studio developer access

The following policy gives privileges to create, update, and delete Amazon SageMaker Studio resources.
It also enforces the team tag requirement during creation. In addition, it enforces start, stop, update, and delete actions on resources restricted only to the respective team members.

The team tag validation condition in the following code makes sure that the team tag value matches the principal’s team. See the Condition elements in the AmazonSageMakerCreate and AmazonSageMakerUpdateDeleteExecutePolicy statements for specifics.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AmazonSageMakerStudioCreateApp",
            "Effect": "Allow",
            "Action": [
                "sagemaker:CreateApp"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerStudioIAMPassRole",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerInvokeEndPointRole",
            "Effect": "Allow",
            "Action": [
                "sagemaker:InvokeEndpoint"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerAddTags",
            "Effect": "Allow",
            "Action": [
                "sagemaker:AddTags"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AmazonSageMakerCreate",
            "Effect": "Allow",
            "Action": [
                "sagemaker:Create*"
            ],
            "Resource": "*",
            "Condition": { "ForAnyValue:StringEquals": { "aws:TagKeys": [ "team" ] }, "StringEqualsIfExists": { "aws:RequestTag/team": "${aws:PrincipalTag/team}" } }
        },
        {
            "Sid": "AmazonSageMakerUpdateDeleteExecutePolicy",
            "Effect": "Allow",
            "Action": [
                "sagemaker:Delete*",
                "sagemaker:Stop*",
                "sagemaker:Update*",
                "sagemaker:Start*",
                "sagemaker:DisassociateTrialComponent",
                "sagemaker:AssociateTrialComponent",
                "sagemaker:BatchPutMetrics"
            ],
            "Resource": "*",
            "Condition": { "StringEquals": { "aws:PrincipalTag/team": "${sagemaker:ResourceTag/team}" } }
        }
    ]
}

Creating and configuring the roles

You can now create a role for each team with these policies. Tag the roles on the IAM console or with the AWS CLI. The steps are the same for all three authentication types. For example, tag the role for Team A with the tag key = team and value = <Team Name>.
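
As an illustration, with Boto3 the tagging could look like the following sketch (the role name and team value are placeholders).

import boto3

iam = boto3.client("iam")

# Tag each team role with the team it belongs to
iam.tag_role(
    RoleName="SageMakerStudioDeveloperTeamARole",  # placeholder role name
    Tags=[{"Key": "team", "Value": "TeamA"}],
)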

Creating the Amazon SageMaker Studio user profile

In this step, we add the studiouserid tag when creating Studio user profiles. The steps are slightly different for each authentication type.

IAM users

For IAM users, you create Studio user profiles for each user by including the role that was created for the team the user belongs to. The following code is a sample CLI command. As of this writing, including a tag when creating a user profile is available only through AWS CLI.

aws sagemaker create-user-profile --domain-id <domain id> --user-profile-name <unique profile name> --tags Key=studiouserid,Value=<aws user name> --user-settings ExecutionRole=arn:aws:iam::<account id>:role/<Team Role Name>

AWS account federation

For AWS account federation, you create a user attribute (studiouserid) in an external IdP with a unique value for each user. The following example shows how to add the studiouserid attribute in Okta: on Okta’s Sign On methods screen, configure the following SAML 2.0 attributes.

Attribute 1:
Name: https://aws.amazon.com/SAML/Attributes/PrincipalTag:studiouserid 
Value: user.studiouserid

Attribute 2:
Name: https://aws.amazon.com/SAML/Attributes/TransitiveTagKeys
Value: {"studiouserid"}

The following screenshot shows the attributes on the Okta console.

Next, create the user profile using the following command. Use the user attribute value in the preceding step for the studiouserid tag value.

aws sagemaker create-user-profile --domain-id <domain id> --user-profile-name <unique profile name> --tags Key=studiouserid,Value=<user attribute value> --user-settings ExecutionRole=arn:aws:iam::<account id>:role/<Team Role Name>

AWS SSO

For instructions on assigning users in AWS SSO, see Onboarding Amazon SageMaker Studio with AWS SSO and Okta Universal Directory.

Update the Studio user profile to include the appropriate execution role that was created for the team that the user belongs to. See the following CLI command:

aws sagemaker update-user-profile --domain-id <domain id> --user-profile-name <user profile name> --user-settings ExecutionRole=arn:aws:iam::<account id>:role/<Team Role Name> --region us-west-2

Validating that only assigned Studio users can access their profiles

When a user tries to access a Studio profile whose studiouserid tag value doesn’t match their user name, an AccessDeniedException error occurs. You can test this by copying the Launch Studio link on the Amazon SageMaker console and accessing it while logged in as a different user. The following screenshot shows the error message.
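
You can reproduce the same check programmatically with the CreatePresignedDomainUrl API. The following sketch (with placeholder domain and profile names) shows the call that is denied when the caller’s studiouserid tag doesn’t match the profile’s tag.

import boto3
from botocore.exceptions import ClientError

sagemaker = boto3.client("sagemaker")

try:
    url = sagemaker.create_presigned_domain_url(
        DomainId="d-xxxxxxxxxxxx",          # placeholder domain ID
        UserProfileName="user-a1-profile",  # profile tagged with studiouserid=userA1
    )["AuthorizedUrl"]
    print("Launch URL:", url)
except ClientError as error:
    # A caller whose studiouserid tag doesn't match the profile's tag gets AccessDenied
    print(error.response["Error"]["Code"], error.response["Error"]["Message"])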

Validating that only respective team members can access certain artifacts

In this step, we show how to configure Studio so that members of a given team can’t access artifacts that another team creates.

In our use case, a Team A user creates an experiment and tags that experiment with the team tag. This limits access to this experiment to Team A users only. See the following code:

import sys
!{sys.executable} -m pip install sagemaker
!{sys.executable} -m pip install sagemaker-experiments

import time
import sagemaker
from smexperiments.experiment import Experiment

demo_experiment = Experiment.create(experiment_name = "USERA1TEAMAEXPERIMENT1",
                                    description = "UserA1 experiment",
                                    tags = [{'Key': 'team', 'Value': 'TeamA'}])

If a user who is not in Team A tries to delete the experiment, Studio denies the delete action. See the following code:

#command run from TeamB User Studio Instance
import time
from smexperiments.experiment import Experiment
experiment_to_cleanup = Experiment.load(experiment_name="USERA1TEAMAEXPERIMENT1")
experiment_to_cleanup.delete()

[Client Error]
An error occurred (AccessDeniedException) when calling the DeleteExperiment operation: User: arn:aws:sts::<AWS Account ID>:assumed-role/SageMakerStudioDeveloperTeamBRole/SageMaker is not authorized to perform: sagemaker:DeleteExperiment on resource: arn:aws:sagemaker:us-east-1:<AWS Account ID>:experiment/usera1teamaexperiment1

Conclusion

In this post, we demonstrated how to isolate Amazon SageMaker Studio access using the ABAC technique. We showcased two use cases: restricting access to a Studio profile to only the assigned user (using the studiouserid tag), and restricting access to Studio artifacts, such as experiments, to members of the team that created them (using the team tag). You can further customize policies by applying more tags to create more complex hierarchical controls.

Try out this solution for isolating resources by teams or groups in Amazon SageMaker Studio. For more information about using ABAC as an authorization strategy, see What is ABAC for AWS?


About the Authors

Vikrant Kahlir is a Senior Solutions Architect in the Solutions Architecture team. He works with the product and engineering teams of AWS strategic customers to help them with technology solutions using AWS services for Managed Databases, AI/ML, HPC, Autonomous Computing, and IoT.

 

 

 

Rakesh Ramadas is an ISV Solution Architect at Amazon Web Services. His focus areas include AI/ML and Big Data.

 

 

 

 

Rama Thamman is a Software Development Manager with the AI Platforms team, leading the ML Migrations team.


British Newscaster speaking style now available in Amazon Polly

Amazon Polly turns text into lifelike speech, allowing you to create applications that talk and build entirely new categories of speech-enabled products. We’re thrilled to announce the launch of a brand-new British Newscaster speaking style voice: Amy. The speaking style mimics a formal and authoritative British newsreader. This Newscaster voice is the result of our latest achievements in Neural Text-to-Speech (NTTS) technology, making it possible to release new voices with only a few hours of recordings.

Amy’s British English Newscaster voice offers an alternative to the existing Newscaster speaking styles in US English (Matthew and Joanna, launched in July 2019) and US Spanish (Lupe, launched in April 2020). The style is suitable for a multitude of sectors, such as publishing and media. The high quality of the voice and its broadcaster-like style contribute to a more pleasant listening experience to relay news content.

Don’t just take our word for it! Our customer SpeechKit is a text-to-audio service that utilizes Amazon Polly as a core component of their toolkit. Here’s what their co-founder and COO, James MacLeod, has to say about this exciting new style: “News publishers use SpeechKit to publish their articles and newsletters in audio. The Amy Newscaster style is another great improvement from the Polly team, the pitch and clarity of intonation of this style fits well with this type of short-to-mid form news publishing. It provides listeners with a direct and informative style they’re used to hearing from human-read audio articles. As these voices advance, and new listening habits develop, publishers continue to observe improvements in audio engagement. News publishers can now start using the Amy Newscaster style through SpeechKit to make their articles available in audio, at scale, and track audio engagement.”

You can listen to the following samples to hear how this brand-new British Newscaster speaking style sounds:

Amy: 

The following samples are the other Newscaster speaking styles in US English and US Spanish: 

Matthew:

Joanna:

Lupe: 

You can use Amy’s British Newscaster speaking style via the Amazon Polly console, the AWS Command Line Interface (AWS CLI), or AWS SDK. The feature is available in all AWS Regions supporting NTTS. For more information, see What Is Amazon Polly? For the full list of available voices, see Voices in Amazon Polly. Or log in to the Amazon Polly console to try it out for yourself! Additionally, Amy Newscaster and other selected Polly voices are now available to Alexa skill developers.
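
As a short sketch with the AWS SDK for Python (Boto3), the Newscaster style is selected through the SSML amazon:domain tag together with the neural engine.

import boto3

polly = boto3.client("polly")

ssml = (
    "<speak>"
    "<amazon:domain name=\"news\">"
    "In tonight's headlines, Amazon Polly adds a British Newscaster style for Amy."
    "</amazon:domain>"
    "</speak>"
)

response = polly.synthesize_speech(
    Engine="neural",        # the Newscaster style requires the neural engine
    VoiceId="Amy",
    TextType="ssml",
    Text=ssml,
    OutputFormat="mp3",
)

# Save the returned audio stream to a local file
with open("amy_newscaster.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())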

 


About the Author

Goeric Huybrechts is a Software Development Engineer in the Amazon Text-to-Speech Research team. At work, he is passionate about everything that touches AI. Outside of work, he loves sports, football in particular, and loves to travel.


Learn from the winner of the AWS DeepComposer Chartbusters challenge The Sounds of Science

AWS is excited to announce the winner of the AWS DeepComposer Chartbusters The Sounds of Science challenge, Sungin Lee. AWS DeepComposer gives developers a creative way to get started with machine learning (ML). In June, we launched Chartbusters, a monthly global competition during which developers use AWS DeepComposer to create original compositions and compete to showcase their ML skills. The third challenge, The Sounds of Science, challenged developers to create background music for a video clip.

Sungin is a Junior Solutions Architect for MegazoneCloud, one of the largest AWS partners in South Korea. Sungin studied linguistics and anthropology in university, but made a career change to cloud engineering. When Sungin first started learning about ML, he never knew he would create the winning composition for the Chartbusters challenge.

We interviewed Sungin to learn about his experience competing in the third Chartbusters challenge, which ran from September 2–23, 2020, and asked him to tell us more about how he created his winning composition.


Sungin Lee at his work station.

Getting started with machine learning

Sungin became interested in ML and Generative Adversarial Networks (GANs) through the vocational education he received as he transitioned to cloud engineering.

“As part of the curriculum, there was a team project in which my team tried to make a model that generates an image according to the given sentence through GANs. Unfortunately, we failed at training the model due to the complexity of it but [the experience] deepened my interest in GANs.”

After receiving his vocational education, Sungin chose to pursue a career in cloud engineering and joined MegazoneCloud. Six months into his career, Sungin’s team leader at work encouraged him to try AWS DeepComposer.

“When the challenge first launched, my team leader told me about the challenge and encouraged me to participate in it. I was already interested in GANs and music, and as a new hire, I wanted to show my machine learning skills.” 

Building in AWS DeepComposer

In The Sounds of Science, developers composed background music for a video clip using the Autoregressive Convolutional Neural Network (AR-CNN) algorithm and edited notes with the newly launched Edit melody feature to better match the music with the provided video.

“I began by selecting the initial melody. When I first saw the video, I thought that one of the sample melodies, ‘Ode to Joy,’ went quite well with the atmosphere of the video and decided to use it. But I wanted the melody to sound more soothing than the original so I slightly lowered the pitch. Then I started enhancing the melody with AR-CNN.”


Sungin composing his melody.

Sungin worked on his competition for a day before generating his winning melody.

“I generated multiple compositions with AR-CNN until I liked the melody. Then I started adding more instruments. I experimented with all the sample models from MuseGAN and decided that rock suits the melody best. I found the ‘edit melody’ feature very helpful. In the process of enhancing the melody with AR-CNN, some off-key notes would appear and disrupt the harmony. But with the ‘edit melody’ feature, I could just remove or modify the wrong note and put the music back in key!”

The Edit melody feature on the AWS DeepComposer console.

“The biggest obstacle was my own doubt. I had a hard time being satisfied with the output, and even thought of giving up on the competition and never submitting any compositions. But then I thought, why give up? So I submitted my best composition by far and won the challenge.”

You can listen to Sungin’s winning composition, “The Joy,” on the AWS DeepComposer SoundCloud page.

Conclusion

Sungin believes that the AWS DeepComposer Chartbusters challenge gave him the confidence in his career transition to continue pursuing ML.

“It has been only a year since I started studying machine learning properly. As a non-Computer Science major without any basic computer knowledge, it was hard to successfully achieve my goals with machine learning. For example, my team project during the vocational education ended up unsuccessful, and the AWS DeepRacer model that I made could not finish the track. Then, when I was losing confidence in myself, I won first place in the AWS DeepComposer Chartbusters challenge! This victory reminded me that I could actually win something with machine learning and motivated me to keep studying.”

Overall, Sungin completed the challenge with a feeling of accomplishment and a desire to learn more.

“This challenge gave me self-confidence. I will keep moving forward on my machine learning path and keep track of new GAN techniques.”

Congratulations to Sungin for his well-deserved win!

We hope Sungin’s story has inspired you to learn more about ML and get started with AWS DeepComposer. Check out the next AWS DeepComposer Chartbusters challenge, and start composing today.

 


About the Author

Paloma Pineda is a Product Marketing Manager for AWS Artificial Intelligence Devices. She is passionate about the intersection of technology, art, and human centered design. Out of the office, Paloma enjoys photography, watching foreign films, and cooking French cuisine.
