AWS announces the global expansion of AWS CCI solutions

We’re excited to announce the global availability of AWS Contact Center Intelligence (AWS CCI) solutions powered by AWS AI Services and made available through the AWS Partner Network. AWS CCI solutions enable you to leverage AWS machine learning (ML) capabilities with your current contact center provider to gain greater efficiencies and deliver increasingly tailored customer experiences, with no ML expertise required.

AWS CCI solutions use a combination of AWS AI-powered services for text-to-speech, translation, intelligent search, conversational AI, transcription, and language comprehension capabilities. We’re delighted to announce the addition of AWS Technology Partners: Salesforce, Avaya, Talkdesk, 8×8, Clarabridge, Clevy, XappAI, and Voiceworx. We are also adding new AWS Consulting Partners: Inawisdom, Cation Consulting, HCL Technologies, Wipro, First Derivatives, Servion, and Lucy in the Cloud/Micropole for customers who require a custom solution or seek additional assistance with AWS CCI. These new partners provide customers across the globe more opportunities to benefit from AWS ML-powered contact center intelligence solutions to enhance self-service, analyze calls in real time to assist agents, and learn from all contact center interactions with post-call analytics.

Around the world, the volume of interactions in contact centers continues to increase. Companies see multiple opportunities to leverage AI technology to improve the customer experience. This can include 24/7 self-serve virtual agents that can provide timely and accurate answers to customer queries, call analytics and agent assist to improve agent productivity, or call analytics to generate further improvements in their operations. However, piecing together the various technologies to build an ML-driven intelligent contact center unique to the goals and needs of each business can be a significant undertaking. You want the benefits that intelligent contact center technologies bring, but the resources, time and cost to implement are often too high to overcome. AWS CCI provides a simple and fast route to deploy AWS ML solutions no matter which contact center provider you use.

AWS CCI customer success stories

Multiple customers already benefit from an improved customer experience and reduced operational costs as a result of using AWS CCI solutions through AWS Partners. Here are some examples of AWS CCI customer stories.

Maximus is a leading pure-play provider in the administration of government health and human services programs, and is the largest provider of contact center services to the government. Tom Romeo, the General Manager at Maximus Federal, says, “At Maximus, we are constantly looking for new ways to innovate and improve the Citizen Journey and contact center experience. With AWS Partner SuccessKPI, we were able to add AWS CCI into our Genesys Cloud environment in a matter of hours and deliver a 360-degree view of the citizen experience. This program allowed us to deliver increased capacity, automated quality review, and agent compliance and performance improvements for government agencies.”

Magellan Health is a large managed health care company focused on special populations, complete pharmacy benefits, and other specialty areas. Brian Lichtle, the Senior Director of Software Engineering at Magellan Rx, says,
“We chose Amazon Kendra, a service within AWS CCI, to build a secure and scalable agent assist application. This helped call center agents, and the customers they serve, quickly uncover the information they need. Since implementing CCI and Amazon Kendra, early results show an average reduction in call times of about 9–15 seconds, which saves more than 4.4k hours on over 2.2 million calls per calendar year.”

Cation Consulting is an AWS consulting partner focused on delivering robust, conversational AI experiences to customers. Alan Kiernan, the co-founder and CTO at Cation Consulting, says, “At Cation Consulting, we provide customers with conversational AI and self-service experiences that allow them to significantly reduce customer support costs while improving the customer experience. AWS Contact Center Intelligence enables us to move quickly and scale seamlessly with customers such as Ryanair, the largest airline in Europe. The Ryanair chatbot has handled millions of customer enquiries per year as a trusted extension of Ryanair’s customer care team. We are excited to leverage Amazon Lex’s recent expansion into European languages and design virtual agents who can resolve customer issues quickly and improve customer service ratings.”

New AWS CCI language support and partner additions

In addition to our new partners, AWS CCI continues to expand its global capabilities with new language support. AWS CCI has three pre-configured solutions available through participating APN partners, focused on the contact center workflow: Self-Service, Live Call Analytics and Agent Assist, and Post-Call Analytics. The Self-Service solution uses ML-driven chatbots and interactive voice response (IVR) to address and deflect the most common tasks and queries so that the contact center workforce can focus on resolving interactions that need a human touch. It combines the conversational interface of Amazon Lex with the text-to-speech voices of Amazon Polly to create a dynamic virtual agent in multiple languages, such as French, German, Italian, and Spanish. Adding Amazon Kendra can boost the ability of these virtual agents to answer questions by finding the best answers from internal knowledge bases. The Live Call Analytics and Agent Assist and Post-Call Analytics solutions use Amazon Transcribe to perform real-time or post-call speech transcription, together with Amazon Comprehend to automatically analyze the interaction, detect call sentiment, and identify key words and phrases in the conversation using natural language processing (NLP) to increase agent productivity. These key words can then be used by the intelligent search capabilities of Amazon Kendra to help agents find timely and relevant information to resolve live call issues more quickly. Transcribing live calls is now available in German, Italian, Japanese, Korean, and Portuguese. Amazon Translate can also be used to translate calls into an agent’s preferred language and supports a total of 71 languages and variants.
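To illustrate how these services fit together, the following minimal sketch (Python with boto3) transcribes a recorded call, detects its sentiment with Amazon Comprehend, and translates the transcript with Amazon Translate. The bucket, file name, and sample transcript text are placeholders and the polling is simplified; the AWS CCI partner solutions wire these same APIs together for you.

    import time
    import boto3

    transcribe = boto3.client("transcribe")
    comprehend = boto3.client("comprehend")
    translate = boto3.client("translate")

    # Hypothetical S3 location of a recorded call
    job_name = "example-call-001"
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": "s3://my-call-recordings/example-call-001.wav"},
        MediaFormat="wav",
        LanguageCode="en-US",
    )

    # Poll until the transcription job finishes (simplified; production code would use
    # EventBridge notifications or exponential backoff)
    while True:
        job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
        if job["TranscriptionJob"]["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(15)

    # Suppose transcript_text holds the text fetched from the job's output URI
    transcript_text = "I have been waiting twenty minutes and my issue is still not resolved."

    # Post-call analytics: detect the caller's sentiment
    sentiment = comprehend.detect_sentiment(Text=transcript_text, LanguageCode="en")
    print(sentiment["Sentiment"])  # e.g., NEGATIVE

    # Translate the transcript into an agent's preferred language
    translated = translate.translate_text(
        Text=transcript_text, SourceLanguageCode="en", TargetLanguageCode="de"
    )
    print(translated["TranslatedText"])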

“At Amazon, we want to meet the customer wherever they are in their contact center journey. With AWS CCI, we wanted to make it easy for customers who use different contact center providers to add AI and achieve new levels of operational efficiency,” says Vasi Philomin, GM of AWS Language Services, AI. “Having a global partner network is critical to enabling our customers to realize the benefits of cloud-based machine learning services and removing the need to hire specialized developers to build and maintain these systems.”

Talkdesk is a cloud contact center for innovative enterprises, combining enterprise performance with consumer simplicity resulting in higher customer satisfaction, productivity and cost savings. Tiago Paiva, chief executive officer at Talkdesk, shares, “Combining Talkdesk cloud innovations with powerful AI and machine learning services from AWS extends the capabilities and choices available to Talkdesk customers. We are excited to add new, out-of-the-box options through AWS Contact Center Intelligence solutions to help the Talkdesk user base rise above their market peers through superior customer service.”

8×8 is a leading contact center provider. Manu Mukerji, the Vice President of Engineering at 8×8, Inc., says, “By partnering with AWS, we can deliver to businesses and organizations superior bi-directional integration with AWS CCI, providing a best-in-class experience for customers. The 8×8 integration with AWS CCI makes it easy for customers to leverage AI capabilities even if they have no AI experience. The 8×8 Virtual Agent is the only fully managed and customizable solution in the market that works seamlessly for both unified communications and contact center use cases, enhancing contact center efficiency for reduced wait times and faster time to resolution.”

Pat Higbie, Co-founder and CEO of XAPP AI, an AWS Technology Partner, says, “Amazon Lex, Amazon Kendra and Amazon Polly provide a powerful combination of AI services that enables contact centers to transcend the limitations of traditional chatbots and IVR to transform their operations with truly conversational self-service that improves the customer experience and delivers dramatic ROI. And, AWS CCI solutions can be integrated with all contact center brands to bring the value of AWS AI services to any enterprise quickly.”

We are excited to have all these new partners join AWS CCI.

Getting started

There are multiple ways to get started with AWS CCI. To find a participating partner, see the AWS CCI partner page for more information and contact details.

To learn more, please join us for any or all of the following sessions hosted by AWS and our AWS CCI partners.

re:Invent sessions

Learn how you can leverage AWS CCI solutions to improve the customer experience and reduce cost with AI. Explore how AWS CCI solutions can be built easily through an expanding network of partners to provide self-service interactions, live and post-call analytics, and agent assist on existing contact center systems. AWS Partner SuccessKPI shares how it uses CCI solutions to improve the customer experience and tackle challenging business problems such as reducing call volume, improving agent effectiveness, and automating quality management in enterprise contact centers for customers like Maximus.

Numerous stakeholders including content designers, developers, and business owners collaborate to create a bot. In this session, hear how Dropbox used the Amazon Lex interface to build a chatbot as a support offering. The session covers how the Amazon Lex console allows you to easily create flows and manage them, and it details the decoupling that should exist between the bot author and developer for an optimal collaborative model. Finally, this session provides insights into conversational interface (CI) and conversational design (CD), language localization, and deployment practices.

 Answering customer questions is essential to the customer support experience. Powered by ML, Amazon Kendra is an enterprise search service that can add Q&A capabilities to your virtual agents or boost call center agent productivity with live call agent assistance. In this session, you hear how Magellan RX Management augmented the call center experience using Amazon Kendra to help agents find accurate information faster.

In this session, learn how to train custom language models in Amazon Transcribe that supercharge speech recognition accuracy. Octopus Energy, a UK-based utility company, shares how it leverages domain-specific data to train a custom language model that is fine-tuned for its business needs and specific use case.

Partner sessions

  • How to boost the return on your contact center investments with AI
    January 26 at 10:00 am PST – REGISTER HERE
    Presented by Acqueon and AWS

With AI technologies maturing, enterprises are embracing them to delight customers and improve the operational productivity of their contact centers. In this educational webinar, AI expert Chris Featherstone, Global Business Development Leader for AWS CCI and industry veteran Nicolas de Kouchkovsky, CMO at Acqueon, discuss how to integrate AI into your contact center software stack. They will provide an update on industry adoption and share the art of the possible without having to overhaul your technology investments.

  • Gain Control of your CX with a 360 CCI Power View: A step by step guide
    January 27, 2021 at 1PM EST/10AM PST – REGISTER HERE
    Presented by SuccessKPI and AWS

Managing customer experience requires tackling a complex set of metrics across agents, queues, geographies, customer types, and channels. Mix in the data from speech analytics, chatbots, and post-call surveys, and the picture gets blurry very quickly. In this informative webinar, we explore the factors that make customer experience management such a quagmire and provide a series of recommendations and steps to help put you in control of your customer experience.

  • Add Intelligence to your existing contact center with AWS Contact Center Intelligence and Talkdesk
    February 24, 2021 at 9am BRT, 9am MXT, and 9am PST – REGISTER HERE
    Presented by Talkdesk and AWS at AWS Innovate – AI/ML Edition

Learn how your organization can leverage AWS Contact Center Intelligence (CCI) solutions and AWS Partner, Talkdesk, to improve customer experience and reduce cost with AI. We will explore how AWS CCI solutions can be built easily to provide self-service interactions, live and post-call analytics and agent assist on existing contact center systems. Talkdesk will also share how they improve customer experience and tackle challenging business problems such as improving agent effectiveness, and automating quality management in enterprise contact centers.


About the Author

Eron Kelly is the worldwide leader of Product Marketing for a broad portfolio of AWS services that cover Compute, Storage, Networking, Contact Centers, End User Computing and Business Applications. In this capacity, his team leads all aspects of product marketing including messaging, positioning, launches, web strategy and execution, service adoption, and field enablement. Prior to AWS, he led sales and marketing teams at Microsoft and Procter & Gamble, and was a Captain in the Air Force. Outside of work, Mr. Kelly is very active raising a family of four kids. He is a member of the Board of Trustees at Eastside Catholic School in Sammamish, WA, and spent the last 10 years coaching youth lacrosse.

Esther Lee is a Product Manager for AWS Language AI Services. She is passionate about the intersection of technology and education. Out of the office, Esther enjoys long walks along the beach, dinners with friends and friendly rounds of Mahjong.


Hosting a private PyPI server for Amazon SageMaker Studio notebooks in a VPC

Amazon SageMaker Studio notebooks provide a full-featured integrated development environment (IDE) for flexible machine learning (ML) experimentation and development. Security measures are needed to keep this versatile and collaborative environment protected. In some cases, such as to protect sensitive data or meet regulatory requirements, security protocols require that public internet access be disabled in the development environment.

Typically, developers have access to the public internet and can install any new libraries they want to import. You can install Python packages from the public Python Package Index (PyPI), a Python software repository, using standard tools such as pip. You can find hundreds of thousands of packages, including common packages such as NumPy, Pandas, Matplotlib, Pytest, Requests, Django, and BeautifulSoup.

In a development environment with internet access disabled, you can instead mirror packages and host your own PyPI server hosted in your own Amazon Virtual Private Cloud (Amazon VPC). A VPC is a logically isolated virtual network into which you can launch resources, such as Amazon Elastic Compute Cloud (Amazon EC2) instances and SageMaker Studio domains. You have fine-grained access control over its network connectivity. You can specify an IP address range for the VPC and associate security groups to control its inbound and outbound traffic. You can also add subnets that use a subset of IP addresses within the VPC, and choose whether each subnet is open to the public internet or is private.

When you use a local PyPI server with this architecture and install Python libraries from your SageMaker Studio notebook, you connect to your private server instead of a public package index, and all traffic remains within a single secured VPC and private subnet.

SageMaker Studio recently launched VPC integration to meet these security needs. You can now launch Studio notebooks within a private VPC, disabling internet access. To install Python packages within this secure environment, you can configure an EC2 instance in your VPC that acts as a PyPI server for your notebooks. This enables you to maintain productivity and ease of package installation while working within a private environment that isn’t accessible from the public internet.

Solution overview

This solution creates a private PyPI server on an EC2 instance, and connects it to a SageMaker Studio notebook through network configuration including a VPC, private subnet, security group, and elastic network interface. The following diagram illustrates this architecture.

You complete the following steps to implement this solution:

  1. Launch an EC2 instance within a VPC, subnet, and security group.
  2. Configure the instance to function as a private PyPI server.
  3. Create a VPC endpoint and add security group rules.
  4. Create a VPC-only SageMaker Studio domain, user, and notebook with the necessary permissions and networking.
  5. Install a Python package from the PyPI server onto the SageMaker Studio notebook.

Prerequisites

This is an intermediate-level solution with the following prerequisites:

  • An AWS account
  • Sufficient level of access to create Amazon SageMaker, Amazon EC2, and Amazon VPC resources
  • Familiarity with creating and modifying AWS resources on the AWS Management Console
  • Basic command-line experience, such as SSHing onto an EC2 instance, installing packages, and editing files using vim or another command-line text editor

Launching an EC2 instance

For this post, we launch a new EC2 instance in the us-east-2 Region. For the full list of available Regions supporting SageMaker Studio, see Supported Regions and Quotas.

  1. On the Amazon EC2 console, launch a new instance in a Region supporting SageMaker Studio.
  2. Choose an Amazon Linux 2 AMI.
  3. Choose a t2.medium instance (or larger t2, if preferred).
  4. On the Step 3: Configure Instance Details page, for Network, choose your VPC.
  5. For Subnet, choose your subnet.

You can use the default VPC and subnet, use other existing resources, or create new ones. Make sure to note the VPC and subnet you select for later reference.

  6. Leave all other settings as-is.
  7. Use default storage and tag settings.
  8. On the Step 6: Configure Security Group page, for Assign a security group, select Create a new security group.
  9. For Security group name, enter studio-SG.
  10. For Type, choose SSH on port range 22.
  11. For Source, choose My IP.

This allows you to SSH onto the instance from your current internet network.

  12. Create a new key pair, studio-host.
  13. Launch the instance.

For more information about launching an instance, see Tutorial: Getting started with Amazon EC2 Linux instances.

Configuring the instance as a PyPI server

To configure your instance, complete the following steps:

  1. Open a terminal window and navigate to the directory containing your .pem file.
  2. Change the key permissions and SSH onto your instance, substituting in the public IP address and Region:
    chmod 400 studio-host.pem
    ssh -i "studio-host.pem" ec2-user@ec2-x-x-x-x.{region}.compute.amazonaws.com

If needed, you can find the SSH command by selecting your instance on the console, choosing Connect, and navigating to the SSH Client tab.

  3. Install pip, which you use to install Python packages, and bandersnatch, which you use to mirror packages from the public PyPI server onto your instance. For this post, we use the package AWS Data Wrangler, an AWS Professional Services open-source library that integrates Pandas DataFrames with AWS services:
    sudo yum install python3-pip
    sudo pip3 install multidict==4.7.6
    sudo pip3 install yarl==1.6.0
    sudo pip3 install bandersnatch

You now configure bandersnatch to specify packages and their versions to mirror.

  4. Open a config file:
    sudo vim /etc/bandersnatch.conf

  5. Enter the following file contents:
    [mirror]
    directory = /pypi
    master = https://pypi.org
    timeout = 10
    workers = 3
    hash-index = false
    stop-on-error = false
    json = false
    
    [plugins]
    enabled =
        whitelist_project
        allowlist_release
    
    [whitelist]
    packages =
        awswrangler==1.10.0
        pyarrow==2.0.0
        SQLAlchemy==1.3.10
        s3fs==0.4.2
        numpy==1.18.4
        sqlalchemy-redshift==0.7.9
        boto3==1.15.10
        pandas==1.1.0
        psycopg2-binary==2.8.0
        pymysql==0.9.3
        botocore==1.18.10
        fsspec==0.7.4
        s3transfer==0.3.2
        jmespath==0.9.4
        pytz==2019.3
        python-dateutil==2.8.1
        urllib3==1.25.8
        six==1.14.0
    

  6. Mirror the libraries and list the directory contents to verify that the libraries have been copied onto the instance:
    sudo /usr/local/bin/bandersnatch mirror
    ls /pypi/web/simple/

You must configure pip so that when you run pip to install packages, it searches your private PyPI server instead of the public index. The pip config file already exists; you add two more lines to it.

  7. Open the file:
    sudo vim /etc/pip.conf

  8. Ensure your pip config file reads as follows, adding the last two lines:
    [global] 
    disable_pip_version_check = 1 
    format = columns 
    index-url = http://localhost/simple 
    trusted-host = localhost

  9. Install and configure nginx so that the instance can function as a private web server:
    sudo amazon-linux-extras install nginx1
    sudo vim /etc/nginx/nginx.conf

  10. Update the server section of the nginx config file to change the server_name to localhost, listen on the instance’s private IP address, and add the root and index locations. The server section should read as follows:
    server {
            listen x.x.x.x:80;
            listen       80;
            listen       [::]:80;
            server_name localhost;
            root         /usr/share/nginx/html;
    
            # Load configuration files for the default server block.
            include /etc/nginx/default.d/*.conf;
    
            location / { root /pypi/web/; index index.html index.htm index.php; }
    
            error_page 404 /404.html;
                location = /40x.html {
            }
    
            error_page 500 502 503 504 /50x.html;
                location = /50x.html {
            }
        }
    

  11. Start the server and install the package locally to test it out:
    sudo service nginx start
    pip3 install --user awswrangler

Note that the packages are collected from localhost, not from the public package index.

You now have a private PyPI server ready for use.

Creating a VPC endpoint

VPC endpoints allow resources within a VPC to access AWS services. For this solution, you will create an endpoint for the SageMaker API. You can extend this solution by adding more endpoints for other services you need to access from your notebook.

There are two types of VPC endpoints:

  • Interface endpoints – Elastic network interfaces within a subnet that serve as entry points for traffic destined to a supported AWS service, such as SageMaker
  • Gateway endpoints – Only supported for Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB
  1. On the Amazon VPC console, choose Endpoints.
  2. Choose Create Endpoint.
  3. Create the SageMaker API endpoint com.amazonaws.{region}.sagemaker.api.
  4. Make sure you choose the same VPC, subnet, and security group used by your EC2 instance.

When finished, your endpoint is listed as shown in the following screenshot.

For more information about VPC endpoints, including the distinction between interface endpoints and gateway endpoints, see VPC endpoints.
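If you prefer to script this step rather than use the console, a sketch like the following creates the same interface endpoint with boto3. The VPC, subnet, and security group IDs are placeholders for the resources you noted earlier.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")

    # Placeholder IDs: use the VPC, subnet, and security group from your EC2 instance
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-2.sagemaker.api",
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])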

Editing your security group rules

Edit your security group to add an inbound rule allowing all traffic from within the security group. This allows the Studio notebook to communicate with the EC2 instance because they both reside within this security group.

You can search for the security group name on the Amazon EC2 console, and you receive a suggested ID.

After you add the rule, the security group has two inbound rules: one allowing SSH on port 22 from your IP to connect to the EC2 instance, and another allowing all traffic from within the security group.

For more information about security groups, see Security groups for your VPC.
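As a scripted alternative to the console, the self-referencing rule can be added with a call like the following sketch (the security group ID is a placeholder for your studio-SG ID).

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")

    # Allow all traffic between resources that share the studio-SG security group
    sg_id = "sg-0123456789abcdef0"  # placeholder for the studio-SG ID
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[
            {
                "IpProtocol": "-1",  # all protocols and ports
                "UserIdGroupPairs": [{"GroupId": sg_id}],
            }
        ],
    )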

Creating VPC-only SageMaker Studio resources

All SageMaker Studio resources reside within a domain, with a maximum of one domain per Region in an AWS account. A domain contains one or more users, and as a user you can open a Studio notebook. For more information about creating a domain, see CreateDomain.

With the recent release of VPC support for Studio, you can choose from two networking options: public internet only and VPC only. For more information, see Connect SageMaker Studio Notebooks to Resources in a VPC and Securing Amazon SageMaker Studio connectivity using a private VPC. For this post, we create a VPC-only domain.

  1. On the SageMaker Studio console, select Standard setup.

This allows for detailed configuration.

  2. For Authentication method, select AWS Identity and Access Management (IAM).
  3. Under Permissions, choose Create a new role.
  4. Use the default settings.
  5. Choose Create role.

This creates a new SageMaker execution role.

  6. In the Network and Storage section, configure your VPC and subnet to match those of the EC2 instance.
  7. For Network Access for Studio, select VPC Only.
  8. For Security group(s), choose the same security group as used for the EC2 instance.
  9. Choose Submit.

Wait approximately a minute to see the banner notification that SageMaker Studio is ready.

You now create a Studio user within the domain.

  1. Choose Add user.
  2. Give the user a name (for example, studio-user).
  3. Choose the role you just created, AmazonSageMaker-ExecutionRole-<timestamp when the role was created>.
  4. Choose Submit.

This concludes the initial SageMaker Studio resource creation. You now have a Studio domain and user ready for use and can proceed with creating and using a notebook.
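The console steps above can also be scripted. The following sketch creates an equivalent VPC-only domain and user with boto3; the role ARN, VPC, subnet, and security group IDs are placeholders for your own values.

    import boto3

    sm = boto3.client("sagemaker", region_name="us-east-2")

    # Placeholder values: substitute your execution role ARN and networking resources
    execution_role = "arn:aws:iam::111122223333:role/AmazonSageMaker-ExecutionRole-example"

    domain = sm.create_domain(
        DomainName="studio-vpc-only",
        AuthMode="IAM",
        AppNetworkAccessType="VpcOnly",  # disable direct internet access
        VpcId="vpc-0123456789abcdef0",
        SubnetIds=["subnet-0123456789abcdef0"],
        DefaultUserSettings={
            "ExecutionRole": execution_role,
            "SecurityGroups": ["sg-0123456789abcdef0"],
        },
    )

    # The domain ID is the last segment of the domain ARN
    sm.create_user_profile(
        DomainId=domain["DomainArn"].split("/")[-1],
        UserProfileName="studio-user",
    )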

Installing a Python package onto the SageMaker Studio notebook

To start using the PyPI server from the SageMaker Studio notebook, complete the following steps:

  1. On the SageMaker Studio Control Panel, choose Open Studio next to the user name.
  2. Wait for your Studio environment to load.

You can now see the Studio UI. For more information, see the Amazon SageMaker Studio UI Overview.

  3. Use the default SageMaker JumpStart Data Science image and create a new Python 3 notebook.
  4. Wait a few minutes for the image to launch and your notebook to be available.

If you try to run a command before the notebook is available, you get the message: Note: The kernel is still starting. Please execute this cell again after the kernel is started. After your image has launched, you see it listed under Kernel Sessions, along with items for Running Instances and Running Apps. The kernel runs within the app, and the app runs on the instance.

Now you’re ready to configure your notebook. The first step is pip configuration, so that when you install a package using pip, your notebook searches for the package on the private PyPI server instead of through the public internet at pypi.org.

  5. Run the following command in a notebook cell, substituting your EC2 instance’s private IP address:
    !printf '[global]\nindex-url = http://x.x.x.x/simple\ntrusted-host = x.x.x.x' | sudo tee /etc/pip.conf

  6. To check that the file was successfully written, run the following command:
    !head /etc/pip.conf

Now you’re ready to install Python packages from your server.

  7. To see that AWS Data Wrangler isn’t installed by default, try to import it with the command:
    import awswrangler

  8. Install the package and append to your Python path:
    !pip install awswrangler
    import sys
    sys.path.append('/home/sagemaker-user/.local/lib/python3.7/site-packages')

The library was installed from your private server’s index, as you specified in the pip config file, http://{EC2-IP}/simple.

  9. Now that the package has been installed, you can import the package smoothly:
    import awswrangler

Now your notebook is ready for development, including installation of the Python libraries of your choice! Moreover, your PyPI server remains operational and available even when you delete your notebooks or use multiple notebooks. Your PyPI server is separated from your development environment, giving you freedom to manage your notebook resources in the way that best suits your needs.

Cleaning up

To clean up your resources, complete the following steps:

  1. Shut down the running instance in the SageMaker Studio notebook.
  2. Delete any of the user’s remaining apps on the SageMaker Studio console, including the default app.
  3. Delete the SageMaker Studio user.
  4. Delete Studio in the SageMaker Studio Control Panel.
  5. Stop the EC2 instance.
  6. Terminate the EC2 instance.
  7. Delete the IAM role, VPC endpoint, studio-SG security group, and Amazon Elastic File System (EFS) file system.
  8. Delete the rules in the inbound and outbound NFS security groups.
  9. Delete the security groups.

Conclusion

This post demonstrated how to get started with SageMaker Studio in VPC-only mode, while retaining the ability to install Python packages by hosting a private PyPI server. Now you can move forward with your ML development in notebooks residing within this secure environment.

We invite you to explore other exciting applications of SageMaker Studio, including Amazon SageMaker Experiments and scheduling notebooks on SageMaker ephemeral instances.


About the Author

Julia Kroll is a Data & Machine Learning Engineer for AWS Professional Services. She works with enterprise and public sector customers to build data lake, analytics, and machine learning solutions.


Artificial intelligence and machine learning continues at AWS re:Invent

A fresh new year is here, and we wish you all a wonderful 2021. We signed off last year at AWS re:Invent on the artificial intelligence (AI) and machine learning (ML) track with the first-ever machine learning keynote and over 50 AI/ML-focused technical sessions covering industries, use cases, applications, and more. You can access all the content for the AI/ML track on the AWS re:Invent website. But the exciting news is we’re not done yet. We’re kicking off 2021 by bringing you even more content for AI and ML through a set of new sessions that you can stream live starting January 12, 2021. Each session will be offered multiple times, so you can find the time that works best for your location and schedule.

And of course, AWS re:Invent is free. Register now if you have not already and build your schedule from the complete agenda.

Here are a few sample sessions that will stream live starting next week.

Customers using AI/ML solutions from AWS

A day in the life of a machine learning data scientist at JPMorgan Chase (AIM319)

Thursday, January 14 – 8 AM to 8:30 AM PST

Thursday, January 14 – 4 PM to 4:30 PM PST

Friday, January 15 – 12 AM to 12:30 AM PST

Learn how data scientists at JPMorgan Chase use custom ML solutions built on top of Amazon SageMaker to gather intelligent insights, while adhering to secure control policies and regulatory requirements.

Streamlining media content with PBS (AIM318)

Wednesday, January 13 – 3 PM to 3:30 PM PST

Wednesday, January 13 – 11 PM to 11:30 PM PST

Thursday, January 14 – 7 AM to 7:30 AM PST

Enhancing the viewer experience by streamlining operational tasks to review, search, and analyze image and video content is a critical factor for the media and entertainment industry. Learn how PBS uses Amazon Rekognition to build relevant features such as deep content search, brand safety, and automated ad insertion to get more out of their content.

Fraud detection with AWS and Coinbase (AIM320)

Thursday, January 14 – 10:15 AM to 10:45 AM PST

Thursday, January 14 – 6:15 PM to 6:45 PM PST

Friday, January 15 – 2:15 AM to 2:45 AM PST

Among many use cases, ML helps mitigate a universally expensive problem: fraud. Join AWS and Coinbase to learn how to detect fraud faster using sample datasets and architectures, and help save millions of dollars for your organization.

Autonomous vehicle solutions with Lyft (AIM315)

Wednesday, January 13 – 2 PM to 2:30 PM PST

Wednesday, January 13 – 10 PM to 10:30 PM PST

Thursday, January 14 – 6 AM to 6:30 AM PST

In this session, we discuss how computer vision models are labeled and trained at Lyft using Amazon SageMaker Ground Truth for visual perception tasks that are critical for autonomous driving systems.

Modernize your contact center with AWS Contact Center Intelligence (CCI) (AIM214)

Tuesday, January 12 – 1:15 PM to 1:45 PM PST

Tuesday, January 12 – 9:15 PM to 9:45 PM PST

Wednesday, January 13 – 5:15 AM to 5:45 AM PST

Improve the customer experience with reduced costs using AWS Contact Center Intelligence (CCI) solutions. You will hear from SuccessKPI, an AWS partner, on how they use CCI solutions to solve business problems such as improving agent effectiveness and automating quality management in enterprise contact centers.

Machine learning concepts with AWS

Consistent and portable environments with containers (AIM317)

Wednesday, January 13 – 8:45 AM to 9:15 AM PST

Wednesday, January 13 – 4:45 PM to 5:15 PM PST
Thursday, January 14 – 12:45 AM to 1:15 AM PST

Learn how to build consistent and portable ML environments using containers with AWS services such as Amazon SageMaker and Amazon Elastic Kubernetes Service (Amazon EKS) across multiple deployment clusters. This session will help you build these environments with ease and at scale in the midst of the ever-growing list of open-source frameworks and tools.

Achieve real-time inference at scale with Deep Java Library (AIM410)

Thursday, January 14 – 3:30 PM to 4 PM PST

Thursday, January 14 – 11:30 PM to 12 AM PST

Friday, January 15 – 7:30 AM to 8 AM PST

Deep Java Library (DJL) from AWS helps you build ML applications without needing to learn a new language. Learn how to use DJL and deploy models including BERT in the DJL model zoo to achieve real-time inference at scale.

Don’t miss out on all the action. We look forward to seeing you on the artificial intelligence and machine learning track. Please see the re:Invent agenda for more details and to build your schedule.


About the Author

Shyam Srinivasan is on the AWS Machine Learning marketing team. He cares about making the world a better place through technology and loves being part of this journey. In his spare time, Shyam likes to run long distances, travel around the world, and experience new cultures with family and friends.


Accelerating MLOps at Bayer Crop Science with Kubeflow Pipelines and Amazon SageMaker

This is a guest post by the data science team at Bayer Crop Science. 

Farmers have always collected and evaluated a large amount of data with each growing season: seeds planted, crop protection inputs applied, crops harvested, and much more. The rise of data science and digital technologies provides farmers with a wealth of new information. At Bayer Crop Science, we use AI and machine learning (ML) to help farmers achieve more bountiful and sustainable harvests. We also use data science to accelerate our research and development process; create efficiencies in production, operations, and supply chain; and improve customer experience.

To evaluate potential products, like a short-stature line of corn or an advanced herbicide, Bayer scientists often plant a small trial in a greenhouse or field. We then use advanced sensors and analytical models to evaluate the experimental results. For example, we might fly an unmanned aerial vehicle over a field and use computer vision models to count the number of plants or measure their height. In this way, we’ve collected data from millions of test plots around the world and used them to train models that can determine the size and position of every plant in our image library.

Analytical models like these are powerful but require effort and skill to design and train effectively. science@scale, the ML engineering team at Bayer Crop Science, has made these techniques more accessible by integrating Amazon SageMaker with open-source tools like KubeFlow Pipelines to create reproducible templates for analytical model training, hosting, and access. These resources help standardize how our data scientists interact with SageMaker services. They also make it easier to meet Bayer-specific requirements, such as using multiple AWS accounts and resource tags.

Standardizing the ML workflow for Bayer Crop Science

Data science teams at Bayer Crop Science follow a common pattern to develop and deploy ML models:

  1. A data scientist develops model and training code in a SageMaker notebook or other coding environment running in a project-specific AWS account.
  2. A data scientist trains the model on data stored in Amazon Simple Storage Service (Amazon S3).
  3. A data scientist partners with an ML engineer to deploy the trained model as an inference service.
  4. An ML engineer creates the API proxies required for applications outside of the project-specific account to call the inference service.
  5. ML and other engineers perform additional steps to meet Bayer-specific infrastructure and security requirements.

To automate this process, our team transformed the steps into a reusable, parameterized workflow using KubeFlow Pipelines (KFP). Each step of a workflow (a KFP component) is associated with a Docker container and connected to the other steps via the KFP framework. We host Bayer’s model training and deployment process on KubeFlow by using the Amazon SageMaker Components for KubeFlow Pipelines, pre-built modules that simplify the process of running SageMaker operations from within KFP. We combined these with custom components to automate the Bayer-specific engineering steps, particularly those relating to cybersecurity. The resulting pipeline allows data scientists to trigger model training and deployment with only a few lines of code and ensures that model artifacts are generated and maintained consistently. This gives data scientists more time to focus on improving the models themselves.

AWS account setup

Bayer Crop Science organizes its cloud resources into a large number of application-, team-, and project-specific accounts. For this reason, many ML projects require resources in at least three AWS accounts:

  • ML support account – Contains the shared infrastructure necessary to perform Bayer-specific proxy generation and other activities across multiple projects
  • KubeFlow account – Contains an Amazon Elastic Kubernetes Service (Amazon EKS) cluster hosting our KubeFlow deployment
  • Scientist account – At least one project-specific account in which data scientists store most of the required data and perform model development and training

The following diagram illustrates this architecture.

ML support AWS account

One centralized account contains the infrastructure required to perform Bayer-specific post-processing steps across multiple ML projects. Most notably, this includes a KubeFlow Master Pipeline Execution AWS Identity and Access Management (IAM) role. This role has trust relationships with all the pipeline execution roles in the scientist account, which it can assume when running the pipeline. It’s separate from the Pipeline Runner IAM role in the KubeFlow AWS account to allow management of these relationships independent from other entities within the KubeFlow cluster. The following code shows the trust relationship:

Trust Relationship (one element for each scientist account):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::[kubeflow-account-number]:role/[kubeflow-pipeline-exeution-role-name]"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

KubeFlow AWS account

Bayer Crop Science uses a standard installation of KubeFlow hosted on Amazon EKS in a centralized AWS account. At the time of this writing, all KubeFlow pipelines run within the same namespace on a KubeFlow cluster, and all components assume a custom IAM role when they run. The role can be inherited from the worker instance, applied via OIDC integration (preferred), or obtained using open-source methods such as kube2iam.

Scientist AWS account

To enable access by model training and hosting resources, all scientist accounts must contain several IAM roles with standard permission sets. These are typically provisioned on request by an ML engineer using Terraform. These roles include:

  • Model Execution – Supports SageMaker inference endpoints
  • Training Execution – Supports SageMaker training jobs
  • KubeFlow Pipeline Execution – Supports creating, updating, or deleting resources using the Amazon SageMaker Components for KubeFlow Pipelines

These IAM roles are given policies that are appropriate for their associated tasks, which can vary depending on organizational needs. An S3 bucket is also created to store trained model artifacts and any data required by the model during inference or training.
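As an illustration of the shape of these roles (not Bayer’s actual Terraform), the sketch below creates a Training Execution role whose trust policy lets SageMaker training jobs assume it; the role name and attached policy are assumptions.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that allows the SageMaker service to assume the role for training jobs
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

role = iam.create_role(
    RoleName="sagemaker-training-execution-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Supports SageMaker training jobs in the scientist account",
)

# Attach a managed policy; a production setup would scope permissions more tightly
iam.attach_role_policy(
    RoleName=role["Role"]["RoleName"],
    PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
)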

KubeFlow pipeline setup

Our ML pipeline (see the following diagram) uses Amazon SageMaker Components for KubeFlow Pipelines to standardize the integration with SageMaker training and deployment services.

The ML pipeline exposes the following parameters:

  • model_name – Name of the model to train and deploy. Influences the job, endpoint, endpoint config, and model names.
  • model_docker_image – If present, the pipeline attempts to deploy a model using this base Docker image.
  • model_artifact_s3_path – If a model artifact already exists and doesn’t need to be trained, its S3 path can be specified.
  • environment – JSON object containing environment variables injected into the model endpoint.
  • training_algorithm_name – If training without a Docker image, one of the preconfigured AWS training algorithms can be specified.
  • training_docker_image – If training with a base Docker image, it can be specified here.
  • training_hyperparameters – JSON object containing hyperparameters for the training job.
  • training_instance_count – Number of training instances, for use in distributed training scenarios.
  • training_instance_type – String indicating which ML instance type is used to host the training process.
  • endpoint_instance_type – String indicating which ML instance type is used to host the endpoint process.
  • training_channels – JSON array of data channels that are injected into the training job.
  • training_s3_output_path – Base S3 path where model artifacts are written in the case of a training job.
  • account_id – Account number of the data scientist account. Used in role assumption logic.

See the following pipeline code:

@dsl.pipeline(name='Kubeflow Sagemaker Component Deployment Pipeline')
def pipeline(model_name = "",
             account_id = "",
             model_docker_image = "",
             model_artifact_s3_path = "",
             environment = '{}',
             training_algorithm_name = '',
             training_docker_image = "",
             training_hyperparameters = '{}',
             training_instance_count = 2,
             endpoint_instance_type = "ml.m5.large",
             training_instance_type = "ml.m5.large",
             training_channels = '',
             training_s3_output_path = ""
):

    # ... pipeline component code ...

if __name__ == '__main__':
    kfp.compiler.Compiler().compile(pipeline, __file__ + '.tar.gz')
    print("Pipeline compiled successfully.")

To create the pipeline, we ran the .py file to compile it into a .tar.gz file and uploaded it into the KubeFlow UI.

Running the pipeline

After pipeline creation is complete, data scientists can invoke the pipeline from multiple Jupyter notebooks using the KubeFlow SDK. They can then track the pipeline run for their model in the KubeFlow UI. See the following code:

import json

import boto3
import kfp
import requests

# get_oauth_token, client_id, client_secret, and kubeflow_api are defined elsewhere in the notebook
kfp_token = get_oauth_token(client_id, client_secret)
kfp_client = kfp.Client(host=kubeflow_api, client_id=client_id, existing_token=kfp_token)
print("Connect to: " + str(kfp_client._run_api.api_client.configuration.host))
experiment = kfp_client.get_experiment(experiment_name="Default")
print(experiment)

def get_sgm_deploy_pipeline_id():
    pipelines = kfp_client.list_pipelines(page_size=1000)
    pipeline_id = None
    for pipeline in pipelines.pipelines:
        if pipeline.name == "sagemaker-components-poc":
            pipeline_id = pipeline.id
            break
    return pipeline_id

sagemaker_deployment_parameters = {
    "model_name": "your-model-name",
    "account_id": boto3.client("sts").get_caller_identity()["Account"],
    "model_docker_image": "520713654638.dkr.ecr.us-east-1.amazonaws.com/sagemaker-tensorflow-serving:1.12-cpu",
    "environment": json.dumps({ "SAGEMAKER_TFS_NGINX_LOGLEVEL": "info"}),
    "training_docker_image": "520713654638.dkr.ecr.us-east-1.amazonaws.com/sagemaker-tensorflow-scriptmode:1.12-cpu-py3",
    "training_hyperparameters": json.dumps({
      "model_dir": "/opt/ml/model",
      "sagemaker_container_log_level": "20",
      "sagemaker_enable_cloudwatch_metrics": "false",
      "sagemaker_mpi_custom_mpi_options": "-verbose --NCCL_DEBUG=INFO -x OMPI_MCA_btl_vader_single_copy_mechanism=none",
      "sagemaker_mpi_enabled": "true",
      "sagemaker_mpi_num_of_processes_per_host": "2",
      "sagemaker_program": "train_mnist.py",
      "sagemaker_region": "us-east-1",
      "sagemaker_submit_directory": "s3://path/to/sourcedir.zip"
}),
    "training_instance_count": "2",
    "training_channels": '[{"ChannelName":"train","DataSource":{"S3DataSource":{"S3Uri":"s3://path/to/training-data","S3DataType":"S3Prefix","S3DataDistributionType":"FullyReplicated"}},"ContentType":"","CompressionType":"None","RecordWrapperType":"None","InputMode":"File"},{"ChannelName":"test","DataSource":{"S3DataSource":{"S3Uri":"s3://path/to/test/data","S3DataType":"S3Prefix","S3DataDistributionType":"FullyReplicated"}},"ContentType":"","CompressionType":"None","RecordWrapperType":"None","InputMode":"File"}]',
    "training_s3_output_path": "s3://path/to/model/artifact/output/"
}

run = {
    "name": "my-run-name",
    "pipeline_spec": { 
        "parameters": [
            { "name": param, "value": sagemaker_deployment_parameters[param] } for param in sagemaker_deployment_parameters.keys()
        ], 
        "pipeline_id": get_sgm_deploy_pipeline_id() 
    },
    "resource_references": [
        {
            "key": {
                "id": experiment.id,
                "type": "EXPERIMENT"
            },
            "relationship": "OWNER"
        }
    ]
}

requests.post("{}/apis/v1beta1/runs".format(kubeflow_api), data=json.dumps(run), headers={ "Authorization": "Bearer " + kfp_token })

Each run consists of a series of steps:

  1. Create a persistent volume claim.
  2. Generate AWS credentials.
  3. Generate resource tags.
  4. (Optional) Transfer the Docker image to Amazon Elastic Container Registry (Amazon ECR).
  5. Train the model.
  6. Generate a model artifact.
  7. Deploy the model on SageMaker hosting services.
  8. Perform Bayer-specific postprocessing.

Step 1: Creating a persistent volume claim

The first step of the process verifies that a persistent volume claim (PVC) exists within the Kubernetes cluster hosting the KubeFlow instances. This volume is returned to the pipeline and used to pass data to various components within the pipeline. See the following code:

from kubernetes import client, config
from kubernetes.client.rest import ApiException


def get_namespace():
    return open("/var/run/secrets/kubernetes.io/serviceaccount/namespace").read()


def check_pvc_exists(pvc):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    namespace = get_namespace()
    try:
        response = v1.read_namespaced_persistent_volume_claim(pvc, namespace)
    except ApiException as error:
        if error.status == 404:
            print("PVC {} does not exist, so it will be created.".format(pvc))
            return False
        raise
    print(response)
    return True


def create_pvc(pvc_name):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    namespace = get_namespace()
    pvc_metadata = client.V1ObjectMeta(name=pvc_name)
    requested_resources = client.V1ResourceRequirements(requests={"storage": "50Mi"})
    pvc_spec = client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],
        resources=requested_resources,
        storage_class_name="efs",
        data_source=None,
        volume_name=None
    )
    k8s_resource = client.V1PersistentVolumeClaim(
        api_version="v1",
        kind="PersistentVolumeClaim",
        metadata=pvc_metadata,
        spec=pvc_spec
    )
    response = v1.create_namespaced_persistent_volume_claim(namespace, k8s_resource)
    print(response)

Step 2: Generating AWS credentials

This step generates a session token for the pipeline execution role in the specified scientist AWS account. It then writes a credentials file to the PVC in a way that allows boto3 to access it as a configuration. Downstream pipeline components mount the PVC as a volume and use the credentials file to perform operations against SageMaker.

This credential generation step is required for KubeFlow to operate across multiple AWS accounts in Bayer’s environment. This is because all pipelines run in the same namespace and run using the generic KubeFlow Pipeline Runner IAM role from the Kubeflow AWS account. Each pipeline in Bayer’s Kubeflow environment has a dedicated IAM role associated with it that has a trust relationship with the Kubeflow Pipeline Runner IAM role. For this deployment workflow, the SageMaker Deployment Master Pipeline Executor IAM role is assumed by the KubeFlow Pipeline Runner IAM role, and then the appropriate deployment role within the data scientist account is assumed by that role in turn. This keeps the trust relationships for the deployment process as self-contained as possible. See the following code:

import os
credentials_file_path = "/tmp/aws_credentials"
if os.path.exists(credentials_file_path):
    os.remove(credentials_file_path)

import argparse
import sts_ops

parser = argparse.ArgumentParser()
parser.add_argument("--account_id", help="AWS Account Id", required=True)
parser.add_argument("--master_pipeline_role", help="ARN of master pipeline role", required=True)


args = parser.parse_args()

master_session = sts_ops.assume_master_pipeline_role(args.master_pipeline_role)
creds = sts_ops.generate_deploy_session_credentials(master_session, args.account_id)
credentials_output = """[default]
aws_access_key_id = {}
aws_secret_access_key = {}
aws_session_token = {}
""".format(creds["AccessKeyId"], creds["SecretAccessKey"], creds["SessionToken"])
open("/tmp/aws_credentials", "w").write(credentials_output)
open("/tmp/aws_credentials_location.txt", "w").write(credentials_file_path) 

Step 3: Generating resource tags

Within Bayer, a standard set of tags is used to help identify AWS resources. These tags are specified in an S3 path and applied to the model and endpoints via parameters in the corresponding SageMaker components.
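For illustration only, a tag-generation component along these lines could read the standard tags from S3 and emit them in the JSON form the SageMaker components accept; the bucket, key, and file layout here are hypothetical.

import json
import boto3


def generate_resource_tags(bucket, key):
    """Read a JSON file of standard tags from S3 and return it in SageMaker tag format."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    # e.g. {"cost-center": "1234", "data-classification": "internal"}
    standard_tags = json.loads(body)
    # SageMaker components expect a JSON list of {"Key": ..., "Value": ...} pairs
    return json.dumps([{"Key": k, "Value": v} for k, v in standard_tags.items()])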

Step 4: (Optional) Transferring a Docker image to Amazon ECR

If the model training and inference images are not stored in a SageMaker-compatible Docker repository, this step copies them into Amazon ECR using a custom KubeFlow component.
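A rough sketch of that copy step is shown below. It assumes Docker is available on the component’s host, and the repository and image names are placeholders; Bayer’s actual component differs.

import base64
import subprocess

import boto3


def copy_image_to_ecr(source_image, repo_name, region="us-east-1"):
    """Pull a public image, then tag and push it to a private Amazon ECR repository."""
    ecr = boto3.client("ecr", region_name=region)
    account_id = boto3.client("sts").get_caller_identity()["Account"]
    registry = "{}.dkr.ecr.{}.amazonaws.com".format(account_id, region)

    # Create the repository if it doesn't already exist
    try:
        ecr.create_repository(repositoryName=repo_name)
    except ecr.exceptions.RepositoryAlreadyExistsException:
        pass

    # Authenticate the local Docker daemon against ECR
    token = ecr.get_authorization_token()["authorizationData"][0]
    user, password = base64.b64decode(token["authorizationToken"]).decode().split(":")
    subprocess.run(["docker", "login", "-u", user, "-p", password, registry], check=True)

    # Pull, re-tag, and push the image
    target = "{}/{}:latest".format(registry, repo_name)
    subprocess.run(["docker", "pull", source_image], check=True)
    subprocess.run(["docker", "tag", source_image, target], check=True)
    subprocess.run(["docker", "push", target], check=True)
    return target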

Step 5: Training the model

The SageMaker Training KubeFlow Pipelines component creates a SageMaker training job and outputs a path to the eventual model artifact for downstream use. See the following code:

train_model_op = kfp.components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/train/component.yaml')
train_model_step = apply_environment_variables(train_model_op(
    algorithm_name=training_algorithm_name,
    hyperparameters=training_hyperparameters,
    image=training_docker_image,
    instance_type=training_instance_type,
    channels=training_channels,
    region=aws_region,
    instance_count=training_instance_count,
    role="arn:aws:iam::{}:role/sagemaker-deploy-model-execution-role".format(account_id),
    model_artifact_path=training_s3_output_path,
    network_isolation=False
), sgm_volume, create_secret_step.output)

Step 6: Generating a model artifact

The SageMaker Create Model KubeFlow Pipelines component generates a .tar.gz file containing the model configuration and trained parameters for downstream use. If a model artifact already exists in the specified S3 location, this step deletes it before generating a new one. See the following code:

sagemaker_create_model_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/model/component.yaml')
sagemaker_create_model_step = sagemaker_create_model_op(
    region=aws_region,
    model_name=model_name,
    image=image,
    role="arn:aws:iam::{}:role/sagemaker-deploy-model-execution-role".format(account_id),
    model_artifact_url=model_artifact_url,
    network_isolation=False,
    environment=environment,
    tags=tags
)

Step 7: Deploying the model on SageMaker hosting services

The SageMaker Create Endpoint KubeFlow Pipelines component creates an endpoint configuration and HTTPS endpoint. This process can take some time because the component pauses until the endpoint is in a ready state. See the following code:

sagemaker_deploy_op = components.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/cb36f87b727df0578f4c1e3fe9c24a30bb59e5a2/components/aws/sagemaker/deploy/component.yaml')
create_endpoint_step = apply_environment_variables(sagemaker_deploy_op(
    region=aws_region,
    endpoint_config_name=full_model_name,
    model_name_1=full_model_name,
    instance_type_1=endpoint_instance_type,
    endpoint_name=full_model_name,
    endpoint_config_tags=generate_tags_step.output,
    endpoint_tags=generate_tags_step.output
), sgm_volume, create_secret_step.output)
create_endpoint_step.after(sagemaker_create_model_step)

Step 8: Performing Bayer-specific postprocessing

Finally, the pipeline generates an Amazon API Gateway deployment and other Bayer-specific resources required for other applications within the Bayer network to use the model.

Conclusion

Data science is complex enough without asking data scientists to take on additional engineering responsibilities. By integrating open-source tools like KubeFlow with the power of Amazon SageMaker, the science@scale team at Bayer Crop Science is making it easier to develop and share advanced ML models. The MLOps workflow described in this post gives data scientists a self-service method to deploy scalable inference endpoints in the same notebooks they use for exploratory data analysis and model development. The result is rapid iteration, more successful data science products, and ultimately greater value for our farmer customers.

In the future, we’re looking forward to adding additional SageMaker components for hyperparameter optimization and data labeling to our pipeline. We’re also looking at ways to recommend instance types, configure endpoint autoscaling, and support multi-model endpoints. These additions will allow us to further standardize our ML workflows.


About the Authors

Thomas Kantowski is a cloud engineer at Bayer Crop Science. He received his master’s degree from the University of Oklahoma.

Brian Loyal leads science@scale, the enterprise ML engineering team at Bayer Crop Science.

Bhaskar Dutta is a data scientist at Bayer Crop Science. He designs machine learning models using deep neural networks and Bayesian statistics.
