British Newscaster speaking style now available in Amazon Polly


Amazon Polly turns text into lifelike speech, allowing you to create applications that talk and build entirely new categories of speech-enabled products. We’re thrilled to announce the launch of a brand-new British Newscaster speaking style voice: Amy. The speaking style mimics a formal and authoritative British newsreader. This Newscaster voice is the result of our latest achievements in Neural Text-to-Speech (NTTS) technology, making it possible to release new voices with only a few hours of recordings.

Amy’s British English Newscaster voice offers an alternative to the existing Newscaster speaking styles in US English (Matthew and Joanna, launched in July 2019) and US Spanish (Lupe, launched in April 2020). The style is suitable for a multitude of sectors, such as publishing and media. The high quality of the voice and its broadcaster-like style contribute to a more pleasant listening experience when relaying news content.

Don’t just take our word for it! Our customer SpeechKit is a text-to-audio service that utilizes Amazon Polly as a core component of their toolkit. Here’s what their co-founder and COO, James MacLeod, has to say about this exciting new style: “News publishers use SpeechKit to publish their articles and newsletters in audio. The Amy Newscaster style is another great improvement from the Polly team; the pitch and clarity of intonation of this style fit well with this type of short-to-mid form news publishing. It provides listeners with a direct and informative style they’re used to hearing from human-read audio articles. As these voices advance, and new listening habits develop, publishers continue to observe improvements in audio engagement. News publishers can now start using the Amy Newscaster style through SpeechKit to make their articles available in audio, at scale, and track audio engagement.”

You can listen to the following samples to hear how this brand-new British Newscaster speaking style sounds:

Amy: 

The following samples are the other Newscaster speaking styles in US English and US Spanish: 

Matthew:

Joanna:

Lupe: 

You can use Amy’s British Newscaster speaking style via the Amazon Polly console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs. The feature is available in all AWS Regions that support NTTS. For more information, see What Is Amazon Polly? For the full list of available voices, see Voices in Amazon Polly. Or log in to the Amazon Polly console to try it out for yourself! Additionally, Amy Newscaster and other selected Polly voices are now available to Alexa skill developers.
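For instance, a minimal boto3 sketch like the following applies the Newscaster style through the SSML amazon:domain tag (the sample sentence and output file name are our own illustrations):

import boto3

polly = boto3.client("polly")  # assumes credentials and a Region that supports NTTS

response = polly.synthesize_speech(
    Engine="neural",  # the Newscaster style requires the neural engine
    VoiceId="Amy",
    LanguageCode="en-GB",
    TextType="ssml",
    Text=(
        '<speak><amazon:domain name="news">'
        "The pound rallied this morning after stronger-than-expected retail figures."
        "</amazon:domain></speak>"
    ),
    OutputFormat="mp3",
)

# Save the synthesized audio to a local file
with open("amy_newscaster.mp3", "wb") as f:
    f.write(response["AudioStream"].read())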

 


About the Author

Goeric Huybrechts is a Software Development Engineer in the Amazon Text-to-Speech Research team. At work, he is passionate about everything that touches AI. Outside of work, he loves sports, football in particular, and loves to travel.

Read More

Learn from the winner of the AWS DeepComposer Chartbusters challenge The Sounds of Science


AWS is excited to announce the winner of the AWS DeepComposer Chartbusters challenge The Sounds of Science: Sungin Lee. AWS DeepComposer gives developers a creative way to get started with machine learning (ML). In June, we launched Chartbusters, a monthly global competition in which developers use AWS DeepComposer to create original compositions and compete to showcase their ML skills. The third challenge, The Sounds of Science, asked developers to create background music for a video clip.

Sungin is a Junior Solutions Architect for MegazoneCloud, one of the largest AWS partners in South Korea. Sungin studied linguistics and anthropology in university, but made a career change to cloud engineering. When Sungin first started learning about ML, he never knew he would create the winning composition for the Chartbusters challenge.

We interviewed Sungin to learn about his experience competing in the third Chartbusters challenge, which ran from September 2–23, 2020, and asked him to tell us more about how he created his winning composition.


Sungin Lee at his work station.

Getting started with machine learning

Sungin’s interest in ML and Generative Adversarial Networks (GANs) began with the vocational education he received as he transitioned to cloud engineering.

“As part of the curriculum, there was a team project in which my team tried to make a model that generates an image according to the given sentence through GANs. Unfortunately, we failed at training the model due to the complexity of it but [the experience] deepened my interest in GANs.”

After receiving his vocational education, Sungin chose to pursue a career in cloud engineering and joined MegazoneCloud. Six months into his career, Sungin’s team leader at work encouraged him to try AWS DeepComposer.

“When the challenge first launched, my team leader told me about the challenge and encouraged me to participate in it. I was already interested in GANs and music, and as a new hire, I wanted to show my machine learning skills.” 

Building in AWS DeepComposer

In The Sounds of Science, developers composed background music for a video clip using the Autoregressive Convolutional Neural Network (AR-CNN) algorithm and edited notes with the newly launched Edit melody feature to better match the music with the provided video.

“I began by selecting the initial melody. When I first saw the video, I thought that one of the sample melodies, ‘Ode to Joy,’ went quite well with the atmosphere of the video and decided to use it. But I wanted the melody to sound more soothing than the original so I slightly lowered the pitch. Then I started enhancing the melody with AR-CNN.”


Sungin composing his melody.

Sungin worked on his composition for a day before generating his winning melody.

“I generated multiple compositions with AR-CNN until I liked the melody. Then I started adding more instruments. I experimented with all sample models from MuseGAN and decided that rock suited the melody best. I found the ‘edit melody’ feature very helpful. In the process of enhancing the melody with AR-CNN, some off-key notes would appear and disrupt the harmony. But with the ‘edit melody’ feature, I could just remove or modify the wrong note and put the music back in key!”

The Edit melody feature on the AWS DeepComposer console.

“The biggest obstacle was my own doubt. I had a hard time being satisfied with the output, and even thought of giving up on the competition and never submitting any compositions. But then I thought, why give up? So I submitted my best composition by far and won the challenge.”

You can listen to Sungin’s winning composition, “The Joy,” on the AWS DeepComposer SoundCloud page.

Conclusion

Sungin believes that the AWS DeepComposer Chartbusters challenge gave him the confidence in his career transition to continue pursuing ML.

“It has been only a year since I started studying machine learning properly. As a non-Computer Science major without any basic computer knowledge, it was hard to successfully achieve my goals with machine learning. For example, my team project during the vocational education ended up unsuccessful, and the AWS DeepRacer model that I made could not finish the track. Then, when I was losing confidence in myself, I won first place in the AWS DeepComposer Chartbusters challenge! This victory reminded me that I could actually win something with machine learning and motivated me to keep studying.”

Overall, Sungin completed the challenge with a feeling of accomplishment and a desire to learn more.

“This challenge gave me self-confidence. I will keep moving forward on my machine learning path and keep track of new GAN techniques.”

Congratulations to Sungin for his well-deserved win!

We hope Sungin’s story has inspired you to learn more about ML and get started with AWS DeepComposer. Check out the next AWS DeepComposer Chartbusters challenge, and start composing today.

 


About the Author

Paloma Pineda is a Product Marketing Manager for AWS Artificial Intelligence Devices. She is passionate about the intersection of technology, art, and human centered design. Out of the office, Paloma enjoys photography, watching foreign films, and cooking French cuisine.

Read More

Bringing your own custom container image to Amazon SageMaker Studio notebooks


Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). SageMaker Studio lets data scientists spin up Studio notebooks to explore data, build models, launch Amazon SageMaker training jobs, and deploy hosted endpoints. Studio notebooks come with a set of pre-built images, which consist of the Amazon SageMaker Python SDK and the latest version of the IPython runtime or kernel. With this new feature, you can bring your own custom images to Amazon SageMaker notebooks. These images are then available to all users authenticated into the domain. In this post, we share how to bring a custom container image to SageMaker Studio notebooks.

Developers and data scientists may require custom images for several different use cases:

  • Access to specific or the latest versions of popular ML frameworks such as TensorFlow, MXNet, or PyTorch.
  • Bringing custom code or algorithms developed locally to Studio notebooks for rapid iteration and model training.
  • Access to data lakes or on-premises data stores via APIs; admins need to include the corresponding drivers within the image.
  • Access to a backend runtime (also called a kernel) other than IPython, such as R or Julia. You can also use the approach outlined in this post to install a custom kernel.

In large enterprises, ML platform administrators often need to ensure that any third-party packages and code are pre-approved by security teams for use, and not downloaded directly from the internet. A common workflow might be that the ML platform team approves a set of packages and frameworks for use, builds a custom container using these packages, tests the container for vulnerabilities, and pushes the approved image to a private container registry such as Amazon Elastic Container Registry (Amazon ECR). Now, ML platform teams can directly attach approved images to the Studio domain (see the following workflow diagram). You can simply select the approved custom image of your choice in Studio. You can then work with the custom image locally in your Studio notebook. With this release, a single Studio domain can contain up to 30 custom images, with the option to add a new version or delete images as needed.

We now walk through how you can bring a custom container image to SageMaker Studio notebooks using this feature. Although we demonstrate the default approach over the internet, we include details on how you can modify this to work in a private Amazon Virtual Private Cloud (Amazon VPC).

Prerequisites

Before getting started, you need to make sure you meet the following prerequisites:

  • Have an AWS account.
  • Ensure that the execution role you use to access Amazon SageMaker has the following AWS Identity and Access Management (IAM) permissions, which allow SageMaker Studio to create a repository in Amazon ECR with the prefix smstudio, and grant permissions to push and pull images from this repo. To use an existing repository, replace the Resource with the ARN of your repository. To build the container image, you can either use a local Docker client or create the image from SageMaker Studio directly, which we demonstrate here. To create a repository in Amazon ECR, SageMaker Studio uses AWS CodeBuild, and you also need to include the CodeBuild permissions shown below.
    {
        "Effect": "Allow",
        "Action": [
            "ecr:CreateRepository",
            "ecr:BatchGetImage",
            "ecr:CompleteLayerUpload",
            "ecr:DescribeImages",
            "ecr:DescribeRepositories",
            "ecr:UploadLayerPart",
            "ecr:ListImages",
            "ecr:InitiateLayerUpload",
            "ecr:BatchCheckLayerAvailability",
            "ecr:GetDownloadUrlForLayer",
            "ecr:PutImage"
        ],
        "Resource": "arn:aws:ecr:*:*:repository/smstudio*"
    },
    {
        "Effect": "Allow",
        "Action": "ecr:GetAuthorizationToken",
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": [
            "codebuild:DeleteProject",
            "codebuild:CreateProject",
            "codebuild:BatchGetBuilds",
            "codebuild:StartBuild"
        ],
        "Resource": "arn:aws:codebuild:*:*:project/sagemaker-studio*"
    }

  • Your SageMaker role should also have a trust policy with AWS CodeBuild as shown below. For more information, see Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks.
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": [
              "codebuild.amazonaws.com"
            ]
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }

  • Install the AWS Command Line Interface (AWS CLI) on your local machine. For instructions, see Installing the AWS CLI.

If you wish to use your private VPC to securely bring your custom container, you also need the associated VPC resources in place. To set up these resources, see Securing Amazon SageMaker Studio connectivity using a private VPC and the associated GitHub repo.

Creating your Dockerfile

To demonstrate the common need of data scientists to experiment with the newest frameworks, we use the following Dockerfile, which uses the latest TensorFlow 2.3 version as the base image. You can replace this Dockerfile with a Dockerfile of your choice. Currently, SageMaker Studio supports a number of base images, such as Ubuntu, Amazon Linux 2, and others. The Dockerfile installs the IPython runtime required to run Jupyter notebooks, and installs the Amazon SageMaker Python SDK and boto3.

In addition to notebooks, data scientists and ML engineers often iterate and experiment on their local laptops using various popular IDEs such as Visual Studio Code or PyCharm. You may wish to bring these scripts to the cloud for scalable training or data processing. You can include these scripts as part of your Docker container so they’re visible in your local storage in SageMaker Studio. In the following code, we copy the train.py script, which is a base script for training a simple deep learning model on the MNIST dataset. You may replace this script with your own scripts or packages containing your code.

FROM tensorflow/tensorflow:2.3.0
RUN apt-get update
RUN apt-get install -y git
RUN pip install --upgrade pip
RUN pip install ipykernel && \
    python -m ipykernel install --sys-prefix && \
    pip install --quiet --no-cache-dir \
    'boto3>1.0,<2.0' \
    'sagemaker>2.0,<3.0'
COPY train.py /root/train.py # Replace with your own custom scripts or packages

The train.py script we copy into the image contains the following code:

import tensorflow as tf
import os 
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)

Instead of a custom script, you can also include other files, such as Python files that access client secrets and environment variables via AWS Secrets Manager or AWS Systems Manager Parameter Store, config files to enable connections with private PyPi repositories, or other package management tools. Although you can copy the script using the custom image, any ENTRYPOINT or CMD commands in your Dockerfile don’t run.

Setting up your installation folder

You need to create a folder on your local machine, and add the following files in that folder:

  • The Dockerfile that you created in the previous step
  • A file named app-image-config-input.json with the following content:
    "AppImageConfigName": "custom-tf2",
        "KernelGatewayImageConfig": {
            "KernelSpecs": [
                {
                    "Name": "python3",
                    "DisplayName": "Python 3"
                }
            ],
            "FileSystemConfig": {
                "MountPath": "/root/data",
                "DefaultUid": 0,
                "DefaultGid": 0
            }
        }
    }

We set the backend kernel for this Dockerfile as an IPython kernel, and provide a mount path to the Amazon Elastic File System (Amazon EFS). Amazon SageMaker recognizes kernels as defined by Jupyter. For example, for an R kernel, set Name in the preceding code to ir.

  • Create a file named default-user-settings.json with the following content. If you’re adding multiple custom images, just add to the list of CustomImages.
    {
        "DefaultUserSettings": {
            "KernelGatewayAppSettings": {
                "CustomImages": [
                    {
                        "ImageName": "tf2kernel",
                        "AppImageConfigName": "custom-tf2"
                    }
                ]
            }
        }
    }

Creating and attaching the image to your Studio domain

If you have an existing domain, you simply need to update the domain with the new image. In this section, we demonstrate how existing Studio users can attach images. For instructions on onboarding a new user, see Onboard to Amazon SageMaker Studio Using IAM.

First, we use the SageMaker Studio Docker build CLI to build and push the Dockerfile to Amazon ECR. Note that you can use other methods to push containers to Amazon ECR, such as your local Docker client or the AWS CLI.

    1. Log in to Studio using your user profile.
    2. Upload your Dockerfile and any other code or dependencies you wish to copy into your container to your Studio domain.
    3. Navigate to the folder containing the Dockerfile.
    4. In a terminal window or a notebook cell, install the Studio image build CLI:
!pip install sagemaker-studio-image-build
    5. Export a variable called IMAGE_NAME, set it to the value you specified in default-user-settings.json, and build and push the image:
    sm-docker build . --repository smstudio-custom:IMAGE_NAME

    6. If you wish to use a different repository, replace smstudio-custom in the preceding code with your repo name.

SageMaker Studio builds the Docker image for you and pushes the image to Amazon ECR in a repository named smstudio-custom, tagged with the appropriate image name. To customize this further, such as providing a detailed file path or other options, see Using the Amazon SageMaker Studio Image Build CLI to build container images from your Studio notebooks. For the pip command above to work in a private VPC environment, you need a route to the internet or access to this package in your private repository.

    7. In the installation folder from earlier, create a new file called create-and-update-image.sh and run it:
    ACCOUNT_ID=<aws-account-id> # Replace with your AWS account ID
    REGION=us-east-2 # Replace with your Region
    DOMAINID=d-####### # Replace with your SageMaker Studio domain ID
    IMAGE_NAME=tf2kernel # Replace with your image name

    # Using with SageMaker Studio
    ## Create SageMaker Image with the image in ECR (modify image name as required)
    ROLE_ARN='<execution-role-arn>' # Replace with the ARN of the execution role you want to use

    aws --region ${REGION} sagemaker create-image \
        --image-name ${IMAGE_NAME} \
        --role-arn ${ROLE_ARN}

    aws --region ${REGION} sagemaker create-image-version \
        --image-name ${IMAGE_NAME} \
        --base-image "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/smstudio-custom:${IMAGE_NAME}"

    ## Create AppImageConfig for this image (modify AppImageConfigName and KernelSpecs in app-image-config-input.json as needed)
    aws --region ${REGION} sagemaker create-app-image-config --cli-input-json file://app-image-config-input.json

    ## Update the Domain, providing the Image and AppImageConfig
    aws --region ${REGION} sagemaker update-domain --domain-id ${DOMAINID} --cli-input-json file://default-user-settings.json
    Refer to the AWS CLI Command Reference to read more about the arguments you can pass to the create-image API. To check the status, navigate to your Amazon SageMaker console and choose Amazon SageMaker Studio from the navigation pane.
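If you prefer Python to shell, the same calls are available through boto3. The following is a minimal sketch; the account ID, Region, domain ID, and role ARN are placeholders you must replace:

import boto3

region = "us-east-2"                # replace with your Region
account_id = "<aws-account-id>"     # replace with your AWS account ID
domain_id = "d-#######"             # replace with your Studio domain ID
role_arn = "<execution-role-arn>"   # replace with your execution role ARN
image_name = "tf2kernel"

sm = boto3.client("sagemaker", region_name=region)

# Create the SageMaker Image and a version pointing at the image in ECR
sm.create_image(ImageName=image_name, RoleArn=role_arn)
sm.create_image_version(
    ImageName=image_name,
    BaseImage=f"{account_id}.dkr.ecr.{region}.amazonaws.com/smstudio-custom:{image_name}",
)

# Create the AppImageConfig (same values as app-image-config-input.json)
sm.create_app_image_config(
    AppImageConfigName="custom-tf2",
    KernelGatewayImageConfig={
        "KernelSpecs": [{"Name": "python3", "DisplayName": "Python 3"}],
        "FileSystemConfig": {"MountPath": "/root/data", "DefaultUid": 0, "DefaultGid": 0},
    },
)

# Update the domain, attaching the Image and AppImageConfig
sm.update_domain(
    DomainId=domain_id,
    DefaultUserSettings={
        "KernelGatewayAppSettings": {
            "CustomImages": [
                {"ImageName": image_name, "AppImageConfigName": "custom-tf2"}
            ]
        }
    },
)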

Attaching images using the Studio UI

You can perform the final step of attaching the image to the Studio domain via the UI. In this case, the UI handles creating the Image and AppImageConfig resources for you.

  1. On the Amazon SageMaker console, choose Amazon SageMaker Studio.

On the Control Panel page, you can see that the Studio domain was provisioned, along with any user profiles that you created.

  2. Choose Attach image.

  3. Select whether you wish to attach a new or pre-existing image.
    1. If you select Existing image, choose an image from the Amazon SageMaker image store.
    2. If you select New image, provide the Amazon ECR registry path for your Docker image. The path needs to be in the same Region as the Studio domain, and the ECR repository needs to be in the same account as your Studio domain (or cross-account permissions for Studio need to be enabled).
  4. Choose Next.
  5. For Image name, enter a name.
  6. For Image display name, enter a descriptive name.
  7. For Description, enter a description.
  8. For IAM role, choose the IAM role required by Amazon SageMaker to attach Amazon ECR images to SageMaker images on your behalf.
  9. Additionally, you can tag your image.
  10. Choose Next.

  11. For Kernel name, enter Python 3.
  12. Choose Submit.

The green check box indicates that the image has been successfully attached to the domain.

The Amazon SageMaker image store automatically versions your images. You can select a pre-attached image and choose Detach to detach the image and all versions, or choose Attach image to attach a new version. There is no limit to the number of versions per image or the ability to detach images.
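You can also inspect attached images and their versions programmatically. The following boto3 sketch (using the tf2kernel image name from the earlier example) lists an image's versions:

import boto3

sm = boto3.client("sagemaker")

# List all versions of the custom image created earlier
response = sm.list_image_versions(ImageName="tf2kernel")
for version in response["ImageVersions"]:
    print(version["Version"], version["ImageVersionStatus"], version["BaseImage"])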

User experience with a custom image

Let’s now jump into the user experience for a Studio user.

  1. Log in to Studio using your user profile.
  2. To launch a new activity, choose Launcher.
  3. For Select a SageMaker image to launch your activity, choose tf2kernel.

  4. Choose the Notebook icon to open a new notebook with the custom kernel.

The notebook kernel takes a couple of minutes to spin up, and then you’re ready to go!

Testing your custom container in the notebook

When the kernel is up and running, you can run code in the notebook. First, let’s verify that the TensorFlow version specified in the Dockerfile is available for use. In the following screenshot, we can see that the notebook is using the tf2kernel we just launched.
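A quick way to confirm this in a notebook cell is a check like the following (assuming the image was built from the tensorflow/tensorflow:2.3.0 base above):

import tensorflow as tf

# Should print 2.3.0, matching the base image specified in the Dockerfile
print(tf.__version__)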

Amazon SageMaker notebooks also display the local CPU and memory usage.

Next, let’s try out the custom training script directly in the notebook. Copy the training script into a notebook cell and run it. The script downloads the MNIST dataset via the tf.keras.datasets utility, splits the data into training and test sets, defines a custom deep neural network algorithm, trains the algorithm on the training data, and tests the algorithm on the test dataset.

To experiment with the TensorFlow 2.3 framework, you may wish to test out newly released APIs, such as the newer feature preprocessing utilities in Keras. In the following screenshot, we import the keras.layers.experimental library released with TensorFlow 2.3, which contains newer APIs for data preprocessing. We load one of these APIs and re-run the script in the notebook.
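The following cell is a minimal sketch of that kind of experiment, using the experimental preprocessing layers that shipped with TensorFlow 2.3 (the specific layer and values chosen here are illustrative, not the exact code from the screenshot):

import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# Rescaling replaces the manual x / 255.0 normalization used in train.py
rescale = preprocessing.Rescaling(1.0 / 255)

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = rescale(x_train[..., None])  # add a channel axis, then rescale to [0, 1]
print(x_train.shape, float(tf.reduce_max(x_train)))  # expect (60000, 28, 28, 1) 1.0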

Amazon SageMaker also dynamically updates the CPU and memory usage as the code runs. By bringing your own custom container and training scripts, you can experiment with custom training scripts and algorithms directly in the Amazon SageMaker notebook. When you’re satisfied with the experimentation in the Studio notebook, you can start a training job.

What about the Python files or custom files you included with the Dockerfile using the COPY command? SageMaker Studio mounts the elastic file system at the file path provided in app-image-config-input.json, which we set to /root/data. To prevent Studio from overwriting any custom files you want to include, the COPY command loads the train.py file into the path /root. To access this file, open a terminal or notebook and run the code:

! cat /root/train.py

You should see an output as shown in the screenshot below.

The train.py file is in the specified location.

Logging and metrics in CloudWatch

SageMaker Studio also publishes kernel metrics to Amazon CloudWatch, which you can use for troubleshooting. The metrics are captured under the /aws/sagemaker/studio namespace.

To access the logs, on the CloudWatch console, choose CloudWatch Logs. On the Log groups page, enter the namespace to see logs associated with the Jupyter server and the kernel gateway.
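If you prefer the SDK over the console, a minimal boto3 sketch like the following lists the log groups under that namespace (the prefix is taken from the namespace above; the output depends on your account):

import boto3

logs = boto3.client("logs")

# List CloudWatch Logs log groups published under the Studio namespace
response = logs.describe_log_groups(logGroupNamePrefix="/aws/sagemaker/studio")
for group in response["logGroups"]:
    print(group["logGroupName"])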

Detaching an image or version

You can detach an image or an image version from the domain if it’s no longer supported.

To detach an image and all versions, select the image from the Custom images attached to domain table and choose Detach.

You have the option to also delete the image and all versions, which doesn’t affect the image in Amazon ECR.

To detach an image version, choose the image. On the Image details page, select the image version (or multiple versions) from the Image versions attached to domain table and choose Detach. You see a similar warning and options as in the preceding flow.

Conclusion

SageMaker Studio enables you to collaborate, experiment, train, and deploy ML models in a streamlined manner. To do so, data scientists often require access to the newest ML frameworks, custom scripts, and packages from public and private code repositories and package management tools. You can now create custom images containing all the relevant code, and launch these using Studio notebooks. These images will be available to all users in the Studio domain. You can also use this feature to experiment with other popular languages and runtimes besides Python, such as R, Julia, and Scala. The sample files are available on the GitHub repo. For more information about this feature, see Bring your own SageMaker image.


About the Authors

Stefan Natu is a Sr. Machine Learning Specialist at AWS. He is focused on helping financial services customers build end-to-end machine learning solutions on AWS. In his spare time, he enjoys reading machine learning blogs, playing the guitar, and exploring the food scene in New York City.

 

 

Jaipreet Singh is a Senior Software Engineer on the Amazon SageMaker Studio team. He has been working on Amazon SageMaker since its inception in 2017 and has contributed to various Project Jupyter open-source projects. In his spare time, he enjoys hiking and skiing in the Pacific Northwest.

 

 

Huong Nguyen is a Sr. Product Manager at AWS. She is leading the user experience for SageMaker Studio. She has 13 years’ experience creating customer-obsessed and data-driven products for both enterprise and consumer spaces. In her spare time, she enjoys reading, being in nature, and spending time with her family.

Read More

Amazon Translate now enables you to mark content to not get translated


While performing machine translations, you may have situations where you wish to preserve specific sections of text from being translated, such as names, unique identifiers, or codes. We at the Amazon Translate team are excited to announce a tag modification that allows you to specify which text should not be translated. This feature is available in both the real-time TranslateText API and the asynchronous batch TextTranslation API. You can tag segments of text that you don’t want to translate by wrapping them in an HTML element. In this post, we walk through the step-by-step method to use this feature.

Using the translate-text operation in the AWS Command Line Interface

The following example shows you how to use the translate-text operation from the command line. This example is formatted for Unix, Linux, and macOS. For Windows, replace the backslash (\) Unix continuation character at the end of each line with a caret (^). At the command line, enter the following code:

aws translate translate-text \
--source-language-code "en" \
--target-language-code "es" \
--region us-west-2 \
--text "This can be translated to any language. <p translate=no>But do not translate this!</p>"

You can specify any type of HTML element to do so, for example, paragraph <p>, text section <span>, or block section <div>. When you run the command, you get the following output:

{
    "TranslatedText": "Esto se puede traducir a cualquier idioma. <p translate=no>But do not translate this!</p>",
    "SourceLanguageCode": "en",
    "TargetLanguageCode": "es"
}
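The same request works through the AWS SDKs. The following boto3 sketch mirrors the CLI example above (the Region is carried over from that example):

import boto3

translate = boto3.client("translate", region_name="us-west-2")

response = translate.translate_text(
    Text=(
        "This can be translated to any language. "
        "<p translate=no>But do not translate this!</p>"
    ),
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(response["TranslatedText"])  # the tagged segment is returned untranslated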

Using the span tag on the Amazon Translate console

In this example, we translate the following text from French to English:

Musée du Louvre, c’est ainsi que vous dites Musée du Louvre en français.

You don’t want to translate the first instance of “Musée du Louvre,” but you do want to translate the second instance to “Louvre Museum.” You can tag the first instance using a simple span tag:

<span translate=no>Musée du Louvre</span>, c'est ainsi que vous dites Musée du Louvre en français.

The following screenshot shows the output on the Amazon Translate console.

The following screenshot shows the output translated to Arabic.

Conclusion

In this post, we showed you how to tag and specify text that should not be translated. For more information, see the Amazon Translate Developer Guide and Amazon Translate resources. If you’re new to Amazon Translate, try it out using our Free Tier, which offers 2 million characters per month for free for the first 12 months, starting from your first translation request.

 


About the Author

Watson G. Srivathsan is the Sr. Product Manager for Amazon Translate, AWS’s natural language processing service. On weekends you will find him exploring the outdoors in the Pacific Northwest.

Read More

Intelligently connect to customers using machine learning in the COVID-19 pandemic


The pandemic has changed how people interact, how we receive information, and how we get help. It has shifted much of what used to happen in-person to online. Many of our customers are using machine learning (ML) technology to facilitate that transition, from new remote cloud contact centers, to chatbots, to more personalized engagements online. Scale and speed are important in the pandemic—whether it’s processing grant applications or limiting call wait times for customers. ML tools like Amazon Lex and Amazon Connect are just a few of the solutions helping to power this change with speed, scale, and accuracy. In this post, we explore companies who have quickly pivoted to take advantage of AI capabilities to engage more effectively online and deliver immediate impact.

Chatbots connect governments and their citizens

GovChat is South Africa’s largest citizen engagement platform, connecting over 50 million citizens to 10,000 public representatives in the government. Information flowing to and from the government has gained a new level of urgency, and this connection between citizens and the government is critical in how we adjust and respond to the pandemic. GovChat exists to meet that demand—working directly with the South African government to facilitate the digitization of their COVID-19 social relief grants, help citizens find their closest COVID-19 testing facility, and enable educational institutions to reopen safely.

GovChat uses a chatbot powered by Amazon Lex, a managed AI service for building conversational interfaces into any application using voice and text. The chatbot, available on popular social media platforms such as WhatsApp and Facebook, provides seamless communication between the government and its citizens.

At the beginning of the pandemic, GovChat worked with the South African Social Security Agency to digitize, facilitate, and track applications for a COVID-19 social relief grant. The plan was to create a chatbot that could help citizens easily file and track their grant applications. GovChat needed to act quickly and provide an infrastructure that could rapidly scale to support the unprecedented demand for government aid. To provide speed of delivery and scalability while keeping costs down, GovChat turned to Amazon Lex for voice and text conversational interfaces and AWS Lambda, a serverless compute service. Within days, the chatbot was handling up to 14.2 million messages a day across social media platforms in South Africa regarding the social relief grant.

More recently, the South African Human Rights Commission (SAHRC) turned to GovChat to help gauge schools’ readiness to reopen safely. Parents, students, teachers, and community members can use their mobile devices to provide first-hand, real-time details of their school’s COVID-19 safety checks and readiness as contact learning is resumed, with special attention paid to children with disabilities. In GovChat’s engagements during the COVID-19 pandemic, they found that 28% of service requests at schools have been in relation to a disruption in access to water, which is critical for effective handwashing—a preventative component to fight the spread of the virus. With the real-time data provided by citizens via the chatbot, the government was able to better understand the challenges schools faced and identify areas of improvement. GovChat has processed over 250 million messages through their platform, playing an important role in enabling more effective and timely communications between citizens and their government.

ML helps power remote call centers

Organizations of all kinds have also experienced a rapid increase in call volume to their call centers—from local government, to retail, to telecommunications, to healthcare providers. Organizations have also had to quickly shift to a remote work environment in response to the pandemic. Origin Energy, one of Australia’s largest integrated energy companies serving over 4 million customer accounts, launched an Amazon Connect contact center in March as part of their customer experience transformation. Amazon Connect is an omnichannel cloud contact center with AI/ML capabilities that understands context and can transcribe conversations.

This transition to Amazon Connect accelerated Origin’s move to remote working during the COVID-19 pandemic. This allowed their agents to continue to serve their customers, while also providing increased self-service and automation options such as bill payments, account maintenance, and plan renewals to customers. They deployed new AI/ML capabilities, including neural text-to-speech through Amazon Polly. Since the March 2020 launch, they’ve observed an increase in call quality scores, improved customer satisfaction, and agent productivity—all while managing up to 1,200 calls at a time. They’re now looking to further leverage natural language understanding with Amazon Lex and automated quality management with built-in speech-to-text and sentiment analysis from Contact Lens for Amazon Connect. Amazon Connect has supported Origin in their efforts to respond rapidly to opportunities and customer feedback as they focus on continually improving their customer experience with affordable, reliable, and sustainable energy.

Conclusion

Organizations are employing creative strategies to engage their customers and provide a more seamless experience. This is a two-way street; not only can organizations more effectively distribute key information, but—more importantly—they can listen. They can hear the evolving needs of their customers and adjust in real time to meet them.

To learn about another way AWS is working toward solutions from the COVID-19 pandemic, check out the blog article Introducing the COVID-19 Simulator and Machine Learning Toolkit for Predicting COVID-19 Spread.

 

 


About the Author

Taha A. Kass-Hout, MD, MS, is director of machine learning and chief medical officer at Amazon Web Services (AWS). Taha received his medical training at Beth Israel Deaconess Medical Center, Harvard Medical School, and during his time there, was part of the BOAT clinical trial. He holds a doctor of medicine and master’s of science (bioinformatics) from the University of Texas Health Science Center at Houston.

Read More