Relevance tuning with Amazon Kendra


Amazon Kendra is a highly accurate and easy-to-use enterprise search service powered by machine learning (ML). As your users begin to perform searches using Amazon Kendra, you can fine-tune which search results they receive. For example, you might want to prioritize results from certain data sources that are more actively curated and therefore more authoritative. Or if your users frequently search for documents like quarterly reports, you may want to display the most recent quarterly reports first.

Relevance tuning allows you to change how Amazon Kendra processes the importance of certain fields or attributes in search results. In this post, we walk through how you can manually tune your index to achieve the best results.

It’s important to understand the three main response types of Amazon Kendra: matching to FAQs, reading comprehension to extract suggested answers, and document ranking. Relevance tuning impacts document ranking. Additionally, relevance tuning is just one of many factors that impact search results for your users. You can’t change specific results, but you can influence how much weight Amazon Kendra applies to certain fields or attributes.

Faceting

Because you’re tuning based on fields, you need to have those fields faceted in your index. For example, if you want to boost the signal of the author field, you need to make the author field a searchable facet in your index. For more information about adding facetable fields to your index, see Creating custom document attributes.

Performing relevance tuning

You can perform relevance tuning in several different ways, such as on the AWS Management Console through the Amazon Kendra search console or with the Amazon Kendra API. You can also use several different types of fields when tuning:

  • Date fields – Boost more recent results
  • Number fields – Amplify content based on number fields, such as total view counts
  • String fields – Elevate results based on string fields, for example those that are tagged as coming from a more authoritative data source
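Each of these boosts maps to a `Relevance` setting on an index field, which you can adjust with the console sliders shown later or programmatically through the `UpdateIndex` API. The following sketch shows the general shape of that call; the index ID and field name are placeholders, not values from this post's dataset.

```python
def importance_update(field: str, field_type: str, importance: int) -> dict:
    """Build a minimal field-level boost, the API equivalent of a console slider."""
    return {"Name": field, "Type": field_type,
            "Relevance": {"Importance": importance}}

def tune_field(index_id: str, field_update: dict) -> None:
    """Apply one relevance-tuning update to an Amazon Kendra index (sketch)."""
    import boto3  # imported lazily so the payload helper works offline
    kendra = boto3.client("kendra")
    kendra.update_index(Id=index_id,
                        DocumentMetadataConfigurationUpdates=[field_update])

# Example: boost a string field such as an author facet
# tune_field("your-index-id", importance_update("Author", "STRING_VALUE", 8))
```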

Prerequisites

This post requires you to complete the following prerequisites: set up your environment, upload the example dataset, and create an index.

Setting up your environment

Ensure you have the AWS CLI installed. Open a terminal window and create a new working directory. From that directory, download the following files:

  • The sample dataset, available from: s3://aws-ml-blog/artifacts/kendra-relevance-tuning/ml-blogs.tar.gz
  • The Python script to create your index, available from: s3://aws-ml-blog/artifacts/kendra-relevance-tuning/create-index.py
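If you prefer scripting the download, the following is a minimal boto3 sketch. It assumes the aws-ml-blog bucket is publicly readable, so requests are sent unsigned and no credentials are needed.

```python
def parse_s3_uri(uri: str) -> tuple[str, str]:
    """Split 's3://bucket/key' into (bucket, key)."""
    bucket, _, key = uri.removeprefix("s3://").partition("/")
    return bucket, key

def download(uri: str, dest: str) -> None:
    """Download one object from a public bucket without credentials."""
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    bucket, key = parse_s3_uri(uri)
    s3.download_file(bucket, key, dest)

# download("s3://aws-ml-blog/artifacts/kendra-relevance-tuning/ml-blogs.tar.gz",
#          "ml-blogs.tar.gz")
```

Equivalently, `aws s3 cp s3://aws-ml-blog/artifacts/kendra-relevance-tuning/ml-blogs.tar.gz .` works directly from the terminal.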

The following screenshot shows how to download the dataset and the Python script.

Uploading the dataset

For this use case, we use a dataset that is a selection of posts from the AWS Machine Learning Blog. If you want to use your own dataset, make sure you have a variety of metadata. You should ideally have varying string fields and date fields. In the example dataset, the different fields include:

  • Author name – Author of the post
  • Content type – Blog posts and whitepapers
  • Topic and subtopic – The main topic is Machine Learning and subtopics include Computer Vision and ML at the Edge
  • Content language – English, Japanese, and French
  • Number of citations in scientific journals – These are randomly fabricated numbers for this post

To get started, create two Amazon Simple Storage Service (Amazon S3) buckets. Make sure to create them in the same Region as your index. Our index has two data sources.

Within the ml-blogs.tar.gz tarball there are two directories. Extract the tarball and sync the contents of the first directory, ‘bucket1’ to your first S3 bucket. Then sync the contents of the second directory, ‘bucket2’, to your second S3 bucket.
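The extract-and-sync step can also be scripted. This sketch mirrors what `aws s3 sync` does for a single directory; the bucket names are placeholders for the two buckets you created.

```python
import pathlib
import tarfile

def extract(tarball: str, dest: str) -> None:
    """Unpack the ml-blogs.tar.gz tarball into a destination directory."""
    with tarfile.open(tarball, "r:gz") as tar:
        tar.extractall(dest)

def sync_dir(local_dir: str, bucket: str) -> None:
    """Upload every file under local_dir to the bucket, mirroring the layout."""
    import boto3
    s3 = boto3.client("s3")
    root = pathlib.Path(local_dir)
    for path in root.rglob("*"):
        if path.is_file():
            s3.upload_file(str(path), bucket, str(path.relative_to(root)))

# extract("ml-blogs.tar.gz", "ml-blogs")
# sync_dir("ml-blogs/bucket1", "your-first-bucket")
# sync_dir("ml-blogs/bucket2", "your-second-bucket")
```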

The following screenshot shows how to download the dataset and upload it to your S3 buckets.

Creating the index

Using your preferred code editor, open the Python script ‘create-index.py’ that you downloaded previously. You will need to set your bucket name variables to the names of the Amazon S3 buckets you created earlier. Make sure you uncomment those lines.

Once this is done, run the script by typing python create-index.py. This does the following:

  • Creates an AWS Identity and Access Management (IAM) role to allow your Amazon Kendra index to read data from Amazon S3 and write logs to Amazon CloudWatch Logs
  • Creates an Amazon Kendra index
  • Adds two Amazon S3 data sources to your index
  • Adds new facets to your index, which allows you to search based on the different fields in the dataset
  • Initiates a data source sync job
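If you want to see the moving parts without opening the script, the Amazon Kendra side of those steps looks roughly like the following. Names are illustrative; the real script also creates the IAM role and adds the custom facets.

```python
def s3_source_config(bucket: str) -> dict:
    """Configuration block for an S3 data source."""
    return {"S3Configuration": {"BucketName": bucket}}

def create_resources(role_arn: str, buckets: list[str]):
    """Create an index, attach one S3 data source per bucket, and start syncs."""
    import boto3
    kendra = boto3.client("kendra")
    index_id = kendra.create_index(Name="ml-blog-index",
                                   Edition="DEVELOPER_EDITION",
                                   RoleArn=role_arn)["Id"]
    # In practice, wait for the index to reach ACTIVE before this point
    source_ids = [
        kendra.create_data_source(
            IndexId=index_id, Name=bucket, Type="S3", RoleArn=role_arn,
            Configuration=s3_source_config(bucket))["Id"]
        for bucket in buckets
    ]
    for source_id in source_ids:
        kendra.start_data_source_sync_job(Id=source_id, IndexId=index_id)
    return index_id, source_ids
```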

Working with relevance tuning

Now that our data is properly indexed and our metadata is facetable, we can test different settings to understand how relevance tuning affects search results. In the following examples, we will boost based on several different attributes. These include the data source, document type, freshness, and popularity.

Boosting your authoritative data sources

The first kind of tuning we look at is based on data sources. Perhaps you have one data source that is well maintained and curated, and another with information that is less accurate and dated. You want to prioritize the results from the first data source so your users get the most relevant results when they perform searches.

When we created our index, we created two data sources. One contains all our blog posts—this is our primary data source. The other contains only a single file, which we’re treating as our legacy data source.

Our index creation script set the field _data_source_id to be facetable, searchable, and displayable. This is an essential step in boosting particular data sources.

The following screenshot shows the index fields of our Amazon Kendra index.

  1. On the Amazon Kendra search console, search for Textract.

Your results should reference posts about Amazon Textract, a service that can automatically extract text and data from scanned documents.

The following screenshot shows the results of a search for ‘Textract’.

Also in the results should be a file called Test_File.txt. This is a file from our secondary, less well-curated data source. Make a note of where this result appears in your search results. We want to de-prioritize this result and boost the results from our primary source.

  2. Choose Tuning to open the Relevance tuning panel.
  3. Under Text fields, expand data source.
  4. Drag the slider for your first data source to the right to boost the results from this source. For this post, we start by setting it to 8.
  5. Perform another search for Textract.

You should find that the file from the second data source has moved down the search rankings.

  6. Drag the slider all the way to the right, so that the boost is set to 10, and perform the search again.

You should find that the result from the secondary data source has disappeared from the first page of search results.

The following screenshot shows the relevance tuning panel with data source field boost applied to one data source, and the search results excluding the results from our secondary data source.

Although we used this approach with S3 buckets as our data sources, you can use it to prioritize any data source available in Amazon Kendra. You can boost the results from your Amazon S3 data lake and de-prioritize the results from your Microsoft SharePoint system, or vice-versa.
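Through the API, one way to express the same boost (an assumption on our part; the console sliders abstract this detail) is against the reserved `_data_source_id` field, mapping a data source ID to an importance value. The source ID below is a placeholder.

```python
def data_source_boost(primary_source_id: str, importance: int = 8) -> dict:
    """UpdateIndex payload that boosts documents from one data source."""
    return {"Name": "_data_source_id",
            "Type": "STRING_VALUE",
            "Relevance": {"ValueImportanceMap": {primary_source_id: importance}}}

# import boto3
# boto3.client("kendra").update_index(
#     Id="your-index-id",
#     DocumentMetadataConfigurationUpdates=[data_source_boost("your-source-id")])
```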

Boosting certain document types

In this use case, we boost the results of our whitepapers over the results from the AWS Machine Learning Blog. We first establish a baseline search result.

  1. Open the Amazon Kendra search console and search for What is machine learning?

Although the top result is a suggested answer from a whitepaper, the next results are likely from blog posts.

The following screenshot shows the results of a search for ‘What is machine learning?’

How do we influence Amazon Kendra to push whitepapers towards the top of its search results?

First, we want to tune the search results based on the content Type field.

  2. Open the Relevance tuning panel on the Amazon Kendra console.
  3. Under Custom fields, expand Type.
  4. Drag the Type field boost slider all the way to the right to set the relevancy of this field to 10.

We also want to boost the importance of a particular Type value, namely Whitepapers.

  5. Expand Advanced boosting and choose Add value.
  6. Whitepapers are indicated in our metadata by the field “Type”: “Whitepaper”, so enter a value of Whitepaper and set its boost to 10.
  7. Choose Save.
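The same configuration, expressed as an `UpdateIndex` payload (a sketch; it assumes the custom field is named Type, as in this dataset):

```python
TYPE_BOOST = {
    "Name": "Type",
    "Type": "STRING_VALUE",
    "Relevance": {
        "Importance": 10,                          # the field boost slider
        "ValueImportanceMap": {"Whitepaper": 10},  # the Advanced boosting value
    },
}

# import boto3
# boto3.client("kendra").update_index(
#     Id="your-index-id", DocumentMetadataConfigurationUpdates=[TYPE_BOOST])
```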

The following screenshot shows the relevance tuning panel with type field boost applied to the ‘Whitepaper’ document type.

Wait for up to 10 seconds before you rerun your search. The top results should all be whitepapers, and blog post results should appear further down the list.

The following screenshot shows the results of a search for ‘What is machine learning?’ with type field boost applied.

  8. Return the Type field boost settings to their default values.

Boosting based on document freshness

You might have a large archive of documents spanning multiple decades, but the more recent answers are more useful. For example, if your users ask, “Where is the IT helpdesk?” you want to make sure they’re given the most up-to-date answer. To achieve this, you can give a freshness boost based on date attributes.

In this use case, we boost the search results to include more recent posts.

  1. On the Amazon Kendra search console, search for medical.

The first result is De-identify medical images with the help of Amazon Comprehend Medical and Amazon Rekognition, published March 19, 2019.

The following screenshot shows the results of a search for ‘medical’.

  2. Open the Relevance tuning panel again.
  3. On the Date tab, open Custom fields.
  4. Adjust the Freshness boost of PublishDate to 10.
  5. Search again for medical.

This time the first result is Enhancing speech-to-text accuracy of COVID-19-related terms with Amazon Transcribe Medical, published May 15, 2020.

The following screenshot shows the results of a search for ‘medical’ with freshness boost applied.

You can also expand Advanced boosting to boost results from a particular period of time. For example, if you release quarterly business results, you might want to set the sensitivity range to the last 3 months. This boosts documents released in the last quarter so users are more likely to find them.
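In API terms, the freshness slider and the sensitivity range correspond to the `Freshness` and `Duration` keys of the date field's relevance settings (a sketch; the field name matches this dataset's metadata):

```python
FRESHNESS_BOOST = {
    "Name": "PublishDate",
    "Type": "DATE_VALUE",
    "Relevance": {
        "Freshness": True,
        "Importance": 10,
        "Duration": f"{90 * 24 * 3600}s",  # ~3-month sensitivity range, in seconds
    },
}
```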

The following screenshot shows the section of the relevance tuning panel related to freshness boost, showing the Sensitivity slider to capture range of sensitivity.

Boosting based on document popularity

The final scenario is tuning based on numerical values. In this use case, we assigned a random number to each post to represent the number of citations they received in scientific journals. (It’s important to reiterate that these are just random numbers, not actual citation numbers!) We want the most frequently cited posts to surface.

  1. Run a search for keras, which is the name of a popular library for ML.

You might see a suggested answer from Amazon Kendra. Make a note of the top results and their synthetic citation numbers so you can compare them after tuning.

  2. On the Relevance tuning panel, on the Numeric tab, pull the slider for Citations all the way to 10.
  3. Select Ascending to boost the results that have more citations.

The following screenshot shows the relevance tuning panel with numeric boost applied to the Citations custom field.

  4. Search for keras again and see which results appear.

Note which results now appear at the top of the list.

Amazon Kendra prioritized the results with more citations.
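The numeric boost corresponds to a `RankOrder` setting on the field (a sketch; Citations is the custom field from this dataset):

```python
CITATIONS_BOOST = {
    "Name": "Citations",
    "Type": "LONG_VALUE",
    "Relevance": {
        "Importance": 10,
        "RankOrder": "ASCENDING",  # higher citation counts rank higher
    },
}
```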

Conclusion

This post demonstrated how to use relevance tuning to adjust your users’ Amazon Kendra search results. We used a small and somewhat synthetic dataset to give you an idea of how relevance tuning works. Real datasets have a lot more complexity, so it’s important to work with your users to understand which types of search results they want prioritized. With relevance tuning, you can get the most value out of enterprise search with Amazon Kendra! For more information about Amazon Kendra, see AWS re:Invent 2019 – Keynote with Andy Jassy on YouTube, Amazon Kendra FAQs, and What is Amazon Kendra?

Thanks to Tapodipta Ghosh for providing the sample dataset and technical review. This post couldn’t have been written without his assistance.


About the Author

James Kingsmill is a Solution Architect in the Australian Public Sector team. He has a longstanding interest in helping public sector customers achieve their transformation, automation, and security goals. In his spare time, you can find him canyoning in the Blue Mountains near Sydney.

 

Using A/B testing to measure the efficacy of recommendations generated by Amazon Personalize

Machine learning (ML)-based recommender systems aren’t a new concept, but developing such a system can be a resource-intensive task—from data management during training and inference, to managing scalable real-time ML-based API endpoints. Amazon Personalize allows you to easily add sophisticated personalization capabilities to your applications by using the same ML technology used on Amazon.com for over 20 years. No ML experience required. Customers in industries such as retail, media and entertainment, gaming, travel and hospitality, and others use Amazon Personalize to provide personalized content recommendations to their users. With Amazon Personalize, you can solve the most common use cases: providing users with personalized item recommendations, surfacing similar items, and personalized re-ranking of items.

Amazon Personalize automatically trains ML models from your user-item interactions and provides an API to retrieve personalized recommendations for any user. A frequently asked question is, “How do I compare the performance of recommendations generated by Amazon Personalize to my existing recommendation system?” In this post, we discuss how to perform A/B tests with Amazon Personalize, a common technique for comparing the efficacy of different recommendation strategies.

You can quickly create a real-time recommender system on the AWS Management Console or the Amazon Personalize API by following these simple steps:

  1. Import your historical user-item interaction data.
  2. Based on your use case, start a training job using an Amazon Personalize ML algorithm (also known as a recipe).
  3. Deploy an Amazon Personalize-managed, real-time recommendations endpoint (also known as a campaign).
  4. Record new user-item interactions in real time by streaming events to an event tracker attached to your Amazon Personalize deployment.
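Once a campaign is deployed, steps 3 and 4 come down to two runtime API calls. The sketch below uses placeholder ARNs and a hypothetical "click" event type.

```python
import json
import time

def click_event(item_id: str) -> dict:
    """Build one interaction record for the event tracker (step 4)."""
    return {"eventType": "click",
            "sentAt": time.time(),
            "properties": json.dumps({"itemId": item_id})}

def get_user_recommendations(campaign_arn: str, user_id: str) -> list[str]:
    """Fetch real-time recommendations from a campaign endpoint (step 3)."""
    import boto3
    resp = boto3.client("personalize-runtime").get_recommendations(
        campaignArn=campaign_arn, userId=user_id, numResults=10)
    return [item["itemId"] for item in resp["itemList"]]

def record_click(tracking_id: str, user_id: str, session_id: str, item_id: str):
    """Stream an interaction to the event tracker so models stay current."""
    import boto3
    boto3.client("personalize-events").put_events(
        trackingId=tracking_id, userId=user_id, sessionId=session_id,
        eventList=[click_event(item_id)])
```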

The following diagram represents which tasks Amazon Personalize manages.


Metrics overview

You can measure the performance of ML recommender systems through offline and online metrics. Offline metrics allow you to view the effects of modifying hyperparameters and algorithms used to train your models, calculated against historical data. Online metrics are the empirical results observed in your users' interactions with real-time recommendations provided in a live environment.

Amazon Personalize generates offline metrics using test datasets derived from the historical data you provide. These metrics showcase how the model recommendations performed against historical data. The following diagram illustrates a simple example of how Amazon Personalize splits your data at training time.


Consider a training dataset containing 10 users with 10 interactions per user; interactions are represented by circles and ordered from oldest to newest based on their timestamp. In this example, Amazon Personalize uses 90% of the users’ interactions data (blue circles) to train your model, and the remaining 10% for evaluation. For each of the users in the evaluation data subset, 90% of their interaction data (green circles) is used as input for the call to the trained model, and the remaining 10% of their data (orange circle) is compared to the output produced by the model to validate its recommendations. The results are displayed to you as evaluation metrics.
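The split described above can be sketched in a few lines. The 90/10 fractions and the oldest-to-newest ordering follow the example; the data layout is hypothetical, not Amazon Personalize's internal representation.

```python
def personalize_style_split(users, eval_frac=0.1, holdout_frac=0.1):
    """users: list of (user_id, interactions), each interaction list sorted
    oldest to newest. Returns the training users plus, for each evaluation
    user, a (model_input, held_out) pair of interaction lists."""
    n_eval = max(1, int(len(users) * eval_frac))
    train_users, eval_users = users[:-n_eval], users[-n_eval:]
    holdouts = {}
    for user_id, interactions in eval_users:
        # The newest slice of each evaluation user's history is held out
        k = max(1, int(len(interactions) * holdout_frac))
        holdouts[user_id] = (interactions[:-k], interactions[-k:])
    return train_users, holdouts
```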

Amazon Personalize produces the following metrics:

  • Coverage – This metric is appropriate if you’re looking for what percentage of your inventory is recommended
  • Mean reciprocal rank (at 25) – This metric is appropriate if you’re interested in the single highest ranked recommendation
  • Normalized discounted cumulative gain (at K) – The discounted cumulative gain is a measure of ranking quality; it refers to how well the recommendations are ordered
  • Precision (at K) – This metric is appropriate if you’re interested in how a carousel of size K may perform in front of your users

For more information about how Amazon Personalize calculates these metrics, see Evaluating a Solution Version.
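To make two of these concrete, here is how precision at K and reciprocal rank are typically computed against the held-out interactions; this illustrates the standard definitions, not Amazon Personalize's internal code.

```python
def precision_at_k(recommended, held_out, k):
    """Fraction of the top-k recommendations that appear in the held-out set."""
    hits = sum(1 for item in recommended[:k] if item in held_out)
    return hits / k

def reciprocal_rank(recommended, held_out, k=25):
    """1/rank of the first relevant recommendation within the top k, else 0."""
    for rank, item in enumerate(recommended[:k], start=1):
        if item in held_out:
            return 1 / rank
    return 0.0
```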

Offline metrics are a great representation of how your hyperparameters and data features influence your model’s performance against historical data. To find empirical evidence of the impact of Amazon Personalize recommendations on your business metrics, such as click-through rate, conversion rate, or revenue, you should test these recommendations in a live environment, getting them in front of your customers. This exercise is important because a seemingly small improvement in these business metrics can translate into a significant increase in your customer engagement, satisfaction, and business outputs, such as revenue.

The following sections include an experimentation methodology and reference architecture in which you can identify the steps required to expose multiple recommendation strategies (for example, Amazon Personalize vs. an existing recommender system) to your users in a randomized fashion and measure the difference in performance in a scientifically sound manner (A/B testing).

Experimentation methodology

The data collected across experiments enables you to measure the efficacy of Amazon Personalize recommendations in terms of business metrics. The following diagram illustrates the experimentation methodology we suggest adhering to.


The process consists of five steps:

  • Research – The formulation of your questions and definition of the metrics to improve are solely based on the data you gather before starting your experiment. For example, after exploring your historical data, you might be interested in why you experience shopping cart abandonment or high bounce rates from leads generated by advertising.
  • Develop a hypothesis – You use the data gathered during the research phase to make observations and develop a cause-and-effect statement. The hypothesis must be quantifiable. For example: providing user personalization through an Amazon Personalize campaign on the shopping cart page will increase the average cart value by 10%.
  • Create variations based on the hypothesis – The variations of your experiment are based on the hypothesized behavior you’re evaluating. A newly created Amazon Personalize campaign can be considered the variation of your experiment when compared against an existing rule-based recommendation system.
  • Run an experiment – You can use several techniques to test your recommendation system; this post focuses on A/B testing. The metrics data gathered during the experiment helps validate (or invalidate) the hypothesis. For example, you might compare the average cart value over 1 month after adding Amazon Personalize recommendations to the shopping cart page against the average cart value with the current system's recommendations, looking for a 10% increase.
  • Measure the results – In this step, you determine if there is statistical significance to draw a conclusion and select the best performing variation. Was the increase in your cart average value a result of the randomness of your user testing set, or did the newly created Amazon Personalize campaign influence this increase?

A/B testing your Amazon Personalize deployment

The following architecture showcases a microservices-based implementation of an A/B test between two Amazon Personalize campaigns. One is trained with one of the recommendation recipes provided by Amazon Personalize, and the other is trained with a variation of this recipe. Amazon Personalize provides various predefined ML algorithms (recipes); HRNN-based recipes enable you to provide personalized user recommendations.


This architecture compares two Amazon Personalize campaigns. You can apply the same logic when comparing an Amazon Personalize campaign against a custom rule-based or ML-based recommender system. For more information about campaigns, see Creating a Campaign.

The basic workflow of this architecture is as follows:

  1. The web application requests customer recommendations from the recommendations microservice.
  2. The microservice determines if there is an active A/B test. For this post, we assume your testing strategy settings are stored in Amazon DynamoDB.
  3. When the microservice identifies the group your customer belongs to, it resolves the Amazon Personalize campaign endpoint to query for recommendations.
  4. Amazon Personalize campaigns provide the recommendations for your users.
  5. The users interact with their respective group recommendations.
  6. The web application streams user interaction events to Amazon Kinesis Data Streams.
  7. The microservice consumes the Kinesis stream, which sends the user interaction event to both Amazon Personalize event trackers. Recording events is an Amazon Personalize feature that collects real-time user interaction data and provides relevant recommendations in real time.
  8. Amazon Kinesis Data Firehose ingests your user-item interactions stream and stores the interactions data in Amazon Simple Storage Service (Amazon S3) to use in future trainings.
  9. The microservice keeps track of your pre-defined business metrics throughout the experiment.

For instructions on running an A/B test, see the Retail Demo Store Experimentation Workshop section in the GitHub repo.

Tracking well-defined business metrics is a critical task during A/B testing; seeing improvements on these metrics is the true indicator of the efficacy of your Amazon Personalize recommendations. The metrics measured throughout your A/B tests need to be consistent across your variations (groups A and B). For example, an ecommerce site can evaluate the change (increase or decrease) on the click-through rate of a particular widget after adding Amazon Personalize recommendations (group A) compared to the click-through rate obtained using the current rule-based recommendations (group B).

An A/B experiment runs for a defined period, typically dictated by the number of users necessary to reach a statistically significant result. Tools such as Optimizely, AB Tasty, and Evan Miller’s Awesome A/B Tools can help you determine how large your sample size needs to be. A/B tests are usually active across multiple days or even weeks, in order to collect a large enough sample from your userbase. The following graph showcases the feedback loop between testing, adjusting your model, and rolling out new features on success.


For an A/B test to be considered successful, you need to perform a statistical analysis of the data gathered from your population to determine if there is a statistically significant result. This analysis is based on the significance level you set for the experiment; a 5% significance level is considered the industry standard. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. A lower significance level means that we need stronger evidence for a statistically significant result. For more information about statistical significance, see A Refresher on Statistical Significance.

The next step is to calculate the p-value: the probability of observing a difference at least as large as the one you measured, assuming the null hypothesis (no real difference between variations) is true. For example, imagine we ran an A/A test in which we displayed the same variation to two groups of users. After such an experiment, we would expect the metrics across groups to be similar but not identical, yielding a p-value greater than the significance level. In an A/B test, therefore, we hope to see a p-value that is less than our significance level, so we can conclude that the change in the business metric was caused by the variation rather than by chance. AWS Partners such as Amplitude or Optimizely provide A/B testing tools to facilitate the setup and analysis of your experiments.
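As an illustration, the p-value for comparing two conversion rates can be computed with a standard two-proportion z-test (one common choice; your A/B testing tool may use a different test):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates,
    using the pooled two-proportion z-test. conv_* are conversion counts,
    n_* are group sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```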

A/B tests are statistical measures of the efficacy of your Amazon Personalize recommendations, allowing you to quantify the impact these recommendations have on your business metrics. Additionally, A/B tests allow you to gather organic user-item interactions that you can use to train subsequent Amazon Personalize implementations. We recommend spending less time on offline tests and getting your Amazon Personalize recommendations in front of your users as quickly as possible. This helps eliminate biases from existing recommender systems in your training dataset, which allows your Amazon Personalize deployments to learn from organic user-item interaction data.

Conclusion

Amazon Personalize is an easy-to-use, highly scalable solution that can help you solve some of the most popular recommendation use cases:

  • Personalized recommendations
  • Similar items recommendations
  • Personalized re-ranking of items

A/B testing provides invaluable information on how your customers interact with your Amazon Personalize recommendations. These results, measured according to well-defined business metrics, will give you a sense of the efficacy of these recommendations along with clues on how to further adjust your training datasets. After you iterate through this process multiple times, you will see an improvement on the metrics that matter most to improve customer engagement.

If this post helps you or inspires you to use A/B testing to improve your business metrics, please share your thoughts in the comments.

Additional resources

For more information, see the Amazon Personalize Developer Guide.


About the Author

Luis Lopez Soria is an AI/ML specialist solutions architect working with the AWS machine learning team. He works with AWS customers to help them adopt machine learning on a large scale. He enjoys playing sports, traveling around the world, and exploring new foods and cultures.

 

 

 

The fastest driver in Formula 1

This blog post was co-authored, and includes an introduction, by Rob Smedley, Director of Data Systems at Formula 1

Formula 1 (F1) racing is the most complex sport in the world. It is the blended perfection of human and machine that creates the winning formula. It is this blend that makes F1 racing, or more pertinently, the driver talent, so difficult to understand. How many races or Championships would Michael Schumacher really have won without the power of Benetton and later, Ferrari, and the collective technical genius that were behind those teams? Could we really have seen Lewis Hamilton win six World Championships if his career had taken a different turn and he was confined to back-of-the-grid machinery? Maybe these aren’t the best examples because they are two of the best drivers the world has ever seen. There are many examples, however, of drivers whose real talent has remained fairly well hidden throughout their career. Those that never got that “right place, right time” break into a winning car and, therefore, those that will be forever remembered as a midfield driver.

The latest F1 Insight powered by AWS is designed to build mathematical models and algorithms that can help us answer the perennial question: who is the fastest driver of all time? F1 and AWS scientists have spent almost a year building these models and algorithms to bring us that very insight. The output focuses solely on one element of a driver’s vast armory—the pure speed that is most evident on a Saturday afternoon during the qualifying hour. It doesn’t focus on racecraft or the ability to win races or drive at 200 mph while still having the bandwidth to understand everything going on around you (displayed so well by the likes of Michael Schumacher or Fernando Alonso). This ability, which transcends speed alone, allowed them both, on many an occasion, to operate as master tacticians. For someone like myself, who has had the honor of watching those very skills in action from the pitwall, I cannot emphasize enough how important those skills are—they are the difference between the good and the great. It is important to point out that these skills are not included in this insight. This is about raw speed only and the ability to push the car to its very limits over one lap.

The output and the list of the fastest drivers of all time (based on the F1 Historic Data Repository information spanning from 1983 to present day) offers some great names indeed. Of course, there are the obvious ones that rank highly—Ayrton Senna, Michael Schumacher, Lewis Hamilton, all of whom emerge as the top five fastest drivers. However, there are some names that many may not think of as top 20 drivers on first glance. A great example I would cite is Heikki Kovalainen. Is that the Kovalainen that finished his career circling round at the back of the Grand Prix field in Caterham, I hear you ask? Yes in fact, it’s the very same. For those of us who watched Kovalainen throughout his F1 career, it comes as little surprise that he is so high up the list when we consider pure speed. Look at his years on the McLaren team against Lewis Hamilton. The qualifying speaks volumes, with the median difference of just 0.1 seconds per lap. Ask Kovalainen himself and he’ll tell you that he didn’t perform at the same level as Hamilton in the races for many reasons (this is a tough business, believe me). But in qualifying, his statistics speak for themselves—the model has ranked him so highly because of his consistent qualifying performances throughout his career. I, for one, am extremely happy to see Kovalainen get the data-driven recognition that he deserves for that raw talent that was always on display during qualifying. There are others in the list, too, and hopefully some of these are your favorites—drivers that you have been banging the drum about for the last 10, 20, 40 years; the ones that might never have gotten every break, but you were able to see just how talented they were.

— Rob Smedley


Fastest Driver

As part of F1’s 70th anniversary celebrations and to help fans better understand who are the fastest drivers in the sport’s history, F1 and the Amazon Machine Learning Solutions Lab teamed up to develop Fastest Driver, the latest F1 Insight powered by AWS.

Fastest Driver uses AWS machine learning (ML) to rank drivers using their qualifying sessions lap times from F1’s Historic Data Repository going back to 1983. In this post, we demonstrate how by using Amazon SageMaker, a fully managed service to build, train, and deploy ML models, the Fastest Driver insight can objectively determine the fastest drivers in F1.

Quantifying driver pace using qualifying data

We define pace as a driver’s lap time during qualifying sessions. Driver race performance depends on a large number of factors, such as weather conditions, car setup (such as tires), and race track. F1 qualifying sessions are split into three sessions: the first session eliminates cars that set a lap time in 16th position or lower, the second eliminates positions 11–15, and the final part determines the grid position of 1st (pole position) to 10th. We use all qualification data from the driver qualifying sessions to construct Fastest Driver.

Lap times from qualifying sessions are normalized to adjust for differences in race tracks, which enables us to pool lap times across different tracks. This normalization process equalizes driver lap time differences, helping us compare drivers across race tracks and eliminating the need to construct track-specific models to account for track alterations over time. Another important technique is that we compare qualifying data for drivers on the same race team (such as Aston Martin Red Bull Racing), where teammates have competed against each other in a minimum of five qualifying sessions. By holding the team constant, we get a direct performance comparison under the same race conditions while controlling for car effects.
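The post doesn't spell out the exact normalization formula, but a minimal sketch of the idea, assuming a simple "gap to the track's fastest lap" normalization and illustrative column names, looks like this:

```python
import pandas as pd

# Hypothetical qualifying laps (column names are illustrative)
laps = pd.DataFrame({
    "track":    ["monza", "monza", "monaco", "monaco"],
    "driver":   ["A", "B", "A", "B"],
    "lap_time": [80.0, 80.5, 71.0, 71.2],
})

# Express each lap as a gap to the fastest lap at that track, so lap times
# can be pooled and compared across circuits.
laps["gap_to_best"] = laps["lap_time"] - laps.groupby("track")["lap_time"].transform("min")
print(laps)
```

Because each gap is relative to its own track, driver A's 0.5-second and 0.2-second advantages over driver B can now be pooled across circuits.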

Differences in race conditions (such as wet weather) and rule changes lead to significant variations in driver performance. We identify and remove anomalous lap time outliers by using deviations from median lap times between teammates, with a 2-second threshold. For example, let’s compare Daniel Ricciardo with Sebastian Vettel when they raced together for Red Bull in 2014. During that season, Ricciardo was, on average, 0.2 seconds faster than Vettel. However, the average lap time difference between Ricciardo and Vettel falls to 0.1 seconds if we exclude the 2014 US Grand Prix (GP), where Ricciardo was more than 2 seconds faster than Vettel on account of Vettel being penalized to comply with the 107% rule (which forced him to start from the pit lane).
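As a sketch (with made-up numbers, not the actual F1 data), the median-based outlier filter described above can be expressed in a few lines:

```python
import pandas as pd

# Hypothetical per-session lap time gaps between two teammates, in seconds;
# the last session is anomalous (e.g., a penalty or crash).
deltas = pd.Series([0.2, 0.1, -0.1, 0.3, -2.6])

# Drop sessions whose gap deviates from the median by more than 2 seconds.
median = deltas.median()
clean = deltas[(deltas - median).abs() <= 2.0]

print(clean.mean())  # average gap with the outlier excluded
```

In the Ricciardo/Vettel example, this is the filter that excludes the 2014 US GP from their average gap.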

Constructing Fastest Driver

Building a performant ML model starts with good data. Following the driver qualification data aggregation process, we construct a network of teammate comparisons over the years, with the goal of comparing drivers across all teams, seasons, and circuits. For example, Sebastian Vettel and Max Verstappen have never been on the same team, so we compare them through their respective connections with Daniel Ricciardo at Red Bull. Ricciardo was, on average, 0.18 seconds slower than Verstappen during the 2016–2018 seasons while they were at Red Bull. We remove outlier sessions, such as the 2018 Bahrain GP, where Ricciardo was quicker than Verstappen by a large margin because Verstappen didn’t get past Q1 due to a crash. If each qualifying session is assumed to be equally important, a subset of our driver network including only Ricciardo, Vettel, and Verstappen yields Verstappen as the fastest driver: Verstappen was 0.18 seconds faster than Ricciardo, and Ricciardo 0.1 seconds faster than Vettel.
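One way to picture the teammate network is as a signed graph whose edge weights are average lap time gaps; chaining gaps along a path gives an indirect comparison between drivers who never shared a team. The following sketch (using the figures quoted above, and following a single path only, whereas the actual model weighs all paths jointly) illustrates the idea:

```python
from collections import defaultdict, deque

# (faster_driver, slower_driver, average gap in seconds), from the examples above
edges = [
    ("Verstappen", "Ricciardo", 0.18),
    ("Ricciardo", "Vettel", 0.10),
]

# Signed graph: the gap from a to b is how much faster a is than b
graph = defaultdict(list)
for fast, slow, gap in edges:
    graph[fast].append((slow, gap))
    graph[slow].append((fast, -gap))

def gap_between(a, b):
    """Chain teammate gaps along a path from a to b (breadth-first search)."""
    seen = {a: 0.0}
    queue = deque([a])
    while queue:
        node = queue.popleft()
        if node == b:
            return seen[node]
        for nxt, gap in graph[node]:
            if nxt not in seen:
                seen[nxt] = seen[node] + gap
                queue.append(nxt)
    return None

print(gap_between("Verstappen", "Vettel"))  # 0.18 + 0.10 = ~0.28 seconds
```

Massey's method, described below, effectively solves this comparison problem for all drivers and all paths at once.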

Using the full driver network, we can compare all driver pairings to determine the faster racers. Going back to Heikki Kovalainen, let’s look at his years at McLaren against Lewis Hamilton. The qualifying speaks volumes, with a median difference of just 0.1 seconds per lap. Kovalainen doesn’t have the same number of World Championships as Hamilton, but his qualifying statistics speak for themselves—the model has ranked him highly because of his consistent qualifying performance throughout his career.

An algorithm called Massey’s method (a form of linear regression) is one of the core models behind the insight. Fastest Driver uses Massey’s method to rank drivers by solving a set of linear equations, where each driver’s rating is calculated as their average lap time difference against teammates. Additionally, when comparing teammates’ ratings, the model uses features like a driver’s strength of schedule, normalized by the number of interactions with that driver. Overall, the model assigns high rankings to drivers who perform extraordinarily well against their teammates or perform well against strong opponents.

Our goal is to assign each driver a numeric rating that captures the driver’s competitive advantage relative to other drivers, assuming the expected margin of lap time difference in any race is proportional to the difference in the drivers’ true intrinsic ratings. For the more mathematically inclined reader: let x_j represent each of the drivers and r_j represent the true intrinsic driver ratings. For every race, we can predict the margin of the lap time advantage or disadvantage (y_i) between any pair of two drivers as:

y_i = Σ_j x_ij r_j + e_i

In this equation, x_ij is +1 for the winner and -1 for the loser, and e_i is the error term due to unexplained variations. For a given set of m session observations and n drivers, we can formulate an (m × n) system of linear equations:

X r = y

Driver ratings (r) are the solution to the normal equations via linear regression:

r = (X^T X)^(-1) X^T y

The following example code demonstrates the training process, implementing Massey’s method and calculating driver rankings using Amazon SageMaker:

import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Example data comparing five drivers (driver IDs 0-4):
# each row is one qualifying session between two teammates.
data = pd.DataFrame([[1, 0, 88, 90],
                     [2, 1, 87, 88],
                     [2, 0, 87, 90],
                     [3, 4, 88, 92],
                     [3, 1, 88, 90],
                     [1, 4, 90, 92]],
                    columns=['Driver1', 'Driver2', 'Driver1_laptime', 'Driver2_laptime'])

def init_linear_regressor_matrix(data, num_of_drivers, col_to_rank):
    """Initialize the linear system matrix for the regression."""
    wins = np.zeros((data.shape[0], num_of_drivers))
    score_diff = np.zeros(data.shape[0])

    for index, row in data.iterrows():
        idx1 = row["Driver1"]
        idx2 = row["Driver2"]
        # The faster driver (smaller lap time) gets +1, the slower -1;
        # score_diff holds the (positive) lap time gap for that session.
        if row['Driver1_laptime'] - row['Driver2_laptime'] > 0:
            wins[index][idx1] = -1
            wins[index][idx2] = 1
            score_diff[index] = row['Driver1_laptime'] - row['Driver2_laptime']
        else:
            wins[index][idx1] = 1
            wins[index][idx2] = -1
            score_diff[index] = row['Driver2_laptime'] - row['Driver1_laptime']
    wins_df = pd.DataFrame(wins)
    wins_df[col_to_rank] = score_diff
    return wins_df

def massey(data, num_of_drivers, col_to_rank='delta'):
    """Compute each driver's adjacency matrix and aggregated scores, as input to the Massey model."""

    wins_df = init_linear_regressor_matrix(data, num_of_drivers, col_to_rank)
    model = sm.OLS(
        wins_df[col_to_rank], wins_df.drop(columns=[col_to_rank])
    )
    results = model.fit(cov_type='HC1')
    rankings = pd.DataFrame(results.params)
    rankings['std'] = np.sqrt(np.diag(results.cov_params()))
    rankings['consistency'] = (norm.ppf(0.9)-norm.ppf(0.1))*rankings['std']
    rankings = (
        rankings
        .sort_values(by=0, ascending=False)
        .reset_index()
        .rename(columns={"index": "Driver", 0: "massey"})
    )
    rankings = rankings.sort_values(by=["massey"], ascending=False)
    # Express each rating as a gap to the best driver (0 = fastest).
    rankings["massey_new"] = rankings["massey"].max() - rankings["massey"]
    return rankings[['Driver', 'massey_new']]

rankings = massey(data, 5)
print(rankings)

The kings of the asphalt

Topping our list of the fastest drivers are the esteemed Ayrton Senna, Michael Schumacher, Lewis Hamilton, Max Verstappen, and Fernando Alonso. This is delivered through the Fastest Driver insight, which produces a ranking based on pure speed (qualifying times) of all drivers from 1983 to the present day, as a dataset with the columns Driver, Rank (integer), and Gap to Best (milliseconds).

It’s important to note that to quantify a driver’s ability, we need to observe a minimum number of interactions. To factor this in, we only include teammates who have competed against each other in at least five qualifying sessions. A number of parameters and considerations have been put in place to identify conditions that make comparisons unfair, such as crashes, failures, age, career breaks, or weather conditions changing over qualifying sessions.

Furthermore, we noticed that if a driver re-joined F1 following a break of three years or more (such as Michael Schumacher in 2010, Pedro de la Rosa in 2010, Narain Karthikeyan in 2011, and Robert Kubica in 2019), this adds a 0.1-second adjustment to the driver’s relative pace. A related effect appears when drivers have a large age gap with their teammates, such as Mark Webber vs. Sebastian Vettel in 2013, Felipe Massa vs. Lance Stroll in 2017, and Kimi Räikkönen vs. Antonio Giovinazzi in 2019. From 1983–2019, we observe that competing against a teammate who is significantly older gives a 0.06-second advantage.
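As an illustration only (the production model's exact adjustments, thresholds, and signs aren't published here), corrections like these could be applied to a raw teammate gap as simple offsets using the 0.1-second and 0.06-second figures quoted above:

```python
# Hypothetical pace adjustments; the constants come from the text, but how
# they are applied in the real model is an assumption.
COMEBACK_ADJ = 0.10  # seconds, for a driver returning after a 3+ year break
AGE_GAP_ADJ = 0.06   # seconds, for competing against a much older teammate

def adjusted_gap(raw_gap, returned_from_break=False, teammate_much_older=False):
    """Correct a raw qualifying gap (seconds) for known confounders."""
    gap = raw_gap
    if returned_from_break:
        gap -= COMEBACK_ADJ
    if teammate_much_older:
        gap -= AGE_GAP_ADJ
    return gap

print(adjusted_gap(0.30, returned_from_break=True))  # ~0.20
```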

These rankings aren’t proposed as definitive, and there will no doubt be disagreement among fans. In fact, we encourage a healthy debate! Fastest Driver presents a scientific approach to driver ranking aimed at objectively assessing a driver’s performance controlling for car difference.

Lightweight and flexible deployment with Amazon SageMaker

To deliver the insights from Fastest Driver, we implemented Massey’s method on a Python web server. One complication was that the qualifying data consumed by the model is updated with fresh lap times after every race weekend. To handle this, in addition to the standard request to the web server for the rankings, we implemented a refresh request that instructs the server to download new qualifying data from Amazon Simple Storage Service (Amazon S3).

We deployed our model web server to an Amazon SageMaker model endpoint. This ensures that our endpoint is highly available, because multi-instance Amazon SageMaker model endpoints are distributed across multiple Availability Zones by default and have automatic scaling capabilities built in. As an additional benefit, the endpoints integrate with other Amazon SageMaker features, such as Amazon SageMaker Model Monitor, which automatically monitors model drift in an endpoint. Using a fully managed service like Amazon SageMaker means our final architecture is very lightweight. To complete the deployment, we added an API layer around our endpoint using Amazon API Gateway and AWS Lambda. The following diagram shows this architecture in action.

The architecture includes the following steps:

  1. The user makes a request to API Gateway.
  2. API Gateway passes the request to a Lambda function.
  3. The Lambda function makes a request to the Amazon SageMaker model endpoint. If the request is for rankings, the endpoint computes the driver rankings using the currently available qualifying data and returns the result. If the request is to refresh, the endpoint downloads the new qualifying data from Amazon S3.
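A minimal sketch of such a Lambda function might look like the following; the endpoint name and payload shape are assumptions, not the production implementation:

```python
import json

def build_payload(event):
    """Build the JSON payload forwarded to the model endpoint; a "refresh"
    action tells the server to re-download qualifying data from Amazon S3."""
    params = event.get("queryStringParameters") or {}
    return json.dumps({"action": params.get("action", "rankings")})

def handler(event, context):
    # boto3 is imported here so the payload logic above stays testable offline
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName="fastest-driver-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=build_payload(event),
    )
    return {"statusCode": 200, "body": response["Body"].read().decode("utf-8")}
```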

Summary

In this post, we described how F1 and the Amazon ML Solutions Lab scientists collaborated to create Fastest Driver, the first objective and data-driven model to determine who might be the fastest driver ever. This collaborative work between F1 and AWS has provided a unique view of one of the sport’s most enduring questions by looking back at its history on its 70th anniversary. Although F1 is the first to employ ML in this way, you can apply the technology to answer complex questions in sports, or even settle age-old disputes with fans of rival teams. This F1 season, fans will have many opportunities to see Fastest Driver in action and launch into their own debates about the sport’s all-time fastest drivers.

Sports leagues around the world are using AWS machine learning technology to transform the fan experience. The Guinness Six Nations Rugby Championship competition and Germany’s Bundesliga use AWS to bring fans closer to the action of the game and deliver deeper insights. In America, the NFL uses AWS to bring advanced stats to fans, players, and the league to improve player health and safety initiatives using AI and ML.

If you’d like help accelerating your use of ML in your products and processes, please contact the Amazon ML Solutions Lab program.


About the Authors

Rob Smedley has over 20 years of experience in the world of motorsport, having spent time at multiple F1 teams including Jordan, as a Race Engineer at Ferrari, and most recently as Head of Vehicle Performance at Williams. He is now Director of Data Systems at Formula 1 and oversees the F1 Insights program from the technical data side.
Colby Wise is a Data Scientist and manager at the Amazon ML Solutions Lab, where he helps AWS customers across numerous industries accelerate their AI and cloud adoption.
Delger Enkhbayar is a data scientist in the Amazon ML Solutions Lab. She has worked on a wide range of deep learning use cases in sports analytics, public sector and healthcare. Her background is in mechanism design and econometrics.
Guang Yang is a data scientist at the Amazon ML Solutions Lab where he works with customers across various verticals and applies creative problem solving to generate value for customers with state-of-the-art ML/AI solutions.
Ryan Cheng is a Deep Learning Architect in the Amazon ML Solutions Lab. He has worked on a wide range of ML use cases from sports analytics to optical character recognition. In his spare time, Ryan enjoys cooking.
George Price is a Deep Learning Architect at the Amazon ML Solutions Lab where he helps build models and architectures for AWS customers. Previously, he was a software engineer working on Amazon Alexa.
Amazon Personalize can now create up to 50% better recommendations for fast changing catalogs of new products and fresh content

Amazon Personalize now makes it easier to create personalized recommendations for fast-changing catalogs of books, movies, music, news articles, and more, improving recommendations by up to 50% (measured by click-through rate) with just a few clicks in the AWS console. Without needing to change any application code, Amazon Personalize enables customers to include completely new products and fresh content in their usual recommendations, so that the best new products and content are discovered, clicked, purchased, or consumed by end users an order of magnitude more quickly than with other recommendation systems.

Many catalogs are fast moving, with new products and fresh content being continuously added, and it is crucial for businesses to help their users discover and engage with them. For example, users on a news website expect to see the latest personalized news, and users consuming media via video-on-demand services expect to be recommended the latest series and episodes they might like. Meeting these expectations by showcasing new products and content keeps the user experience fresh and aids sales, either through direct conversion or through subscriber conversion and retention. However, fast-moving catalogs usually contain far too many new products to make it feasible to showcase each of them to every user. It makes much more sense to personalize the user experience by matching these new products with users based on their interests and preferences. Personalizing new products is inherently hard due to the absence of data about past views, clicks, purchases, and subscriptions for these products. In such a scenario, most recommender systems only make recommendations for products they have sufficient past data about, and ignore products that are new to the catalog.

With today’s launch, Amazon Personalize can help customers create personalized recommendations for new products and fresh content for their users in a matter of a few clicks. Amazon Personalize does this by recommending new products to users who have positively engaged (clicked, purchased, and so on) with similar products in the past. If users positively engage with the recommended new products, Amazon Personalize further recommends them to more users with similar interests. At Amazon, this capability has been in use for many years for creating product recommendations, and has resulted in 21% higher conversions compared to recommendations that do not include new products. This capability is now available in Amazon Personalize at no additional cost as part of its existing deep learning based algorithms, which have been perfected over years of development and use at Amazon. It’s a win-win for customers: they benefit from this new capability at no extra cost, without losing the highly relevant recommendations that they already create through Amazon Personalize.

Amazon Personalize makes it easy for customers to develop applications with a wide array of personalization use cases, including real-time product recommendations and customized direct marketing. Amazon Personalize brings the same machine learning technology used by Amazon.com to everyone for use in their applications, with no machine learning experience required. Amazon Personalize customers pay for what they use, with no minimum fees or upfront commitment. You can start using Amazon Personalize with a simple three-step process, which only takes a few clicks in the AWS console, or a set of simple API calls. First, point Amazon Personalize to user data, catalog data, and an activity stream of views, clicks, purchases, and so on in Amazon S3, or upload them using a simple API call. Second, with a single click in the console or an API call, train a custom private recommendation model on your data (CreateSolution). Third, retrieve personalized recommendations for any user by creating a campaign and using the GetRecommendations API.
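In boto3 terms, steps 2 and 3 boil down to a handful of calls. The following sketch shows the request shapes with placeholder names and ARNs; each create call is asynchronous, so in practice you wait for the resource to become ACTIVE before the next step:

```python
# Placeholder request parameters for the three-step flow; pass them to the
# boto3 clients, e.g. boto3.client("personalize").create_solution(**create_solution_params)

create_solution_params = {
    "name": "my-solution",
    "recipeArn": "arn:aws:personalize:::recipe/aws-user-personalization",
    "datasetGroupArn": "arn:aws:personalize:us-east-1:123456789012:dataset-group/my-dataset-group",
}

create_campaign_params = {
    "name": "my-campaign",
    "solutionVersionArn": "arn:aws:personalize:us-east-1:123456789012:solution/my-solution/version-1",
    "minProvisionedTPS": 1,
}

get_recommendations_params = {  # sent to the personalize-runtime client
    "campaignArn": "arn:aws:personalize:us-east-1:123456789012:campaign/my-campaign",
    "userId": "1",
}
```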

The rest of this post walks you through this process in greater detail and discusses the recommended best practices.

Adding your data to Personalize

For this post, we create a dataset group with an interaction dataset and item dataset (item metadata). For instructions on creating a dataset group, see Getting Started (Console).

Creating an interaction dataset

To create an interaction dataset, use the following schema and import the file bandits-demo-interactions.csv, which is a synthetic movie rating dataset:

{
    "type": "record",
    "name": "Interactions",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "USER_ID",
            "type": "string"
        },
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "EVENT_TYPE",
            "type": "string"
        },
        {
            "name": "EVENT_VALUE",
            "type": ["null","float"]
        },
        {
            "name": "TIMESTAMP",
            "type": "long"
        },
        {
            "name": "IMPRESSION",
            "type": "string"
        }
    ],
    "version": "1.0"
}

You can now optionally add impression information to Amazon Personalize. Impressions are the list of items that were visible to the user when they interacted with a particular item. The following screenshot shows some interactions with impression data.

The impression is represented as an ordered list of item IDs that are pipe separated. The first row of the data in the preceding screenshot shows that when user_id 1 rated item_id 1270, they had items 1270, 1...9 in that order visible in the UX. The contrast between which items were recommended to the user and which they interacted with helps us generate better recommendations.
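Constructing that column is straightforward: the impression is just the visible item IDs, in display order, joined with pipes. For example:

```python
# Ordered list of item IDs visible to the user at interaction time
visible_items = ["1270", "2", "3", "4", "5"]

impression = "|".join(visible_items)
print(impression)  # 1270|2|3|4|5
```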

Amazon Personalize has two modes to input impression information:

  • Explicit impressions – Impressions that you manually record and send to Personalize. The preceding example pertains to explicit impressions.
  • Implicit impressions – The lists of recommended items that users receive from Amazon Personalize, which the service can record automatically

Amazon Personalize now returns a RecommendationID for each set of recommendations from the service. If you don’t change the order or content of the recommendations when generating your user experience, you can reference the impression through the RecommendationID without needing to send a list of ItemIDs (explicit impressions). If you provide both explicit and implicit impressions for an interaction, the explicit impression takes precedence. You can send both implicit and explicit impressions via the putEvents API. See our documentation for more details.
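For example, a putEvents request can carry either kind of impression. The following sketch shows the request shape (tracking ID and item IDs are placeholders):

```python
import time

# Send the item list for an explicit impression, or the RecommendationID
# returned by GetRecommendations for an implicit one.
put_events_params = {
    "trackingId": "your-tracking-id",  # placeholder
    "userId": "1",
    "sessionId": "session-1",
    "eventList": [{
        "eventType": "click",
        "sentAt": int(time.time()),
        "itemId": "1270",
        "impression": ["1270", "2", "3", "4", "5"],  # explicit impression
        # "recommendationId": "RID-...",             # implicit impression
    }],
}
# boto3.client("personalize-events").put_events(**put_events_params)
```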

Creating an item dataset

You follow similar steps to create an item dataset and import your data using bandits-demo-items.csv, which has metadata for each movie. We use the optional reserved keyword CREATION_TIMESTAMP for the item dataset, which helps Amazon Personalize compute the age of an item and adjust recommendations accordingly. When modeling your own data, provide in this field the timestamp when the item first became available to your users. We infer the age of an item from the reference point of the latest interaction timestamp in your dataset.

If you don’t provide the CREATION_TIMESTAMP, the model infers this information from the interaction dataset and uses the timestamp of the item’s earliest interaction as its corresponding release date. If an item doesn’t have an interaction, its release date is set as the timestamp of the latest interaction in the training set and it is considered a new item with age 0.
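That fallback logic can be sketched in a few lines of pandas (toy data; in the actual service this inference happens internally during training):

```python
import pandas as pd

interactions = pd.DataFrame({
    "ITEM_ID":   ["a", "a", "b"],
    "TIMESTAMP": [100, 300, 250],
})
all_items = ["a", "b", "c"]  # item "c" has no interactions

# Release date defaults to an item's earliest interaction; items with no
# interactions get the latest overall timestamp (i.e., age 0).
earliest = interactions.groupby("ITEM_ID")["TIMESTAMP"].min()
latest_overall = interactions["TIMESTAMP"].max()
creation = {item: int(earliest.get(item, latest_overall)) for item in all_items}
print(creation)  # {'a': 100, 'b': 250, 'c': 300}
```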

Our dataset for this post has 1,931 movies, of which 191 have a creation timestamp marked as the latest timestamp in the interaction dataset. These newest 191 items are considered cold items and have a label number higher than 1800 in the dataset. The schema of the item dataset is as follows:

{
    "type": "record",
    "name": "Items",
    "namespace": "com.amazonaws.personalize.schema",
    "fields": [
        {
            "name": "ITEM_ID",
            "type": "string"
        },
        {
            "name": "GENRES",
            "type": ["null","string"],
            "categorical": true
        },
        {
            "name": "TITLE",
            "type": "string"
        },
        {
            "name": "CREATION_TIMESTAMP",
            "type": "long"
        }
    ],
    "version": "1.0"
}

Training a model

After the dataset import jobs are complete, you’re ready to train your model.

  1. On the Solutions tab, choose Create solution.
  2. Choose the new aws-user-personalization recipe.

This new recipe effectively combines deep learning models (RNNs) with bandits to provide more accurate user modeling (high relevance) and effective exploration.

  3. Leave the Solution configuration section at its default values, and choose Next.

  4. On the Create solution version page, choose Finish to start training.

When the training is complete, you can navigate to the Solution Version Overview page to see the offline metrics. In certain situations, you might see a slight drop in accuracy metrics (such as mrr or precision@k) and in coverage compared to models trained on the HRNN-Metadata recipe. This is because recommendations made by the new aws-user-personalization recipe aren’t solely based on exploitation, and the model may sacrifice short-term interest for long-term reward. The offline metrics are computed using the default values of the parameters (explorationWeight, explorationItemAgeCutoff) that control item exploration. You can find more details on these in the following section.

After several rounds of retraining, you should see the accuracy metrics and item coverage increase, and the new aws-user-personalization recipe should outperform the exploitation-based HRNN-Metadata recipe.

Creating a campaign

In Amazon Personalize, you use a campaign to make recommendations for your users. In this step, you create two campaigns using the solution you created in the previous step and demonstrate the impact of different amounts of exploration.

To create a new campaign, complete the following steps:

  1. On the Campaigns tab, choose Create Campaign.
  2. For Campaign name, enter a name.
  3. For Solution, choose user-personalization-solution.
  4. For Solution version ID, choose the solution version that uses the aws-user-personalization recipe.

You now have the option of setting additional configuration for the campaign, which allows you to adjust the exploration Amazon Personalize does for the item recommendations and therefore adjust the results. These settings are only available if you’re creating a campaign whose solution version uses the user-personalization recipe. The configuration options are as follows:

  • explorationWeight – Higher values for explorationWeight signify higher exploration; new items with low impressions are more likely to be recommended. A value of 0 signifies that there is no exploration and results are ranked according to relevance. You can set this parameter in a range of [0,1] and its default value is 0.3.
  • explorationItemAgeCutoff – This is the maximum duration in days relative to the latest interaction(event) timestamp in the training data. For example, if you set explorationItemAgeCutoff to 7, the items with an age over or equal to 7 days aren’t considered cold items and there is no exploration on these items. You may still see some items older than or equal to 7 days in the recommendation list because they’re relevant to the user’s interests and are of good quality even without the help of the exploration. The default value for this parameter is 30, and you can set it to any value over 0.
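If you create the campaign through the API rather than the console, these settings go in the campaign's itemExplorationConfig. The sketch below uses placeholder names and ARNs; note that the values are passed as strings (check the API reference for the exact key casing):

```python
# Placeholder request for boto3.client("personalize").create_campaign(**create_campaign_params)
create_campaign_params = {
    "name": "exploration-campaign",
    "solutionVersionArn": "arn:aws:personalize:us-east-1:123456789012:solution/my-solution/version-1",
    "minProvisionedTPS": 1,
    "campaignConfig": {
        "itemExplorationConfig": {
            "explorationWeight": "0.3",
            "explorationItemAgeCutOff": "30",
        }
    },
}
```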

To demonstrate the effect of exploration, we create two campaigns.

  1. For the first campaign, set Exploration weight to 0.
  2. Leave Exploration item age cut off at its default of 30.0.
  3. Choose Create campaign.

Repeat the preceding steps to create a second campaign, but give it a different name and change the exploration weight to 1.

Getting recommendations

After you create or update your campaign, you can get recommended items for a user, similar items for an item, or a reranked list of input items for a user.

  1. On the Campaigns detail page, enter the user ID for your user personalization campaign.

The following screenshot shows the campaign detail page with results from a GetRecommendations call, including the recommended items and the recommendation ID, which you can use as an implicit impression; the service interprets the recommendation ID as an impression during training.

  1. Enter a user ID that has interactions in the interactions dataset. For this post, we get recommendations for user ID 1.
  2. On the campaign detail page of the campaign that has an exploration weight of 0, choose the Detail
  3. For User ID, enter 1.
  4. Choose Get recommendations.

The following image is for campaigns with an exploration weight of 0; we can see that the recommendation items are old items, and users have already seen or rated those movies.

The next image shows recommendation results for the same user but for a campaign where we set the exploration weight to 1. This results in a higher proportion of movies that were recently added and that few users have rated being recommended. Furthermore, the trade-off between the relevance (exploitation) and exploration is adjusted automatically depending on the coldness of the new items and as new feedback from users is leveraged.

Retraining and updating campaigns

New interactions against explored items hold important feedback on the quality of the item, which you can use to update exploration on the items. We recommend updating the model hourly to adjust the future item exploration.

To update a model (solutionVersion), you can call the createSolutionVersion API with trainingMode set to UPDATE. This updates the model with the latest item information and adjusts exploration according to implicit feedback from users. This is not equivalent to fully retraining the model, which you can do by setting trainingMode to FULL. You should perform full training less frequently, typically one time every 1–5 days. When the new updated solutionVersion is created, you can update the campaign to use it for recommendations.

The following code walks you through these steps:

# Updating the solutionVersion (model) and campaign

import time
import boto3

personalize = boto3.client('personalize')

def wait_for_solution_version(solution_version_arn):
    status = None
    max_time = time.time() + 60*60 # 1 hour
    while time.time() < max_time:
        describe_solution_version_response = personalize.describe_solution_version(
            solutionVersionArn = solution_version_arn
        )
        status = describe_solution_version_response["solutionVersion"]["status"]
        print("SolutionVersion: {}".format(status))

        if status == "ACTIVE" or status == "CREATE FAILED":
            break
        time.sleep(60) 
        
def update_campaign(solution_arn, campaign_arn):
    create_solution_version_response = personalize.create_solution_version(
        solutionArn = solution_arn, 
        trainingMode='UPDATE')
    new_solution_version_arn = create_solution_version_response['solutionVersionArn']
    print("Creating solution version: {}".format(new_solution_version_arn))
    wait_for_solution_version(new_solution_version_arn)
    personalize.update_campaign(campaignArn=campaign_arn, solutionVersionArn=new_solution_version_arn)
    print("Updating campaign...")

# Update the campaign every hour
while True:
    dt = time.time() + 60*60
    try:
        solution_arn = <your solution arn>
        campaign_arn = <your campaign arn>
        update_campaign(solution_arn, campaign_arn)
    except Exception as e:
        print("Not able to update the campaign: {}".format(str(e)))
    while time.time() < dt:
        time.sleep(1)

Best practices

As you use the new aws-user-personalization recipe, keep the following best practices in mind.

  1. Don’t forget to retrain. Retraining with UPDATE mode is essential to learn about “cold” items. During inference, the model recommends “cold” items to users and collects their feedback, and retraining lets the model discover the “cold” items’ properties via that feedback. Without retraining, the model never learns more about the “cold” items beyond their item metadata, and continued exploration on them won’t be useful.
  2. Provide good item metadata. Even with exploration, item metadata is still crucial for recommending relevant cold items. The model learns item properties from two sources: interactions and item metadata. Because “cold” items don’t have any interactions, the model can only learn from the item metadata before exploration.
  3. Provide an accurate item release date via CREATION_TIMESTAMP in the item dataset. This information is used to model the time effect on items, so that we don’t explore old items.

Conclusion

The new aws-user-personalization recipe from Amazon Personalize effectively mitigates the item cold start problem by also recommending new items with few interactions and learning their properties through user feedback during retraining. For more information about optimizing your user experience with Amazon Personalize, see What Is Amazon Personalize?


About the Authors

Hao Ding is an Applied Scientist at AWS AI Labs and is working on developing the next-generation recommender system for Amazon Personalize. His research interests include recommender systems, deep learning, and graph mining.
Yen Su is a software development engineer in the Amazon Personalize team. After work, she enjoys hiking and exploring new restaurants.
Vaibhav Sethi is the lead Product Manager for Amazon Personalize. He focuses on delivering products that make it easier to build machine learning solutions. In his spare time, he enjoys hiking and reading.

How Citibot’s chatbot search engine uses AI to find more answers

This is a guest blog post by Francisco Zamora and Nicholas Burden at TensorIoT and Bratton Riley at Citibot. In their own words, “TensorIoT is an AWS Advanced Consulting Partner with competencies in IoT, Machine Learning, Industrial IoT and Retail. Founded by AWS alums, they have delivered end-to-end IoT and Machine Learning solutions to customers across the globe. Citibot provides tools for citizens and their governments to use for efficient and effective communication and civic change.”

Citibot is a technology company that builds AI-powered chat solutions for local governments from Fort Worth, Texas to Arlington, Virginia. With Citibot, local residents can quickly get answers to city-related questions, report issues, and receive real-time alerts via text responses. To power these interactions, Citibot uses Amazon Lex, a service for building conversational interfaces for text and voice applications. Citibot built a chatbot to handle basic call queries, which allows government employees to allocate more time to higher-impact community actions.

The challenges imposed by the COVID-19 pandemic surfaced the need for public organizations to have scalable, self-service tools that can quickly provide reliable information to their constituents. With COVID-19, Citibot call centers saw a dramatic uptick in wait times and call abandonments as citizens tried to get information about virus prevention and unemployment insurance. To increase the flexibility and robustness of their chatbot for new query types, Citibot looked to add a general search capability. Citibot wanted a solution that could outperform third-party offerings and effectively use curated FAQ content and recently published data from multiple websites, such as the CDC and federal, state, and local governments.

The following image shows screenshots of sample Citibot conversations.

To design this general search solution, Citibot chose TensorIoT, an AWS Advanced Consulting Partner that specializes in serverless application development. TensorIoT developed a solution that paired TensorIoT’s Web Connector Tool with Amazon Kendra, an enterprise search service. The Web Connector Tool, built natively on AWS, enabled Amazon Kendra to index the content of target web pages and serve as a fallback search intent when Amazon Lex intents can’t provide an answer.

This new chatbot search solution helped local citizens quickly find the answers they needed and reduced wait times by up to 90%. This in turn decreased the volume of interactions handled by city officials, eased uncertainty within communities, and allowed municipal governments to focus on keeping their communities safe. As offices closed due to the pandemic, this solution provided a contactless way for residents without internet access to search for information on government websites at any time through their phones.

The following diagram illustrates the architecture for Citibot’s general search solution.

How it all came together

First, TensorIoT deployed a custom Amazon Lex search intent that is triggered when the chatbot receives a question or utterance it can’t answer. The team used AWS Lambda to develop the intent’s dialog and fulfillment code hooks to manage the conversation flow and fulfillment APIs. This new search intent was developed, tested, and merged into the dev version of Citibot to ensure all the original intents worked properly.
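A fulfillment code hook of this kind can be sketched as a small Lambda handler; the sketch below assumes the Lex V1 event and response shapes of the post's timeframe, and the Kendra lookup is a stub rather than the team's actual implementation.

```python
# Sketch of a Lambda fulfillment hook for a fallback search intent (Lex V1).
# search_kendra is a stub; a real handler would call the Amazon Kendra Query API.

def search_kendra(query_text):
    """Placeholder for the Kendra lookup; returns a canned answer here."""
    return "Here is what I found for: {}".format(query_text)

def close(message):
    """Build a Lex V1 'Close' response ending the conversation with a message."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    # The unmatched user utterance arrives in the event's inputTranscript field
    utterance = event.get("inputTranscript", "")
    return close(search_kendra(utterance))
```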

Second, TensorIoT needed to create a search query index. They chose Amazon Kendra because it can integrate a variety of data sources and data types into Citibot’s existing technology stack. The TensorIoT and Citibot development teams determined a target group of government data sources, including the CDC website for COVID-19 data and multiple city websites for municipal data, that are checked on a routine basis. This helps the chatbot access the most recent guidelines about the virus and social distancing.

The following diagram illustrates the data sources used for Citibot’s general search solution.

Next, the teams researched the optimal format type and data storage containers for saving information and connecting to Amazon Kendra. TensorIoT knew that Amazon Kendra is trained to systematically process and index data sources to derive meaning from a variety of data formats, such as .pdf, .csv, and .html files. To increase the processing efficiency of Amazon Kendra, the TensorIoT team intelligently partitioned the data into queryable information chunks that could be relayed back to the users. The TensorIoT approach used a combination of .csv, .pdf, and .html files to provide complete data, giving a solid foundation for product build and development.

The TensorIoT team then developed a versatile Web Connector using Node.js and the JavaScript library Cheerio to crawl trusted websites and deposit that information into the data stores. Because COVID-19-related information changes frequently, TensorIoT created an Amazon DynamoDB table to store all the websites to routinely index for updated information.

With the additional information from the targeted websites, the TensorIoT and Citibot teams decided to use Amazon Simple Storage Service (Amazon S3) buckets for data storage. Amazon Kendra provides machine learning (ML)-powered search capabilities for all unstructured data stored in AWS and offers easy-to-use native connectors for popular sources like Amazon S3, SharePoint, Salesforce, ServiceNow, RDS databases, and OneDrive. By unifying the extracted .html pages and .pdf files from the CDC website in the same S3 bucket, the development team could sync the index to the data source, providing readily available data. They also used Amazon Kendra to extract metadata files from the scraped .html pages, which provided additional file attributes such as city names to further improve answer results.
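For the S3 data source, such metadata lives in a JSON sidecar file stored alongside the document (for `doc.html`, a file named `doc.html.metadata.json`). The example below is a hypothetical illustration: `city` stands in for the kind of custom attribute described above, while `Title`, `Attributes`, and the reserved `_source_uri` attribute follow the Amazon Kendra S3 metadata format.

```json
{
  "Title": "COVID-19 Prevention Guidelines",
  "Attributes": {
    "_source_uri": "https://www.example.gov/covid-19/prevention",
    "city": "Fort Worth"
  }
}
```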

The following image shows an example of the attributes that Citibot could use to tune search results.

Without any model training, TensorIoT and Citibot could point Amazon Kendra at their content stores and start receiving specific answers to natural language queries (such as, “How can I protect myself from COVID-19?”) by extracting the answer from the most relevant document.
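A natural-language question like that maps to a single call to the Amazon Kendra Query API. The sketch below assembles the request and pulls the top result's excerpt; the index ID is a placeholder, and the response parsing assumes the standard `ResultItems` shape.

```python
# Sketch of querying a Kendra index with a natural-language question.
# The index ID is a placeholder; a real call needs AWS credentials and boto3.

def build_query(index_id, question):
    """Build the parameters for the Kendra Query API."""
    return {"IndexId": index_id, "QueryText": question}

def top_answer(response):
    """Pull the excerpt of the highest-ranked result, if any."""
    items = response.get("ResultItems", [])
    return items[0]["DocumentExcerpt"]["Text"] if items else None

params = build_query(
    "0123abcd-placeholder", "How can I protect myself from COVID-19?"
)

# With credentials configured, the actual call would be:
# import boto3
# kendra = boto3.client("kendra")
# answer = top_answer(kendra.query(**params))
```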

To test the solution, the engineers ran sample event scripts with test inputs that allowed them to verify if all the sample questions were being answered successfully. TensorIoT tested and confirmed that each question or utterance returned an answer with a valid text excerpt and link. Additionally, the team used a negative feedback API that flagged answers users had downvoted and gave Citibot the ability to revisit the search answers that were voted as unhelpful. This data helps drive continuous improvement around the answers provided by the index for specific questions.

For curated content search, the developers could also upload a .csv file of FAQs to provide direct answers to the most commonly asked questions. For Citibot, TensorIoT used this feature to fill in the specific answers for municipal information questions, and added a .csv file with relevant questions and answers (Q&A) that required a complete search engine microservice. Using these features brings numerous benefits, including accuracy, simplicity, and connectivity.

In just a few weeks, TensorIoT also built and added custom query logic and feedback submission APIs to the Amazon Lex bot, giving users better answers without requiring human interaction or extensive searching. Amazon Kendra exposes its functionality through APIs, such as the SubmitFeedback API, which allows end users to rate search results. The team used the custom Amazon Lex intent and Lambda to handle the incoming queries and create a powerful search service.

The following image shows how the solution uses Amazon Lex and Lambda.

The TensorIoT solution was designed so Citibot can effortlessly add new cities to the service and disseminate information to their respective communities. The next challenge for the TensorIoT team was using city-specific information to provide more relevant search results. Combined with the additional session and request attributes of Amazon Lex, TensorIoT provided Amazon Kendra with search filters to refine the data query with specific city information. If no city was stated, the system defaulted to the call location of the user. With TensorIoT’s custom search intent deployed, search filter in place, data sources filled, and APIs built, the team started to integrate this search engine into the existing chatbot product.

Deployment

To deploy this TensorIoT solution, the development teams integrated the new Amazon Lex custom search intent with Citibot and tested the bot’s ability to successfully answer queries. Using a sample phone number provided by Citibot through Twilio, TensorIoT used SMS to validate the returned results for each utterance.

With Amazon Kendra, the TensorIoT team eliminated the need for a third-party search engine and could focus on creating an automated solution for gathering information. After the chatbot was updated, the team redeployed the service with a version upgrade of the software development kit. The upgraded chatbot now uses the search power of Amazon Kendra to answer more questions for users based on the curation of document content. The resulting informational Citibot stands above the prior tools the cities had used.

Storing information in a curated content form is especially useful when combining Amazon Lex and Amazon Kendra. Amazon Kendra is well suited to customized information retrieval that is ultimately communicated to the end user through the agentless conversational interactions of Amazon Lex.

Conclusion

This use case demonstrates how TensorIoT used multiple AWS services to add value in solution development. Beyond COVID-19, cities can continue to use the Amazon Kendra-powered chatbot to provide fast access to information about public facility hours, road closures, and events. Depending on your use case, you can easily customize the subject matter of the Amazon Kendra index to provide information for emerging user needs.

The TensorIoT search engine proved to be a powerful solution to a modern-day problem, allowing communities to stay informed and connected through text. Although the primary purpose of this application was to enhance customer support services, the solution is applicable to searching internal knowledge bases for schools, banks, local businesses, and non-profit organizations. With AWS and TensorIoT, companies like Citibot can use new and powerful technologies such as Amazon Kendra to improve their existing chatbot solutions.

 


About the Authors

Francisco Zamora is a Software Engineer at TensorIoT.

Nicholas Burden is a Technical Evangelist at TensorIoT.

Bratton Riley is the CEO at Citibot.
