In today’s world, social media has become a place where customers share their experiences with the services they consume. Every telecom provider wants to understand customer pain points as soon as possible, and to do this, carriers frequently establish a social media team within their NOC (network operations center). This team manually reviews social media messages, such as tweets, trying to identify patterns of customer complaints or issues that might suggest a specific problem in the carrier’s network.
Unhappy customers are more likely to change providers, so operators look to improve their customers’ experience and proactively reach out to dissatisfied customers who report issues with their services.
Of course, social media operates at a vast scale, and our telecom customers tell us that trying to uncover customer issues from social media data manually is extremely challenging.
This post shows how to classify tweets in real time so telecom companies can identify outages and proactively engage with customers by using Amazon Comprehend custom multi-class classification.
Solution overview
Telecom customers not only post about outages on social media, but also comment on the service they get or compare the company to a competitor.
Your company can benefit from targeting those types of tweets separately. One option is customer feedback, in which care agents respond to the customer. For outages, you need to collect information and open a ticket in an external system so an engineer can work on the problem.
The solution for this post extends the AI-Driven Social Media Dashboard solution. The following diagram illustrates the solution architecture.
AI-Driven Social Media Dashboard Solutions Implementation architecture
This solution deploys an Amazon Elastic Compute Cloud (Amazon EC2) instance running in an Amazon Virtual Private Cloud (Amazon VPC) that ingests tweets from Twitter. An Amazon Kinesis Data Firehose delivery stream loads the streaming tweets into the raw prefix in the solution’s Amazon Simple Storage Service (Amazon S3) bucket. Amazon S3 invokes an AWS Lambda function to analyze the raw tweets using Amazon Translate to translate non-English tweets into English, and Amazon Comprehend to use natural-language-processing (NLP) to perform entity extraction and sentiment analysis.
A second Kinesis Data Firehose delivery stream loads the translated tweets and sentiment values into the sentiment prefix in the Amazon S3 bucket. A third delivery stream loads entities into the entities prefix in the Amazon S3 bucket.
The solution also deploys a data lake that includes AWS Glue for data transformation, Amazon Athena for data analysis, and Amazon QuickSight for data visualization. AWS Glue Data Catalog contains a logical database which is used to organize the tables for the data on Amazon S3. Athena uses these table definitions to query the data stored on Amazon S3 and return the information to an Amazon QuickSight dashboard.
You can extend this solution by building Amazon Comprehend custom classification to detect outages, customer feedback, and comparisons to competitors.
Creating the dataset
The solution uses raw data from tweets. In the original solution, you deploy an AWS CloudFormation template that defines a comma-delimited list of terms for the solution to monitor. As an example, this post focuses on tweets that contain the word “BT” (BT Group in the UK), but equally this could be any network provider.
To get started, launch the AI-Driven Social Media Dashboard solution. On the Specify stack details page, replace the default TwitterTermList with your terms; for this example, 'BT','bt'. After you choose Create stack, wait approximately 15 minutes for the deployment to complete. The solution then begins capturing tweets.
For more information about available attributes and data types, see Appendix B: Auto-generated Data Model.
The tweet data is stored in Amazon Simple Storage Service (Amazon S3), which you can query with Amazon Athena. The following screenshot shows an example query.
SELECT id,text FROM "ai_driven_social_media_dashboard"."tweets" limit 10;
Because you captured every tweet that contains the keyword BT or bt, you have a lot of tweets that aren’t referring to British Telecom; for example, tweets that misspell the word “but.”
Additionally, the tweets in your dataset are global, but for this post, you want to focus on the United Kingdom, so the tweets are even more likely to refer to British Telecom (and therefore your dataset is more accurate). You can modify this solution for use cases in other countries; for example, defining the keyword as KPN and narrowing the dataset to focus only on the Netherlands.
In the existing solution, the coordinates and geo types look relevant, but those fields usually aren’t populated; for privacy reasons, tweets don’t include the poster’s location by default unless the user allows it.
The user type contains relevant user data that comes from the user profile. You can use the location data from the user profile to narrow down tweets to your target country or region.
To look at the user type, you can use the Athena CREATE TABLE AS SELECT (CTAS) query. For more information, see Creating a Table from Query Results (CTAS). The following screenshot shows the Create table from query option in the Create drop-down menu.
SELECT text,user.location from tweets
You can create a table that consists of the tweet text and the user location, which gives you the ability to look only at tweets that originated in the UK. The following screenshot shows the query results.
SELECT * FROM "ai_driven_social_media_dashboard"."location_text_02"
WHERE location like '%UK%' or location like '%England%' or location like '%Scotland%' or location like '%Wales%'
Now that you have a dataset with your target location and tweet keywords, you can train your custom classifier.
Amazon Comprehend custom classification
You train your model in multi-class mode. For this post, you label three different classes:
- Outage – People who are experiencing or reporting an outage in their provider network
- Customer feedback – Feedback regarding the service they have received from the provider
- Competition – Tweets that compare the provider with its competitors
You can export the dataset from Athena and use it to train the custom classifier.
You first look at the dataset and start labeling the different tweets. Because you have a large number of tweets, reviewing and labeling the data can take several hours of manual effort. We recommend that you train the model with at least 50 documents per label.
In the dataset, customers reported an outage, which resulted in 71 documents with the outage label. The competition and customer feedback classes had fewer than 50 documents each.
After you gather sufficient data, you can always improve your accuracy by training a new model.
The following screenshot shows some of the entries in the final training CSV file.
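Amazon Comprehend multi-class training data uses a two-column CSV format: the label in the first column and the document text in the second, with no header row. The following rows are illustrative examples only, not entries from the actual dataset:

outage,"Is anyone else's broadband down in Leeds? Nothing has worked since this morning"
Customer feedback,"Spoke to the support team today and they sorted out my bill really quickly"
Competition,"Thinking of switching my fibre to another provider, any recommendations?"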
As a future enhancement to remove the manual effort of labeling tweets, you can automate the process with Amazon SageMaker Ground Truth. Ground Truth offers easy access to labelers through Amazon Mechanical Turk and provides built-in workflows and interfaces for common labeling tasks.
When the labeling work is complete, upload the CSV file to your S3 bucket.
Now that the training data is in Amazon S3, you can train your custom classifier. Complete the following steps:
- On the Amazon Comprehend console, choose Custom classification.
- Choose Train classifier.
- For Name, enter a name for your classifier; for example, TweetsBT.
- For Classifier mode, select Using multi-class mode.
- For S3 location, enter the location of your CSV file.
- Choose Train classifier.
The status of the classifier changes from Submitted to Training. When the job is finished, the status changes to Trained.
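If you prefer to script this step instead of using the console, you can start the same training job with the CreateDocumentClassifier API. The following is a minimal boto3 sketch; the IAM role ARN and bucket path are placeholders you would replace with your own:

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

# Train a multi-class custom classifier from the labeled CSV in Amazon S3
response = comprehend.create_document_classifier(
    DocumentClassifierName='TweetsBT',
    LanguageCode='en',
    Mode='MULTI_CLASS',
    DataAccessRoleArn='arn:aws:iam::<account-id>:role/<comprehend-data-access-role>',
    InputDataConfig={
        'S3Uri': 's3://<your-bucket>/training/all_tweets.csv'
    }
)
print(response['DocumentClassifierArn'])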
After you train the custom classifier, you can analyze documents in either asynchronous or synchronous operations. You can analyze a large number of documents at the same time by using the asynchronous operation. The resulting analysis returns in a separate file. When you use the synchronous operation, you can only analyze a single document, but you can get results in real time.
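For example, you could classify a batch of documents already stored in Amazon S3 asynchronously with the StartDocumentClassificationJob API. The following boto3 sketch uses placeholder ARNs and bucket names:

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

# Classify a batch of documents asynchronously; results are written to the output S3 location
response = comprehend.start_document_classification_job(
    JobName='tweets-batch-classification',
    DocumentClassifierArn='arn:aws:comprehend:us-east-1:<account-id>:document-classifier/TweetsBT',
    InputDataConfig={
        'S3Uri': 's3://<your-bucket>/tweets/',
        'InputFormat': 'ONE_DOC_PER_LINE'
    },
    OutputDataConfig={
        'S3Uri': 's3://<your-bucket>/classification-output/'
    },
    DataAccessRoleArn='arn:aws:iam::<account-id>:role/<comprehend-data-access-role>'
)
print(response['JobId'])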
For this use case, you want to analyze tweets in real time. When a tweet lands in Amazon S3 via Amazon Kinesis Data Firehose, it triggers an AWS Lambda function. The function calls the custom classifier endpoint to analyze the tweet and determine whether it refers to an outage, customer feedback, or a competitor.
Testing the training data
After you train the model, Amazon Comprehend uses approximately 10% of the training documents to test the custom classifier model. Testing the model provides you with metrics that you can use to determine if the model is trained well enough for your purposes. These metrics are displayed in the Classifier performance section of the Classifier details page on the Amazon Comprehend console. See the following screenshot.
They’re also available in the metrics fields returned by the DescribeDocumentClassifier operation.
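For example, the following minimal boto3 sketch retrieves the evaluation metrics for the trained classifier (the classifier ARN is a placeholder):

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

# Describe the trained classifier and print its evaluation metrics
response = comprehend.describe_document_classifier(
    DocumentClassifierArn='arn:aws:comprehend:us-east-1:<account-id>:document-classifier/TweetsBT'
)
metadata = response['DocumentClassifierProperties']['ClassifierMetadata']
print(metadata['EvaluationMetrics'])  # Accuracy, Precision, Recall, F1Score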
Creating an endpoint
To create an endpoint, complete the following steps:
- On the Amazon Comprehend console, choose Custom classification.
- From the Actions drop-down menu, choose Create endpoint.
- For Endpoint name, enter a name; for example, BTtweetsEndpoint.
- For Inference units, enter the number of inference units to assign to the endpoint.
Each unit represents a throughput of 100 characters per second for up to two documents per second. You can assign up to 10 inference units per endpoint. This post assigns 1.
- Choose Create endpoint.
When the endpoint is ready, the status changes to Ready.
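You can also create the endpoint programmatically with the CreateEndpoint API. The following boto3 sketch uses a placeholder classifier ARN:

import boto3

comprehend = boto3.client('comprehend', region_name='us-east-1')

# Create a real-time endpoint for the trained custom classifier with 1 inference unit
response = comprehend.create_endpoint(
    EndpointName='BTtweetsEndpoint',
    ModelArn='arn:aws:comprehend:us-east-1:<account-id>:document-classifier/TweetsBT',
    DesiredInferenceUnits=1
)
print(response['EndpointArn'])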
Triggering the endpoint and customizing the existing Lambda function
You can use the existing Lambda function from the original solution and extend it to do the following:
- Trigger the Amazon Comprehend custom classifier endpoint per tweet
- Determine which class has the highest confidence score
- Create an additional Firehose delivery stream so the results land back in Amazon S3
For more information about the original Lambda function, see the GitHub repo.
To make the necessary changes to the function, complete the following steps:
- On the Lambda console, select the function that contains the string Tweet-SocialMediaAnalyticsLambda.
Before you start adding code, make sure you understand how the function reads the tweets coming in, calls the Amazon Comprehend API, and stores the responses on a Firehose delivery stream so it writes the data to Amazon S3.
- Call the custom classifier endpoint (see the following code example).
The first two calls use the API on the tweet text to detect sentiment and entities; both come out of the box with the original solution.
The following code uses the ClassifyDocument API:
sentiment_response = comprehend.detect_sentiment(
    Text=comprehend_text,
    LanguageCode='en'
)
#print(sentiment_response)

entities_response = comprehend.detect_entities(
    Text=comprehend_text,
    LanguageCode='en'
)

#we will create a 'custom_response' using the ClassifyDocument API call
custom_response = comprehend.classify_document(
    #point to the relevant Custom classifier endpoint ARN
    EndpointArn="arn:aws:comprehend:us-east-1:12xxxxxxx91:document-classifier-endpoint/BTtweets-endpoint",
    #this is where we use comprehend_text, which is the original tweet text
    Text=comprehend_text
)
The following code is the returned result:
{"File": "all_tweets.csv", "Line": "23", "Classes": [{"Name": "outage", "Score": 0.9985}, {"Name": "Competition", "Score": 0.0005}, {"Name": "Customer feedback", "Score": 0.0005}]}
You now need to iterate over the array, which contains the classes and confidence scores. For more information, see DocumentClass.
Because you’re using the multi-class approach, you can pick the class with the highest score by adding some simple code that iterates over the array and keeps the class with the biggest score. You also take tweet['id'] so that you can join it with the other tables that the solution generates and relate the results to the original tweet.
- Enter the following code:
score = 0
for classs in custom_response['Classes']:
    if score < classs['Score']:
        score = classs['Score']
        custom_record = {
            'tweetid': tweet['id'],
            'classname': classs['Name'],
            'classscore': classs['Score']
        }
After you create the custom_record, you can decide if you want to define a certain threshold for your class score (the level of confidence for the results you want to store in Amazon S3). For this use case, you only store results with a confidence score of at least 70%.
To put the result on a Firehose delivery stream (which you need to create in advance), use the PutRecord API. See the following code:
if custom_record['classscore'] > 0.7:
    print('we are in')
    response = firehose.put_record(
        DeliveryStreamName=os.environ['CUSTOM_STREAM'],
        Record={
            'Data': json.dumps(custom_record) + '\n'
        }
    )
You now have a dataset in Amazon S3 based on your Amazon Comprehend custom classifier output.
Exploring the output
You can now explore the output from your custom classifier in Athena. Complete the following steps:
- On the Athena console, run a SELECT query to see the following:
  - tweetid – You can use this to join the original tweet table to get the tweet text and additional attributes.
  - classname – This is the class that the custom classifier assigned to the tweet with the highest level of confidence.
  - classscore – This is the level of confidence.
  - Stream partitions – These indicate when the data was written to Amazon S3:
    - Partition_0 (month)
    - Partition_1 (day)
    - Partition_2 (hour)
The following screenshot shows your query results.
SELECT * FROM "ai_driven_social_media_dashboard"."custom2020" where classscore>0.7 limit 10;
- Join your table using the tweetid with the following:
  - The original tweet table to get the actual tweet text.
  - A sentiment table that Amazon Comprehend generated in the original solution.
The following screenshot shows your results. One of the tweets contains negative feedback, and other tweets identify potential outages.
SELECT classname,classscore,tweets.text,sentiment FROM "ai_driven_social_media_dashboard"."custom2020"
left outer join tweets on custom2020.tweetid=tweets.id
left outer join tweet_sentiments on custom2020.tweetid=tweet_sentiments.tweetid
where classscore>0.7
limit 10;
Preparing the data for visualization
To prepare the data for visualization, first create a timestamp field by concatenating the partition fields.
You can use the timestamp field for various visualizations, such as outages in a certain period or customer feedback on a specific day. To do so, use AWS Glue notebooks and write a piece of code in PySpark.
You can use the PySpark code to not only prepare your data but also transform the data from CSV to Apache Parquet format. For more information, see the GitHub repo.
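The following is a minimal PySpark sketch of this step, assuming an AWS Glue crawler has already cataloged the classifier output as the custom2020 table; the output S3 path is a placeholder:

from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql import functions as F

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the custom classifier output table from the AWS Glue Data Catalog
df = glue_context.create_dynamic_frame.from_catalog(
    database="ai_driven_social_media_dashboard",
    table_name="custom2020"
).toDF()

# Concatenate the month/day/hour partition columns into a single timestamp-style field
df = df.withColumn(
    "final",
    F.concat_ws("-", F.col("partition_0"), F.col("partition_1"), F.col("partition_2"))
)

# Write the enriched dataset back to Amazon S3 in Parquet format for cheaper, faster queries
df.write.mode("overwrite").parquet("s3://<your-bucket>/custom2020-parquet/")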
You should now have a new dataset that contains a timestamp field in Parquet format, which is more efficient and cost-effective to query.
For this use case, you can plot the reported outages on a map using geospatial charts in Amazon QuickSight. To get the location of a tweet, you can use the following:
- Longitude and latitude coordinates in the original tweets dataset. Unfortunately, coordinates aren’t usually present due to privacy defaults.
- The Amazon Comprehend entity dataset, which can identify locations as entities within the tweet text.
For this use case, you can create a new dataset combining the tweets, custom2020 (your new dataset based on the custom classifier output), and tweetsEntities datasets.
The following screenshot shows the query results, which returned tweets with locations that also identify outages.
SELECT distinct classname,final,text,entity FROM "ai_driven_social_media_dashboard"."quicksight_with_lat_lang"
where type='LOCATION' and classname='outage'
order by final asc
You have successfully identified outages in a specific window and determined their location.
To get the geolocation of a specific location, you could choose from a variety of publicly available datasets to upload to Amazon S3 and join with your data. This post uses the World Cities Database, which has a free option. You can join it with your existing data to get the location coordinates.
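For example, assuming you upload the World Cities Database to Amazon S3 and catalog it as a table named worldcities with city, lat, and lng columns (names are assumptions, not the exact query from the solution), a join along the following lines returns coordinates for each outage location:

SELECT t.classname, t.final, t.text, t.entity, w.lat, w.lng
FROM "ai_driven_social_media_dashboard"."quicksight_with_lat_lang" t
JOIN "ai_driven_social_media_dashboard"."worldcities" w
  ON lower(t.entity) = lower(w.city)
WHERE t.type = 'LOCATION' AND t.classname = 'outage'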
Visualizing outage locations in Amazon QuickSight
To visualize your outage locations in Amazon QuickSight, complete the following steps:
- To add the dataset you created in Athena, on the Amazon QuickSight console, choose Manage data.
- Choose New dataset.
- Choose Athena.
- Select your database or table.
- Choose Save & Visualize.
- Under Visual types, choose the Points on map visual.
- Drag the lng and lat fields to the field wells.
The following screenshot shows the outages on a UK map.
To see the text of a specific tweet, hover over one of the dots on the map.
You have many different options when analyzing your data and can further enhance this solution. For example, you can enrich your dataset with potential contributors and drill down on a specific outage location for more details.
Conclusion
We now have the ability to detect the outages that customers are reporting, and we can also use the solution to examine customer feedback and comparisons to competitors. We can identify key trends on social media at scale. This post showed an example relevant to telecom companies, but any company whose customers use social media can customize and apply this solution.
In the near future, we would like to extend this solution to create an end-to-end flow in which a customer reporting an outage automatically receives a reply on Twitter from an Amazon Lex chatbot. The chatbot can request more information from the customer over a secure channel and send that information to a call center agent through an integration with Amazon Connect, or create a ticket in an external ticketing system for an engineer to work on the problem.
Give the solution a try, see if you can extend it further, and share your feedback and questions in the comments.
About the Author
Guy Ben-Baruch is a Senior solution architect in the news & communications team in AWS UKIR. Since Guy joined AWS in March 2016, he has worked closely with enterprise customers, focusing on the telecom vertical, supporting their digital transformation and their cloud adoption. Outside of work, Guy likes doing BBQ and playing football with his kids in the park when the British weather allows it.