Bundesliga Match Facts Shot Speed – Who fires the hardest shots in the Bundesliga?

There’s a kind of magic that surrounds a soccer shot so powerful, it leaves spectators, players, and even commentators in a momentary state of awe. Think back to a moment when the sheer force of a strike left an entire Bundesliga stadium buzzing with energy. What exactly captures our imagination with such intensity? While there are many factors that contribute to an iconic goal, there’s a particular magnetism to shots that blaze through the air, especially those taken from a distance.

Over the years, the Bundesliga has witnessed players who’ve become legends, not just for their skill but for their uncanny ability to unleash thunderbolts. Bernd Nickel, a standout figure in Frankfurt’s squad of the 1970s and 1980s, earned the title “Dr. Hammer” from ardent fans and netted 141 goals in 426 matches over his illustrious career.

Beyond his shooting prowess, another of Nickel’s standout feats was his ability to score directly from corner kicks. In fact, he holds the unique distinction of having scored from all four corner positions at Frankfurt’s Waldstadion. Frankfurt’s fans witnessed one such masterclass in May 1971, during a high-stakes game against Kickers Offenbach.

Nickel scored a stunning goal in the 17th minute, which eventually led Frankfurt to a 2:0 victory. What made this goal even more memorable was the manner in which it was executed—a spectacular sideways scissors-kick from the penalty spot, fitting perfectly into the top corner. This goal would later be recognized as the “Goal of the Month” for May 1971. Nickel’s impact on the field was undeniable, and during the time he represented Eintracht Frankfurt, the club won the DFB-Pokal three times (in 1974, 1975, and 1981) and the UEFA Cup once in 1980.

Similarly, Thomas “the Hammer” Hitzlsperger has etched his name into Bundesliga folklore with his stunning left-footed rockets. His 2009 free-kick against Leverkusen, struck at 125 km/h, is vividly remembered: its sheer velocity was enough to leave Germany’s number one goalkeeper, René Adler, seemingly petrified.

Taken in the 51st minute of the game from a distance of 18 meters, the ball soared past Adler, leaving him motionless, and bulged the net to make the score 2:0. This remarkable goal not only showcased Hitzlsperger’s striking ability but also demonstrated the awe-inspiring effect that such high-velocity strikes can have on a match.

Historical data shows a few instances where the ball’s velocity exceeded the 130 km/h mark in the Bundesliga, with the all-time record being a jaw-dropping 137 km/h shot by Bayern’s Roy Makaay.

With all this in mind, it becomes even clearer why the speed and technique behind every shot matters immensely. Although high shooting speed excites soccer fans, it has not been measured regularly in the Bundesliga until now. Recognizing this, we are excited to introduce the new Bundesliga Match Facts: Shot Speed. This new metric aims to shed light on the velocity behind these incredible goals, enhancing our understanding and appreciation of the game even further.

How it works

Have you ever wondered just how fast a shot from your favorite Bundesliga player travels? The newly introduced Bundesliga Match Facts (BMF) Shot Speed now allows fans to satisfy their curiosity by providing insights into the incredible power and speed behind shots. Shot speed is more than just a number; it’s a window into the awe-inspiring athleticism and skill of the Bundesliga players.

Shot speed has a captivating effect on fans, igniting debates about which player possesses the most potent shot in the league and who consistently delivers lightning-fast strikes. Shot speed data is the key to resolving these questions.

Besides that, the new BMF helps to highlight memorable moments. The fastest shots often result in spectacular goals that live long in the memory of fans. Shot speed helps immortalize these moments, allowing fans to relive the magic of those lightning-fast strikes.

But how does this work? Let’s delve into the details.

Data collection process

The foundation of the shot speed calculation is an organized data collection process. This process comprises two key components: event data and optical tracking data.

Event data collection entails gathering the fundamental building blocks of the game. Shots, goals, assists, fouls, and substitutions provide vital context for understanding what happens on the pitch. In our specific case, we focus on shots, their variations, and the players responsible for them.

On the flip side, optical tracking data is collected using advanced camera systems. These systems record player movements and ball positions, offering a high level of precision. This data serves as the bedrock for comprehensive analysis of player performance, tactical intricacies, and team strategies. When it comes to calculating shot speed, this data is essential for tracking the velocity of the ball.

These two streams of data originate from distinct sources, and their synchronization in time is not guaranteed. For the precision needed in shot speed calculations, we must ensure that the ball’s position aligns precisely with the moment of the event. This eliminates any discrepancies that might arise from the manual collection of event data. To achieve this, our process uses a synchronization algorithm that is trained on a labeled dataset. This algorithm robustly associates each shot with its corresponding tracking data.
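
To make this concrete, the following minimal sketch shows one way such an alignment could look, assuming tracking frames arrive at 25 Hz with timestamps in seconds and each event carries an approximate manual timestamp. The function, field names, and tolerance are illustrative assumptions, not the trained production algorithm.

from bisect import bisect_left

def nearest_frame(frame_timestamps, event_timestamp, max_offset=0.5):
    # Return the index of the tracking frame closest to the event, or None
    # if no frame lies within max_offset seconds (tolerance is an assumption)
    if not frame_timestamps:
        return None
    i = bisect_left(frame_timestamps, event_timestamp)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(frame_timestamps)]
    best = min(candidates, key=lambda j: abs(frame_timestamps[j] - event_timestamp))
    if abs(frame_timestamps[best] - event_timestamp) > max_offset:
        return None
    return best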

Shot speed calculation

The heart of determining shot speed lies in a precise timestamp given by our synchronization algorithm. Imagine a player getting ready to take a shot. Our event gatherers are ready to record the moment, and cameras closely track the ball’s movement. The magic happens exactly when the player decides to pull the trigger.

An accurate timestamp measurement helps us figure out how fast the shot was right from the start. We measure shot speed for shots that end up as goals, those that hit the post, or get saved. To make sure we’re consistent, we don’t include headers or shots that get blocked. These can get a bit tricky due to deflections.

Let’s break down how we transform the collected data into the shot speed you see:

  1. Extracting shot trajectory – After recording the event and tracking the ball’s movement, we extract the trajectory of the shot. This means we map out the path the ball takes from the moment it leaves the player’s foot.
  2. Smoothing velocity curve – The data we get is detailed but can sometimes have tiny variations due to factors like camera sensitivity. To ensure accuracy, we smooth out the velocity curve. This means we remove any minor bumps or irregularities in the data to get a more reliable speed measurement.
  3. Calculating maximum speed – With a clean velocity curve in hand, we then calculate the maximum speed the ball reaches during its flight. This is the key number that represents the shot’s speed and power.
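
As a rough illustration of these three steps, the following sketch assumes the synchronized ball positions are available as a NumPy array of coordinates in meters sampled at 25 Hz; the moving-average smoothing is a simplified stand-in for the production pipeline.

import numpy as np

def shot_speed_kmh(positions, fps=25, window=5):
    # positions: (N, 2) or (N, 3) array of ball coordinates in meters,
    # starting at the synchronized shot frame
    # 1. Trajectory -> frame-to-frame speeds in m/s
    speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) * fps
    # 2. Smooth the velocity curve with a simple moving average
    smoothed = np.convolve(speeds, np.ones(window) / window, mode="valid")
    # 3. Maximum speed, converted from m/s to km/h
    return smoothed.max() * 3.6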

We analyzed around 215 matches from the Bundesliga 2022–2023 season. The following plot shows the number of fast shots (>100 km/h) by player. The 263 players with at least one fast shot (>100 km/h) have, on average, 3.47 fast shots. As the graph shows, some players have a frequency way above average, with around 20 fast shots.

Let’s look at some examples from the current season (2023–2024)

The following videos show examples of measured shots that achieved top-speed values.

Example 1

Measured top shot speed of 118.43 km/h, with a distance to goal of 20.61 m

Example 2

Measured top shot speed of 123.32 km/h, with a distance to goal of 21.19 m

Example 3

Measured top shot speed of 121.22 km/h, with a distance to goal of 25.44 m

Example 4

Measured top shot speed of 113.14 km/h, with a distance to goal of 24.46 m

How it’s implemented

In our quest to accurately determine shot speed during live matches, we’ve implemented a cutting-edge solution using Amazon Managed Streaming for Apache Kafka (Amazon MSK). This robust platform serves as the backbone for seamlessly streaming positional data at a rapid 25 Hz sampling rate, enabling real-time updates of shot speed. Through Amazon MSK, we’ve established a centralized hub for data streaming and messaging, facilitating seamless communication between containers for sharing a wealth of Bundesliga Match Facts.

The following diagram outlines the entire workflow for measuring shot speed from start to finish.

Match-related data is gathered and brought into the system via DFL’s DataHub. To process match metadata, we use an AWS Lambda function called MetaDataIngestion, while positional data is brought in using an AWS Fargate container known as MatchLink. The Lambda function and the Fargate container then make this data available for further use in the appropriate MSK topics.

At the heart of the BMF Shot Speed lies a dedicated Fargate container named BMF ShotSpeed. This container is active throughout the duration of the match, continuously pulling in all the necessary data from Amazon MSK. Its algorithm responds instantly to every shot taken during the game, calculating the shot speed in real time. Moreover, we have the capability to recompute shot speed should any underlying data undergo updates.
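
Purely for illustration, a consumer loop inside such a container could resemble the following sketch using the kafka-python client; the topic name, broker address, and message schema are hypothetical.

import json
from kafka import KafkaConsumer  # kafka-python

# Topic name, broker, and payload layout are assumptions for this sketch
consumer = KafkaConsumer(
    "positional-data",
    bootstrap_servers=["<msk-broker>:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    frame = message.value  # e.g., {"timestamp": ..., "ball": [x, y, z], ...}
    # feed the frame into the shot speed algorithm described above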

Once the shot speeds have undergone their calculations, the next phase in our data journey is the distribution. The shot speed metrics are transmitted back to DataHub, where they are made available to various consumers of Bundesliga Match Facts.

Simultaneously, the shot speed data finds its way to a designated topic within our MSK cluster. This allows other components of Bundesliga Match Facts to access and take advantage of this metric. We’ve implemented an AWS Lambda function with the specific task of retrieving the calculated shot speed from the relevant Kafka topic. Once the Lambda function is triggered, it stores the data in an Amazon Aurora Serverless database. This database houses the shot speed data, which we then use to create interactive, near real-time visualizations using Amazon QuickSight.
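
A hedged sketch of such a Lambda handler is shown below, assuming an Amazon MSK trigger (which delivers base64-encoded Kafka records) and the Aurora Serverless Data API; the ARNs, database name, and table layout are placeholders rather than the actual implementation.

import base64
import json
import boto3

rds_data = boto3.client("rds-data")

CLUSTER_ARN = "<aurora-serverless-cluster-arn>"  # placeholder
SECRET_ARN = "<secrets-manager-secret-arn>"      # placeholder

def handler(event, context):
    # MSK triggers deliver records grouped by topic-partition, base64-encoded
    for records in event["records"].values():
        for record in records:
            shot = json.loads(base64.b64decode(record["value"]))
            rds_data.execute_statement(
                resourceArn=CLUSTER_ARN,
                secretArn=SECRET_ARN,
                database="bmf",
                sql="INSERT INTO shot_speed (match_id, player_id, speed_kmh) "
                    "VALUES (:match_id, :player_id, :speed)",
                parameters=[
                    {"name": "match_id", "value": {"stringValue": shot["match_id"]}},
                    {"name": "player_id", "value": {"stringValue": shot["player_id"]}},
                    {"name": "speed", "value": {"doubleValue": shot["speed_kmh"]}},
                ],
            )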

Beyond this, we have a dedicated component specifically designed to calculate a seasonal ranking of shot speeds. This allows us to keep track of the fastest shots throughout the season, ensuring that we always have up-to-date information about the fastest shots and their respective rankings after each shot is taken.

Summary

In this blog post, we’re excited to introduce the all-new Bundesliga Match Facts: Shot Speed, a metric that allows us to quantify and objectively compare the velocity of shots taken by different Bundesliga players. This statistic will provide commentators and fans with valuable insights into the power and speed of shots on goal.

The development of the Bundesliga Match Facts is the result of extensive analysis conducted by a collaborative team of soccer experts and data scientists from the Bundesliga and AWS. Notable shot speeds will be displayed in real time on the live ticker during matches, accessible through the official Bundesliga app and website. Additionally, this data will be made readily available to commentators via the Data Story Finder and visually presented to fans at key moments during broadcasts.

We’re confident that the introduction of this brand-new Bundesliga Match Fact will enhance your understanding of the game and add a new dimension to your viewing experience. To delve deeper into the partnership between AWS and Bundesliga, please visit Bundesliga on AWS!

We’re eagerly looking forward to the insights you uncover with this new Shot Speed metric. Share your findings with us on X: @AWScloud, using the hashtag #BundesligaMatchFacts.


About the Authors

Tareq Haschemi is a consultant within AWS Professional Services. His skills and areas of expertise include application development, data science, and machine learning (ML). He supports customers in developing data-driven applications within the AWS Cloud. Prior to joining AWS, he was also a consultant in various industries, such as aviation and telecommunications. He is passionate about enabling customers on their data and artificial intelligence (AI) journey to the cloud.

Jean-Michel Lourier is a Senior Data Scientist within AWS Professional Services. He leads teams implementing data-driven applications side-by-side with AWS customers to generate business value out of their data. He’s passionate about diving into tech and learning about AI, ML, and their business applications. He is also an enthusiastic cyclist, taking long bike-packing trips.

Luc Eluère is a Data Scientist within Sportec Solutions AG. His mission is to develop and provide valuable KPIs to the soccer industry. At university, he learned statistical theory with one goal: to apply its concepts to the beautiful game. Even though he was promised a nice career in table soccer, his passion for data science took over, and he chose computers as a career path.

Javier Poveda-Panter is a Senior Data and Machine Learning Engineer for EMEA sports customers within the AWS Professional Services team. He enables customers in the area of spectator sports to innovate and capitalize on their data, delivering high-quality user and fan experiences through ML, data science, and analytics. He follows his passion for a broad range of sports, music, and AI in his spare time.

Deploy ML models built in Amazon SageMaker Canvas to Amazon SageMaker real-time endpoints

Amazon SageMaker Canvas now supports deploying machine learning (ML) models to real-time inferencing endpoints, allowing you to take your ML models to production and drive action based on ML-powered insights. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions for their business needs.

Until now, SageMaker Canvas provided the ability to evaluate an ML model, generate bulk predictions, and run what-if analyses within its interactive workspace. But now you can also deploy the models to Amazon SageMaker endpoints for real-time inferencing, making it effortless to consume model predictions and drive actions outside the SageMaker Canvas workspace. Having the ability to directly deploy ML models from SageMaker Canvas eliminates the need to manually export, configure, test, and deploy ML models into production, thereby reducing complexity and saving time. It also makes operationalizing ML models more accessible to individuals, without the need to write code.

In this post, we walk you through the process to deploy a model in SageMaker Canvas to a real-time endpoint.

Overview of solution

For our use case, we are assuming the role of a business user in the marketing department of a mobile phone operator, and we have successfully created an ML model in SageMaker Canvas to identify customers with the potential risk of churn. Thanks to the predictions generated by our model, we now want to move this from our development environment to production. To streamline the process of deploying our model endpoint for inference, we directly deploy ML models from SageMaker Canvas, thereby eliminating the need to manually export, configure, test, and deploy ML models into production. This helps reduce complexity, saves time, and also makes operationalizing ML models more accessible to individuals, without the need to write code.

The workflow steps are as follows:

  1. Upload a new dataset with the current customer population into SageMaker Canvas. For the full list of supported data sources, refer to Import data into Canvas.
  2. Build ML models and analyze their performance metrics. For instructions, refer to Build a custom model and Evaluate Your Model’s Performance in Amazon SageMaker Canvas.
  3. Deploy the approved model version as an endpoint for real-time inferencing.

You can perform these steps in SageMaker Canvas without writing a single line of code.

Prerequisites

For this walkthrough, make sure that the following prerequisites are met:

  1. To deploy model versions to SageMaker endpoints, the SageMaker Canvas admin must give the necessary permissions to the SageMaker Canvas user, which you can manage in the SageMaker domain that hosts your SageMaker Canvas application. For more information, refer to Permissions Management in Canvas.
  2. Implement the prerequisites mentioned in Predict customer churn with no-code machine learning using Amazon SageMaker Canvas.

You should now have three model versions trained on historical churn prediction data in Canvas:

  • V1 trained with all 21 features and quick build configuration, with a model score of 96.903%
  • V2 trained with 19 features (the phone and state features removed) and quick build configuration, with an improved model score of 97.403%
  • V3 trained with standard build configuration, with a model score of 97.103%

Use the customer churn prediction model

Enable Show advanced metrics on the model details page and review the objective metrics associated with each model version so that you can select the best-performing model for deploying to SageMaker as an endpoint.

Based on the performance metrics, we select version 2 to be deployed.

Configure the model deployment settings—deployment name, instance type, and instance count.

As a starting point, Canvas will automatically recommend the best instance type and the number of instances for your model deployment. You can change it as per your workload needs.

You can test the deployed SageMaker inference endpoint directly from within SageMaker Canvas.

You can change input values using the SageMaker Canvas user interface to infer additional churn predictions.

Now let’s navigate to Amazon SageMaker Studio and check out the deployed endpoint.

Open a notebook in SageMaker Studio and run the following code to infer the deployed model endpoint. Replace the model endpoint name with your own model endpoint name.

import boto3
import pandas as pd

# Replace with your own endpoint name from SageMaker Canvas
endpoint_name = "canvas-customer-churn-prediction-model"
sm_rt = boto3.Session().client('runtime.sagemaker')

# One record whose values follow the same feature order as the training dataset
payload = [['PA', 163, 806, '403-2562', 'no', 'yes', 300, 8.16, 3, 7.57, 3.93, 4, 6.5, 4.07, 100, 5.11, 4.92, 6, 5.67, 3]]
body = pd.DataFrame(payload).to_csv(header=False, index=False).encode("utf-8")

response = sm_rt.invoke_endpoint(EndpointName=endpoint_name, Body=body, ContentType="text/csv", Accept="application/json")

response = response['Body'].read().decode("utf-8")
print(response)

Our original model endpoint is using an ml.m5.xlarge instance and 1 instance count. Now, let’s assume you expect the number of end-users inferencing your model endpoint will increase and you want to provision more compute capacity. You can accomplish this directly from within SageMaker Canvas by choosing Update configuration.
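
The console takes care of this reconfiguration for you. Purely for illustration, the equivalent operation with the AWS SDK for Python could look like the following sketch, where the endpoint config name, model name, and instance count are placeholders.

import boto3

sm = boto3.client("sagemaker")

# Create a new endpoint config with more capacity (names are placeholders)
sm.create_endpoint_config(
    EndpointConfigName="canvas-churn-config-v2",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "<canvas-model-name>",
        "InstanceType": "ml.m5.xlarge",
        "InitialInstanceCount": 2,  # scale out from 1 to 2 instances
    }],
)

# Point the existing endpoint at the new config
sm.update_endpoint(
    EndpointName="canvas-customer-churn-prediction-model",
    EndpointConfigName="canvas-churn-config-v2",
)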

Clean up

To avoid incurring future charges, delete the resources you created while following this post. This includes logging out of SageMaker Canvas and deleting the deployed SageMaker endpoint. SageMaker Canvas bills you for the duration of the session, and we recommend logging out of SageMaker Canvas when you’re not using it. Refer to Logging out of Amazon SageMaker Canvas for more details.

Conclusion

In this post, we discussed how SageMaker Canvas can deploy ML models to real-time inferencing endpoints, allowing you to take your ML models to production and drive action based on ML-powered insights. In our example, we showed how an analyst can quickly build a highly accurate predictive ML model without writing any code, deploy it on SageMaker as an endpoint, and test the model endpoint from SageMaker Canvas, as well as from a SageMaker Studio notebook.

To start your low-code/no-code ML journey, refer to Amazon SageMaker Canvas.

Special thanks to everyone who contributed to the launch: Prashanth Kurumaddali, Abishek Kumar, Allen Liu, Sean Lester, Richa Sundrani, and Alicia Qi.


About the Authors

Janisha Anand is a Senior Product Manager in the Amazon SageMaker Low/No Code ML team, which includes SageMaker Canvas and SageMaker Autopilot. She enjoys coffee, staying active, and spending time with her family.

Indy Sawhney is a Senior Customer Solutions Leader with Amazon Web Services. Always working backward from customer problems, Indy advises AWS enterprise customer executives through their unique cloud transformation journey. He has over 25 years of experience helping enterprise organizations adopt emerging technologies and business solutions. Indy is an area of depth specialist with AWS’s Technical Field Community for AI/ML, with specialization in generative AI and low-code/no-code Amazon SageMaker solutions.

Develop generative AI applications to improve teaching and learning experiences

Recently, teachers and institutions have looked for different ways to incorporate artificial intelligence (AI) into their curriculums, whether it be teaching about machine learning (ML) or incorporating it into creating lesson plans, grading, or other educational applications. Generative AI models, in particular large language models (LLMs), have dramatically sped up AI’s impact on education. Generative AI and natural language processing (NLP) models have great potential to enhance teaching and learning by generating personalized learning content and providing engaging learning experiences for students.

In this post, we create a generative AI solution for teachers to create course materials and for students to learn English words and sentences. When students provide answers, the solution provides real-time assessments and offers personalized feedback and guidance for students to improve their answers.

Specifically, teachers can use the solution to do the following:

  • Create an assignment for students by generating questions and answers from a prompt
  • Create an image from the prompt to represent the assignment
  • Save the new assignment to a database
  • Browse existing assignments from the database

Students can use the solution to do the following:

  • Select and review an assignment from the assignment database
  • Answer the questions of the selected assignment
  • Check the grading scores of the answers in real time
  • Review the suggested grammatical improvements to their answers
  • Review the suggested sentence improvements to their answers
  • Read the recommended answers

We walk you through the steps of creating the solution using Amazon Bedrock, Amazon Elastic Container Service (Amazon ECS), Amazon CloudFront, Elastic Load Balancing (ELB), Amazon DynamoDB, Amazon Simple Storage Service (Amazon S3), and AWS Cloud Development Kit (AWS CDK).

Solution overview

The following diagram shows the resources and services used in the solution.

The solution runs as a scalable service. Teachers and students use their browsers to access the application. The content is served through an Amazon CloudFront distribution with an Application Load Balancer as its origin. It saves the generated images to an S3 bucket, and saves the teacher’s assignments and the students’ answers and scores to separate DynamoDB tables.

The solution uses Amazon Bedrock to generate questions, answers, and assignment images, and to grade students’ answers. Amazon Bedrock is a fully managed service that makes foundation models from leading AI startups and Amazon available via easy-to-use API interfaces. The solution also uses the grammatical error correction API and the paraphrase API from AI21 to recommend word and sentence corrections.

You can find the implementation details in the following sections. The source code is available in the GitHub repository.

Prerequisites

You should have some knowledge of generative AI, ML, and the services used in this solution, including Amazon Bedrock, Amazon ECS, Amazon CloudFront, Elastic Load Balancing, Amazon DynamoDB, and Amazon S3.

We use AWS CDK to build and deploy the solution. You can find the setup instructions in the readme file.

Create assignments

Teachers can create an assignment from an input text using the following GUI page. An assignment comprises an input text, the questions and answers generated from the text, and an image generated from the input text to represent the assignment.

For our example, a teacher inputs the Kids and Bicycle Safety guidelines from the United States Department of Transportation. For the input text, we use the file bike.safe.riding.tips.txt.

The following is the generated image output.

The following are the generated questions and answers:

"question": "What should you always wear when riding a bicycle?",
"answer": "You should always wear a properly fitted bicycle helmet when riding a bicycle. A helmet protects your brain and can save your life in a crash."

"question": "How can you make sure drivers can see you when you are bicycling?",
"answer": "To make sure drivers can see you, wear bright neon or fluorescent colors. Also use reflective tape, markings or flashing lights so you are visible."

"question": "What should you do before riding your bicycle?",
"answer": "Before riding, you should inspect your bicycle to make sure all parts are secure and working properly. Check that tires are inflated, brakes work properly, and reflectors are in place."

"question": "Why is it more dangerous to ride a bicycle at night?",
"answer": "It is more dangerous to ride at night because it is harder for other people in vehicles to see you in the dark."

"question": "How can you avoid hazards while bicycling?",
"answer": "Look ahead for hazards like potholes, broken glass, and dogs. Point out and yell about hazards to bicyclists behind you. Avoid riding at night when it is harder to see hazards."

The teacher expects the students to complete the assignment by reading the input text and then answering the generated questions.

The portal uses Amazon Bedrock to create questions, answers, and images. Amazon Bedrock speeds up the development of generative AI solutions by exposing the foundation models through API interfaces. You can find the source code in the file 1_Create_Assignments.py.

The portal invokes two foundation models:

  • Stable Diffusion XL to generate images using the function query_generate_image_endpoint
  • Anthropic Claude v2 to generate questions and answers using the function query_generate_questions_answers_endpoint
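
As a rough sketch of what such a call could look like behind query_generate_questions_answers_endpoint, the following snippet invokes Anthropic Claude v2 through the Bedrock runtime API; the prompt wording and response handling are simplified assumptions, not the portal’s actual implementation.

import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def generate_questions_answers(input_text, n_questions=5):
    # Ask Claude v2 for question/answer pairs about the input text
    prompt = (
        f"\n\nHuman: Read the following text and create {n_questions} "
        f"question/answer pairs as JSON.\n\n{input_text}\n\nAssistant:"
    )
    body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 1024})
    response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
    return json.loads(response["body"].read())["completion"]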

The portal saves generated images to an S3 bucket using the function load_file_to_s3. It creates an assignment based on the input text, the teacher ID, the generated questions and answers, and the S3 bucket link for the loaded image. It saves the assignment to the DynamoDB table assignments using the function insert_record_to_dynamodb.
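
For illustration, the underlying calls could resemble the following sketch; the bucket, key, and item attributes are placeholders, and the real helper functions add error handling.

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Upload the generated image and store the assignment record (names are placeholders)
s3.upload_file("/tmp/assignment.png", "<bucket-name>", "images/assignment.png")
dynamodb.Table("assignments").put_item(Item={
    "teacher_id": "<teacher_id>",
    "assignment_id": "<assignment_id>",
    "input_text": "<input_text>",
    "questions_answers": "<generated_questions_and_answers_json>",
    "image_s3_url": "s3://<bucket-name>/images/assignment.png",
})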

You can find the AWS CDK code that creates the DynamoDB table in the file cdk_stack.py.

Show assignments

Teachers can browse assignments and the generated artifacts using the following GUI page.

The portal uses the function get_records_from_dynamodb to retrieve the assignments from the DynamoDB table assignments. It uses the function download_image to download an image from the S3 bucket. You can find the source code in the file 2_Show_Assignments.py.

Answer questions

A student selects and reads a teacher’s assignment and then answers the questions of the assignment.

The portal provides an engaging learning experience. For example, when the student provides the answer “I should waer hat protect brain in crash” the portal grades the answer in real time by comparing the answer with the correct answer. The portal also ranks all students’ answers to the same question and shows the top three scores. You can find the source code in the file 3_Complete_Assignments.py.

The portal saves the student’s answers to a DynamoDB table called answers. You can find the AWS CDK code that creates the DynamoDB table in the file cdk_stack.py.

To grade a student’s answer, the portal invokes the Amazon Titan Embeddings model to translate the student’s answer and the correct answer into numerical representations and then compute their similarity as a score. You can find the solution in the file 3_Complete_Assignments.py.
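
A minimal sketch of this grading idea, assuming the Titan text embeddings model on the Bedrock runtime and plain cosine similarity, could look like the following; the 0–100 scaling is illustrative.

import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime")

def embed(text):
    # Titan text embeddings model ID is an assumption for this sketch
    body = json.dumps({"inputText": text})
    response = bedrock.invoke_model(modelId="amazon.titan-embed-text-v1", body=body)
    return np.array(json.loads(response["body"].read())["embedding"])

def grade(student_answer, correct_answer):
    a, b = embed(student_answer), embed(correct_answer)
    # Cosine similarity between the two embeddings, scaled to a 0-100 score
    return 100 * float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))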

The portal generates suggested grammatical corrections and sentence improvements for the student’s answer. Finally, the portal shows the correct answer to the question.

The portal uses the grammatical error correction API and the paraphrase API from AI21 to generate the recommended grammatical and sentence improvements. The AI21 paraphrase model is available as a foundation model in SageMaker. You can deploy the AI21 paraphrase model as an inference point in SageMaker and invoke the inference point to generate sentence improvements.

The functions generate_suggestions_sentence_improvements and generate_suggestions_word_improvements in the file 3_Complete_Assignments.py show an alternative way of using the AI21 REST API endpoints. You need to create an AI21 account and find the API key associated with your account to invoke the APIs. You will have to pay for the invocations after the trial period.

Conclusion

This post showed you how to use an AI-assisted solution to improve the teaching and learning experience by using multiple generative AI and NLP models. You can use the same approach to develop other generative AI prototypes and applications.

If you’re interested in the fundamentals of generative AI and how to work with foundation models, including advanced prompting techniques, check out the hands-on course Generative AI with LLMs. It’s an on-demand, 3-week course for data scientists and engineers who want to learn how to build generative AI applications with LLMs. It’s a good foundation to start building with Amazon Bedrock. Visit the Amazon Bedrock Features page and sign up to learn more about Amazon Bedrock.


About the Authors

Jeff Li is a Senior Cloud Application Architect with the Professional Services team at AWS. He is passionate about diving deep with customers to create solutions and modernize applications that support business innovations. In his spare time, he enjoys playing tennis, listening to music, and reading.

Isaac Privitera is a Senior Data Scientist at the Generative AI Innovation Center, where he develops bespoke generative AI based solutions to address customers’ business problems. He works primarily on building responsible AI systems using retrieval augmented generation (RAG) and chain of thought reasoning. In his spare time he enjoys golf, football, and walking with his dog Barry.

Harish Vaswani is a Principal Cloud Application Architect at Amazon Web Services. He specializes in architecting and building cloud native applications and enables customers with best practices in their cloud transformation journey. Outside of work, Harish and his wife, Simin, are award-winning independent short film producers and love spending their time with their 5-year old son, Karan.

Dialogue-guided visual language processing with Amazon SageMaker JumpStart

Visual language processing (VLP) is at the forefront of generative AI, driving advancements in multimodal learning that encompasses language intelligence, vision understanding, and processing. Combined with large language models (LLM) and Contrastive Language-Image Pre-Training (CLIP) trained with a large quantity of multimodality data, visual language models (VLMs) are particularly adept at tasks like image captioning, object detection and segmentation, and visual question answering. Their use cases span various domains, from media entertainment to medical diagnostics and quality assurance in manufacturing.

Key strengths of VLP include the effective utilization of pre-trained VLMs and LLMs, enabling zero-shot or few-shot predictions without necessitating task-specific modifications, and categorizing images from a broad spectrum through casual multi-round dialogues. Augmented by Grounded Segment Anything, VLP exhibits prowess in visual recognition, with object detection and segmentation being particularly notable. The potential exists to fine-tune VLMs and LLMs further using domain-specific data, aiming to boost precision and mitigate hallucination. However, like other nascent technologies, obstacles remain in managing model intricacy, harmonizing diverse modalities, and formulating uniform evaluation metrics.

Courtesy of NOMIC for OBELICS, HuggingFaceM4 for IDEFICS, Charles Bensimon for Gradio and Amazon Polly for TTS

In this post, we explore the technical nuances of VLP prototyping using Amazon SageMaker JumpStart in conjunction with contemporary generative AI models. Through multi-round dialogues, we highlight the capabilities of instruction-oriented zero-shot and few-shot vision language processing, emphasizing its versatility and aiming to capture the interest of the broader multimodal community. The demo implementation code is available in the following GitHub repo.

Solution overview

The proposed VLP solution integrates a suite of state-of-the-art generative AI modules to yield accurate multimodal outputs. Central to the architecture are the fine-tuned VLM and LLM, both instrumental in decoding visual and textual data streams. The TGI framework underpins the model inference layer, providing RESTful APIs for robust integration and effortless accessibility. Supplementing our auditory data processing, the Whisper ASR is also furnished with a RESTful API, enabling streamlined voice-to-text conversions. Addressing complex challenges like image-to-text segmentation, we use the containerized Grounded Segment Anything module, synergizing with the Grounded DINO and Segment Anything Model (SAM) mechanism for text-driven object detection and segmentation. The system is further refined with DistilBERT, optimizing our dialogue-guided multi-class classification process. Orchestrating these components is the LangChain processing pipeline, a sophisticated mechanism proficient in dissecting text or voice inputs, discerning user intentions, and methodically delegating sub-tasks to the relevant services. The synthesis of these operations produces aggregated outputs, delivering pinpoint and context-aware multimodal answers.

The following diagram illustrates the architecture of our dialogue-guided VLP solution.

Text Generation Inference

Text Generation Inference (TGI) is an open-source toolkit developed by Hugging Face for deploying LLMs as well as VLMs for inference. It enables high-performance text generation using tensor parallelism, model parallelism, and dynamic batching, and supports leading open-source LLMs such as Falcon and Llama V2, as well as VLMs like IDEFICS. Utilizing the latest Hugging Face LLM modules on Amazon SageMaker, AWS customers can now tap into the power of SageMaker deep learning containers (DLCs). This allows for the seamless deployment of LLMs from the Hugging Face hubs via pre-built SageMaker DLCs supporting TGI. This inference setup not only offers exceptional performance but also eliminates the heavy lifting of managing GPU infrastructure. Additionally, you benefit from advanced features like auto scaling of inference endpoints, enhanced security, and built-in model monitoring.

TGI offers text generation speeds up to 100 times faster than traditional inference methods and scales efficiently to handle increased requests. Its design ensures compatibility with various LLMs and, being open-source, democratizes advanced features for the tech community. TGI’s versatility extends across domains, enhancing chatbots, improving machine translations, summarizing texts, and generating diverse content, from poetry to code. Therefore, TGI emerges as a comprehensive solution for text generation challenges. TGI is implemented in Python and uses the PyTorch framework. It’s open-source and available on GitHub. It also supports PEFT with QLoRA for faster performance and logits warping to control generated text attributes, such as determining its length and diversity, without modifying the underlying model.

You can build a customized TGI Docker container directly from the following Dockerfile and then push the container image to Amazon Elastic Container Registry (ECR) for inference deployment. See the following code:

%%sh
# Define the Docker image name and the container image's Amazon Resource Name on ECR
container_name="tgi1.03"
region=`aws configure get region`
account=`aws sts get-caller-identity --query "Account" --output text`
full_name="${account}.dkr.ecr.${region}.amazonaws.com/${container_name}:latest"

# Get the login command from ECR and execute it directly
aws ecr get-login-password --region ${region} | docker login --username AWS \
    --password-stdin ${account}.dkr.ecr.${region}.amazonaws.com

# Build the TGI Docker image locally, tag it, and push it to ECR
docker build . -f Dockerfile -t ${container_name}
docker tag ${container_name} ${full_name}
docker push ${full_name}

LLM inference with TGI

The VLP solution in this post employs the LLM in tandem with LangChain, harnessing the chain-of-thought (CoT) approach for more accurate intent classification. CoT processes queries to discern intent and trigger-associated sub-tasks to meet the query’s goals. Llama-2-7b-chat-hf (license agreement) is the streamlined version of the Llama-2 line, designed for dialogue contexts. The inference of Llama-2-7b-chat-hf is powered by the TGI container image, making it available as an API-enabled service.

For Llama-2-7b-chat-hf inference, a g5.2xlarge (24G VRAM) is recommended to achieve peak performance. For applications necessitating a more robust LLM, the Llama-v2-13b models fit well with a g5.12xlarge (96G VRAM) instance. For the Llama-2-70b models, consider either the GPU [2xlarge] – 2x Nvidia A100 utilizing bitsandbytes quantization or the g5.48xlarge. Notably, employing bitsandbytes quantization can reduce the required inference GPU VRAM by 50%.

You can use SageMaker DLCs with the TGI container image detailed earlier to deploy Llama-2-7b-chat-hf for inference (see the following code). Alternatively, you can stand up a quick local inference for a proof of concept on a g5.2xlarge instance using a Docker container.

import json
from time import gmtime, strftime
from sagemaker.huggingface import get_huggingface_llm_image_uri
from sagemaker.huggingface import HuggingFaceModel
from sagemaker import get_execution_role

# Prerequisite: create a unique model name
model_name = 'Llama-7b-chat-hf' + strftime("%Y-%m-%d-%H-%M-%S", gmtime())

# Retrieve the LLM image URI of the SageMaker pre-built DLC for TGI v1.03
tgi_image_ecr_uri = get_huggingface_llm_image_uri(
  "huggingface",
  version="1.0.3"
)

# Define model and endpoint configuration parameters
number_of_gpu = 1 # Number of GPUs on the target instance type (1 for ml.g5.2xlarge)
hf_config = {
  'HF_MODEL_ID': "meta-llama/Llama-2-7b-chat-hf", # Matching model_id on Hugging Face Hub
  'SM_NUM_GPUS': json.dumps(number_of_gpu),
  'MAX_TOTAL_TOKENS': json.dumps(1024),
  'HF_MODEL_QUANTIZE': "bitsandbytes", # Use quantization for a lower VRAM requirement; comment out if not needed
}

# create HuggingFaceModel with the SageMaker pre-built DLC TGI image uri
sm_llm_model = HuggingFaceModel(
  role=get_execution_role(),
  image_uri=tgi_image_ecr_uri,
  env=hf_config
)

# Deploy the model
llm = sm_llm_model.deploy(
  initial_instance_count=1,
  instance_type="ml.g5.2xlarge",
  container_startup_health_check_timeout=300, # in sec. Allow 5 minutes to be able to load the model
)

# define inference payload
prompt="""<|prompter|>How to select a right LLM for your generative AI project?<|endoftext|><|assistant|>"""

# hyperparameters for llm
payload = {
  "inputs": prompt,
  "parameters": {
    "best_of": 1,
    "decoder_input_details": True,
    "details": True,
    "do_sample": True,
    "max_new_tokens": 20,
    "repetition_penalty": 1.03,
    "return_full_text": False,
    "seed": None,
    "stop": [
      "photographer"
    ],
    "temperature": 0.5,
    "top_k": 10,
    "top_p": 0.95,
    "truncate": None,
    "typical_p": 0.95,
    "watermark": True
  },
  "stream": False
}

# send request to endpoint
response = llm.predict(payload)

Fine-tune and customize your LLM

SageMaker JumpStart offers numerous notebook samples that demonstrate the use of Parameter Efficient Fine Tuning (PEFT), including QLoRA for training and fine-tuning LLMs. QLoRA maintains the pre-trained model weights in a static state and introduces trainable rank decomposition matrices into each layer of the Transformer structure. This method substantially decreases the number of trainable parameters needed for downstream tasks.
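
For orientation, a QLoRA-style setup with the Hugging Face transformers and peft libraries could look like the following sketch; the base model, rank, and target modules are illustrative choices rather than the JumpStart notebook defaults.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model in 4-bit precision (base model choice is an assumption)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Keep the base weights frozen and add small trainable rank decomposition matrices
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()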

Alternatively, you can explore Direct Preference Optimization (DPO), which obviates the necessity for setting up a reward model, drawing samples during fine-tuning from the LLM, or extensive hyperparameter adjustments. Recent research has shown that DPO’s fine-tuning surpasses RLHF in managing sentiment generation and enhances the quality of summaries and single-conversation responses, all while being considerably easier to set up and train. There are three main steps to the DPO training process (refer to the GitHub repo for details):

  1. Perform supervised fine-tuning of a pre-trained base LLM to create a fine-tuned LLM.
  2. Run the DPO trainer using the fine-tuned model to create a reinforcement learning model.
  3. Merge the adaptors from DPO into the base LLM model for text generation inference.

You can deploy the merged model for inference using the TGI container image.
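
As a hedged sketch of that final merge step using the peft library, assuming the DPO adapters were saved to a local path:

from transformers import AutoModelForCausalLM
from peft import PeftModel

# Paths and model IDs are placeholders; the adapter comes from the DPO training step above
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "./dpo-adapter")

merged = model.merge_and_unload()                   # fold the DPO adapters into the base weights
merged.save_pretrained("./llama-2-7b-dpo-merged")   # ready to serve with the TGI container image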

Visual language model

Visual language models (VLMs), which combine the vision and language modalities, have shown increasing effectiveness in generalization, leading to various practical use cases with zero-shot prompts or few-shot prompts with instructions. A VLM typically consists of three key elements: an image encoder, a text encoder, and a strategy to fuse information from the two encoders. These key elements are tightly coupled because the loss functions are designed around both the model architecture and the learning strategy. Many state-of-the-art VLMs use CLIP/ViT (such as OpenCLIP) and LLMs (such as Llama-v1) and are trained on multiple publicly available datasets such as Wikipedia, LAION, and Public Multimodal Dataset.

This demo used a pre-trained IDEFICS-9b-instruct model developed by HuggingFaceM4, a fine-tuned version of IDEFICS-9b, following the training procedure laid out in Flamingo by combining the two pre-trained models (laion/CLIP-ViT-H-14-laion2B-s32B-b79K and huggyllama/llama-7b) with modified Transformer blocks. The IDEFICS-9b was trained on the OBELICS, Wikipedia, LAION, and PMD multimodal datasets with a total of 150 billion tokens and 1.582 billion images with 224×224 resolution each. The IDEFICS-9b was based on Llama-7b with a 1.31 million effective batch size. The IDEFICS-9b-instruct was then fine-tuned with 6.8 million multimodality instruction datasets created from augmentation using generative AI by unfreezing all the parameters (vision encoder, language model, cross-attentions). The fine-tuning datasets include the pre-training data with the following sampling ratios: 5.1% of image-text pairs and 30.7% of OBELICS multimodal web documents.

The training software is built on top of Hugging Face Transformers and Accelerate, and DeepSpeed ZeRO-3 for training, plus WebDataset and Image2DataSets for data loading. The pre-training of IDEFICS-9b took 350 hours to complete on 128 Nvidia A100 GPUs, whereas fine-tuning of IDEFICS-9b-instruct took 70 hours on 128 Nvidia A100 GPUs, both on AWS p4.24xlarge instances.

With SageMaker, you can seamlessly deploy IDEFICS-9b-instruct on a g5.2xlarge instance for inference tasks. The following code snippet illustrates how to launch a tailored deep learning local container integrated with the customized TGI Docker image:

%%sh
llm_model='HuggingFaceM4/idefics-9b-instruct'
docker_rt_name='idefics-9b-instruct'
docker_image_name='tgi1.03'
docker run --gpus="1,2,3,4" --shm-size 20g -p 8080:80 --restart unless-stopped --name ${docker_rt_name} ${docker_image_name} --model-id ${llm_model}

# Test the LLM API using curl
curl -X 'POST' 'http://<hostname_or_ip>:8080/' \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
        "inputs": "User:![](http://<image_url>/image.png)Which device produced this image? Please explain the main clinical purpose of such image? Can you write a radiology report based on this image?<end_of_utterance>",
        "parameters": {
            "best_of": 1, "decoder_input_details": true,
            "details": true, "do_sample": true, "max_new_tokens": 20,
            "repetition_penalty": 1.03, "return_full_text": false,
            "seed": null, "stop": ["photographer"],
            "temperature": 0.5, "top_k": 10, "top_p": 0.95,
            "truncate": null, "typical_p": 0.95, "watermark": true },
        "stream": false
        }'

You can fine-tune IDEFICS or other VLMs including Open Flamingo with your own domain-specific data with instructions. Refer to the following README for multimodality dataset preparation and the fine-tuning script for further details.

Intent classification with chain-of-thought

A picture is worth a thousand words; therefore, a VLM requires guidance to generate an accurate caption from a given image and question. We can use few-shot prompting to enable in-context learning, where we provide demonstrations in the prompt to steer the model to better performance. The demonstrations serve as conditioning for subsequent examples where we would like the model to generate a response.

Standard few-shot prompting works well for many tasks but is still not a perfect technique, especially when dealing with more complex reasoning tasks. The few-shot prompting template is not enough to get reliable responses. It might help if we break the problem down into steps and demonstrate that to the model. More recently, chain-of-thought (CoT) prompting has been popularized to address more complex arithmetic, common sense, and symbolic reasoning tasks.

Auto-CoT eliminates manual effort by using LLMs with a “Let’s think step by step” prompt to generate reasoning chains for demonstrations one by one. However, this automatic process can still end up with mistakes in the generated chains. To mitigate the effect of those mistakes, the diversity of demonstrations matters: Auto-CoT samples questions with diversity and generates reasoning chains to construct the demonstrations. Auto-CoT consists of two main stages:

  • Question clustering – Partition questions of a given dataset into a few clusters
  • Demonstration sampling – Select a representative question from each cluster and generate its reasoning chain using zero-shot CoT with simple heuristics

See the following code snippet:

from langchain.llms import HuggingFaceTextGenInference
from langchain import PromptTemplate, LLMChain

inference_server_url_local = <Your_local_url_for_llm_on_tgi:port>

llm_local = HuggingFaceTextGenInference(
    inference_server_url=inference_server_url_local,
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.1,
    repetition_penalty=1.05,
)

template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use five sentences maximum and keep the answer as concise as possible. List all actionable sub-tasks step by step in detail. Be cautious to avoid phrasing that might replicate previous
inquiries. This will help in obtaining an accurate and detailed answer. Avoid repetition for clarity.

Question: {question}
Answer: Understand the intent of the question then break down the {question} in to sub-tasks. """

prompt = PromptTemplate(
    template=template, 
    input_variables= ["question"]
)

llm_chain_local = LLMChain(prompt=prompt, llm=llm_local)
llm_chain_local("Can you describe the nature of this image? Do you think it's real??")

Automatic Speech Recognition

The VLP solution incorporates Whisper, an Automatic Speech Recognition (ASR) model by OpenAI, to handle audio queries. Whisper can be effortlessly deployed via SageMaker JumpStart using its template. SageMaker JumpStart, known for its straightforward setup, high performance, scalability, and dependability, is ideal for developers aiming to craft exceptional voice-driven applications. The following GitHub repo demonstrates how to harness SageMaker real-time inference endpoints to fine-tune and host Whisper for instant audio-to-text transcription, showcasing the synergy between SageMaker hosting and generative models.

Alternatively, you can directly download the Dockerfile.gpu from GitHub developed by ahmetoner, which includes a pre-configured RESTful API. You can then construct a Docker image and run the container on a GPU-powered Amazon Elastic Compute Cloud (EC2) instance for a quick proof of concept. See the following code:

%%sh
docker_image_name='whisper-asr-webservice-gpu'
docker build -f Dockerfile.gpu -t ${docker_image_name} .
docker run -d --gpus all -p 8083:9000 --restart unless-stopped -e ASR_MODEL=base ${docker_image_name}

curl -X 'POST' 'http://<asr_api_hostname>:<port>/asr?task=transcribe&encode=true&output=txt' \
    -H 'accept: application/json' \
    -H 'Content-Type: multipart/form-data' \
    -F 'audio_file=@dgvlp_3_5.mp3;type=audio/mpeg'

In the provided example, port 8083 is selected to host the Whisper API, with inbound network security rules activated. To test, direct a web browser to http://<IP_or_hostname>:8083/docs and initiate a POST request test to the ASR endpoint. As an alternative, run the given command or employ the whisper-live module to verify API connectivity.

!pip install whisper-live
from whisper_live.client import TranscriptionClient
client = TranscriptionClient("<whisper_hostname_or_IP>", 8083, is_multilingual=True, lang="zh", translate=True)
client(audio_file_path) # Transcribe an audio file
client() # Use the microphone for transcription

Multi-class text classification and keyword extraction

Multi-class classification plays a pivotal role in text prompt-driven object detection and segmentation. The distilbert-base-uncased-finetuned-sst-2-english model is a refined checkpoint of DistilBERT-base-uncased, optimized on the Stanford Sentiment Treebank (SST2) dataset by Hugging Face. This model achieves a 91.3% accuracy on the development set, while its counterpart bert-base-uncased boasts an accuracy of 92.7%. The Hugging Face Hub provides access to over 1,000 pre-trained text classification models. For those seeking enhanced precision, SageMaker JumpStart provides templates to fine-tune DistilBERT using custom annotated datasets for more tailored classification tasks.

import torch
from transformers import pipeline

def mclass(text_prompt, top_k=3, topics = ['Mask creation', 'Object detection',
        'Inpainting', 'Segmentation', 'Upscaling', 'Creating an image from another one', 'Generating an image from text'],
        model='distilbert-base-uncased-finetuned-sst-2-english'):

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    # Define a hypothesis template and the potential candidates for entailment/contradiction
    hypothesis_template = 'The topic is {}'
    # Pipeline abstraction from Hugging Face
    pipe = pipeline(task='zero-shot-classification', model=model, tokenizer=model, device=device)
    # Run the pipeline with a test case
    prediction = pipe(text_prompt, topics, hypothesis_template=hypothesis_template)
    # Top 3 topics as predicted in zero-shot regime
    return zip(prediction['labels'][0:top_k], prediction['scores'][0:top_k])

top_3_intend = mclass(text_prompt=user_prompt_str, topics=['Others', 'Create image mask', 'Image segmentation'], top_k=3) 

The keyword extraction process employs the KeyBERT module, a streamlined and user-friendly method that harnesses BERT embeddings to generate keywords and key phrases closely aligned with a document—in this case, the objects specified in the query:

# Keyword extraction
from keybert import KeyBERT
kw_model = KeyBERT()
words_list = kw_model.extract_keywords(docs=<user_prompt_str>, keyphrase_ngram_range=(1,3))

Text prompt-driven object detection and classification

The VLP solution employs dialogue-guided object detection and segmentation by analyzing the semantic meaning of the text and identifying the action and objects from text prompt. Grounded-SAM is an open-source package created by IDEA-Research to detect and segment anything from a given image with text inputs. It combines the strengths of Grounding DINO and Segment Anything in order to build a very powerful pipeline for solving complex problems.

The following figure illustrates how Grounded-SAM can detect objects and conduct instance segmentation by comprehending textual input.

SAM stands out as a robust segmentation model, though it requires prompts, such as bounding boxes or points, to produce high-quality object masks. Grounding DINO excels as a zero-shot detector, adeptly creating high-quality boxes and labels using free-form text prompts. When these two models are combined, they offer the remarkable capability to detect and segment any object purely through text inputs. The Python utility script dino_sam_inpainting.py was developed to integrate Grounded-SAM methods:

!pip install git+https://github.com/facebookresearch/segment-anything.git
import os
import cv2
import torch
import matplotlib.pyplot as plt
import dino_sam_inpainting as D

def dino_sam(image_path, text_prompt, text_threshold=0.4, box_threshold=0.5, output_dir='/tmp/gradio/outputs'):
    config_file = 'GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py'  # change the path of the model config file
    grounded_checkpoint = './models/groundingdino_swint_ogc.pth'  # change the path of the model
    sam_checkpoint = './models/sam_vit_h_4b8939.pth'
    sam_hq_checkpoint = ''  # to use high-quality SAM, point this to a checkpoint such as sam_hq_vit_h.pth
    use_sam_hq = ''
    device = 'cuda'

    # make dir
    os.makedirs(output_dir, exist_ok=True)
    # load image
    image_pil, image = D.load_image(image_path)
    # load model
    model = D.load_model(config_file, grounded_checkpoint, device=device)

    output_file_name = f'{format(os.path.basename(image_path))}'

    # visualize raw image
    image_pil.save(os.path.join(output_dir, output_file_name))

    # run grounding dino model
    boxes_filt, pred_phrases = D.get_grounding_output(
        model, image, text_prompt, box_threshold, text_threshold, device=device
    )
    
    # initialize SAM
    if use_sam_hq:
        predictor = D.SamPredictor(D.build_sam_hq(checkpoint=sam_hq_checkpoint).to(device))
    else:
        predictor = D.SamPredictor(D.build_sam(checkpoint=sam_checkpoint).to(device))
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    predictor.set_image(image)


    size = image_pil.size
    H, W = size[1], size[0]
    for i in range(boxes_filt.size(0)):
        boxes_filt[i] = boxes_filt[i] * torch.Tensor([W, H, W, H])
        boxes_filt[i][:2] -= boxes_filt[i][2:] / 2
        boxes_filt[i][2:] += boxes_filt[i][:2]

    boxes_filt = boxes_filt.cpu()
    transformed_boxes = predictor.transform.apply_boxes_torch(boxes_filt, image.shape[:2]).to(device)

    masks, _, _ = predictor.predict_torch(
        point_coords = None,
        point_labels = None,
        boxes = transformed_boxes.to(device),
        multimask_output = False,
    )

    # draw output image
    plt.figure(figsize=(10, 10))
    plt.imshow(image)
    for mask in masks:
        D.show_mask(mask.cpu().numpy(), plt.gca(), random_color=True)
    for box, label in zip(boxes_filt, pred_phrases):
        D.show_box(box.numpy(), plt.gca(), label)

    output_file_name = f'{format(os.path.basename(image_path))}'
    plt.axis('off')
    plt.savefig(
        os.path.join(output_dir, f'grounded_sam_{output_file_name}'),
        bbox_inches="tight", dpi=300, pad_inches=0.0
    )

    D.save_mask_data(output_dir, masks, boxes_filt, pred_phrases)
    return f'grounded_sam_{output_file_name}'
    
filename = dino_sam(image_path=<image_path_str>, text_prompt=<object_name_str>, output_dir=<output_image_filename_path_str>, box_threshold=0.5, text_threshold=0.55)

You can choose HQ-SAM to upgrade SAM for high-quality zero-shot segmentation. Refer to the following paper and code sample on GitHub for more details.

VLP processing pipeline

The main objective of the VLP processing pipeline is to combine the strengths of different models, creating a sophisticated workflow specialized for VLP. It’s important to highlight that this setup prioritizes the integration of top-tier models across visual, text, and voice domains. Each segment of the pipeline is modular, facilitating either standalone use or combined operation. Furthermore, the design ensures flexibility, enabling the replacement of components with more advanced models yet to come, while supporting multithreading and error handling with reputable implementation.

The following figure illustrates a VLP pipeline data flow and service components.

In our exploration of the VLP pipeline, we designed a pipeline that can process both text prompts in open text format and casual voice inputs from microphones. The audio processing is facilitated by Whisper, capable of multilingual speech recognition and translation. The transcribed text is then channeled to an intent classification module, which discerns the semantic essence of the prompts. This works in tandem with a LangChain-driven chain-of-thought (CoT) engine, dissecting the main intent into finer sub-tasks for more detailed information retrieval and generation. If image processing is inferred from the input, the pipeline commences a keyword extraction process, selecting the top N keywords by cross-referencing objects detected in the original image. Subsequently, these keywords are routed to the Grounded-SAM engine, which generates bounding boxes. These bounding boxes are then supplied to the SAM model, which crafts precise segmentation masks, pinpointing each unique object instance in the source image. The final step involves overlaying the masks and bounding boxes onto the original image, yielding a processed image that is presented as a multimodal output.

When the input query seeks to interpret an image, the pipeline engages the LLM to organize the sub-tasks and refine the query with targeted goals. Subsequently, the outcome is directed to the VLM API, accompanied by few-shot instructions, the URL of the input image, and the rephrased text prompt. In response, the VLM provides the textual output. The VLP pipeline can be implemented using a Python-based workflow pipeline or alternative orchestration utilities. Such pipelines operate by chaining a sequential set of sophisticated models into a structured modeling procedure. The pipeline integrates with the Gradio engine for demonstration purposes:

def vlp_text_pipeline(input_text: str, original_image_path: str, chat_history: list) -> list:
    intent_class = intent_classification(input_text)
    key_words = keyword_extraction(input_text)
    image_caption = vlm(input_text, original_image_path)
    chat_history.append(image_caption)
    if intent_class in SUPPORTED_INTENTS:  # SUPPORTED_INTENTS is the set of intents the pipeline handles
        object_bounding_box = object_detection(intent_class, key_words, original_image_path)
        mask_image_path = image_segmentation(object_bounding_box, key_words, original_image_path)
        chat_history.append(mask_image_path)
    return chat_history

def vlp_voice_pipeline(audio_file_path: str, original_image_path: str, chat_history: list) -> list:
    asr_text = whisper_transcribe(audio_file_path)
    chat_history = vlp_text_pipeline(asr_text, original_image_path, chat_history)
    return chat_history

chat_history = (vlp_text_pipeline(input_text, original_image_path, chat_history)
                if audio_file_path is None
                else vlp_voice_pipeline(audio_file_path, original_image_path, chat_history))

Limitations

Using pre-trained VLM models for VLP has demonstrated promising potential for image understanding. Along with language-based object detection and segmentation, VLP can produce useful outputs with reasonable quality. However, VLP still suffers from inconsistent results, can miss details in pictures, and might even hallucinate. Moreover, models might produce factually incorrect texts and should not be relied on to produce factually accurate information. Since none of the referenced pre-trained VLM, SAM, or LLM models has been trained or fine-tuned for domain-specific production-grade applications, this solution is not designed for mission-critical applications that might impact livelihood or cause material losses.

With prompt engineering, the IDEFICS model sometimes can recognize extra details after a text hint; however, the result is far from consistent and reliable. It can be persistent in maintaining inaccuracies and may be unable or unwilling to make corrections even when users highlight those during a conversation. Enhancing the backbone model by integrating Swin-ViT and fusing it with CNN-based models like DualToken-ViT, along with training using more advanced models like Llama-v2, could potentially address some of these limitations.

Next steps

The VLP solution is poised for notable progress. As we look ahead, there are several key opportunities to advance VLP solutions:

  • Prioritize integrating dynamic prompt instructions and few-shot learning hints. These improvements will enable more accurate AI feedback.
  • Intent classification teams should focus efforts on refining the classifier to pick up on nuanced, domain-specific intents from open prompts. Being able to understand precise user intents will be critical.
  • Implement an agent tree of thoughts model into the reasoning pipeline. This structure will allow for explicit reasoning steps to complete sub-tasks.
  • Pilot fine-tuning initiatives on leading models. Tailoring VLM, LLM, and SAM models to key industries and use cases through fine-tuning will be pivotal.

Acknowledgment

The authors extend their gratitude to Vivek Madan and Ashish Rawat for their insightful feedback and review of this post.


About the authors

Alfred Shen is a Senior AI/ML Specialist at AWS. He has been working in Silicon Valley, holding technical and managerial positions in diverse sectors including healthcare, finance, and high-tech. He is a dedicated applied AI/ML researcher, concentrating on CV, NLP, and multimodality. His work has been showcased in publications such as EMNLP, ICLR, and Public Health.

Dr. Li Zhang is a Principal Product Manager-Technical for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms, a service that helps data scientists and machine learning practitioners get started with training and deploying their models, and uses reinforcement learning with Amazon SageMaker. His past work as a principal research staff member and master inventor at IBM Research has won the test of time paper award at IEEE INFOCOM.

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting food, mentoring college students for entrepreneurship, and spending time with friends and families.

Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.

Read More

How Reveal’s Logikcull used Amazon Comprehend to detect and redact PII from legal documents at scale


Today, personally identifiable information (PII) is everywhere. PII is in emails, slack messages, videos, PDFs, and so on. It refers to any data or information that can be used to identify a specific individual. PII is sensitive in nature and includes various types of personal data, such as name, contact information, identification numbers, financial information, medical information, biometric data, date of birth, and so on.

Finding and redacting PII is essential to safeguarding privacy, ensuring data security, complying with laws and regulations, and maintaining trust with customers and stakeholders. It’s a critical component of modern data management and cybersecurity practices. But finding PII among the morass of electronic data can present challenges for an organization. These challenges arise due to the vast volume and variety of data, data fragmentation, encryption, data sharing, dynamic content, false positives and negatives, contextual understanding, legal complexities, resource constraints, evolving data, user-generated content, and adaptive threats. However, failure to accurately detect and redact PII can lead to severe consequences for organizations. Consequences might encompass legal penalties, lawsuits, reputation damage, data breach costs, regulatory probes, operational disruption, trust erosion, and sanctions.

In the legal system, discovery is the legal process governing the right to obtain and the obligation to produce non-privileged matter relevant to any party’s claims or defenses in litigation. Electronic discovery, also known as eDiscovery, is the electronic aspect of identifying, collecting, and producing electronically stored information (ESI) in response to a request for production in a lawsuit or investigation. In the legal domain, it’s often required to identify, collect, and produce ESI during a lawsuit or investigation. If organizations are dealing with eDiscovery for litigation or subpoena responses, they’re probably concerned about accidentally sharing PII. Many organizations including government agencies, school districts, and legal professionals face the challenge of detecting and redacting PII accurately at scale. For government groups in particular, redacting PII when responding to requests under the Freedom of Information Act and Digital Services Act is crucial for protecting individual privacy, ensuring compliance with data protection laws, preventing identity theft, and maintaining trust and transparency in government and digital services. It strikes a balance between transparency and privacy while mitigating legal and security risks.

Organizations can search for PII using methods such as keyword searches, pattern matching, data loss prevention tools, machine learning (ML), metadata analysis, data classification software, optical character recognition (OCR), document fingerprinting, and encryption.

Now a part of Reveal’s AI-powered eDiscovery platform, Logikcull is a self-service solution that allows legal professionals to process, review, tag, and produce electronic documents as part of a lawsuit or investigation. This unique offering helps attorneys discover valuable information related to the matter in hand while reducing costs, speeding up resolutions, and mitigating risks.

In this post, Reveal experts showcase how they used Amazon Comprehend in their document processing pipeline to detect and redact individual pieces of PII. Amazon Comprehend is a fully managed and continuously trained natural language processing (NLP) service that can extract insight about the content of a document or text. You can use Amazon Comprehend ML capabilities to detect and redact PII in customer emails, support tickets, product reviews, social media, and more.

Overview of solution

The overarching goal for the engineering team is to detect and redact PII from millions of legal documents for their customers. Using Reveal’s Logikcull solution, the engineering team implemented two processes, namely first pass PII detection and second pass PII detection and redaction. This two-pass solution was made possible by using the ContainsPiiEntities and DetectPiiEntities APIs.

First pass PII detection

The goal of first pass PII detection is to find the documents that might contain PII.

  1. Users upload the files on which they would like to perform PII detection and redaction through Logikcull’s public website into a project folder. These files can be in the form of office documents, .pdf files, emails, or a .zip file containing all the supported file types.
  2. Logikcull stores these project folders securely inside an Amazon Simple Storage Service (Amazon S3) bucket. The files then pass through Logikcull’s massively parallel processing pipeline hosted on Amazon Elastic Compute Cloud (Amazon EC2), which processes the files, extracts the metadata, and generates artifacts in text format for data review. Logikcull’s processing pipeline supports text extraction for a wide variety of forms and files, including audio and video files.
  3. After the files are available in text format, Logikcull passes the input text along with the language code, which is English, to Amazon Comprehend by making the ContainsPiiEntities API call. The processing pipeline servers hosted on Amazon EC2 make the Amazon Comprehend ContainsPiiEntities API call by passing the request parameters as text and language code. The ContainsPiiEntities API call analyzes input text for the presence of PII and returns the labels of identified PII entity types, such as name, address, bank account number, or phone number. The API response also includes a confidence score, which indicates the level of confidence that Amazon Comprehend has assigned to the detection accuracy. The confidence score has a value between 0 and 1, with 1 signifying 100 percent confidence. Logikcull uses this confidence score to assign the tag PII Detected to the documents. Logikcull only assigns this tag to documents that have a confidence score of over 0.75 (see the sketch following this list).
  4. PII Detected tagged documents are fed into Logikcull’s search index cluster for their users to quickly identify documents that contain PII entities.
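
For illustration only, the following minimal sketch shows what this first-pass check could look like with the AWS SDK for Python (Boto3); the function name and threshold handling are assumptions that mirror the logic described above, not Logikcull’s production code.

import boto3

comprehend = boto3.client("comprehend")

def first_pass_contains_pii(document_text, threshold=0.75):
    # Ask Amazon Comprehend whether the text is likely to contain PII
    response = comprehend.contains_pii_entities(
        Text=document_text,
        LanguageCode="en",
    )
    # Tag the document as "PII Detected" only when at least one returned
    # label exceeds the confidence threshold (0.75, as described above)
    return any(label["Score"] > threshold for label in response["Labels"])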

Second pass PII detection and redaction

The first pass PII detection process narrows down the scope of the dataset by identifying which documents contain PII information. This speeds up the PII detection process and also reduces the overall cost. The goal of the second pass PII detection is to identify the individual instances of PII and redact them from the tagged documents in the first pass.

  1. Users search for documents that contain PII through Logikcull’s website using Logikcull’s advanced search filters feature.
  2. The request is handled by Logikcull’s application servers hosted on Amazon EC2, and the servers communicate with the search index cluster to find the documents.
  3. The Logikcull application servers identify the individual instances of PII by making the DetectPiiEntities API call. The servers make the API call by passing the text and language of the input documents. The DetectPiiEntities API action inspects the input text for entities that contain PII. For each entity, the response provides the entity type, where the entity text begins and ends, and the level of confidence that Amazon Comprehend has in its detection (see the sketch following these steps).
  4. The users then select the specific entities that they want to redact using Logikcull’s web interface. The applications server sends these requests to Logikcull’s processing pipeline. The following is a screenshot of a PDF that was uploaded to Logikcull’s application. From the below screenshot, you can see that different PII entities such as name, address, phone number, email address, and so on, have been highlighted.

  1. The PII redaction is safely applied inside Logikcull’s processing pipeline using custom business logic. From the screenshot that follows, you can see that users can select either specific PII entity types or all PII entity types that they want to redact and then, with a click of a single button, redact all the PII information.
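
As a minimal sketch of the second pass, the following Boto3 code calls the DetectPiiEntities API and uses the returned offsets to mask each entity; the placeholder format and helper name are illustrative and not Logikcull’s implementation, which applies redaction through its own pipeline.

import boto3

comprehend = boto3.client("comprehend")

def second_pass_redact(document_text, entity_types=None):
    # Find individual PII entities along with their character offsets
    response = comprehend.detect_pii_entities(Text=document_text, LanguageCode="en")
    redacted = document_text
    # Apply redactions from the end of the text backwards so earlier offsets stay valid
    for entity in sorted(response["Entities"], key=lambda e: e["BeginOffset"], reverse=True):
        if entity_types and entity["Type"] not in entity_types:
            continue  # only redact the entity types the user selected
        redacted = (
            redacted[: entity["BeginOffset"]]
            + f"[REDACTED-{entity['Type']}]"
            + redacted[entity["EndOffset"]:]
        )
    return redacted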

Results

Logikcull, a Reveal technology, is currently processing over 20 million documents each week and was able to narrow down the scope of detection using the ContainsPiiEntities API and display individual instances of PII entities to their customers by using the DetectPiiEntities API.

“With Amazon Comprehend, Logikcull has been able to rapidly deploy powerful NLP capabilities in a fraction of the time a custom-built solution would have required.”

– Steve Newhouse, VP of Product for Logikcull.

Conclusion

Amazon Comprehend allows Reveal’s Logikcull technology to run PII detection at large scale for relatively low cost using Amazon Comprehend. The ContainsPiiEntities API is used to do an initial scan of millions of documents. The DetectPiiEntities API is used to run a detailed analysis of thousands of documents and identify individual pieces of PII in their documents.

Take a look at all the Amazon Comprehend features. Give the features a try and send us feedback either through the AWS forum  for Amazon Comprehend or through your usual AWS support contacts.


About the Authors

Aman Tiwari is a General Solutions Architect working with Worldwide Commercial Sales at AWS. He works with customers in the Digital Native Business segment and helps them design innovative, resilient, and cost-effective solutions using AWS services. He holds a master’s degree in Telecommunications Networks from Northeastern University. Outside of work, he enjoys playing lawn tennis and reading books.

Jeff Newburn is a Senior Software Engineering Manager leading the Data Engineering team at Logikcull – A Reveal Technology.  He oversees the company’s data initiatives, including data warehouses, visualizations, analytics, and machine learning.  With experience spanning development and management in areas from ride sharing to data systems, he enjoys leading teams of brilliant engineers to exciting products.

Søren Blond Daugaard is a Staff Engineer in the Data Engineering team at Logikcull – A Reveal Technology. He implements highly scalable AI and ML solutions into the Logikcull product, enabling our customers to do their work more efficiently and with higher precision. His expertise spans data pipelines, web-based systems, and machine learning systems.

Kevin Lufkin is a Senior Software Engineer on the Search Engineering team at Logikcull – A Reveal Technology, where he focuses on developing customer facing and search-related features. His extensive expertise in UI/UX is complemented by a background in full-stack web development, with a strong focus on bringing product visions to life.

Read More

Schneider Electric leverages Retrieval Augmented LLMs on SageMaker to ensure real-time updates in their ERP systems


This post was co-written with Anthony Medeiros, Manager of Solutions Engineering and Architecture for North America Artificial Intelligence, and Blake Santschi, Business Intelligence Manager, from Schneider Electric. Additional Schneider Electric experts include Jesse Miller, Somik Chowdhury, Shaswat Babhulgaonkar, David Watkins, Mark Carlson and Barbara Sleczkowski. 

Enterprise Resource Planning (ERP) systems are used by companies to manage several business functions such as accounting, sales or order management in one system. In particular, they are routinely used to store information related to customer accounts. Different organizations within a company might use different ERP systems and merging them is a complex technical challenge at scale which requires domain-specific knowledge.

Schneider Electric is a leader in digital transformation of energy management and industrial automation. To best serve their customers’ needs, Schneider Electric needs to keep track of the links between related customers’ accounts in their ERP systems. As their customer base grows, new customers are added daily, and their account teams have to manually sort through these new customers and link them to the proper parent entity.

The linking decision is based on the most recent information available publicly on the Internet or in the media, and might be affected by recent acquisitions, market news or divisional re-structuring. An example of account linking would be to identify the relationship between Amazon and its subsidiary, Whole Foods Market [source].

Schneider Electric is deploying large language models for their capabilities in answering questions in various knowledge-specific domains, but the date on which a model was trained limits its knowledge. They addressed that challenge by coupling an open source large language model available on Amazon SageMaker JumpStart with retrieval augmentation, so it can process large amounts of external knowledge pulled from web search and exhibit corporate or public relationships among ERP records.

In early 2023, when Schneider Electric decided to automate part of its accounts linking process using artificial intelligence (AI), the company partnered with the AWS Machine Learning Solutions Lab (MLSL). With MLSL’s expertise in ML consulting and execution, Schneider Electric was able to develop an AI architecture that would reduce the manual effort in their linking workflows, and deliver faster data access to their downstream analytics teams.

Generative AI

Generative AI and large language models (LLMs) are transforming the way business organizations are able to solve traditionally complex challenges related to natural language processing and understanding. Some of the benefits offered by LLMs include the ability to comprehend large portions of text and answer related questions by producing human-like responses. AWS makes it easy for customers to experiment with and productionize LLM workloads by making many options available via Amazon SageMaker JumpStart, Amazon Bedrock, and Amazon Titan.

External Knowledge Acquisition

LLMs are known for their ability to compress human knowledge and have demonstrated remarkable capabilities in answering questions in various knowledge specific domains, but their knowledge is limited by the date the model has been trained. We address that information cutoff by coupling the LLM with a Google Search API to deliver a powerful Retrieval Augmented LLM (RAG) that addresses Schneider Electric’s challenges. The RAG is able to process large amounts of external knowledge pulled from the Google search and exhibit corporate or public relationships among ERP records.

See the following example:

Question: Who is the parent company of One Medical?
Google query: “One Medical parent company” → information → LLM
Answer: One Medical, a subsidiary of Amazon…

The preceding example (taken from the Schneider Electric customer database) concerns an acquisition that happened in February 2023 and thus would not be caught by the LLM alone due to knowledge cutoffs. Augmenting the LLM with Google search guarantees the most up-to-date information.

Flan-T5 model

In this project, we used the Flan-T5-XXL model from the Flan-T5 family of models.

The Flan-T5 models are instruction-tuned and therefore are capable of performing various zero-shot NLP tasks. In our downstream task, there was no need to accommodate a vast amount of world knowledge but rather to perform well on question answering given a context of texts provided through search results; therefore, the 11B-parameter T5 model performed well.

JumpStart provides convenient deployment of this model family through Amazon SageMaker Studio and the SageMaker SDK. This includes Flan-T5 Small, Flan-T5 Base, Flan-T5 Large, Flan-T5 XL, and Flan-T5 XXL. Furthermore, JumpStart provides a few versions of Flan-T5 XXL at different levels of quantization. We deployed Flan-T5-XXL to an endpoint for inference using SageMaker JumpStart in Amazon SageMaker Studio.

Path to Flan-T5 SageMaker JumpStart
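
For reference, a minimal sketch of this deployment with the SageMaker Python SDK follows; the JumpStart model ID is an assumption and should be checked against the current built-in model table.

from sagemaker.jumpstart.model import JumpStartModel

# Model ID assumed for Flan-T5 XXL; verify it in the JumpStart built-in model table
model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xxl")
predictor = model.deploy()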

Retrieval Augmented LLM with LangChain

LangChain is a popular and fast-growing framework for developing applications powered by LLMs. It is based on the concept of chains, which are combinations of different components designed to improve the functionality of LLMs for a given task. For instance, it allows us to customize prompts and integrate LLMs with different tools like external search engines or data sources. In our use case, we used the Google Serper component to search the web, and deployed the Flan-T5-XXL model available on Amazon SageMaker JumpStart. LangChain performs the overall orchestration and allows the search result pages to be fed into the Flan-T5-XXL instance.

The Retrieval-Augmented Generation (RAG) consists of two steps:

  1. Retrieval of relevant text chunks from external sources
  2. Augmentation of the chunks with context in the prompt given to the LLM.

For Schneider Electric’s use case, the RAG proceeds as follows:

  1. The given company name is combined with a question (like “Who is the parent company of X?”, where X is the given company) and passed as a Google query using the Serper API.
  2. The extracted information is combined with the prompt and original question and passed to the LLM for an answer.

The following diagram illustrates this process.

RAG Workflow

Use the following code to create an endpoint:

# Spin FLAN-T5-XXL Sagemaker Endpoint
llm = SagemakerEndpoint(...)
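
The endpoint arguments are elided above; a minimal sketch of what this instantiation could look like with LangChain’s SagemakerEndpoint wrapper follows, assuming an already-deployed JumpStart Flan-T5 XXL endpoint (the endpoint name and model_kwargs are illustrative):

import json

from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        # JumpStart Flan-T5 endpoints expect a JSON payload with "text_inputs"
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        # The endpoint returns JSON with a "generated_texts" list
        return json.loads(output.read().decode("utf-8"))["generated_texts"][0]

llm = SagemakerEndpoint(
    endpoint_name="jumpstart-flan-t5-xxl",  # illustrative endpoint name
    region_name="us-east-1",
    model_kwargs={"max_length": 200, "temperature": 0.1},
    content_handler=ContentHandler(),
)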

Instantiate search tool:

from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

search = GoogleSerperAPIWrapper()
search_tool = Tool(
	name="Search",
	func=search.run,
	description="useful for when you need to ask with search",
	verbose=False)

In the following code, we chain together the retrieval and augmentation components:

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

my_template = """
Answer the following question using the information. \n
Question : {question}? \n
Information : {search_result} \n
Answer: """
prompt_template = PromptTemplate(
	input_variables=["question", 'search_result'],
	template=my_template)
question_chain = LLMChain(
	llm=llm,
	prompt=prompt_template,
	output_key="answer")

def search_and_reply_company(company):
	# Retrieval
	search_result = search_tool.run(f"{company} parent company")
	# Augmentation
	output = question_chain({
		"question":f"Who is the parent company of {company}?",
		"search_result": search_result})
	return output["answer"]

search_and_reply_company("Whole Foods Market")
"Amazon"

Prompt engineering

The combination of the context and the question is called the prompt. We noticed that the blanket prompt we used (variations around asking for the parent company) performed well for most public sectors (domains) but didn’t generalize well to education or healthcare since the notion of parent company is not meaningful there. For education, we used “X” while for healthcare we used “Y”.

To enable this domain-specific prompt selection, we also had to identify the domain a given account belongs to. For this, we also used a RAG, where we first asked the multiple choice question “What is the domain of {account}?”, and based on the answer we inquired about the parent of the account using the relevant prompt as a second step. See the following code:

my_template_options = """
Answer the following question using the information. \n
Question :  {question}? \n
Information : {search_result} \n
Options :\n {options} \n
Answer:
"""

prompt_template_options = PromptTemplate(
	input_variables=["question", 'search_result', 'options'],
	template=my_template_options)
question_chain = LLMChain(
	llm=llm,
	prompt=prompt_template_options,
	output_key="answer")
	
my_options = """
- healthcare
- education
- oil and gas
- banking
- pharma
- other domain """

def search_and_reply_domain(company):
	search_result = search_tool.run(f"{company} ")
	output = question_chain({
		"question":f"What is the domain of {company}?",
		"search_result": search_result,
		"options":my_options})
	return output["answer"]

search_and_reply_domain("Exxon Mobil")
"oil and gas"

The sector-specific prompts boosted the overall performance from 55% to 71% accuracy. Overall, the effort and time invested to develop effective prompts appear to significantly improve the quality of the LLM response.

RAG with tabular data (SEC-10k)

SEC 10K filings are another reliable source of information about subsidiaries and subdivisions, filed annually by publicly traded companies. These filings are available directly on SEC EDGAR or through the CorpWatch API.

We assume the information is given in tabular format. Below is a pseudo csv dataset that mimics the original format of the SEC-10K dataset. It is possible to merge multiple csv data sources into a combined pandas dataframe:

# A pseudo dataset similar by schema to the CorpWatch API dataset
df.head()

index	relation_id		source_cw_id	target_cw_id	parent		subsidiary
  1		90				22569           37				AMAZON		WHOLE FOODS MARKET
873		1467			22569			781				AMAZON		TWITCH
899		1505			22569			821				AMAZON		ZAPPOS
900		1506			22569			821				AMAZON		ONE MEDICAL
901		1507			22569			821				AMAZON		WOOT!
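
If the filings arrive as several CSV exports, a minimal sketch of the merge step might look like the following (file names are illustrative):

import pandas as pd

# File names are placeholders; each export shares the parent/subsidiary schema shown above
csv_files = ["sec10k_part1.csv", "sec10k_part2.csv", "corpwatch_extract.csv"]
df = pd.concat([pd.read_csv(path) for path in csv_files], ignore_index=True)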

LangChain provides an abstraction layer for pandas through create_pandas_dataframe_agent. There are two key advantages to using LangChain/LLMs for this task:

  1. Once spun up, it allows a downstream consumer to interact with the dataset in natural language rather than code
  2. It is more robust to misspellings and different ways of naming accounts.

We spin up the endpoint as above and create the agent:

from langchain.agents import create_pandas_dataframe_agent

# Create pandas dataframe agent
agent = create_pandas_dataframe_agent(llm, df, verbose=True)

In the following code, we query for the parent/subsidiary relationship and the agent translates the query into pandas language:

# Example 1
query = "Who is the parent of WHOLE FOODS MARKET?"
agent.run(query)

#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with WHOLE FOODS MARKET in the subsidiary column
Action: python_repl_ast
Action Input: df[df['subsidiary'] == 'WHOLE FOODS MARKET']
Observation:
source_cw_id	target_cw_id	parent		subsidiary
22569			37				AMAZON		WHOLE FOODS MARKET
Thought: I now know the final answer
Final Answer: AMAZON
> Finished chain.
# Example 2
query = "Who are the subsidiaries of Amazon?"
agent.run(query)
#### output
> Entering new AgentExecutor chain...
Thought: I need to find the row with source_cw_id of 22569
Action: python_repl_ast
Action Input: df[df['source_cw_id'] == 22569]
...
Thought: I now know the final answer
Final Answer: The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!...
> Finished chain.
'The subsidiaries of Amazon are Whole Foods Market, Twitch, Zappos, One Medical, Woot!.'

Conclusion

In this post, we detailed how we used building blocks from LangChain to augment an LLM with search capabilities, in order to uncover relationships between Schneider Electric’s customer accounts. We extended the initial pipeline to a two-step process with domain identification before using a domain specific prompt for higher accuracy.

In addition to the Google Search query, datasets that detail corporate structures such as the SEC 10K filings can be used to further augment the LLM with trustworthy information. The Schneider Electric team will also be able to extend and design their own prompts, mimicking the way they classify some public sector accounts, further improving the accuracy of the pipeline. These capabilities will enable Schneider Electric to maintain up-to-date and accurate organizational structures of their customers, and unlock the ability to do analytics on top of this data.


About the Authors

Anthony Medeiros is a Manager of Solutions Engineering and Architecture at Schneider Electric. He specializes in delivering high-value AI/ML initiatives to many business functions within North America. With 17 years of experience at Schneider Electric, he brings a wealth of industry knowledge and technical expertise to the team.

Blake Santschi is a Business Intelligence Manager at Schneider Electric, leading an analytics team focused on supporting the Sales organization through data-driven insights.

Joshua Levy is a Senior Applied Science Manager in the Amazon Machine Learning Solutions Lab, where he helps customers design and build AI/ML solutions to solve key business problems.

Kosta Belz is a Senior Applied Scientist with AWS MLSL with focus on Generative AI and document processing. He is passionate about building applications using Knowledge Graphs and NLP. He has around 10 years of experience in building Data & AI solutions to create value for customers and enterprises.

Aude Genevay is an Applied Scientist in the Amazon GenAI Incubator, where she helps customers solve key business problems through ML and AI. She previously was a researcher in theoretical ML and enjoys applying her knowledge to deliver state-of-the-art solutions to customers.

Md Sirajus Salekin is an Applied Scientist at AWS Machine Learning Solution Lab. He helps AWS customers to accelerate their business by building AI/ML solutions. His research interests are multimodal machine learning, generative AI, and ML applications in healthcare.

Zichen Wang, PhD, is a Senior Applied Scientist in AWS. With several years of research experience in developing ML and statistical methods using biological and medical data, he works with customers across various verticals to solve their ML problems.

Anton Gridin is a Principal Solutions Architect supporting Global Industrial Accounts, based out of New York City. He has more than 15 years of experience building secure applications and leading engineering teams.

Read More

Use AWS PrivateLink to set up private access to Amazon Bedrock


Amazon Bedrock is a fully managed service provided by AWS that offers developers access to foundation models (FMs) and the tools to customize them for specific applications. It allows developers to build and scale generative AI applications using FMs through an API, without managing infrastructure. You can choose from various FMs from Amazon and leading AI startups such as AI21 Labs, Anthropic, Cohere, and Stability AI to find the model that’s best suited for your use case. With the Amazon Bedrock serverless experience, you can quickly get started, easily experiment with FMs, privately customize them with your own data, and seamlessly integrate and deploy them into your applications using AWS tools and capabilities.

Customers are building innovative generative AI applications with Amazon Bedrock APIs and their own proprietary data. When accessing Amazon Bedrock APIs, customers are looking for a mechanism to set up a data perimeter without exposing their data to the internet, so they can mitigate potential threat vectors from internet exposure. The Amazon Bedrock VPC endpoint powered by AWS PrivateLink allows you to establish a private connection between the VPC in your account and the Amazon Bedrock service account. It enables VPC instances to communicate with service resources without the need for public IP addresses.

In this post, we demonstrate how to set up private access on your AWS account to access Amazon Bedrock APIs over VPC endpoints powered by PrivateLink to help you build generative AI applications securely with your own data.

Solution overview

You can use generative AI to develop a diverse range of applications, such as text summarization, content moderation, and other capabilities. When building such generative AI applications using FMs or base models, customers want to generate responses without sending traffic over the public internet, often based on their proprietary data that may reside in their enterprise databases.

In the following diagram, we depict an architecture to set up your infrastructure to read your proprietary data residing in Amazon Relational Database Service (Amazon RDS) and augment the Amazon Bedrock API request with product information when answering product-related queries from your generative AI application. Although we use Amazon RDS in this diagram for illustration purposes, you can test the private access of the Amazon Bedrock APIs end to end using the instructions provided in this post.

The workflow steps are as follows:

  1. AWS Lambda running in your private VPC subnet receives the prompt request from the generative AI application.
  2. Lambda makes a call to the proprietary RDS database, augments the prompt query context (for example, adding product information), and invokes the Amazon Bedrock API with the augmented query request.
  3. The API call is routed to the Amazon Bedrock VPC endpoint that is associated with the VPC endpoint policy with Allow permissions to Amazon Bedrock APIs.
  4. The Amazon Bedrock service API endpoint receives the API request over PrivateLink without traversing the public internet.
  5. You can change the Amazon Bedrock VPC endpoint policy to Deny permissions to validate that Amazon Bedrock APIs calls are denied.
  6. You can also privately access Amazon Bedrock APIs over the VPC endpoint from your corporate network through an AWS Direct Connect gateway.

Prerequisites

Before you get started, make sure you have the following prerequisites:

  • An AWS account
  • An AWS Identity and Access Management (IAM) federation role with access to do the following:
    • Create, edit, view, and delete VPC network resources
    • Create, edit, view and delete Lambda functions
    • Create, edit, view and delete IAM roles and policies
    • List foundation models and invoke the Amazon Bedrock foundation model
  • For this post, we use the us-east-1 Region
  • Request foundation model access via the Amazon Bedrock console

Set up the private access infrastructure

In this section, we set up the infrastructure such as VPC, private subnets, security groups, and Lambda function using an AWS CloudFormation template.

Use the following template to create the infrastructure stack Bedrock-GenAI-Stack in your AWS account.

The CloudFormation template creates the following resources on your behalf:

  • A VPC with two private subnets in separate Availability Zones
  • Security groups and routing tables
  • IAM role and policies for use by Lambda, Amazon Bedrock, and Amazon Elastic Compute Cloud (Amazon EC2)

Set up the VPC endpoint for Amazon Bedrock

In this section, we use Amazon Virtual Private Cloud (Amazon VPC) to set up the VPC endpoint for Amazon Bedrock to facilitate private connectivity from your VPC to Amazon Bedrock.

  1. On the Amazon VPC console, under Virtual private cloud in the navigation pane, choose Endpoints.
  2. Choose Create endpoint.
  3. For Name tag, enter bedrock-vpce.
  4. Under Services, search for bedrock-runtime and select com.amazonaws.<region>.bedrock-runtime.
  5. For VPC, specify the VPC Bedrock-GenAI-Project-vpc that you created through the CloudFormation stack in the previous section.
  6. In the Subnets section, select the Availability Zones and choose the corresponding subnet IDs from the drop-down menu.
  7. For Security groups, select the security group with the group name Bedrock-GenAI-Stack-VPCEndpointSecurityGroup- and description Allow TLS for VPC Endpoint.

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. Note that this VPC endpoint security group only allows traffic originating from the security group attached to your VPC private subnets, adding a layer of protection.

  1. Choose Create endpoint.
  2. In the Policy section, select Custom and enter the following least privilege policy to ensure only certain actions are allowed on the specified foundation model resource, arn:aws:bedrock:*::foundation-model/anthropic.claude-instant-v1 for a given principal (such as Lambda function IAM role).
    {
    	"Version": "2012-10-17",
    	"Statement": [
    		{
    		    "Action": [
    		        "bedrock:InvokeModel"
    		        ],
    		    "Resource": [
    		        "arn:aws:bedrock:*::foundation-model/anthropic.claude-instant-v1"
    		        ],
    		    "Effect": "Allow",
    		    "Principal": {
                    "AWS": "arn:aws:iam::<accountid>:role/GenAIStack-Bedrock"
                }
    		}
    	]
    }

It may take up to 2 minutes until the interface endpoint is created and the status changes to Available. You can refresh the page to check the latest status.
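
If you prefer to script this step instead of using the console, the following Boto3 sketch creates an equivalent interface endpoint; the VPC, subnet, and security group IDs are placeholders for the resources created by the Bedrock-GenAI-Stack template.

import json
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Least privilege policy from the previous step
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::<accountid>:role/GenAIStack-Bedrock"},
        "Action": "bedrock:InvokeModel",
        "Resource": "arn:aws:bedrock:*::foundation-model/anthropic.claude-instant-v1",
    }],
}

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-xxxxxxxx",                                # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-aaaaaaaa", "subnet-bbbbbbbb"],    # placeholder private subnets
    SecurityGroupIds=["sg-cccccccc"],                    # placeholder VPC endpoint security group
    PrivateDnsEnabled=True,
    PolicyDocument=json.dumps(endpoint_policy),
    TagSpecifications=[
        {"ResourceType": "vpc-endpoint", "Tags": [{"Key": "Name", "Value": "bedrock-vpce"}]}
    ],
)
print(response["VpcEndpoint"]["VpcEndpointId"])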

Set up the Lambda function over private VPC subnets

Complete the following steps to configure the Lambda function:

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose the function gen-ai-lambda-stack-BedrockTestLambdaFunction-XXXXXXXXXXXX.
  3. On the Configuration tab, choose Permissions in the left pane.
  4. Under Execution role, choose the link for the role gen-ai-lambda-stack-BedrockTestLambdaFunctionRole-XXXXXXXXXXXX.

You’re redirected to the IAM console.

  1. In the Permissions policies section, choose Add permissions and choose Create inline policy.
  2. On the JSON tab, modify the policy as follows:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "eniperms",
                "Effect": "Allow",
                "Action": [
                    "ec2:CreateNetworkInterface",
                    "ec2:DescribeNetworkInterfaces",
                    "ec2:DeleteNetworkInterface",
                    "ec2:*VpcEndpoint*"
                ],
                "Resource": "*"
            }
        ]
    }

  3. Choose Next.
  4. For Policy name, enter enivpce-policy.
  5. Choose Create policy.
  6. Add the following inline policy (provide your source VPC endpoints) for restricting Lambda access to Amazon Bedrock APIs only via VPC endpoints:
    {
        "Id": "lambda-bedrock-sourcevpce-access-only",
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
    		   "bedrock:ListFoundationModels",
                    "bedrock:InvokeModel"
                ],
                "Resource": "*",
                "Condition": {
                    "ForAnyValue:StringEquals": {
                        "aws:sourceVpce": [
                            "vpce-<bedrock-runtime-vpce>"
                        ]
                    }
                }
            }
        ]
    } 

  7. On the Lambda function page, on the Configuration tab, choose VPC in the left pane, then choose Edit.
  8. For VPC, choose Bedrock-GenAI-Project-vpc.
  9. For Subnets, choose the private subnets.
  10. For Security groups, choose gen-ai-lambda-stack-SecurityGroup- (the security group for the Amazon Bedrock workload in private subnets).
  11. Choose Save.

Test private access controls

Now you can test the private access controls (Amazon Bedrock APIs over VPC endpoints).

  1. On the Lambda console, choose Functions in the navigation pane.
  2. Choose the function gen-ai-lambda-stack-BedrockTestLambdaFunction-XXXXXXXXXXXX.
  3. On the Code tab, choose Test.

You should see the following response from the Amazon Bedrock API call (Status: Succeeded).
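
The test Lambda function created by the CloudFormation template makes a call similar to the following sketch (the prompt, handler shape, and response parsing are illustrative); because private DNS is enabled on the interface endpoint, the bedrock-runtime call resolves to the VPC endpoint rather than the public internet.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def lambda_handler(event, context):
    # Claude models on Amazon Bedrock expect a Human/Assistant prompt wrapper
    body = json.dumps({
        "prompt": "\n\nHuman: Say hello in one sentence.\n\nAssistant:",
        "max_tokens_to_sample": 100,
    })
    response = bedrock_runtime.invoke_model(
        modelId="anthropic.claude-instant-v1",  # must match the VPC endpoint policy resource
        contentType="application/json",
        accept="application/json",
        body=body,
    )
    completion = json.loads(response["body"].read())["completion"]
    return {"statusCode": 200, "body": completion}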

  1. To deny access to Amazon Bedrock APIs over VPC endpoints, navigate to the Amazon VPC console.
  2. Under Virtual private cloud in the navigation pane, choose Endpoints.
  3. Choose the Amazon Bedrock VPC endpoint and navigate to the Policy tab.

Currently, the VPC endpoint policy is set to Allow.

  1. To deny access, choose Edit Policy.
  2. Change Allow to Deny and choose Save.

It may take up to 2 minutes for the policy for the VPC endpoint to update. The updated policy should look like the following:

{
	"Version": "2012-10-17",
	"Statement": [
		{
		    "Action": [
		        "bedrock:InvokeModel"
		        ],
		    "Resource": [
		        "arn:aws:bedrock:*::foundation-model/anthropic.claude-instant-v1"
		        ],
		    "Effect": "Deny",
		    "Principal": {
                "AWS": "arn:aws:iam::<accountid>:role/GenAIStack-Bedrock"
            }
		}
	]
}

  1. Return to the Lambda function page and on the Code tab, choose Test.

As shown in the following screenshot, the access request to Amazon Bedrock over the VPC endpoint was denied (Status: Failed).

Through this testing process, we demonstrated how traffic from your VPC to the Amazon Bedrock API endpoint traverses the PrivateLink connection rather than the public internet.

Clean up

Follow these steps to avoid incurring future charges:

  1. Clean up the VPC endpoints.
  2. Clean up the VPC.
  3. Delete the CloudFormation stack.

Conclusion

In this post, we demonstrated how to set up and operationalize a private connection between a generative AI workload deployed on your customer VPC and Amazon Bedrock using an interface VPC endpoint powered by PrivateLink. When using the architecture discussed in this post, the traffic between your customer VPC and Amazon Bedrock will not leave the Amazon network, ensuring your data is not exposed to the public internet and thereby helping with your compliance requirements.

As a next step, try the solution out in your account and share your feedback.


About the Authors

Ram Vittal is a Principal ML Solutions Architect at AWS. He has over 3 decades of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure and scalable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he rides his motorcycle and walks with his 3-year-old Sheepadoodle!

Ray Khorsandi is an AI/ML specialist at AWS, supporting strategic customers with AI/ML best practices. With an M.Sc. and Ph.D. in Electrical Engineering and Computer Science, he leads enterprises to build secure, scalable AI/ML and big data solutions to optimize their cloud adoption. His passions include computer vision, NLP, generative AI, and MLOps. Ray enjoys playing soccer and spending quality time with family.

Michael Daniels is an AI/ML Specialist at AWS. His expertise lies in building and leading AI/ML and generative AI solutions for complex and challenging business problems, which is enhanced by his Ph.D. from the Univ. of Texas and his M.Sc. in Computer Science specialization in Machine Learning from the Georgia Institute of Technology. He excels in applying cutting-edge cloud technologies to innovate, inspire, and transform industry-leading organizations, while also effectively communicating with stakeholders at any level or scale. In his spare time, you can catch Michael skiing or snowboarding in the mountains.

Read More

Deploy and fine-tune foundation models in Amazon SageMaker JumpStart with two lines of code


We are excited to announce a simplified version of the Amazon SageMaker JumpStart SDK that makes it straightforward to build, train, and deploy foundation models. The code for prediction is also simplified. In this post, we demonstrate how you can use the simplified SageMaker Jumpstart SDK to get started with using foundation models in just a couple of lines of code.

For more information about the simplified SageMaker JumpStart SDK for deployment and training, refer to Low-code deployment with the JumpStartModel class and Low-code fine-tuning with the JumpStartEstimator class, respectively.

Solution overview

SageMaker JumpStart provides pre-trained, open-source models for a wide range of problem types to help you get started with machine learning (ML). You can incrementally train and fine-tune these models before deployment. JumpStart also provides solution templates that set up infrastructure for common use cases, and executable example notebooks for ML with Amazon SageMaker. You can access the pre-trained models, solution templates, and examples through the SageMaker JumpStart landing page in Amazon SageMaker Studio or use the SageMaker Python SDK.

To demonstrate the new features of the SageMaker JumpStart SDK, we show you how to use the pre-trained Flan T5 XL model from Hugging Face for text generation for summarization tasks. We also showcase how, in just a few lines of code, you can fine-tune the Flan T5 XL model for summarization tasks. You can use any other model for text generation like Llama2, Falcon, or Mistral AI.

You can find the notebook for this solution using Flan T5 XL in the GitHub repo.

Deploy and invoke the model

Foundation models hosted on SageMaker JumpStart have model IDs. For the full list of model IDs, refer to Built-in Algorithms with pre-trained Model Table. For this post, we use the model ID of the Flan T5 XL text generation model. We instantiate the model object and deploy it to a SageMaker endpoint by calling its deploy method. See the following code:

from sagemaker.jumpstart.model import JumpStartModel

# Replace with larger model if needed
pretrained_model = JumpStartModel(model_id="huggingface-text2text-flan-t5-base")
pretrained_predictor = pretrained_model.deploy()

Next, we invoke the model to create a summary of the provided text using the Flan T5 XL model. The new SDK interface makes it straightforward for you to invoke the model: you just need to pass the text to the predictor and it returns the response from the model as a Python dictionary.

text = """Summarize this content - Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases. 
You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition. """
query_response = pretrained_predictor.predict(text)
print(query_response["generated_text"])

The following is the output of the summarization task:

Understand how Amazon Comprehend works. Use Amazon Comprehend to analyze documents.

Fine-tune and deploy the model

The SageMaker JumpStart SDK provides you with a new class, JumpStartEstimator, which simplifies fine-tuning. You can provide the location of the fine-tuning data and optionally pass validation datasets as well. After you fine-tune the model, use the deploy method of the Estimator object to deploy the fine-tuned model:

from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id=model_id,
)
estimator.set_hyperparameters(instruction_tuned="True", epoch="3", max_input_length="1024")
estimator.fit({"training": train_data_location})
finetuned_predictor = estimator.deploy()

Customize the new classes in the SageMaker SDK

The new SDK makes it straightforward to deploy and fine-tune JumpStart models by defaulting many parameters. You still have the option to override the defaults and customize the deployment and invocation based on your requirements. For example, you can customize input payload format type, instance type, VPC configuration, and more for your environment and use case.

The following code shows how to override the instance type while deploying your model:

finetuned_predictor = estimator.deploy(instance_type='ml.g5.2xlarge')

The SageMaker JumpStart SDK deploy function will automatically select a default content type and serializer for you. If you want to change the format type of the input payload, you can use serializers and content_types objects to introspect the options available to you by passing the model_id of the model you are working with. In the following code, we set the payload input format as JSON by setting JSONSerializer as serializer and application/json as content_type:

from sagemaker import serializers
from sagemaker import content_types

serializer_options = serializers.retrieve_options(model_id=model_id, model_version=model_version)
content_type_options = content_types.retrieve_options(model_id=model_id, model_version=model_version)

pretrained_predictor.serializer = serializers.JSONSerializer()
pretrained_predictor.content_type = 'application/json'

Next, you can invoke the Flan T5 XL model for the summarization task with a payload of the JSON format. In the following code, we also pass inference parameters in the JSON payload for making responses more accurate:

from sagemaker import serializers

input_text= """Summarize this content - Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases.
You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition. """

parameters = {
    "max_length": 600,
    "num_return_sequences": 1,
    "top_p": 0.01,
    "do_sample": False,
}

payload = {"text_inputs": input_text, **parameters} #JSON Input format

pretrained_predictor.serializer = serializers.JSONSerializer()
query_response = pretrained_predictor.predict(payload)
print(query_response["generated_texts"][0])

If you’re looking for more ways to customize the inputs and other options for hosting and fine-tuning, refer to the documentation for the JumpStartModel and JumpStartEstimator classes.

Conclusion

In this post, we showed you how you can use the simplified SageMaker JumpStart SDK for building, training, and deploying task-based and foundation models in just a few lines of code. We demonstrated the new classes like JumpStartModel and JumpStartEstimator using the Hugging Face Flan T5-XL model as an example. You can use any of the other SageMaker JumpStart foundation models for use cases such as content writing, code generation, question answering, summarization, classification, information retrieval, and more. To see the whole list of models available with SageMaker JumpStart, refer to Built-in Algorithms with pre-trained Model Table. SageMaker JumpStart also supports task-specific models for many popular problem types.

We hope the simplified interface of the SageMaker JumpStart SDK will help you get started quickly and enable you to deliver faster. We look forward to hearing how you use the simplified SageMaker JumpStart SDK to create exciting applications!


About the authors

Evan Kravitz is a software engineer at Amazon Web Services, working on SageMaker JumpStart. He is interested in the confluence of machine learning with cloud computing. Evan received his undergraduate degree from Cornell University and master’s degree from the University of California, Berkeley. In 2021, he presented a paper on adversarial neural networks at the ICLR conference. In his free time, Evan enjoys cooking, traveling, and going on runs in New York City.

Rachna Chadha is a Principal Solution Architect AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.

Jonathan Guinegagne is a Senior Software Engineer with Amazon SageMaker JumpStart at AWS. He got his master’s degree from Columbia University. His interests span machine learning, distributed systems, and cloud computing, as well as democratizing the use of AI. Jonathan is originally from France and now lives in Brooklyn, NY.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Read More

Elevate your marketing solutions with Amazon Personalize and generative AI


Generative artificial intelligence is transforming how enterprises do business. Organizations are using AI to improve data-driven decisions, enhance omnichannel experiences, and drive next-generation product development. Enterprises are using generative AI specifically to power their marketing efforts through emails, push notifications, and other outbound communication channels. Gartner predicts that “by 2025, 30% of outbound marketing messages from large organizations will be synthetically generated.” However, generative AI alone isn’t enough to deliver engaging customer communication. Research shows that the most impactful communication is personalized—showing the right message to the right user at the right time. According to McKinsey, “71% of consumers expect companies to deliver personalized interactions.” Customers can use Amazon Personalize and generative AI to curate concise, personalized content for marketing campaigns, increase ad engagement, and enhance conversational chatbots.

Developers can use Amazon Personalize to build applications powered by the same type of machine learning (ML) technology used by Amazon.com for real-time personalized recommendations. With Amazon Personalize, developers can improve user engagement through personalized product and content recommendations with no ML expertise required. Using recipes (algorithms prepared to support specific use cases) provided by Amazon Personalize, customers can deliver a wide array of personalization, including specific product or content recommendations, personalized ranking, and user segmentation. Additionally, as a fully managed artificial intelligence service, Amazon Personalize accelerates customers’ digital transformations with ML, making it easier to integrate personalized recommendations into existing websites, applications, email marketing systems, and so on.

In this post, we illustrate how you can elevate your marketing campaigns using Amazon Personalize and generative AI with Amazon Bedrock. Together, Amazon Personalize and generative AI help you tailor your marketing to individual consumer preferences.

How exactly do Amazon Personalize and Amazon Bedrock work together to achieve this? Imagine that, as a marketer, you want to send tailored emails to users recommending movies they would enjoy based on their interactions across your platform. Or perhaps you want to send targeted emails to a segment of users promoting a new shoe they might be interested in. The following use cases show how generative AI can enhance two common types of marketing email.

Use Case 1: Use generative AI to deliver targeted one-to-one personalized emails

With Amazon Personalize and Amazon Bedrock, you can generate personalized recommendations and create outbound messages with a personal touch tailored to each of your users.

The following diagram illustrates the architecture and workflow for delivering targeted personalized emails powered by generative AI.

First, import your dataset of users’ interactions into Amazon Personalize for training. Amazon Personalize automatically trains a model using the Top Picks for You recipe. As an output, Amazon Personalize provides recommendations that align with the users’ preferences.
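As an illustrative sketch of this setup step (the dataset group ARN is assumed to already exist, the recommender name is hypothetical, and the recipe ARN shown is the Top Picks for You recipe used in VIDEO_ON_DEMAND domain dataset groups):

import boto3

personalize = boto3.client("personalize")

# Create a recommender for the Top Picks for You use case
create_recommender_response = personalize.create_recommender(
    name = "workshop-recommender-top-picks",   # hypothetical name
    datasetGroupArn = dataset_group_arn,       # assumed to already exist
    recipeArn = "arn:aws:personalize:::recipe/aws-vod-top-picks")

workshop_recommender_top_picks_arn = create_recommender_response["recommenderArn"]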

You can use the following code to identify recommended items for users:

# The Amazon Personalize runtime client retrieves real-time recommendations
personalize_runtime = boto3.client("personalize-runtime")

get_recommendations_response = personalize_runtime.get_recommendations(
    recommenderArn = workshop_recommender_top_picks_arn,
    userId = str(user_id),
    numResults = number_of_movies_to_recommend)

For more information, see the Amazon Personalize API reference.

The recommendations returned by Amazon Personalize are then passed to Amazon Bedrock as part of a prompt that combines the user’s preferences and demographics with the recommended items.
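For illustration, the following sketch shows one way the recommended items from the earlier get_recommendations call could be folded into such a prompt; item_id_to_title is a hypothetical lookup built from your item metadata, and user_genre stands in for whatever user context you track:

# Map the recommended item IDs to readable titles (item_id_to_title is a hypothetical lookup)
recommended_titles = [item_id_to_title[item["itemId"]]
                      for item in get_recommendations_response["itemList"]]

# Combine user context with the recommendations into a single prompt
prompt = (
    "Create a personalized email which is charming and fun so that the user is engaged. "
    f"The user has recently watched {user_genre} films. "
    f"These are the recommended items - {', '.join(recommended_titles)}."
)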

For example, a marketer who wants to create a personalized email that is charming and fun for a user might use the following prompt:

Create a personalized email which is charming and fun so that the user is engaged. The user has recently watched family-friendly films. These are the recommended items – The Little Mermaid, Encanto, Spider-Man: Into the Spider-Verse.

By invoking one of the foundation models (FMs) provided in Amazon Bedrock, such as Claude 2, with the prompt and the sample code that follows, you can create a personalized email for the user:

# Invoke a foundation model in Amazon Bedrock; the request body must be a
# JSON-formatted string in the format the chosen model expects
personalized_email_response = bedrock_client.invoke_model(
    body = prompt,
    modelId = identifier_of_the_model)

For more information, see the Amazon Bedrock API reference.
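As a more complete, hedged sketch (the model ID and inference parameters are illustrative, and the request and response fields follow the Anthropic Claude text-completion format), the call could look like this end to end:

import json
import boto3

bedrock_client = boto3.client("bedrock-runtime")

# Claude 2 expects a Human/Assistant-formatted prompt inside a JSON request body
request_body = json.dumps({
    "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
    "max_tokens_to_sample": 1000,
    "temperature": 0.7})

personalized_email_response = bedrock_client.invoke_model(
    body = request_body,
    modelId = "anthropic.claude-v2")

# The generated email is returned in the "completion" field of the response body
response_body = json.loads(personalized_email_response["body"].read())
print(response_body["completion"])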

Amazon Bedrock returns a personalized email for the user:

Subject: Fall in love with this recommended selection for movie night!

Dear <user name>,

Desiring the cozy feel of fall? No problem! Check our top three recommendations for movies that will have you cozy on the couch with your loved ones:

1. The Little Mermaid: This classic Disney movie is all about a mermaid princess named Ariel, who dreams of the human world. Because of her fascination, she makes a deal with the sea witch Ursula and learns a major lesson.

2. Encanto: This Disney movie is about the Madrigals, a Colombian family who lives in a magical house. Each member of the family has a unique gift, except for young Maribel who must help save her family.

3. Spider-Man: Into the Spider-Verse: This animated superhero movie is a must-see action movie. Spider-man, a Brooklyn teen named Miles Morales, teams up with other spider-powered people to save the multiverse.

With lovable characters, catchy tunes, and moving stories, you really can’t go wrong with any of these three. Grab the popcorn because you’re in for a treat!

Use Case 2: Use generative AI to elevate one-to-many marketing campaigns

When it comes to one-to-many email marketing, generic content can result in low engagement (that is, low open rates and unsubscribes). One way companies circumvent this outcome is to manually craft variations of outbound messages with compelling subjects. This can lead to inefficient use of time. By integrating Amazon Personalize and Amazon Bedrock into your workflow, you can quickly identify the interested segment of users and create variations of email content with greater relevance and engagement.

The following diagram illustrates the architecture and workflow for elevating marketing campaigns powered by generative AI.

To compose one-to-many emails, first import your dataset of users’ interactions into Amazon Personalize for training. Amazon Personalize trains the model using the user segmentation recipe. With the user segmentation recipe, Amazon Personalize automatically identifies the individual users who demonstrate a propensity for the chosen items and uses them as the target audience.

To identify the target audience and retrieve metadata for an item, you can use the following sample code:

# Create a batch segment job that identifies the users most likely to engage with each item
create_batch_segment_response = personalize.create_batch_segment_job(
    jobName = job_name,
    solutionVersionArn = solution_version_arn,
    numResults = number_of_users_to_recommend,
    roleArn = role_arn,   # IAM service role (assumed) that grants Amazon Personalize access to the S3 paths
    jobInput = {
        "s3DataSource": {
            "path": batch_input_path
        }
    },
    jobOutput = {
        "s3DataDestination": {
            "path": batch_output_path
        }
    }
)

For more information, see the Amazon Personalize API reference.

Amazon Personalize delivers to batch_output_path a list of recommended users to target for each item. You can then pass the user segment, along with your prompt, to one of the FMs in Amazon Bedrock.
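To give a sense of what consuming that output looks like, here is a hedged sketch that reads the batch segment results from Amazon S3; it assumes the JSON Lines output format that batch segment jobs write, where each line pairs an input item with the matching usersList, and the bucket and key names are illustrative:

import json
import boto3

s3 = boto3.client("s3")

# batch_output_bucket and batch_output_key are assumed to point at the job's output file
result = s3.get_object(Bucket = batch_output_bucket, Key = batch_output_key)

# Build a mapping from each item to the users most likely to engage with it
item_to_user_segment = {}
for line in result["Body"].read().decode("utf-8").splitlines():
    record = json.loads(line)
    item_to_user_segment[record["input"]["itemId"]] = record["output"]["usersList"]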

For this use case, you might want to market a newly released sneaker through email. An example prompt might include the following:

For the user segment “sneaker heads”, create a catchy email that promotes the latest sneaker “Ultra Fame II”. Provide users with discount code FAME10 to save 10%.

Similar to the first use case, you’ll use the following code in Amazon Bedrock:

# As in the first use case, the request body must be a JSON-formatted string
# in the format the chosen model expects
personalized_email_response = bedrock_client.invoke_model(
    body = prompt,
    modelId = identifier_of_the_model)

For more information, see the Amazon Bedrock API reference.

Amazon Bedrock returns a personalized email based on the item chosen for each user segment, as shown:

Subject: <<name>>, your ticket to the Hall of Fame awaits

Hey <<name>>,

The wait is over. Check out the new Ultra Fame II! It’s the most innovative and comfortable Ultra Fame shoe yet. Its new design will have you turning heads with every step. Plus, you’ll get a mix of comfort, support, and style that’s just enough to get you into the Hall of Fame.

Don’t wait until it’s too late. Use the code FAME10 to save 10% on your next pair.

To test and determine which email leads to the highest engagement, you can use Amazon Bedrock to generate variations of catchy subject lines and content in a fraction of the time it would take to manually produce test content.
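For example, one simple way to produce several candidate subject lines for testing is to ask the model for multiple variations in a single call, as in this illustrative sketch (it reuses the Claude 2 request format shown earlier; the model ID and parameters are assumptions):

import json
import boto3

bedrock_client = boto3.client("bedrock-runtime")

variation_prompt = (
    "Write 5 short, catchy subject line variations for an email promoting the "
    "Ultra Fame II sneaker to the 'sneaker heads' user segment. "
    "Mention the discount code FAME10 in at least two of them."
)

subject_line_response = bedrock_client.invoke_model(
    body = json.dumps({
        "prompt": f"\n\nHuman: {variation_prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300}),
    modelId = "anthropic.claude-v2")

# Claude 2 returns the generated text in the "completion" field
print(json.loads(subject_line_response["body"].read())["completion"])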

Conclusion

By integrating Amazon Personalize and Amazon Bedrock, you can deliver personalized promotional content to the right audience.

Generative AI powered by FMs is changing how businesses build hyper-personalized experiences for consumers. AWS AI services, such as Amazon Personalize and Amazon Bedrock, can help recommend and deliver products, content, and compelling marketing messages personalized to your users. For more information on working with generative AI on AWS, refer to Announcing New Tools for Building with Generative AI on AWS.


About the Authors

Ba’Carri Johnson is a Sr. Technical Product Manager working with AWS AI/ML on the Amazon Personalize team. With a background in computer science and strategy, she is passionate about product innovation. In her spare time, she enjoys traveling and exploring the great outdoors.

Ragini Prasad is a Software Development Manager with the Amazon Personalize team focused on building AI-powered recommender systems at scale. In her spare time, she enjoys art and travel.

Jingwen Hu is a Sr. Technical Product Manager working with AWS AI/ML on the Amazon Personalize team. In her spare time, she enjoys traveling and exploring local food.

Anna Grüebler is a Specialist Solutions Architect at AWS focusing on artificial intelligence. She has more than 10 years of experience helping customers develop and deploy machine learning applications. Her passion is putting new technologies in the hands of everyone and solving difficult problems by using AI in the cloud.

Tim Wu Kunpeng is a Sr. AI Specialist Solutions Architect with extensive experience in end-to-end personalization solutions. He is a recognized industry expert in e-commerce and media and entertainment, with expertise in generative AI, data engineering, deep learning, recommendation systems, responsible AI, and public speaking.

Read More