The journey of PGA TOUR’s generative AI virtual assistant, from concept to development to prototype
This is a guest post co-written with Scott Gutterman from the PGA TOUR.
Generative artificial intelligence (generative AI) has enabled new possibilities for building intelligent systems. Recent improvements in generative AI-based large language models (LLMs) have enabled their use in a variety of applications surrounding information retrieval. Given the data sources, LLMs provided tools that allowed us to build a Q&A chatbot in weeks, rather than the years a comparable system might previously have taken, and likely with worse performance. We formulated a Retrieval Augmented Generation (RAG) solution that would allow the PGA TOUR to create a prototype for a future fan engagement platform, making its data accessible to fans in an interactive, conversational format.
Using structured data to answer questions requires a way to effectively extract data that’s relevant to a user’s query. We formulated a text-to-SQL approach whereby a user’s natural language query is converted to a SQL statement using an LLM. The SQL is run by Amazon Athena to return the relevant data. This data is then provided to an LLM, which is asked to answer the user’s query given the data.
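The text-to-SQL flow just described can be sketched as a small orchestration function. The function names and stub responses below are illustrative assumptions, not the TOUR’s actual implementation; in production the two generation steps would call an LLM (for example via Amazon Bedrock) and the query step would call Amazon Athena.

```python
def answer_with_text_to_sql(question, generate_sql, run_sql, generate_answer):
    """Orchestrate the text-to-SQL pipeline with pluggable components.

    generate_sql:    callable(question) -> SQL string (normally an LLM call)
    run_sql:         callable(sql) -> rows (normally Amazon Athena)
    generate_answer: callable(question, rows) -> answer (normally an LLM call)
    """
    sql = generate_sql(question)            # 1. natural language -> SQL
    rows = run_sql(sql)                     # 2. execute SQL against the data
    return generate_answer(question, rows)  # 3. summarize rows into an answer

# Stubbed usage example; replace the lambdas with real LLM and Athena calls.
answer = answer_with_text_to_sql(
    "How many birdies did player X make last year?",
    generate_sql=lambda q: "SELECT COUNT(*) FROM shots",
    run_sql=lambda sql: [(12,)],
    generate_answer=lambda q, rows: f"Player X made {rows[0][0]} birdies last year.",
)
```

Keeping each stage injectable this way makes it straightforward to swap models or data sources while testing the pipeline.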
Using text data requires an index that can be used to search and provide relevant context to an LLM to answer a user query. To enable quick information retrieval, we use Amazon Kendra as the index for these documents. When users ask questions, our virtual assistant rapidly searches through the Amazon Kendra index to find relevant information. Amazon Kendra uses natural language processing (NLP) to understand user queries and find the most relevant documents. The relevant information is then provided to the LLM for final response generation. Our final solution is a combination of these text-to-SQL and text-RAG approaches.
In this post, we highlight how the AWS Generative AI Innovation Center collaborated with AWS Professional Services and the PGA TOUR to develop a prototype virtual assistant using Amazon Bedrock that enables fans to extract information about any event, player, hole, or shot-level details in a seamless, interactive manner. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI.
Development: Getting the data ready
As with any data-driven project, performance will only ever be as good as the data. We processed the data so that the LLM could effectively query and retrieve relevant data.
For the tabular competition data, we focused on a subset of data relevant to the greatest number of user queries and labeled the columns intuitively, so that they would be easier for LLMs to understand. We also created auxiliary columns to help the LLM understand concepts it might otherwise struggle with. For example, if a golfer completes a hole in one shot less than par (such as 3 shots on a par 4 or 4 shots on a par 5), it is commonly called a birdie. If a user asks, “How many birdies did player X make last year?”, just having the score and par in the table is not sufficient. As a result, we added columns to indicate common golf terms, such as bogey, birdie, and eagle. In addition, we linked the competition data with a separate video collection by joining on a video_id column, which allows our app to pull the video associated with a particular shot in the competition data. We also enabled joining text data to the tabular data, for example adding biographies for each player as a text column. The following figure shows the step-by-step procedure of how a query is processed in the text-to-SQL pipeline. The numbers indicate the sequence of steps to answer a query.
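As an illustration of the auxiliary columns described above, score-to-par terms can be derived directly from strokes and par. The helper below is a hypothetical sketch of that labeling logic, not the TOUR’s actual data processing code:

```python
def golf_term(strokes, par):
    """Map a hole score to its common golf term, based on score relative to par."""
    if strokes == 1:
        return "ace"  # a hole-in-one, regardless of par
    diff = strokes - par
    terms = {-3: "albatross", -2: "eagle", -1: "birdie",
             0: "par", 1: "bogey", 2: "double bogey"}
    # Fall back to a signed score-to-par description for unusual scores.
    return terms.get(diff, f"{diff:+d} to par")
```

Materializing this as a column means the LLM can translate “How many birdies…” directly into a simple `WHERE` clause instead of reasoning about arithmetic on score and par.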
In the following figure we demonstrate our end-to-end pipeline. We use AWS Lambda as our orchestration function, responsible for interacting with various data sources and LLMs and performing error correction based on the user query. Steps 1–8 are similar to what is shown in the preceding figure. There are slight changes for the unstructured data, which we discuss next.
Text data requires unique processing steps that chunk (or segment) long documents into parts digestible by the LLM, while maintaining topic coherence. We experimented with several approaches and settled on a page-level chunking scheme that aligned well with the format of the Media Guides. We used Amazon Kendra, which is a managed service that takes care of indexing documents, without requiring specification of embeddings, while providing an easy API for retrieval. The following figure illustrates this architecture.
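The page-level chunking scheme we settled on can be sketched as follows. The form-feed delimiter is an assumption for illustration (PDF-to-text tools commonly emit `\f` between pages), and in our solution Amazon Kendra handles the indexing of the resulting chunks:

```python
def chunk_by_page(document_text, max_chars=8000):
    """Split extracted document text into page-level chunks.

    Pages are assumed to be separated by form feeds (\f). Pages longer than
    max_chars are split further on paragraph boundaries so that each chunk
    stays digestible by the LLM while preserving topic coherence.
    """
    chunks = []
    for page in document_text.split("\f"):
        page = page.strip()
        if not page:
            continue
        if len(page) <= max_chars:
            chunks.append(page)
            continue
        # Fall back to paragraph-level splitting for oversized pages.
        current = ""
        for para in page.split("\n\n"):
            if current and len(current) + len(para) + 2 > max_chars:
                chunks.append(current)
                current = para
            else:
                current = f"{current}\n\n{para}" if current else para
        if current:
            chunks.append(current)
    return chunks
```

Page boundaries worked well here because each Media Guide page is largely self-contained; other document formats may call for a different segmentation unit.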
The unified, scalable pipeline we developed allows the PGA TOUR to scale to their full history of data, some of which goes back to the 1800s. It enables future applications that can use live, on-course context to create rich real-time experiences.
Development: Evaluating LLMs and developing generative AI applications
We carefully tested and evaluated the first- and third-party LLMs available in Amazon Bedrock to choose the model best suited for our pipeline and use case. We evaluated Anthropic’s Claude v2 and Claude Instant on Amazon Bedrock, and observed that Claude v2 generated better overall results for our final structured and unstructured data pipeline.
Prompting is a critical aspect of getting LLMs to output text as desired. We spent considerable time experimenting with different prompts for each of the tasks. For example, for the text-to-SQL pipeline we had several fallback prompts, with increasing specificity and gradually simplified table schemas. If a SQL query was invalid and resulted in an error from Athena, we developed an error correction prompt that would pass the error and incorrect SQL to the LLM and ask it to fix it. The final prompt in the text-to-SQL pipeline asks the LLM to take the Athena output, which can be provided in Markdown or CSV format, and provide an answer to the user. For the unstructured text, we developed general prompts to use the context retrieved from Amazon Kendra to answer the user question. The prompt included instructions to use only the information retrieved from Amazon Kendra and not rely on data from the LLM pre-training.
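The error-correction loop described above can be sketched as follows. The prompt wording and callable names are illustrative assumptions; in our pipeline the LLM calls go through Amazon Bedrock and the queries run on Amazon Athena:

```python
def generate_sql_with_retries(question, llm, execute, max_attempts=3):
    """Generate SQL for a question, feeding engine errors back to the LLM.

    llm:     callable(prompt) -> SQL string (normally an Amazon Bedrock call)
    execute: callable(sql) -> rows (normally Amazon Athena), raising on bad SQL
    """
    prompt = f"Write a SQL query to answer: {question}"
    sql = llm(prompt)
    for _ in range(max_attempts):
        try:
            return sql, execute(sql)
        except Exception as err:
            # Pass the failing SQL and the engine error back for correction.
            prompt = (f"The query below failed with error: {err}\n"
                      f"{sql}\nReturn a corrected SQL query.")
            sql = llm(prompt)
    raise RuntimeError("Could not produce valid SQL after retries")
```

Bounding the retries keeps latency predictable when the model cannot recover from a malformed query.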
Latency is often a concern with generative AI applications, and that is also the case here. It is especially a concern for text-to-SQL, which requires an initial SQL generation LLM invocation, followed by a response generation LLM invocation. If we’re using a large LLM, such as Anthropic’s Claude v2, this effectively doubles the latency of a single LLM invocation. We experimented with several configurations of large and smaller LLMs to evaluate run time as well as correctness. The following table shows an example for one question, demonstrating the latency as well as the generated responses with Anthropic’s Claude v2 and Claude Instant on Amazon Bedrock.
Prototype
In our application, we used a Lambda function to orchestrate the prompt choices and communication between Amazon Athena, Amazon Kendra, and Amazon Bedrock. The virtual assistant features a conversational interface, built with React and the Cloudscape Design System, that uses text and videos to engage with users.
Our virtual assistant can field a wide range of questions about PGA TOUR events, players, statistics, history, and more. For example, when asked about Tony Finau’s longest drives at the Shriners Children’s Open (one of the Signature events of the PGA TOUR), the assistant can query structured data to pull his shot level data. It can extract the exact distance in yards, the round number and the hole during which he achieved this record using the text-to-SQL pipeline. The assistant then interprets this numerical data to generate a final response. The following table contains some example responses.
| Question | Response |
| --- | --- |
| What was the longest drive hit by Tony Finau in the Shriners Children’s Open? | The longest drive hit by Tony Finau in the Shriners Childrens Open was 382 yards, which he hit during the first round on hole number 4 in 2018. |
| What was the longest drive hit by Collin Morikawa in the Shriners Children’s Open? | Collin Morikawa’s longest drive at the Shriners Childrens Open was 334 yards. This occurred in 2019 during the first round on hole number 15. |
| Did anyone make an ace at the 2022 Shriners Children’s Open? | Yes, Adam Hadwin made a hole-in-one on hole 14 during round 3 of the 2022 Shriners Children’s Open |
The following explainer video highlights a few examples of interacting with the virtual assistant.
In initial testing, our PGA TOUR virtual assistant has shown great promise in improving fan experiences. By blending AI technologies like text-to-SQL, semantic search, and natural language generation, the assistant delivers informative, engaging responses. Fans are empowered to effortlessly access data and narratives that were previously hard to find.
What does the future hold?
As we continue development, we will expand the range of questions our virtual assistant can handle. This will require extensive testing, through collaboration between AWS and the PGA TOUR. Over time, we aim to evolve the assistant into a personalized, omni-channel experience accessible across web, mobile, and voice interfaces.
The establishment of a cloud-based generative AI assistant lets the PGA TOUR present its vast data source to multiple internal and external stakeholders. As the sports generative AI landscape evolves, it enables the creation of new content. For example, you can use AI and machine learning (ML) to surface content fans want to see as they’re watching an event, or as production teams are looking for shots from previous tournaments that match a current event. For example, if Max Homa is getting ready to take his final shot at the PGA TOUR Championship from a spot 20 feet from the pin, the PGA TOUR can use AI and ML to identify and present clips, with AI-generated commentary, of him attempting a similar shot five times previously. This kind of access and data allows a production team to immediately add value to the broadcast or allow a fan to customize the type of data that they want to see.
“The PGA TOUR is the industry leader in using cutting-edge technology to improve the fan experience. AI is at the forefront of our technology stack, where it is enabling us to create a more engaging and interactive environment for fans. This is the beginning of our generative AI journey in collaboration with the AWS Generative AI Innovation Center for a transformational end-to-end customer experience. We are working to leverage Amazon Bedrock and our proprietary data to create an interactive experience for PGA TOUR fans to find information of interest about an event, player, stats, or other content in an interactive fashion.”
– Scott Gutterman, SVP of Broadcast and Digital Properties at PGA TOUR.
Conclusion
The project we discussed in this post exemplifies how structured and unstructured data sources can be fused using AI to create next-generation virtual assistants. For sports organizations, this technology enables more immersive fan engagement and unlocks internal efficiencies. The data intelligence we surface helps PGA TOUR stakeholders like players, coaches, officials, partners, and media make informed decisions faster. Beyond sports, our methodology can be replicated across any industry. The same principles apply to building assistants that engage customers, employees, students, patients, and other end-users. With thoughtful design and testing, virtually any organization can benefit from an AI system that contextualizes their structured databases, documents, images, videos, and other content.
If you’re interested in implementing similar functionality, consider using Agents for Amazon Bedrock and Knowledge Bases for Amazon Bedrock as an alternative, fully AWS-managed solution. This approach provides intelligent automation and data search capabilities through customizable agents, which can make user interactions with applications more natural, efficient, and effective.
About the authors
Scott Gutterman is the SVP of Digital Operations for the PGA TOUR. He is responsible for the TOUR’s overall digital operations, product development and is driving their GenAI strategy.
Ahsan Ali is an Applied Scientist at the Amazon Generative AI Innovation Center, where he works with customers from different domains to solve their urgent and expensive problems using Generative AI.
Tahin Syed is an Applied Scientist with the Amazon Generative AI Innovation Center, where he works with customers to help realize business outcomes with generative AI solutions. Outside of work, he enjoys trying new food, traveling, and teaching taekwondo.
Grace Lang is an Associate Data & ML engineer with AWS Professional Services. Driven by a passion for overcoming tough challenges, Grace helps customers achieve their goals by developing machine learning powered solutions.
Jae Lee is a Senior Engagement Manager in ProServe’s M&E vertical. She leads and delivers complex engagements, exhibits strong problem solving skill sets, manages stakeholder expectations, and curates executive level presentations. She enjoys working on projects focused on sports, generative AI, and customer experience.
Karn Chahar is a Security Consultant with the shared delivery team at AWS. He is a technology enthusiast who enjoys working with customers to solve their security challenges and to improve their security posture in the cloud.
Mike Amjadi is a Data & ML Engineer with AWS ProServe focused on enabling customers to maximize value from data. He specializes in designing, building, and optimizing data pipelines following well-architected principles. Mike is passionate about using technology to solve problems and is committed to delivering the best results for our customers.
Vrushali Sawant is a Front End Engineer with Proserve. She is highly skilled in creating responsive websites. She loves working with customers, understanding their requirements and providing them with scalable, easy to adopt UI/UX solutions.
Neelam Patel is a Customer Solutions Manager at AWS, leading key Generative AI and cloud modernization initiatives. Neelam works with key executives and technology owners to address their cloud transformation challenges and helps customers maximize the benefits of cloud adoption. She has an MBA from Warwick Business School, UK and a Bachelors in Computer Engineering, India.
Dr. Murali Baktha is Global Golf Solution Architect at AWS, spearheads pivotal initiatives involving Generative AI, data analytics and cutting-edge cloud technologies. Murali works with key executives and technology owners to understand customer’s business challenges and designs solutions to address those challenges. He has an MBA in Finance from UConn and a doctorate from Iowa State University.
Mehdi Noor is an Applied Science Manager at the Generative AI Innovation Center. With a passion for bridging technology and innovation, he helps AWS customers unlock the potential of generative AI, turning potential challenges into opportunities for rapid experimentation and innovation by focusing on scalable, measurable, and impactful uses of advanced AI technologies and streamlining the path to production.
Enhance code review and approval efficiency with generative AI using Amazon Bedrock
In the world of software development, code review and approval are important processes for ensuring the quality, security, and functionality of the software being developed. However, managers tasked with overseeing these critical processes often face numerous challenges, such as the following:
- Lack of technical expertise – Managers may not have an in-depth technical understanding of the programming language used or may not have been involved in software engineering for an extended period. This results in a knowledge gap that can make it difficult for them to accurately assess the impact and soundness of the proposed code changes.
- Time constraints – Code review and approval can be a time-consuming process, especially in larger or more complex projects. Managers need to balance between the thoroughness of review vs. the pressure to meet project timelines.
- Volume of change requests – Dealing with a high volume of change requests is a common challenge for managers, especially if they’re overseeing multiple teams and projects. Similar to the challenge of time constraint, managers need to be able to handle those requests efficiently so as to not hold back project progress.
- Manual effort – Code review requires manual effort by the managers, and the lack of automation can make it difficult to scale the process.
- Documentation – Proper documentation of the code review and approval process is important for transparency and accountability.
With the rise of generative artificial intelligence (AI), managers can now harness this transformative technology and integrate it with the AWS suite of deployment tools and services to streamline the review and approval process in a manner not previously possible. In this post, we explore a solution that offers an integrated end-to-end deployment workflow that incorporates automated change analysis and summarization together with approval workflow functionality. We use Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage any infrastructure.
Solution overview
The following diagram illustrates the solution architecture.
The workflow consists of the following steps:
- A developer pushes new code changes to their code repository (such as AWS CodeCommit), which automatically triggers the start of an AWS CodePipeline deployment.
- The application code goes through a code building process, performs vulnerability scans, and conducts unit tests using your preferred tools.
- AWS CodeBuild retrieves the repository and performs a git show command to extract the code differences between the current commit version and the previous commit version. This produces a line-by-line output that indicates the code changes made in this release.
- CodeBuild saves the output to an Amazon DynamoDB table with additional reference information:
- CodePipeline run ID
- AWS Region
- CodePipeline name
- CodeBuild build number
- Date and time
- Status
- Amazon DynamoDB Streams captures the data modifications made to the table.
- An AWS Lambda function is triggered by the DynamoDB stream to process the record captured.
- The function invokes the Anthropic Claude v2 model on Amazon Bedrock via the Amazon Bedrock InvokeModel API call. The code differences, together with a prompt, are provided as input to the model for analysis, and a summary of code changes is returned as output.
- The output from the model is saved back to the same DynamoDB table.
- The manager is notified via Amazon Simple Email Service (Amazon SES) of the summary of code changes and that their approval is required for the deployment.
- The manager reviews the email and provides their decision (either approve or reject) together with any review comments via the CodePipeline console.
- The approval decision and review comments are captured by Amazon EventBridge, which triggers a Lambda function to save them back to DynamoDB.
- If approved, the pipeline deploys the application code using your preferred tools. If rejected, the workflow ends and the deployment does not proceed further.
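Two of the steps above can be sketched in code. Step 3 extracts a unified diff with git show, which can be summarized before storage; step 7 invokes Anthropic Claude v2 via the InvokeModel API, which expects a JSON body in the Human/Assistant prompt format. Both helpers below are illustrative sketches under those assumptions (the prompt wording and function names are not the solution’s actual code):

```python
import json

def diff_stats(diff_text):
    """Count added and removed lines in unified diff output (e.g. from git show)."""
    added = removed = 0
    for line in diff_text.splitlines():
        # Skip the +++/--- file headers; count only actual change lines.
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
    return {"added": added, "removed": removed}

def build_claude_v2_request(code_diff, max_tokens=1024):
    """Build the InvokeModel request body for Anthropic Claude v2 on Amazon Bedrock."""
    prompt = (f"\n\nHuman: Summarize the following code changes "
              f"for a reviewer:\n{code_diff}\n\nAssistant:")
    return json.dumps({"prompt": prompt, "max_tokens_to_sample": max_tokens})

# In the Lambda function, the request would be sent like this
# (requires AWS credentials and Amazon Bedrock model access):
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId="anthropic.claude-v2",
#     body=build_claude_v2_request(diff_text),
# )
# summary = json.loads(response["body"].read())["completion"]
```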
In the following sections, you deploy the solution and verify the end-to-end workflow.
Prerequisites
To follow the instructions in this solution, you need the following prerequisites:
- An AWS account with an AWS Identity and Access Management (IAM) user who has permissions to AWS CloudFormation, CodePipeline, CodeCommit, CodeBuild, DynamoDB, Lambda, Amazon Bedrock, Amazon SES, EventBridge, and IAM
- Model access to Anthropic Claude on Amazon Bedrock
Deploy the solution
To deploy the solution, complete the following steps:
- Choose Launch Stack to launch a CloudFormation stack in us-east-1.
- For EmailAddress, enter an email address that you have access to. The summary of code changes will be sent to this email address.
- For modelId, leave as the default anthropic.claude-v2, which is the Anthropic Claude v2 model.
Deploying the template will take about 4 minutes.
- When you receive an email from Amazon SES to verify your email address, choose the link provided to authorize your email address.
- You’ll receive an email titled “Summary of Changes” for the initial commit of the sample repository into CodeCommit.
- On the AWS CloudFormation console, navigate to the Outputs tab of the deployed stack.
- Copy the value of RepoCloneURL. You need this to access the sample code repository.
Test the solution
You can test the workflow end to end by taking on the role of a developer and pushing some code changes. A set of sample codes has been prepared for you in CodeCommit. To access the CodeCommit repository, enter the following commands on your IDE:
You will find the following directory structure for an AWS Cloud Development Kit (AWS CDK) application that creates a Lambda function to perform a bubble sort on a string of integers. The Lambda function is accessible via a publicly available URL.
You make three changes to the application code.
- To enhance the function to support both the quick sort and bubble sort algorithms (taking in a parameter to select which algorithm to use, and returning both the algorithm used and the sorted array in the output), replace the entire content of lambda/index.py with the following code:
- To reduce the timeout setting of the function from 10 minutes to 5 seconds (because we don’t expect the function to run longer than a few seconds), update line 47 in my_sample_project/my_sample_project_stack.py as follows:
- To restrict the invocation of the function using IAM for added security, update line 56 in my_sample_project/my_sample_project_stack.py as follows:
- Push the code changes by entering the following commands:
This starts the CodePipeline deployment workflow from Steps 1–9 as outlined in the solution overview. When invoking the Amazon Bedrock model, we provided the following prompt:
Best practices to build generative AI applications on AWS
Generative AI applications driven by foundational models (FMs) are enabling organizations with significant business value in customer experience, productivity, process optimization, and innovations. However, adoption of these FMs involves addressing some key challenges, including quality output, data privacy, security, integration with organization data, cost, and skills to deliver.
In this post, we explore different approaches you can take when building applications that use generative AI. With the rapid advancement of FMs, it’s an exciting time to harness their power, but also crucial to understand how to properly use them to achieve business outcomes. We provide an overview of key generative AI approaches, including prompt engineering, Retrieval Augmented Generation (RAG), and model customization. When applying these approaches, we discuss key considerations around potential hallucination, integration with enterprise data, output quality, and cost. By the end, you will have solid guidelines and a helpful flow chart for determining the best method to develop your own FM-powered applications, grounded in real-life examples. Whether creating a chatbot or summarization tool, you can shape powerful FMs to suit your needs.
Generative AI with AWS
The emergence of FMs is creating both opportunities and challenges for organizations looking to use these technologies. A key challenge is ensuring high-quality, coherent outputs that align with business needs, rather than hallucinations or false information. Organizations must also carefully manage data privacy and security risks that arise from processing proprietary data with FMs. The skills needed to properly integrate, customize, and validate FMs within existing systems and data are in short supply. Building large language models (LLMs) from scratch or customizing pre-trained models requires substantial compute resources, expert data scientists, and months of engineering work. The computational cost alone can easily run into the millions of dollars to train models with hundreds of billions of parameters on massive datasets using thousands of GPUs or TPUs. Beyond hardware, data cleaning and processing, model architecture design, hyperparameter tuning, and training pipeline development demand specialized machine learning (ML) skills. The end-to-end process is complex, time-consuming, and prohibitively expensive for most organizations without the requisite infrastructure and talent investment. Organizations that fail to adequately address these risks can face negative impacts to their brand reputation, customer trust, operations, and revenues.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon via a single API. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. Amazon Bedrock is HIPAA eligible, and you can use Amazon Bedrock in compliance with the GDPR. With Amazon Bedrock, your content is not used to improve the base models and is not shared with third-party model providers. Your data in Amazon Bedrock is always encrypted in transit and at rest, and you can optionally encrypt resources using your own keys. You can use AWS PrivateLink with Amazon Bedrock to establish private connectivity between your FMs and your VPC without exposing your traffic to the internet. With Knowledge Bases for Amazon Bedrock, you can give FMs and agents contextual information from your company’s private data sources for RAG to deliver more relevant, accurate, and customized responses. You can privately customize FMs with your own data through a visual interface without writing any code. As a fully managed service, Amazon Bedrock offers a straightforward developer experience to work with a broad range of high-performing FMs.
Launched in 2017, Amazon SageMaker is a fully managed service that makes it straightforward to build, train, and deploy ML models. More and more customers are building their own FMs using SageMaker, including Stability AI, AI21 Labs, Hugging Face, Perplexity AI, Hippocratic AI, LG AI Research, and Technology Innovation Institute. To help you get started quickly, Amazon SageMaker JumpStart offers an ML hub where you can explore, train, and deploy a wide selection of public FMs, such as Mistral models, LightOn models, RedPajama, Mosaic MPT-7B, FLAN-T5/UL2, GPT-J-6B/Neox-20B, and Bloom/BloomZ, using purpose-built SageMaker tools such as experiments and pipelines.
Common generative AI approaches
In this section, we discuss common approaches to implement effective generative AI solutions. We explore popular prompt engineering techniques that allow you to achieve more complex and interesting tasks with FMs. We also discuss how techniques like RAG and model customization can further enhance FMs’ capabilities and overcome challenges like limited data and computational constraints. With the right technique, you can build powerful and impactful generative AI solutions.
Prompt engineering
Prompt engineering is the practice of carefully designing prompts to efficiently tap into the capabilities of FMs. It involves the use of prompts, which are short pieces of text that guide the model to generate more accurate and relevant responses. With prompt engineering, you can improve the performance of FMs and make them more effective for a variety of applications. In this section, we explore techniques like zero-shot and few-shot prompting, which rapidly adapts FMs to new tasks with just a few examples, and chain-of-thought prompting, which breaks down complex reasoning into intermediate steps. These methods demonstrate how prompt engineering can make FMs more effective on complex tasks without requiring model retraining.
Zero-shot prompting
A zero-shot prompt technique requires an FM to generate an answer without any explicit examples of the desired behavior, relying solely on its pre-training. The following screenshot shows an example of a zero-shot prompt with the Anthropic Claude 2.1 model on the Amazon Bedrock console.
In these instructions, we didn’t provide any examples. However, the model can understand the task and generate appropriate output. Zero-shot prompts are the most straightforward prompt technique to begin with when evaluating an FM for your use case. However, although FMs are remarkable with zero-shot prompts, they may not always yield accurate or desired results for more complex tasks. When zero-shot prompts fall short, it is recommended to provide a few examples in the prompt (few-shot prompts).
Few-shot prompting
The few-shot prompt technique allows FMs to do in-context learning from the examples in the prompts and perform the task more accurately. With just a few examples, you can rapidly adapt FMs to new tasks without large training sets and guide them towards the desired behavior. The following is an example of a few-shot prompt with the Cohere Command model on the Amazon Bedrock console.
In the preceding example, the FM was able to identify entities from the input text (reviews) and extract the associated sentiments. Few-shot prompts are an effective way to tackle complex tasks by providing a few examples of input-output pairs. For straightforward tasks, you can give one example (1-shot), whereas for more difficult tasks, you should provide three (3-shot) to five (5-shot) examples. Min et al. (2022) published findings about in-context learning that can enhance the performance of the few-shot prompting technique. You can use few-shot prompting for a variety of tasks, such as sentiment analysis, entity recognition, question answering, translation, and code generation.
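Assembling a few-shot prompt from example input-output pairs can be sketched like this; the Input/Output formatting is an illustrative convention, not the exact prompt used on the Amazon Bedrock console:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt from (input, output) example pairs.

    The examples demonstrate the desired behavior in-context, so the FM
    can infer the task without any retraining.
    """
    parts = [instruction]
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    # Leave the final output empty for the model to complete.
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Usage: 2-shot sentiment classification.
prompt = build_few_shot_prompt(
    "Classify the sentiment of each review.",
    [("Great service!", "positive"), ("Food was cold.", "negative")],
    "Loved the atmosphere.",
)
```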
Chain-of-thought prompting
Despite its potential, few-shot prompting has limitations, especially when dealing with complex reasoning tasks (such as arithmetic or logical tasks). These tasks require breaking the problem down into steps and then solving it. Wei et al. (2022) introduced the chain-of-thought (CoT) prompting technique to solve complex reasoning problems through intermediate reasoning steps. You can combine CoT with few-shot prompting to improve results on complex tasks. The following is an example of a reasoning task using few-shot CoT prompting with the Anthropic Claude 2 model on the Amazon Bedrock console.
Kojima et al. (2022) introduced an idea of zero-shot CoT by using FMs’ untapped zero-shot capabilities. Their research indicates that zero-shot CoT, using the same single-prompt template, significantly outperforms zero-shot FM performances on diverse benchmark reasoning tasks. You can use zero-shot CoT prompting for simple reasoning tasks by adding “Let’s think step by step” to the original prompt.
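In code, the zero-shot CoT trick amounts to a one-line prompt transformation:

```python
def zero_shot_cot(prompt):
    """Append the zero-shot chain-of-thought trigger from Kojima et al. (2022)."""
    return f"{prompt}\nLet's think step by step."
```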
ReAct
CoT prompting can enhance FMs’ reasoning capabilities, but it still depends on the model’s internal knowledge and doesn’t consider any external knowledge base or environment to gather more information, which can lead to issues like hallucination. The ReAct (reasoning and acting) approach addresses this gap by extending CoT and allowing dynamic reasoning using an external environment (such as Wikipedia).
Integration
FMs can comprehend questions and provide answers using their pre-trained knowledge. However, they lack the capacity to respond to queries that require access to an organization’s private data, or to autonomously carry out tasks. RAG and agents are methods to connect these generative AI-powered applications to enterprise datasets, empowering them to produce responses that account for organizational information and to run actions based on user requests.
Retrieval Augmented Generation
Retrieval Augmented Generation (RAG) allows you to customize a model’s responses when you want the model to consider new knowledge or up-to-date information. When your data changes frequently, like inventory or pricing, it’s not practical to fine-tune and update the model while it’s serving user queries. To equip the FM with up-to-date proprietary information, organizations turn to RAG, a technique that involves fetching data from company data sources and enriching the prompt with that data to deliver more relevant and accurate responses.
There are several use cases where RAG can help improve FM performance:
- Question answering – RAG models help question answering applications locate and integrate information from documents or knowledge sources to generate high-quality answers. For example, a question answering application could retrieve passages about a topic before generating a summarizing answer.
- Chatbots and conversational agents – RAG allows chatbots to access relevant information from large external knowledge sources. This makes the chatbot’s responses more knowledgeable and natural.
- Writing assistance – RAG can suggest relevant content, facts, and talking points to help you write documents such as articles, reports, and emails more efficiently. The retrieved information provides useful context and ideas.
- Summarization – RAG can find relevant source documents, passages, or facts to augment a summarization model’s understanding of a topic, allowing it to generate better summaries.
- Creative writing and storytelling – RAG can pull plot ideas, characters, settings, and creative elements from existing stories to inspire AI story generation models. This makes the output more interesting and grounded.
- Translation – RAG can find examples of how certain phrases are translated between languages. This provides context to the translation model, improving translation of ambiguous phrases.
- Personalization – In chatbots and recommendation applications, RAG can pull personal context like past conversations, profile information, and preferences to make responses more personalized and relevant.
There are several advantages to using a RAG framework:
- Reduced hallucinations – Retrieving relevant information helps ground the generated text in facts and real-world knowledge, rather than hallucinating text. This promotes more accurate, factual, and trustworthy responses.
- Coverage – Retrieval allows an FM to cover a broader range of topics and scenarios beyond its training data by pulling in external information. This helps address limited coverage issues.
- Efficiency – Retrieval lets the model focus its generation on the most relevant information, rather than generating everything from scratch. This improves efficiency and allows larger contexts to be used.
- Safety – Retrieving the information from required and permitted data sources can improve governance and control over harmful and inaccurate content generation. This supports safer adoption.
- Scalability – Indexing and retrieving from large corpora allows the approach to scale better compared to using the full corpus during generation. This enables you to adopt FMs in more resource-constrained environments.
RAG produces quality results because it augments the prompt with use case-specific context drawn directly from vectorized data stores. Compared to prompt engineering alone, it produces vastly improved results with a much lower chance of hallucination. You can build RAG-powered applications on your enterprise data using Amazon Kendra. RAG has higher complexity than prompt engineering because you need coding and architecture skills to implement the solution. However, Knowledge Bases for Amazon Bedrock provides a fully managed RAG experience and the most straightforward way to get started with RAG in Amazon Bedrock. Knowledge Bases for Amazon Bedrock automates the end-to-end RAG workflow, including ingestion, retrieval, and prompt augmentation, eliminating the need for you to write custom code to integrate data sources and manage queries. Session context management is built in, so your app can support multi-turn conversations. Knowledge base responses come with source citations to improve transparency and minimize hallucinations. The most straightforward way to build a generative AI-powered assistant is to use Amazon Q, which has a built-in RAG system.
RAG has the highest degree of flexibility when it comes to changes in the architecture. You can change the embedding model, vector store, and FM independently with minimal-to-moderate impact on other components. To learn more about the RAG approach with Amazon OpenSearch Service and Amazon Bedrock, refer to Build scalable and serverless RAG workflows with a vector engine for Amazon OpenSearch Serverless and Amazon Bedrock Claude models. To learn about how to implement RAG with Amazon Kendra, refer to Harnessing the power of enterprise data with generative AI: Insights from Amazon Kendra, LangChain, and large language models.
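To make the retrieve-then-augment loop concrete, here is a toy sketch in Python. It substitutes naive keyword overlap for a real retriever such as Amazon Kendra or a vector index, and the HR policy snippets and function names are invented for illustration:

```python
DOCUMENTS = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Expense reports must be filed within 30 days of purchase.",
    "The office is closed on national holidays.",
]

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query (a stand-in for a real vector index)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query, docs):
    # Augment the prompt with the retrieved context before sending it to the FM.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

prompt = build_rag_prompt("How many vacation days do employees get?", DOCUMENTS)
print(prompt)
```

In a production system, the keyword ranking would be replaced by semantic search over embeddings, but the shape of the loop (retrieve, augment, generate) is the same.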
Agents
FMs can understand and respond to queries based on their pre-trained knowledge. However, they are unable to complete any real-world tasks, like booking a flight or processing a purchase order, on their own. This is because such tasks require organization-specific data and workflows that typically need custom programming. Frameworks like LangChain and certain FMs such as Claude models provide function-calling capabilities to interact with APIs and tools. However, Agents for Amazon Bedrock, a new and fully managed AI capability from AWS, aims to make it more straightforward for developers to build applications using next-generation FMs. With just a few clicks, it can automatically break down tasks and generate the required orchestration logic, without needing manual coding. Agents can securely connect to company databases via APIs, ingest and structure the data for machine consumption, and augment it with contextual details to produce more accurate responses and fulfill requests. Because it handles integration and infrastructure, Agents for Amazon Bedrock allows you to fully harness generative AI for business use cases. Developers can now focus on their core applications rather than routine plumbing. The automated data processing and API calling also enable FMs to deliver up-to-date, tailored answers and perform actual tasks using proprietary knowledge.
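The pattern an agent framework automates can be sketched in a few lines: the model emits a structured tool call, a runtime dispatches it to a real function, and the result flows back into the next prompt. Everything below (the tool names, the fake model output, and the stub return values) is invented for illustration and is not any specific service's API:

```python
import json

# Registry of callable tools the "agent" may invoke (illustrative stubs; a real
# system would call enterprise APIs here).
TOOLS = {
    "get_stock_price": lambda symbol: {"symbol": symbol, "price": 178.25},
    "get_weather": lambda city: {"city": city, "forecast": "sunny"},
}

def dispatch(model_output):
    """Parse a JSON tool call emitted by the model and execute the matching tool."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]
    return tool(**call["arguments"])

# Pretend the FM decided it needs a tool and emitted this JSON.
fake_model_output = '{"tool": "get_stock_price", "arguments": {"symbol": "AMZN"}}'
result = dispatch(fake_model_output)
print(result)
```

Managed offerings such as Agents for Amazon Bedrock generate and run this orchestration loop for you, including securely wiring the tool registry to company APIs.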
Model customization
Foundation models are extremely capable and enable some great applications, but what will help drive your business is generative AI that knows what’s important to your customers, your products, and your company. And that’s only possible when you supercharge models with your data. Data is the key to moving from generic applications to customized generative AI applications that create real value for your customers and your business.
In this section, we discuss different techniques and benefits of customizing your FMs. We cover how model customization involves further training and changing the weights of the model to enhance its performance.
Fine-tuning
Fine-tuning is the process of taking a pre-trained FM, such as Llama 2, and further training it on a downstream task with a dataset specific to that task. The pre-trained model provides general linguistic knowledge, and fine-tuning allows it to specialize and improve performance on a particular task like text classification, question answering, or text generation. With fine-tuning, you provide labeled datasets—which are annotated with additional context—to train the model on specific tasks. You can then adapt the model parameters for the specific task based on your business context.
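As a hedged illustration of what such a labeled dataset can look like, the snippet below emits prompt/completion pairs in JSON Lines; the exact field names vary by model and service, so treat these keys and examples as placeholders rather than a required schema:

```python
import json

# Illustrative labeled examples for a sentiment-classification fine-tuning task.
records = [
    {"prompt": "Classify the sentiment: 'Great battery life.'", "completion": "Positive"},
    {"prompt": "Classify the sentiment: 'Arrived broken.'", "completion": "Negative"},
]

# One JSON object per line, as expected by JSON Lines training files.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```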
You can implement fine-tuning on FMs with Amazon SageMaker JumpStart and Amazon Bedrock. For more details, refer to Deploy and fine-tune foundation models in Amazon SageMaker JumpStart with two lines of code and Customize models in Amazon Bedrock with your own data using fine-tuning and continued pre-training.
Continued pre-training
Continued pre-training in Amazon Bedrock enables you to further pre-train an already trained model on additional data similar to its original training data. It enables the model to gain more general linguistic knowledge rather than focus on a single application. With continued pre-training, you can use your unlabeled datasets, or raw data, to improve the accuracy of the foundation model for your domain by tweaking model parameters. For example, a healthcare company can continue to pre-train its model using medical journals, articles, and research papers to make it more knowledgeable about industry terminology. For more details, refer to Amazon Bedrock Developer Experience.
Benefits of model customization
Model customization has several advantages and can help organizations with the following:
- Domain-specific adaptation – You can use a general-purpose FM, and then further train it on data from a specific domain (such as biomedical, legal, or financial). This adapts the model to that domain’s vocabulary, style, and so on.
- Task-specific fine-tuning – You can take a pre-trained FM and fine-tune it on data for a specific task (such as sentiment analysis or question answering). This specializes the model for that particular task.
- Personalization – You can customize an FM on an individual’s data (emails, texts, documents they’ve written) to adapt the model to their unique style. This can enable more personalized applications.
- Low-resource language tuning – You can retrain only the top layers of a multilingual FM on a low-resource language to better adapt it to that language.
- Fixing flaws – If certain unintended behaviors are discovered in a model, customizing on appropriate data can help update the model to reduce those flaws.
Model customization helps overcome the following FM adoption challenges:
- Adaptation to new domains and tasks – FMs pre-trained on general text corpora often need to be fine-tuned on task-specific data to work well for downstream applications. Fine-tuning adapts the model to new domains or tasks it wasn’t originally trained on.
- Overcoming bias – FMs may exhibit biases from their original training data. Customizing a model on new data can reduce unwanted biases in the model’s outputs.
- Improving computational efficiency – Pre-trained FMs are often very large and computationally expensive. Model customization can allow downsizing the model by pruning unimportant parameters, making deployment more feasible.
- Dealing with limited target data – In some cases, there is limited real-world data available for the target task. Model customization uses the pre-trained weights learned on larger datasets to overcome this data scarcity.
- Improving task performance – Fine-tuning almost always improves performance on target tasks compared to using the original pre-trained weights. This optimization of the model for its intended use allows you to deploy FMs successfully in real applications.
Model customization has higher complexity than prompt engineering and RAG because the model’s weights and parameters are being changed via tuning scripts, which requires data science and ML expertise. However, Amazon Bedrock makes it straightforward by providing a managed experience to customize models with fine-tuning or continued pre-training. Model customization provides highly accurate results with output quality comparable to RAG. Because you’re updating model weights on domain-specific data, the model produces more contextual responses. Compared to RAG, the quality might be marginally better depending on the use case, so it’s important to conduct a trade-off analysis between the two techniques. You can also potentially implement RAG with a customized model.
Retraining or training from scratch
Building your own foundation AI model rather than solely using pre-trained public models allows for greater control, improved performance, and customization to your organization’s specific use cases and data. Investing in creating a tailored FM can provide better adaptability, upgrades, and control over capabilities. Distributed training enables the scalability needed to train very large FMs on massive datasets across many machines. This parallelization makes models with hundreds of billions of parameters trained on trillions of tokens feasible. Larger models have greater capacity to learn and generalize.
Training from scratch can produce high-quality results: because the model trains on use case-specific data from the start, hallucinations are rare and output accuracy can be among the highest. However, if your dataset is constantly evolving, you can still run into hallucination issues. Training from scratch also has the highest implementation complexity and cost. It requires the most effort because it involves collecting a vast amount of data, curating and processing it, and training a fairly large FM, which requires deep data science and ML expertise. This approach is time-consuming (it can typically take weeks to months).
You should consider training an FM from scratch when none of the other approaches work for you, and you have the ability to build an FM with a large amount of well-curated tokenized data, a sophisticated budget, and a team of highly skilled ML experts. AWS provides the most advanced cloud infrastructure to train and run LLMs and other FMs powered by GPUs and the purpose-built ML training chip, AWS Trainium, and ML inference accelerator, AWS Inferentia. For more details about training LLMs on SageMaker, refer to Training large language models on Amazon SageMaker: Best practices and SageMaker HyperPod.
Selecting the right approach for developing generative AI applications
When developing generative AI applications, organizations must carefully consider several key factors before selecting the most suitable model to meet their needs. A variety of aspects should be considered, such as cost (to ensure the selected model aligns with budget constraints), quality (to deliver coherent and factually accurate output), seamless integration with current enterprise platforms and workflows, and reducing hallucinations or generating false information. With many options available, taking the time to thoroughly evaluate these aspects will help organizations choose the generative AI model that best serves their specific requirements and priorities. You should examine the following factors closely:
- Integration with enterprise systems – For FMs to be truly useful in an enterprise context, they need to integrate and interoperate with existing business systems and workflows. This could involve accessing data from databases, enterprise resource planning (ERP), and customer relationship management (CRM), as well as triggering actions and workflows. Without proper integration, the FM risks being an isolated tool. Enterprise systems like ERP contain key business data (customers, products, orders). The FM needs to be connected to these systems to use enterprise data rather than work off its own knowledge graph, which may be inaccurate or outdated. This ensures accuracy and a single source of truth.
- Hallucinations – Hallucinations are when an AI application generates false information that appears factual. These need to be carefully addressed before FMs are widely adopted. For example, a medical chatbot designed to provide diagnosis suggestions could hallucinate details about a patient’s symptoms or medical history, leading it to propose an inaccurate diagnosis. Preventing harmful hallucinations like these through technical solutions and dataset curation will be critical to making sure these FMs can be trusted for sensitive applications like healthcare, finance, and legal. Thorough testing and transparency about an FM’s training data and remaining flaws will need to accompany deployments.
- Skills and resources – The successful adoption of FMs will depend heavily on having the proper skills and resources to use the technology effectively. Organizations need employees with strong technical skills to properly implement, customize, and maintain FMs to suit their specific needs. They also require ample computational resources like advanced hardware and cloud computing capabilities to run complex FMs. For example, a marketing team wanting to use an FM to generate advertising copy and social media posts needs skilled engineers to integrate the system, creatives to provide prompts and assess output quality, and sufficient cloud computing power to deploy the model cost-effectively. Investing in developing expertise and technical infrastructure will enable organizations to gain real business value from applying FMs.
- Output quality – The quality of the output produced by FMs will be critical in determining their adoption and use, particularly in consumer-facing applications like chatbots. If chatbots powered by FMs provide responses that are inaccurate, nonsensical, or inappropriate, users will quickly become frustrated and stop engaging with them. Therefore, companies looking to deploy chatbots need to rigorously test the FMs that drive them to ensure they consistently generate high-quality responses that are helpful, relevant, and appropriate to provide a good user experience. Output quality encompasses factors like relevance, accuracy, coherence, and appropriateness, which all contribute to overall user satisfaction and will make or break the adoption of FMs like those used for chatbots.
- Cost – The high computational power required to train and run large AI models like FMs can incur substantial costs. Many organizations may lack the financial resources or cloud infrastructure necessary to use such massive models. Additionally, integrating and customizing FMs for specific use cases adds engineering costs. The considerable expenses required to use FMs could deter widespread adoption, especially among smaller companies and startups with limited budgets. Evaluating potential return on investment and weighing the costs vs. benefits of FMs is critical for organizations considering their application and utility. Cost-efficiency will likely be a deciding factor in determining if and how these powerful but resource-intensive models can be feasibly deployed.
Design decision
As we covered in this post, many different AI techniques are currently available, such as prompt engineering, RAG, and model customization. This wide range of choices makes it challenging for companies to determine the optimal approach for their particular use case. Selecting the right set of techniques depends on various factors, including access to external data sources, real-time data feeds, and the domain specificity of the intended application. To aid in identifying the most suitable technique based on the use case and considerations involved, we walk through the following flow chart, which outlines recommendations for matching specific needs and constraints with appropriate methods.
To gain a clear understanding, let’s go through the design decision flow chart using a few illustrative examples:
- Enterprise search – An employee is looking to request leave from their organization. To provide a response aligned with the organization’s HR policies, the FM needs more context beyond its own knowledge and capabilities. Specifically, the FM requires access to external data sources that provide relevant HR guidelines and policies. Given this scenario of an employee request that requires referring to external domain-specific data, the recommended approach according to the flow chart is prompt engineering with RAG. RAG will help in providing the relevant data from the external data sources as context to the FM.
- Enterprise search with organization-specific output – Suppose you have engineering drawings and you want to extract the bill of materials from them, formatting the output according to industry standards. To do this, you can use a technique that combines prompt engineering with RAG and a fine-tuned language model. The fine-tuned model would be trained to produce bills of materials when given engineering drawings as input. RAG helps find the most relevant engineering drawings from the organization’s data sources to feed in the context for the FM. Overall, this approach extracts bills of materials from engineering drawings and structures the output appropriately for the engineering domain.
- General search – Imagine you want to find the identity of the 30th President of the United States. You could use prompt engineering to get the answer from an FM. Because these models are trained on many data sources, they can often provide accurate responses to factual questions like this.
- General search with recent events – If you want to determine the current stock price for Amazon, you can use the approach of prompt engineering with an agent. The agent will provide the FM with the most recent stock price so it can generate the factual response.
Conclusion
Generative AI offers tremendous potential for organizations to drive innovation and boost productivity across a variety of applications. However, successfully adopting these emerging AI technologies requires addressing key considerations around integration, output quality, skills, costs, and potential risks like harmful hallucinations or security vulnerabilities. Organizations need to take a systematic approach to evaluating their use case requirements and constraints to determine the most appropriate techniques for adapting and applying FMs. As highlighted in this post, prompt engineering, RAG, and efficient model customization methods each have their own strengths and weaknesses that suit different scenarios. By mapping business needs to AI capabilities using a structured framework, organizations can overcome hurdles to implementation and start realizing benefits from FMs while also building guardrails to manage risks. With thoughtful planning grounded in real-world examples, businesses in every industry stand to unlock immense value from this new wave of generative AI. Learn about generative AI on AWS.
About the Authors
Jay Rao is a Principal Solutions Architect at AWS. He focuses on AI/ML technologies with a keen interest in Generative AI and Computer Vision. At AWS, he enjoys providing technical and strategic guidance to customers and helping them design and implement solutions that drive business outcomes. He is a book author (Computer Vision on AWS), regularly publishes blogs and code samples, and has delivered talks at tech conferences such as AWS re:Invent.
Babu Kariyaden Parambath is a Senior AI/ML Specialist at AWS. At AWS, he enjoys working with customers in helping them identify the right business use case with business value and solve it using AWS AI/ML solutions and services. Prior to joining AWS, Babu was an AI evangelist with 20 years of diverse industry experience delivering AI driven business value for customers.
Gemma is now available in Amazon SageMaker JumpStart
Today, we’re excited to announce that the Gemma model is now available for customers using Amazon SageMaker JumpStart. Gemma is a family of language models based on Google’s Gemini models, trained on up to 6 trillion tokens of text. The Gemma family consists of two sizes: a 7 billion parameter model and a 2 billion parameter model. Now, you can use Gemma 2B and Gemma 7B pretrained and instruction-tuned models within SageMaker JumpStart. JumpStart is the machine learning (ML) hub of SageMaker that provides access to foundation models in addition to built-in algorithms and end-to-end solution templates to help you quickly get started with ML.
In this post, we walk through how to deploy the Gemma model and fine-tune it for your use cases in SageMaker JumpStart. The complete notebook is available on GitHub.
Gemma model
Gemma is a family of lightweight, state-of-the-art models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini. Gemma exhibits strong generalist capabilities in text domains and state-of-the-art understanding and reasoning skills at scale. It achieves better performance compared to other publicly available models of similar or larger scales across different domains, including question answering, commonsense reasoning, mathematics and science, and coding. Google released the Gemma model weights to support developer innovation using Gemma models. Gemma was launched with a new Responsible Generative AI Toolkit that provides guidance and essential tools for creating safer AI applications with Gemma.
Foundation models in SageMaker
JumpStart provides access to a range of models from popular model hubs including Hugging Face, PyTorch Hub, and TensorFlow Hub, which you can use within your ML development workflow in SageMaker. Recent advances in ML have given rise to a new class of models known as foundation models, which typically contain billions of parameters and are adaptable to a wide category of use cases, such as text summarization, generating digital art, and language translation. Because these models are expensive to train, customers want to use existing pre-trained foundation models and fine-tune them as needed, rather than train these models themselves. SageMaker provides a curated list of models that you can choose from on the SageMaker console.
You can now find foundation models from different model providers within JumpStart, enabling you to get started with foundation models quickly. You can find foundation models based on different tasks or model providers, and review model characteristics and usage terms. You can also try these models using a test UI widget. When you want to use a foundation model at scale, you can do so without leaving SageMaker by using pre-built notebooks from model providers. Because the models are hosted and deployed on AWS, your data, whether used for evaluating the model or using it at scale, is never shared with third parties.
Let’s explore how you can use the Gemma model in JumpStart.
Explore the Gemma model in SageMaker JumpStart
You can access Gemma foundation models through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we go over how to discover the models in SageMaker Studio.
SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, see Amazon SageMaker Studio.
In the AWS Management Console for SageMaker Studio, go to SageMaker JumpStart under Prebuilt and automated solutions. JumpStart contains pre-trained models, notebooks, and prebuilt solutions.
On the SageMaker JumpStart landing page, you can find the Gemma model by searching for Gemma.
You can then select from a variety of Gemma model variants, including Gemma 2B, Gemma 7B, Gemma 2B instruct, and Gemma 7B instruct.
Choose the model card to view details about the model such as the license, data used to train, and how to use the model. You will also find a Deploy button, which takes you to a landing page where you can test inference with an example payload.
Deploy Gemma with SageMaker Python SDK
You can find the code showing the deployment of Gemma on JumpStart and an example of how to use the deployed model in this GitHub notebook.
Start by selecting the SageMaker Model Hub model ID and model version to use when deploying Gemma.
Choose a model ID from the following table, which details the default configuration options for the JumpStart deployment. Because of its large vocabulary of 256,000 tokens, Gemma 7B can only fit on a single A10G GPU when supporting a 1,000-token context length. For this reason, JumpStart uses a larger default instance for Gemma 7B.
| Model ID | Default inference instance | Tensor parallel degree | Supported context length |
|---|---|---|---|
| huggingface-llm-gemma-2b | ml.g5.xlarge | 1 | 8k |
| huggingface-llm-gemma-2b-instruct | ml.g5.xlarge | 1 | 8k |
| huggingface-llm-gemma-7b | ml.g5.12xlarge | 4 | 8k |
| huggingface-llm-gemma-7b-instruct | ml.g5.12xlarge | 4 | 8k |
You can now deploy the model using SageMaker JumpStart. The following code uses the default instance `ml.g5.12xlarge` for the inference endpoint. You can deploy the model on other instance types by passing `instance_type` in the `JumpStartModel` class. The deployment might take 5-10 minutes. For successful deployment, you must manually change the `accept_eula` argument in the model’s deploy method to `True`. This model is deployed using the text-generation-inference (TGI) deep learning container.
Invoke endpoint
You can programmatically retrieve example payloads from the `JumpStartModel` object. This will help you get started by observing pre-formatted instruction prompts that Gemma can ingest.
Before we look at specific prompts, let’s consider the chat template for Gemma Instruct models.
Here, you place your prompt in the `[USER_PROMPT]` location. There’s no support for a system instruction; instead, you can prepend the desired instruction to the user prompt. Additionally, in a multi-turn conversation, the model prompt can alternate between user and assistant roles as needed.
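As a sketch, the following helper (written for this post, not part of the SageMaker SDK) assembles a prompt using Gemma’s published turn markers, `<start_of_turn>` and `<end_of_turn>`; check the Gemma model card for the authoritative template:

```python
def format_gemma_chat(turns):
    """Format alternating (role, text) turns using Gemma's instruct chat markers."""
    parts = []
    for role, text in turns:  # role is "user" or "model"
        parts.append(f"<start_of_turn>{role}\n{text}<end_of_turn>\n")
    # End with an open model turn to cue the model to respond.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_chat([("user", "Write a hello world program in Python.")])
print(prompt)
```

For multi-turn conversations, you would pass the full alternating history of user and model turns to the helper.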
Now consider a few instruction example prompts. Here, you ask Gemma to write a Hello World program.
The following is the expected output:
Next, invoke Gemma for the creative task of writing a poem.
The following is the output:
This looks pretty good!
Now, let’s look at latency and throughput performance benchmarking for model serving with the default JumpStart deployment configuration. Here, we show how model performance might differ for your typical endpoint workload. In the following tables, you can observe that small-sized queries (256 input words and 256 output tokens) are quite performant under a large number of concurrent users, reaching token throughput on the order of one thousand to two thousand tokens per second. However, as the number of input words approaches Gemma’s maximum supported context length of eight thousand tokens, the endpoint saturates its batching capacity—the number of concurrent requests allowed to be processed simultaneously—due to instance memory-bound constraints.
For more information on how to consider this information and adjust deployment configurations for your specific use case, see Benchmark and optimize endpoint deployment in Amazon SageMaker JumpStart.
Throughput (tokens/s) by number of concurrent users:

| Model | Instance type | Input words | Output tokens | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| gemma-2b-instruct | ml.g5.xlarge | 256 | 256 | 73 | 137 | 262 | 486 | 829 | 1330 | 1849 | 1834 |
| gemma-2b-instruct | ml.g5.xlarge | 2048 | 256 | 69 | 126 | 227 | 373 | 537 | 704 | 764 | — |
| gemma-2b-instruct | ml.g5.xlarge | 7936 | 256 | 60 | 100 | 147 | 195 | 226 | 230 | — | — |
| gemma-7b-instruct | ml.g5.12xlarge | 256 | 256 | 62 | 119 | 227 | 413 | 601 | 811 | 937 | 962 |
| gemma-7b-instruct | ml.g5.12xlarge | 2048 | 256 | 56 | 100 | 172 | 245 | 267 | 273 | — | — |
| gemma-7b-instruct | ml.g5.12xlarge | 7936 | 256 | 44 | 67 | 77 | 77 | 78 | — | — | — |
P50 latency (ms/token) by number of concurrent users:

| Model | Instance type | Input words | Output tokens | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| gemma-2b-instruct | ml.g5.xlarge | 256 | 256 | 13 | 14 | 15 | 16 | 19 | 23 | 33 | 49 |
| gemma-2b-instruct | ml.g5.xlarge | 2048 | 256 | 14 | 15 | 17 | 20 | 28 | 43 | 79 | — |
| gemma-2b-instruct | ml.g5.xlarge | 7936 | 256 | 16 | 19 | 26 | 39 | 68 | 136 | — | — |
| gemma-7b-instruct | ml.g5.12xlarge | 256 | 256 | 16 | 16 | 17 | 19 | 26 | 38 | 57 | 110 |
| gemma-7b-instruct | ml.g5.12xlarge | 2048 | 256 | 17 | 19 | 23 | 32 | 52 | 119 | — | — |
| gemma-7b-instruct | ml.g5.12xlarge | 7936 | 256 | 22 | 29 | 45 | 105 | 197 | — | — | — |
Fine-tune Gemma using SageMaker Python SDK
Next, we show you how to fine-tune the Gemma 7B instruct model on a dataset in conversational format using the QLoRA technique. As mentioned previously, due to the large vocabulary size of 256 thousand tokens and the 8 thousand token context length, JumpStart offers the following default configurations for QLoRA fine-tuning.
| Model ID | Default training instance | Maximum input sequence length | Per-device training batch size | Gradient accumulation steps |
| --- | --- | --- | --- | --- |
| huggingface-llm-gemma-2b | ml.g5.2xlarge | 1024 | 1 | 4 |
| huggingface-llm-gemma-2b-instruct | ml.g5.2xlarge | 1024 | 1 | 4 |
| huggingface-llm-gemma-7b | ml.g5.12xlarge | 2048 | 1 | 4 |
| huggingface-llm-gemma-7b-instruct | ml.g5.12xlarge | 2048 | 1 | 4 |
Let’s load and process the dataset in conversational format. The example dataset for this demonstration is OpenAssistant’s TOP-1 Conversation Threads.
The training data should be formulated in JSON Lines (`.jsonl`) format, where each line is a dictionary representing a set of conversations. One example from the JSON Lines file is shown below. For details on how to process the dataset, see the notebook in GitHub.
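As a sketch of the format, one record can be built and serialized like this. The key names (`dialog`, `role`, `content`) are assumptions standing in for the schema; refer to the notebook in GitHub for the exact format JumpStart expects.

```python
import json

# Hypothetical JSON Lines record in a conversational format; the exact
# schema (key names like "dialog" and "content") is an assumption --
# see the notebook in GitHub for the format JumpStart actually expects.
record = {
    "dialog": [
        {"role": "user", "content": "What is Amazon SageMaker?"},
        {"role": "assistant", "content": "A fully managed machine learning service."},
    ]
}

# Each conversation becomes one line of the .jsonl training file.
line = json.dumps(record)
```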
Under the hood, the JumpStart training scripts use the Hugging Face SFTTrainer with QLoRA and FlashAttention. FlashAttention enables scaling efficiency, leading to faster training and inference. Besides chat-based fine-tuning, JumpStart also supports instruction and domain adaptation fine-tuning with QLoRA. For details, see the notebook in GitHub.
After the fine-tuning, you can deploy the fine-tuned model to a SageMaker endpoint.
As an alternative approach, you can deploy the model fine-tuned on conversation data to a SageMaker endpoint with the Hugging Face Messages API feature. The benefit of the Messages API is that it allows users to pass query examples in raw format, as shown in the following table; the `chat_template` within the endpoint converts the query examples into the proper format before sending them to the model. For details, see the deployment section after fine-tuning in the notebook.
After deploying the fine-tuned model to an endpoint, you can compare output responses from the pre-trained and fine-tuned Gemma models. The examples are collected from test data, which is a hold-out set from the training data. The inference parameters are `max_tokens` of 512, `top_p` of 0.9, `temperature` of 0.4, and `top_k` of 50.
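A sketch of an invocation payload combining the Messages API format with these inference parameters follows. The payload schema is assumed from the Hugging Face Messages API convention, and the endpoint name in the comment is a placeholder.

```python
# Hypothetical request payload using the Messages API format with the
# inference parameters listed above.
payload = {
    "messages": [
        {"role": "user", "content": "Recommend a science-fiction thriller."},
    ],
    "max_tokens": 512,
    "top_p": 0.9,
    "temperature": 0.4,
    "top_k": 50,
}

# With boto3 this would be sent roughly as follows (not run here):
# import boto3, json
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="gemma-7b-instruct-finetuned",  # placeholder name
#     ContentType="application/json",
#     Body=json.dumps(payload),
# )
```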
As the fine-tuning results show, the agent answers are more accurate.
To further evaluate your fine-tuned models, you can explore foundation model evaluation with Amazon SageMaker Clarify to see how your fine-tuned model performs in comparison to the base model.
Inputs | Ground truth | Response from base model | Response from fine-tuned model | |
1 |
User: I want to work on my vocabulary, can you give me some simple ways that I can find new words to use to increase the diversity of my sentences? Assistant: Certainly! Improving your vocabulary is an ongoing process, but here are some simple ways to find new words and increase the diversity of your sentences: 1. Read widely: Reading books, articles, and blogs from different genres and subjects exposes you to new words and phrases. Note down any unfamiliar words you encounter and look up their meanings. 2. Use a thesaurus: Whenever you write or read, use a thesaurus to find synonyms for commonly used words. This not only helps you learn new words but also enhances your understanding of their subtle differences in meaning. 3. Word-a-day calendars or apps: Subscribe to a word-a-day email or download a vocabulary-building app. These tools introduce you to a new word each day, along with its meaning, pronunciation, and usage examples. 4. Join a book club or discussion group: Engaging in conversations and discussions about books, articles, or any other topic exposes you to different perspectives and new vocabulary. Listen attentively to others and try using new words in context. 5. Play word games: Games like Scrabble, Boggle, and crossword puzzles challenge your vocabulary skills and help you learn new words in an enjoyable way. 6. Watch movies, TV shows, or documentaries: Visual media often use diverse language, slang, and jargon. Watching movies, TV shows, or documentaries with subtitles can help you better understand and remember new words. 7. Practice writing: Regularly writing stories, essays, or journal entries encourages you to use new words and phrases. Experiment with different writing styles and genres to expand your vocabulary. 8. Learn a new word’s etymology: Understanding the origin and history of a word can make it easier to remember and use correctly. Researching the etymology also exposes you to related words and concepts. 9. 
Use flashcards or spaced repetition software: Create flashcards with new words, their meanings, and example sentences. Review them regularly or use spaced repetition software like Anki to optimize your learning. 10. Set personal goals: Set achievable targets, such as learning five new words per week. Keep track of your progress and reward yourself for meeting your goals. By consistently incorporating these strategies into your daily routine, you’ll gradually build a diverse and robust vocabulary. User: Hm, i really like the idea of reading. Can you recomend me a cool science-fiction thriller which i could read? |
Assistant: Certainly! Here are a few recommendations for science-fiction thrillers that you might enjoy: 1. “The Three-Body Problem” by Liu Cixin – This award-winning novel explores the first contact between humanity and an alien civilization, and the political and personal ramifications that result. It’s a gripping and thought-provoking read that delves into deep questions about the nature of existence and the universe. 2. “Annihilation” by Jeff Vandermeer – This novel follows a group of scientists as they venture into a mysterious area known as “Area X,” where they encounter strange and unsettling phenomena. As they delve deeper into the area, they begin to uncover dark secrets about the nature of reality and the world around them. 3. “Dark Matter” by Blake Crouch – This novel follows a brilliant physicist named Jason Dessen, who is abducted and transported to a parallel universe. As he tries to find his way back to his own reality, he must confront his own choices and the consequences of his actions. 4. “The Andromeda Strain” by Michael Crichton – This classic novel follows a team of scientists as they try to contain and study a deadly extraterrestrial microbe that has landed on Earth. As they race against time to find a way to neutralize the microbe, they must also contend with political intrigue and personal conflicts. 5. “Ready Player One” by Ernest Cline – This novel is set in a dystopian future where the majority of the population spends their time in a virtual reality world known as the OASIS. When the creator of the OASIS dies and leaves behind a challenge to find a hidden Easter egg, a young protagonist sets out on a thrilling quest that will change his life forever. I hope one of these recommendations sparks your interest! |
Assistant: The text does not contain information about your reading preferences or interests, therefore I cannot provide a book recommendation. |
Assistant: Sure, here are a few science-fiction thriller recommendations that you might enjoy: – “Dune” by Frank Herbert: This epic sci-fi novel is set on the desert planet of Arrakis, where a young prince named Paul Atreides must face the challenges of a hostile environment and a treacherous political landscape. – “The Martian” by Andy Weir: This gripping sci-fi thriller follows the story of Mark Watney, an astronaut stranded on Mars after a dust storm, as he uses his ingenuity and resourcefulness to survive and eventually make his way back to Earth. – “The Time Machine” by H.G. Wells: This classic sci-fi novel takes a philosophical approach to the concept of time travel, exploring the consequences of altering the past and the future. – “The Blade Runner” by Philip K. Dick: This neo-noir sci-fi novel, set in a futuristic Los Angeles, explores the intersection of human and artificial intelligence, and its influence on society and the nature of reality. – “The Androids” by Isaac Asimov: This sci-fi novel explores the concept of artificial intelligence and its potential consequences, particularly when humans develop advanced androids that challenge traditional human values and beliefs. These are just a few suggestions, and ultimately the best book for you will depend on your personal preferences and what kind of experience you’re looking for in a sci-fi thriller. But I hope these recommendations give you a good starting point for exploring the genre. Please let me know if you have any further questions or if there’s anything I can help you with. |
2 |
User: Write a Hello World program in VB.NET Assistant: Here you go: “`vbnet User: Make it take input from the $ and then say “Hello, $” instead of “Hello, World!” |
Assistant: Public Module HelloWorld Public Sub Main() System.Console.WriteLine(“What is your name?”) Dim name As String = System.Console.ReadLine() System.Console.WriteLine(“Hello, ” & name & “!”) End Sub End Module |
Assistant: “`vbnet Public Module HelloWorld Public Sub Main() Dim name As String = Console.ReadLine() Console.WriteLine(“Hello, ” & name) End Sub End Module |
Assistant: Sure! Here is an example: “`vbnet |
3 |
User: Give me a list of to do items when starting a project to train a large language model that can understanding and respond to human questions Assistant: The following steps are a condensed list of steps to take when starting a project to train a large language model capable of comprehending and responding to human inquiries: 1. Establish project scope and objectives, target audience. This should give you a basic idea of what the required steps are. Should I go into more detail on one of the points? User: These tasks need to be broken down further, could you please provide an updated to-do list with a more detailed breakdown of the tasks ahead |
Assistant: Here is a more detailed breakdown of each step: 1. Establish project scope and objectives: It is important to clearly define the goals of the project and what the target audience is. This will help you make decisions about what type of data to gather and what model architecture is appropriate. 2. Gather and preprocess data: This is an important step as the quality and diversity of the data will directly impact the performance of the model. Preprocessing may involve cleaning the data, tokenizing text, and converting words to numerical representations. 3. Select model architecture: There are many different types of models to choose from, including recurrent neural networks (RNNs), transformer models, and convolutional neural networks (CNNs). The choice will depend on the specific requirements of the project and the amount of data available. 4. Divide dataset and implement data augmentation: Splitting the data into training, validation, and testing subsets is important for monitoring the performance of the model during training. Data augmentation can be used to artificially increase the size of the dataset and improve the robustness of the model. 5. Train model and monitor progress: This step involves training the model on the training data, using the validation set to monitor progress and adjust hyperparameters as needed. It may be necessary to repeat this process several times until the model is performing well on the validation set. 6. Evaluate model on test set: Once the model is trained, it’s important to evaluate its performance on a separate test set to get an idea of its generalization ability. This can be done by comparing the model’s predictions to the actual outcomes and analyzing the errors. 7. Fine-tune for specific tasks or domains: This step involves further training the model on task-specific data to improve its performance for a particular domain. 
This can be done through transfer learning, where a pre-trained model is fine-tuned on new data, or through fine-tuning, where the model is trained from scratch on the new data. 8. Create an interface for user input: This involves designing an interface that allows users to input questions or requests, and receive responses from the model. 9. Test in controlled environment: Before deploying the model in a real-world setting, it’s important to test it in a controlled environment and gather feedback from users. This will help identify any errors or biases in the model and provide opportunities for improvement. 10. Continuously monitor and improve performance: Finally, it is important to continuously monitor the performance of the model and make updates and improvements as needed. This may involve retraining the model on new data, fine-tuning hyperparameters, or updating the architecture. |
Assistant: ## To-Do List for Training a Large Language Model for Question Answering **1. Project Scope & Objectives:** * Define project goals and desired functionalities. **2. Data Acquisition & Preprocessing:** * Identify sources for training data and gather a diverse text dataset. **3. Model Selection & Configuration:** * Select an appropriate model architecture based on the task complexity and desired performance. **4. Training & Evaluation:** * Divide the training data into training, validation, and testing subsets. **5. Fine-Tuning & Interface Development:** * Fine-tune the model for specific tasks or domains through transfer learning. **6. Testing & Improvement:** * Test the model in a controlled environment with various input scenarios. **Additional Considerations:** * Set a timeline and budget for the project. **Note:** This list is not exhaustive and may need to be adapted based on the specific project requirements. |
Assistant: Sure, here is a more detailed breakdown of the tasks involved in training a large language model to understand and respond to human questions: **1. Establish Project Scope and Objectives** **2. Gather and Preprocess a Comprehensive/Diverse Text Dataset** **3. Select an Appropriate Model Architecture, Adjust Hyperparameters for Accuracy/Speed** **4. Divide Dataset into Training, Validation, Testing Subsets, Implement Data Augmentation** **5. Train Model, Monitor Progress on Validation Set, Adjust as Needed** **6. Evaluate Model on Test Set, Compare to Benchmarks, Analyze Errors and Biases** **7. Fine-Tune for Specific Tasks or Domains Through Transfer Learning or Fine-Tuning** **8. Create Interface for User Input and Model Response** **9. Test in Controlled Environment, Gather Feedback for Improvement** **10. Continuously Monitor and Improve Performance Through Retraining and Fine-Tuning** These are the steps involved in training a large language model to understand and respond to human questions. Please let me know if you have question. |
Clean up
After you have tested the endpoints, delete the SageMaker inference endpoints and the model to avoid incurring charges.
Conclusion
In this post, we showed you how to get started with Gemma in SageMaker Studio and deploy the model for inference. We also showed you how you can fine-tune Gemma models with SageMaker JumpStart.
Because foundation models are pre-trained, they can help lower training and infrastructure costs and enable customization for your use case. Visit SageMaker JumpStart in SageMaker Studio now to get started.
This guidance is for informational purposes only. You should still perform your own independent assessment, and take measures to ensure that you comply with your own specific quality control practices and standards, and the local rules, laws, regulations, licenses and terms of use that apply to you, your content, and the third-party model referenced in this guidance. AWS has no control or authority over the third-party model referenced in this guidance, and does not make any representations or warranties that the third-party model is secure, virus-free, operational, or compatible with your production environment and standards. AWS does not make any representations, warranties or guarantees that any information in this guidance will result in a particular outcome or result.
About the authors
Dr. Kyle Ulrich is an Applied Scientist with the Amazon SageMaker built-in algorithms team. His research interests include scalable machine learning algorithms, computer vision, time series, Bayesian non-parametrics, and Gaussian processes. His PhD is from Duke University and he has published papers in NeurIPS, Cell, and Neuron.
Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.
Rachna Chadha is a Principal Solutions Architect, AI/ML in Strategic Accounts at AWS. Rachna is an optimist who believes that the ethical and responsible use of AI can improve society in the future and bring economic and social prosperity. In her spare time, Rachna likes spending time with her family, hiking, and listening to music.
Evan Kravitz is a software engineer at Amazon Web Services, working on SageMaker JumpStart. He enjoys cooking and going on runs in New York City.
Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.
Moderate audio and text chats using AWS AI services and LLMs
Online gaming and social communities offer voice and text chat functionality for their users to communicate. Although voice and text chat often support friendly banter, they can also lead to problems such as hate speech, cyberbullying, harassment, and scams. Today, many companies rely solely on human moderators to review toxic content. However, verifying violations in chat is time-consuming, error-prone, and challenging to scale.
In this post, we introduce solutions that enable audio and text chat moderation using various AWS services, including Amazon Transcribe, Amazon Comprehend, Amazon Bedrock, and Amazon OpenSearch Service.
Social platforms seek an off-the-shelf moderation solution that is straightforward to initiate, but they also require customization for managing diverse policies. Latency and cost are also critical factors that must be taken into account. By orchestrating toxicity classification with large language models (LLMs) using generative AI, we offer a solution that balances simplicity, latency, cost, and flexibility to satisfy various requirements.
The sample code for this post is available in the GitHub repository.
Audio chat moderation workflow
An audio chat moderation workflow could be initiated by a user reporting other users on a gaming platform for policy violations such as profanity, hate speech, or harassment. This represents a passive approach to audio moderation. The system records all audio conversations without immediate analysis. When a report is received, the workflow retrieves the related audio files and initiates the analysis process. A human moderator then reviews the reported conversation, investigating its content to determine if it violates platform policy.
Alternatively, the workflow could be triggered proactively. For instance, in a social audio chat room, the system could record all conversations and apply analysis.
Both passive and proactive approaches can trigger the following pipeline for audio analysis.
The audio moderation workflow involves the following steps:
- The workflow begins with receiving the audio file and storing it in an Amazon Simple Storage Service (Amazon S3) bucket for Amazon Transcribe to access.
- The Amazon Transcribe `StartTranscriptionJob` API is invoked with Toxicity Detection enabled. Amazon Transcribe converts the audio into text, providing additional information about toxicity analysis. For more information about toxicity analysis, refer to Flag harmful language in spoken conversations with Amazon Transcribe Toxicity Detection.
- If the toxicity analysis returns a toxicity score exceeding a certain threshold (for example, 50%), we can use Knowledge Bases for Amazon Bedrock to evaluate the message against customized policies using LLMs.
- The human moderator receives a detailed audio moderation report highlighting the conversation segments considered toxic and in violation of policy, allowing them to make an informed decision.
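The gating step in this pipeline can be sketched as a simple filter: only segments whose toxicity score exceeds the threshold are escalated to the LLM policy evaluation. The result shape below is a simplified stand-in for the Amazon Transcribe Toxicity Detection output, not the exact API schema.

```python
# Only escalate transcript segments whose overall toxicity score exceeds
# the threshold (for example, 0.5); everything else skips LLM evaluation,
# which reduces latency and cost.
TOXICITY_THRESHOLD = 0.5

def segments_to_escalate(toxicity_results, threshold=TOXICITY_THRESHOLD):
    """toxicity_results: list of {'text': ..., 'toxicity': overall score}."""
    return [seg for seg in toxicity_results if seg["toxicity"] > threshold]

segments = [
    {"text": "gg, well played", "toxicity": 0.02},
    {"text": "<reported utterance>", "toxicity": 0.87},
]
flagged = segments_to_escalate(segments)  # only the second segment is kept
```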
The following screenshot shows a sample application displaying toxicity analysis for an audio segment. It includes the original transcription, the results from the Amazon Transcribe toxicity analysis, and the analysis conducted using an Amazon Bedrock knowledge base through the Amazon Bedrock Anthropic Claude V2 model.
The LLM analysis provides a violation result (Y or N) and explains the rationale behind the model’s decision regarding policy violation. Furthermore, the knowledge base includes the referenced policy documents used by the evaluation, providing moderators with additional context.
Amazon Transcribe Toxicity Detection
Amazon Transcribe is an automatic speech recognition (ASR) service that makes it straightforward for developers to add speech-to-text capability to their applications. The audio moderation workflow uses Amazon Transcribe Toxicity Detection, which is a machine learning (ML)-powered capability that uses audio and text-based cues to identify and classify voice-based toxic content across seven categories, including sexual harassment, hate speech, threats, abuse, profanity, insults, and graphic language. In addition to analyzing text, Toxicity Detection uses speech cues such as tones and pitch to identify toxic intent in speech.
The audio moderation workflow activates the LLM’s policy evaluation only when the toxicity analysis exceeds a set threshold. This approach reduces latency and optimizes costs by selectively applying LLMs, filtering out a significant portion of the traffic.
Use LLM prompt engineering to accommodate customized policies
The pre-trained Toxicity Detection models from Amazon Transcribe and Amazon Comprehend provide a broad toxicity taxonomy, commonly used by social platforms for moderating user-generated content in audio and text formats. Although these pre-trained models efficiently detect issues with low latency, you may need a solution to detect violations against your specific company or business domain policies, which the pre-trained models alone can’t achieve.
Additionally, detecting violations in contextual conversations, such as identifying child sexual grooming conversations, requires a customizable solution that involves considering the chat messages and context outside of it, such as user’s age, gender, and conversation history. This is where LLMs can offer the flexibility needed to extend these requirements.
Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies. These solutions use Anthropic Claude v2 from Amazon Bedrock to moderate audio transcriptions and text chat messages using a flexible prompt template, as outlined in the following code:
The template contains placeholders for the policy description, the chat message, and additional rules that require moderation. The Anthropic Claude V2 model delivers responses in the instructed format (Y or N), along with an analysis explaining why it thinks the message violates the policy. This approach allows you to define flexible moderation categories and articulate your policies in human language.
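A minimal sketch of such a template and the fill-in step follows. The placeholder names (`[POLICY]`, `[RULES]`, `[MESSAGE]`) and the template wording are assumptions standing in for the prompt template described above, not the exact prompt used by the sample app.

```python
# Hypothetical moderation prompt template; placeholders are filled in
# per message before the prompt is sent to Anthropic Claude on Bedrock.
TEMPLATE = """Human: You are a content moderator.
Policy: [POLICY]
Additional rules: [RULES]
Message: [MESSAGE]
Does the message violate the policy? Answer Y or N, then explain why.

Assistant:"""

def compose_prompt(policy, message, rules=""):
    return (TEMPLATE
            .replace("[POLICY]", policy)
            .replace("[RULES]", rules)
            .replace("[MESSAGE]", message))

prompt = compose_prompt("No harassment of other players.", "example message")
```

Because the policy text lives in the template rather than in model weights, business users can revise it in plain language without retraining anything.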
The traditional method of training an in-house classification model involves cumbersome processes such as data annotation, training, testing, and model deployment, requiring the expertise of data scientists and ML engineers. LLMs, in contrast, offer a high degree of flexibility. Business users can modify prompts in human language, leading to enhanced efficiency and reduced iteration cycles in ML model training.
Amazon Bedrock knowledge bases
Although prompt engineering is efficient for customizing policies, injecting lengthy policies and rules directly into LLM prompts for each message may introduce latency and increase cost. To address this, we use Amazon Bedrock knowledge bases as a managed Retrieval Augmented Generation (RAG) system. This enables you to manage the policy document flexibly, allowing the workflow to retrieve only the relevant policy segments for each input message. This minimizes the number of tokens sent to the LLMs for analysis.
You can use the AWS Management Console to upload the policy documents to an S3 bucket and then index the documents to a vector database for efficient retrieval. The following is a conceptual workflow managed by an Amazon Bedrock knowledge base that retrieves documents from Amazon S3, splits the text into chunks, and invokes the Amazon Bedrock Titan text embeddings model to convert the text chunks into vectors, which are then stored in the vector database.
In this solution, we use Amazon OpenSearch Service as the vector store. OpenSearch is a scalable, flexible, and extensible open source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. OpenSearch Service is a fully managed service that makes it straightforward to deploy, scale, and operate OpenSearch in the AWS Cloud.
After the document is indexed in OpenSearch Service, the audio and text moderation workflow sends chat messages, triggering the following query flow for customized policy evaluation.
The process is similar to the initiation workflow. First, the text message is converted to text embeddings using the Amazon Bedrock Titan Text Embedding API. These embeddings are then used to perform a vector search against the OpenSearch Service database, which has already been populated with document embeddings. The database returns policy chunks with the highest matching score, relevant to the input text message. We then compose prompts containing both the input chat message and the policy segment, which are sent to Anthropic Claude V2 for evaluation. The LLM model returns an analysis result based on the prompt instructions.
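The retrieval step above can be sketched with cosine similarity over indexed chunks. Real embeddings come from the Amazon Titan embeddings model and the index lives in OpenSearch Service; the tiny hand-written vectors and chunks below are stand-ins for illustration only.

```python
import math

# Minimal retrieval sketch: embed the message, rank indexed policy chunks
# by cosine similarity, and keep the best match for prompt composition.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# In the real workflow these vectors are Titan embeddings stored in
# OpenSearch Service; toy 3-dimensional vectors stand in here.
indexed_chunks = [
    ("No hate speech is allowed.", [0.9, 0.1, 0.0]),
    ("Refund policy: refunds within 30 days.", [0.1, 0.9, 0.0]),
]
message_embedding = [0.8, 0.2, 0.0]  # would come from the Titan embeddings API

best_chunk = max(indexed_chunks, key=lambda c: cosine(message_embedding, c[1]))[0]
# best_chunk and the chat message are then composed into the Claude prompt.
```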
For detailed instructions on how to create a new instance with your policy document in an Amazon Bedrock knowledge base, refer to Knowledge Bases now delivers fully managed RAG experience in Amazon Bedrock.
Text chat moderation workflow
The text chat moderation workflow follows a similar pattern to audio moderation, but it uses Amazon Comprehend toxicity analysis, which is tailored for text moderation. The sample app supports an interface for uploading bulk text files in CSV or TXT format and provides a single-message interface for quick testing. The following diagram illustrates the workflow.
The text moderation workflow involves the following steps:
- The user uploads a text file to an S3 bucket.
- Amazon Comprehend toxicity analysis is applied to the text message.
- If the toxicity analysis returns a toxicity score exceeding a certain threshold (for example, 50%), we use an Amazon Bedrock knowledge base to evaluate the message against customized policies using the Anthropic Claude V2 LLM.
- A policy evaluation report is sent to the human moderator.
Amazon Comprehend toxicity analysis
In the text moderation workflow, we use Amazon Comprehend toxicity analysis to assess the toxicity level of the text messages. Amazon Comprehend is a natural language processing (NLP) service that uses ML to uncover valuable insights and connections in text. The Amazon Comprehend toxicity detection API assigns an overall toxicity score to text content, ranging from 0–1, indicating the likelihood of it being toxic. It also categorizes text into the following categories and provides a confidence score for each: `hate_speech`, `graphic`, `harassment_or_abuse`, `sexual`, `violence_or_threat`, `insult`, and `profanity`.
In this text moderation workflow, Amazon Comprehend toxicity analysis plays a crucial role in identifying whether the incoming text message contains toxic content. Similar to the audio moderation workflow, it includes a condition to activate the downstream LLM policy evaluation only when the toxicity analysis returns a score exceeding a predefined threshold. This optimization helps reduce overall latency and cost associated with LLM analysis.
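The gating condition can be sketched on a Comprehend-style result as follows. The dictionary shape below is a simplified stand-in for the toxicity detection response, not the exact API schema.

```python
# Escalate to LLM policy evaluation only when the overall toxicity score
# exceeds the threshold; also surface the highest-scoring category for
# the moderator's report.
THRESHOLD = 0.5

def should_escalate(result, threshold=THRESHOLD):
    """result: {'Toxicity': overall score, 'Labels': [{'Name', 'Score'}, ...]}"""
    return result["Toxicity"] > threshold

def top_category(result):
    return max(result["Labels"], key=lambda label: label["Score"])["Name"]

result = {
    "Toxicity": 0.73,
    "Labels": [
        {"Name": "insult", "Score": 0.71},
        {"Name": "profanity", "Score": 0.35},
    ],
}
# should_escalate(result) is True; top_category(result) is "insult"
```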
Summary
In this post, we introduced solutions for audio and text chat moderation using AWS services, including Amazon Transcribe, Amazon Comprehend, Amazon Bedrock, and OpenSearch Service. These solutions use pre-trained models for toxicity analysis and are orchestrated with generative AI LLMs to achieve the optimal balance in accuracy, latency, and cost. They also empower you to flexibly define your own policies.
You can experience the sample app by following the instructions in the GitHub repo.
About the author
Lana Zhang is a Senior Solutions Architect at AWS WWSO AI Services team, specializing in AI and ML for Content Moderation, Computer Vision, Natural Language Processing and Generative AI. With her expertise, she is dedicated to promoting AWS AI/ML solutions and assisting customers in transforming their business solutions across diverse industries, including social media, gaming, e-commerce, media, advertising & marketing.
Set up cross-account Amazon S3 access for Amazon SageMaker notebooks in VPC-only mode using Amazon S3 Access Points
Advancements in artificial intelligence (AI) and machine learning (ML) are revolutionizing the financial industry for use cases such as fraud detection, creditworthiness assessment, and trading strategy optimization. To develop models for such use cases, data scientists need access to various datasets like credit decision engines, customer transactions, risk appetite, and stress testing. Managing appropriate access control for these datasets among the data scientists working on them is crucial to meet stringent compliance and regulatory requirements. Typically, these datasets are aggregated in a centralized Amazon Simple Storage Service (Amazon S3) location from various business applications and enterprise systems. Data scientists across business units working on model development using Amazon SageMaker are granted access to relevant data, which can lead to the requirement of managing prefix-level access controls. As use cases and datasets grow, the number of bucket policy statements needed to manage cross-account access per application becomes too large and complex for a single bucket policy to accommodate.
Amazon S3 Access Points simplify managing and securing data access at scale for applications using shared datasets on Amazon S3. You can create unique hostnames using access points to enforce distinct and secure permissions and network controls for any request made through the access point.
S3 Access Points simplify the management of access permissions specific to each application accessing a shared dataset. They enable secure, high-speed data copying between same-Region access points over AWS internal networks and VPCs. You can restrict an access point to a VPC, enabling you to firewall data within private networks, test new access control policies without impacting existing access points, and configure VPC endpoint policies to restrict access to S3 buckets owned by specific AWS accounts.
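As a sketch of what an access point policy might look like for this pattern, the following builds a policy document granting a role in Account A read access through an access point in Account B. The role name, access point name, and Region are hypothetical placeholders; only the account IDs match the example accounts in this post.

```python
import json

# Hypothetical access point policy: Account A's SageMaker execution role
# (placeholder name DemoSageMakerRole) gets read access through an access
# point (placeholder name test-ap) owned by Account B in us-east-1.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/DemoSageMakerRole"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap",
            "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap/object/*",
        ],
    }],
}
policy_json = json.dumps(policy)
# policy_json would be attached with the PutAccessPointPolicy API.
```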
This post walks through the steps involved in configuring S3 Access Points to enable cross-account access from a SageMaker notebook instance.
Solution overview
For our use case, we have two accounts in an organization: Account A (111111111111), which is used by data scientists to develop models using a SageMaker notebook instance, and Account B (222222222222), which has the required datasets in the S3 bucket test-bucket-1. The following diagram illustrates the solution architecture.
To implement the solution, complete the following high-level steps:
- Configure Account A, including VPC, subnet security group, VPC gateway endpoint, and SageMaker notebook.
- Configure Account B, including S3 bucket, access point, and bucket policy.
- Configure AWS Identity and Access Management (IAM) permissions and policies in Account A.
You should repeat these steps for each SageMaker account that needs access to the shared dataset from Account B.
The names for each resource mentioned in this post are examples; you can replace them with other names as per your use case.
Configure Account A
Complete the following steps to configure Account A:
- Create a VPC called DemoVPC.
- Create a subnet called DemoSubnet in the VPC DemoVPC.
- Create a security group called DemoSG.
- Create a VPC S3 gateway endpoint called DemoS3GatewayEndpoint.
- Create the SageMaker execution role.
- Create a notebook instance called DemoNotebookInstance, following the security guidelines outlined in How to configure security in Amazon SageMaker:
  - Specify the SageMaker execution role you created.
  - For the notebook network settings, specify the VPC, subnet, and security group you created.
  - Make sure that Direct Internet access is disabled.
You assign permissions to the role in subsequent steps after you create the required dependencies.
Configure Account B
To configure Account B, complete the following steps:
- In Account B, create an S3 bucket called test-bucket-1, following the Amazon S3 security guidance.
- Upload your file to the S3 bucket.
- Create an access point called test-ap-1 in Account B. Don’t change or edit any Block Public Access settings for this access point (all public access should be blocked).
- Attach the following policy to your access point:
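The post’s original policy code isn’t reproduced here. As an illustration only, an access point policy that grants the SageMaker execution role in Account A permission to list and get objects through the access point might look like the following (the role name, Region, and allowed actions are assumptions for demonstration):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSageMakerRoleFromAccountA",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111111111111:role/DemoSageMakerExecutionRole"
      },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1",
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1/object/*"
      ]
    }
  ]
}
```

Note that object-level actions in an access point policy are scoped with the /object/* suffix on the access point ARN, while bucket-level actions such as s3:ListBucket apply to the access point ARN itself.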
The actions defined in the preceding code are sample actions for demonstration purposes. You can define the actions as per your requirements or use case.
- Add the following bucket policy permissions to access the access point:
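The original bucket policy isn’t reproduced here. A common pattern, shown as a sketch below, is to delegate access control from the bucket to its access points: the bucket policy allows any request that arrives through an access point owned by the bucket’s account, and the fine-grained permissions live in the access point policy (account IDs are taken from this post; the broad Action is for demonstration only):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateAccessControlToAccessPoints",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::test-bucket-1",
        "arn:aws:s3:::test-bucket-1/*"
      ],
      "Condition": {
        "StringEquals": {"s3:DataAccessPointAccount": "222222222222"}
      }
    }
  ]
}
```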
The preceding actions are examples. You can define the actions as per your requirements.
Configure IAM permissions and policies
Complete the following steps in Account A:
- Confirm that the SageMaker execution role has the AmazonSageMakerFullAccess managed policy along with a custom IAM inline policy, which looks like the following code:
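The inline policy from the original post isn’t reproduced here. As a hedged sketch, an inline policy that lets the role reach the cross-account data through the access point might look like the following (Region and actions are assumptions; scope them to your use case):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountAccessPoint",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1",
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1/object/*"
      ]
    }
  ]
}
```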
The actions in the policy code are sample actions for demonstration purposes.
- Go to the DemoS3GatewayEndpoint endpoint you created and add the following permissions:
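The endpoint policy from the original post isn’t shown. As an illustrative sketch, a gateway endpoint policy permitting traffic to the cross-account bucket and its access point could look like the following (Region and actions are assumptions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessPointTraffic",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::test-bucket-1",
        "arn:aws:s3:::test-bucket-1/*",
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1",
        "arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1/object/*"
      ]
    }
  ]
}
```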
- To get a prefix list, run the AWS Command Line Interface (AWS CLI) describe-prefix-lists command:
- In Account A, go to the security group DemoSG for the target SageMaker notebook instance.
- Under Outbound rules, create an outbound rule with All traffic or All TCP, and then specify the destination as the prefix list ID you retrieved.
This completes the setup in both accounts.
Test the solution
To validate the solution, go to the SageMaker notebook instance terminal and enter the following commands to list the objects through the access point:
- To list the objects through the S3 access point test-ap-1:
- To get the objects through the S3 access point test-ap-1:
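The exact commands from the original post aren’t reproduced here. As an illustration, the access point can be addressed by its ARN in place of a bucket name; the Region and object key below are placeholders:

```shell
# List objects through the access point (placeholder Region; replace as needed)
aws s3api list-objects-v2 \
    --bucket arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1

# Download an object through the access point (object key is a placeholder)
aws s3api get-object \
    --bucket arn:aws:s3:us-east-1:222222222222:accesspoint/test-ap-1 \
    --key sample-data.csv sample-data.csv
```

If both commands succeed from the notebook terminal, the cross-account path through the gateway endpoint and access point is working.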
Clean up
When you’re done testing, delete any S3 access points and S3 buckets. Also, delete any SageMaker notebook instances to stop incurring charges.
Conclusion
In this post, we showed how S3 Access Points enable cross-account access to large, shared datasets from SageMaker notebook instances, avoiding the size constraints that bucket policies impose while managing access to shared datasets at scale.
To learn more, refer to Easily Manage Shared Data Sets with Amazon S3 Access Points.
About the authors
Kiran Khambete is a Senior Technical Account Manager at Amazon Web Services (AWS). As a TAM, Kiran serves as a technical expert and strategic guide, helping enterprise customers achieve their business goals.
Ankit Soni, with 14 years of experience, is a Principal Engineer at NatWest Group, where he has served as a Cloud Infrastructure Architect for the past six years.
Kesaraju Sai Sandeep is a Cloud Engineer specializing in Big Data Services at AWS.
How Amazon and Columbia University are collaborating to advance AI in healthcare
Amazon Health Services’ Sunita Mishra and Columbia University’s Katrina Armstrong discuss technology’s potential role in medical settings.
Run an audience overlap analysis in AWS Clean Rooms
Advertisers, publishers, and advertising technology providers are actively seeking efficient ways to collaborate with their partners to generate insights about their collective datasets. A common reason to engage in data collaboration is to run an audience overlap analysis, a routine step in media planning and when evaluating new partnerships.
In this post, we explore what an audience overlap analysis is, discuss the current technical approaches and their challenges, and illustrate how you can run secure audience overlap analysis using AWS Clean Rooms.
Audience overlap analysis
Audience overlap is the percentage of users in your audience who are also present in another dataset (calculated as the number of users present in both your audience and another dataset divided by the total number of users in your audience). In the digital media planning process, audience overlaps are often conducted to compare an advertiser’s first-party dataset with a media partner’s (publisher) dataset. The analysis helps determine how much of the advertiser’s audience can be reached by a given media partner. By evaluating the overlap, advertisers can determine whether a media partner provides unique reach or if the media partner’s audience predominantly overlaps with the advertiser’s existing audience.
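The calculation itself is simple to express in code. The following sketch (with made-up user identifiers) computes the overlap percentage over two sets of hashed IDs exactly as defined above:

```python
def audience_overlap(advertiser_ids, partner_ids):
    """Percentage of the advertiser's audience also present in the partner's dataset."""
    advertiser = set(advertiser_ids)
    partner = set(partner_ids)
    if not advertiser:
        return 0.0
    # users present in both datasets, divided by the advertiser's total audience
    return 100.0 * len(advertiser & partner) / len(advertiser)

# Hypothetical hashed user IDs
advertiser_audience = ["u1", "u2", "u3", "u4"]
publisher_audience = ["u3", "u4", "u5", "u6", "u7"]

print(audience_overlap(advertiser_audience, publisher_audience))  # 50.0
```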
Current approaches and challenges
Advertisers, publishers, third-party data providers, and other entities often share their data when running audience overlaps or match tests. Common methods for sharing data, such as using pixels and SFTP transfers, can carry risk because they involve moving sensitive customer information. Sharing this data to another party can be time consuming and increase the risk of potential data breaches or unauthorized access. If the receiving party mishandles the data, it could violate privacy regulations, resulting in legal risks. Also, any perceived misuse or exposure of customer data can erode consumer trust, leading to reputational damage and potential loss of business.
Solution overview
AWS Clean Rooms can help you and your partners effortlessly and securely collaborate on and analyze your collective datasets—without copying each other’s underlying data. With AWS Clean Rooms, you can create a data clean room in minutes and collaborate with your partners to generate unique insights. AWS Clean Rooms allows you to run an audience overlap analysis and generate valuable insights while avoiding risks associated with other current approaches.
The following are key concepts and prerequisites to use AWS Clean Rooms:
- Each party in the analysis (collaboration member) needs to have an AWS account.
- One member invites the other member to the AWS Clean Rooms collaboration. It doesn’t matter which member creates the invitation. The collaboration creator uses the invitee’s AWS account ID as input to send invitations.
- Only one member can query in the collaboration, and only one member can receive results from the collaboration. The abilities of each member are defined when the collaboration is created.
- Each collaboration member stores datasets in their respective Amazon Simple Storage Service (Amazon S3) bucket and catalogs them (creates a schema with column names and data types) in the AWS Glue Data Catalog. You can also create the Data Catalog definition using the Amazon Athena create database and create table statements.
- Collaborators need to have their S3 buckets and Data Catalog tables in the same AWS Region.
- Collaborators can use the AWS Clean Rooms console, APIs, or AWS SDKs to set up a collaboration.
- AWS Clean Rooms enables you to use any column as a join key, for example hashed MAIDs, emails, IP addresses, and RampIDs.
- Each collaboration member associates their own data to the collaboration.
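Because join keys such as hashed emails must match exactly across datasets, both parties typically agree on normalization rules before hashing. A minimal sketch (the lowercase-and-trim normalization here is an assumption; coordinate the exact rules with your partner):

```python
import hashlib

def hash_email(email):
    """Normalize an email address and SHA-256 hash it for use as a join key."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Differently formatted inputs produce the same join key after normalization
print(hash_email("User@Example.com") == hash_email(" user@example.com "))  # True
```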
Let’s look at a scenario in which an advertiser collaborates with a publisher to identify the audience overlap. In this example, the publisher creates the collaboration, invites the advertiser, and designates the advertiser as the member who can query and receive results.
Prerequisites
To invite another person to a collaboration, you need their AWS account ID. In our use case, the publisher needs the AWS account ID of the advertiser.
Create a collaboration
In our use case, the publisher creates a collaboration using the AWS Clean Rooms console and invites the advertiser.
To create a collaboration, complete the following steps:
- On the AWS Clean Rooms console, choose Collaborations in the navigation pane.
- Choose Create collaboration.
- For Name, enter a name for the collaboration.
- In the Members section, enter the AWS account ID of the account you want to invite (in this case, the advertiser).
- In the Member abilities section, choose the member who can query and receive results (in this case, the advertiser).
- For Query logging, decide if you want query logging turned on. The queries are logged to Amazon CloudWatch.
- For Cryptographic computing, decide if you want to turn on support for cryptographic computing (pre-encrypt your data before associating it). AWS Clean Rooms will then run queries on the encrypted data.
- Choose Next.
- On the Configure membership page, choose if you want to create the membership and collaboration now, or create the collaboration but activate your membership later.
- For Query results settings defaults, choose if you want to keep the default settings to receive results.
- For Log storage in Amazon CloudWatch Logs, specify your log settings.
- Specify any tags and who is paying for queries.
- Choose Next.
- Review the configuration and choose to either create the collaboration and membership now, or just the collaboration.
The publisher sends an invitation to the advertiser. The advertiser reviews the collaboration settings and creates a membership.
Create a configured table and set analysis rules
The publisher creates a configured table from the AWS Glue table (which represents the metadata definition of the S3 data, including location, so it can be read by AWS Clean Rooms when the query is run).
Complete the following steps:
- On the AWS Clean Rooms console, choose Configured tables in the navigation pane.
- Choose Configure new table.
- In the Choose AWS Glue table section, choose your database and table.
- In the Columns allowed in collaboration section, choose which of the existing table columns to allow for querying in the collaboration.
- In the Configured table details section, enter a name and optional description for the configured table.
- Choose Configure new table.
- Choose the analysis rule type that matches the type of queries you want to allow on the table. To allow an aggregation analysis, such as finding the size of the audience overlap, choose the aggregation analysis rule type.
- In the Aggregate functions section, choose COUNT DISTINCT as the aggregate function.
- In the Join controls section, choose whether your collaborator is required to join a table with yours. Because this is an audience overlap use case, select No, only overlap can be queried.
- Select the operators to allow for matching (for this example, select AND and OR).
- In the Dimension controls section, choose if you want to make any columns available as dimensions.
- In the Scalar functions section, choose if you want to limit the scalar functions allowed.
- Choose Next.
- In the Aggregation constraints section, choose the minimum aggregation constraint for the configured table.
This allows you to filter out rows that don’t meet a certain minimum threshold of users (for example, if the threshold is set to 10, rows that aggregate fewer than 10 users are filtered out).
Associate the table to the collaboration
AWS Clean Rooms requires access to read the table in order to run the query submitted by the advertiser. Complete the following steps to associate the table:
- On the AWS Clean Rooms console, navigate to your collaboration.
- Choose Associate table.
- For Configured table name, choose the name of your configured table.
- In the Table association details section, enter a name and optional description for the table.
- In the Service access section, you can use the default settings to have an AWS Identity and Access Management (IAM) service role for AWS Clean Rooms created automatically, or you can use an existing role. IAM permissions are required to create or modify the role and to pass the role to AWS Clean Rooms.
- Choose Associate table.
The advertiser also completes the steps detailed in the preceding sections to create a configured table and associate it to the collaboration.
Run queries in the query editor
The advertiser can now navigate to the Queries tab for the collaboration and review the tables to query and their analysis rules. You can specify the S3 bucket where the output of the overlap query will go.
The advertiser can now write and run an overlap query. You can use a hashed email as a join key for the query (you have the option to use any column as the join key and can also use multiple columns for multiple join keys). You can also use the Analysis Builder no-code option to have AWS Clean Rooms generate SQL on your behalf. For our use case, we run the following queries:
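The post’s actual queries aren’t reproduced here. To illustrate the shape of an overlap query under the aggregation analysis rule, the following sketch runs a COUNT DISTINCT join on a hashed-email key against an in-memory SQLite database. Table and column names are assumptions, and in a real collaboration the SQL runs inside AWS Clean Rooms, not SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical advertiser and publisher tables keyed on a hashed email
cur.execute("CREATE TABLE advertiser (hashed_email TEXT)")
cur.execute("CREATE TABLE publisher (hashed_email TEXT)")
cur.executemany("INSERT INTO advertiser VALUES (?)",
                [("h1",), ("h2",), ("h3",), ("h4",)])
cur.executemany("INSERT INTO publisher VALUES (?)",
                [("h3",), ("h4",), ("h5",)])

# Overlap query: distinct matched users across the join key, mirroring the
# COUNT DISTINCT aggregate function allowed by the analysis rule
cur.execute("""
    SELECT COUNT(DISTINCT a.hashed_email) AS overlap_size
    FROM advertiser a
    INNER JOIN publisher p ON a.hashed_email = p.hashed_email
""")
print(cur.fetchone()[0])  # 2
```

In AWS Clean Rooms, the configured minimum aggregation constraint would additionally suppress result rows that aggregate fewer users than the threshold.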
The query results are sent to the advertiser’s S3 bucket, as shown in the following screenshot.
Clean up
It’s a best practice to delete resources that are no longer being used. The advertiser and publisher should clean up their respective resources:
- Advertiser – The advertiser deletes their configured table associations and collaboration membership. However, they don’t have to delete their configured table because it’s reusable across collaborations.
- Publisher – The publisher deletes their configured table associations and the collaboration. They don’t have to delete their configured table because it’s reusable across collaborations.
Conclusion
In this post, we demonstrated how to set up an audience overlap collaboration using AWS Clean Rooms for media planning and partnership evaluation using a hashed email as a join key between datasets. Advertisers are increasingly turning to AWS Clean Rooms to conduct audience overlap analyses with their media partners, aiding their media investment decisions. Furthermore, audience overlaps help you accelerate your partnership evaluations by identifying the extent of overlap you share with potential partners.
To learn more about AWS Clean Rooms, watch the video Getting Started with AWS Clean Rooms, and refer to the following additional resources:
- AWS Clean Rooms Now Generally Available — Collaborate with Your Partners without Sharing Raw Data
- AWS on Air: AWS Clean Rooms is now available for General Availability
- Introducing AWS for Advertising & Marketing: Helping customers reinvent the industry with purpose-built services, solutions, and partners
- Introducing four new solutions that help customers integrate AWS Clean Rooms into their advertising workflows
- AWS Clean Rooms User Guide
About the Authors
Eric Saccullo is a Senior Business Development Manager for AWS Clean Rooms at Amazon Web Services. He is focused on helping customers collaborate with their partners in privacy-enhanced ways to gain insights and improve business outcomes.
Shamir Tanna is a Senior Technical Product Manager at Amazon Web Services.
Ryan Malecky is a Senior Solutions Architect at Amazon Web Services. He is focused on helping customers gain insights from their data, especially with AWS Clean Rooms.