Super charge your LLMs with RAG at scale using AWS Glue for Apache Spark


Large language models (LLMs) are very large deep-learning models that are pre-trained on vast amounts of data. LLMs are incredibly flexible. One model can perform completely different tasks such as answering questions, summarizing documents, translating languages, and completing sentences. LLMs have the potential to revolutionize content creation and the way people use search engines and virtual assistants.

Retrieval Augmented Generation (RAG) is the process of optimizing the output of an LLM, so it references an authoritative knowledge base outside of its training data sources before generating a response. While LLMs are trained on vast volumes of data and use billions of parameters to generate original output, RAG extends the already powerful capabilities of LLMs to specific domains or an organization’s internal knowledge base—without having to retrain the LLMs. RAG is a fast and cost-effective approach to improve LLM output so that it remains relevant, accurate, and useful in a specific context.

RAG introduces an information retrieval component that uses the user input to first pull information from a new data source. This new data from outside of the LLM’s original training data set is called external data. The data might exist in various formats such as files, database records, or long-form text. An AI technique called embedding language models converts this external data into numerical representations and stores it in a vector database. This process creates a knowledge library that generative AI models can understand.

RAG introduces additional data engineering requirements:

  • Scalable retrieval indexes must ingest massive text corpora covering requisite knowledge domains.
  • Data must be preprocessed to enable semantic search during inference. This includes normalization, vectorization, and index optimization.
  • These indexes continuously accumulate documents. Data pipelines must seamlessly integrate new data at scale.
  • Diverse data amplifies the need for customizable cleaning and transformation logic to handle the quirks of different sources.

In this post, we will explore building a reusable RAG data pipeline on LangChain—an open source framework for building applications based on LLMs—and integrating it with AWS Glue and Amazon OpenSearch Serverless. The end solution is a reference architecture for scalable RAG indexing and deployment. We provide sample notebooks covering ingestion, transformation, vectorization, and index management, enabling teams to consume disparate data into high-performing RAG applications.

Data preprocessing for RAG

Data pre-processing is crucial for responsible retrieval from your external data with RAG. Clean, high-quality data leads to more accurate results with RAG, while privacy and ethics considerations necessitate careful data filtering. This lays the foundation for LLMs with RAG to reach their full potential in downstream applications.

To facilitate effective retrieval from external data, a common practice is to first clean up and sanitize the documents. You can use Amazon Comprehend or the AWS Glue sensitive data detection capability to identify sensitive data and then use Spark to clean up and sanitize the data. The next step is to split the documents into manageable chunks. The chunks are then converted to embeddings and written to a vector index, while maintaining a mapping to the original document. This process is shown in the figure that follows. These embeddings are used to determine semantic similarity between queries and text from the data sources.

Solution overview

In this solution, we use LangChain integrated with AWS Glue for Apache Spark and Amazon OpenSearch Serverless. To make this solution scalable and customizable, we use Apache Spark’s distributed capabilities and PySpark’s flexible scripting capabilities. We use OpenSearch Serverless as a sample vector store and use the Llama 3.1 model.

The benefits of this solution are:

  • You can flexibly achieve data cleaning, sanitizing, and data quality management in addition to chunking and embedding.
  • You can build and manage an incremental data pipeline to update embeddings in the vector store at scale.
  • You can choose a wide variety of embedding models.
  • You can choose a wide variety of data sources including databases, data warehouses, and SaaS applications supported in AWS Glue.

This solution covers the following areas:

  • Processing unstructured data such as HTML, Markdown, and text files using Apache Spark. This includes distributed data cleaning, sanitizing, chunking, and embedding vectors for downstream consumption.
  • Bringing it all together into a Spark pipeline that incrementally processes sources and publishes vectors to an OpenSearch Serverless collection.
  • Querying the indexed content using the LLM model of your choice to provide natural language answers.

Prerequisites

To follow this tutorial, you need an AWS Identity and Access Management (IAM) role that your AWS Glue notebook can use to access Amazon S3, Amazon OpenSearch Serverless, and Amazon SageMaker; its Amazon Resource Name (ARN) is referenced later in the vector store setup.

Complete the following steps to launch an AWS Glue Studio notebook:

  1. Download the Jupyter Notebook file.
  2. On the AWS Glue console, choose Notebooks in the navigation pane.
  3. Under Create job, select Notebook.
  4. For Options, choose Upload Notebook.
  5. Choose Create notebook. The notebook will start up in a minute.
  6. Run the first two cells to configure an AWS Glue interactive session.


Now you have configured the required settings for your AWS Glue notebook.
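The exact contents depend on the notebook, but the configuration cells typically rely on AWS Glue interactive session magics similar to the following sketch. The worker settings and the list of additional Python modules are assumptions; adjust them to your environment.

%idle_timeout 2880
%glue_version 4.0
%worker_type G.1X
%number_of_workers 5
%additional_python_modules langchain,opensearch-py,requests-aws4auth,beautifulsoup4,markdownify

# The second cell starts the Spark session for the interactive session
from pyspark.context import SparkContext
from awsglue.context import GlueContext

sc = SparkContext.getOrCreate()
glueContext = GlueContext(sc)
spark = glueContext.spark_session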

Vector store setup

First, create a vector store. A vector store provides efficient vector similarity search by providing specialized indexes. RAG complements LLMs with an external knowledge base that’s typically built using a vector database hydrated with vector-encoded knowledge articles.

In this example, you will use Amazon OpenSearch Serverless for its simplicity and scalability, which supports vector search at low latency and at a scale of up to billions of vectors. Learn more in Amazon OpenSearch Service’s vector database capabilities explained.

Complete the following steps to set up OpenSearch Serverless:

  1. For the cell under Vectorstore Setup, replace <your-iam-role-arn> with your IAM role Amazon Resource Name (ARN), replace <region> with your AWS Region, and run the cell.
  2. Run the next cell to create the OpenSearch Serverless collection, security policies, and access policies.
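For reference, the provisioning cell relies on the boto3 opensearchserverless client. The following is a minimal sketch of the calls involved; the collection name and the policy documents are illustrative and will differ from the notebook’s exact code.

import json
import boto3

aoss = boto3.client("opensearchserverless")
collection_name = "rag-collection"  # illustrative name

# Encryption policy using an AWS owned key
aoss.create_security_policy(
    name=f"{collection_name}-enc",
    type="encryption",
    policy=json.dumps({
        "Rules": [{"ResourceType": "collection", "Resource": [f"collection/{collection_name}"]}],
        "AWSOwnedKey": True,
    }),
)

# Network policy; public access keeps the example simple, restrict it for production
aoss.create_security_policy(
    name=f"{collection_name}-net",
    type="network",
    policy=json.dumps([{
        "Rules": [{"ResourceType": "collection", "Resource": [f"collection/{collection_name}"]}],
        "AllowFromPublic": True,
    }]),
)

# Data access policy granting the notebook's IAM role access to the collection and its indexes
aoss.create_access_policy(
    name=f"{collection_name}-access",
    type="data",
    policy=json.dumps([{
        "Rules": [
            {"ResourceType": "collection", "Resource": [f"collection/{collection_name}"], "Permission": ["aoss:*"]},
            {"ResourceType": "index", "Resource": [f"index/{collection_name}/*"], "Permission": ["aoss:*"]},
        ],
        "Principal": ["<your-iam-role-arn>"],
    }]),
)

# Create the vector search collection itself
aoss.create_collection(name=collection_name, type="VECTORSEARCH")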

You have provisioned OpenSearch Serverless successfully. Now you’re ready to inject documents into the vector store.

Document preparation

In this example, you will use a sample HTML file as the HTML input. It’s an article with specialized content that LLMs cannot answer without using RAG.

  1. Run the cell under Sample document download to download the HTML file, create a new S3 bucket, and upload the HTML file to the bucket.

  2. Run the cell under Document preparation. It loads the HTML file into Spark DataFrame df_html.

  3. Run the two cells under Parse and clean up HTML to define the functions parse_html and format_md. We use Beautiful Soup to parse HTML, and convert it to Markdown using markdownify in order to use MarkdownTextSplitter for chunking. These functions will be used inside a Spark Python user-defined function (UDF) in later cells.
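A minimal sketch of what such helpers can look like with Beautiful Soup and markdownify is shown below; the function bodies are illustrative rather than the notebook’s exact implementation.

from bs4 import BeautifulSoup
from markdownify import markdownify


def parse_html(html: str) -> str:
    """Parse raw HTML and drop elements that add noise to the text."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "footer"]):
        tag.decompose()
    return str(soup)


def format_md(html: str) -> str:
    """Convert cleaned HTML to Markdown so headings survive for chunking."""
    return markdownify(html, heading_style="ATX")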

  4. Run the cell under Chunking HTML. The example uses LangChain’s MarkdownTextSplitter to split the text along markdown-formatted headings into manageable chunks. Adjusting chunk size and overlap is crucial to help prevent the interruption of contextual meaning, which can affect the accuracy of subsequent vector store searches. The example uses a chunk size of 1,000 and a chunk overlap of 100 to preserve information continuity, but these settings can be fine-tuned to suit different use cases.
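For reference, the chunking step boils down to roughly the following; the variable names are illustrative.

from langchain.text_splitter import MarkdownTextSplitter

text_splitter = MarkdownTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = text_splitter.split_text(markdown_text)  # markdown_text is the output of format_md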

  5. Run the three cells under Embedding. The first two cells configure the models and deploy them through Amazon SageMaker JumpStart. In the third cell, the function process_batch injects the documents into the vector store through the OpenSearch implementation inside LangChain, which inputs the embeddings model and the documents to create the entire vector store.
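The following non-distributed sketch shows the general shape of that ingestion through LangChain’s OpenSearch integration. It uses a locally loaded Hugging Face model as a stand-in for the SageMaker-deployed embedding model and assumes SigV4 authentication to the OpenSearch Serverless collection; the endpoint and index names are placeholders.

import boto3
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch
from opensearchpy import AWSV4SignerAuth, RequestsHttpConnection

# Stand-in embedding model; the notebook deploys its embedding model through SageMaker JumpStart
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, "<region>", "aoss")  # SigV4 signing for OpenSearch Serverless

docsearch = OpenSearchVectorSearch.from_texts(
    texts=chunks,                             # chunks produced by the text splitter
    embedding=embeddings,
    opensearch_url="<collection-endpoint>",   # OpenSearch Serverless collection endpoint
    http_auth=auth,
    connection_class=RequestsHttpConnection,
    use_ssl=True,
    index_name="rag-index",                   # illustrative index name
)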

  6. Run the two cells under Pre-process HTML document. The first cell defines the Spark UDF, and the second cell triggers the Spark action to run the UDF per record containing the entire HTML content.

You have successfully ingested the embeddings into the OpenSearch Serverless collection.

Question answering

In this section, we are going to demonstrate the question-answering capability using the embedding ingested in the previous section.

  1. Run the two cells under Question Answering to create the OpenSearchVectorSearch client and the LLM using Llama 3.1, and to define RetrievalQA, where you can customize how the fetched documents should be added to the prompt using the chain_type parameter. Optionally, you can choose other foundation models (FMs). For such cases, refer to the model card to adjust the chunking length.

  2. Run the next cell to do a similarity search using the query “What is Task Decomposition?” against the vector store, which returns the most relevant information. It takes a few seconds to make documents available in the index. If you get an empty output in the next cell, wait 1-3 minutes and retry.

Now that you have the relevant documents, it’s time to use the LLM to generate an answer based on the embeddings.

  3. Run the next cell to invoke the LLM to generate an answer based on the embeddings.
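Taken together, the retrieval and generation steps boil down to something like the following sketch, assuming the vectorstore client and the llm object created in the previous cells.

from langchain.chains import RetrievalQA

# Retrieve the chunks that are semantically closest to the question
docs = vectorstore.similarity_search("What is Task Decomposition?", k=3)

# Build a question-answering chain that stuffs the retrieved chunks into the prompt
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
answer = qa.invoke({"query": "What is Task Decomposition?"})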

As expected, the LLM answered with a detailed explanation about task decomposition. For production workloads, balancing latency and cost efficiency is crucial in semantic searches through vector stores. It’s important to select the most suitable k-NN algorithm and parameters for your specific needs, as detailed in this post. Additionally, consider using product quantization (PQ) to reduce the dimensionality of embeddings stored in the vector database. This approach can be advantageous for latency-sensitive tasks, though it might involve some trade-offs in accuracy. For additional details, see Choose the k-NN algorithm for your billion-scale use case with OpenSearch.

Clean up

Now to the final step, cleaning up the resources:

  1. Run the cell under Clean up to delete S3, OpenSearch Serverless, and SageMaker resources.
  2. Delete the AWS Glue notebook job.

Conclusion

This post explored a reusable RAG data pipeline using LangChain, AWS Glue, Apache Spark, Amazon SageMaker JumpStart, and Amazon OpenSearch Serverless. The solution provides a reference architecture for ingesting, transforming, vectorizing, and managing indexes for RAG at scale by using Apache Spark’s distributed capabilities and PySpark’s flexible scripting capabilities. This enables you to preprocess your external data in phases that include cleaning, sanitizing, chunking documents, generating vector embeddings for each chunk, and loading them into a vector store.


About the Authors

Noritaka Sekiyama is a Principal Big Data Architect on the AWS Glue team. He is responsible for building software artifacts to help customers. In his spare time, he enjoys cycling with his road bike.

Akito Takeki is a Cloud Support Engineer at Amazon Web Services. He specializes in Amazon Bedrock and Amazon SageMaker. In his spare time, he enjoys travelling and spending time with his family.

Ray Wang is a Senior Solutions Architect at Amazon Web Services. Ray is dedicated to building modern solutions on the Cloud, especially in NoSQL, big data, and machine learning. As a hungry go-getter, he passed all 12 AWS certificates to make his technical field not only deep but wide. He loves to read and watch sci-fi movies in his spare time.

Vishal Kajjam is a Software Development Engineer on the AWS Glue team. He is passionate about distributed computing and using ML/AI for designing and building end-to-end solutions to address customers’ Data Integration needs. In his spare time, he enjoys spending time with family and friends.

Savio Dsouza is a Software Development Manager on the AWS Glue team. His team works on generative AI applications for the Data Integration domain and distributed systems for efficiently managing data lakes on AWS and optimizing Apache Spark for performance and reliability.

Kinshuk Pahare is a Principal Product Manager on AWS Glue. He leads a team of Product Managers who focus on the AWS Glue platform, developer experience, data processing engines, and generative AI. He has been with AWS for 4.5 years. Before that, he did product management at Proofpoint and Cisco.


From RAG to fabric: Lessons learned from building real-world RAGs at GenAIIC – Part 1


The AWS Generative AI Innovation Center (GenAIIC) is a team of AWS science and strategy experts who have deep knowledge of generative AI. They help AWS customers jumpstart their generative AI journey by building proofs of concept that use generative AI to bring business value. Since the inception of AWS GenAIIC in May 2023, we have witnessed high customer demand for chatbots that can extract information and generate insights from massive and often heterogeneous knowledge bases. Such use cases, which augment a large language model’s (LLM) knowledge with external data sources, are known as Retrieval-Augmented Generation (RAG).

This two-part series shares the insights gained by AWS GenAIIC from direct experience building RAG solutions across a wide range of industries. You can use this as a practical guide to building better RAG solutions.

In this first post, we focus on the basics of RAG architecture and how to optimize text-only RAG. The second post outlines how to work with multiple data formats such as structured data (tables, databases) and images.

Anatomy of RAG

RAG is an efficient way to provide an FM with additional knowledge by using external data sources and is depicted in the following diagram:

  • Retrieval: Based on a user’s question (1), relevant information is retrieved from a knowledge base (2) (for example, an OpenSearch index).
  • Augmentation: The retrieved information is added to the FM prompt (3.a) to augment its knowledge, along with the user query (3.b).
  • Generation: The FM generates an answer (4) by using the information provided in the prompt.

The following is a general diagram of a RAG workflow. From left to right are the retrieval, the augmentation, and the generation. In practice, the knowledge base is often a vector store.

Diagram of end-to-end RAG solution.

A deeper dive into the retriever

In a RAG architecture, the FM will base its answer on the information provided by the retriever. Therefore, a RAG solution is only as good as its retriever, and many of the tips that we share in our practical guide are about how to optimize the retriever. But what is a retriever exactly? Broadly speaking, a retriever is a module that takes a query as input and outputs documents relevant to that query from one or more knowledge sources.

Document ingestion

In a RAG architecture, documents are often stored in a vector store. As shown in the following diagram, vector stores are populated by chunking the documents into manageable pieces (1) (if a document is short enough, chunking might not be required) and transforming each chunk of the document into a high-dimensional vector using a vector embedding model (2), such as the Amazon Titan embeddings model. These embeddings have the characteristic that two chunks of text that are semantically close have vector representations that are also close in that embedding space (in the sense of the cosine or Euclidean distance).

The following diagram illustrates the ingestion of text documents in the vector store using an embedding model. Note that the vectors are stored alongside the corresponding text chunk (3), so that at retrieval time, when you identify the chunks closest to the query, you can return the text chunk to be passed to the FM prompt.

Diagram of the ingestion process.

Semantic search

Vector stores allow for efficient semantic search: as shown in the following diagram, given a user query (1), we vectorize it (2) (using the same embedding as the one that was used to build the vector store) and then look for the nearest vectors in the vector store (3), which will correspond to the document chunks that are semantically closest to the initial query (4). Although vector stores and semantic search have become the default in RAG architectures, more traditional keyword-based search is still valuable, especially when searching for domain-specific words (such as technical jargon) or names. Hybrid search is a way to use both semantic search and keywords to rank a document, and we will give more details on this technique in the section on advanced RAG techniques.

The following diagram illustrates the retrieval of text documents that are semantically close to the user query. You must use the same embedding model at ingestion time and at search time.

Diagram of the retrieval process.

Implementation on AWS

A RAG chatbot can be set up in a matter of minutes using Amazon Bedrock Knowledge Bases. The knowledge base can be linked to an Amazon Simple Storage Service (Amazon S3) bucket and will automatically chunk and index the documents it contains in an OpenSearch index, which will act as the vector store. The retrieve_and_generate API does both the retrieval and a call to an FM (Amazon Titan or Anthropic’s Claude family of models on Amazon Bedrock), for a fully managed solution. The retrieve API only implements the retrieval component and allows for a more custom approach downstream, such as document post processing before calling the FM separately.
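The following boto3 sketch illustrates the two API styles; the knowledge base ID, model ARN, and question are placeholders.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

# Fully managed: retrieve the relevant chunks and generate the answer in a single call
response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "<user question>"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<knowledge-base-id>",
            "modelArn": "<model-arn>",
        },
    },
)
print(response["output"]["text"])

# Retrieval only: get the chunks and post-process them before calling the FM yourself
retrieved = bedrock_agent_runtime.retrieve(
    knowledgeBaseId="<knowledge-base-id>",
    retrievalQuery={"text": "<user question>"},
)
chunks = [result["content"]["text"] for result in retrieved["retrievalResults"]]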

In this blog post, we will provide tips and code to optimize a fully custom RAG solution with the following components:

  • An OpenSearch Serverless vector search collection as the vector store
  • Custom chunking and ingestion functions to ingest the documents in the OpenSearch index
  • A custom retrieval function that takes a user query as an input and outputs the relevant documents from the OpenSearch index
  • FM calls to your model of choice on Amazon Bedrock to generate the final answer.

In this post, we focus on a custom solution to help readers understand the inner workings of RAG. Most of the tips we provide can be adapted to work with Amazon Bedrock Knowledge Bases, and we will point this out in the relevant sections.

Overview of RAG use cases

While working with customers on their generative AI journey, we encountered a variety of use cases that fit within the RAG paradigm. In traditional RAG use cases, the chatbot relies on a database of text documents (.doc, .pdf, or .txt). In part 2 of this post, we will discuss how to extend this capability to images and structured data. For now, we’ll focus on a typical RAG workflow: the input is a user question, and the output is the answer to that question, derived from the relevant text chunks or documents retrieved from the database. Use cases include the following:

  • Customer service– This can include the following:
    • Internal– Live agents use an internal chatbot to help them answer customer questions.
    • External– Customers directly chat with a generative AI chatbot.
    • Hybrid– The model generates smart replies for live agents that they can edit before sending to customers.
  • Employee training and resources– In this use case, chatbots can use employee training manuals, HR resources, and IT service documents to help employees onboard faster or find the information they need to troubleshoot internal issues.
  • Industrial maintenance– Maintenance manuals for complex machines can have several hundred pages. Building a RAG solution around these manuals helps maintenance technicians find relevant information faster. Note that maintenance manuals often have images and schemas, which could put them in a multimodal bucket.
  • Product information search– Field specialists need to identify relevant products for a given use case, or conversely find the right technical information about a given product.
  • Retrieving and summarizing financial news– Analysts need the most up-to-date information on markets and the economy and rely on large databases of news or commentary articles. A RAG solution is a way to efficiently retrieve and summarize the relevant information on a given topic.

In the following sections, we will give tips that you can use to optimize each aspect of the RAG pipeline (ingestion, retrieval, and answer generation) depending on the underlying use case and data format. To verify that the modifications improve the solution, you first need to be able to assess the performance of the RAG solution.

Evaluating a RAG solution

Contrary to traditional machine learning (ML) models, for which evaluation metrics are well defined and straightforward to compute, evaluating a RAG framework is still an open problem. First, collecting ground truth (information known to be correct) for the retrieval component and the generation component is time consuming and requires human intervention. Secondly, even with several question-and-answer pairs available, it’s difficult to automatically evaluate if the RAG answer is close enough to the human answer.

In our experience, when a RAG system performs poorly, we found the retrieval part to almost always be the culprit. Large pre-trained models such as Anthropic’s Claude model will generate high-quality answers if provided with the right information, and we notice two main failure modes:

  • The relevant information isn’t present in the retrieved documents: In this case, the FM can try to make up an answer or use its own knowledge to answer. Adding guardrails against such behavior is essential.
  • Relevant information is buried within an excessive amount of irrelevant data: When the scope of the retriever is too broad, the FM can get confused and start mixing up multiple data sources, resulting in a wrong answer. More advanced models such as Anthropic’s Claude Sonnet 3.5 and Opus are reported to be more robust against such behavior, but this is still a risk to be aware of.

To evaluate the quality of the retriever, you can use the following traditional retrieval metrics:

  • Top-k accuracy– This metric measures whether at least one relevant document is found within the top k retrieved documents.
  • Mean Reciprocal Rank (MRR)– This metric considers the ranking of the retrieved documents. It’s calculated as the average of the reciprocal ranks (RR) for each query. The RR is the inverse of the rank position of the first relevant document. For example, if the first relevant document is in third position, the RR is 1/3. A higher MRR indicates that the retriever can rank the most relevant documents higher.
  • Recall– This metric measures the ability of the retriever to retrieve relevant documents from the corpus. It’s calculated as the number of relevant documents that are successfully retrieved over the total number of relevant documents. Higher recall indicates that the retriever can find most of the relevant information.
  • Precision– This metric measures the ability of the retriever to retrieve only relevant documents and avoid irrelevant ones. It’s calculated by the number of relevant documents successfully retrieved over the total number of documents retrieved. Higher precision indicates that the retriever isn’t retrieving too many irrelevant documents.

Note that if the documents are chunked, the metrics must be computed at the chunk level. This means the ground truth to evaluate a retriever is pairs of question and list of relevant document chunks. In many cases, there is only one chunk that contains the answer to the question, so the ground truth becomes question and relevant document chunk.
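The following sketch shows one way to compute these metrics at the chunk level for a single query, given the ordered list of retrieved chunk IDs and the set of ground truth relevant chunk IDs; the function names are illustrative.

from typing import List, Set

def top_k_accuracy(retrieved: List[str], relevant: Set[str], k: int) -> float:
    """1.0 if at least one relevant chunk appears in the top k results, else 0.0."""
    return 1.0 if any(chunk in relevant for chunk in retrieved[:k]) else 0.0

def reciprocal_rank(retrieved: List[str], relevant: Set[str]) -> float:
    """Inverse rank of the first relevant chunk; 0.0 if none is retrieved."""
    for rank, chunk in enumerate(retrieved, start=1):
        if chunk in relevant:
            return 1.0 / rank
    return 0.0

def recall(retrieved: List[str], relevant: Set[str]) -> float:
    """Fraction of the relevant chunks that were retrieved."""
    return len(set(retrieved) & relevant) / len(relevant) if relevant else 0.0

def precision(retrieved: List[str], relevant: Set[str]) -> float:
    """Fraction of the retrieved chunks that are relevant."""
    return len(set(retrieved) & relevant) / len(retrieved) if retrieved else 0.0

# MRR is the average of reciprocal_rank over all evaluation queries.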

To evaluate the quality of the generated response, two main options are:

  • Evaluation by subject matter experts: this provides the highest reliability in terms of evaluation but can’t scale to a large number of questions and slows down iterations on the RAG solution.
  • Evaluation by FM (also called LLM-as-a-judge):
    • With a human-created starting point: Provide the FM with a set of ground truth question-and-answer pairs and ask the FM to evaluate the quality of the generated answer by comparing it to the ground truth one.
    • With an FM-generated ground truth: Use an FM to generate question-and-answer pairs for given chunks, and then use this as a ground truth, before resorting to an FM to compare RAG answers to that ground truth.

We recommend that you use an FM for evaluations to iterate faster on improving the RAG solution, but to use subject-matter experts (or at least human evaluation) to provide a final assessment of the generated answers before deploying the solution.

A growing number of libraries offer automated evaluation frameworks that rely on additional FMs to create a ground truth and evaluate the relevance of the retrieved documents as well as the quality of the response:

  • Ragas– This framework offers FM-based metrics previously described, such as context recall, context precision, answer faithfulness, and answer relevancy. It needs to be adapted to Anthropic’s Claude models because of its heavy dependence on specific prompts.
  • LlamaIndex– This framework provides multiple modules to independently evaluate the retrieval and generation components of a RAG system. It also integrates with other tools such as Ragas and DeepEval. It contains modules to create ground truth (query-and-context pairs and question-and-answer pairs) using an FM, which alleviates the use of time-consuming human collection of ground truth.
  • RefChecker– This is an Amazon Science library focused on fine-grained hallucination detection.

Troubleshooting RAG

Evaluation metrics give an overall picture of the performance of retrieval and generation, but they don’t help diagnose issues. Diving deeper into poor responses can help you understand what’s causing them and what you can do to alleviate the issue. You can diagnose the issue by looking at evaluation metrics and also by having a human evaluator take a closer look at both the LLM answer and the retrieved documents.

The following is a brief overview of issues and potential fixes. We will describe each of the techniques in more detail, including real-world use cases and code examples, in the next section.

  • The relevant chunk wasn’t retrieved (retriever has low top k accuracy and low recall or spotted by human evaluation):
    • Try increasing the number of documents retrieved by the nearest neighbor search and re-ranking the results to cut back on the number of chunks after retrieval.
    • Try hybrid search. Using keywords in combination with semantic search (known as hybrid search) might help, especially if the queries contain names or domain-specific jargon.
    • Try query rewriting. Having an FM detect the intent or rewrite the query can help create a query that’s better suited for the retriever. For instance, a user query such as “What information do you have in the knowledge base about the economic outlook in China?” contains a lot of context that isn’t relevant to the search and would be more efficient if rewritten as “economic outlook in China” for search purposes.
  • Too many chunks were retrieved (retriever has low precision or spotted by human evaluation):
    • Try using keyword matching to restrict the search results. For example, if you’re looking for information about a specific entity or property in your knowledge base, only retrieve documents that explicitly mention them.
    • Try metadata filtering in your OpenSearch index. For example, if you’re looking for information in news articles, try using the date field to filter only the most recent results.
    • Try using query rewriting to get the right metadata filtering. This advanced technique uses the FM to rewrite the user query as a more structured query, allowing you to make the most of OpenSearch filters. For example, if you’re looking for the specifications of a specific product in your database, the FM can extract the product name from the query, and you can then use the product name field to filter out the product name.
    • Try using reranking to cut down on the number of chunks passed to the FM.
  • A relevant chunk was retrieved, but it’s missing some context (can only be assessed by human evaluation):
    • Try changing the chunking strategy. Keep in mind that small chunks are good for precise questions, while large chunks are better for questions that require a broad context:
      • Try increasing the chunk size and overlap as a first step.
      • Try using section-based chunking. If you have structured documents, use sections delimiters to cut your documents into chunks to have more coherent chunks. Be aware that you might lose some of the more fine-grained context if your chunks are larger.
    • Try small-to-large retrievers. If you want to keep the fine-grained details of small chunks but make sure you retrieve all the relevant context, small-to-large retrievers will retrieve your chunk along with the previous and next ones.
  • If none of the above help:
    • Consider training a custom embedding.
  • The retriever isn’t at fault, the problem is with FM generation (evaluated by a human or LLM):
    • Try prompt engineering to mitigate hallucinations.
    • Try prompting the FM to use quotes in its answers, to allow for manual fact checking.
    • Try using another FM to evaluate or correct the answer.

A practical guide to improving the retriever

Note that not all the techniques that follow need to be implemented together to optimize your retriever—some might even have opposite effects. Use the preceding troubleshooting guide to get a shortlist of what might work, then look at the examples in the corresponding sections that follow to assess if the method can be beneficial to your retriever.

Hybrid search

Example use case: A large manufacturer built a RAG chatbot to retrieve product specifications. These documents contain technical terms and product names. Consider the following example queries:

query_1 = "What is the viscosity of product XYZ?"
query_2 = "How viscous is XYZ?"

The queries are equivalent and need to be answered with the same document. The keyword component will make sure that documents mentioning the name of the product, XYZ, are boosted, while the semantic component will make sure that documents containing viscosity get a high score, even when the query contains the word viscous.

Combining vector search with keyword search can effectively handle domain-specific terms, abbreviations, and product names that embedding models might struggle with. Practically, this can be achieved in OpenSearch by combining a k-nearest neighbors (k-NN) query with keyword matching. The weights for the semantic search compared to keyword search can be adjusted. See the following example code:

vector_embedding = compute_embedding(query)
size = 10
semantic_weight = 10
keyword_weight = 1
search_query = {"size":size, "query": { "bool": { "should":[] , "must":[] } } }
    # semantic search
    search_query['query']['bool']['should'].append(
            {"function_score": 
             { "query": 
              {"knn": 
               {"vector_field": 
                {"vector": vector_embedding, 
                "k": 10 # The number of nearest neighbors to retrieve
                }}}, 
              "weight": semantic_weight } })
              
    # keyword search
    search_query['query']['bool']['should'].append({
             "function_score": 
            { "query": 
             {"match": 
             # This will increase the score of chunks that match the words in the query
              {"chunk_text":  query} 
              },
             "weight": keyword_weight } })

Amazon Bedrock Knowledge Bases also supports hybrid search, but you can’t adjust the weights for semantic compared to keyword search.

Adding metadata information to text chunks

Example use case: Using the same example of a RAG chatbot for product specifications, consider product specifications that are several pages long and where the product name is only present in the header of the document. When ingesting the document into the knowledge base, it’s chunked into smaller pieces for the embedding model, and the product name only appears in the first chunk, which contains the header. See the following example:

# Note: the following document was generated by Anthropic’s Claude Sonnet 
# and does not contain information about a real product

document_name = "Chemical Properties for Product XYZ"

chunk_1 = """
Product Description:
XYZ is a multi-purpose cleaning solution designed for industrial and commercial use. 
It is a concentrated liquid formulation containing anionic and non-ionic surfactants, 
solvents, and alkaline builders.

Chemical Composition:
- Water (CAS No. 7732-18-5): 60-80%
- 2-Butoxyethanol (CAS No. 111-76-2): 5-10%
- Sodium Hydroxide (CAS No. 1310-73-2): 2-5%
- Ethoxylated Alcohols (CAS No. 68439-46-3): 1-3%
- Sodium Metasilicate (CAS No. 6834-92-0): 1-3%
- Fragrance (Proprietary Mixture): <1%
"""

# chunk 2 below doesn't contain any mention of "XYZ"
chunk_2 = """
Physical Properties:
- Appearance: Clear, yellow liquid
- Odor: Mild, citrus fragrance
- pH (concentrate): 12.5 - 13.5
- Specific Gravity: 1.05 - 1.10
- Solubility in Water: Complete
- VOC Content: <10%

Shelf-life:
When stored in its original, unopened container at temperatures between 15°C and 25°C,
 the product has a shelf life of 24 months from the date of manufacture.
Once opened, the shelf life is reduced due to potential contamination and exposure to
 air. It is recommended to use the product within 6 months after opening the container.
"""

The chunk containing information about the shelf life of XYZ doesn’t contain any mention of the product name, so retrieving the right chunk when searching for shelf life of XYZ among dozens of other documents mentioning the shelf life of various products isn’t possible. A solution is to prepend the document name or title to each chunk. This way, when performing a hybrid search about the shelf life of product XYZ, the relevant chunk is more likely to be retrieved.

# append the document name to the chunks to improve context,
# now chunk 2 will contain the product name

chunk_1 = document_name + chunk_1
chunk_2 = document_name + chunk_2

This is one way to use document metadata to improve search results, which can be sufficient in some cases. Later, we discuss how you can use metadata to filter the OpenSearch index.

Small-to-large chunk retrieval

Example use case: A customer built a chatbot to help their agents better serve customers. When an agent tries to help a customer troubleshoot their internet access, they might search for How to troubleshoot internet access? You can see a document where the instructions are split between two chunks in the following example. The retriever will most likely return the first chunk but might miss the second chunk when using hybrid search. Prepending the document title might not help in this example.

document_title = "Resolving network issues"

chunk_1 = """
[....]

# Troubleshooting internet access:

1. Check your physical connections:
   - Ensure that the Ethernet cable (if using a wired connection) is securely 
   plugged into both your computer and the modem/router.
   - If using a wireless connection, check that your device's Wi-Fi is turned 
   on and connected to the correct network.

2. Restart your devices:
   - Reboot your computer, laptop, or mobile device.
   - Power cycle your modem and router by unplugging them from the power source, 
   waiting for a minute, and then plugging them back in.

"""

chunk_2 = """
3. Check for network outages:
   - Contact your internet service provider (ISP) to inquire about any known 
   outages or service disruptions in your area.
   - Visit your ISP's website or check their social media channels for updates on 
   service status.
  
4. Check for interference:
   - If using a wireless connection, try moving your device closer to the router or access point.
   - Identify and eliminate potential sources of interference, such as microwaves, cordless phones, or other wireless devices operating on the same frequency.

# Router configuration

[....]
"""

To mitigate this issue, the first thing to try is to slightly increase the chunk size and overlap, reducing the likelihood of improper segmentation, but this requires trial and error to find the right parameters. A more effective solution is to employ a small-to-large chunk retrieval strategy. After retrieving the most relevant chunks through semantic or hybrid search (chunk_1 in the preceding example), adjacent chunks (chunk_2) are retrieved, merged with the initial chunks and provided to the FM for a broader context. You can even pass the full document text if the size is reasonable.

This method requires an additional OpenSearch field in the index to keep track of the chunk number and document name at ingest time, so that you can use those to retrieve the neighboring chunks after retrieving the most relevant chunk. See the following code example.

document_name = doc['document_name'] 
current_chunk = doc['current_chunk']

query = {
    "query": {
        "bool": {
            "must": [
                {
                    "match": {
                        "document_name": document_name
                    }
                }
            ],
            "should": [
                {"term": {"chunk_number": current_chunk - 1}},
                {"term": {"chunk_number": current_chunk + 1}}
            ],
            "minimum_should_match": 1
        }
    }
}

A more general approach is to do hierarchical chunking, in which each small (child) chunk is linked to a larger (parent) chunk. At retrieval time, you retrieve the child chunks, but then replace them with the parent chunks before sending the chunks to the FM.

Amazon Bedrock Knowledge Bases can perform hierarchical chunking.

Section-based chunking

Example use case: A financial news provider wants to build a chatbot to retrieve and summarize commentary articles about certain geographic regions, industries, or financial products. The questions require a broad context, such as What is the outlook for electric vehicles in China? Answering that question requires access to the entire section on electric vehicles in the “Chinese Auto Industry Outlook” commentary article. Compare that to other question and answer use cases that require small chunks to answer a question (such as our example about searching for product specifications).

Example use case: Section-based chunking also works well for how-to guides (such as the preceding internet troubleshooting example) or industrial maintenance use cases where the user needs to follow step-by-step instructions and having truncated content would have a negative impact.

Using the structure of the text document to determine where to split it is an efficient way to create chunks that are coherent and contain all relevant context. If the document is in HTML or Markdown format, you can use the section delimiters to determine the chunks (see Langchain Markdown Splitter or HTML Splitter). If the documents are in PDF format, the Textractor library provides a wrapper around Amazon Textract that uses the Layout feature to convert a PDF document to Markdown or HTML.
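For Markdown documents, a section-based split with LangChain’s MarkdownHeaderTextSplitter can look like the following sketch; the header levels to split on depend on your documents.

from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("#", "title"),
    ("##", "section"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
# Each resulting chunk is a coherent section, with the header hierarchy kept as metadata
section_chunks = splitter.split_text(markdown_document)  # markdown_document is the document text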

Note that section-based chunking will create chunks with varying size, and they might not fit the context window of Cohere Embed, which is limited to 500 tokens. Amazon Titan Text Embeddings are better suited to section-based chunking because of their context window of 8,192 tokens.

To implement section based chunking in Amazon Bedrock Knowledge Bases, you can use an AWS Lambda function to run a custom transformation. Amazon Bedrock Knowledge Bases also has a feature to create semantically coherent chunks, called semantic chunking. Instead of using the sections of the documents to determine the chunks, it uses embedding distance to create meaningful clusters of sentences.

Rewriting the user query

Query rewriting is a powerful technique that can benefit a variety of use cases.

Example use case: A RAG chatbot that’s built for a food manufacturer allows customers to ask questions about products, such as ingredients, shelf-life, and allergens. Consider the following example query:

query = """
Can you list all the ingredients in the nuts and seeds granola?
Put the allergens in all caps.
"""

Query rewriting can help in several ways:

  • It can rewrite the query just for search purposes, without information about formatting that might distract the retriever.
  • It can extract a list of keywords to use for hybrid search.
  • It can extract the product name, which can be used as a filter in the OpenSearch index to refine search results (more details in the next section).

In the following code, we prompt the FM to rewrite the query and extract keywords and the product name. To avoid introducing too much latency with query rewriting, we suggest using a smaller model like Anthropic’s Claude Haiku and provide an example of a reformatted query to boost the performance.

import json

query_rewriting_prompt = """
Rewrite the query as a json with the following keys:
- rewritten_query: a better version of the user's query that will be used to compute 
an embedding and do semantic search
- keywords: a list of keywords that correspond to the query, to be used in a 
search engine, it should not contain the product name.
- product_name: if the query is a about a specific product, give the name here,
 otherwise say None.

<example>
H: what are the ingedients in the savory trail mix?
A: {{
  "rewritten_query": "ingredients savory trail mix",
  "keywords": ["ingredients"],
  "product_name": "savory trail mix"
}}
</example>

<query>
{query}
</query>

Only output the json, nothing else.
"""

def rewrite_query(query):
    response = call_FM(query_rewriting_prompt.format(query=query))
    print(response)
    json_query = json.loads(response)
    return json_query
    
rewrite_query(query)

The code output will be the following json:

{ 
"rewritten_query":"ingredients nuts and seeds granola allergens",
"keywords": ["ingredients", "allergens"], 
"product_name": "nuts and seeds granola" 
}

Amazon Bedrock Knowledge Bases now supports query rewriting. See this tutorial.

Metadata filtering

Example use case: Let’s continue with the previous example, where a customer asks “Can you list all the ingredients in the nuts and seeds granola? Put the allergens in bold and all caps.” Rewriting the query allowed you to remove superfluous information about the formatting and improve the results of hybrid search. However, there might be dozens of products that are either granola, or nuts, or granola with nuts.

If you enforce an OpenSearch filter to match exactly the product name, the retriever will return only the product information for nuts and seeds granola instead of the k-nearest documents when using hybrid search. This will reduce the number of tokens in the prompt and will both improve latency of the RAG chatbot and diminish the risk of hallucinations because of information overload.

This scenario requires setting up the OpenSearch index with metadata. Note that if your documents don’t come with metadata attached, you can use an FM at ingest time to extract metadata from the documents (for example, title, date, and author).

oss = get_opensearch_serverless_client()
request = {
    "product_info": product_info,  # full text for the product information
    "vector_field_product": embed_query_titan(product_info),  # embedding for the product information
    "product_name": product_name,
    "date": date,  # optional field, can allow to sort by most recent
    "_op_type": "index",
    "source": file_key  # this is the s3 location, you can replace this with a URL
}
oss.index(index=index_name, body=request)

The following is an example of combining hybrid search, query rewriting, and filtering on the product_name field. Note that for the product name, we use a match_phrase clause to make sure that if the product name contains several words, the product name is matched in full; that is, if the product you’re looking for is “nuts and seeds granola”, you don’t want to match all product names that contain “nuts”, “seeds”, or “granola”.

query = """
Can you list all the ingredients in the nuts and seeds granola?
Put the allergens in bold and all caps.
"""
# using the rewrite_query function from the previous section
json_query = rewrite_query(query) 

# get the product name and keywords from the json query
product_name = json_query["product_name"] 
keywords = json_query["keywords"]

# compute the vector embedding of the rewritten query
vector_embedding = compute_embedding(json_query["rewritten_query"])

# initialize the search query dictionary
search_query = {"size": 10, "query": {"bool": {"should": [], "must": []}}}

# add a must clause with match_phrase to filter on the product name
search_query['query']['bool']['must'].append(
    {"match_phrase": {
        "product_name": product_name  # Extracted product name must match the product name field
    }})

# semantic search
search_query['query']['bool']['should'].append(
    {"function_score":
        {"query":
            {"knn":
                {"vector_field_product":
                    {"vector": vector_embedding,
                     "k": 10  # The number of nearest neighbors to retrieve
                     }}},
         "weight": semantic_weight}})

# keyword search
search_query['query']['bool']['should'].append(
    {"function_score":
        {"query":
            {"match":
                # This will increase the score of chunks that match the words in the query
                {"product_info": query}
             },
         "weight": keyword_weight}})

Amazon Bedrock Knowledge Bases recently introduced the ability to use metadata. See Amazon Bedrock Knowledge Bases now supports metadata filtering to improve retrieval accuracy for details on the implementation.

Training custom embeddings

Training custom embeddings is a more expensive and time-consuming way to improve a retriever, so it shouldn’t be the first thing to try to improve your RAG. However, if the performance of the retriever is still not satisfactory after trying the tips already mentioned, then training a custom embedding can boost its performance. Amazon Titan Text Embeddings models aren’t currently available for fine tuning, but the FlagEmbedding library on Hugging Face provides a way to fine-tune BAAI embeddings, which are available in several sizes and rank highly in the Hugging Face embedding leaderboard. Fine-tuning requires the following steps:

  • Gather positive question-and-document pairs. You can do this manually or by using an FM prompted to generate questions based on the document.
  • Gather negative question-and-document pairs. It’s important to focus on documents that might be considered relevant by the pre-trained model but are not. This process is called hard negative mining.
  • Feed those pairs to the FlagEmbedding training module for fine-tuning as a JSON:
    {"query": str, "pos": List[str], "neg":List[str]}
    where query is the query, pos is a list of positive texts, and neg is a list of negative texts.
  • Combine the fine-tuned model with the pre-trained model to avoid over-fitting on the fine-tuning dataset.
  • Deploy the final model for inference, for example on Amazon SageMaker, and evaluate it on sample questions.

Improving reliability of generated responses

Even with an optimized retriever, hallucinations can still occur. Prompt engineering is the best way to help prevent hallucinations in RAG. Additionally, asking the FM to generate quotations used in the answer can further reduce hallucinations and empower the user to verify the information sources.

Prompt engineering guardrails

Example use case: We built a chatbot that analyzes scouting reports for a professional sports franchise. The user might input What are the strengths of Player X? Without guardrails in the prompt, the FM might try to fill the gaps in the provided documents by using its own knowledge of Player X (if he’s a well-known player) or worse, make up information by combining knowledge it has about other players.

The FM’s training knowledge can sometimes get in the way of RAG answers. Basic prompting techniques can help mitigate hallucinations:

  • Instruct the FM to only use information available in the documents to answer the question.
    • Only use the information available in the documents to answer the question
  • Give the FM the option to say when it doesn’t have the answer.
    • If you can’t answer the question based on the documents provided, say you don’t know.

Asking the FM to output quotes

Another approach to make answers more reliable is to output supporting quotations. This has two benefits:

  • It allows the FM to generate its response by first outputting the relevant quotations, and then using them to generate its answer.
  • The presence of the quotation in the cited document can be checked programmatically, and the user can be warned if the quotation wasn’t found in the text. They can also look in the referenced document to get more context about the quotation.

In the following example, we prompt the FM to output quotations in <quote> tags. The quotations are nicely formatted as a JSON, with the source document name. Note how we put each document in its own <doc_i> tag in the prompt, so that the FM can reference it.

# Note: The scouting reports for Player A and Player B used in the example below
# were generated by Anthropic’s Claude Sonnet 3.
quotes_tags_prompt = """
You are an AI assistant chatbot designed to answer questions about your team's baseball scouting reports.

Here are some reports about players that you can use to answer the question:

<doc_0>
{doc_0}
</doc_0>

<doc_1>
{doc_1}
</doc_1>

<instructions>
- In <scratchpad> tags, you should put the document names and quotes from these documents 
 that help you answer this question. 
 You must format this as one or more jsons format with 'document_name' and 'quote' as keys. 
- Then put your answer in <answer> tags. 
 If you refer to documents in your answer, make sure you are using the corresponding tag 
 e.g. in doc_0, in doc_3 etc.
- If you can't answer the question because there is not enough information in the reports say 
 "I can't answer this based on the provided reports." 
</instructions>

<question>
{question}
</question>
"""

Below is the FM response to the question "What are the weaknesses of each player?":

"""
<scratchpad>
[
{
"document_name": "doc_0",
"quote": "The main concern with Player A is his approach at the plate. He can be overly aggressive at times and will chase pitches out of the zone. Improving his plate discipline and pitch recognition will be key to him reaching his full offensive potential."
},
{
"document_name": "doc_1",
"quote": "The main area of focus for Player B's development will be improving his plate discipline and pitch recognition. He can be overly aggressive at times, leading to a high strikeout rate and a lower on-base percentage than his hit tool would suggest."
}
]
</scratchpad>
<answer>
The main weaknesses of the players are:
For Player A:
- Overly aggressive approach at the plate, chasing pitches out of the zone
- Needs to improve plate discipline and pitch recognition to reach full offensive potential
For Player B:
- Can be overly aggressive at the plate, leading to high strikeout rate and lower on-base percentage
- Needs to improve plate discipline and pitch recognition
</answer>
"""

One drawback of generating quotations outside of the answer is that the FM tends to only select one or two short quotations per document. This doesn’t make it a great choice when you’re asking complex questions that need to pull a lot of information from diverse documents. It also leads to creating more output tokens that are eventually hidden from the user, which can significantly degrade the latency of the RAG if the quotations are too long.

Generating an answer with inline quotations

Continuing the example of the chatbot to analyze scouting reports, the user might want to summarize reports about a player from several scouts. A single player can have over a dozen scouting reports that the FM could use in its summary, and the previous method outputs only a small number of quotations as justification. One alternative is to ask the FM to embed quotations directly in the answer, by using quotation marks and inline citations.

# Note: The scouting reports for Player A used in the example below
# were generated by Anthropic’s Claude Sonnet 3.

quotes_in_text_prompt = """
You are an AI assistant chatbot designed to answer questions about your team's baseball scouting reports.

Here are some reports about players that you can use to answer the question:

<doc_0>
{doc_0}
</doc_0>

...

<doc_10>
{doc_10}
</doc_10>

<instructions>
- Put your answer in <answer> tags.
- Use as much information from different reports as possible.
- You should only use information in the documents to answer. If you don't have enough information in the reports to answer, say you can't answer based on the reports.
- You should ground your answer by quoting the relevant documents by using quotation marks.
- After the quotes, put an inline citation <example>Player A is "very athletic" (doc_2)</example>
</instructions>

<question>
{question}
</question>
"""

Below is the FM response to the question "What are the strengths of player A?":

"""
<scratchpad>
[
{
"document_name": "doc_0",
"quote": "The main concern with Player A is his approach at the plate. He can be overly aggressive at times and will chase pitches out of the zone. Improving his plate discipline and pitch recognition will be key to him reaching his full offensive potential."
},
{
"document_name": "doc_1",
"quote": "The main area of focus for Player B's development will be improving his plate discipline and pitch recognition. He can be overly aggressive at times, leading to a high strikeout rate and a lower on-base percentage than his hit tool would suggest."
}
]
</scratchpad>
<answer>
The main weaknesses of the players are:
For Player A:
- Overly aggressive approach at the plate, chasing pitches out of the zone
- Needs to improve plate discipline and pitch recognition to reach full offensive potential
For Player B:
- Can be overly aggressive at the plate, leading to high strikeout rate and lower on-base percentage
- Needs to improve plate discipline and pitch recognition
</answer>
"""

Verifying quotes

You can use a Python script to check if a quotation is present in the referenced text, thanks to the doc_i tags. However, while this checking mechanism guarantees no false positives, there can be false negatives: when the quotation-checking function fails to find a quotation in the documents, it only means that the quotation isn’t present verbatim in the text. The information might still be factually correct but formatted differently. The FM might remove punctuation or correct misspellings from the original document, or the original document might contain Unicode characters that the FM cannot reproduce, which makes the quotation-checking function fail.
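The following is a minimal sketch of such a check, with light normalization (lowercasing, stripping punctuation, collapsing whitespace) to reduce false negatives; the function name is illustrative.

import re
import string

def quote_in_document(quote: str, document: str) -> bool:
    """Return True if the quote appears in the document after light normalization."""
    def normalize(text: str) -> str:
        text = text.lower()
        text = text.translate(str.maketrans("", "", string.punctuation))
        return re.sub(r"\s+", " ", text).strip()
    return normalize(quote) in normalize(document)

# Example: flag a citation for review when the quote isn't found verbatim
# (documents is assumed to be a dict keyed by the doc_i tags used in the prompt)
# if not quote_in_document(quote, documents[document_name]): warn_user()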

To improve the user experience, you can display in the UI if the quotation was found, in which case the user can fully trust the response, and if the quotation wasn’t found, the UI can display a warning and suggest that the user check the cited source. Another benefit of prompting the FM to provide the associated source in the response is that it allows you to display only the sources in the UI to avoid information overload but still provide the user with a way to look for additional information if needed.

An additional FM call, potentially with another model, can be used to assess the response instead of using the more rigid approach of the Python script. However, using an FM to grade another FM answer has some uncertainty and it cannot match the reliability provided by using a script to check the quotation or, in the case of a suspect quotation, by using human verification.

Conclusion

Building effective text-only RAG solutions requires carefully optimizing the retrieval component to surface the most relevant information to the language model. Although FMs are highly capable, their performance is heavily dependent on the quality of the retrieved context.

As the adoption of generative AI continues to accelerate, building trustworthy and reliable RAG solutions will become increasingly crucial across industries to facilitate their broad adoption. We hope the lessons learned from our experiences at AWS GenAIIC provide a solid foundation for organizations embarking on their own generative AI journeys.

In this first post of the series, we covered the core concepts behind RAG architectures and discussed strategies for evaluating RAG performance, both quantitatively through metrics and qualitatively by analyzing individual outputs. We outlined several practical tips for improving text retrieval, including using hybrid search techniques, enhancing context through data preprocessing, and rewriting queries for better relevance. We also explored methods for increasing reliability, such as prompting the language model to provide supporting quotations from the source material and programmatically verifying their presence.

In the second post in this series, we will discuss RAG beyond text. We will present techniques to work with multiple data formats, including structured data (tables and databases) and multimodal RAG, which mixes text and images.


About the Author

Aude Genevay is a Senior Applied Scientist at the Generative AI Innovation Center, where she helps customers tackle critical business challenges and create value using generative AI. She holds a PhD in theoretical machine learning and enjoys turning cutting-edge research into real-world solutions.

Read More

Enhance your Amazon Redshift cloud data warehouse with easier, simpler, and faster machine learning using Amazon SageMaker Canvas

Enhance your Amazon Redshift cloud data warehouse with easier, simpler, and faster machine learning using Amazon SageMaker Canvas

Machine learning (ML) helps organizations to increase revenue, drive business growth, and reduce costs by optimizing core business functions such as supply and demand forecasting, customer churn prediction, credit risk scoring, pricing, predicting late shipments, and many others.

Conventional ML development cycles take weeks to many months and require data science and ML development skills that are in short supply. Business analysts’ ideas for using ML models often sit in prolonged backlogs because of the data engineering and data science teams’ limited bandwidth and data preparation activities.

In this post, we dive into a business use case for a banking institution. We will show you how a financial or business analyst at a bank can easily predict if a customer’s loan will be fully paid, charged off, or current using a machine learning model that is best for the business problem at hand. The analyst can easily pull in the data they need, use natural language to clean up and fill any missing data, and finally build and deploy a machine learning model that can accurately predict the loan status as an output, all without needing to become a machine learning expert to do so. The analyst will also be able to quickly create a business intelligence (BI) dashboard using the results from the ML model within minutes of receiving the predictions. Let’s learn about the services we will use to make this happen.

Amazon SageMaker Canvas is a web-based visual interface for building, testing, and deploying machine learning workflows. It allows data scientists and machine learning engineers to interact with their data and models and to visualize and share their work with others with just a few clicks.

SageMaker Canvas also integrates with Data Wrangler, which helps with creating data flows and preparing and analyzing your data. Built into Data Wrangler is the Chat for data prep option, which allows you to use natural language to explore, visualize, and transform your data in a conversational interface.

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it cost-effective to efficiently analyze all your data using your existing business intelligence tools.

Amazon QuickSight powers data-driven organizations with unified business intelligence (BI) at hyperscale. With QuickSight, all users can meet varying analytic needs from the same source of truth through modern interactive dashboards, paginated reports, embedded analytics, and natural language queries.

Solution overview

The solution architecture that follows illustrates:

  1. A business analyst signs in to SageMaker Canvas.
  2. The business analyst connects to the Amazon Redshift data warehouse and pulls the desired data into SageMaker Canvas.
  3. The analyst tells SageMaker Canvas to build a predictive analysis ML model.
  4. After the model has been built, the analyst gets batch prediction results.
  5. The analyst sends the results to QuickSight for users to further analyze.

Prerequisites

Before you begin, make sure you have the following prerequisites in place:

  • An AWS account and role with the AWS Identity and Access Management (IAM) privileges to deploy the following resources:
    • IAM roles.
    • A provisioned or serverless Amazon Redshift data warehouse. For this post we’ll use a provisioned Amazon Redshift cluster.
    • A SageMaker domain.
    • A QuickSight account (optional).
  • Basic knowledge of a SQL query editor.

Set up the Amazon Redshift cluster

We’ve created a CloudFormation template to set up the Amazon Redshift cluster.

  1. Deploy the CloudFormation template to your account.
  2. Enter a stack name, then choose Next twice and keep the rest of the parameters at their defaults.
  3. In the review page, scroll down to the Capabilities section, and select I acknowledge that AWS CloudFormation might create IAM resources.
  4. Choose Create stack.

The stack will run for 10–15 minutes. After it’s finished, you can view the outputs of the parent and nested stacks as shown in the following figures:

Parent stack

Nested stack 

Sample data

You will use a publicly available dataset of bank customers and their loans, which AWS hosts and maintains in its own S3 bucket for workshops. It includes customer demographic data and loan terms.

Implementation steps

Load data to the Amazon Redshift cluster

  1. Connect to your Amazon Redshift cluster using query editor v2. To navigate to the Amazon Redshift query editor v2, follow the steps in Opening query editor v2.
  2. Create a table in your Amazon Redshift cluster using the following SQL command:
    DROP table IF EXISTS public.loan_cust;
    
    CREATE TABLE public.loan_cust (
        loan_id bigint,
        cust_id bigint,
        loan_status character varying(256),
        loan_amount bigint,
        funded_amount_by_investors double precision,
        loan_term bigint,
        interest_rate double precision,
        installment double precision,
        grade character varying(256),
        sub_grade character varying(256),
        verification_status character varying(256),
        issued_on character varying(256),
        purpose character varying(256),
        dti double precision,
        inquiries_last_6_months bigint,
        open_credit_lines bigint,
        derogatory_public_records bigint,
        revolving_line_utilization_rate double precision,
        total_credit_lines bigint,
        city character varying(256),
        state character varying(256),
        gender character varying(256),
        ssn character varying(256),
        employment_length bigint,
        employer_title character varying(256),
        home_ownership character varying(256),
        annual_income double precision,
        age integer
    ) DISTSTYLE AUTO;

  3. Load data into the loan_cust table using the following COPY command:
    COPY loan_cust  FROM 's3://redshift-demos/bootcampml/loan_cust.csv'
    iam_role default
    region 'us-east-1' 
    delimiter '|'
    csv
    IGNOREHEADER 1;

  4. Query the table to see what the data looks like:
    SELECT * FROM loan_cust LIMIT 100;

Set up chat for data

  1. To use the Chat for data prep option in SageMaker Canvas, you must enable it in Amazon Bedrock.
    1. Open the AWS Management Console, go to Amazon Bedrock, and choose Model access in the navigation pane.
    2. Choose Enable specific models. Under Anthropic, select Claude, and then choose Next.
    3. Review the selection and choose Submit.
  2. Navigate to the Amazon SageMaker service from the AWS Management Console, select Canvas, and choose Open Canvas.
  3. Choose Datasets from the navigation pane, then choose the Import data dropdown, and select Tabular.
  4. For Dataset name, enter redshift_loandata and choose Create.
  5. On the next page, choose Data Source and select Redshift as the source. Under Redshift, select + Add Connection.
  6. Enter the following details to establish your Amazon Redshift connection:
    1. Cluster Identifier: Copy the ProducerClusterName from the CloudFormation nested stack outputs.
    2. You can reference the preceding screenshot of the nested stack outputs, where you will find the cluster identifier.
    3. Database name: Enter dev.
    4. Database user: Enter awsuser.
    5. Unload IAM Role ARN: Copy the RedshiftDataSharingRoleName from the nested stack outputs.
    6. Connection Name: Enter MyRedshiftCluster.
    7. Choose Add connection.

  7. After the connection is created, expand the public schema, drag the loan_cust table into the editor, and choose Create dataset.
  8. Choose the redshift_loandata dataset and choose Create a data flow.
  9. Enter redshift_flow for the name and choose Create.
  10. After the flow is created, choose Chat for data prep.
  11. In the text box, enter summarize my data and choose the run arrow.
  12. The output should look something like the following:
  13. Now you can use natural language to prep the dataset. Enter Drop ssn and filter for ages over 17 and choose the run arrow. You will see it was able to handle both steps. You can also view the PySpark code that it ran (a rough sketch of equivalent PySpark follows this list). To add these steps as dataset transforms, choose Add to steps.
  14. Rename the step to drop ssn and filter age > 17, choose Update, and then choose Create model.
  15. Export the data and create the model: enter loan_data_forecast_dataset for the Dataset name, enter loan_data_forecast for Model name, select Predictive analysis for Problem type, select loan_status for Target column, and choose Export and create model.
  16. Verify that the correct Target column and Model type are selected, and choose Quick build.
  17. Now the model is being created. It usually takes 14–20 minutes, depending on the size of your dataset.
  18. After the model has completed training, you will be routed to the Analyze tab. There, you can see the average prediction accuracy and the column impact on prediction outcome. Note that your numbers might differ from the ones you see in the following figure, because of the stochastic nature of the ML process.
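For reference, the following is a rough sketch of the kind of PySpark transform that Chat for data prep generates for the drop and filter steps above. The S3 path is a placeholder, and the code produced in your session will differ.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("loan_data_prep").getOrCreate()

# Placeholder path: load the loan dataset pulled in from the Redshift connection
df = spark.read.parquet("s3://your-bucket/redshift_loandata/")

# Drop the ssn column and keep only customers older than 17
df = df.drop("ssn").filter(df.age > 17)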

Use the model to make predictions

  1. Now let’s use the model to make predictions for the future status of loans. Choose Predict.
  2. Under Choose the prediction type, select Batch prediction, then select Manual.
  3. Select loan_data_forecast_dataset from the dataset list, and choose Generate predictions.
  4. You’ll see the following after the batch prediction is complete. Choose the breadcrumb menu next to the Ready status and choose Preview to view the results.
  5. You can now view the predictions and download them as a CSV file.
  6. You can also generate single predictions for one row of data at a time. Under Choose the prediction type, select Single Prediction, change the values for any of the input fields that you’d like, and choose Update.

Analyze the predictions

We will now show you how to use QuickSight to visualize the prediction data from SageMaker Canvas to gain further insights from your data. SageMaker Canvas has direct integration with QuickSight, which is a cloud-powered business analytics service that helps employees within an organization build visualizations, perform ad hoc analysis, and quickly get business insights from their data, anytime, on any device.

  1. With the preview page up, choose Send to Amazon QuickSight.
  2. Enter a QuickSight user name you want to share the results to.
  3. Choose Send and you should see confirmation saying the results were sent successfully.
  4. Now, you can create a QuickSight dashboard for predictions.
    1. Go to the QuickSight console by entering QuickSight in your console services search bar and choose QuickSight.
    2. Under Datasets, select the SageMaker Canvas dataset that was just created.
    3. Choose Edit Dataset.
    4. Under the State field, change the data type to State.
    5. Choose Create with Interactive sheet selected.
    6. Under visual types, choose the Filled map.
    7. Select the State and Probability fields.
    8. Under Field wells, choose Probability and change the Aggregate to Average and Show as to Percent.
    9. Choose Filter and add a filter for loan_status to include fully paid loans only. Choose Apply.
    10. At the top right in the blue banner, choose Share and Publish Dashboard.
    11. We use the name Average probability for fully paid loan by state, but feel free to use your own.
    12. Choose Publish dashboard and you’re done. You can now share this dashboard and its predictions with other analysts and consumers of this data.

Clean up

Use the following steps to avoid any extra cost to your account:

  1. Sign out of SageMaker Canvas.
  2. In the AWS console, delete the CloudFormation stack you launched earlier in the post.

Conclusion

We believe integrating your cloud data warehouse (Amazon Redshift) with SageMaker Canvas opens the door to producing many more robust ML solutions for your business, faster, without needing to move data, and with no ML experience required.

You now have business analysts producing valuable business insights, while letting data scientists and ML engineers help refine, tune, and extend models as needed. SageMaker Canvas integration with Amazon Redshift provides a unified environment for building and deploying machine learning models, allowing you to focus on creating value with your data rather than focusing on the technical details of building data pipelines or ML algorithms.

Additional reading:

  1. SageMaker Canvas Workshop
  2. re:Invent 2022 – SageMaker Canvas
  3. Hands-On Course for Business Analysts – Practical Decision Making using No-Code ML on AWS

About the Authors

Suresh Patnam is a Principal Sales Specialist for AI/ML and Generative AI at AWS. He is passionate about helping businesses of all sizes transform into fast-moving digital organizations focusing on data, AI/ML, and generative AI.

Sohaib Katariwala is a Sr. Specialist Solutions Architect at AWS focused on Amazon OpenSearch Service. His interests are in all things data and analytics. More specifically he loves to help customers use AI in their data strategy to solve modern day challenges.

Michael Hamilton is an Analytics & AI Specialist Solutions Architect at AWS. He enjoys all things data related and helping customers build solutions for their complex use cases.

Nabil Ezzarhouni is an AI/ML and Generative AI Solutions Architect at AWS. He is based in Austin, TX, and is passionate about Cloud, AI/ML technologies, and Product Management. When he is not working, he spends time with his family, looking for the best taco in Texas. Because… why not?

Read More

Create a generative AI-based application builder assistant using Amazon Bedrock Agents

Create a generative AI-based application builder assistant using Amazon Bedrock Agents

In this post, we set up an agent using Amazon Bedrock Agents to act as a software application builder assistant.

Agentic workflows are a fresh new perspective on building dynamic and complex business use case–based workflows with the help of large language models (LLMs) as their reasoning engine, or brain. These agentic workflows decompose natural language query–based tasks into multiple actionable steps with iterative feedback loops and self-reflection to produce the final result using tools and APIs.

Amazon Bedrock Agents helps you accelerate generative AI application development by orchestrating multistep tasks. Amazon Bedrock Agents uses the reasoning capability of foundation models (FMs) to break down user-requested tasks into multiple steps. They use the developer-provided instruction to create an orchestration plan and then carry out the plan by invoking company APIs and accessing knowledge bases using Retrieval Augmented Generation (RAG) to provide a final response to the end user. This offers tremendous use case flexibility, enables dynamic workflows, and reduces development cost. Amazon Bedrock Agents is instrumental in customizing and tailoring apps to help meet specific project requirements while protecting private data and securing applications. These agents work with AWS managed infrastructure capabilities and Amazon Bedrock, reducing infrastructure management overhead. Additionally, agents streamline workflows and automate repetitive tasks. With the power of AI automation, you can boost productivity and reduce cost.

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Solution overview

Typically, a three-tier software application has a UI tier, a middle tier (the backend) for business APIs, and a database tier. The generative AI–based application builder assistant from this post will help you accomplish tasks through all three tiers. It can generate and explain code snippets for UI and backend tiers in the language of your choice to improve developer productivity and facilitate rapid development of use cases. The agent can recommend software and architecture design best practices using the AWS Well-Architected Framework for the overall system design.

For the database tier, the agent can generate SQL queries from natural language questions using a database schema DDL (data definition language for SQL) and execute them against a database instance.

We use Amazon Bedrock Agents with two knowledge bases for this assistant. Amazon Bedrock Knowledge Bases inherently uses the Retrieval Augmented Generation (RAG) technique. A typical RAG implementation consists of two parts (a rough code sketch of the second part follows this list):

  • A data pipeline that ingests data from documents typically stored in Amazon Simple Storage Service (Amazon S3) into a knowledge base, namely a vector database such as Amazon OpenSearch Serverless, so that it’s available for lookup when a question is received
  • An application that receives a question from the user, looks up the knowledge base for relevant pieces of information (context), creates a prompt that includes the question and the context, and provides it to an LLM for generating a response
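As a rough illustration of the second part, the following sketch queries an Amazon Bedrock knowledge base with the RetrieveAndGenerate API through boto3. The knowledge base ID is a placeholder and the model ARN shown is an assumption; Amazon Bedrock Knowledge Bases handles the retrieval and prompt construction for you.

import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.retrieve_and_generate(
    input={"text": "What are some S3 best practices?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "YOUR_KB_ID",  # placeholder
            # Assumed foundation model ARN for Anthropic's Claude 3 Sonnet
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        },
    },
)

print(response["output"]["text"])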

The following diagram illustrates how our application builder assistant acts as a coding assistant, recommends AWS design best practices, and aids in SQL code generation.

The architecture diagram for this notebook demonstrates the conditional workflow for LLMs and shows the three workflows possible with the Application Builder Assistant: 1) text-to-SQL, which generates SQL statements from natural language and executes them against a local database; 2) a web-scraped knowledge base on the AWS Well-Architected Framework that users can ask questions about; and 3) writing and explaining code with the Claude LLM. A user can ask any of these three types of questions, making it an application builder assistant.

Based on the three workflows in the preceding figure, let’s explore the type of task you need for different use cases:

  • Use case 1 – If you want to write and validate a SQL query against a database, use the existing DDL schemas set up as knowledge base 1 to come up with the SQL query. The following are sample user queries:
    • What are the total sales amounts by year?
    • What are the top five most expensive products?
    • What is the total revenue for each employee?
  • Use case 2 – If you want recommendations on design best practices, look up the AWS Well-Architected Framework knowledge base (knowledge base 2). The following are sample user queries:
    • How can I design secure VPCs?
    • What are some S3 best practices?
  • Use case 3 – You might want to author some code, such as helper functions like validate email, or use existing code. In this case, use prompt engineering techniques to call the default agent LLM and generate the email validation code. The following are sample user queries:
    • Write a Python function to validate email address syntax.
    • Explain the following code in lucid, natural language to me. $code_to_explain (this variable is populated using code contents from any code file of your choice. More details can be found in the notebook).

Prerequisites

To run this solution in your AWS account, complete the following prerequisites:

  1. Clone the GitHub repository and follow the steps explained in the README.
  2. Set up an Amazon SageMaker notebook on an ml.t3.medium Amazon Elastic Compute Cloud (Amazon EC2) instance. For this post, we have provided an AWS CloudFormation template, available in the GitHub repository. The CloudFormation template also provides the required AWS Identity and Access Management (IAM) access to set up the vector database, SageMaker resources, and AWS Lambda.
  3. Acquire access to models hosted on Amazon Bedrock. Choose Manage model access in the navigation pane on the Amazon Bedrock console and choose from the list of available options. We use Anthropic’s Claude v3 (Sonnet) on Amazon Bedrock and Amazon Titan Embeddings Text v2 on Amazon Bedrock for this post.

Implement the solution

In the GitHub repository notebook, we cover the following learning objectives:

  1. Choose the underlying FM for your agent.
  2. Write a clear and concise agent instruction to use one of the two knowledge bases and base agent LLM. (Examples given later in the post.)
  3. Create and associate an action group with an API schema and a Lambda function.
  4. Create, associate, and ingest data into the two knowledge bases.
  5. Create, invoke, test, and deploy the agent (a minimal invocation sketch follows this list).
  6. Generate UI and backend code with LLMs.
  7. Recommend AWS best practices for system design with the AWS Well-Architected Framework guidelines.
  8. Generate, run, and validate the SQL from natural language understanding using LLMs, few-shot examples, and a database schema as a knowledge base.
  9. Clean up agent resources and their dependencies using a script.
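Once the agent is created and deployed, invoking it is a single API call to the Amazon Bedrock Agents runtime. The following is a minimal sketch using boto3; the agent ID and alias ID are placeholders.

import uuid

import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = client.invoke_agent(
    agentId="YOUR_AGENT_ID",       # placeholder
    agentAliasId="YOUR_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),
    inputText="What are the top 5 most expensive products?",
)

# The completion is returned as an event stream of chunks
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)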

Agent instructions and user prompts

The application builder assistant agent instruction looks like the following.

Hello, I am AI Application Builder Assistant. I am capable of answering the following three categories of questions:

- Best practices for design of software applications using the content inside the AWS best practices 
and AWS well-architected framework Knowledge Base. I help customers understand AWS best practices for 
building applications with AWS services.

- Generate a valid SQLite query for the customer using the database schema inside the Northwind DB knowledge base 
and then execute the query that answers the question based on the [Northwind] dataset. If the Northwind DB Knowledge Base search 
function result did not contain enough information to construct a full query, try to construct a query to the best of your ability 
based on the Northwind database schema.

- Generate and Explain code for the customer following standard programming language syntax.

Feel free to ask any questions along those lines!

Each user question to the agent by default includes the following system prompt.

Note: The following system prompt remains the same for each agent invocation, only the {user_question_to_agent} gets replaced with user query.

Question: {user_question_to_agent} 

Given an input question, you will use the existing Knowledge Bases on AWS 
Well-Architected Framework and Northwind DB Knowledge Base.

- For building and designing software applications, you will use the existing Knowledge Base on AWS well-architected framework 
to generate a response of the most relevant design principles and links to any documents. This Knowledge Base response can then be passed 
to the functions available to answer the user question. The final response is the direct answer to the user question. 
It has to be in markdown format highlighting any text of interest. Remove any backticks in the final response.

- To generate code for a given user question,  you can use the default Large Language model to come up with the response. 
This response can be in code markdown format. You can optionally provide an explanation for the code.

- To explain code for a given user question, you can use the default Large Language model to come up with the response.

- For SQL query generation you will ONLY use the existing database schemas in the Northwind DB Knowledge Base to create a syntactically 
correct SQLite query and then you will EXECUTE the SQL Query using the functions and API provided to answer the question.

Make sure to use ONLY existing columns and tables based on the Northwind DB database schema. Make sure to wrap table names with 
square brackets. Do not use underscore for table names unless that is part of the database schema. Make sure to add a semicolon after 
the end of the SQL statement generated. Remove any backticks and any html tags like <table><th><tr> in the 
final response.

Here are a few examples of questions I can help answer by generating and then executing a SQLite query:

- What are the total sales amounts by year?
- What are the top 5 most expensive products?
- What is the total revenue for each employee?

Cost considerations

The following are important cost considerations:

  • This current implementation has no separate charges for building resources using Amazon Bedrock Knowledge Bases or Amazon Bedrock Agents.
  • You will incur charges for embedding model and text model invocation on Amazon Bedrock. For more details, refer to Amazon Bedrock pricing.
  • You will incur charges for Amazon S3 and vector DB usage. For more details, see Amazon S3 pricing and Amazon OpenSearch Service Pricing, respectively.

Clean up

To avoid incurring unnecessary costs, the implementation automatically cleans up resources after an entire run of the notebook. You can check the notebook instructions in the Clean-up Resources section on how to avoid the automatic cleanup and experiment with different prompts.

The order of resource cleanup is as follows:

  1. Disable the action group.
  2. Delete the action group.
  3. Delete the alias.
  4. Delete the agent.
  5. Delete the Lambda function.
  6. Empty the S3 bucket.
  7. Delete the S3 bucket.
  8. Delete IAM roles and policies.
  9. Delete the vector DB collection policies.
  10. Delete the knowledge bases.

Conclusion

This post demonstrated how to query and integrate workflows with Amazon Bedrock Agents using multiple knowledge bases to create a generative AI–based software application builder assistant that can author and explain code, generate SQL using DDL schemas, and recommend design suggestions using the AWS Well-Architected Framework.

Beyond code generation and explanation of code as demonstrated in this post, to run and troubleshoot application code in a secure test environment, you can refer to Code Interpreter setup with Amazon Bedrock Agents.

For more information on creating agents to orchestrate workflows, see Amazon Bedrock Agents.

Acknowledgements

The author thanks all the reviewers for their valuable feedback.


About the Author

Shayan Ray is an Applied Scientist at Amazon Web Services. His area of research is all things natural language (like NLP, NLU, NLG). His work has been focused on conversational AI, task-oriented dialogue systems and LLM-based agents. His research publications are on natural language processing, personalization, and reinforcement learning.

Read More

Transitioning from Amazon Rekognition people pathing: Exploring other alternatives

Transitioning from Amazon Rekognition people pathing: Exploring other alternatives

Amazon Rekognition people pathing is a machine learning (ML)–based capability of Amazon Rekognition Video that users can use to understand where, when, and how each person is moving in a video. This capability can be used for multiple use cases, such as for understanding:

  1. Retail analytics – Customer flow in the store and identifying high-traffic areas
  2. Sports analytics – Players’ movements across the field or court
  3. Industrial safety – Workers’ movement in work environments to promote compliance with safety protocols

After careful consideration, we made the decision to discontinue Rekognition people pathing on October 31, 2025. New customers will not be able to access the capability effective October 24, 2024, but existing customers will be able to use the capability as normal until October 31, 2025.

This post discusses an alternative solution to Rekognition people pathing and how you can implement this solution in your applications.

Alternatives to Rekognition people pathing

One alternative to Amazon Rekognition people pathing combines the open source ML model YOLOv9, which is used for object detection, and the open source ByteTrack algorithm, which is used for multi-object tracking.

Overview of YOLOv9 and ByteTrack

YOLOv9 is the latest in the YOLO object detection model series. It uses a specialized architecture called Generalized Efficient Layer Aggregation Network (GELAN) to analyze images efficiently. The model divides an image into a grid, quickly identifying and locating objects in each section in a single pass. It then refines its results using a technique called programmable gradient information (PGI) to improve accuracy, especially for easily missed objects. This combination of speed and accuracy makes YOLOv9 ideal for applications that need fast and reliable object detection.

ByteTrack is an algorithm for tracking multiple moving objects in videos, such as people walking through a store. What makes it special is how it handles objects that are both straightforward and difficult to detect. Even when someone is partially hidden or in a crowd, ByteTrack can often still follow them. It’s designed to be fast and accurate, working well even when there are many people to track simultaneously.

When you combine YOLOv9 and ByteTrack for people pathing, you can review people’s movements across video frames. YOLOv9 provides person detections in each video frame. ByteTrack takes these detections and associates them across frames, creating consistent tracks for each individual, showing how people move through the video over time.

Example code

The following code example is a Python script that can be used as an AWS Lambda function or as part of your processing pipeline. You can also deploy YOLOv9 and ByteTrack for inference using Amazon SageMaker. SageMaker provides several options for model deployment, such as real-time inference, asynchronous inference, serverless inference, and batch inference. You can choose the suitable option based on your business requirements.

Here’s a high-level breakdown of how the Python script is executed:

  1. Load the YOLOv9 model – This model is used for detecting objects in each frame.
  2. Start the ByteTrack tracker – This tracker assigns unique IDs to objects and tracks them across frames.
  3. Iterate through the video frame by frame – For each frame, the script detects objects, tracks their paths, and draws bounding boxes and labels around them. The results are saved to a JSON file.
  4. Output the processed video – The final video is saved with all the detected and tracked objects, annotated on each frame.
# install and import necessary packages
!pip install opencv-python ultralytics
!pip install imageio[ffmpeg]

import cv2
import imageio
import json
from ultralytics import YOLO
from pathlib import Path

# Load an official Segment model from YOLOv9
model = YOLO('yolov9e-seg.pt') 

# define the function that changes YOLOV9 output to Person pathing API output format
def change_format(results, ts, person_only):
    #set person_only to True if you only want to track persons, not other objects.
    object_json = []

    for i, obj in enumerate(results.boxes):
        x_center, y_center, width, height = obj.xywhn[0]
        # Calculate Left and Top from center
        left = x_center - (width / 2)
        top = y_center - (height / 2)
        obj_name = results.names[int(obj.cls)]
        # Create dictionary for each object detected
        if (person_only and obj_name == "person") or not person_only:
            obj_data = {
                obj_name: {
                    "BoundingBox": {
                        "Height": float(height),
                        "Left": float(left),
                        "Top": float(top),
                        "Width": float(width)
                    },
                    "Index": int(obj.id)  # Object index assigned by the tracker
                },
                "Timestamp": ts  # timestamp of the detected object
            }
            # Append inside the if block so only matching detections are recorded
            object_json.append(obj_data)

    return object_json

#  Function for person tracking with json outputs and optional videos with annotation 
def person_tracking(video_path, person_only=True, save_video=True):
    # open the video file
    reader = imageio.get_reader(video_path)
    frames = []
    i = 0
    all_object_data = []
    file_name = Path(video_path).stem

    for frame in reader:
        # Convert frame from RGB (imageio's default) to BGR (OpenCV's default)
        frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        try:
            # Run YOLOv9 tracking on the frame, persisting tracks between frames with bytetrack
            conf = 0.2
            iou = 0.5
            results = model.track(frame_bgr, persist=True, conf=conf, iou=iou, show=False, tracker="bytetrack.yaml")

            # change detection results to Person pathing API output formats.
            object_json = change_format(results[0], i, person_only)
            all_object_data.append(object_json)

            # Append the annotated frame to the frames list (for mp4 creation)
            annotated_frame = results[0].plot()
            frames.append(annotated_frame)
            i += 1

        except Exception as e:
            print(f"Error processing frame: {e}")
            break

    # save the object tracking array to json file
    with open(f'{file_name}_output.json', 'w') as file:
        json.dump(all_object_data, file, indent=4)
   
    # save annotated video
    if save_video is True:
        # Create a VideoWriter object of mp4
        fourcc = cv2.VideoWriter_fourcc(*'mp4v')
        output_path = f"{file_name}_annotated.mp4"
        fps = reader.get_meta_data()['fps']
        frame_size = reader.get_meta_data()['size']
        video_writer = cv2.VideoWriter(output_path, fourcc, fps, frame_size)

        # Write each frame to the video and release the video writer object when done
        for frame in frames:
            video_writer.write(frame)
        video_writer.release()
        print(f"Video saved to {output_path}")

    return all_object_data
    
        
#main function to call 
video_path = './MOT17-09-FRCNN-raw.webm'
all_object_data = person_tracking(video_path, person_only=True, save_video=True)

Validation

We use the following video to showcase this integration. The video shows a football practice session, where the quarterback is starting a play.

The following table shows an example of the content from the JSON file with person tracking outputs by timestamp.

Timestamp  PersonIndex  Bounding box
                        Height    Left      Top       Width
0          42           0.51017   0.67687   0.44032   0.17873
0          63           0.41175   0.05670   0.3148    0.07048
1          42           0.49158   0.69260   0.44224   0.16388
1          65           0.35100   0.06183   0.57447   0.06801
4          42           0.49799   0.70451   0.428963  0.13996
4          63           0.33107   0.05155   0.59550   0.09304
4          65           0.78138   0.49435   0.20948   0.24886
7          42           0.42591   0.65892   0.44306   0.0951
7          63           0.28395   0.06604   0.58020   0.13908
7          65           0.68804   0.43296   0.30451   0.18394
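The bounding box values are ratios of the overall frame dimensions, mirroring the people pathing API format. The following hypothetical helper (not part of the script above) converts them to pixel coordinates for a given frame size.

def to_pixels(box: dict, frame_width: int, frame_height: int) -> dict:
    # Convert normalized bounding box ratios to pixel coordinates
    return {
        "left": int(box["Left"] * frame_width),
        "top": int(box["Top"] * frame_height),
        "width": int(box["Width"] * frame_width),
        "height": int(box["Height"] * frame_height),
    }

# Example: the first row of the table on a 1920x1080 frame
print(to_pixels({"Height": 0.51017, "Left": 0.67687, "Top": 0.44032, "Width": 0.17873}, 1920, 1080))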

The following video shows the results with the people tracking output.

Other open source solutions for people pathing

Although YOLOv9 and ByteTrack offer a powerful combination for people pathing, several other open source alternatives are worth considering:

  1. DeepSORT – A popular algorithm that combines deep learning features with traditional tracking methods
  2. FairMOT – Integrates object detection and reidentification in a single network, offering users the ability to track objects in crowded scenes

These solutions can be effectively deployed using Amazon SageMaker for inference.

Conclusion

In this post, we have outlined how you can test and implement YOLOv9 and ByteTrack as an alternative to Rekognition people pathing. Combined with AWS offerings such as AWS Lambda and Amazon SageMaker, you can implement such open source tools for your applications.


About the Authors

Fangzhou Cheng is a Senior Applied Scientist at AWS. He builds science solutions for Amazon Rekognition and Amazon Monitron to provide customers with state-of-the-art models. His areas of focus include generative AI, computer vision, and time-series data analysis.

Marcel Pividal is a Senior AI Services Solutions Architect in the Worldwide Specialist Organization, bringing over 22 years of expertise in transforming complex business challenges into innovative technological solutions. As a thought leader in generative AI implementation, he specializes in developing secure, compliant AI architectures for enterprise-scale deployments across multiple industries.

Read More

Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock

Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock

This post is cowritten with Greg Benson, Aaron Kesler and David Dellsperger from SnapLogic.

The landscape of enterprise application development is undergoing a seismic shift with the advent of generative AI. SnapLogic, a leader in generative integration and automation, has introduced the industry’s first low-code generative AI development platform, Agent Creator, designed to democratize AI capabilities across all organizational levels. Agent Creator is a no-code visual tool that empowers business users and application developers to create sophisticated large language model (LLM) powered applications and agents without programming expertise.

This intuitive platform enables the rapid development of AI-powered solutions such as conversational interfaces, document summarization tools, and content generation apps through a drag-and-drop interface. By using SnapLogic’s library of more than 800 pre-built connectors and data transformation capabilities, users can seamlessly integrate various data sources and AI models, dramatically accelerating the development process compared to traditional coding methods. This innovative platform empowers employees, regardless of their coding skills, to create generative AI processes and applications through a low-code visual designer.

Pre-built templates tailored to various use cases are included, significantly enhancing both employee and customer experiences. Agent Creator is a versatile extension to the SnapLogic platform that is compatible with modern databases, APIs, and even legacy mainframe systems, fostering seamless integration across various data environments. Its low-code interface drastically reduces the time needed to develop generative AI applications.

Agent Creator

Creating enterprise-grade, LLM-powered applications and integrations that meet security, governance, and compliance requirements has traditionally demanded the expertise of programmers and data scientists. Not anymore! SnapLogic’s Agent Creator revolutionizes this landscape by empowering everyone to create generative AI–powered applications and automations without any coding. Enterprises can use SnapLogic’s Agent Creator to store their knowledge in vector databases and create powerful generative AI solutions that augment LLMs with relevant enterprise-specific knowledge, a framework also known as Retrieval Augmented Generation (RAG). This capability accelerates business operations by providing a toolkit for users to create departmental chat assistants, add LLM-powered search to portals, automate processes involving documents, and much more. Additionally, this platform offers:

  • LLM-powered processes and apps in minutes – Agent Creator empowers enterprise users to create custom LLM-powered workflows without coding. Whether your HR department needs a Q&A workflow for employee benefits, your legal team needs a contract redlining solution, or your analysts need a research report analysis engine, Agent Creator provides the tools and flexibility to build it all.
  • Automate intelligent document processing (IDP) – Agent Creator can extract valuable data from invoices, purchase orders, resumes, insurance claims, loan applications, and other unstructured sources automatically. The IDP solution uses the power of LLMs to automate tedious document-centric processes, freeing up your team for higher-value work.
  • Boost productivity – Empowers knowledge workers with the ability to automatically and reliably summarize reports and articles, quickly find answers, and extract valuable insights from unstructured data. Agent Creator’s low-code approach allows anyone to use the power of AI to automate tedious portions of their work, regardless of their technical expertise.

The following demo shows Agent Creator in action.

To deliver these robust features, Agent Creator uses Amazon Bedrock, a foundational platform that provides managed infrastructure to use state-of-the-art foundation models (FMs). This eliminates the complexities of setting up and maintaining the underlying hardware and software so SnapLogic can focus on innovation and application development rather than infrastructure management.

What is Amazon Bedrock

Amazon Bedrock is a fully managed service that provides access to high-performing FMs from leading AI startups and Amazon through a unified API, making it easier for enterprises to develop generative AI applications. Users can choose from a wide range of FMs to find the best fit for their use case. With Amazon Bedrock, organizations can experiment with and evaluate top models, customize them with their data using techniques like fine-tuning and RAG, and build intelligent agents that use enterprise systems and data sources. The serverless experience offered by Amazon Bedrock enables quick deployment, private customization, and secure integration of these models into applications without the need to manage underlying infrastructure. Key features include experimenting with prompts, augmenting response generation with data sources, creating reasoning agents, adapting models to specific tasks, and improving application efficiency with provisioned throughput, providing a robust and scalable solution for enterprise AI needs. The robust capabilities and unified API of Amazon Bedrock make it an ideal foundation for developing enterprise-grade AI applications.

By using the Amazon Bedrock high-performing FMs, secure customization options, and seamless integration features, SnapLogic’s Agent Creator maximizes its potential to deliver powerful, low-code AI solutions. This integration not only enhances the Agent Creator’s ability to create and deploy sophisticated AI models quickly but also makes them scalable, secure, and efficient.

Why Agent Creator uses Amazon Bedrock

SnapLogic’s Agent Creator uses Amazon Bedrock to deliver a powerful, low-code generative AI development platform that meets the unique needs of its enterprise customers. By integrating Amazon Bedrock, Agent Creator benefits from several key advantages:

  • Access to top-tier FMs – Amazon Bedrock provides access to high-performing FMs from leading AI providers through a unified API. Agent Creator offers enterprises the ability to experiment with and deploy sophisticated AI models without the complexity of managing the underlying infrastructure.
  • Seamless customization and integration – The serverless architecture of Amazon Bedrock frees up the time of Agent Creator developers so they can focus on innovation and rapid development. It facilitates the seamless customization of FMs with enterprise-specific data using advanced techniques like prompt engineering and RAG so outputs are relevant and accurate.
  • Enhanced security and compliance – Security and compliance are paramount for enterprise AI applications. SnapLogic uses Amazon Bedrock to build its platform, capitalizing on the proximity to data already stored in Amazon Web Services (AWS). Because of this strategic decision, SnapLogic can offer enhanced security and compliance measures while significantly reducing latency for its customers. By processing data closer to where it resides, SnapLogic promotes faster, more efficient operations that meet stringent regulatory requirements, ultimately delivering a superior experience for businesses relying on their data integration and management solutions. Because Amazon Bedrock offers robust features to meet these requirements, Agent Creator adheres to stringent security protocols and governance standards, giving enterprises confidence in their generative AI deployments.
  • Accelerated development and deployment – With Amazon Bedrock, Agent Creator empowers users to quickly experiment with various FMs, accelerating the development cycle. The managed infrastructure streamlines the testing and deployment process, enabling rapid iteration and implementation of intelligent applications.
  • Scalability and performance – Generative AI applications built using Agent Creator are scalable and performant because of Amazon Bedrock. It can handle large volumes of data and interactions, which is crucial for enterprises requiring robust applications. Provisioned throughput options enable efficient model inference, promoting smooth operation even under heavy usage.

By harnessing the capabilities of Amazon Bedrock, SnapLogic’s Agent Creator delivers a comprehensive, low-code solution that allows enterprises to capitalize on the transformative potential of generative AI. This integration simplifies the development process while enhancing the capabilities, security, and scalability of AI applications, driving significant business value and innovation.

Solution approach

Agent Creator integrates Amazon Bedrock, Anthropic’s Claude, and Amazon OpenSearch Service vector databases to deliver a comprehensive and powerful low-code visual interface for building generative AI solutions. At its core, Amazon Bedrock provides the foundational infrastructure for robust performance, security, and scalability for deploying machine learning (ML) models. This foundational layer is critical for managing the complexities of AI model deployment, and therefore SnapLogic can offer a seamless user experience. This integrated architecture not only supports advanced AI functionalities but also makes it easy to use. By abstracting the complexities of generative AI development and providing a user-friendly visual interface, Agent Creator offers enterprises the ability to use powerful AWS generative AI services without needing deep technical knowledge.

Control plane and data plane implementation

SnapLogic’s Agent Creator platform follows a decoupled architecture, separating the control plane and data plane for enhanced security and scalability.

Control plane

The control plane is responsible for managing and orchestrating the various components of the platform. The control plane is hosted and managed by SnapLogic, meaning that customers don’t have to worry about the underlying infrastructure and can focus on their core business requirements. SnapLogic’s control plane comprises several components that manage and orchestrate the platform’s operations. Here are some key components:

  • Designer – A visual interface where users can design, build, and configure integrations and data flows
  • Manager – A centralized management console for monitoring, scheduling, and controlling the execution of integrations and data pipelines
  • Monitor – A comprehensive reporting and analytics dashboard that provides insights into the performance, usage, and health of the platform
  • API management (APIM) – A component that manages and secures the exposure of integrations and data services as APIs, providing seamless integration with external applications and systems.

By separating the control plane from the data plane, SnapLogic offers a scalable and secure architecture so customers can use generative AI capabilities while maintaining control over their data within their own virtual private cloud (VPC) environment.

Data plane

The data plane is where the actual data processing and integration take place. To address customers’ requirements about data privacy and sovereignty, SnapLogic deploys the data plane within the customer’s VPC on AWS. This approach means that customer data never leaves their controlled environment, providing an extra layer of security and compliance. By using Amazon Bedrock, SnapLogic can invoke generative AI models directly from the customer’s VPC, enabling real-time processing and analysis of customer data without needing to move it outside the secure environment. The integration with Amazon Bedrock is achieved through the Amazon Bedrock InvokeModel APIs. SnapLogic’s data plane, running within the customer’s VPC, calls these APIs to invoke the desired generative AI models hosted on Amazon Bedrock.
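For illustration only, and not SnapLogic’s implementation, the following sketch shows what such an InvokeModel call to Anthropic’s Claude on Amazon Bedrock looks like with boto3; the model ID and prompt are assumptions.

import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Explain Retrieval Augmented Generation in two sentences."}]}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    body=json.dumps(body),
)

result = json.loads(response["body"].read())
print(result["content"][0]["text"])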

Functional components

The solution comprises the following functional components:

  • Vector Database Snap Pack – Manages the reading and writing of data to vector databases. This pack is crucial for maintaining the integrity and accessibility of the enterprise-specific knowledge stored in the OpenSearch vector database.
  • Chunker Snap – Segments large texts into manageable pieces. This functionality is important for processing large documents so the AI can handle and analyze text effectively.
  • Embedding Snap – Converts text segments into vectors. This step is vital for integrating enterprise-specific knowledge into AI prompts, enhancing the relevance and accuracy of AI responses.
  • LLM Snap Pack – Facilitates interactions with Claude and other language models. The AI can generate responses and perform tasks based on the processed and retrieved data.
  • Prompt Generator Snap – Enriches queries with the most relevant data so the AI prompts are contextually accurate and tailored to the specific needs of the enterprise.
  • Pre-Built Pipeline Patterns for indexing and retrieving – To streamline the deployment of intelligent applications, Agent Creator includes pre-built pipeline patterns. These patterns simplify common tasks such as indexing, retrieving data, and processing documents so AI-driven solutions can be deployed without the need for deep technical expertise.
  • Frontend Starter Kit – To simplify the deployment of user-facing applications, Agent Creator includes a Frontend Starter Kit. This kit provides pre-built components and templates for creating intuitive and responsive interfaces. Enterprises can quickly develop and deploy chat assistant UI applications, and applications not only function well but also provide a seamless and engaging user experience.

Data flow and control flow

In the architecture of Agent Creator, the interaction between Agent Creator platform, Amazon Bedrock, OpenSearch Service, and Anthropic’s Claude involves a sophisticated and efficient management of data flow and control flow. By effectively managing the data and control flows between Agent Creator and AWS services, SnapLogic provides a robust, secure, and efficient platform for developing and deploying enterprise-grade solutions. This architecture supports advanced integration functionalities and offers a seamless, user-friendly experience, making it a valuable tool for enterprise customers.

Data flow

Here is an example of this data flow for an Agent Creator pipeline that involves data ingestion, preprocessing, and vectorization using Chunker and Embedding Snaps. The resulting vectors are stored in OpenSearch Service databases for efficient retrieval and querying. When a query is initiated, relevant vectors are retrieved to augment the query with context-specific data, and the enriched query is processed by the LLM Snap Pack to generate responses.

The data flow follows these steps (a rough code sketch of the vectorization and retrieval steps follows the list):

  1. Data ingestion and preprocessing – Enterprise data is ingested from various sources such as documents, databases, and APIs. Chunker Snap processes large texts and documents by segmenting them into smaller, manageable chunks to make them compatible with downstream processing steps.
  2. Vectorization – The text chunks are passed to the Embedding Snap, which converts them into vector representations using embedding models. These vectors are numerical representations that capture the semantic meaning of the text. The resulting vectors are stored in OpenSearch Service vector databases, which manage and index these vectors for efficient retrieval and querying.
  3. Data retrieval and augmentation – When a query is initiated, the Vector Database Snap Pack retrieves relevant vectors from OpenSearch Service using similarity search algorithms to match the query with stored vectors. The retrieved vectors augment the initial query with context-specific enterprise data, enhancing its relevance.
  4. Prompt and response generation – The Prompt Generator Snap refines the final query so it’s well-formed and optimized for the language model. The language model generates a response, which is then postprocessed, if necessary, before delivery.
  5. Interaction with LLMs – The augmented query is forwarded to the LLM Snap Pack, which interacts with Anthropic’s Claude and other integrated language models. This interaction generates responses based on the enriched query.
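The following is a rough sketch of the vectorization and retrieval steps, not SnapLogic’s Snap implementation, using the Amazon Titan Embeddings model on Amazon Bedrock and the opensearch-py client. The collection endpoint, index name, and model ID are assumptions, and a real OpenSearch Serverless collection also requires SigV4 authentication and an index with a knn_vector mapping.

import json

import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
client = OpenSearch(hosts=[{"host": "your-collection-endpoint", "port": 443}], use_ssl=True)  # placeholder endpoint

def embed(text: str) -> list:
    # Convert a text chunk into a vector with Amazon Titan Embeddings (assumed model ID)
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Vectorization: embed a chunk and store it in the vector index
chunk = "Quarterly revenue grew 12% year over year."
client.index(index="enterprise-knowledge", body={"text": chunk, "vector": embed(chunk)})

# Retrieval and augmentation: find the chunks most similar to a query with a k-NN search
query_vector = embed("How did revenue change?")
search_body = {"size": 3, "query": {"knn": {"vector": {"vector": query_vector, "k": 3}}}}
hits = client.search(index="enterprise-knowledge", body=search_body)["hits"]["hits"]
context = "\n".join(hit["_source"]["text"] for hit in hits)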

Control flow

The control flow in Agent Creator is orchestrated between the control plane and the data plane. The control plane hosts the user environment, stores configuration settings and user-created assets, and provides access to various components. The data plane executes pipelines, connecting to cloud-based or on-premises data endpoints, with the control plane orchestrating the workflow across interconnected snaps. Here is an example of this control flow for an Agent Creator pipeline.

The control flow follows these steps:

  1. Initiating requests – Users initiate requests using Agent Creator’s low-code visual interface, specifying tasks such as creating Q&A assistants or automating document processing. Pre-built UI components such as the Frontend Starter Kit capture user inputs and streamline the interaction process.
  2. Orchestrating pipelines – Agent Creator orchestrates workflows using interconnected snaps, each performing a specific function such as ingestion, chunking, vectorization, or querying. The architecture employs an event-driven model, where the completion of one snap triggers the next step in the workflow.
  3. Managing interactions with AWS services – Agent Creator communicates with AWS services, including Amazon Bedrock and OpenSearch Service, and Anthropic’s Claude in Amazon Bedrock, through secure API calls. The serverless infrastructure of Amazon Bedrock manages the execution of ML models, resulting in a scalable and reliable application.
  4. Observability – Robust mechanisms are in place for handling errors during data processing or model inference. Errors are logged and notifications are sent to system administrators for resolution. Continuous logging and monitoring provide transparency and facilitate troubleshooting. Logs are centrally stored and analyzed to maintain system integrity.
  5. Final output delivery – The generated AI responses are delivered to end user applications or interfaces, integrated into SnapLogic’s dashboards. User feedback is collected to continuously improve AI models and processing pipelines, enhancing overall system performance.

Use cases

You can use the SnapLogic Agent Creator for many different use cases. The next paragraphs illustrate just a few.

IDP on quarterly reports

A leading pharmaceutical data provider empowered their analysts by using Agent Creator and AutoIDP to automate data extraction on pharmaceutical drugs. By processing their portfolio of quarterly reports through LLMs, they could ask standardized questions to extract information that was previously gathered manually. This automation not only reduced errors but also saved significant time and resources, leading to a 35% reduction in costs and a centralized pool of reusable data assets, providing a single source of truth for their entire organization.

Automating market intelligence insights

A global telecommunications company used Agent Creator to process a multitude of RSS feeds, extracting only business-relevant information. This data was then integrated into Salesforce as a real-time feed of market insights. As the customer noted, “This automation allows us to filter and synthesize crucial data, delivering targeted, real-time insights to our sales teams, enhancing their productivity without the need for individual AI licenses.”

Agent Creator Amazon Bedrock roadmap

Development and improvement are ongoing for Agent Creator, with several enhancements released recently and more to come in the future.

Recent releases

Extended support for more Amazon Bedrock capabilities became available with the August 2024 release. Support was added for retrieving and generating against Amazon Bedrock and Amazon Bedrock Knowledge Bases through Snap orchestration, as well as for invoking Amazon Bedrock Agents. Continual enhancements for new models and additional authentication mechanisms have been released, including support for AWS Identity and Access Management (IAM) role authentication and cross-account IAM role authentication. All Agent Creator LLM Snaps have also been updated to support raw request payloads, adding the ability to specify entire conversations (for continued conversations) as well as prompts beyond just text.

Support for the Amazon Bedrock Converse API was released recently, enabling Agent Creator to work with models beyond Amazon Titan and Anthropic’s Claude. This release also adds multi-modal prompt capabilities, delivered through new Snaps that orchestrate the building of these more complex payloads.
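For readers unfamiliar with the underlying API, the following is a minimal, hypothetical boto3 sketch (independent of Agent Creator’s Snaps) of the kind of multi-turn, multi-modal payload the Amazon Bedrock Converse API accepts; the model ID, conversation text, and image file name are placeholders.

import boto3

bedrock = boto3.client("bedrock-runtime")

# Read a local image to include in a multi-modal turn (placeholder file name)
with open("chart.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # any Converse-compatible model
    messages=[
        {"role": "user", "content": [{"text": "Summarize last quarter's results."}]},
        {"role": "assistant", "content": [{"text": "Revenue grew 12% quarter over quarter."}]},
        {
            # continued conversation with a mixed text-and-image turn
            "role": "user",
            "content": [
                {"text": "How does that compare to this chart?"},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        },
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])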

Conclusion

SnapLogic has revolutionized enterprise AI with its Agent Creator, the industry’s first low-code generative AI development platform. By integrating advanced generative AI services such as Amazon Bedrock and OpenSearch Service vector databases and cutting edge LLMs such as Anthropic’s Claude, SnapLogic empowers enterprise users, from product to sales to marketing, to create sophisticated generative AI–driven applications without deep technical expertise. This platform reduces dependency on specialized programmers and accelerates innovation by streamlining the generative AI development process with pre-built pipeline patterns and a Frontend Starter Kit.

Agent Creator offers robust performance, security, and scalability so enterprises can use powerful generative AI tools for competitive advantage. By pioneering this comprehensive approach, SnapLogic not only addresses current enterprise needs but also positions organizations to harness Amazon Bedrock for future advancements in generative AI technology, driving significant business value and operational efficiency for our enterprise customers.

To use Agent Creator effectively, schedule a demo of SnapLogic’s Agent Creator to learn how it can address your specific use cases. Identify potential pilot projects, such as creating departmental Q&A assistants, automating document processing, or putting an LLM to work for you behind the scenes. Prepare to store your enterprise knowledge in vector databases, which Agent Creator can use to augment LLMs with your specific information through RAG. Begin with a small project, such as creating a departmental Q&A assistant, to demonstrate the value of Agent Creator and use this success to build momentum for larger initiatives. To learn more about how to make best use of Amazon Bedrock, refer to the Amazon Bedrock Documentation.


About the authors

Asheesh Goja is Principal Solutions Architect at AWS. Prior to AWS, Asheesh worked at prominent organizations such as Cisco and UPS, where he spearheaded initiatives to accelerate the adoption of several emerging technologies. His expertise spans ideation, co-design, incubation, and venture product development. Asheesh holds a wide portfolio of hardware and software patents, including a real-time C++ DSL, IoT hardware devices, Computer Vision and Edge AI prototypes. As an active contributor to the emerging fields of Generative AI and Edge AI, Asheesh shares his knowledge and insights through tech blogs and as a speaker at various industry conferences and forums.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high performance model inference on SageMaker.

Greg Benson is a Professor of Computer Science at the University of San Francisco and Chief Scientist at SnapLogic. He joined the USF Department of Computer Science in 1998 and has taught undergraduate and graduate courses including operating systems, computer architecture, programming languages, distributed systems, and introductory programming. Greg has published research in the areas of operating systems, parallel computing, and distributed systems. Since joining SnapLogic in 2010, Greg has helped design and implement several key platform features including cluster processing, big data processing, the cloud architecture, and machine learning. He currently is working on Generative AI for data integration.

Aaron Kesler is the Senior Product Manager for AI products and services at SnapLogic. He applies over ten years of product management expertise to pioneer AI/ML product development and evangelize services across the organization. He is the author of the upcoming book “What’s Your Problem?”, aimed at guiding new product managers through the product management career. His entrepreneurial journey began with his college startup, STAK, which was later acquired by Carvertise, with Aaron contributing significantly to their recognition as Tech Startup of the Year 2015 in Delaware. Beyond his professional pursuits, Aaron finds joy in golfing with his father, exploring new cultures and foods on his travels, and practicing the ukulele.

David Dellsperger is a Senior Staff Software Engineer and Technical Lead of the Agent Creator product at SnapLogic. David has worked as a software engineer specializing in machine learning and AI for over a decade, previously focusing on AI in healthcare and now focusing on the SnapLogic Agent Creator. David spends his time outside of work playing video games and spending quality time with his yellow lab, Sudo.

Read More

Next-generation learning experience using Amazon Bedrock and Anthropic’s Claude: Innovation from Classworks

Next-generation learning experience using Amazon Bedrock and Anthropic’s Claude: Innovation from Classworks

This post is co-written with Jerry Henley, Hans Buchheim and Roy Gunter from Classworks.

Classworks is an online teacher and student platform that includes academic screening, progress monitoring, and specially designed instruction for reading and math for grades K–12. Classworks’s unique ability to ingest student assessment data from various sources, analyze it, and automatically deliver a customized learning progression for each student sets them apart. Although this evidence-based model has significantly impacted student growth, supporting diverse learning needs in a classroom of 25 students working independently remains challenging. Teachers often find themselves torn between assisting individual students and delivering group instruction, ultimately hindering the learning experience for all.

To address the challenges of personalized learning and teacher workload, Classworks introduces Wittly by Classworks, an AI-powered learning assistant built on Amazon Bedrock, a fully managed service that makes it straightforward to build generative AI applications.

Wittly’s innovative approach centers on two key aspects:

  • Harnessing Anthropic’s Claude in Amazon Bedrock for advanced AI capabilities – Wittly uses Amazon Bedrock to seamlessly integrate with Anthropic’s Claude Sonnet 3.5, a state-of-the-art large language model (LLM). This powerful combination enables Wittly to provide tailored learning support and foster self-directed learning environments at scale.
  • Personalization and teacher empowerment – This comprises two objectives:
    • Personalized learning – Through AI-driven differentiated instruction, Wittly adapts to individual student needs, enhancing their learning experience.
    • Reduced teacher workload – By reducing the workload, Wittly allows educators to concentrate on high-impact student support, facilitating better educational outcomes.

In this post, we discuss how Classworks uses Amazon Bedrock and Anthropic’s Claude Sonnet to deliver next-generation differentiated learning with Wittly.

Powering differentiated learning with Amazon Bedrock

The ability to deliver differentiated learning to a classroom of diverse learners is transformative. Engaging students with instruction tailored to their current learning skills accelerates mastery and fosters critical thinking and independent problem-solving. However, providing such personalized instruction to an entire classroom is labor-intensive and time-consuming for teachers.

Wittly uses generative AI to offer explanations of each skill at a student’s interest level in various ways. When students encounter challenging concepts, Wittly provides clear, concise guidance tailored to their learning style and language preferences, enabling them to grasp concepts at their own pace and overcome obstacles independently. With the scalable infrastructure of Amazon Bedrock, Wittly handles diverse classroom needs simultaneously, making personalized instruction a reality for every student.

Amazon Bedrock serves as the cornerstone of Wittly’s AI capabilities, offering several key advantages:

  • Single API access – Simplifies integration with Anthropic’s Claude foundation models (FMs), allowing for straightforward updates and potential expansion to other models in the future. This unified interface accelerates development cycles by reducing the complexity of working with multiple AI models. It also future-proofs Wittly’s AI infrastructure, enabling seamless adoption of new models and capabilities as they become available, without significant code changes.
  • Serverless architecture – Eliminates the need for infrastructure management, enabling Classworks to focus on educational content and user experience. This approach provides automatic scaling to handle varying loads, from individual student sessions to entire school districts accessing the platform simultaneously. It also optimizes costs by allocating resources based on actual usage rather than maintaining constant capacity. The reduced operational overhead allows Wittly’s team to dedicate more time and resources to enhancing the core educational features of the platform.

Combining cutting-edge AI technology with thoughtful implementation and robust safeguards, Wittly represents a significant leap forward in personalized digital learning assistance. The system’s architecture, powered by Amazon Bedrock and Anthropic’s Claude Sonnet 3.5, enables Wittly to adapt to individual student needs while maintaining high standards of safety, privacy, and educational efficacy. By integrating these advanced technologies, Wittly not only enhances the learning experience but also makes sure it’s accessible, secure, and tailored to the unique requirements of every student.

Increasing teacher capacity and bandwidth

Meeting the diverse needs of students in a single classroom, particularly during intervention periods or in resource rooms, can be overwhelming. By differentiating instruction for students learning independently, Wittly saves valuable teacher time. Students can seek clarification and guidance from Wittly before asking for the teacher’s help, fostering a self-directed learning environment that eases the teacher’s burden.

This approach is particularly beneficial when a teacher delivers small group lessons while others learn independently. Knowing that interactive explanations are available to students learning each concept is a significant relief for teachers managing diverse ability levels in a classroom. By harnessing the powerful capabilities of Anthropic’s Claude Sonnet 3.5, Wittly creates a more efficient, personalized learning ecosystem that benefits both students and teachers.

Solution overview

The following diagram illustrates the solution architecture.

 

The solution consists of the following key components:

  • Wittly interface – The frontend component where students interact with the learning assistant is designed to be intuitive and engaging.
  • Classworks API – This API manages the data exchange and serves as the central hub for communication between various system components.
  • Wittly AI assistant prompt – A tailored prompt for the AI is constructed from the student’s first name, grade level, learning objectives, and conversation history.
  • Student common misconception prompt – This prompt actively identifies potential misconceptions related to the current learning objective, enhancing the student experience.
  • Anthropic’s Claude on Amazon Bedrock – Amazon Bedrock orchestrates AI interactions, providing a fully managed service that simplifies the integration of the state-of-the-art Anthropic’s Claude models.

Monitoring the Wittly platform

In the rapidly evolving landscape of AI-powered education, robust monitoring isn’t only beneficial—it’s essential. Classworks recognizes this criticality and has developed a comprehensive monitoring strategy for the Wittly platform. This approach is pivotal in maintaining the highest standards of performance, optimizing resource allocation, and continually refining the user experience. More specifically, the Wittly platform monitors the following metrics:

  • Token usage – By tracking overall token consumption and visualizing usage patterns by feature and user type, we can plan resources efficiently and manage costs effectively.
  • Request volume – Monitoring API calls helps us detect unusual spikes and analyze usage patterns, enabling predictive scaling decisions and providing system reliability.
  • Response times – We measure and analyze latency, breaking down response times by query complexity and user segments. This allows us to identify and address performance bottlenecks promptly.
  • Costs – Implementing detailed cost tracking and modeling for various usage scenarios supports our budget management and pricing strategies, leading to sustainable growth.
  • Quality metrics – Logging and analyzing user feedback, along with correlating satisfaction metrics with model performance, guides our continuous improvement efforts.
  • Error tracking – Setting up alerts for critical errors and performing advanced error categorization and trend analysis helps us integrate seamlessly with our development workflow and maintain system integrity.
  • User engagement – Visualizing user journeys and feature adoption rates through monitoring feature usage informs our product development priorities, enhancing the overall user experience.
  • System health – By tracking overall system performance, we gain a holistic view of system dependencies, supporting proactive maintenance and maintaining a stable platform.

To achieve this, we use Amazon CloudWatch to capture key performance data, such as average latency and token counts. This information is then seamlessly integrated into our Grafana dashboard for real-time visualization and analysis. The following screenshot showcases our monitoring dashboard created using Grafana, which visually represents these critical metrics and provides actionable insights. Grafana is an open-source platform for monitoring and observability, enabling users to query, visualize, and understand their data through customizable dashboards.
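As a rough illustration (not Classworks’s actual instrumentation), custom metrics such as token counts and latency can be published to CloudWatch with a few lines of boto3; the namespace, metric names, and dimension values below are hypothetical.

import boto3

cloudwatch = boto3.client("cloudwatch")

def record_invocation(feature: str, input_tokens: int, output_tokens: int, latency_ms: float):
    # Publish token usage and latency under a hypothetical custom namespace
    cloudwatch.put_metric_data(
        Namespace="Wittly/Inference",
        MetricData=[
            {"MetricName": "InputTokens", "Value": input_tokens, "Unit": "Count",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
            {"MetricName": "OutputTokens", "Value": output_tokens, "Unit": "Count",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
            {"MetricName": "Latency", "Value": latency_ms, "Unit": "Milliseconds",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
        ],
    )

record_invocation("concept-explanation", input_tokens=350, output_tokens=120, latency_ms=840.0)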

This comprehensive monitoring framework enables Classworks to deliver exceptional value to our users by optimizing AI-powered features and maintaining high performance standards. With cutting-edge tools like Grafana for data collection, alerting, and in-depth visualization and analysis, we can adapt and expand our monitoring capabilities in tandem with the growing complexity of our AI integration.

Engaging with Wittly: A student’s experience

As students embark on their Classworks activities, they are greeted by Wittly, their AI-powered learning assistant, integrated seamlessly into the Classworks instructional toolbar. When students encounter challenging concepts or need additional help, they can choose the Wittly icon to open an interactive chat window.

Unlike other AI chat-based systems that rely on open-ended questions, Wittly offers a set of pre-created AI response options. This guided approach makes sure conversations remain focused and relevant to the current activity. When Wittly provides explanations or poses questions, students can select from the provided responses, indicating their understanding or need for further clarification.

The student engagement workflow includes the following steps:

  1. Wittly is called when a student needs help with a specific activity in Classworks.
  2. Each Classworks activity focuses on a particular skill or concept, and we’ve tagged all activities with learning objectives for the specific activity.
  3. When a student accesses Wittly, we send key pieces of information, including the student’s first name, the learning objective of the activity they’re working on, and the language preference of the student.
  4. Wittly generates a personalized response to help the student. This typically includes a greeting using the student’s name, an explanation of the concept, an example related to the learning objective, and a prompt asking if the explanation helped the student understand the concept.

The following is a sample interaction, starting with the input sent to Wittly:

{
  "student_name": "Alex",
  "learning_objective": "Identify and use proper punctuation in compound sentences",
  "language": "English"
}

Wittly’s output is as follows:

"Hi Alex! Let's work on punctuating compound sentences. Remember to use a comma before coordinating conjunctions like 'and' or 'but'. For example: 'I love pizza, and I enjoy pasta.' Do you understand this? Please reply with 'thumbs up' or 'thumbs down'."

Wittly is designed to adapt to each student’s unique needs. It can communicate in both English and Spanish, and students can choose a voice they find engaging. For those who prefer auditory learning, Wittly reads its answers aloud while highlighting the corresponding text, making the learning experience both dynamic and accessible.

The structured interactions with Wittly are recorded, allowing teachers to monitor student progress and identify areas where additional support may be needed. This makes sure teachers remain actively involved in the learning process and that Wittly’s interactions are always appropriate and aligned with educational objectives.

With Wittly as their learning companion, students can delve into complex concepts in language arts, math, and science through guided, interactive exchanges. Wittly supports their learning journey, making their time in Classworks more engaging and personalized, all within a safe and controlled environment.

The following example showcases the interactive experience with Wittly in action, demonstrating how students engage with personalized learning through guided interactions.

Data privacy and safety considerations

In the era of AI-powered education, protecting student data and providing safe interactions are paramount. Classworks has implemented rigorous measures to uphold the highest standards of privacy and safety in Wittly’s design and operation.

Ethical AI foundation

Classworks employs a human-in-the-loop (HITL) model, combining AI technology with human expertise and insight. Wittly uses advanced AI algorithms, overseen and enhanced by the expertise of human educators and engineers, to generate instructional recommendations.

Student data protection

A core tenet in developing Wittly was achieving personalized learning without compromising student privacy. We don’t share any personally identifiable information with Wittly. Anthropic’s Claude LLM is trained on a dataset of anonymous data, not data from the Classworks platform, providing complete student privacy. Furthermore, when engaging with Wittly, students select from various pre-created responses to indicate whether the differentiated instruction was helpful or if they need further assistance. This approach eliminates the risk of inappropriate conversations, maintaining a safe learning environment.

Amazon Bedrock enhances this protection by encrypting data both in transit and at rest and preventing the sharing of prompts with any third parties, including Anthropic. Additionally, Amazon Bedrock doesn’t train models with Classworks’s data, so all interactions remain secure and private.

Conclusion

Amazon Bedrock represents a pivotal advancement in AI technology, offering vast opportunities for innovation and efficiency in education. At Classworks, we’re not just adopting this technology, we’re pioneering its application to craft exceptional, personalized learning experiences. Our commitment extends beyond students to empowering educators with cutting-edge resources that elevate learning outcomes.

Based on Wittly’s capabilities, we estimate that teachers could potentially save 15–25 hours per month. This time savings might come from reduced need for individual student support, decreased time spent on classroom management, and less after-hours support. These efficiency gains significantly enhance the learning environment, allowing teachers to focus more on high-impact, tailored educational experiences.

As AI continues to evolve, we’re committed to refining our policies and practices to uphold the highest standards of safety, quality, and efficacy in educational technology. By embracing Amazon Bedrock, we can make sure Classworks remains at the forefront of delivering safe, impactful, and meaningful educational experiences to students and educators alike.

To learn more about how generative AI and Amazon Bedrock can revolutionize your educational platform by delivering personalized learning experiences, enhancing teacher capacity, and enforcing data privacy, visit Amazon Bedrock. Discover how you can use advanced AI to create innovative applications, streamline development processes, and provide impactful data insights for your users.

To learn more about Classworks and our groundbreaking generative AI capabilities, visit our website.

This is a guest post from Classworks. Classworks is an award-winning K–12 special education and tiered intervention platform that uses advanced technology and comprehensive data to deliver superior personalized learning experiences. The comprehensive solution includes academic screeners, math and reading interventions, specially designed instruction, progress monitoring, and powerful data. Validated by the National Center on Intensive Intervention (NCII) and endorsed by The Council of Administrators of Special Education (CASE), Classworks partners with districts nationwide to deliver data-driven personalized learning to students where they are ready to learn.

 


About the Authors

Jerry Henley, VP of Technology at Curriculum Advantage, leads the product technical vision, platform services, and support for Classworks. With 18 years in EdTech, he oversees innovation, roadmaps, and AI integration, enhancing personalized learning experiences for students and educators.

 

Hans Buchheim, VP of Engineering at Curriculum Advantage, has spent 25 years developing Classworks. He leads software architecture decisions, mentors junior developers, and ensures the product evolves to meet educator needs.

 

Roy Gunter, DevOps Engineer at Curriculum Advantage, manages cloud infrastructure and automation for Classworks. He focuses on system reliability, troubleshooting, and performance optimization to deliver an excellent user experience.

 

Gowtham Shankar is a Solutions Architect at Amazon Web Services (AWS). He is passionate about working with customers to design and implement cloud-native architectures to address business challenges effectively. Gowtham actively engages in various open source projects, collaborating with the community to drive innovation.

 

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting food, and spending time with friends and families.

Read More

Fine-tune a BGE embedding model using synthetic data from Amazon Bedrock

Fine-tune a BGE embedding model using synthetic data from Amazon Bedrock

Have you ever faced the challenge of obtaining high-quality data for fine-tuning your machine learning (ML) models? Generating synthetic data can provide a robust solution, especially when real-world data is scarce or sensitive. For instance, when developing a medical search engine, obtaining a large dataset of real user queries and relevant documents is often infeasible due to privacy concerns surrounding personal health information. However, synthetic data generation techniques can be employed to create realistic query-document pairs that resemble authentic user searches and relevant medical content, enabling the training of accurate retrieval models while preserving user privacy.

In this post, we demonstrate how to use Amazon Bedrock to create synthetic data, fine-tune a BAAI General Embeddings (BGE) model, and deploy it using Amazon SageMaker.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

You can find the full code associated with this post at the accompanying GitHub repository.

Solution overview

BGE stands for Beijing Academy of Artificial Intelligence (BAAI) General Embeddings. It is a family of embedding models with a BERT-like architecture, designed to produce high-quality embeddings from text data. The BGE models come in three sizes:

  • bge-large-en-v1.5: 1.34 GB, 1,024 embedding dimensions
  • bge-base-en-v1.5: 0.44 GB, 768 embedding dimensions
  • bge-small-en-v1.5: 0.13 GB, 384 embedding dimensions

For comparing two pieces of text, the BGE model uses a bi-encoder architecture, processing each piece of text through the same model independently to obtain its embedding.

Generating synthetic data can significantly enhance the performance of your models by providing ample, high-quality training data without the constraints of traditional data collection methods. This post guides you through generating synthetic data using Amazon Bedrock, fine-tuning a BGE model, evaluating its performance, and deploying it with SageMaker.

The high-level steps are as follows:

  1. Set up an Amazon SageMaker Studio environment with the necessary AWS Identity and Access Management (IAM) policies.
  2. Open SageMaker Studio.
  3. Create a Conda environment for dependencies.
  4. Generate synthetic data using Meta Llama 3 on Amazon Bedrock.
  5. Fine-tune the BGE embedding model with the generated data.
  6. Merge the model weights.
  7. Test the model locally.
  8. Evaluate and compare the fine-tuned model.
  9. Deploy the model using SageMaker and Hugging Face Text Embeddings Inference (TEI).
  10. Test the deployed model.

Prerequisites

First-time users need an AWS account and an IAM user role with the following permission policies attached:

  • AmazonSageMakerFullAccess
  • IAMFullAccess (or a custom IAM policy that grants iam:GetRole and iam:AttachRolePolicy permissions for the specific SageMaker execution role and the required policies: AmazonBedrockFullAccess, AmazonS3FullAccess, and AmazonEC2ContainerRegistryFullAccess)

Create a SageMaker Studio domain and user

Complete the following steps to create a SageMaker Studio domain and user:

  1. On the SageMaker console, under Admin configurations in the navigation pane, choose Domains.
  2. Choose Create domain.

SageMaker Domains

  1. Choose Set up for single user (Quick setup). Your domain, along with an IAM role with the AmazonSageMakerFullAccess policy, will be automatically created.
  2. After the domain is prepared, choose Add user.
  3. Provide a name for the new user profile and choose the IAM role (use the default role that was automatically created with the domain).
  4. Choose Next on the next three screens, then choose Submit.

After you add the user profile, update the IAM role.

  1. On the IAM console, choose Roles in the navigation pane.
  2. Navigate to the Domain settings page of your newly created domain and locate the IAM role created earlier (it should have a name similar to AmazonSageMaker-ExecutionRole-YYYYMMDDTHHMMSS).
  3. On the role details page, on the Add permissions drop-down menu, choose Attach policies.
  4. Select the following policies and choose Add permissions to attach them to the role (or attach them programmatically, as shown in the sketch after this list):
    1. AmazonBedrockFullAccess
    2. AmazonS3FullAccess
    3. AmazonEC2ContainerRegistryFullAccess
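If you prefer to script this step, the same managed policies can be attached with boto3; the role name below is a placeholder for your SageMaker execution role.

import boto3

iam = boto3.client("iam")
role_name = "AmazonSageMaker-ExecutionRole-20240101T000000"  # placeholder; use your role's name

for policy_arn in [
    "arn:aws:iam::aws:policy/AmazonBedrockFullAccess",
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess",
]:
    # Attach each managed policy to the SageMaker execution role
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)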

Open SageMaker Studio

To open SageMaker Studio, complete the following steps:

  1. On the SageMaker console, choose Studio in the navigation pane.
  2. On the SageMaker Studio landing page, select the newly created user profile and choose Open Studio.
  3. After you launch SageMaker Studio, choose JupyterLab.
  4. In the top-right corner, choose Create JupyterLab Space.
  5. Give the space a name, such as embedding-finetuning, and choose Create space.
  6. Change the instance type to ml.g5.2xlarge and the Storage (GB) value to 100.

You may need to request a service quota increase before being able to select the ml.g5.2xlarge instance type.

  1. Choose Run space and wait a few minutes for the space to start.
  2. Choose Open JupyterLab.

Set up a Conda environment in SageMaker Studio

Next, you create a Conda environment with the necessary dependencies for running the code in this post. You can use the environment.yml file provided in the code repository to create this.

  1. Open the previous terminal, or choose Terminal in Launcher to open a new one.
  2. Clone the code repository, and enter the directory:
    # TODO: replace this with final public version 
    git clone https://gitlab.aws.dev/austinmw/Embedding-Finetuning-Blog

  3. Create the Conda environment by running the following command (this step will take several minutes to complete):
    conda env create -f environment.yml

  4. Activate the environment by running the following commands one by one:
    conda init
    source ~/.bashrc
    conda activate ft-embedding-blog

  5. Add the newly created Conda environment to Jupyter:
    python -m ipykernel install --user --name=ft-embedding-blog

  6. From the Launcher, open the repository folder named embedding-finetuning-blog and open the file Embedding Blog.ipynb.
  7. On the Kernel drop down menu in the notebook, choose Change Kernel, then choose ft-embedding-blog.

You may need to refresh your browser if it doesn’t show up as available.

Now you have a Jupyter notebook that includes the necessary dependencies required to run the code in this post.

Generate synthetic data using Amazon Bedrock

We start by adapting LlamaIndex’s embedding model fine-tuning guide to use Amazon Bedrock to generate synthetic data for fine-tuning. We use the sample data and evaluation procedures outlined in this guide.

To generate synthetic data, we use the Meta Llama3-70B-Instruct model on Amazon Bedrock, which offers great price performance. The process involves the following steps:

  1. Download the training and validation data, which consists of PDFs of Uber and Lyft 10-K filings. These PDFs will serve as the source for generating document chunks.
  2. Parse the PDFs into plain text chunks using LlamaIndex functionality. The Lyft corpus will be used as the training dataset, and the Uber corpus will be used as the evaluation dataset.
  3. Clean the parsed data by removing samples that are too short or contain special characters that could cause errors during training.
  4. Set up the large language model (LLM) Meta Llama3-70B-Instruct and define a prompt template for generating questions based on the context provided by the document chunks.
  5. Use the LLM to generate synthetic question-answer pairs for each document chunk. The document chunks serve as the context, and the generated questions are designed to be answerable using the information within the corresponding chunk.
  6. Save the generated synthetic data in JSONL format, where each line is a dictionary containing the query (generated question), positive passages (the document chunk used as context), and negative passages (if available). This format is compatible with the FlagEmbedding library, which will be used for fine-tuning the BGE model.

By generating synthetic question-answer pairs using the Meta Llama3-70B-Instruct model and the document chunks from the Uber and Lyft datasets, you create a high-quality dataset that can be used to fine-tune the BGE embedding model for improved performance in retrieval tasks.
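The following is a simplified sketch of this generation loop, assuming the document chunks have already been parsed and cleaned; the prompt template, model ID, and JSONL layout mirror the description above but are not the exact code from the repository.

import json
import boto3

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "meta.llama3-70b-instruct-v1:0"

PROMPT_TEMPLATE = (
    "Context:\n{context}\n\n"
    "Write one question that can be answered using only the context above. "
    "Return only the question."
)

def generate_question(chunk: str) -> str:
    # Ask Meta Llama 3 to write a question grounded in the given chunk
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": PROMPT_TEMPLATE.format(context=chunk)}]}],
        inferenceConfig={"maxTokens": 128, "temperature": 0.7},
    )
    return response["output"]["message"]["content"][0]["text"].strip()

chunks = ["Lyft's revenue for 2021 was ...", "Lyft's operating expenses included ..."]  # parsed chunks

# Write query / positive-passage pairs in the JSONL layout expected by FlagEmbedding
with open("train.jsonl", "w") as f:
    for chunk in chunks:
        record = {"query": generate_question(chunk), "pos": [chunk], "neg": []}
        f.write(json.dumps(record) + "\n")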

Fine-tune the BGE embedding model

For fine-tuning, you can use the bge-base-en-v1.5 model, which offers a good balance between performance and resource requirements. You define retrieval instructions for the query to enhance the model’s performance during fine-tuning and inference.

Before fine-tuning, generate hard negatives using a predefined script available from the FlagEmbedding library. Hard negative mining is an essential step that helps improve the model’s ability to distinguish between similar but not identical text pairs. By including hard negatives in the training data, you encourage the model to learn more discriminative embeddings.

You then initiate the fine-tuning process using the FlagEmbedding library, which trains the model with InfoNCE contrastive loss. The library provides a convenient way to fine-tune the BGE model using the synthetic data you generated earlier. During fine-tuning, the model learns to produce embeddings that bring similar query-document pairs closer together in the embedding space while pushing dissimilar pairs further apart.

Merge the model weights

After fine-tuning, you can use the LM-Cocktail library to merge the fine-tuned weights with the original weights of the BGE model. LM-Cocktail creates new model parameters by calculating a weighted average of the parameters from two or more models. This process helps mitigate the problem of catastrophic forgetting, where the model might lose its previously learned knowledge during fine-tuning.

By merging the fine-tuned weights with the original weights, you obtain a model that benefits from the specialized knowledge acquired during fine-tuning while retaining the general language understanding capabilities of the original model. This approach often leads to improved performance compared to using either the fine-tuned or the original model alone.
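A minimal sketch of this merge, assuming the LM-Cocktail mix_models API and an equal weighting of the base and fine-tuned checkpoints (the paths and weights are illustrative):

from LM_Cocktail import mix_models

# Average the base model and the fine-tuned checkpoint into a new merged model
merged = mix_models(
    model_names_or_paths=["BAAI/bge-base-en-v1.5", "./ft-bge-base-en-v1.5"],
    model_type="encoder",   # BGE models are encoder-style embedding models
    weights=[0.5, 0.5],     # weighted average of the two parameter sets
    output_path="./merged-bge-base-en-v1.5",
)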

Test the model locally

Before you evaluate the fine-tuned BGE model on the validation set, it’s a good idea to perform a quick local test to make sure the model behaves as expected. You can do this by comparing the cosine similarity scores for pairs of queries and documents that you expect to have high similarity and those that you expect to have low similarity.

To test the model, prepare two small sets of document-query pairs:

  • Similar document-query pairs – These are pairs where the document and query are closely related and should have a high cosine similarity score
  • Different document-query pairs – These are pairs where the document and query are not closely related and should have a lower cosine similarity score

Then use the fine-tuned BGE model to generate embeddings for each document and query in both sets of pairs. By calculating the cosine similarity between the document and query embeddings for each pair, you can assess how well the model captures the semantic similarity between them.

When comparing the cosine similarity scores, we expect to see higher scores for the similar document-query pairs compared to the different document-query pairs. This would indicate that the fine-tuned model is able to effectively distinguish between similar and dissimilar pairs, assigning higher similarity scores to the pairs that are more closely related.
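A quick sanity check along these lines might look like the following sketch, which uses the FlagEmbedding inference API with explicit cosine similarity; the model path and example texts are placeholders.

import numpy as np
from FlagEmbedding import FlagModel

model = FlagModel(
    "./merged-bge-base-en-v1.5",  # placeholder path to the merged model
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = "What were Lyft's main sources of revenue?"
similar_doc = "Lyft generates revenue primarily from its ridesharing marketplace ..."
different_doc = "The company's headquarters are located in San Francisco, California."

q_emb = model.encode_queries([query])[0]
d_sim, d_diff = model.encode([similar_doc, different_doc])

print("similar pair:  ", cosine(q_emb, d_sim))    # expected to be noticeably higher
print("different pair:", cosine(q_emb, d_diff))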

If the local testing results align with your expectations, it provides a quick confirmation that the fine-tuned model is performing as intended. You can then move on to a more comprehensive evaluation of the model’s performance using the validation set.

However, if the local testing results are not satisfactory, it may be necessary to investigate further and identify potential issues with the fine-tuning process or the model architecture before proceeding to the evaluation step.

This local testing step serves as a quick sanity check to make sure the fine-tuned model is behaving reasonably before investing time and resources in a full evaluation on the validation set. It can help catch obvious issues early on and provide confidence in the model’s performance before moving forward with more extensive testing.

Evaluate the model

We evaluate the performance of the fine-tuned BGE model using two procedures:

  • Hit rate – This straightforward metric assesses the model’s performance by checking if the retrieved results for a given query include the relevant document. You calculate the hit rate by taking each query-document pair from the validation set, retrieving the top-K documents using the fine-tuned model, and verifying if the relevant document is present in the retrieved results.
  • InformationRetrievalEvaluator – This procedure, provided by the sentence-transformers library, offers a more comprehensive suite of metrics for detailed performance analysis. It evaluates the model on various information retrieval tasks and provides metrics such as Mean Average Precision (MAP), Normalized Discounted Cumulative Gain (NDCG), and more. However, InformationRetrievalEvaluator is only compatible with models loaded through sentence-transformers.

To get a better understanding of the fine-tuned model’s performance, you can compare it against the base (non-fine-tuned) BGE model and the Amazon Titan Text Embeddings V2 model on Amazon Bedrock. This comparison helps you assess the effectiveness of the fine-tuning process and determine if the fine-tuned model outperforms the baseline models.

By evaluating the model using both the hit rate and InformationRetrievalEvaluator (when applicable), you gain insights into its performance on different aspects of retrieval tasks and can make informed decisions about its suitability for your specific use case.
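As an illustration of the second procedure, a small evaluation with the sentence-transformers InformationRetrievalEvaluator could be set up as follows; the IDs and texts are placeholders, and the exact set of metrics returned depends on your sentence-transformers version.

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("./merged-bge-base-en-v1.5")  # placeholder path

queries = {"q1": "What factors affected Uber's ride volumes?"}                 # query_id -> query text
corpus = {"d1": "Ride volumes were affected by ...", "d2": "Other chunk ..."}  # doc_id -> chunk text
relevant_docs = {"q1": {"d1"}}                                                 # query_id -> relevant doc_ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="uber-10k-val")
results = evaluator(model)   # computes MAP, NDCG, recall@k, and related metrics
print(results)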

Deploy the model

To deploy the fine-tuned BGE model, you can deploy the Hugging Face Text Embeddings Inference (TEI) container to SageMaker. TEI is a high-performance toolkit for deploying and serving popular text embeddings and sequence classification models, including support for FlagEmbedding models. It provides a fast and efficient serving framework for your fine-tuned model on SageMaker.

The deployment process involves the following steps:

  1. Upload the fine-tuned model to the Hugging Face Hub or Amazon Simple Storage Service (Amazon S3).
  2. Retrieve the new Hugging Face Embedding Container image URI.
  3. Deploy the model to SageMaker.
  4. Optionally, set up auto scaling for the endpoint to automatically adjust the number of instances based on the incoming request traffic. Auto scaling helps make sure the endpoint can handle varying workloads efficiently.

By deploying the fine-tuned BGE model using TEI on SageMaker, you can integrate it into your applications and use it for efficient text embedding and retrieval tasks. The deployment process outlined in this post provides a scalable and manageable solution for serving the model in production environments.
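The deployment itself can be scripted with the SageMaker Python SDK. The following sketch assumes a recent SDK version that supports the TEI backend, that the merged model has been pushed to the Hugging Face Hub under a placeholder ID, and that a GPU instance such as ml.g5.xlarge is available in your account.

import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()

# Retrieve the Text Embeddings Inference (TEI) container image
image_uri = get_huggingface_llm_image_uri("huggingface-tei")

model = HuggingFaceModel(
    image_uri=image_uri,
    env={"HF_MODEL_ID": "your-hf-username/merged-bge-base-en-v1.5"},  # placeholder model ID
    role=role,
)

tei_endpoint = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",
    endpoint_name="bge-embeddings-tei",
)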

Test the deployed model

After you deploy the fine-tuned BGE model using TEI on SageMaker, you can test the model by sending requests to the SageMaker endpoint and evaluating the model’s responses.

To test the deployed model, you can run the model and optionally add instructions. If the model was fine-tuned with instructions for queries or passages, it’s important to match the instructions used during fine-tuning when performing inference. In this case, you used instructions for queries but not for passages, so you can follow the same approach during testing.

To test the deployed model, you send queries to the SageMaker endpoint using the tei_endpoint.predict() method provided by the SageMaker SDK. You prepare a batch of queries, optionally prepending any instructions used during fine-tuning, and pass them to the predict() method. The model generates embeddings for each query, which are returned in the response.
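For example, a batch of queries with the retrieval instruction prepended (matching the instruction used during fine-tuning) could be sent as follows; the instruction string and queries are illustrative, and tei_endpoint is the predictor returned by the deployment step.

instruction = "Represent this sentence for searching relevant passages: "
queries = ["What were Lyft's operating expenses?", "How does Uber describe competition?"]

payload = {"inputs": [instruction + q for q in queries]}
embeddings = tei_endpoint.predict(payload)   # one embedding vector per input string

print(len(embeddings), len(embeddings[0]))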

By examining the generated embeddings, you can assess the quality and relevance of the model’s output. You can compare the embeddings of similar queries and verify that they have high cosine similarity scores, indicating that the model accurately captures the semantic meaning of the queries.

Additionally, you can measure the average response time of the deployed model to evaluate its performance and make sure it adheres to the required latency constraints for your application.

Integrate the model with LangChain

Additionally, you can integrate the deployed BGE model with LangChain, a library for building applications with language models. To do this, you create a custom content handler that inherits from LangChain’s EmbeddingsContentHandler. This handler implements methods to convert input data into a format compatible with the SageMaker endpoint and converts the endpoint’s output into embeddings.

You then create a SagemakerEndpointEmbeddings instance, specifying the endpoint name, SageMaker runtime client, and custom content handler. This instance wraps the deployed BGE model and integrates it with LangChain workflows.

Using the embed_documents method of the SagemakerEndpointEmbeddings instance, you generate embeddings for documents or queries, which can be used for downstream tasks like similarity search, clustering, or classification.
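The following sketch shows what this integration could look like, assuming a TEI-backed endpoint that accepts a JSON body of the form {"inputs": [...]} and returns one embedding vector per input string; the endpoint name and Region are placeholders.

import json
from typing import List

from langchain_community.embeddings import SagemakerEndpointEmbeddings
from langchain_community.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class TEIContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: dict) -> bytes:
        # TEI expects {"inputs": [...]} as the request body
        return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> List[List[float]]:
        # TEI returns a list of embedding vectors
        return json.loads(output.read().decode("utf-8"))

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="bge-embeddings-tei",   # placeholder endpoint name from the deployment step
    region_name="us-east-1",
    content_handler=TEIContentHandler(),
)

vectors = embeddings.embed_documents(["Lyft generates revenue primarily from ridesharing ..."])
print(len(vectors[0]))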

Integrating the deployed BGE model with LangChain allows you to take advantage of LangChain’s features and abstractions to build sophisticated language model applications that utilize the fine-tuned BGE embeddings. Testing the integration makes sure the model performs as expected and can be seamlessly incorporated into real-world workflows and applications.

Clean up

After you’re finished with the deployed endpoint, don’t forget to delete it to prevent unexpected SageMaker costs.

Conclusion

In this post, we walked through the process of fine-tuning a BGE embedding model using synthetic data generated from Amazon Bedrock. We covered key steps, including generating high-quality synthetic data, fine-tuning the model, evaluating performance, and deploying the optimized model using Amazon SageMaker.

By using synthetic data and advanced fine-tuning techniques like hard negative mining and model merging, you can significantly enhance the performance of embedding models for your specific use cases. This approach is especially valuable when real-world data is limited or difficult to obtain.

To get started, we encourage you to experiment with the code and techniques demonstrated in this post. Adapt them to your own datasets and models to unlock performance improvements in your applications. You can find all the code used in this post in our GitHub repository.

Resources


About the Authors

Austin Welch is a Senior Applied Scientist at Amazon Web Services Generative AI Innovation Center.

Bryan Yost is a Principal Deep Learning Architect at Amazon Web Services Generative AI Innovation Center.

Mehdi Noori is a Senior Applied Scientist at Amazon Web Services Generative AI Innovation Center.

Read More

Boost post-call analytics with Amazon Q in QuickSight

Boost post-call analytics with Amazon Q in QuickSight

In today’s customer-centric business world, providing exceptional customer service is crucial for success. Contact centers play a vital role in shaping customer experiences, and analyzing post-call interactions can provide valuable insights to improve agent performance, identify areas for improvement, and enhance overall customer satisfaction.

Amazon Web Services (AWS) has AI and generative AI solutions that you can integrate into your existing contact centers to improve post-call analysis.

Post Call Analytics (PCA) is a solution that does most of the heavy lifting associated with providing an end-to-end solution that can process call recordings from your existing contact center. PCA provides actionable insights to spot emerging trends, identify agent coaching opportunities, and assess the general sentiment of calls.

Complementing PCA, we have Live call analytics with agent assist (LCA) for real-time analysis while calls are in progress, providing AI and generative AI capabilities.

In this post, we show you how to unlock powerful post-call analytics and visualizations, empowering your organization to make data-driven decisions and drive continuous improvement.

Enrich and boost your post-call recording files with Amazon Q and Amazon QuickSight

Amazon QuickSight is a unified business intelligence (BI) service that provides modern interactive dashboards, natural language querying, paginated reports, machine learning (ML) insights, and embedded analytics at scale.

Amazon Q is a powerful, new capability in Amazon QuickSight that you can use to ask questions about your data using natural language and share presentation-ready data stories to communicate insights to others.

These capabilities can significantly enhance your post-call analytics workflow, making it easier to derive insights from your contact center data.

To get started using Amazon Q in QuickSight, first you will need QuickSight Enterprise Edition, which you can sign up for by following this process.

Amazon Q in QuickSight provides users a suite of new generative BI capabilities.

Depending on the user’s role, they will have access to different sets of capabilities. For instance, a Reader Pro user can create data stories and executive summaries. If the user is an Author Pro user, they will also be able to create topics and build dashboards using natural language. The following figure shows the available roles and their capabilities.

The following are some key ways that Amazon Q in QuickSight can boost your post-call analytics productivity.

  • Quick insights: Instead of spending time building complex dashboards and visualizations, you can enable users to quickly get answers to your questions about call volumes, agent performance, customer sentiment, and more. Amazon Q in QuickSight understands the context of your data and generates relevant visualizations on the fly.
  • One-time analysis: With Amazon Q in QuickSight, you can perform one-time analysis on your post-call data without any prior setup. Ask your questions using natural language, and QuickSight will provide the relevant insights, allowing you to explore your data in new ways and uncover hidden patterns.
  • Natural language interface: Amazon Q in QuickSight has a natural language interface that makes it accessible to non-technical users. Business analysts, managers, and executives can ask questions about post-call data without needing to learn complex querying languages or data visualization tools.
  • Contextual recommendations: Amazon Q in QuickSight can provide contextual recommendations based on your questions and the data available. For example, if you ask about customer sentiment, it might suggest analyzing sentiment by agent, call duration, or other relevant dimensions.
  • Automated dashboards: Amazon Q can help accelerate dashboard development based on your questions, saving you the effort of manually building and maintaining dashboards for post-call analytics.

By using Amazon Q in QuickSight, your organization can streamline post-call analytics, enabling faster insights, better decision-making, and improved customer experiences. With its natural language interface and automated visualizations, Amazon Q empowers users at all levels to explore and understand post-call data more efficiently.

Let’s dive into a couple of the capabilities available to Pro users, such as building executive summaries and data stories for post-call analytics.

Executive summaries

When a user is just starting to explore a new dashboard that has been shared with them, it often takes time to familiarize themselves with what is contained in the dashboard and where they should be looking for key insights. Executive summaries are a great way to use AI to highlight key insights and draw the user’s attention to specific visuals that contain metrics worth looking into further.

You can build an executive summary on any dashboard that you have access to, such as the dashboard shown in the following figure.

As shown in the following figure, you can change to another sheet, or even apply filters and regenerate the summary to get a fresh set of highlights for the filtered set of data.

The key benefits of using executive summaries include:

  • Automated insights: Amazon Q can automatically surface key insights and trends from your post-call data, making it possible to quickly create executive summaries that highlight the most important information.
  • Customized views: Executives can customize the visualizations and summaries generated by Amazon Q to align with their specific requirements and preferences, ensuring that the executive summaries are tailored to their needs.

Data storytelling

After a user has found an interesting trend or insight within a dashboard, they often need to communicate with others to drive a decision on what to do next. That decision might be made in a meeting or offline, but a presentation with key metrics and a structured narrative is often the basis for presenting the argument. This is exactly what data stories are designed to support. Rather than taking screenshots and pasting into a document or email, at which point you lose all governance and the data becomes static, stories in QuickSight are interactive, governed, and can be updated in a click.

To build a story, you always start from a dashboard. You then select visuals to support your story and input a prompt of what you want the story to be about. In the example, we want to generate a story to get insights and recommendations to improve call center operations (shown in the following figure).

As the following figure shows, after a few moments, you will see a fully structured story including visuals and insights, including recommendations for next steps.

Key benefits of using data stories:

  1. Narrative exploration: With Amazon Q, you can explore your post-call data through a narrative approach, asking follow-up questions based on the insights generated. This allows you to build a compelling data story that uncovers the underlying patterns and trends in your contact center operations.
  2. Contextual recommendations: Amazon Q can provide contextual recommendations for additional visualizations or analyses based on your questions and the data available. These recommendations can help you uncover new perspectives and enrich your data storytelling.
  3. Automated narratives: Amazon Q can generate automated narratives that explain the visualizations and insights, making it easier to communicate the data story to stakeholders who might not be familiar with the technical details.
  4. Interactive presentations: By integrating Amazon Q with QuickSight presentation mode, you can create interactive data storytelling experiences. Executives and stakeholders can ask questions during the presentation, and Amazon Q will generate visualizations and insights in real time, enabling a more engaging and dynamic data storytelling experience.

Conclusion

By using the capabilities of Amazon Q in QuickSight, you can uncover valuable insights from your call recordings and post-call analytics data. These insights can then inform data-driven decisions to improve customer experiences, optimize contact center operations, and drive overall business performance.

In the era of customer-centricity, post-call analytics has become a game-changer for contact center operations. By using the power of Amazon Q and Amazon QuickSight on top of your PCA data, you can unlock a wealth of insights, optimize agent performance, and deliver exceptional customer experiences. Embrace the future of customer service with cutting-edge AI and analytics solutions from AWS, and stay ahead of the competition in today’s customer-centric landscape.


About the Author

Daniel Martinez is a Solutions Architect in Iberia Enterprise, part of the worldwide commercial sales organization (WWCS) at AWS.

Read More

Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp

Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp

This post is co-written with Harrison Chase, Erick Friis and Linda Ye from LangChain.

Generative AI is set to revolutionize user experiences over the next few years. A crucial step in that journey involves bringing in AI assistants that intelligently use tools to help customers navigate the digital landscape. In this post, we demonstrate how to deploy a contextual AI assistant. Built using Amazon Bedrock Knowledge Bases, Amazon Lex, and Amazon Connect, with WhatsApp as the channel, our solution provides users with a familiar and convenient interface.

Amazon Bedrock Knowledge Bases gives foundation models (FMs) and agents contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, accurate, and customized responses. It also offers a powerful solution for organizations seeking to enhance their generative AI–powered applications. This feature simplifies the integration of domain-specific knowledge into conversational AI through native compatibility with Amazon Lex and Amazon Connect. By automating document ingestion, chunking, and embedding, it eliminates the need to manually set up complex vector databases or custom retrieval systems, significantly reducing development complexity and time.

The result is improved accuracy in FM responses, with reduced hallucinations due to grounding in verified data. Cost efficiency is achieved through minimized development resources and lower operational costs compared to maintaining custom knowledge management systems. The solution’s scalability quickly accommodates growing data volumes and user queries thanks to AWS serverless offerings. It also uses the robust security infrastructure of AWS to maintain data privacy and regulatory compliance. With the ability to continuously update and add to the knowledge base, AI applications stay current with the latest information. By choosing Amazon Bedrock Knowledge Bases, organizations can focus on creating value-added AI applications while AWS handles the intricacies of knowledge management and retrieval, enabling faster deployment of more accurate and capable AI solutions with less effort.

Prerequisites

To implement this solution, you need the following:

Solution overview

This solution uses several key AWS AI services to build and deploy the AI assistant:

  • Amazon Bedrock – Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI
  • Amazon Bedrock Knowledge Bases – Gives the AI assistant contextual information from a company’s private data sources
  • Amazon OpenSearch Service – Works as vector store that is natively supported by Amazon Bedrock Knowledge Bases
  • Amazon Lex – Enables building the conversational interface for the AI assistant, including defining intents and slots
  • Amazon Connect – Powers the integration with WhatsApp to make the AI assistant available to users on the popular messaging application
  • AWS Lambda – Runs the code to integrate the services and implement the LangChain agent that forms the core logic of the AI assistant
  • Amazon API Gateway – Receives the incoming requests triggered from WhatsApp and routes the request to AWS Lambda for further processing
  • Amazon DynamoDB – Stores the messages received and generated to enable conversation memory
  • Amazon SNS – Handles the routing of the outgoing response from Amazon Connect
  • LangChain – Provides a powerful abstraction layer for building the LangChain agent that helps your FMs perform context-aware reasoning
  • LangSmith – Receives the agent traces uploaded for added observability, including debugging, monitoring, testing, and evaluation (a configuration sketch follows this list)
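
LangSmith tracing is configured through environment variables read by the LangChain libraries in the agent's Lambda function. The following is a minimal sketch, assuming the standard LangSmith variables; the project name is an illustrative placeholder.

```python
# Minimal sketch: enable LangSmith tracing for the agent. In the deployed
# solution these would be set as Lambda environment variables; the project
# name below is a placeholder.
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"                # turn on trace upload
os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"
os.environ["LANGCHAIN_PROJECT"] = "whatsapp-ai-assistant"  # placeholder project
```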

The following diagram illustrates the architecture.

Solution Architecture

Flow description

Numbers in red on the right side of the diagram illustrate the data ingestion process:

  1. Files are uploaded to the Amazon Simple Storage Service (Amazon S3) data source.
  2. New files trigger a Lambda function.
  3. The Lambda function invokes the sync operation of the knowledge base data source (a minimal sketch of this call follows the list).
  4. Amazon Bedrock Knowledge Bases fetches the data from Amazon S3, chunks it, and generates the embeddings through the FM of your selection.
  5. Amazon Bedrock Knowledge Bases stores the embeddings in Amazon OpenSearch Service.
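
The sync step can be implemented with a few lines of boto3. The following is a minimal sketch, assuming the knowledge base and data source IDs are passed to the function as environment variables (the variable names here are illustrative, not necessarily those used by the CloudFormation template).

```python
# Minimal sketch of step 3: start an ingestion job when a new file lands in
# the S3 data source. KNOWLEDGE_BASE_ID and DATA_SOURCE_ID are assumed
# environment variable names.
import os
import boto3

bedrock_agent = boto3.client("bedrock-agent")

def lambda_handler(event, context):
    job = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=os.environ["KNOWLEDGE_BASE_ID"],
        dataSourceId=os.environ["DATA_SOURCE_ID"],
    )
    return {"ingestionJobId": job["ingestionJob"]["ingestionJobId"]}
```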

Numbers on the left side of the diagram illustrate the messaging process:

  1. User initiates communication by sending a message through WhatsApp to the webhook hosted on Amazon API Gateway.
  2. Amazon API Gateway routes the incoming message to the inbound message handler, executed on AWS Lambda.
  3. The inbound message handler records the user’s contact details in Amazon DynamoDB.
  4. For first-time users, the inbound message handler establishes a new session in Amazon Connect and logs it in DynamoDB. For returning users, it resumes their existing Amazon Connect session.
  5. Amazon Connect forwards the user’s message to Amazon Lex for natural language processing.
  6. Amazon Lex triggers the LangChain AI assistant, implemented as a Lambda function.
  7. The LangChain AI assistant retrieves the conversation history from DynamoDB.
  8. Using Amazon Bedrock Knowledge Bases, the LangChain AI assistant fetches relevant contextual information.
  9. The LangChain AI assistant compiles a prompt, incorporating context data and the user’s query, and submits it to an FM running on Amazon Bedrock (a simplified sketch of steps 7–10 follows this list).
  10. Amazon Bedrock processes the input and returns the model’s response to the LangChain AI assistant.
  11. The LangChain AI assistant relays the model’s response back to Amazon Lex.
  12. Amazon Lex transmits the model’s response to Amazon Connect.
  13. Amazon Connect publishes the model’s response to Amazon Simple Notification Service (Amazon SNS).
  14. Amazon SNS triggers the outbound message handler Lambda function.
  15. The outbound message handler retrieves the relevant chat contact information from Amazon DynamoDB.
  16. The outbound message handler dispatches the response to the user through Meta’s WhatsApp API.
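
The following is a simplified sketch of how steps 7–10 might look inside the agent's Lambda function, using the langchain-aws and langchain-community integrations. The knowledge base ID, DynamoDB table name, and model ID are illustrative placeholders, and the agent deployed by the CloudFormation template uses the knowledge base as a tool rather than this linear flow.

```python
# Simplified sketch of steps 7-10: load conversation memory from DynamoDB,
# retrieve context from the knowledge base, and call an FM on Amazon Bedrock.
# The knowledge base ID, table name, and model ID are placeholders.
from langchain_aws import AmazonKnowledgeBasesRetriever, ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="<your knowledge base ID>",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")

def answer(session_id: str, question: str) -> str:
    # Step 7: fetch the conversation history for this WhatsApp session
    history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id=session_id)
    # Step 8: retrieve relevant chunks from Amazon Bedrock Knowledge Bases
    documents = retriever.invoke(question)
    context = "\n\n".join(doc.page_content for doc in documents)
    # Step 9: compile a prompt with the context, history, and user query
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Conversation so far:\n{history.messages}\n\n"
        f"Question: {question}"
    )
    # Step 10: invoke the FM and persist the new turn for future requests
    reply = llm.invoke(prompt).content
    history.add_user_message(question)
    history.add_ai_message(reply)
    return reply
```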

Deploying this AI assistant involves three main steps:

  1. Create the knowledge base using Amazon Bedrock Knowledge Bases and ingest relevant product documentation, FAQs, knowledge articles, and other useful data that the AI assistant can use to answer user questions. The data should cover the key use cases and topics the AI assistant will support.
  2. Create a LangChain agent that powers the AI assistant’s logic. The agent is implemented in a Lambda function and uses the knowledge base as its primary tool to look up information. Deploying the agent with other resources is automated through the provided AWS CloudFormation template. See the list of resources in the next section.
  3. Create the Amazon Connect instance and configure the WhatsApp integration. This allows users to chat with the AI assistant using WhatsApp, providing a familiar interface and enabling rich interactions such as images and buttons. WhatsApp’s popularity improves the accessibility of the AI assistant.

Solution deployment

We’ve provided pre-built AWS CloudFormation templates that deploy everything you need in your AWS account. Follow these steps in the console, or use the boto3 sketch after the list to script the same deployment.

  1. Sign in to the AWS Management Console if you aren’t already signed in.
  2. Choose the following Launch Stack button to open the CloudFormation console and create a new stack.
  3. Enter the following parameters:
    • StackName: A name for your stack, for example, WhatsAppAIStack
    • LangchainAPIKey: The API key generated through LangChain
  • Region: N. Virginia (us-east-1)
  • Deploy: Launch Stack button
  • Template URL (use to upgrade an existing stack to a new release): YML
  • AWS CDK stack to customize as needed: GitHub
  4. Check the box to acknowledge that you are creating AWS Identity and Access Management (IAM) resources and choose Create Stack.
  5. Wait approximately 10 minutes for stack creation to complete. The stack creates the resources described in the solution overview, including the knowledge base and its Amazon S3 data source, the Lambda functions, the API Gateway endpoint, the DynamoDB table, the Amazon Lex bot, and the SNS topic.
  6. Upload files to the data source (Amazon S3) created for WhatsApp. As soon as you upload a file, the data source synchronizes automatically.
  7. To test the agent, on the Amazon Lex console, select the most recently created assistant. Choose English, choose Test, and send it a message.
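
If you prefer to script the deployment instead of using the console, the following boto3 sketch creates the same stack. The template URL placeholder corresponds to the YML link in the table above, and the capability flag corresponds to the IAM acknowledgement in step 4; depending on the template, CAPABILITY_NAMED_IAM may be required instead.

```python
# Optional alternative to the console steps: create the stack with boto3.
# Replace the placeholders with the template URL from the table above and
# your LangChain API key.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

cloudformation.create_stack(
    StackName="WhatsAppAIStack",
    TemplateURL="<template URL from the table above>",
    Parameters=[
        {"ParameterKey": "LangchainAPIKey", "ParameterValue": "<your LangChain API key>"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # acknowledges IAM resource creation
)
```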

Create the Amazon Connect instance and integrate WhatsApp

Configure Amazon Connect to integrate with your WhatsApp business account and enable the WhatsApp channel for the AI assistant:

  1. Navigate to Amazon Connect in the AWS console. If you haven’t already, create an instance. Copy your Instance ARN under Distribution settings. You will need this information later to link your WhatsApp business account.
  2. Choose your instance, then in the navigation panel, choose Flows. Scroll down and select Amazon Lex. Select your bot and choose Add Amazon Lex Bot.
  3. In the navigation panel, choose Overview. Under Access Information, choose Log in for emergency access.
  4. On the Amazon Connect console, under Routing in the navigation panel, choose Flows. Choose Create flow. Drag a Get customer input block onto the flow. Select the block. Select Text-to-speech or chat text and add an intro message such as, “Hello, how can I help you today?” Scroll down and choose Amazon Lex, then select the Amazon Lex bot you created in step 2.
  5. After you save the block, add a Disconnect block. Drag the Entry arrow to the Get customer input block and the Get customer input arrow to the Disconnect block. Choose Publish.
  6. After it’s published, choose Show additional flow information at the bottom of the navigation panel. Copy the flow’s Amazon Resource Name (ARN), which you will need to deploy the WhatsApp integration. The following screenshot shows the Amazon Connect console with the flow.

Connect Flow Diagram

  7. Deploy the WhatsApp integration as detailed in Provide WhatsApp messaging as a channel with Amazon Connect.

Testing the solution

Interact with the AI assistant by sending it messages through WhatsApp and confirming that it answers questions using your knowledge base.

Clean up

To avoid incurring ongoing costs, delete the resources after you are done:

  1. Delete the CloudFormation stacks.
  2. Delete the Amazon Connect instance.

Conclusion

This post showed you how to create an intelligent conversational AI assistant by integrating Amazon Bedrock, Amazon Lex, and Amazon Connect and deploying it on WhatsApp.

The solution ingests relevant data into a knowledge base on Amazon Bedrock Knowledge Bases, implements a LangChain agent that uses the knowledge base to answer questions, and makes the agent available to users through WhatsApp. This provides an accessible, intelligent AI assistant that can guide users through your company’s products and services.

Possible next steps include customizing the AI assistant for your specific use case, expanding the knowledge base, and analyzing conversation logs using LangSmith to identify issues, resolve errors, and diagnose performance bottlenecks in your FM call sequence.


About the Authors

Kenton Blacutt is an AI Consultant within the GenAI Innovation Center. He works hands-on with customers helping them solve real-world business problems with cutting edge AWS technologies, especially Amazon Q and Bedrock. In his free time, he likes to travel, experiment with new AI techniques, and run an occasional marathon.

Lifeth Álvarez is a Cloud Application Architect at Amazon. She enjoys working closely with others, embracing teamwork and autonomous learning. She likes to develop creative and innovative solutions, applying special emphasis on details. She enjoys spending time with family and friends, reading, playing volleyball, and teaching others.

Mani Khanuja is a Tech Lead – Generative AI Specialist, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors for Women in Manufacturing Education Foundation Board. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.

Linda Ye leads product marketing at LangChain. Previously, she worked at Sentry, Splunk, and Harness, driving product and business value for technical audiences, and studied economics at Stanford. In her free time, Linda enjoys writing half-baked novels, playing tennis, and reading.

Erick Friis, Founding Engineer at LangChain, currently spends most of his time on the open source side of the company. He’s an ex-founder with a passion for language-based applications. He spends his free time outdoors on skis or training for triathlons.

Harrison Chase is the CEO and cofounder of LangChain, an open source framework and toolkit that helps developers build context-aware reasoning applications. Prior to starting LangChain, he led the ML team at Robust Intelligence, led the entity linking team at Kensho, and studied statistics and computer science at Harvard.
