AV-CPL: Continuous Pseudo-Labeling for Audio-Visual Speech Recognition

Audio-visual speech contains synchronized audio and visual information that provides cross-modal supervision to learn representations for both automatic speech recognition (ASR) and visual speech recognition (VSR). We introduce continuous pseudo-labeling for audio-visual speech recognition (AV-CPL), a semi-supervised method to train an audio-visual speech recognition (AVSR) model on a combination of labeled and unlabeled videos with continuously regenerated pseudo-labels. Our models are trained for speech recognition from audio-visual inputs and can… (Apple Machine Learning Research)

APE: Active Prompt Engineering – Identifying Informative Few-Shot Examples for LLMs

Prompt engineering is an iterative procedure that often requires extensive manual effort to formulate suitable instructions for effectively directing large language models (LLMs) in specific tasks. Incorporating few-shot examples is a vital and efficacious approach to provide LLMs with precise and tangible instructions, leading to improved LLM performance. Nonetheless, identifying the most informative demonstrations for LLMs is labor-intensive, frequently entailing sifting through an extensive search space. In this demonstration, we showcase an interactive tool called APE (Active Prompt… (Apple Machine Learning Research)

ToolSandbox: A Stateful, Conversational, Interactive Evaluation Benchmark for LLM Tool Use Capabilities

Recent advancements in large language models (LLMs) have sparked growing research interest in tool-assisted LLMs solving real-world challenges, which calls for comprehensive evaluation of tool-use capabilities. While previous works focused on either evaluating over stateless web services (RESTful APIs) based on a single-turn user prompt or an off-policy dialog trajectory, ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation, and a dynamic evaluation strategy for intermediate and final… (Apple Machine Learning Research)

How Deltek uses Amazon Bedrock for question and answering on government solicitation documents

This post is co-written by Kevin Plexico and Shakun Vohra from Deltek.

Question answering (Q&A) over documents is a common application in use cases like customer support chatbots, legal research assistants, and healthcare advisors. Retrieval Augmented Generation (RAG) has emerged as a leading method for using the power of large language models (LLMs) to interact with documents in natural language.

This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center (GenAIIC) for Deltek, a globally recognized standard for project-based businesses in both government contracting and professional services. Deltek serves over 30,000 clients with industry-specific software and information solutions.

In this collaboration, the AWS GenAIIC team created a RAG-based solution for Deltek to enable Q&A on single and multiple government solicitation documents. The solution uses AWS services including Amazon Textract, Amazon OpenSearch Service, and Amazon Bedrock. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) and LLMs from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Deltek is continuously working on enhancing this solution to better align it with their specific requirements, such as supporting file formats beyond PDF and implementing more cost-effective approaches for their data ingestion pipeline.

What is RAG?

RAG is a process that optimizes the output of LLMs by allowing them to reference authoritative knowledge bases outside of their training data sources before generating a response. This approach addresses some of the challenges associated with LLMs, such as presenting false, outdated, or generic information, or creating inaccurate responses due to terminology confusion. RAG enables LLMs to generate more relevant, accurate, and contextual responses by cross-referencing an organization’s internal knowledge base or specific domains, without the need to retrain the model. It provides organizations with greater control over the generated text output and offers users insights into how the LLM generates the response, making it a cost-effective approach to improve the capabilities of LLMs in various contexts.

The main challenge

Applying RAG for Q&A on a single document is straightforward, but applying it across multiple related documents poses some unique challenges. For example, when using question answering on documents that evolve over time, it is essential to consider the chronological sequence of the documents if the question is about a concept that has changed. Not considering the order could result in an answer that was accurate at a past point but is now outdated based on more recent information across the collection of temporally aligned documents. Properly handling these temporal aspects is a key challenge when extending question answering from single documents to sets of interlinked documents that progress over time.

Solution overview

As an example use case, we describe Q&A on two temporally related documents: a long draft request-for-proposal (RFP) document, and a related subsequent government response to a request-for-information (RFI response), providing additional and revised information.

The solution develops a RAG approach in two steps.

The first step is data ingestion, as shown in the following diagram. This includes a one-time processing of PDF documents. The application component here is a user interface with minor processing such as splitting text and calling the services in the background. The steps are as follows:

  1. The user uploads documents to the application.
  2. The application uses Amazon Textract to get the text and tables from the input documents.
  3. The text embedding model processes the text chunks and generates embedding vectors for each text chunk.
  4. The embedding representations of text chunks along with related metadata are indexed in OpenSearch Service.

The second step is Q&A, as shown in the following diagram. In this step, the user asks a question about the ingested documents and expects a response in natural language. The application component here is a user interface with minor processing such as calling different services in the background. The steps are as follows:

  1. The user asks a question about the documents.
  2. The application uses the embedding model to convert the input question into an embedding vector, mapping it from text to the same space of numeric representations used for the document chunks.
  3. The application uses the question’s embedding vector to perform a semantic search on OpenSearch Service and retrieve the most relevant text chunks from the documents (also called context).
  4. The question and context are combined and fed as a prompt to an LLM on Amazon Bedrock. The language model generates a natural language response to the user’s question.

We used Amazon Textract in our solution, which can convert PDFs, PNGs, JPEGs, and TIFFs into machine-readable text. It also formats complex structures like tables for easier analysis. In the following sections, we provide an example to demonstrate Amazon Textract’s capabilities.
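
To illustrate the API itself, the following is a minimal sketch of extracting text and table structure with Amazon Textract from Python (the file name is hypothetical; synchronous analysis accepts single-page inputs, while multi-page PDFs go through the asynchronous StartDocumentAnalysis API with input from Amazon S3):

import boto3

textract = boto3.client("textract")

# Synchronous analysis accepts single-page documents passed as bytes;
# multi-page PDFs require the asynchronous StartDocumentAnalysis API with S3 input.
with open("rfi-response-page.png", "rb") as f:  # hypothetical file name
    result = textract.analyze_document(
        Document={"Bytes": f.read()},
        FeatureTypes=["TABLES"],  # extract table structure in addition to raw text
    )

# Collect the detected lines of text; TABLE blocks carry the table structure
lines = [block["Text"] for block in result["Blocks"] if block["BlockType"] == "LINE"]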

OpenSearch is an open source, distributed search and analytics suite derived from Elasticsearch. It uses a vector database structure to efficiently store and query large volumes of data. OpenSearch Service currently has tens of thousands of active customers with hundreds of thousands of clusters under management, processing hundreds of trillions of requests per month. We used OpenSearch Service and its underlying vector database to do the following:

  • Index documents into the vector space, allowing related items to be located in proximity for improved relevancy
  • Quickly retrieve related document chunks at the question answering step using approximate nearest neighbor search across vectors

The vector database inside OpenSearch Service enabled efficient storage and fast retrieval of related data chunks to power our question answering system. By modeling documents as vectors, we could find relevant passages even without explicit keyword matches.
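
As a minimal sketch of how such an index could be set up with the opensearch-py client (the domain endpoint, index name, and field types here are assumptions for illustration; authentication is omitted):

from opensearchpy import OpenSearch

# Hypothetical domain endpoint; authentication details omitted for brevity
client = OpenSearch(hosts=[{"host": "<opensearch-domain-endpoint>", "port": 443}], use_ssl=True)

index_mapping = {
    "settings": {"index": {"knn": True}},  # enable the k-NN plugin; engine/method defaults apply
    "mappings": {
        "properties": {
            "embedding_vector": {"type": "knn_vector", "dimension": 1536},  # Titan embedding size
            "text_chunk": {"type": "text"},
            "document_name": {"type": "keyword"},
            "section_name": {"type": "keyword"},
            "release_date": {"type": "date"},
        }
    },
}
client.indices.create(index="rfp-documents", body=index_mapping)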

Text embedding models are machine learning (ML) models that map words or phrases from text to dense vector representations. Text embeddings are commonly used in information retrieval systems like RAG for the following purposes:

  • Document embedding – Embedding models are used to encode document content and map it to an embedding space. It is common to first split a document into smaller chunks such as paragraphs, sections, or fixed-size chunks.
  • Query embedding – User queries are embedded into vectors so they can be matched against document chunks by performing semantic search.

For this post, we used the Amazon Titan model, Amazon Titan Embeddings G1 – Text v1.2, which accepts up to 8,000 tokens and outputs a numerical vector of 1,536 dimensions. The model is available through Amazon Bedrock.
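
The following is a minimal sketch of retrieving a Titan embedding through the Amazon Bedrock runtime API (the model ID and region are assumptions; check Amazon Bedrock for the identifiers available in your account):

import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption

def embed_text(text: str) -> list:
    """Return the 1,536-dimension Titan embedding for a piece of text."""
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",  # Titan Embeddings G1 - Text
        body=json.dumps({"inputText": text}),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["embedding"]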

Amazon Bedrock provides ready-to-use FMs from top AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. It offers a single interface to access these models and build generative AI applications while maintaining privacy and security. We used Anthropic Claude v2 on Amazon Bedrock to generate natural language answers given a question and a context.

In the following sections, we look at the two stages of the solution in more detail.

Data ingestion

First, the draft RFP and RFI response documents are processed to be used at Q&A time. Data ingestion includes the following steps:

  1. Documents are passed to Amazon Textract to be converted into text.
  2. To better enable our language model to answer questions about tables, we created a parser that converts tables from the Amazon Textract output into CSV format. Transforming tables into CSV improves the model’s comprehension. For instance, the following figures show part of an RFI response document in PDF format, followed by its corresponding extracted text. In the extracted text, the table has been converted to CSV format and sits among the rest of the text.
  3. For long documents, the extracted text may exceed the LLM’s input size limitation. In these cases, we can divide the text into smaller, overlapping chunks. The chunk sizes and overlap proportions may vary depending on the use case. We apply section-aware chunking (performing chunking independently on each document section), which we discuss in our example use case later in this post.
  4. Some classes of documents may follow a standard layout or format. This structure can be used to optimize data ingestion. For example, RFP documents tend to have a certain layout with defined sections. Using the layout, each document section can be processed independently. Also, if a table of contents exists but is not relevant, it can potentially be removed. We provide a demonstration of detecting and using document structure later in this post.
  5. The embedding vector for each text chunk is retrieved from an embedding model.
  6. At the last step, the embedding vectors are indexed into an OpenSearch Service database. In addition to the embedding vector, the text chunk and document metadata such as the document name, document section name, and document release date are also added to the index as text fields. The document release date is useful metadata when documents are related chronologically, so that the LLM can identify the most up-to-date information. The following code snippet shows the index body:
index_body = {
    "embedding_vector": embedding_vector,  # embedding vector of the text chunk
    "text_chunk": text_chunk,              # raw text of the chunk
    "document_name": document_name,
    "section_name": section_name,
    "release_date": release_date,
    # more metadata can be added
}
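
Continuing the earlier opensearch-py sketch, the populated index body can then be written to the index (the index name is the same assumption as before):

# Store the chunk, its embedding, and its metadata as one document in the index
client.index(index="rfp-documents", body=index_body)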

Q&A

In the Q&A phase, users can submit a natural language question about the draft RFP and RFI response documents ingested in the previous step. First, semantic search is used to retrieve text chunks relevant to the user’s question. Then, the question is augmented with the retrieved context to create a prompt. Finally, the prompt is sent to Amazon Bedrock for an LLM to generate a natural language response. The detailed steps are as follows:

  1. An embedding representation of the input question is retrieved from the Amazon Titan embedding model on Amazon Bedrock.
  2. The question’s embedding vector is used to perform semantic search on OpenSearch Service and find the top K relevant text chunks. The following is an example of a search body passed to OpenSearch Service. For more details, see the OpenSearch documentation on structuring a search query.
search_body = {
    "size": top_K,
    "query": {
        "script_score": {
            "query": {
                "match_all": {}, # skip full text search
            },
            "script": {
                "lang": "knn",
                "source": "knn_score",
                "params": {
                    "field": "embedding-vector",
                    "query_value": question_embedding,
                    "space_type": "cosinesimil"
                }
            }
        }
    }
}
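
Assuming the same opensearch-py client as in the earlier sketch, the search can be executed and the hits unpacked as follows:

# Run the k-NN script-score query and keep the top K matching chunks
results = client.search(index="rfp-documents", body=search_body)
relevant_chunks = [hit["_source"]["text_chunk"] for hit in results["hits"]["hits"]]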

  3. Any retrieved metadata, such as section name or document release date, is used to enrich the text chunks and provide more information to the LLM, such as the following:
    def opensearch_result_to_context(os_res: dict) -> str:
        """
        Convert OpenSearch result to context
        Args:
        os_res (dict): Amazon OpenSearch results
        Returns:
        context (str): Context to be included in LLM's prompt
        """
        data = os_res["hits"]["hits"]
        context = []
        for item in data:
            text = item["_source"]["text_chunk"]
            doc_name = item["_source"]["document_name"]
            section_name = item["_source"]["section_name"]
            release_date = item["_source"]["release_date"]
            context.append(
                f"<<Context>>: [Document name: {doc_name}, Section name: {section_name}, Release date: {release_date}] {text}"
            )
        context = "n n ------ n n".join(context)
        return context

  4. The input question is combined with the retrieved context to create a prompt. In some cases, depending on the complexity or specificity of the question, an additional chain-of-thought (CoT) prompt may need to be added to provide further clarification and guidance to the LLM. The CoT prompt walks the LLM through the logical reasoning steps required to properly understand the question and formulate a response: it lays out a path for the model to identify the key information in the question, determine what kind of response is needed, and construct that response in an appropriate and accurate way. We use the following CoT prompt for this use case:
"""
Context below includes a few paragraphs from draft RFP and RFI response documents:

Context: {context}

Question: {question}

Think step by step:

1- Find all the paragraphs in the context that are relevant to the question.
2- Sort the paragraphs by release date.
3- Use the paragraphs to answer the question.

Note: Pay attention to the updated information based on the release dates.
"""
  5. The prompt is passed to an LLM on Amazon Bedrock to generate a response in natural language. We use the following inference configuration for the Anthropic Claude v2 model on Amazon Bedrock. The temperature parameter is set to zero for reproducibility and to reduce LLM hallucination. For regular RAG applications, top_k and top_p are usually set to 250 and 1, respectively. Set max_tokens_to_sample to the maximum number of tokens expected to be generated (1 token is approximately 3/4 of a word). See Inference parameters for more details. A minimal invocation sketch follows the configuration.
{
    "temperature": 0,
    "top_k": 250,
    "top_p": 1,
    "max_tokens_to_sample": 300,
    "stop_sequences": [“nnHuman:nn”]
}
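
The following is a minimal sketch of sending the prompt with this configuration to Claude v2 through the Bedrock runtime API (the model ID is the public Bedrock identifier; the prompt template variable and retrieval outputs are assumptions carried over from the earlier steps):

import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# `cot_prompt_template`, `context`, and `question` come from the earlier steps (hypothetical names)
prompt = cot_prompt_template.format(context=context, question=question)

body = json.dumps({
    "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",  # turn format expected by Claude on Bedrock
    "temperature": 0,
    "top_k": 250,
    "top_p": 1,
    "max_tokens_to_sample": 300,
    "stop_sequences": ["\n\nHuman:\n\n"],
})
response = bedrock.invoke_model(modelId="anthropic.claude-v2", body=body)
answer = json.loads(response["body"].read())["completion"]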

Example use case

As a demonstration, we describe an example of Q&A on two related documents: a draft RFP document in PDF format with 167 pages, and an RFI response document in PDF format with 6 pages released later, which includes additional information and updates to the draft RFP.

The following is an example question asking if the project size requirements have changed, given the draft RFP and RFI response documents:

Have the original scoring evaluations changed? If yes, what are the new project sizes?

The following figure shows the relevant sections of the draft RFP document that contain the answers.

The following figure shows the relevant sections of the RFI response document that contain the answers.

For the LLM to generate the correct response, the retrieved context from OpenSearch Service should contain the tables shown in the preceding figures, and the LLM should be able to infer the order of the retrieved contents from metadata, such as release dates, and generate a readable response in natural language.

The following are the data ingestion steps:

  1. The draft RFP and RFI response documents are passed to Amazon Textract to extract the text and tables as the content. Additionally, we used regular expressions to identify document sections and the table of contents (see the following figures). The table of contents can be removed for this use case because it doesn’t contain any relevant information.

  2. We split each document section independently into smaller chunks with some overlap. For this use case, we used a chunk size of 500 tokens with an overlap of 100 tokens (1 token is approximately 3/4 of a word). We used a BPE tokenizer, where each token corresponds to about 4 bytes. A minimal chunking sketch follows this list.
  3. An embedding representation of each text chunk is obtained using the Amazon Titan Embeddings G1 – Text v1.2 model on Amazon Bedrock.
  4. Each text chunk is stored into an OpenSearch Service index along with metadata such as section name and document release date.
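
The following is a minimal sketch of the section-level chunking step, assuming the tiktoken package as the BPE tokenizer (the post does not name the tokenizer used, so this choice is illustrative):

import tiktoken  # assumed tokenizer package; any BPE tokenizer works the same way

enc = tiktoken.get_encoding("cl100k_base")

def chunk_section(text: str, chunk_size: int = 500, overlap: int = 100) -> list:
    """Split one document section into overlapping token chunks."""
    tokens = enc.encode(text)
    step = chunk_size - overlap  # 400-token stride gives a 100-token overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(enc.decode(tokens[start : start + chunk_size]))
        if start + chunk_size >= len(tokens):
            break  # the final window already reaches the end of the section
    return chunks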

The Q&A steps are as follows:

  1. The input question is first transformed into a numeric vector using the embedding model. The vector representation is then used for semantic search and retrieval of relevant context in the next step.
  2. The top K relevant text chunks and their metadata are retrieved from OpenSearch Service.
  3. The opensearch_result_to_context function and the prompt template (defined earlier) are used to create the prompt given the input question and retrieved context.
  4. The prompt is sent to the LLM on Amazon Bedrock to generate a response in natural language. The following is the response generated by Anthropic Claude v2, which matched the information presented in the draft RFP and RFI response documents. The question was “Have the original scoring evaluations changed? If yes, what are the new project sizes?” Using CoT prompting, the model can correctly answer the question.

Key features

The solution contains the following key features:

  • Section-aware chunking – Identify document sections and split each section independently into smaller chunks with some overlaps to optimize data ingestion.
  • Table to CSV transformation – Convert tables extracted by Amazon Textract into CSV format to improve the language model’s ability to comprehend and answer questions about tables.
  • Adding metadata to index – Store metadata such as section name and document release date along with text chunks in the OpenSearch Service index. This allows the language model to identify the most up-to-date or relevant information.
  • CoT prompt – Design a chain-of-thought prompt to provide further clarification and guidance to the language model on the logical steps needed to properly understand the question and formulate an accurate response.

These contributions helped improve the accuracy and capabilities of the solution for answering questions about documents. In fact, based on Deltek’s subject matter experts’ evaluations of LLM-generated responses, the solution achieved a 96% overall accuracy rate.

Conclusion

This post outlined an application of generative AI for question answering across multiple government solicitation documents. The solution discussed was a simplified presentation of a pipeline developed by the AWS GenAIIC team in collaboration with Deltek. We described an approach to enable Q&A on lengthy documents published separately over time. Using Amazon Bedrock and OpenSearch Service, this RAG architecture can scale for enterprise-level document volumes. Additionally, a prompt template was shared that uses CoT logic to guide the LLM in producing accurate responses to user questions. Although this solution is simplified, this post aimed to provide a high-level overview of a real-world generative AI solution for streamlining review of complex proposal documents and their iterations.

Deltek is actively refining and optimizing this solution to ensure it meets their unique needs. This includes expanding support for file formats other than PDF, as well as adopting more cost-efficient strategies for their data ingestion pipeline.

Learn more about prompt engineering and generative AI-powered Q&A in the Amazon Bedrock Workshop. For technical support or to contact AWS generative AI specialists, visit the GenAIIC webpage.


About the Authors

Kevin Plexico is Senior Vice President of Information Solutions at Deltek, where he oversees research, analysis, and specification creation for clients in the Government Contracting and AEC industries. He leads the delivery of GovWin IQ, providing essential government market intelligence to over 5,000 clients, and manages the industry’s largest team of analysts in this sector. Kevin also heads Deltek’s Specification Solutions products, producing premier construction specification content including MasterSpec® for the AIA and SpecText.

Shakun Vohra is a distinguished technology leader with over 20 years of expertise in Software Engineering, AI/ML, Business Transformation, and Data Optimization. At Deltek, he has driven significant growth, leading diverse, high-performing teams across multiple continents. Shakun excels in aligning technology strategies with corporate goals, collaborating with executives to shape organizational direction. Renowned for his strategic vision and mentorship, he has consistently fostered the development of next-generation leaders and transformative technological solutions.

Amin Tajgardoon is an Applied Scientist at the AWS Generative AI Innovation Center. He has an extensive background in computer science and machine learning. In particular, Amin’s focus has been on deep learning and forecasting, prediction explanation methods, model drift detection, probabilistic generative models, and applications of AI in the healthcare domain.

Anila Joshi has more than a decade of experience building AI solutions. As an Applied Science Manager at AWS Generative AI Innovation Center, Anila pioneers innovative applications of AI that push the boundaries of possibility and accelerate the adoption of AWS services with customers by helping customers ideate, identify, and implement secure generative AI solutions.

Yash Shah and his team of scientists, specialists, and engineers at the AWS Generative AI Innovation Center work with some of AWS’s most strategic customers on realizing the art of the possible with generative AI by driving business value. Yash has been with Amazon for more than 7.5 years and has worked with customers across healthcare, sports, manufacturing, and software in multiple geographic regions.

Jordan Cook is an accomplished AWS Sr. Account Manager with nearly two decades of experience in the technology industry, specializing in sales and data center strategy. Jordan leverages his extensive knowledge of Amazon Web Services and deep understanding of cloud computing to provide tailored solutions that enable businesses to optimize their cloud infrastructure, enhance operational efficiency, and drive innovation.

Golden Opportunities: California to Train Students, Educators in AI

The State of California today announced a first-of-its-kind AI education initiative with NVIDIA.

The public-private collaboration supports the state’s goals in workforce training and economic development by giving universities, community colleges and adult education programs in California the resources to gain skills in generative AI.

“AI will continue to become more advanced and more prominent in all sectors, and California has the responsibility to support and prepare our students and faculties,” said Amy Tong, secretary of the California Government Operations Agency. “As a world leader in AI computing, NVIDIA is a natural partner to prepare the future of California’s workforce.”

Working With California Colleges and Universities

Through this initiative, California educators can gain certification through the NVIDIA Deep Learning Institute University Ambassador Program, which connects instructors with high-quality teaching kits, workshop content and NVIDIA GPU-accelerated workstations in the cloud.

“It’s always good to equip our professors and teachers because, as mentors to our youth, they are in the best position to help shape students’ career paths,” Tong said.

By empowering educators across the state with the skills to harness the latest AI technologies and NVIDIA GPUs, the initiative can prepare full-time students about to enter the workforce and train working professionals who are expanding their skills through community college or adult education courses.

“We want to train a workforce of the future, and also excite students and adults who are out of the workforce about opportunities for the future,” said Stewart Knox, secretary of the California Labor and Workforce Development Agency.

The state agencies are also exploring how internship and apprenticeship programs can offer students hands-on experience with AI skills.

Bolstering Efforts to Bridge the Digital Divide

NVIDIA is already working on multiple projects across California to make AI more accessible and understandable for students from a variety of backgrounds. The company’s educational initiatives and industry-spanning collaborations are helping students and professionals in biotechnology and life sciences, advanced manufacturing, media and entertainment, and other fields to gain proficiency in harnessing AI to support their work, enhance their productivity and drive innovation.

San José State University is evaluating how the NVIDIA Omniverse development platform could support the creation of digital twins — 3D virtual representations of real-world systems — for the city of San José. During the university’s annual Black Engineer Week in June, NVIDIA hosted dozens of students for a daylong program featuring tech demos and career advice discussions.

NVIDIA is embarking on several workforce, climate and community-based projects with schools in the University of California and California State University systems. One project plans to train students on underwater data center technology, while another is working with California Black Media to train a large language model on nearly a century of journalism by Black journalists in the state.

The NVIDIA GTC AI conference, held earlier this year in San José, featured several sessions for educators to explore how to integrate generative AI and NVIDIA technologies into their curricula — as well as a panel discussion about the need for equitable access to AI education and resources.

Learn more about NVIDIA’s AI education resources.

ACL Conference 2024

Apple is sponsoring the annual meeting of the Association for Computational Linguistics (ACL), which takes place in person from August 11 to 16, in Bangkok, Thailand. ACL is a conference in the field of computational linguistics, covering a broad spectrum of diverse research areas that are concerned with computational approaches to natural language. Below is the schedule of Apple-sponsored workshops and events at ACL 2024.

Schedule
Stop by the Apple booth at the Centara Grand and Bangkok Convention Center, Floor 22, Booth #1, from 9:00 – 17:30 (UTC+7) on August 12, 13, and 14.
Monday… (Apple Machine Learning Research)

Cisco achieves 50% latency improvement using Amazon SageMaker Inference faster autoscaling feature

This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco.

Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels its innovation, which leverages AI and machine learning to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps – including AWS.

Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, leveraging LLMs to improve user productivity and experiences. In the past year, the team has increasingly focused on building artificial intelligence (AI) capabilities powered by large language models (LLMs) to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing, and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLMs grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance.

This post highlights how Cisco used SageMaker Inference’s new faster autoscaling feature. For more details on Cisco’s use cases, solution, and benefits, see How Cisco accelerated the use of generative AI with Amazon SageMaker Inference.

In this post, we discuss the following:

  1. An overview of Cisco’s use case and architecture
  2. The new faster autoscaling feature, covering:
    1. Single-model real-time endpoints
    2. Deployment using Amazon SageMaker inference components
  3. The performance improvements Cisco saw with the faster autoscaling feature for generative AI inference
  4. Next steps

Cisco’s use case: Enhancing contact center experiences

Webex is applying generative AI to its contact center solutions, enabling more natural, human-like conversations between customers and agents. The AI can generate contextual, empathetic responses to customer inquiries, as well as automatically draft personalized emails and chat messages. This helps contact center agents work more efficiently while maintaining a high level of customer service.

Architecture

Initially, WxAI embedded LLM models directly into the application container images running on Amazon Elastic Kubernetes Service (Amazon EKS). However, as the models grew larger and more complex, this approach faced significant scalability and resource utilization challenges. Operating the resource-intensive LLMs through the applications required provisioning substantial compute resources, which slowed down processes like allocating resources and starting applications. This inefficiency hampered WxAI’s ability to rapidly develop, test, and deploy new AI-powered features for the Webex portfolio.

To address these challenges, WxAI team turned to SageMaker Inference – a fully managed AI inference service that allows seamless deployment and scaling of models independently from the applications that use them. By decoupling the LLM hosting from the Webex applications, WxAI could provision the necessary compute resources for the models without impacting the core collaboration and communication capabilities.

“The applications and the models work and scale fundamentally differently, with entirely different cost considerations; by separating them rather than lumping them together, it’s much simpler to solve issues independently.”

– Travis Mehlinger, Principal Engineer at Cisco. 

This architectural shift has enabled Webex to harness the power of generative AI across its suite of collaboration and customer engagement solutions.

At the time, the SageMaker endpoint used autoscaling based on the invocations-per-instance metric. However, it took approximately 6 minutes to detect the need for autoscaling.

Introducing new predefined metric types for faster autoscaling

The Cisco Webex AI team wanted to improve their inference autoscaling times, so they worked with the Amazon SageMaker team on the improvements described in this section.

Amazon SageMaker’s real-time inference endpoint offers a scalable, managed solution for hosting Generative AI models. This versatile resource can accommodate multiple instances, serving one or more deployed models for instant predictions. Customers have the flexibility to deploy either a single model or multiple models using SageMaker InferenceComponents on the same endpoint. This approach allows for efficient handling of diverse workloads and cost-effective scaling.

To optimize real-time inference workloads, SageMaker employs application automatic scaling (auto scaling). This feature dynamically adjusts both the number of instances in use and the quantity of model copies deployed (when using inference components), responding to real-time changes in demand. When traffic to the endpoint surpasses a predefined threshold, auto scaling increases the available instances and deploys additional model copies to meet the heightened demand. Conversely, as workloads decrease, the system automatically removes unnecessary instances and model copies, effectively reducing costs. This adaptive scaling ensures that resources are optimally utilized, balancing performance needs with cost considerations in real-time.

Working with Cisco, Amazon SageMaker released a new sub-minute, high-resolution predefined metric type, SageMakerVariantConcurrentRequestsPerModelHighResolution, for faster autoscaling and reduced detection time. This new high-resolution metric has been shown to reduce scaling detection times by up to 6x (compared to the existing SageMakerVariantInvocationsPerInstance metric), thereby improving overall end-to-end inference latency by up to 50% on endpoints hosting generative AI models like Llama3-8B.
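
As an illustration, a target-tracking scaling policy that uses the new metric can be attached to an endpoint variant through the Application Auto Scaling API. The endpoint name, variant name, capacity bounds, and target value below are assumptions to be tuned per workload:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint and variant names
resource_id = "endpoint/my-llm-endpoint/variant/AllTraffic"

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=30,
)

autoscaling.put_scaling_policy(
    PolicyName="concurrent-requests-high-resolution",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 5.0,  # target concurrent requests per model; tune per workload
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantConcurrentRequestsPerModelHighResolution"
        },
    },
)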

With this release, SageMaker real-time endpoints also emit the new ConcurrentRequestsPerModel and ConcurrentRequestsPerModelCopy CloudWatch metrics, which are better suited for monitoring and scaling Amazon SageMaker endpoints hosting LLMs and FMs.

Cisco’s evaluation of the faster autoscaling feature for generative AI inference

Cisco evaluated Amazon SageMaker’s new predefined metric types for faster autoscaling on their generative AI workloads. They observed up to a 50% improvement in end-to-end inference latency by using the new SageMakerVariantConcurrentRequestsPerModelHighResolution metric, compared to the existing SageMakerVariantInvocationsPerInstance metric.

The setup involved using their generative AI models on SageMaker’s real-time inference endpoints. SageMaker’s autoscaling feature dynamically adjusted both the number of instances and the quantity of model copies deployed to meet real-time changes in demand. The new high-resolution SageMakerVariantConcurrentRequestsPerModelHighResolution metric reduced scaling detection times by up to 6x, enabling faster autoscaling and lower latency.

In addition, SageMaker now emits new CloudWatch metrics, including ConcurrentRequestsPerModel and ConcurrentRequestsPerModelCopy, which are better suited for monitoring and scaling endpoints hosting large language models (LLMs) and foundation models (FMs). This enhanced autoscaling capability has been a game-changer for Cisco, helping to improve the performance and efficiency of their critical Generative AI applications.

“We are really pleased with the performance improvements we’ve seen from Amazon SageMaker’s new autoscaling metrics. The higher-resolution scaling metrics have significantly reduced latency during initial load and scale-out on our Gen AI workloads. We’re excited to do a broader rollout of this feature across our infrastructure.”

– Travis Mehlinger, Principal Engineer at Cisco.

Cisco further plans to work with the SageMaker Inference team to drive improvements in the remaining factors that impact autoscaling latency, such as model download and load times.

Conclusion

Cisco’s Webex AI team is continuing to leverage Amazon SageMaker Inference to power generative AI experiences across its Webex portfolio. Evaluation of SageMaker’s faster autoscaling has shown Cisco up to 50% latency improvements on its generative AI inference endpoints. As the WxAI team continues to push the boundaries of AI-driven collaboration, its partnership with Amazon SageMaker will be crucial in informing upcoming improvements and advanced generative AI inference capabilities. With this new feature, Cisco looks forward to further optimizing its AI inference performance by rolling it out broadly in multiple regions and delivering even more impactful generative AI features to its customers.


About the Authors

Travis Mehlinger is a Principal Software Engineer in the Webex Collaboration AI group, where he helps teams develop and operate cloud-native AI and ML capabilities to support Webex AI features for customers around the world. In his spare time, Travis enjoys cooking barbecue, playing video games, and traveling around the US and UK to race go karts.

Karthik Raghunathan is the Senior Director for Speech, Language, and Video AI in the Webex Collaboration AI Group. He leads a multidisciplinary team of software engineers, machine learning engineers, data scientists, computational linguists, and designers who develop advanced AI-driven features for the Webex collaboration portfolio. Prior to Cisco, Karthik held research positions at MindMeld (acquired by Cisco), Microsoft, and Stanford University.

Praveen Chamarthi is a Senior AI/ML Specialist with Amazon Web Services. He is passionate about AI/ML and all things AWS. He helps customers across the Americas to scale, innovate, and operate ML workloads efficiently on AWS. In his spare time, Praveen loves to read and enjoys sci-fi movies.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing AI. He focuses on core challenges related to deploying complex AI applications, multi-tenant models, cost optimizations, and making deployment of Generative AI models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Ravi Thakur is a Sr Solutions Architect Supporting Strategic Industries at AWS, and is based out of Charlotte, NC. His career spans diverse industry verticals, including banking, automotive, telecommunications, insurance, and energy. Ravi’s expertise shines through his dedication to solving intricate business challenges on behalf of customers, utilizing distributed, cloud-native, and well-architected design patterns. His proficiency extends to microservices, containerization, AI/ML, Generative AI, and more. Today, Ravi empowers AWS Strategic Customers on personalized digital transformation journeys, leveraging his proven ability to deliver concrete, bottom-line benefits.

How Cisco accelerated the use of generative AI with Amazon SageMaker Inference

This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco.

Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels their innovation, which uses artificial intelligence (AI) and machine learning (ML), to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps—including AWS.

Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, using large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building AI capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing (NLP), and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance.

This post highlights how Cisco implemented new functionalities and migrated existing workloads to Amazon SageMaker inference components for their industry-specific contact center use cases. By integrating generative AI, they can now analyze call transcripts to better understand customer pain points and improve agent productivity. Cisco has also implemented conversational AI experiences, including chatbots and virtual agents that can generate human-like responses, to automate personalized communications based on customer context. Additionally, they are using generative AI to extract key call drivers, optimize agent workflows, and gain deeper insights into customer sentiment. Cisco’s adoption of SageMaker Inference has enabled them to streamline their contact center operations and provide more satisfying, personalized interactions that address customer needs.

In this post, we discuss the following:

  • Cisco’s business use cases and outcomes
  • How Cisco accelerated the use of generative AI powered by LLMs for their contact center use cases with the help of SageMaker Inference
  • Cisco’s generative AI inference architecture, which is built as a robust and secure foundation, using various services and features such as SageMaker Inference, Amazon Bedrock, Kubernetes, Prometheus, Grafana, and more
  • How Cisco uses an LLM router and auto scaling to route requests to appropriate LLMs for different tasks while simultaneously scaling their models for resiliency and performance efficiency
  • How the solutions in this post impacted Cisco’s business roadmap and strategic partnership with AWS
  • How Cisco helped SageMaker Inference build new capabilities to deploy generative AI applications at scale

Enhancing collaboration and customer engagement with generative AI: Webex’s AI-powered solutions

In this section, we discuss Cisco’s AI-powered use cases.

Meeting summaries and insights

For Webex Meetings, the platform uses generative AI to automatically summarize meeting recordings and transcripts. This extracts the key takeaways and action items, helping distributed teams stay informed even if they missed a live session. The AI-generated summaries provide a concise overview of important discussions and decisions, allowing employees to quickly get up to speed. Beyond summaries, Webex’s generative AI capabilities also surface intelligent insights from meeting content. This includes identifying action items, highlighting critical decisions, and generating personalized meeting notes and to-do lists for each participant. These insights help make meetings more productive and hold attendees accountable.

Enhancing contact center experiences

Webex is also applying generative AI to its contact center solutions, enabling more natural, human-like conversations between customers and agents. The AI can generate contextual, empathetic responses to customer inquiries, as well as automatically draft personalized emails and chat messages. This helps contact center agents work more efficiently while maintaining a high level of customer service.

Webex customers realize positive outcomes with generative AI

Webex’s adoption of generative AI is driving tangible benefits for customers. Clients using the platform’s AI-powered meeting summaries and insights have reported productivity gains. Webex customers using the platform’s generative AI for contact centers have handled hundreds of thousands of calls with improved customer satisfaction and reduced handle times, enabling more natural, empathetic conversations between agents and clients. Webex’s strategic integration of generative AI is empowering users to work smarter and deliver exceptional experiences.

For more details on how Webex is harnessing generative AI to enhance collaboration and customer engagement, see Webex | Exceptional Experiences for Every Interaction on the Webex blog.

Using SageMaker Inference to optimize resources for Cisco

Cisco’s WxAI team is dedicated to delivering advanced collaboration experiences powered by cutting-edge ML. The team develops a comprehensive suite of AI and ML features for the Webex ecosystem, including audio intelligence capabilities like noise removal and optimizing speaker voices, language intelligence for transcription and translation, and video intelligence features like virtual backgrounds. At the forefront of WxAI’s innovations is the AI-powered Webex Assistant, a virtual assistant that provides voice-activated control and seamless meeting support in multiple languages. To build these sophisticated capabilities, WxAI uses LLMs, which can contain up to hundreds of gigabytes of training data.

Initially, WxAI embedded LLM models directly into the application container images running on Amazon Elastic Kubernetes Service (Amazon EKS). However, as the models grew larger and more complex, this approach faced significant scalability and resource utilization challenges. Operating the resource-intensive LLMs through the applications required provisioning substantial compute resources, which slowed down processes like allocating resources and starting applications. This inefficiency hampered WxAI’s ability to rapidly develop, test, and deploy new AI-powered features for the Webex portfolio. To address these challenges, the WxAI team turned to SageMaker Inference—a fully managed AI inference service that allows seamless deployment and scaling of models independently from the applications that use them. By decoupling the LLM hosting from the Webex applications, WxAI could provision the necessary compute resources for the models without impacting the core collaboration and communication capabilities.

 “The applications and the models work and scale fundamentally differently, with entirely different cost considerations; by separating them rather than lumping them together, it’s much simpler to solve issues independently.”

– Travis Mehlinger, Principal Engineer at Cisco.

This architectural shift has enabled Webex to harness the power of generative AI across its suite of collaboration and customer engagement solutions.

Solution overview: Improving efficiency and reducing costs by migrating to SageMaker Inference

To address the scalability and resource utilization challenges faced with embedding LLMs directly into their applications, the WxAI team migrated to SageMaker Inference. By taking advantage of this fully managed service for deploying LLMs, Cisco unlocked significant performance and cost-optimization opportunities. Key benefits include the ability to deploy multiple LLMs behind a single endpoint for faster scaling and improved response latencies, as well as cost savings. Additionally, the WxAI team implemented an LLM proxy to simplify access to LLMs for Webex teams, enable centralized data collection, and reduce operational overhead. With SageMaker Inference, Cisco can efficiently manage and scale their LLM deployments, harnessing the power of generative AI across the Webex portfolio while maintaining optimal performance, scalability, and cost-effectiveness.

The following diagram illustrates the WxAI architecture on AWS.

The architecture is built on a robust and secure AWS foundation:

  • The architecture uses AWS services like Application Load Balancer, AWS WAF, and EKS clusters for seamless ingress, threat mitigation, and containerized workload management.
  • The LLM proxy (a microservice deployed on an EKS pod as part of the Service VPC) simplifies the integration of LLMs for Webex teams, providing a streamlined interface and reducing operational overhead. The LLM proxy supports LLM deployments on SageMaker Inference, Amazon Bedrock, or other LLM providers for Webex teams.
  • The architecture uses SageMaker Inference for optimized model deployment, auto scaling, and routing mechanisms.
  • The system integrates Loki for logging, Amazon Managed Service for Prometheus for metrics, and Grafana for unified visualization, seamlessly integrated with Cisco SSO.
  • The Data VPC houses the data layer components, including Amazon ElastiCache for caching and Amazon Relational Database Service (Amazon RDS) for database services, providing efficient data access and management.

Use case overview: Contact center topic analytics

A key focus area for the WxAI team is to enhance the capabilities of the Webex Contact Center platform. A typical Webex Contact Center installation has hundreds of agents handling many interactions through various channels like phone calls and digital channels. Webex’s AI-powered Topic Analytics feature extracts the key reasons customers call by analyzing aggregated historical interactions and clustering them into meaningful topic categories, as shown in the following screenshot. The contact center administrator can then use these insights to optimize operations, enhance agent performance, and ultimately deliver a more satisfactory customer experience.

The Topic Analytics feature is powered by a pipeline of three models: a call driver extraction model, a topic clustering model, and a topic labeling model, as illustrated in the following diagram.

The model details are as follows:

  • Call driver extraction – This generative model summarizes the primary reason or intent (referred to as the call driver) behind a customer’s call. Accurate automatic tagging of calls with call drivers helps contact center supervisors and administrators quickly understand the primary reason for any historical call. One of the key considerations when solving this problem was selecting the right model to balance quality and operational costs. The WxAI team chose the FLAN-T5 model on SageMaker Inference and instruction fine-tuned it for extracting call drivers from call transcripts. FLAN-T5 is a powerful text-to-text transfer transformer model that performs various natural language understanding and generation tasks. This workload had a global footprint, deployed in the us-east-2, eu-west-2, eu-central-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, and ca-central-1 AWS Regions.
  • Topic clustering – Although automatically tagging every contact center interaction with its call driver is a useful feature in itself, analyzing these call drivers in an aggregated fashion over a large batch of calls can uncover even more interesting trends and insights. The topic clustering model achieves this by clustering all the individually extracted call drivers from a large batch of calls into different topic clusters. It does this by creating a semantic embedding for each call driver and employing an unsupervised hierarchical clustering technique that operates on the vector embeddings. This results in distinct and coherent topic clusters where semantically similar call drivers are grouped together (a minimal clustering sketch follows this list).
  • Topic labeling – The topic labeling model is a generative model that creates a descriptive name to serve as the label for each topic cluster. Several LLMs were prompt-tuned and evaluated in a few-shot setting to choose the ideal model for the label generation task. Finally, Llama2-13b-chat, with its ability to better capture contextual nuances and semantics of natural language conversation, was used for its accuracy, performance, and cost-effectiveness. Additionally, Llama2-13b-chat was deployed and used on SageMaker inference components while maintaining relatively low operating costs compared to other LLMs, by using specific hardware like g4dn and g5 instances.
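
The following is a minimal sketch of the topic clustering step described above, assuming scikit-learn for the hierarchical clustering (the post does not name the library used; the embeddings and distance threshold here are placeholders):

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Stand-in for real call-driver embeddings: one semantic vector per extracted call driver
rng = np.random.default_rng(0)
call_driver_embeddings = rng.normal(size=(1000, 768))

clusterer = AgglomerativeClustering(
    n_clusters=None,            # let the distance threshold decide the number of clusters
    distance_threshold=0.4,     # placeholder; smaller values yield more, tighter clusters
    metric="cosine",
    linkage="average",
)
topic_labels = clusterer.fit_predict(call_driver_embeddings)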

This solution also used the auto scaling capabilities of SageMaker to dynamically adjust the number of instances, with a desired minimum of 1 and maximum of 30 instances. This approach provides efficient resource utilization while maintaining high throughput, allowing the WxAI platform to handle batch jobs overnight and scale to hundreds of inferences per minute during peak hours. By deploying the model on SageMaker Inference with auto scaling, the WxAI team was able to deliver reliable and accurate responses to customer interactions for their Topic Analytics use case.

By accurately pinpointing the call driver, the system can suggest appropriate actions, resources, and next steps to the agent, streamlining the customer support process, further leading to personalized and accurate responses to customer questions.

To handle fluctuating demand and optimize resource utilization, the WxAI team implemented auto scaling for their SageMaker Inference endpoints. They configured the endpoints to scale from a minimum to a maximum instance count based on GPU utilization. Additionally, the LLM proxy routed requests between the different LLMs deployed on SageMaker Inference. This proxy abstracts the complexities of communicating with various LLM providers and enables centralized data collection and analysis. This led to enhanced generative AI workflows, optimized latency, and personalized use case implementations.

Benefits

Through the strategic adoption of AWS AI services, Cisco’s WxAI team has realized significant benefits, enabling them to build cutting-edge, AI-powered collaboration capabilities more rapidly and cost-effectively:

  • Improved development and deployment cycle time – By decoupling models from applications, the team has streamlined processes like bug fixes, integration testing, and feature rollouts across environments, accelerating their overall development velocity.
  • Simplified engineering and delivery – The clear separation of concerns between the lean application layer and resource-intensive model layer has simplified engineering efforts and delivery, allowing the team to focus on innovation rather than infrastructure complexities.
  • Reduced costs – By using fully managed services like SageMaker Inference, the team has offloaded infrastructure management overhead. Additionally, capabilities like asynchronous inference and multi-model endpoints have enabled significant cost optimization without compromising performance or availability.
  • Scalability and performance – Services like SageMaker Inference and Amazon Bedrock, combined with technologies like NVIDIA Triton Inference Server on SageMaker, have empowered the WxAI team to scale their AI/ML workloads reliably and deliver high-performance inference for demanding use cases.
  • Accelerated innovation – The partnership with AWS has given the WxAI team access to cutting-edge AI services and expertise, enabling them to rapidly prototype and deploy innovative capabilities like the AI-powered Webex Assistant and advanced contact center AI features.

Cisco’s contributions to SageMaker Inference: Enhancing generative AI inference capabilities

Building upon the success of their strategic migration to SageMaker Inference, Cisco has been instrumental in partnering with the SageMaker Inference team to build and enhance key generative AI capabilities within the SageMaker platform. Since the early days of generative AI, Cisco has provided the SageMaker Inference team with valuable inputs and expertise, enabling the introduction of several new features and optimizations:

  • Cost and performance optimizations for generative AI inference – Cisco helped the SageMaker Inference team develop innovative techniques to optimize the use of accelerators, enabling SageMaker Inference to reduce foundation model (FM) deployment costs by 50% on average and latency by 20% on average with inference components. This breakthrough delivers significant cost savings and performance improvements for customers running generative AI workloads on SageMaker.
  • Scaling improvements for generative AI inference – Cisco’s expertise in distributed systems and auto scaling has also helped the SageMaker team develop advanced capabilities to better handle the scaling requirements of generative AI models. These improvements reduce auto scaling times by up to 40% and speed up scaling detection by 6 times, so customers can rapidly scale their generative AI workloads on SageMaker to meet spikes in demand without compromising performance.
  • Streamlined generative AI model deployment for inference – Recognizing the need for simplified generative AI model deployment, Cisco collaborated with AWS to introduce the ability to deploy open source LLMs and FMs with just a few clicks. This user-friendly functionality removes the complexity traditionally associated with deploying these advanced models, empowering more customers to harness the power of generative AI.
  • Simplified inference deployment for Kubernetes customers – Cisco’s deep expertise in Kubernetes and container technologies helped the SageMaker team develop new Kubernetes Operator-based inference capabilities. These innovations make it straightforward for customers running applications on Kubernetes to deploy and manage generative AI models, reducing LLM deployment costs by 50% on average.
  • Using NVIDIA Triton Inference Server for generative AI – Cisco worked with AWS to integrate the NVIDIA Triton Inference Server, a high-performance model serving container managed by SageMaker, to power generative AI inference on SageMaker Inference. This enabled the WxAI team to scale their AI/ML workloads reliably and deliver high-performance inference for demanding generative AI use cases.
  • Packaging generative AI models more efficiently – To further simplify the generative AI model lifecycle, Cisco worked with AWS to enhance the capabilities in SageMaker for packaging LLMs and FMs for deployment. These improvements make it straightforward to prepare and deploy these generative AI models, accelerating their adoption and integration.
  • Improved documentation for generative AI – Recognizing the importance of comprehensive documentation to support the growing generative AI ecosystem, Cisco collaborated with the AWS team to enhance the SageMaker documentation. This includes detailed guides, best practices, and reference materials tailored specifically for generative AI use cases, helping customers quickly ramp up their generative AI initiatives on the SageMaker platform.

By closely partnering with the SageMaker Inference team, Cisco has played a pivotal role in driving the rapid evolution of generative AI Inference capabilities in SageMaker. The features and optimizations introduced through this collaboration are empowering AWS customers to unlock the transformative potential of generative AI with greater ease, cost-effectiveness, and performance.

“Our partnership with the SageMaker Inference product team goes back to the early days of generative AI, and we believe the features we have built in collaboration, from cost optimizations to high-performance model deployment, will broadly help other enterprises rapidly adopt and scale generative AI workloads on SageMaker, unlocking new frontiers of innovation and business transformation.”

– Travis Mehlinger, Principal Engineer at Cisco.

Conclusion

By using AWS services like SageMaker Inference and Amazon Bedrock for generative AI, Cisco’s WxAI team has been able to optimize their AI/ML infrastructure, enabling them to build and deploy AI-powered features more efficiently, reliably, and cost-effectively. This strategic approach has unlocked significant benefits for Cisco in deploying and scaling its generative AI capabilities for the Webex platform. Cisco’s own journey with generative AI, as showcased in this post, offers valuable lessons and insights for other users of SageMaker Inference.

Recognizing the impact of generative AI, Cisco has played a crucial role in shaping the future of these capabilities within SageMaker Inference. By providing valuable insights and hands-on collaboration, Cisco has helped AWS develop a range of powerful features that are making generative AI more accessible and scalable for organizations. From optimizing infrastructure costs and performance to streamlining model deployment and scaling, Cisco’s contributions have been instrumental in enhancing the SageMaker Inference service.

Moving forward, the Cisco-AWS partnership aims to drive further advancements in areas like conversational and generative AI inference. As generative AI adoption accelerates across industries, Cisco’s Webex platform is designed to scale and streamline user experiences through various use cases discussed in this post and beyond. You can expect to see ongoing innovation from this collaboration in SageMaker Inference capabilities, as Cisco and SageMaker Inference continue to push the boundaries of what’s possible in the world of AI.

For more information on Webex Contact Center’s Topic Analytics feature and related AI capabilities, refer to The Webex Advantage: Navigating Customer Experience in the Age of AI on the Webex blog.


About the Authors

Travis Mehlinger is a Principal Software Engineer in the Webex Collaboration AI group, where he helps teams develop and operate cloud-centered AI and ML capabilities to support Webex AI features for customers around the world. In his spare time, Travis enjoys cooking barbecue, playing video games, and traveling around the US and UK to race go-karts.

Karthik Raghunathan is the Senior Director for Speech, Language, and Video AI in the Webex Collaboration AI Group. He leads a multidisciplinary team of software engineers, machine learning engineers, data scientists, computational linguists, and designers who develop advanced AI-driven features for the Webex collaboration portfolio. Prior to Cisco, Karthik held research positions at MindMeld (acquired by Cisco), Microsoft, and Stanford University.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Ravi Thakur is a Senior Solutions Architect at AWS, based in Charlotte, NC. He specializes in solving complex business challenges using distributed, cloud-centered, and well-architected patterns. Ravi’s expertise includes microservices, containerization, AI/ML, and generative AI. He empowers AWS strategic customers on digital transformation journeys, delivering bottom-line benefits. In his spare time, Ravi enjoys motorcycle rides, family time, reading, movies, and traveling.

Amit Arora is an AI and ML Specialist Architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington D.C.

Madhur Prashant is an AI and ML Solutions Architect at Amazon Web Services. He is passionate about the intersection of human thinking and generative AI. His interests lie in generative AI, specifically building solutions that are helpful and harmless, and most of all optimal for customers. Outside of work, he loves doing yoga, hiking, spending time with his twin, and playing the guitar.

Discover insights from Box with the Amazon Q Box connector

Seamless access to content and insights is crucial for delivering exceptional customer experiences and driving successful business outcomes. Box, a leading cloud content management platform, serves as a central repository for diverse digital assets and documents in many organizations. An enterprise Box account typically contains a wealth of materials, including documents, presentations, knowledge articles, and more. However, extracting meaningful information from the vast amount of Box data can be challenging without the right tools and capabilities. Employees in roles such as customer support, project management, and product management require the ability to effortlessly query Box content, uncover relevant insights, and make informed decisions that address customer needs effectively.

Building a generative artificial intelligence (AI)-powered conversational application that is seamlessly integrated with your enterprise’s relevant data sources requires time, money, and people. First, you need to develop connectors to those data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. You also need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take action using the data and expertise found in your company’s information repositories, code, and enterprise systems (such as Box, among others). Amazon Q provides out-of-the-box native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well-written answers. A data source connector is a component of Amazon Q that helps integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including Box Content Cloud, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more, and helps you create your generative AI solution with minimal configuration. For a full list of Amazon Q Business supported data source connectors, see Amazon Q Business connectors.

In this post, we guide you through the process of configuring and integrating Amazon Q for Business with your Box Content Cloud. This will enable your support, project management, product management, leadership, and other teams to quickly obtain accurate answers to their questions from the documents stored in your Box account.

Find accurate answers from Box documents using Amazon Q Business

After you integrate Amazon Q Business with Box, you can ask questions based on the documents stored in your Box account. For example:

  • Natural language search – You can search for information within documents located in any folder by using conversational language, simplifying the process of finding desired data without the need to remember specific keywords or filters.
  • Summarization – You can ask Amazon Q Business to summarize contents of documents to meet your needs. This enables you to quickly understand the main points and find relevant information in your documents without having to scan through individual document descriptions manually.

Overview of the Box connector for Amazon Q Business

To crawl and index contents in Box, you can configure the Amazon Q Business Box connector as a data source in your Amazon Q Business application. When you connect Amazon Q Business to a data source and initiate the sync process, Amazon Q Business crawls and indexes documents from the data source into its index.

Types of documents

Let’s look at what is considered a document in the context of the Amazon Q Business Box connector. A document is a collection of information that consists of a title, the content (or the body), metadata (data about the document), and access control list (ACL) information to make sure answers are provided only from documents that the user has access to.

The Amazon Q Business Box connector supports crawling of the following entities in Box:

  • Files – Each file is considered a single document
  • Comments – Each comment is considered a single document
  • Tasks – Each task is considered a single document
  • Web links – Each web link is considered a single document

Additionally, Box users can create custom objects and custom metadata fields. Amazon Q supports the crawling and indexing of these custom objects and custom metadata.

The Amazon Q Business Box connector also supports the indexing of a rich set of metadata from the various entities in Box. It further provides the ability to map these source metadata fields to Amazon Q index fields for indexing this metadata. These field mappings allow you to map Box field names to Amazon Q index field names. There are two types of metadata fields that Amazon Q connectors support:

  • Reserved or default fields – These are required with each document, such as the title, creation date, or author
  • Custom metadata fields – These are fields created in the data source in addition to what the data source already provides

Refer to Box data source connector field mappings for more information.

Authentication

Before you index the content from Box, you need to establish a secure connection between the Amazon Q Business connector for Box and your Box cloud instance, which requires authenticating with the data source. Let’s look at the supported authentication mechanisms for the Box connector.

The Amazon Q Box connector supports Box server authentication with JSON Web Tokens (JWT). This approach requires the configuration of several parameters, including the Box client ID, client secret, public key ID, private key, and passphrase. With JWT-based authentication in place, the Amazon Q Business assistant can securely connect to and interact with data stored within the Box platform on behalf of your organization.

Refer to JWT Auth in the Box Developer documentation for more information on setting up and managing JWT tokens in Box.
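As a quick sanity check outside of Amazon Q, you can exercise the same JWT credentials with the official Box Python SDK. This is a minimal sketch assuming the app-settings JSON file downloaded from the Box Developer Console (a step covered later in this post) is saved locally as config.json:

```python
# Minimal sketch: authenticate to Box with server-side JWT auth using the
# boxsdk package (pip install "boxsdk[jwt]").
from boxsdk import JWTAuth, Client

auth = JWTAuth.from_settings_file("config.json")  # app settings from Box
client = Client(auth)

# Confirm the connection by fetching the app's service account user.
service_account = client.user().get()
print(f"Authenticated to Box as: {service_account.name}")
```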

Supported Box subscriptions

To integrate Amazon Q Business with Box using the Box connector, you need a Box Enterprise or Box Enterprise Plus plan. Both plans provide the necessary capabilities to create a custom application, download its JWT configuration as an administrator, and then configure the connector to ingest relevant data from Box.

Secure querying with ACL crawling, identity crawling, and User Store

The success of Amazon Q Business applications hinges on two key factors: making sure end-users only see responses generated from documents they have access to, and maintaining the privacy and security of each user’s conversation history. Amazon Q Business achieves this by validating the user’s identity every time they access the application, and using this to restrict tasks and answers to the user’s authorized documents. This is accomplished through the integration of AWS IAM Identity Center, which serves as the authoritative identity source and validates users. You can configure IAM Identity Center to use your enterprise identity provider (IdP)—such as Okta or Microsoft Entra ID—as the identity source.

ACLs and identity crawling are enabled by default and can’t be disabled. The Box connector automatically retrieves user identities and ACLs from the connected data sources. This allows Amazon Q Business to filter chat responses based on the end-user’s document access level, so they only see the information they are authorized to view. If you need to index documents without ACLs, you must explicitly mark them as public in your data source. For more information on how the Amazon Q Business connector crawls Box ACLs, refer to How Amazon Q Business connector crawls Box ACLs.

In the Box platform, an administrative user can provision additional user accounts and assign varying permission levels, such as viewer, editor, or co-owner, to files or folders. Fine-grained access is further enhanced through the Amazon Q User Store, which is an Amazon Q data source connector feature that streamlines user and group management across all the data sources attached to your application. This granular permission mapping enables Amazon Q Business to efficiently enforce access controls based on the user’s identity and permissions within the Box environment. For more information on the Amazon Q Business User store, refer to Understanding Amazon Q Business User Store.

Solution overview

In this post, we walk through the steps to configure a Box connector for an Amazon Q Business application. We use an existing Amazon Q application and configure the Box connector to sync data from specific Box folders, map relevant Box fields to the Amazon Q index, initiate the data sync, and then query the ingested Box data using the Amazon Q web experience.

As part of querying the Amazon Q Business application, we cover how to ask natural language questions on documents present in your Box folders and get back relevant results and insights using Amazon Q Business.

Prerequisites

For this walkthrough, you need the following:

Create users in IAM Identity Center

For this post, you need to create three sample users in IAM Identity Center. One user will act as the admin user; the other two will serve as department-specific users. This is to simulate the configuration of user-level access control on distinct folders within your Box account. Make sure to use the same email addresses when creating the users in your Box account.

Complete the following steps to create the users in IAM Identity Center:

  1. On the IAM Identity Center console, choose Users in the navigation pane.
  2. Choose Add user.
  3. For Username, enter a user name. For example, john_doe.
  4. For Password, select Send an email to this user with password setup instructions.
  5. For Email address and Confirm email address, enter your email address.
  6. For First name and Last name, enter John and Doe, respectively. You can also provide your preferred first and last names if necessary.
  7. Keep all other fields as default and choose Next.

  8. On the Add user to groups page, keep everything as default and choose Next.
  9. Verify the details on the Review and add user page, then choose Add user.

The user will get an email containing a link to join IAM Identity Center.

  10. Choose Accept Invitation and set up a password for your user. Remember to note it down for testing the Amazon Q Business application later.
  11. If required by your organization, complete the multi-factor authentication (MFA) setup for this user to enhance security during sign-in.
  12. Confirm that you can log in as the first user using the credentials you created in the previous step.
  13. Repeat the previous steps to create your second department-specific user. Use a different email address for this user. For example, set Username as mary_major, First name as Mary, and Last name as Major. Alternatively, you can use your own values if preferred.
  14. Verify that you can log in as the second user using the credentials you created in the previous step.
  15. Repeat the previous steps to create the third user, who will serve as the admin. Use your Box admin user’s email address for this account, and choose your preferred user name, first name, and last name. For this example, saanvi_sarkar will act as the admin user.
  16. Confirm that you can log in as the admin user using the credentials you created in the previous step.

This concludes the setup of all three users in IAM Identity Center, each with a unique email address.

Create two users in your Box account

For this example, you need two demo users in your Box account in addition to the admin user. Complete the following steps to create these two demo users, using the same email addresses you used when setting up these users in IAM Identity Center:

  1. Log in to your Box Enterprise Admin Console as an admin user.
  2. Choose Users & Groups in the navigation pane.

On the Managed Users tab, the admin user is listed by default.

  3. To create your first department-specific user, choose Add Users, then choose Add Users Manually.

  4. Enter the same name and email address that you used while creating this first department-specific user in IAM Identity Center. For example, use John Doe for Name and his email address for Email. You don’t need to specify groups or folders.
  5. Select the acknowledgement check box to agree to the payment method for adding this new user to your Box account.
  6. Choose Next.

  7. On the Add Users page, choose Complete to agree and add this new user to your Box account.
  8. To create your second department-specific user, choose Add Users, then choose Add Users Manually.
  9. Enter the same name and email address that you used while creating this second department-specific user in IAM Identity Center. For example, use Mary Major for Name and her email address for Email. You don’t need to specify groups or folders.

You now have all three users provisioned in your Box account.

Create a custom Box application for Amazon Q

Before you configure the Box data source connector in Amazon Q Business, you create a custom Box application in your Box account.

Complete the following steps to create an application and configure its authentication method:

  1. Log in to your Box Enterprise Developer Console as an admin user.
  2. Choose My Apps in the navigation pane.
  3. Choose Create New App.
  4. Choose Custom App.

  5. For App name, enter a name for your app. For example, AmazonQConnector.
  6. For Purpose, choose Other.
  7. For Please specify, enter Other.
  8. Leave the other options blank and choose Next.

  9. For Authentication Method, select Server Authentication (with JWT).
  10. Choose Create App.

  11. In My Apps, choose your created app and go to the Configuration tab.
  12. In the App Access Level section, choose App + Enterprise Access.

  13. In the Application Scopes section, select the following permissions:
    • Write all files and folders stored in Box
    • Manage users
    • Manage groups
    • Manage enterprise properties

  14. In the Advanced Features section, select Make API calls using the as-user header.
  15. In the Add and Manage Public Keys section, choose Generate a Public/Private Keypair.

  16. Complete the two-step verification process and choose OK to download the JSON file to your computer.

  17. Choose Save Changes.
  18. On the Authorization tab, choose Review and Submit.

  19. In the Review App Authorization Submission pop-up, for App description, enter AmazonQConnector and choose Submit.

Your Box Enterprise owner needs to approve the application before you can use it. Complete the following steps to authorize it:

  1. Log in to your Box Enterprise Admin Console as the admin user.
  2. Choose Apps in the navigation pane and choose the Custom Apps Manager tab to view the apps that need to be authorized.
  3. Choose the AmazonQConnector app that says Pending Authorization.
  4. Choose the options menu (three dots) and choose Authorize App.

  5. Choose Authorize in the Authorize App pop-up.

This will authorize your AmazonQConnector application and change the status to Authorized.

You can review the downloaded JSON file in your computer’s downloads directory. It contains the client ID, client secret, public key ID, private key, passphrase, and enterprise ID, which you’ll need when creating the Box data source in a later step.

Add sample documents to your Box account

In this step, you upload sample documents to your Box account. Later, you use the Amazon Q Box data source connector to crawl and index these documents.

  1. Download the zip file to your computer.
  2. Extract the files to a folder called AWS_Whitepapers.

  3. Log in to your Box Enterprise account as an admin user.
  4. Upload the AWS_Whitepapers folder to your Box account.

At the time of writing, this folder contains 6 folders and 60 files within them.

Set user-specific permissions on folders in your Box account

In this step, you set up user-level access control for two users on two separate folders in your Box account.

For this ACL simulation, consider the two department-specific users created earlier. Assume John is part of the machine learning (ML) team, so he needs access only to the Machine_Learning folder contents, whereas Mary belongs to the database team, so she needs access only to the Databases folder contents.

Log in to your Box account as an admin and grant viewer access to each user for their respective folders, as shown in the following screenshots. This restricts them to see only their assigned folder’s contents.

The Machine_Learning folder is accessible to the owner and user John Doe only.

The Databases folder is accessible to the owner and user Mary Major only.

Configure the Box connector for your Amazon Q Business application

Complete the following steps to configure your Box connector for Amazon Q Business:

  1. On the Amazon Q Business console, choose Applications in the navigation pane.
  2. Select the application you want to add the Box connector to.
  3. On the Actions menu, choose Edit.

  4. On the Update application page, leave all values unchanged and choose Update.

  5. On the Update retriever page, leave all values unchanged and choose Next.

  6. On the Connect data sources page, on the All tab, search for Box.
  7. Choose the plus sign next to the Box connector.

  8. On the Add data source page, for Data source name, enter a name, for example, box-data-source.
  9. Open the JSON file you downloaded from the Box Developer Console.

The file contains values for clientID, clientSecret, publicKeyID, privateKey, passphrase, and enterpriseID.

  10. In the Source section, for Box enterprise ID, enter the value of the enterpriseID key from the JSON file.

  11. For Authorization, no change is needed because by default the ACLs are set to ON for the Box data source connector.
  12. In the Authentication section, under AWS Secrets Manager secret, choose Create and add a new secret.
  13. For Secret name, enter a name for the secret, for example, connector. The prefix QBusiness-Box- is automatically added for you.
  14. For the remaining fields, enter the corresponding values from the downloaded JSON file.
  15. Choose Save to add the secret.
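If you prefer to create the secret programmatically rather than through the console, a sketch along these lines should work. The secret key names below mirror the fields in the downloaded Box JSON file and are an assumption; confirm the exact schema Amazon Q Business expects (the console-created secret is the authoritative reference).

```python
import json
import boto3

# Load the app settings file downloaded from the Box Developer Console.
with open("box_app_settings.json") as f:
    config = json.load(f)
app = config["boxAppSettings"]

secrets = boto3.client("secretsmanager")
secrets.create_secret(
    Name="QBusiness-Box-connector",
    # Key names are assumed to mirror the Box JSON file; verify against
    # the secret Amazon Q Business creates for you in the console.
    SecretString=json.dumps({
        "clientID": app["clientID"],
        "clientSecret": app["clientSecret"],
        "publicKeyID": app["appAuth"]["publicKeyID"],
        "privateKey": app["appAuth"]["privateKey"],
        "passphrase": app["appAuth"]["passphrase"],
    }),
)
```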

  16. In the Configure VPC and Security group section, use the default setting (No VPC) for this post.
  17. Identity crawling is enabled by default, so no changes are necessary.

  18. In the IAM role section, choose Create a new role (Recommended) and enter a role name, for example, box-role.

For more information on the required permissions to include in the IAM role, see IAM roles for data sources.

  19. In the Sync scope section, in addition to file contents, you can include Box web links, comments, and tasks in your index. Use the default setting (unchecked) for this post.
  20. In the Additional configuration section, you can choose to include or exclude regular expression (regex) patterns. These regex patterns can be applied based on the file name, file type, or file path. For this demo, we skip the regex patterns configuration.

  21. In the Sync mode section, select New, modified, or deleted content sync.
  22. In the Sync run schedule section, choose Run on demand.

  23. In the Field Mappings section, keep the default settings.

After you complete the retriever creation, you can modify field mappings and add custom field attributes. You can access field mapping by editing the data source.

  24. Choose Add data source and wait for the retriever to get created.

It can take a few seconds for the required roles and the connector to be created.

After the data source is created, you’re redirected to the Connect data sources page to add more data sources as needed.

  25. For this walkthrough, choose Next.
  26. In the Update groups and users section, choose Add groups and users to add the groups and users from IAM Identity Center set up by your administrator.

  27. In the Add or assign users and groups pop-up, select Assign existing users and groups to add existing users configured in your connected IAM Identity Center and choose Next.

Optionally, if you have permissions to add users to connected IAM Identity Center, you can select Add new users.

  28. On the Assign users and groups page, choose Get Started.
  29. In the search box, enter John Doe and choose his user name.

  30. Add the second user, Mary Major, by entering her name in the search box.

  31. Optionally, you can add the admin user to this application.
  32. Choose Assign to add these users to this Amazon Q app.
  33. In the Groups and users section, choose the Users tab, where you will see no subscriptions configured currently.
  34. Choose Manage access and subscriptions to configure the subscription.

  35. On the Manage access and subscriptions page, choose the Users tab.
  36. Select your users.
  37. Choose Change subscription and choose Update subscription tier.

  38. On the Confirm subscription change page, for New subscription, choose Business Pro.
  39. Choose Confirm.

  40. Verify the changed subscription for all three users, then choose Done.

  41. Choose Update application to complete adding and setting up the Box data connector for Amazon Q Business.

Configure Box field mappings

To help you structure data for retrieval and chat filtering, Amazon Q Business crawls data source document attributes or metadata and maps them to fields in your Amazon Q index. Amazon Q has reserved fields that it uses when querying your application. When possible, Amazon Q automatically maps these built-in fields to attributes in your data source.

If a built-in field doesn’t have a default mapping, or if you want to map additional index fields, use the custom field mappings to specify how a data source attribute maps to your Amazon Q application.

  1. On the Amazon Q Business console, choose your application.
  2. Under Data sources, select your data source.
  3. On the Actions menu, choose Edit.

  4. In the Field mappings section, select the available fields you want to crawl under Files and folders, Comments, Tasks, and Web Links, and choose Update.

When selecting all items, make sure you navigate through each page by choosing the page numbers and choosing Select All on every page to include all mapped items.

Index sample documents from the Box account

The Box connector setup for Amazon Q is now complete. Because you configured the data source sync schedule to run on demand, you need to start it manually.

In the Data sources section, choose the data source box-data-source and choose Sync now.

The Current sync state changes to Syncing – crawling, then to Syncing – indexing.

After a few minutes, the Current sync state changes to Idle, the Last sync status changes to Successful, and the Sync run history section shows more details, including the number of documents added.

As shown in the following screenshot, Amazon Q has successfully scanned and added all 60 files from the AWS_Whitepapers Box folder.
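You can also start the sync from the AWS SDK instead of the console. This is a hedged sketch using the qbusiness boto3 client with placeholder IDs; check the Amazon Q Business API reference for the full request and response shape.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Placeholder IDs: substitute your application, index, and data source IDs.
response = qbusiness.start_data_source_sync_job(
    applicationId="your-application-id",
    indexId="your-index-id",
    dataSourceId="your-box-data-source-id",
)
print("Started sync job:", response["executionId"])
```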

Query Box data using the Amazon Q web experience

Now that the data synchronization is complete, you can start exploring insights from Amazon Q. In the newly created Amazon Q application, choose Customize web experience to open a new tab with a preview of the UI and options to customize according to your needs.

You can customize the Title, Subtitle, and Welcome message as needed, which will be reflected in the UI.

For this walkthrough, we use the defaults and choose View web experience to be redirected to the login page for the Amazon Q application.

  1. Log in to the application as your first department-specific user, John Doe, using the credentials for the user who was added to the Amazon Q application.

When the login is successful, you’ll be redirected to the Amazon Q assistant UI, where you can start asking questions using natural language and get insights from your Box index.

  2. Enter a prompt in the Amazon Q Business AI assistant at the bottom, such as “What AWS AI/ML service can I use to convert text from one language to another?” Press Enter or choose the arrow icon to generate the response. You can also try your own prompts.

Because John Doe has access to the Machine_Learning folder, Amazon Q Business successfully processed his ML-related query and displayed the response. You can choose Sources to view the source files that contributed to the response and verify the answer.

  3. Let’s attempt a different prompt related to the Databases folder, which John doesn’t have access to. Enter the prompt “How to reduce the amount of read traffic and connections to my Amazon RDS database?” or choose your own database-related prompt. Press Enter or choose the arrow icon to generate the response.

As anticipated, the Amazon Q Business application responds that it couldn’t generate a reply from the documents John can access, because he lacks access to the Databases folder.

  4. Go back to the Amazon Q Business Applications page and choose your application again.
  5. This time, open the web experience URL in private mode to initiate a new session, avoiding interference with the previous session.
  6. Log in as Mary Major, the second department-specific user. Use her user name, password, and any MFA you set up initially.
  7. Enter a prompt in the Amazon Q Business AI assistant at the bottom, such as “How to reduce the amount of read traffic and connections to my Amazon RDS database?” Press Enter or choose the arrow icon to generate the response. You can also try your own prompts.

Because Mary has access to the Databases folder, Amazon Q Business successfully processed her database-related query and displayed the response. You can choose Sources to view the source files that contributed to generating the response.

  8. Now, let’s attempt a prompt that contains information from the Machine_Learning folder, which Mary isn’t authorized to access. Enter the prompt “What AWS AI/ML service can I use to convert text from one language to another?” or choose your own ML-related prompt.

As anticipated, the Amazon Q Business application will indicate it couldn’t generate a response because Mary lacks access to the Machine_Learning folder.

The preceding test scenarios illustrate the functionality of the Amazon Q Box connector in crawling and indexing documents along with their associated ACLs. With this mechanism, only users with the relevant permissions can access the respective folders and files within the linked Box account.

Congratulations! You’ve successfully used Amazon Q to surface answers and insights from the content indexed from your Box account.
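Beyond the web experience, the same questions can be asked programmatically through the ChatSync API. The sketch below uses a placeholder application ID and assumes your calling credentials resolve to an IAM Identity Center user the application recognizes, so the same ACL filtering applies; the response fields shown are the commonly used ones.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Placeholder application ID; the caller's identity determines which
# documents (and therefore which answers) are accessible.
response = qbusiness.chat_sync(
    applicationId="your-application-id",
    userMessage="What AWS AI/ML service can I use to convert text "
                "from one language to another?",
)
print(response["systemMessage"])
for source in response.get("sourceAttributions", []):
    print("-", source.get("title"))
```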

Frequently asked questions

In this section, we provide guidance to frequently asked questions.

Amazon Q Business is unable to answer your questions

If you get the response “Sorry, I could not find relevant information to complete your request,” this may be due to a few reasons:

  • No permissions – ACLs applied to your Box account don’t allow you to query certain data sources. If this is the case, reach out to your application administrator to make sure your ACLs are configured to access the data sources.
  • Data connector sync failed – Your data connector may have failed to sync information from the source to the Amazon Q Business application. Verify the data connector’s sync run schedule and sync history to confirm the sync is successful.
  • Incorrect regex pattern – Validate the correct definition of the regex include or exclude pattern when setting up the Box data source.

If none of these reasons apply to your use case, open a support case and work with your technical account manager to get this resolved.

How to generate responses from authoritative data sources

If you want Amazon Q Business to only generate responses from authoritative data sources, the use of guardrails can be highly beneficial. Within the application settings, you can specify the authorized data repositories, such as content management systems and knowledge bases, from which the assistant is permitted to retrieve and synthesize information. By defining these approved data sources as guardrails, you can instruct Amazon Q Business to only use reliable, up-to-date, and trustworthy information, eliminating the risk of incorporating data from non-authoritative or potentially unreliable sources.

Additionally, Amazon Q Business offers the capability to define content filters as part of Guardrails for Amazon Bedrock. These filters can specify the types of content, topics, or keywords deemed appropriate and aligned with your organization’s policies and standards. By incorporating these content-based guardrails, you can further refine the assistant’s responses to make sure they align with your authoritative information and messaging. The integration of Amazon Q Business with IAM Identity Center also serves as a critical guardrail, allowing you to validate user identities and align ACLs to make sure end-users only receive responses based on their authorized data access.

Amazon Q Business responds using old (stale) data even though your data source is updated

If you find that Amazon Q Business is responding with outdated or stale data, you can use the relevance tuning and boosting features to surface the latest documents. The relevance tuning functionality allows you to adjust the weightings assigned to various document attributes, such as recency, to prioritize the most recent information. Boosting can also be used to explicitly elevate the ranking of the latest documents, making sure they are prominently displayed in the assistant’s responses. For more information on relevance tuning, refer to Boosting chat responses using relevance tuning.

Additionally, it’s important to review the sync schedule and status for your data connectors. Verifying the sync frequency and the last successful sync run can help identify any issues with data freshness. Adjusting the sync schedule or running manual syncs, as needed, can help keep the data up to date and improve the relevance of the Amazon Q Business responses. For more information, refer to Sync run schedule.

Clean up

To prevent incurring additional costs, clean up and remove the resources created while implementing this solution. Deleting the Amazon Q application also removes the associated index and data connectors, but the IAM roles and secrets created during setup must be removed separately. Failing to remove these resources may result in ongoing charges.

Complete the following steps to delete the Amazon Q application, secret, and IAM role:

  1. On the Amazon Q Business console, select the application that you created.
  2. On the Actions menu, choose Delete and confirm the deletion.
  3. On the Secrets Manager console, select the secret that was created for the Box connector.
  4. On the Actions menu, choose Delete.
  5. Select the waiting period as 7 days and choose Schedule deletion.
  6. On the IAM console, select the role that was created during the Amazon Q application creation.
  7. Choose Delete and confirm the deletion.
  8. Delete the AWS_Whitepapers folder and its contents from your Box account.
  9. Delete the two demo users that you created in your Box Enterprise account.
  10. On the IAM Identity Center console, choose Users in the navigation pane.
  11. Select the three demo users that you created and choose Delete users to remove these users.

Conclusion

The Amazon Q Box connector allows organizations to seamlessly integrate their Box files into the powerful generative AI capabilities of Amazon Q. By following the steps outlined in this post, you can quickly configure the Box connector as a data source for Amazon Q and initiate synchronization of your Box information. The native field mapping options enable you to customize exactly which Box data to include in Amazon Q’s index.

Amazon Q can serve as a powerful assistant capable of providing rich insights and summaries about your Box files directly from natural language queries.

The Amazon Q Box integration represents a valuable tool for software teams to gain AI-driven visibility into their organization’s document repository. By bridging Box’s industry-leading content management with Amazon’s cutting-edge generative AI, teams can drive productivity, make better informed decisions, and unlock deeper insights into their organization’s knowledge base. As generative AI continues advancing, integrations like this will become critical for organizations aiming to deliver streamlined, data-driven software development lifecycles.

To learn more about the Amazon Q connector for Box, refer to Connecting Box to Amazon Q.


About the Author

Maran Chandrasekaran is a Senior Solutions Architect at Amazon Web Services, working with our enterprise customers. Outside of work, he loves to travel and ride his motorcycle in Texas Hill Country.

Senthil Kamala Rathinam is a Solutions Architect at Amazon Web Services specializing in data and analytics. He is passionate about helping customers design and build modern data platforms. In his free time, Senthil loves to spend time with his family and play badminton.

Vijai Gandikota is a Principal Product Manager in the Amazon Q and Amazon Kendra organization of Amazon Web Services. He is responsible for the Amazon Q and Amazon Kendra connectors, ingestion, security, and other aspects of the Amazon Q and Amazon Kendra services.
