India Enterprises Serve Over a Billion Local Language Speakers Using LLMs Built With NVIDIA AI

Namaste, vanakkam, sat sri akaal — these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the country’s census. Around 10% of its residents speak English, the internet’s most common language.

As India, the world’s most populous country, forges ahead with rapid digitalization efforts, its enterprises and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. It’s a case study in sovereign AI — the development of domestic AI infrastructure that is built on local datasets and reflects a region’s specific dialects, cultures and practices.

These projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable services to more easily reach a diverse population of over 1.4 billion individuals.

To support initiatives like these, NVIDIA has released a small language model for Hindi, India’s most prevalent language with over half a billion speakers. Now available as an NVIDIA NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any NVIDIA GPU-accelerated system for optimized performance.

Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects. Indus 2.0 harnesses Tech Mahindra’s high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.

Tech Mahindra will showcase Indus 2.0 at the NVIDIA AI Summit, taking place Oct. 23-25 in Mumbai. The company also uses NVIDIA NeMo to develop its sovereign large language model (LLM) platform, TeNo.

NVIDIA NIM Makes AI Adoption for Hindi as Easy as Ek, Do, Teen

The Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion parameter multilingual language model developed by NVIDIA. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using NVIDIA NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.

The dataset was created with NVIDIA NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses NVIDIA RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership. It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.
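
For intuition, the sketch below shows, in plain Python, the kind of length filtering and exact deduplication such a curation pipeline performs. It is illustrative only and is not the NeMo Curator API, which runs equivalent operations GPU-accelerated and at much larger scale.

# Illustrative sketch of two common curation steps (length filtering and exact
# deduplication) written in plain Python. NeMo Curator performs equivalent
# operations at scale on GPUs; this is for intuition only, not its API.
import hashlib

def curate(documents, min_chars=200):
    seen = set()
    kept = []
    for doc in documents:
        text = doc.strip()
        if len(text) < min_chars:                 # drop fragments too short to be useful
            continue
        digest = hashlib.md5(text.encode("utf-8")).hexdigest()
        if digest in seen:                        # drop exact duplicates
            continue
        seen.add(digest)
        kept.append(text)
    return kept

corpus = ["Example document one.", "Example document one.", "tiny"]
print(curate(corpus, min_chars=10))               # keeps a single document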

After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare.
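
As an illustration, a locally deployed NIM microservice is typically queried through its OpenAI-compatible HTTP endpoint. The minimal sketch below assumes the container is running on localhost port 8000; the endpoint path and model identifier are illustrative and should be taken from your own deployment.

# Minimal sketch: querying a locally deployed NIM microservice through its
# OpenAI-compatible HTTP endpoint. The base URL, port, and model identifier
# are assumptions for illustration; check your deployment for actual values.
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"   # assumed local deployment
MODEL_ID = "nvidia/nemotron-4-mini-hindi-4b-instruct"        # illustrative model name

payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "भारत की राजधानी क्या है?"}  # "What is the capital of India?"
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])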

It’s available as part of the NVIDIA AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments.

Bevy of Businesses Serves Multilingual Population

Innovators, major enterprises and global systems integrators across India are building customized language models using NVIDIA NeMo.

Companies in the NVIDIA Inception program for cutting-edge startups are using NeMo to develop AI models for several Indic languages.

Sarvam AI offers enterprise customers speech-to-text, text-to-speech, translation and data parsing models. The company developed Sarvam 1, India’s first homegrown, multilingual LLM, which was trained from scratch on domestic AI infrastructure powered by NVIDIA H100 Tensor Core GPUs.

Sarvam 1 — developed using NVIDIA AI Enterprise software including NeMo Curator and NeMo Framework — supports English and 10 major Indian languages, including Bengali, Marathi, Tamil and Telugu.

Sarvam AI also uses NVIDIA NIM microservices, NVIDIA Riva for conversational AI, NVIDIA TensorRT-LLM software and NVIDIA Triton Inference Server to optimize and deploy conversational AI agents with sub-second latency.

Another Inception startup, Gnani.ai, built a multilingual speech-to-speech LLM that powers AI customer service assistants that handle around 10 million real-time voice interactions daily for over 150 banking, insurance and financial services companies across India and the U.S. The model supports 14 languages and was trained on over 14 million hours of conversational speech data using NVIDIA Hopper GPUs and NeMo Framework.

Gnani.ai uses TensorRT-LLM, Triton Inference Server and Riva NIM microservices to optimize its AI for virtual customer service assistants and speech analytics.

Large enterprises building LLMs with NeMo include:

  • Flipkart, a major Indian ecommerce company majority-owned by Walmart, is integrating NeMo Guardrails, an open-source toolkit that enables developers to add programmable guardrails to LLMs, to enhance the safety of its conversational AI systems.
  • Krutrim, part of the Ola Group of businesses that includes one of India’s top ride-booking platforms, is developing a multilingual Indic foundation model using Mistral NeMo 12B, a state-of-the-art LLM developed by Mistral AI and NVIDIA.
  • Zoho Corporation, a global technology company based in Chennai, will use NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server to optimize and deliver language models for its over 700,000 customers. The company will use NeMo running on NVIDIA Hopper GPUs to pretrain narrow, small, medium and large models from scratch for over 100 business applications.

India’s top global systems integrators are also offering NVIDIA NeMo-accelerated solutions to their customers.

  • Infosys will work on specific tools and solutions using the NVIDIA AI stack. The company’s center of excellence is also developing AI-powered small language models that will be offered to customers as a service.
  • Tata Consultancy Services has developed AI solutions based on NVIDIA NIM Agent Blueprints for the telecommunications, retail, manufacturing, automotive and financial services industries. TCS’ offerings include NeMo-powered, domain-specific language models that can be customized to address customer queries and answer company-specific questions for employees for all enterprise functions such as IT, HR or field operations.
  • Wipro is using NVIDIA AI Enterprise software including NIM Agent Blueprints and NeMo to help businesses easily develop custom conversational AI solutions such as digital humans to support customer service interactions.

Wipro and TCS also use NeMo Curator’s synthetic data generation pipelines to generate data in languages other than English to customize LLMs for their clients.

To learn more about NVIDIA’s collaboration with businesses and developers in India, watch the replay of company founder and CEO Jensen Huang’s fireside chat at the NVIDIA AI Summit.

Read More

Combining Machine Learning and Homomorphic Encryption in the Apple Ecosystem

At Apple, we believe privacy is a fundamental human right. Our work to protect user privacy is informed by a set of privacy principles, and one of those principles is to prioritize using on-device processing. By performing computations locally on a user’s device, we help minimize the amount of data that is shared with Apple or other entities. Of course, a user may request on-device experiences powered by machine learning (ML) that can be enriched by looking up global knowledge hosted on servers. To uphold our commitment to privacy while delivering these experiences, we have implemented a… (Apple Machine Learning Research)

Unlocking generative AI for enterprises: How SnapLogic powers their low-code Agent Creator using Amazon Bedrock

This post is cowritten with Greg Benson, Aaron Kesler and David Dellsperger from SnapLogic.

The landscape of enterprise application development is undergoing a seismic shift with the advent of generative AI. SnapLogic, a leader in generative integration and automation, has introduced the industry’s first low-code generative AI development platform, Agent Creator, designed to democratize AI capabilities across all organizational levels. Agent Creator is a low-code visual tool that empowers business users and application developers to create sophisticated large language model (LLM)-powered applications and agents without deep programming expertise.

This intuitive platform enables the rapid development of AI-powered solutions such as conversational interfaces, document summarization tools, and content generation apps through a drag-and-drop interface. By using SnapLogic’s library of more than 800 pre-built connectors and data transformation capabilities, users can seamlessly integrate various data sources and AI models, dramatically accelerating the development process compared to traditional coding methods. This innovative platform empowers employees, regardless of their coding skills, to create generative AI processes and applications through a low-code visual designer.

Pre-built templates tailored to various use cases are included, significantly enhancing both employee and customer experiences. Agent Creator is a versatile extension to the SnapLogic platform that is compatible with modern databases, APIs, and even legacy mainframe systems, fostering seamless integration across various data environments. Its low-code interface drastically reduces the time needed to develop generative AI applications.

Agent Creator

Creating enterprise-grade, LLM-powered applications and integrations that meet security, governance, and compliance requirements has traditionally demanded the expertise of programmers and data scientists. Not anymore! SnapLogic’s Agent Creator revolutionizes this landscape by empowering everyone to create generative AI–powered applications and automations without any coding. Enterprises can use SnapLogic’s Agent Creator to store their knowledge in vector databases and create powerful generative AI solutions that augment LLMs with relevant enterprise-specific knowledge, a framework also known as Retrieval Augmented Generation (RAG). This capability accelerates business operations by providing a toolkit for users to create departmental chat assistants, add LLM-powered search to portals, automate processes involving documents, and much more. Additionally, this platform offers:

  • LLM-powered processes and apps in minutes – Agent Creator empowers enterprise users to create custom LLM-powered workflows without coding. Whether your HR department needs a Q&A workflow for employee benefits, your legal team needs a contract redlining solution, or your analysts need a research report analysis engine, Agent Creator provides the tools and flexibility to build it all.
  • Automate intelligent document processing (IDP) – Agent Creator can extract valuable data from invoices, purchase orders, resumes, insurance claims, loan applications, and other unstructured sources automatically. The IDP solution uses the power of LLMs to automate tedious document-centric processes, freeing up your team for higher-value work.
  • Boost productivity – Empowers knowledge workers with the ability to automatically and reliably summarize reports and articles, quickly find answers, and extract valuable insights from unstructured data. Agent Creator’s low-code approach allows anyone to use the power of AI to automate tedious portions of their work, regardless of their technical expertise.

To deliver these robust features, Agent Creator uses Amazon Bedrock, a foundational platform that provides managed infrastructure to use state-of-the-art foundation models (FMs). This eliminates the complexities of setting up and maintaining the underlying hardware and software so SnapLogic can focus on innovation and application development rather than infrastructure management.

What is Amazon Bedrock?

Amazon Bedrock is a fully managed service that provides access to high-performing FMs from leading AI startups and Amazon through a unified API, making it easier for enterprises to develop generative AI applications. Users can choose from a wide range of FMs to find the best fit for their use case. With Amazon Bedrock, organizations can experiment with and evaluate top models, customize them with their data using techniques like fine-tuning and RAG, and build intelligent agents that use enterprise systems and data sources. The serverless experience offered by Amazon Bedrock enables quick deployment, private customization, and secure integration of these models into applications without the need to manage underlying infrastructure. Key features include experimenting with prompts, augmenting response generation with data sources, creating reasoning agents, adapting models to specific tasks, and improving application efficiency with provisioned throughput, providing a robust and scalable solution for enterprise AI needs. The robust capabilities and unified API of Amazon Bedrock make it an ideal foundation for developing enterprise-grade AI applications.

By using the Amazon Bedrock high-performing FMs, secure customization options, and seamless integration features, SnapLogic’s Agent Creator maximizes its potential to deliver powerful, low-code AI solutions. This integration not only enhances the Agent Creator’s ability to create and deploy sophisticated AI models quickly but also makes them scalable, secure, and efficient.

Why Agent Creator uses Amazon Bedrock

SnapLogic’s Agent Creator uses Amazon Bedrock to deliver a powerful, low-code generative AI development platform that meets the unique needs of its enterprise customers. By integrating Amazon Bedrock, Agent Creator benefits from several key advantages:

  • Access to top-tier FMs – Amazon Bedrock provides access to high-performing FMs from leading AI providers through a unified API. Agent Creator offers enterprises the ability to experiment with and deploy sophisticated AI models without the complexity of managing the underlying infrastructure.
  • Seamless customization and integration – The serverless architecture of Amazon Bedrock frees up the time of Agent Creator developers so they can focus on innovation and rapid development. It facilitates the seamless customization of FMs with enterprise-specific data using advanced techniques like prompt engineering and RAG so outputs are relevant and accurate.
  • Enhanced security and compliance – Security and compliance are paramount for enterprise AI applications. SnapLogic uses Amazon Bedrock to build its platform, capitalizing on the proximity to data already stored in Amazon Web Services (AWS). Because of this strategic decision, SnapLogic can offer enhanced security and compliance measures while significantly reducing latency for its customers. By processing data closer to where it resides, SnapLogic promotes faster, more efficient operations that meet stringent regulatory requirements, ultimately delivering a superior experience for businesses relying on their data integration and management solutions. Because Amazon Bedrock offers robust features to meet these requirements, Agent Creator adheres to stringent security protocols and governance standards, giving enterprises confidence in their generative AI deployments.
  • Accelerated development and deployment – With Amazon Bedrock, Agent Creator empowers users to quickly experiment with various FMs, accelerating the development cycle. The managed infrastructure streamlines the testing and deployment process, enabling rapid iteration and implementation of intelligent applications.
  • Scalability and performance – Generative AI applications built using Agent Creator are scalable and performant because of Amazon Bedrock. It can handle large volumes of data and interactions, which is crucial for enterprises requiring robust applications. Provisioned throughput options enable efficient model inference, promoting smooth operation even under heavy usage.

By harnessing the capabilities of Amazon Bedrock, SnapLogic’s Agent Creator delivers a comprehensive, low-code solution that allows enterprises to capitalize on the transformative potential of generative AI. This integration simplifies the development process while enhancing the capabilities, security, and scalability of AI applications, driving significant business value and innovation.

Solution approach

Agent Creator integrates Amazon Bedrock, Anthropic’s Claude, and Amazon OpenSearch Service vector databases to deliver a comprehensive and powerful low-code visual interface for building generative AI solutions. At its core, Amazon Bedrock provides the foundational infrastructure for robust performance, security, and scalability for deploying machine learning (ML) models. This foundational layer is critical for managing the complexities of AI model deployment, enabling SnapLogic to offer a seamless user experience. The integrated architecture not only supports advanced AI functionalities but is also easy to use. By abstracting the complexities of generative AI development and providing a user-friendly visual interface, Agent Creator offers enterprises the ability to use powerful AWS generative AI services without needing deep technical knowledge.

Control plane and data plane implementation

SnapLogic’s Agent Creator platform follows a decoupled architecture, separating the control plane and data plane for enhanced security and scalability.

Control plane

The control plane is responsible for managing and orchestrating the various components of the platform. The control plane is hosted and managed by SnapLogic, meaning that customers don’t have to worry about the underlying infrastructure and can focus on their core business requirements. SnapLogic’s control plane comprises several components that manage and orchestrate the platform’s operations. Here are some key components:

  • Designer – A visual interface where users can design, build, and configure integrations and data flows
  • Manager – A centralized management console for monitoring, scheduling, and controlling the execution of integrations and data pipelines
  • Monitor – A comprehensive reporting and analytics dashboard that provides insights into the performance, usage, and health of the platform
  • API management (APIM) – A component that manages and secures the exposure of integrations and data services as APIs, providing seamless integration with external applications and systems

By separating the control plane from the data plane, SnapLogic offers a scalable and secure architecture so customers can use generative AI capabilities while maintaining control over their data within their own virtual private cloud (VPC) environment.

Data plane

The data plane is where the actual data processing and integration take place. To address customers’ requirements about data privacy and sovereignty, SnapLogic deploys the data plane within the customer’s VPC on AWS. This approach means that customer data never leaves their controlled environment, providing an extra layer of security and compliance. By using Amazon Bedrock, SnapLogic can invoke generative AI models directly from the customer’s VPC, enabling real-time processing and analysis of customer data without needing to move it outside the secure environment. The integration with Amazon Bedrock is achieved through the Amazon Bedrock InvokeModel APIs. SnapLogic’s data plane, running within the customer’s VPC, calls these APIs to invoke the desired generative AI models hosted on Amazon Bedrock.
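
As a rough sketch, a data-plane call to the Amazon Bedrock InvokeModel API with boto3 might look like the following. The region and model ID are placeholders, and credentials are assumed to come from the execution role available inside the customer’s VPC; this is not SnapLogic’s implementation.

# Minimal sketch of a data-plane call to the Amazon Bedrock InvokeModel API
# using boto3. The region and model ID are illustrative; substitute a model
# your account has access to. Assumes AWS credentials are available to the
# runtime (for example, through a role inside the VPC).
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize this invoice in one sentence: ..."}
    ],
}

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])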

Functional components

The solution comprises the following functional components:

  • Vector Database Snap Pack – Manages the reading and writing of data to vector databases. This pack is crucial for maintaining the integrity and accessibility of the enterprise-specific knowledge stored in the OpenSearch vector database.
  • Chunker Snap – Segments large texts into manageable pieces. This functionality is important for processing large documents so the AI can handle and analyze text effectively.
  • Embedding Snap – Converts text segments into vectors. This step is vital for integrating enterprise-specific knowledge into AI prompts, enhancing the relevance and accuracy of AI responses.
  • LLM Snap Pack – Facilitates interactions with Claude and other language models. The AI can generate responses and perform tasks based on the processed and retrieved data.
  • Prompt Generator Snap – Enriches queries with the most relevant data so the AI prompts are contextually accurate and tailored to the specific needs of the enterprise.
  • Pre-Built Pipeline Patterns for indexing and retrieving – To streamline the deployment of intelligent applications, Agent Creator includes pre-built pipeline patterns. These patterns simplify common tasks such as indexing, retrieving data, and processing documents so AI-driven solutions can be deployed without the need for deep technical expertise.
  • Frontend Starter Kit – To simplify the deployment of user-facing applications, Agent Creator includes a Frontend Starter Kit. This kit provides pre-built components and templates for creating intuitive and responsive interfaces, so enterprises can quickly develop and deploy chat assistant UI applications that not only function well but also provide a seamless and engaging user experience.

Data flow and control flow

In the architecture of Agent Creator, the interaction between Agent Creator platform, Amazon Bedrock, OpenSearch Service, and Anthropic’s Claude involves a sophisticated and efficient management of data flow and control flow. By effectively managing the data and control flows between Agent Creator and AWS services, SnapLogic provides a robust, secure, and efficient platform for developing and deploying enterprise-grade solutions. This architecture supports advanced integration functionalities and offers a seamless, user-friendly experience, making it a valuable tool for enterprise customers.

Data flow

Here is an example of this data flow for an Agent Creator pipeline that involves data ingestion, preprocessing, and vectorization using Chunker and Embedding Snaps. The resulting vectors are stored in OpenSearch Service databases for efficient retrieval and querying. When a query is initiated, relevant vectors are retrieved to augment the query with context-specific data, and the enriched query is processed by the LLM Snap Pack to generate responses.

The data flow follows these steps:

  1. Data ingestion and preprocessing – Enterprise data is ingested from various sources such as documents, databases, and APIs. Chunker Snap processes large texts and documents by segmenting them into smaller, manageable chunks to make them compatible with downstream processing steps.
  2. Vectorization – The text chunks are passed to the Embedding Snap, which converts them into vector representations using embedding models. These vectors are numerical representations that capture the semantic meaning of the text. The resulting vectors are stored in OpenSearch Service vector databases, which manage and index these vectors for efficient retrieval and querying.
  3. Data retrieval and augmentation – When a query is initiated, the Vector Database Snap Pack retrieves relevant vectors from OpenSearch Service using similarity search algorithms to match the query with stored vectors. The retrieved vectors augment the initial query with context-specific enterprise data, enhancing its relevance.
  4. Prompt refinement – The Prompt Generator Snap refines the final query so it’s well-formed and optimized for the language model.
  5. Interaction with LLMs and response generation – The augmented query is forwarded to the LLM Snap Pack, which interacts with Anthropic’s Claude and other integrated language models to generate a response based on the enriched query. The response is postprocessed, if necessary, before delivery. (A minimal sketch of steps 3 and 4 follows this list.)
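
The following sketch illustrates steps 3 and 4 in miniature: embed the query with an Amazon Bedrock embedding model, run a k-NN search against an OpenSearch Service index, and assemble an augmented prompt. The endpoint, index name, field names, and model IDs are assumptions for illustration; they are not SnapLogic Snaps or APIs.

# Rough sketch of the retrieval-and-augmentation step: embed the user query,
# run a k-NN search against an OpenSearch Service index, and build an augmented
# prompt. Endpoint, index name, field names, and model IDs are illustrative.
import json
import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
opensearch = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],  # assumed endpoint
    http_auth=("user", "password"),  # use SigV4 or fine-grained access control in practice
    use_ssl=True,
)

def embed(text: str) -> list[float]:
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",      # illustrative embedding model
        body=json.dumps({"inputText": text}),
    )
    return json.loads(resp["body"].read())["embedding"]

query = "What is our parental leave policy?"
hits = opensearch.search(
    index="enterprise-docs",                          # assumed index with a knn_vector field
    body={"size": 3, "query": {"knn": {"embedding": {"vector": embed(query), "k": 3}}}},
)["hits"]["hits"]

context = "\n\n".join(h["_source"]["text"] for h in hits)
augmented_prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"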

Control flow

The control flow in Agent Creator is orchestrated between the control plane and the data plane. The control plane hosts the user environment, stores configuration settings and user-created assets, and provides access to various components. The data plane executes pipelines, connecting to cloud-based or on-premises data endpoints, with the control plane orchestrating the workflow across interconnected snaps. Here is an example of this control flow for an Agent Creator pipeline.

The control flow follows these steps:

  1. Initiating requests – Users initiate requests using Agent Creator’s low-code visual interface, specifying tasks such as creating Q&A assistants or automating document processing. Pre-built UI components such as the Frontend Starter Kit capture user inputs and streamline the interaction process.
  2. Orchestrating pipelines – Agent Creator orchestrates workflows using interconnected snaps, each performing a specific function such as ingestion, chunking, vectorization, or querying. The architecture employs an event-driven model, where the completion of one snap triggers the next step in the workflow.
  3. Managing interactions with AWS services – Agent Creator communicates with AWS services, including Amazon Bedrock (and the Anthropic Claude models it hosts) and OpenSearch Service, through secure API calls. The serverless infrastructure of Amazon Bedrock manages the execution of ML models, resulting in a scalable and reliable application.
  4. Observability – Robust mechanisms are in place for handling errors during data processing or model inference. Errors are logged and notifications are sent to system administrators for resolution. Continuous logging and monitoring provide transparency and facilitate troubleshooting. Logs are centrally stored and analyzed to maintain system integrity.
  5. Final output delivery – The generated AI responses are delivered to end user applications or interfaces, integrated into SnapLogic’s dashboards. User feedback is collected to continuously improve AI models and processing pipelines, enhancing overall system performance.

Use cases

You can use the SnapLogic Agent Creator for many different use cases. The next paragraphs illustrate just a few.

IDP on quarterly reports

A leading pharmaceutical data provider empowered their analysts by using Agent Creator and AutoIDP to automate data extraction on pharmaceutical drugs. By processing their portfolio of quarterly reports through LLMs, they could ask standardized questions to extract information that was previously gathered manually. This automation not only reduced errors but also saved significant time and resources, leading to a 35% reduction in costs and a centralized pool of reusable data assets, providing a single source of truth for their entire organization.

Automating market intelligence insights

A global telecommunications company used Agent Creator to process a multitude of RSS feeds, extracting only business-relevant information. This data was then integrated into Salesforce as a real-time feed of market insights. As the customer noted, “This automation allows us to filter and synthesize crucial data, delivering targeted, real-time insights to our sales teams, enhancing their productivity without the need for individual AI licenses.”

Agent Creator Amazon Bedrock roadmap

Development and improvement are ongoing for Agent Creator, with several enhancements released recently and more to come in the future.

Recent releases

Extended support for more Amazon Bedrock capabilities was made available with the August 2024 release. Support was added for retrieving and generating against Amazon Bedrock and Amazon Bedrock Knowledge Bases through Snap orchestration, as well as for invoking Amazon Bedrock Agents. Continual enhancements for new models and additional authentication mechanisms have been released, including support for AWS Identity and Access Management (IAM) role authentication and cross-account IAM role authentication. All Agent Creator LLM Snaps have also been updated to accept a raw request payload, adding support for specifying entire conversations (for continued conversations) as well as prompts beyond just text.

Support for the Amazon Bedrock Converse API was released recently. With Converse API support, Agent Creator can work with models beyond Amazon Titan and Anthropic’s Claude. The release also adds multimodal prompt capabilities, delivered through new Snaps that orchestrate the building of these more complex payloads.
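
For illustration, a multi-turn call through the model-agnostic Converse API with boto3 might look like the following minimal sketch; the model ID is an assumption, and the same request shape works across models that the Converse API supports.

# Minimal sketch of a multi-turn call through the model-agnostic Amazon Bedrock
# Converse API using boto3. The model ID and region are illustrative.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

messages = [
    {"role": "user", "content": [{"text": "List three generative AI use cases for HR."}]},
    {"role": "assistant", "content": [{"text": "Benefits Q&A, resume screening, policy summarization."}]},
    {"role": "user", "content": [{"text": "Expand on the first one."}]},  # continued conversation
]

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    messages=messages,
    inferenceConfig={"maxTokens": 300, "temperature": 0.3},
)
print(response["output"]["message"]["content"][0]["text"])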

Conclusion

SnapLogic has revolutionized enterprise AI with its Agent Creator, the industry’s first low-code generative AI development platform. By integrating advanced generative AI services such as Amazon Bedrock and OpenSearch Service vector databases and cutting edge LLMs such as Anthropic’s Claude, SnapLogic empowers enterprise users, from product to sales to marketing, to create sophisticated generative AI–driven applications without deep technical expertise. This platform reduces dependency on specialized programmers and accelerates innovation by streamlining the generative AI development process with pre-built pipeline patterns and a Frontend Starter Kit.

Agent Creator offers robust performance, security, and scalability so enterprises can use powerful generative AI tools for competitive advantage. By pioneering this comprehensive approach, SnapLogic not only addresses current enterprise needs but also positions organizations to harness Amazon Bedrock for future advancements in generative AI technology, driving significant business value and operational efficiency for our enterprise customers.

To use Agent Creator effectively, schedule a demo of SnapLogic’s Agent Creator to learn how it can address your specific use cases. Identify potential pilot projects, such as creating departmental Q&A assistants, automating document processing, or putting an LLM to work for you behind the scenes. Prepare to store your enterprise knowledge in vector databases, which Agent Creator can use to augment LLMs with your specific information through RAG. Begin with a small pilot to demonstrate the value of Agent Creator, then use that success to build momentum for larger initiatives. To learn more about how to make the best use of Amazon Bedrock, refer to the Amazon Bedrock documentation.


About the authors

Asheesh Goja is Principal Solutions Architect at AWS. Prior to AWS, Asheesh worked at prominent organizations such as Cisco and UPS, where he spearheaded initiatives to accelerate the adoption of several emerging technologies. His expertise spans ideation, co-design, incubation, and venture product development. Asheesh holds a wide portfolio of hardware and software patents, including a real-time C++ DSL, IoT hardware devices, Computer Vision and Edge AI prototypes. As an active contributor to the emerging fields of Generative AI and Edge AI, Asheesh shares his knowledge and insights through tech blogs and as a speaker at various industry conferences and forums.

Dhawal Patel is a Principal Machine Learning Architect at AWS. He has worked with organizations ranging from large enterprises to mid-sized startups on problems related to distributed computing and artificial intelligence. He focuses on deep learning, including the NLP and computer vision domains. He helps customers achieve high-performance model inference on SageMaker.

Greg Benson is a Professor of Computer Science at the University of San Francisco and Chief Scientist at SnapLogic. He joined the USF Department of Computer Science in 1998 and has taught undergraduate and graduate courses including operating systems, computer architecture, programming languages, distributed systems, and introductory programming. Greg has published research in the areas of operating systems, parallel computing, and distributed systems. Since joining SnapLogic in 2010, Greg has helped design and implement several key platform features including cluster processing, big data processing, the cloud architecture, and machine learning. He currently is working on Generative AI for data integration.

Aaron Kesler is the Senior Product Manager for AI products and services at SnapLogic. He applies over ten years of product management expertise to pioneer AI/ML product development and evangelize services across the organization. He is the author of the upcoming book “What’s Your Problem?”, aimed at guiding new product managers through the product management career. His entrepreneurial journey began with his college startup, STAK, which was later acquired by Carvertise, with Aaron contributing significantly to its recognition as Tech Startup of the Year 2015 in Delaware. Beyond his professional pursuits, Aaron finds joy in golfing with his father, exploring new cultures and foods on his travels, and practicing the ukulele.

David Dellsperger is a Senior Staff Software Engineer and Technical Lead of the Agent Creator product at SnapLogic. David has worked as a software engineer with an emphasis on machine learning and AI for over a decade, previously focusing on AI in healthcare and now focusing on the SnapLogic Agent Creator. David spends his time outside of work playing video games and spending quality time with his yellow lab, Sudo.

Read More

Next-generation learning experience using Amazon Bedrock and Anthropic’s Claude: Innovation from Classworks

This post is co-written with Jerry Henley, Hans Buchheim and Roy Gunter from Classworks.

Classworks is an online teacher and student platform that includes academic screening, progress monitoring, and specially designed instruction for reading and math for grades K–12. Classworks’s unique ability to ingest student assessment data from various sources, analyze it, and automatically deliver a customized learning progression for each student sets them apart. Although this evidence-based model has significantly impacted student growth, supporting diverse learning needs in a classroom of 25 students working independently remains challenging. Teachers often find themselves torn between assisting individual students and delivering group instruction, ultimately hindering the learning experience for all.

To address the challenges of personalized learning and teacher workload, Classworks introduces Wittly by Classworks, an AI-powered learning assistant built on Amazon Bedrock, a fully managed service that makes it straightforward to build generative AI applications.

Wittly’s innovative approach centers on two key aspects:

  • Harnessing Anthropic’s Claude in Amazon Bedrock for advanced AI capabilities – Wittly uses Amazon Bedrock to seamlessly integrate with Anthropic’s Claude 3.5 Sonnet, a state-of-the-art large language model (LLM). This powerful combination enables Wittly to provide tailored learning support and foster self-directed learning environments at scale.
  • Personalization and teacher empowerment – This comprises two objectives:
    • Personalized learning – Through AI-driven differentiated instruction, Wittly adapts to individual student needs, enhancing their learning experience.
    • Reduced teacher workload – By reducing the workload, Wittly allows educators to concentrate on high-impact student support, facilitating better educational outcomes.

In this post, we discuss how Classworks uses Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet to deliver next-generation differentiated learning with Wittly.

Powering differentiated learning with Amazon Bedrock

The ability to deliver differentiated learning to a classroom of diverse learners is transformative. Engaging students with instruction tailored to their current learning skills accelerates mastery and fosters critical thinking and independent problem-solving. However, providing such personalized instruction to an entire classroom is labor-intensive and time-consuming for teachers.

Wittly uses generative AI to offer explanations of each skill at a student’s interest level in various ways. When students encounter challenging concepts, Wittly provides clear, concise guidance tailored to their learning style and language preferences, enabling them to grasp concepts at their own pace and overcome obstacles independently. With the scalable infrastructure of Amazon Bedrock, Wittly handles diverse classroom needs simultaneously, making personalized instruction a reality for every student.

Amazon Bedrock serves as the cornerstone of Wittly’s AI capabilities, offering several key advantages:

  • Single API access – Simplifies integration with Anthropic’s Claude foundation models (FMs), allowing for straightforward updates and potential expansion to other models in the future. This unified interface accelerates development cycles by reducing the complexity of working with multiple AI models. It also future-proofs Wittly’s AI infrastructure, enabling seamless adoption of new models and capabilities as they become available, without significant code changes.
  • Serverless architecture – Eliminates the need for infrastructure management, enabling Classworks to focus on educational content and user experience. This approach provides automatic scaling to handle varying loads, from individual student sessions to entire school districts accessing the platform simultaneously. It also optimizes costs by allocating resources based on actual usage rather than maintaining constant capacity. The reduced operational overhead allows Wittly’s team to dedicate more time and resources to enhancing the core educational features of the platform.

Combining cutting-edge AI technology with thoughtful implementation and robust safeguards, Wittly represents a significant leap forward in personalized digital learning assistance. The system’s architecture, powered by Amazon Bedrock and Anthropic’s Claude 3.5 Sonnet, enables Wittly to adapt to individual student needs while maintaining high standards of safety, privacy, and educational efficacy. By integrating these advanced technologies, Wittly not only enhances the learning experience but also makes sure it’s accessible, secure, and tailored to the unique requirements of every student.

Increasing teacher capacity and bandwidth

Meeting the diverse needs of students in a single classroom, particularly during intervention periods or in resource rooms, can be overwhelming. By differentiating instruction for students learning independently, Wittly saves valuable teacher time. Students can seek clarification and guidance from Wittly before asking for the teacher’s help, fostering a self-directed learning environment that eases the teacher’s burden.

This approach is particularly beneficial when a teacher delivers small group lessons while other students learn independently. Knowing that interactive explanations are available to students learning each concept is a significant relief for teachers managing diverse ability levels in a classroom. By harnessing the powerful capabilities of Anthropic’s Claude 3.5 Sonnet, Wittly creates a more efficient, personalized learning ecosystem that benefits both students and teachers.

Solution overview

The following diagram illustrates the solution architecture.

 

The solution consists of the following key components:

  • Wittly interface – The frontend component where students interact with the learning assistant is designed to be intuitive and engaging.
  • Classworks API – This API manages the data exchange and serves as the central hub for communication between various system components.
  • Wittly AI assistant prompt – A tailored prompt for the AI is generated based on the student’s first name, grade level, learning objectives, and conversation history.
  • Student common misconception prompt – This prompt actively identifies potential misconceptions related to the current learning objective, enhancing the student experience.
  • Anthropic’s Claude on Amazon Bedrock – Amazon Bedrock orchestrates AI interactions, providing a fully managed service that simplifies the integration of the state-of-the-art Anthropic’s Claude models.

Monitoring the Wittly platform

In the rapidly evolving landscape of AI-powered education, robust monitoring isn’t only beneficial—it’s essential. Classworks recognizes this criticality and has developed a comprehensive monitoring strategy for the Wittly platform. This approach is pivotal in maintaining the highest standards of performance, optimizing resource allocation, and continually refining the user experience. More specifically, the Wittly platform monitors the following metrics:

  • Token usage – By tracking overall token consumption and visualizing usage patterns by feature and user type, we can plan resources efficiently and manage costs effectively.
  • Request volume – Monitoring API calls helps us detect unusual spikes and analyze usage patterns, enabling predictive scaling decisions and providing system reliability.
  • Response times – Measuring and analyzing latency, broken down by query complexity and user segment, allows us to identify and address performance bottlenecks promptly.
  • Costs – Implementing detailed cost tracking and modeling for various usage scenarios supports our budget management and pricing strategies, leading to sustainable growth.
  • Quality metrics – Logging and analyzing user feedback, along with correlating satisfaction metrics with model performance, guides our continuous improvement efforts.
  • Error tracking – Setting up alerts for critical errors and performing advanced error categorization and trend analysis helps us integrate seamlessly with our development workflow and maintain system integrity.
  • User engagement – Visualizing user journeys and feature adoption rates through monitoring feature usage informs our product development priorities, enhancing the overall user experience.
  • System health – By tracking overall system performance, we gain a holistic view of system dependencies, supporting proactive maintenance and maintaining a stable platform.

To achieve this, we use Amazon CloudWatch to capture key performance data, such as average latency and token counts. This information is then seamlessly integrated into our Grafana dashboard for real-time visualization and analysis. The following screenshot showcases our monitoring dashboard created using Grafana, which visually represents these critical metrics and provides actionable insights. Grafana is an open-source platform for monitoring and observability, enabling users to query, visualize, and understand their data through customizable dashboards.
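
As an illustration of how such metrics can be published, the following sketch pushes per-request latency and token counts to CloudWatch with boto3; the namespace, metric names, and dimensions are assumptions for illustration, not Classworks’s actual schema.

# Rough sketch of publishing per-request latency and token counts to Amazon
# CloudWatch, which a Grafana dashboard can then visualize. Namespace, metric
# names, and dimensions are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def record_invocation(latency_ms: float, input_tokens: int, output_tokens: int, feature: str) -> None:
    cloudwatch.put_metric_data(
        Namespace="Wittly/Inference",  # assumed custom namespace
        MetricData=[
            {"MetricName": "LatencyMs", "Value": latency_ms, "Unit": "Milliseconds",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
            {"MetricName": "InputTokens", "Value": input_tokens, "Unit": "Count",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
            {"MetricName": "OutputTokens", "Value": output_tokens, "Unit": "Count",
             "Dimensions": [{"Name": "Feature", "Value": feature}]},
        ],
    )

record_invocation(latency_ms=812.0, input_tokens=420, output_tokens=97, feature="concept-explanation")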

This comprehensive monitoring framework enables Classworks to deliver exceptional value to our users by optimizing AI-powered features and maintaining high performance standards. With cutting-edge tools like Grafana for data collection, alerting, and in-depth visualization and analysis, we can adapt and expand our monitoring capabilities in tandem with the growing complexity of our AI integration.

Engaging with Wittly: A student’s experience

As students embark on their Classworks activities, they are greeted by Wittly, their AI-powered learning assistant, integrated seamlessly into the Classworks instructional toolbar. When students encounter challenging concepts or need additional help, they can choose the Wittly icon to open an interactive chat window.

Unlike other AI chat-based systems that rely on open-ended questions, Wittly offers a set of pre-created AI response options. This guided approach makes sure conversations remain focused and relevant to the current activity. When Wittly provides explanations or poses questions, students can select from the provided responses, indicating their understanding or need for further clarification.

The student engagement workflow includes the following steps:

  1. Wittly is called when a student needs help with a specific activity in Classworks.
  2. Each Classworks activity focuses on a particular skill or concept, and we’ve tagged every activity with its learning objectives.
  3. When a student accesses Wittly, we send key pieces of information, including the student’s first name, the learning objective of the activity they’re working on, and the language preference of the student.
  4. Wittly generates a personalized response to help the student. This typically includes a greeting using the student’s name, an explanation of the concept, an example related to the learning objective, and a prompt asking if the explanation helped the student understand the concept.

The following is a sample interaction, starting with the input sent to Wittly:

{
  "student_name": "Alex",
  "learning_objective": "Identify and use proper punctuation in compound sentences",
  "language": "English"
}

Wittly’s output is as follows:

"Hi Alex! Let's work on punctuating compound sentences. Remember to use a comma before coordinating conjunctions like 'and' or 'but'. For example: 'I love pizza, and I enjoy pasta.' Do you understand this? Please reply with 'thumbs up' or 'thumbs down'."

Wittly is designed to adapt to each student’s unique needs. It can communicate in both English and Spanish, and students can choose a voice they find engaging. For those who prefer auditory learning, Wittly reads its answers aloud while highlighting the corresponding text, making the learning experience both dynamic and accessible.

The structured interactions with Wittly are recorded, allowing teachers to monitor student progress and identify areas where additional support may be needed. This makes sure teachers remain actively involved in the learning process and that Wittly’s interactions are always appropriate and aligned with educational objectives.

With Wittly as their learning companion, students can delve into complex concepts in language arts, math, and science through guided, interactive exchanges. Wittly supports their learning journey, making their time in Classworks more engaging and personalized, all within a safe and controlled environment.

The following example showcases the interactive experience with Wittly in action, demonstrating how students engage with personalized learning through guided interactions.

Data privacy and safety considerations

In the era of AI-powered education, protecting student data and providing safe interactions are paramount. Classworks has implemented rigorous measures to uphold the highest standards of privacy and safety in Wittly’s design and operation.

Ethical AI foundation

Classworks employs a human-in-the-loop (HITL) model, combining AI technology with human expertise and insight. Wittly uses advanced AI algorithms, overseen and enhanced by the expertise of human educators and engineers, to generate instructional recommendations.

Student data protection

A core tenet in developing Wittly was achieving personalized learning without compromising student privacy. We don’t share any personally identifiable information with Wittly. Anthropic’s Claude LLM is trained on a dataset of anonymous data, not data from the Classworks platform, providing complete student privacy. Furthermore, when engaging with Wittly, students select from various pre-created responses to indicate whether the differentiated instruction was helpful or if they need further assistance. This approach eliminates the risk of inappropriate conversations, maintaining a safe learning environment.

Amazon Bedrock enhances this protection by encrypting data both in transit and at rest and by preventing the sharing of prompts with any third parties, including Anthropic. Additionally, Amazon Bedrock doesn’t train models with Classworks’s data, so all interactions remain secure and private.

Conclusion

Amazon Bedrock represents a pivotal advancement in AI technology, offering vast opportunities for innovation and efficiency in education. At Classworks, we’re not just adopting this technology, we’re pioneering its application to craft exceptional, personalized learning experiences. Our commitment extends beyond students to empowering educators with cutting-edge resources that elevate learning outcomes.

Based on Wittly’s capabilities, we estimate that teachers could potentially save 15–25 hours per month. This time savings might come from a reduced need for individual student support, decreased time spent on classroom management, and less after-hours support. These efficiency gains significantly enhance the learning environment, allowing teachers to focus more on high-impact, tailored educational experiences.

As AI continues to evolve, we’re committed to refining our policies and practices to uphold the highest standards of safety, quality, and efficacy in educational technology. By embracing Amazon Bedrock, we can make sure Classworks remains at the forefront of delivering safe, impactful, and meaningful educational experiences to students and educators alike.

To learn more about how generative AI and Amazon Bedrock can revolutionize your educational platform by delivering personalized learning experiences, enhancing teacher capacity, and enforcing data privacy, visit Amazon Bedrock. Discover how you can use advanced AI to create innovative applications, streamline development processes, and provide impactful data insights for your users.

To learn more about Classworks and our groundbreaking generative AI capabilities, visit our website.

This is a guest post from Classworks. Classworks is an award-winning K–12 special education and tiered intervention platform that uses advanced technology and comprehensive data to deliver superior personalized learning experiences. The comprehensive solution includes academic screeners, math and reading interventions, specially designed instruction, progress monitoring, and powerful data. Validated by the National Center on Intensive Intervention (NCII) and endorsed by The Council of Administrators of Special Education (CASE), Classworks partners with districts nationwide to deliver data-driven personalized learning to students where they are ready to learn.

 


About the Authors

Jerry Henley, VP of Technology at Curriculum Advantage, leads the product technical vision, platform services, and support for Classworks. With 18 years in EdTech, he oversees innovation, roadmaps, and AI integration, enhancing personalized learning experiences for students and educators.

 

Hans Buchheim, VP of Engineering at Curriculum Advantage, has spent 25 years developing Classworks. He leads software architecture decisions, mentors junior developers, and ensures the product evolves to meet educator needs.

 

Roy Gunter, DevOps Engineer at Curriculum Advantage, manages cloud infrastructure and automation for Classworks. He focuses on system reliability, troubleshooting, and performance optimization to deliver an excellent user experience.

 

Gowtham Shankar is a Solutions Architect at Amazon Web Services (AWS). He is passionate about working with customers to design and implement cloud-native architectures to address business challenges effectively. Gowtham actively engages in various open source projects, collaborating with the community to drive innovation.

 

Dr. Changsha Ma is an AI/ML Specialist at AWS. She is a technologist with a PhD in Computer Science, a master’s degree in Education Psychology, and years of experience in data science and independent consulting in AI/ML. She is passionate about researching methodological approaches for machine and human intelligence. Outside of work, she loves hiking, cooking, hunting for good food, and spending time with friends and family.

Read More

Fine-tune a BGE embedding model using synthetic data from Amazon Bedrock

Have you ever faced the challenge of obtaining high-quality data for fine-tuning your machine learning (ML) models? Generating synthetic data can provide a robust solution, especially when real-world data is scarce or sensitive. For instance, when developing a medical search engine, obtaining a large dataset of real user queries and relevant documents is often infeasible due to privacy concerns surrounding personal health information. However, synthetic data generation techniques can be employed to create realistic query-document pairs that resemble authentic user searches and relevant medical content, enabling the training of accurate retrieval models while preserving user privacy.

In this post, we demonstrate how to use Amazon Bedrock to create synthetic data, fine-tune a BAAI General Embeddings (BGE) model, and deploy it using Amazon SageMaker.

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading artificial intelligence (AI) companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

You can find the full code associated with this post at the accompanying GitHub repository.

Solution overview

BGE stands for Beijing Academy of Artificial Intelligence (BAAI) General Embeddings. It is a family of embedding models with a BERT-like architecture, designed to produce high-quality embeddings from text data. The BGE models come in three sizes:

  • bge-large-en-v1.5: 1.34 GB, 1,024 embedding dimensions
  • bge-base-en-v1.5: 0.44 GB, 768 embedding dimensions
  • bge-small-en-v1.5: 0.13 GB, 384 embedding dimensions

For comparing two pieces of text, the BGE model functions as a bi-encoder architecture, processing each piece of text through the same model in parallel to obtain their embeddings.
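
For example, a minimal bi-encoder comparison with the smallest BGE model might look like the following sketch, assuming the sentence-transformers package is installed and the model can be downloaded from the Hugging Face Hub.

# Minimal sketch of the bi-encoder pattern with a BGE model: each text is
# embedded independently by the same model, and similarity is computed between
# the resulting vectors. Assumes the sentence-transformers package is installed.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

texts = [
    "How do I reset my account password?",
    "Steps for recovering a forgotten login password.",
]
embeddings = model.encode(texts, normalize_embeddings=True)  # 384-dim vectors for bge-small

similarity = float(embeddings[0] @ embeddings[1])  # cosine similarity (vectors are normalized)
print(f"cosine similarity: {similarity:.3f}")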

Generating synthetic data can significantly enhance the performance of your models by providing ample, high-quality training data without the constraints of traditional data collection methods. This post guides you through generating synthetic data using Amazon Bedrock, fine-tuning a BGE model, evaluating its performance, and deploying it with SageMaker.

The high-level steps are as follows:

  1. Set up an Amazon SageMaker Studio environment with the necessary AWS Identity and Access Management (IAM) policies.
  2. Open SageMaker Studio.
  3. Create a Conda environment for dependencies.
  4. Generate synthetic data using Meta Llama 3 on Amazon Bedrock.
  5. Fine-tune the BGE embedding model with the generated data.
  6. Merge the model weights.
  7. Test the model locally.
  8. Evaluate and compare the fine-tuned model.
  9. Deploy the model using SageMaker and Hugging Face Text Embeddings Inference (TEI).
  10. Test the deployed model.

Prerequisites

First-time users need an AWS account and an IAM user role with the following permission policies attached:

  • AmazonSageMakerFullAccess
  • IAMFullAccess (or a custom IAM policy that grants iam:GetRole and iam:AttachRolePolicy permissions for the specific SageMaker execution role and the required policies: AmazonBedrockFullAccess, AmazonS3FullAccess, and AmazonEC2ContainerRegistryFullAccess)

Create a SageMaker Studio domain and user

Complete the following steps to create a SageMaker Studio domain and user:

  1. On the SageMaker console, under Admin configurations in the navigation pane, choose Domains.
  2. Choose Create domain.

  3. Choose Set up for single user (Quick setup). Your domain, along with an IAM role with the AmazonSageMakerFullAccess policy, will be automatically created.
  4. After the domain is prepared, choose Add user.
  5. Provide a name for the new user profile and choose the IAM role (use the default role created in step 3).
  6. Choose Next on the next three screens, then choose Submit.

After you add the user profile, update the IAM role.

  1. On the IAM console, choose Roles in the navigation pane.
  2. Navigate to the Domain settings page of your newly created domain and locate the IAM role created earlier (it should have a name similar to AmazonSageMaker-ExecutionRole-YYYYMMDDTHHMMSS).
  3. On the role details page, on the Add permissions drop down menu, choose Attach policies.
  4. Select the following policies, then choose Add permissions to attach them to the role:
    1. AmazonBedrockFullAccess
    2. AmazonS3FullAccess
    3. AmazonEC2ContainerRegistryFullAccess

Open SageMaker Studio

To open SageMaker Studio, complete the following steps:

  1. On the SageMaker console, choose Studio in the navigation pane.
  2. On the SageMaker Studio landing page, select the newly created user profile and choose Open Studio.
  3. After you launch SageMaker Studio, choose JupyterLab.
  4. In the top-right corner, choose Create JupyterLab Space.
  5. Give the space a name, such as embedding-finetuning, and choose Create space.
  6. Change the instance type to ml.g5.2xlarge and the Storage (GB) value to 100.

You may need to request a service quota increase before being able to select the ml.g5.2xlarge instance type.

  7. Choose Run space and wait a few minutes for the space to start.
  8. Choose Open JupyterLab.

Set up a Conda environment in SageMaker Studio

Next, you create a Conda environment with the necessary dependencies for running the code in this post. You can use the environment.yml file provided in the code repository to create this.

  1. Open the previous terminal, or choose Terminal in Launcher to open a new one.
  2. Clone the code repository, and enter the directory:
    # TODO: replace this with final public version
    git clone https://gitlab.aws.dev/austinmw/Embedding-Finetuning-Blog
    cd Embedding-Finetuning-Blog

  3. Create the Conda environment by running the following command (this step will take several minutes to complete):
    conda env create -f environment.yml

  4. Activate the environment by running the following commands one by one:
    conda init
    source ~/.bashrc
    conda activate ft-embedding-blog

  5. Add the newly created Conda environment to Jupyter:
    python -m ipykernel install --user --name=ft-embedding-blog

  6. From the Launcher, open the repository folder named embedding-finetuning-blog and open the file Embedding Blog.ipynb.
  7. On the Kernel drop down menu in the notebook, choose Change Kernel, then choose ft-embedding-blog.

You may need to refresh your browser if the kernel doesn’t show up as available.

Now you have a Jupyter notebook that includes the necessary dependencies required to run the code in this post.

Generate synthetic data using Amazon Bedrock

We start by adapting LlamaIndex’s embedding model fine-tuning guide to use Amazon Bedrock to generate synthetic data for fine-tuning. We use the sample data and evaluation procedures outlined in this guide.

To generate synthetic data, we use the Meta Llama3-70B-Instruct model on Amazon Bedrock, which offers a strong balance of price and performance. The process involves the following steps:

  1. Download the training and validation data, which consists of PDFs of Uber and Lyft 10-K filings. These PDFs will serve as the source for generating document chunks.
  2. Parse the PDFs into plain text chunks using LlamaIndex functionality. The Lyft corpus will be used as the training dataset, and the Uber corpus will be used as the evaluation dataset.
  3. Clean the parsed data by removing samples that are too short or contain special characters that could cause errors during training.
  4. Set up the large language model (LLM) Meta Llama3-70B-Instruct and define a prompt template for generating questions based on the context provided by the document chunks.
  5. Use the LLM to generate synthetic question-answer pairs for each document chunk. The document chunks serve as the context, and the generated questions are designed to be answerable using the information within the corresponding chunk.
  6. Save the generated synthetic data in JSONL format, where each line is a dictionary containing the query (generated question), positive passages (the document chunk used as context), and negative passages (if available). This format is compatible with the FlagEmbedding library, which will be used for fine-tuning the BGE model.

By generating synthetic question-answer pairs using the Meta Llama3-70B-Instruct model and the document chunks from the Uber and Lyft datasets, you create a high-quality dataset that can be used to fine-tune the BGE embedding model for improved performance in retrieval tasks.
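
The following is a minimal sketch of the question-generation step, assuming the boto3 Converse API and the meta.llama3-70b-instruct-v1:0 model ID; the prompt, chunk placeholders, and file name are illustrative, and the accompanying notebook may structure this differently:

    import json
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    def generate_question(chunk: str) -> str:
        """Ask Meta Llama 3 70B Instruct on Amazon Bedrock for one question
        that is answerable only from the given document chunk."""
        prompt = (
            "Context:\n" + chunk + "\n\n"
            "Write one question that can be answered using only the context above. "
            "Return only the question."
        )
        response = bedrock.converse(
            modelId="meta.llama3-70b-instruct-v1:0",
            messages=[{"role": "user", "content": [{"text": prompt}]}],
            inferenceConfig={"maxTokens": 256, "temperature": 0.7},
        )
        return response["output"]["message"]["content"][0]["text"].strip()

    # Write FlagEmbedding-compatible JSONL: one training example per document chunk
    chunks = ["<parsed Lyft 10-K chunk 1>", "<parsed Lyft 10-K chunk 2>"]
    with open("train.jsonl", "w") as f:
        for chunk in chunks:
            example = {"query": generate_question(chunk), "pos": [chunk], "neg": []}
            f.write(json.dumps(example) + "\n")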

Fine-tune the BGE embedding model

For fine-tuning, you can use the bge-base-en-v1.5 model, which offers a good balance between performance and resource requirements. You define retrieval instructions for the query to enhance the model’s performance during fine-tuning and inference.

Before fine-tuning, generate hard negatives using a predefined script available from the FlagEmbedding library. Hard negative mining is an essential step that helps improve the model’s ability to distinguish between similar but not identical text pairs. By including hard negatives in the training data, you encourage the model to learn more discriminative embeddings.

You then initiate the fine-tuning process using the FlagEmbedding library, which trains the model with InfoNCE contrastive loss. The library provides a convenient way to fine-tune the BGE model using the synthetic data you generated earlier. During fine-tuning, the model learns to produce embeddings that bring similar query-document pairs closer together in the embedding space while pushing dissimilar pairs further apart.
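
The following sketch shows how these two steps can be driven from Python, assuming the FlagEmbedding 1.x module paths and flags (hn_mine for hard negative mining and the finetune.run entry point for contrastive training); the argument values are illustrative and may differ across library versions:

    import subprocess

    # Mine hard negatives for each synthetic query (module path and flags follow FlagEmbedding 1.x)
    subprocess.run([
        "python", "-m", "FlagEmbedding.baai_general_embedding.finetune.hn_mine",
        "--model_name_or_path", "BAAI/bge-base-en-v1.5",
        "--input_file", "train.jsonl",
        "--output_file", "train_hard_negatives.jsonl",
        "--range_for_sampling", "2-200",
        "--negative_number", "15",
    ], check=True)

    # Fine-tune with the InfoNCE contrastive objective
    subprocess.run([
        "torchrun", "--nproc_per_node", "1",
        "-m", "FlagEmbedding.baai_general_embedding.finetune.run",
        "--output_dir", "bge-finetuned",
        "--model_name_or_path", "BAAI/bge-base-en-v1.5",
        "--train_data", "train_hard_negatives.jsonl",
        "--num_train_epochs", "5",
        "--per_device_train_batch_size", "8",
        "--learning_rate", "1e-5",
        "--query_instruction_for_retrieval", "Represent this sentence for searching relevant passages: ",
    ], check=True)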

Merge the model weights

After fine-tuning, you can use the LM-Cocktail library to merge the fine-tuned weights with the original weights of the BGE model. LM-Cocktail creates new model parameters by calculating a weighted average of the parameters from two or more models. This process helps mitigate the problem of catastrophic forgetting, where the model might lose its previously learned knowledge during fine-tuning.

By merging the fine-tuned weights with the original weights, you obtain a model that benefits from the specialized knowledge acquired during fine-tuning while retaining the general language understanding capabilities of the original model. This approach often leads to improved performance compared to using either the fine-tuned or the original model alone.
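
A minimal sketch of the merge step, assuming the LM-Cocktail mix_models API; the model paths and the 50/50 weighting are illustrative, and the best ratio is use-case dependent:

    from LM_Cocktail import mix_models

    # Average the fine-tuned and original BGE weights to limit catastrophic forgetting
    model = mix_models(
        model_names_or_paths=["BAAI/bge-base-en-v1.5", "bge-finetuned"],
        model_type="encoder",          # BGE is an encoder-style embedding model
        weights=[0.5, 0.5],            # weights must sum to 1
        output_path="bge-finetuned-merged",
    )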

Test the model locally

Before you evaluate the fine-tuned BGE model on the validation set, it’s a good idea to perform a quick local test to make sure the model behaves as expected. You can do this by comparing the cosine similarity scores for pairs of queries and documents that you expect to have high similarity and those that you expect to have low similarity.

To test the model, prepare two small sets of document-query pairs:

  • Similar document-query pairs – These are pairs where the document and query are closely related and should have a high cosine similarity score
  • Different document-query pairs – These are pairs where the document and query are not closely related and should have a lower cosine similarity score

Then use the fine-tuned BGE model to generate embeddings for each document and query in both sets of pairs. By calculating the cosine similarity between the document and query embeddings for each pair, you can assess how well the model captures the semantic similarity between them.
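
A minimal sketch of this local check, assuming the FlagEmbedding FlagModel wrapper and the merged model path from the previous step; the query instruction should match the one used during fine-tuning, and the example texts are illustrative:

    from FlagEmbedding import FlagModel

    model = FlagModel(
        "bge-finetuned-merged",
        query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
    )

    queries = ["What factors affected ride-share demand?", "How are trade secrets protected?"]
    passages = [
        "Demand for ride-sharing fluctuated with commuting patterns and local regulations.",
        "The company relies on confidentiality agreements to protect its trade secrets.",
    ]

    q_emb = model.encode_queries(queries)   # the instruction is prepended to queries only
    p_emb = model.encode(passages)          # passages are encoded without an instruction

    # Embeddings are L2-normalized (the library default), so the dot product is the cosine similarity
    print(q_emb @ p_emb.T)  # expect the matched (diagonal) pairs to score highest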

When comparing the cosine similarity scores, we expect to see higher scores for the similar document-query pairs compared to the different document-query pairs. This would indicate that the fine-tuned model is able to effectively distinguish between similar and dissimilar pairs, assigning higher similarity scores to the pairs that are more closely related.

If the local testing results align with your expectations, it provides a quick confirmation that the fine-tuned model is performing as intended. You can then move on to a more comprehensive evaluation of the model’s performance using the validation set.

However, if the local testing results are not satisfactory, it may be necessary to investigate further and identify potential issues with the fine-tuning process or the model architecture before proceeding to the evaluation step.

This local testing step serves as a quick sanity check to make sure the fine-tuned model is behaving reasonably before investing time and resources in a full evaluation on the validation set. It can help catch obvious issues early on and provide confidence in the model’s performance before moving forward with more extensive testing.

Evaluate the model

We evaluate the performance of the fine-tuned BGE model using two procedures:

  • Hit rate – This straightforward metric assesses the model’s performance by checking if the retrieved results for a given query include the relevant document. You calculate the hit rate by taking each query-document pair from the validation set, retrieving the top-K documents using the fine-tuned model, and verifying if the relevant document is present in the retrieved results (a minimal sketch follows this list).
  • InformationRetrievalEvaluator – This procedure, provided by the sentence-transformers library, offers a more comprehensive suite of metrics for detailed performance analysis. It evaluates the model on various information retrieval tasks and provides metrics such as Mean Average Precision (MAP), Normalized Discounted Cumulative Gain (NDCG), and more. However, InformationRetrievalEvaluator is only compatible with sentence-transformers models.
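
A minimal sketch of the hit-rate procedure mentioned above, assuming a FlagModel-style embedding model (with encode_queries and encode methods) and a validation set of (query, relevant document ID) pairs; all names are illustrative:

    import numpy as np

    def hit_rate(model, val_pairs, corpus_ids, corpus_texts, top_k=5):
        """val_pairs: list of (query, relevant_doc_id); corpus_texts aligned with corpus_ids."""
        doc_emb = model.encode(corpus_texts)                 # (num_docs, dim), normalized
        hits = 0
        for query, relevant_id in val_pairs:
            q_emb = model.encode_queries([query])            # (1, dim)
            scores = (q_emb @ doc_emb.T).flatten()           # cosine similarity per document
            top_ids = [corpus_ids[i] for i in np.argsort(-scores)[:top_k]]
            hits += int(relevant_id in top_ids)
        return hits / len(val_pairs)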

To get a better understanding of the fine-tuned model’s performance, you can compare it against the base (non-fine-tuned) BGE model and the Amazon Titan Text Embeddings V2 model on Amazon Bedrock. This comparison helps you assess the effectiveness of the fine-tuning process and determine if the fine-tuned model outperforms the baseline models.

By evaluating the model using both the hit rate and InformationRetrievalEvaluator (when applicable), you gain insights into its performance on different aspects of retrieval tasks and can make informed decisions about its suitability for your specific use case.

Deploy the model

To deploy the fine-tuned BGE model, you can use the Hugging Face Text Embeddings Inference (TEI) container on SageMaker. TEI is a high-performance toolkit for deploying and serving popular text embeddings and sequence classification models, including support for FlagEmbedding models. It provides a fast and efficient serving framework for your fine-tuned model.

The deployment process involves the following steps:

  1. Upload the fine-tuned model to the Hugging Face Hub or Amazon Simple Storage Service (Amazon S3).
  2. Retrieve the new Hugging Face Embedding Container image URI.
  3. Deploy the model to SageMaker.
  4. Optionally, set up auto scaling for the endpoint to automatically adjust the number of instances based on the incoming request traffic. Auto scaling helps make sure the endpoint can handle varying workloads efficiently.

By deploying the fine-tuned BGE model using TEI on SageMaker, you can integrate it into your applications and use it for efficient text embedding and retrieval tasks. The deployment process outlined in this post provides a scalable and manageable solution for serving the model in production environments.
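
The following is a hedged sketch of the deployment, assuming the SageMaker Python SDK’s huggingface-tei image helper; the model location, instance type, and endpoint name are illustrative, and the exact backend name or container version may differ in your SDK release:

    import sagemaker
    from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

    role = sagemaker.get_execution_role()

    # Look up a Text Embeddings Inference (TEI) container image for GPU instances
    image_uri = get_huggingface_llm_image_uri("huggingface-tei")

    tei_model = HuggingFaceModel(
        image_uri=image_uri,
        role=role,
        # A model pushed to the Hugging Face Hub; alternatively, package the weights
        # and pass model_data pointing to Amazon S3
        env={"HF_MODEL_ID": "your-hf-username/bge-finetuned-merged"},
    )

    tei_endpoint = tei_model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.xlarge",
        endpoint_name="bge-finetuned-tei",
    )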

Test the deployed model

After you deploy the fine-tuned BGE model using TEI on SageMaker, you can test the model by sending requests to the SageMaker endpoint and evaluating the model’s responses.

When testing the deployed model, it’s important to match the instructions used during fine-tuning: if the model was fine-tuned with instructions for queries or passages, the same instructions must be applied at inference time. In this case, you used instructions for queries but not for passages, so follow the same approach during testing.

To send queries to the SageMaker endpoint, use the tei_endpoint.predict() method provided by the SageMaker SDK. Prepare a batch of queries, prepend the query instruction used during fine-tuning, and pass them to the predict() method. The model generates an embedding for each query, which is returned in the response.
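
A minimal sketch of such a test, assuming the tei_endpoint predictor from the previous section and the same query instruction used during fine-tuning; the queries are illustrative:

    # Match the fine-tuning setup: prepend the retrieval instruction to queries only
    instruction = "Represent this sentence for searching relevant passages: "
    queries = [
        "What were the main operating expenses?",
        "Which risks relate to driver classification?",
    ]

    response = tei_endpoint.predict({"inputs": [instruction + q for q in queries]})

    # TEI returns one embedding (a list of floats) per input string
    print(len(response), len(response[0]))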

By examining the generated embeddings, you can assess the quality and relevance of the model’s output. You can compare the embeddings of similar queries and verify that they have high cosine similarity scores, indicating that the model accurately captures the semantic meaning of the queries.

Additionally, you can measure the average response time of the deployed model to evaluate its performance and make sure it adheres to the required latency constraints for your application.

Integrate the model with LangChain

You can also integrate the deployed BGE model with LangChain, a library for building applications with language models. To do this, you create a custom content handler that inherits from LangChain’s EmbeddingsContentHandler. This handler implements methods to convert input data into a format compatible with the SageMaker endpoint and to convert the endpoint’s output into embeddings.

You then create a SagemakerEndpointEmbeddings instance, specifying the endpoint name, SageMaker runtime client, and custom content handler. This instance wraps the deployed BGE model and integrates it with LangChain workflows.

Using the embed_documents method of the SagemakerEndpointEmbeddings instance, you generate embeddings for documents or queries, which can be used for downstream tasks like similarity search, clustering, or classification.
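
The following sketch shows what such an integration can look like, assuming the langchain_community SagemakerEndpointEmbeddings and EmbeddingsContentHandler classes and the TEI request/response format; the endpoint name, Region, and example text are illustrative:

    import json
    from typing import Dict, List

    from langchain_community.embeddings import SagemakerEndpointEmbeddings
    from langchain_community.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

    class TEIContentHandler(EmbeddingsContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
            # TEI expects {"inputs": [...]} as the request payload
            return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

        def transform_output(self, output: bytes) -> List[List[float]]:
            # TEI returns one embedding vector per input string
            return json.loads(output.read().decode("utf-8"))

    embeddings = SagemakerEndpointEmbeddings(
        endpoint_name="bge-finetuned-tei",
        region_name="us-east-1",
        content_handler=TEIContentHandler(),
    )

    vectors = embeddings.embed_documents(["Example passage to embed for retrieval."])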

Integrating the deployed BGE model with LangChain allows you to take advantage of LangChain’s features and abstractions to build sophisticated language model applications that utilize the fine-tuned BGE embeddings. Testing the integration makes sure the model performs as expected and can be seamlessly incorporated into real-world workflows and applications.

Clean up

After you’re finished with the deployed endpoint, don’t forget to delete it to prevent unexpected SageMaker costs.

Conclusion

In this post, we walked through the process of fine-tuning a BGE embedding model using synthetic data generated from Amazon Bedrock. We covered key steps, including generating high-quality synthetic data, fine-tuning the model, evaluating performance, and deploying the optimized model using Amazon SageMaker.

By using synthetic data and advanced fine-tuning techniques like hard negative mining and model merging, you can significantly enhance the performance of embedding models for your specific use cases. This approach is especially valuable when real-world data is limited or difficult to obtain.

To get started, we encourage you to experiment with the code and techniques demonstrated in this post. Adapt them to your own datasets and models to unlock performance improvements in your applications. You can find all the code used in this post in our GitHub repository.

About the Authors

Austin Welch is a Senior Applied Scientist at the Amazon Web Services Generative AI Innovation Center.

Bryan Yost is a Principal Deep Learning Architect at the Amazon Web Services Generative AI Innovation Center.

Mehdi Noori is a Senior Applied Scientist at the Amazon Web Services Generative AI Innovation Center.

Read More

Boost post-call analytics with Amazon Q in QuickSight

Boost post-call analytics with Amazon Q in QuickSight

In today’s customer-centric business world, providing exceptional customer service is crucial for success. Contact centers play a vital role in shaping customer experiences, and analyzing post-call interactions can provide valuable insights to improve agent performance, identify areas for improvement, and enhance overall customer satisfaction.

Amazon Web Services (AWS) has AI and generative AI solutions that you can integrate into your existing contact centers to improve post-call analysis.

Post Call Analytics (PCA) is a solution that does most of the heavy lifting associated with providing an end-to-end solution that can process call recordings from your existing contact center. PCA provides actionable insights to spot emerging trends, identify agent coaching opportunities, and assess the general sentiment of calls.

Complementing PCA, Live Call Analytics with Agent Assist (LCA) provides AI and generative AI capabilities for real-time analysis while calls are in progress.

In this post, we show you how to unlock powerful post-call analytics and visualizations, empowering your organization to make data-driven decisions and drive continuous improvement.

Enrich and boost your post-call recording files with Amazon Q and Amazon QuickSight

Amazon QuickSight is a unified business intelligence (BI) service that provides modern interactive dashboards, natural language querying, paginated reports, machine learning (ML) insights, and embedded analytics at scale.

Amazon Q is a powerful, new capability in Amazon QuickSight that you can use to ask questions about your data using natural language and share presentation-ready data stories to communicate insights to others.

These capabilities can significantly enhance your post-call analytics workflow, making it easier to derive insights from your contact center data.

To get started using Amazon Q in QuickSight, you first need QuickSight Enterprise Edition, which you can sign up for by following this process.

Amazon Q in QuickSight provides users a suite of new generative BI capabilities.

Depending on their role, users have access to different sets of capabilities. For instance, a Reader Pro user can create data stories and executive summaries, while an Author Pro user can also create topics and build dashboards using natural language. The following figure shows the available roles and their capabilities.

The following are some key ways that Amazon Q in QuickSight can boost your post-call analytics productivity.

  • Quick insights: Instead of spending time building complex dashboards and visualizations, users can quickly get answers to their questions about call volumes, agent performance, customer sentiment, and more. Amazon Q in QuickSight understands the context of your data and generates relevant visualizations on the fly.
  • One-time analysis: With Amazon Q in QuickSight, you can perform one-time analysis on your post-call data without any prior setup. Ask your questions using natural language, and QuickSight will provide the relevant insights, allowing you to explore your data in new ways and uncover hidden patterns.
  • Natural language interface: Amazon Q in QuickSight has a natural language interface that makes it accessible to non-technical users. Business analysts, managers, and executives can ask questions about post-call data without needing to learn complex querying languages or data visualization tools.
  • Contextual recommendations: Amazon Q in QuickSight can provide contextual recommendations based on your questions and the data available. For example, if you ask about customer sentiment, it might suggest analyzing sentiment by agent, call duration, or other relevant dimensions.
  • Automated dashboards: Amazon Q can help accelerate dashboard development based on your questions, saving you the effort of manually building and maintaining dashboards for post-call analytics.

By using Amazon Q in QuickSight, your organization can streamline post-call analytics, enabling faster insights, better decision-making, and improved customer experiences. With its natural language interface and automated visualizations, Amazon Q empowers users at all levels to explore and understand post-call data more efficiently.

Let’s dive into a couple of the capabilities available to Pro users, such as building executive summaries and data stories for post-call analytics.

Executive summaries

When a user is just starting to explore a new dashboard that has been shared with them, it often takes time to familiarize themselves with what is contained in the dashboard and where they should be looking for key insights. Executive summaries are a great way to use AI to highlight key insights and draw the user’s attention to specific visuals that contain metrics worth looking into further.

You can build an executive summary on any dashboard that you have access to, such as the dashboard shown in the following figure.

As shown in the following figure, you can change to another sheet, or even apply filters and regenerate the summary to get a fresh set of highlights for the filtered set of data.

The key benefits of using executive summaries include:

  • Automated insights: Amazon Q can automatically surface key insights and trends from your post-call data, making it possible to quickly create executive summaries that highlight the most important information.
  • Customized views: Executives can customize the visualizations and summaries generated by Amazon Q to align with their specific requirements and preferences, ensuring that the executive summaries are tailored to their needs.

Data storytelling

After a user has found an interesting trend or insight within a dashboard, they often need to communicate with others to drive a decision on what to do next. That decision might be made in a meeting or offline, but a presentation with key metrics and a structured narrative is often the basis for presenting the argument. This is exactly what data stories are designed to support. Rather than taking screenshots and pasting into a document or email, at which point you lose all governance and the data becomes static, stories in QuickSight are interactive, governed, and can be updated in a click.

To build a story, you always start from a dashboard. You then select visuals to support your story and input a prompt of what you want the story to be about. In the example, we want to generate a story to get insights and recommendations to improve call center operations (shown in the following figure).

As the following figure shows, after a few moments, you will see a fully structured story including visuals and insights, including recommendations for next steps.

Key benefits of using data stories:

  1. Narrative exploration: With Amazon Q, you can explore your post-call data through a narrative approach, asking follow-up questions based on the insights generated. This allows you to build a compelling data story that uncovers the underlying patterns and trends in your contact center operations.
  2. Contextual recommendations: Amazon Q can provide contextual recommendations for additional visualizations or analyses based on your questions and the data available. These recommendations can help you uncover new perspectives and enrich your data storytelling.
  3. Automated narratives: Amazon Q can generate automated narratives that explain the visualizations and insights, making it easier to communicate the data story to stakeholders who might not be familiar with the technical details.
  4. Interactive presentations: By integrating Amazon Q with QuickSight presentation mode, you can create interactive data storytelling experiences. Executives and stakeholders can ask questions during the presentation, and Amazon Q will generate visualizations and insights in real time, enabling a more engaging and dynamic data storytelling experience.

Conclusion

By using the capabilities of Amazon Q in QuickSight, you can uncover valuable insights from your call recordings and post-call analytics data. These insights can then inform data-driven decisions to improve customer experiences, optimize contact center operations, and drive overall business performance.

In the era of customer-centricity, post-call analytics has become a game-changer for contact center operations. By using the power of Amazon Q and Amazon QuickSight on top of your PCA data, you can unlock a wealth of insights, optimize agent performance, and deliver exceptional customer experiences. Embrace the future of customer service with cutting-edge AI and analytics solutions from AWS, and stay ahead of the competition in today’s customer-centric landscape.


About the Author

Daniel Martinez is a Solutions Architect in Iberia Enterprise, part of the worldwide commercial sales organization (WWCS) at AWS.

Read More

Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp

Create a next generation chat assistant with Amazon Bedrock, Amazon Connect, Amazon Lex, LangChain, and WhatsApp

This post is co-written with Harrison Chase, Erick Friis and Linda Ye from LangChain.

Generative AI is set to revolutionize user experiences over the next few years. A crucial step in that journey involves bringing in AI assistants that intelligently use tools to help customers navigate the digital landscape. In this post, we demonstrate how to deploy a contextual AI assistant. Built using Amazon Bedrock Knowledge Bases, Amazon Lex, and Amazon Connect, with WhatsApp as the channel, our solution provides users with a familiar and convenient interface.

Amazon Bedrock Knowledge Bases gives foundation models (FMs) and agents contextual information from your company’s private data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, accurate, and customized responses. It also offers a powerful solution for organizations seeking to enhance their generative AI–powered applications. This feature simplifies the integration of domain-specific knowledge into conversational AI through native compatibility with Amazon Lex and Amazon Connect. By automating document ingestion, chunking, and embedding, it eliminates the need to manually set up complex vector databases or custom retrieval systems, significantly reducing development complexity and time.

The result is improved accuracy in FM responses, with reduced hallucinations due to grounding in verified data. Cost efficiency is achieved through minimized development resources and lower operational costs compared to maintaining custom knowledge management systems. The solution’s scalability quickly accommodates growing data volumes and user queries thanks to AWS serverless offerings. It also uses the robust security infrastructure of AWS to maintain data privacy and regulatory compliance. With the ability to continuously update and add to the knowledge base, AI applications stay current with the latest information. By choosing Amazon Bedrock Knowledge Bases, organizations can focus on creating value-added AI applications while AWS handles the intricacies of knowledge management and retrieval, enabling faster deployment of more accurate and capable AI solutions with less effort.
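
As a rough illustration of how an application can query a knowledge base at runtime, the following sketch uses the boto3 bedrock-agent-runtime Retrieve API; the knowledge base ID and query are placeholders, and the solution in this post performs retrieval through its LangChain agent rather than with a standalone call like this:

    import boto3

    agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

    # Retrieve the chunks most relevant to a user question from the knowledge base
    response = agent_runtime.retrieve(
        knowledgeBaseId="KB_ID_PLACEHOLDER",
        retrievalQuery={"text": "What is your return policy?"},
        retrievalConfiguration={"vectorSearchConfiguration": {"numberOfResults": 3}},
    )

    for result in response["retrievalResults"]:
        print(result["content"]["text"], result.get("score"))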

Prerequisites

To implement this solution, you need the following:

  • An AWS account with access to Amazon Bedrock and the other AWS services listed in the solution overview
  • A WhatsApp business account, so the AI assistant can be made available on WhatsApp through Amazon Connect
  • A LangChain API key, if you want to upload agent traces to LangSmith for observability

Solution overview

This solution uses several key AWS AI services to build and deploy the AI assistant:

  • Amazon Bedrock – Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI
  • Amazon Bedrock Knowledge Bases – Gives the AI assistant contextual information from a company’s private data sources
  • Amazon OpenSearch Service – Works as vector store that is natively supported by Amazon Bedrock Knowledge Bases
  • Amazon Lex – Enables building the conversational interface for the AI assistant, including defining intents and slots
  • Amazon Connect – Powers the integration with WhatsApp to make the AI assistant available to users on the popular messaging application
  • AWS Lambda – Runs the code to integrate the services and implement the LangChain agent that forms the core logic of the AI assistant
  • Amazon API Gateway – Receives the incoming requests triggered from WhatsApp and routes the request to AWS Lambda for further processing
  • Amazon DynamoDB – Stores the messages received and generated to enable conversation memory
  • Amazon SNS – Handles the routing of the outgoing response from Amazon Connect
  • LangChain – Provides a powerful abstraction layer for building the LangChain agent that helps your FMs perform context-aware reasoning
  • LangSmith – Uploads agent traces to LangSmith for added observability, including debugging, monitoring, and testing and evaluation capabilities

The following diagram illustrates the architecture.

Solution Architecture

Flow description

Numbers in red on the right side of the diagram illustrate the data ingestion process:

  1. Upload files to Amazon Simple Storage Service (Amazon S3) Data Source
  2. New files trigger Lambda Function
  3. Lambda Function invokes sync operation of the knowledge base data source
  4. Amazon Bedrock Knowledge Bases fetches the data from Amazon S3, chunks it, and generates the embeddings through the FM of your selection
  5. Amazon Bedrock Knowledge Bases stores the embeddings in Amazon OpenSearch Service

Numbers on the left side of the diagram illustrate the messaging process:

  1. User initiates communication by sending a message through WhatsApp to the webhook hosted on Amazon API Gateway.
  2. Amazon API Gateway routes the incoming message to the inbound message handler, executed on AWS Lambda.
  3. The inbound message handler records the user’s contact details in Amazon DynamoDB.
  4. For first-time users, the inbound message handler establishes a new session in Amazon Connect and logs it in DynamoDB. For returning users, it resumes their existing Amazon Connect session. (A minimal sketch of this handler logic follows the list.)
  5. Amazon Connect forwards the user’s message to Amazon Lex for natural language processing.
  6. Amazon Lex triggers the LangChain AI assistant, implemented as a Lambda function.
  7. The LangChain AI assistant retrieves the conversation history from DynamoDB.
  8. Using Amazon Bedrock Knowledge Bases, the LangChain AI assistant fetches relevant contextual information.
  9. The LangChain AI assistant compiles a prompt, incorporating context data and the user’s query, and submits it to an FM running on Amazon Bedrock.
  10. Amazon Bedrock processes the input and returns the model’s response to the LangChain AI assistant.
  11. The LangChain AI assistant relays the model’s response back to Amazon Lex.
  12. Amazon Lex transmits the model’s response to Amazon Connect.
  13. Amazon Connect publishes the model’s response to Amazon Simple Notification Service (Amazon SNS).
  14. Amazon SNS triggers the outbound message handler Lambda function.
  15. The outbound message handler retrieves the relevant chat contact information from Amazon DynamoDB.
  16. The outbound message handler dispatches the response to the user through Meta’s WhatsApp API.
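
As a rough, hypothetical sketch of steps 3 and 4 (not the solution’s actual Lambda code), the inbound message handler might record the contact in DynamoDB and start an Amazon Connect chat contact as follows; the table name, instance ID, and flow ID are placeholders:

    import boto3

    dynamodb = boto3.resource("dynamodb")
    connect = boto3.client("connect")
    contacts_table = dynamodb.Table("whatsapp-contacts")  # placeholder table name

    def handle_inbound_message(phone_number: str, message: str) -> None:
        """Record the WhatsApp contact and start (or reuse) an Amazon Connect chat session."""
        existing = contacts_table.get_item(Key={"phone_number": phone_number}).get("Item")

        if existing is None:
            # First-time user: start a chat contact against the flow that invokes Amazon Lex
            chat = connect.start_chat_contact(
                InstanceId="CONNECT_INSTANCE_ID",   # placeholder
                ContactFlowId="CONTACT_FLOW_ID",    # placeholder
                ParticipantDetails={"DisplayName": phone_number},
            )
            contacts_table.put_item(Item={
                "phone_number": phone_number,
                "contact_id": chat["ContactId"],
                "participant_token": chat["ParticipantToken"],
            })

        # The message itself is then sent into the chat through the Amazon Connect
        # Participant Service, and the response flows back via Amazon SNS (steps 13-16).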

Deploying this AI assistant involves three main steps:

  1. Create the knowledge base using Amazon Bedrock Knowledge Bases and ingest relevant product documentation, FAQs, knowledge articles, and other useful data that the AI assistant can use to answer user questions. The data should cover the key use cases and topics the AI assistant will support.
  2. Create a LangChain agent that powers the AI assistant’s logic. The agent is implemented in a Lambda function and uses the knowledge base as its primary tool to look up information. Deploying the agent with other resources is automated through the provided AWS CloudFormation template. See the list of resources in the next section.
  3. Create the Amazon Connect instance and configure the WhatsApp integration. This allows users to chat with the AI assistant using WhatsApp, providing a familiar interface and enabling rich interactions such as images and buttons. WhatsApp’s popularity improves the accessibility of the AI assistant.

Solution deployment

We’ve provided pre-built AWS CloudFormation templates that deploy everything you need in your AWS account.

  1. Sign in to the AWS console if you aren’t already.
  2. Choose the following Launch Stack button to open the CloudFormation console and create a new stack.
  3. Enter the following parameters:
    • StackName: Name your Stack, for example, WhatsAppAIStack
    • LangchainAPIKey: The API key generated through LangChain
The stack is available for the N. Virginia (us-east-1) Region. A YML template URL (which you can use to upgrade an existing stack to a new release) and an AWS CDK stack (which you can customize as needed) are available on GitHub.
  4. Check the box to acknowledge that you are creating AWS Identity and Access Management (IAM) resources and choose Create Stack.
  5. Wait approximately 10 minutes for the stack creation to complete; the stack creates the solution resources described in the solution overview.
  6. Upload files to the data source (Amazon S3) created for WhatsApp. As soon as you upload a file, the data source will synchronize automatically.
  7. To test the agent, on the Amazon Lex console, select the most recently created assistant. Choose English, choose Test, and send it a message.

Create the Amazon Connect instance and integrate WhatsApp

Configure Amazon Connect to integrate with your WhatsApp business account and enable the WhatsApp channel for the AI assistant:

  1. Navigate to Amazon Connect in the AWS console. If you haven’t already, create an instance. Copy your Instance ARN under Distribution settings. You will need this information later to link your WhatsApp business account.
  2. Choose your instance, then in the navigation panel, choose Flows. Scroll down and select Amazon Lex. Select your bot and choose Add Amazon Lex Bot.
  3. In the navigation panel, choose Overview. Under Access Information, choose Log in for emergency access.
  4. On the Amazon Connect console, under Routing in the navigation panel, choose Flows. Choose Create flow. Drag a Get customer input block onto the flow. Select the block. Select Text-to-speech or chat text and add an intro message such as, “Hello, how can I help you today?” Scroll down and choose Amazon Lex, then select the Amazon Lex bot you created in step 2.
  5. After you save the block, add another block called “Disconnect.” Drag the Entry arrow to the Get customer input and the Get customer input arrow to Disconnect. Choose Publish.
  6. After it’s published, choose Show additional flow information at the bottom of the navigation panel. Copy the flow’s Amazon Resource Name (ARN), which you will need to deploy the WhatsApp integration. The following screenshot shows the Amazon Connect console with the flow.

Connect Flow Diagram

  7. Deploy the WhatsApp integration as detailed in Provide WhatsApp messaging as a channel with Amazon Connect.

Testing the solution

Interact with the AI assistant through WhatsApp, as shown in the following video:

Clean up

To avoid incurring ongoing costs, delete the resources after you are done:

  1. Delete the CloudFormation stacks.
  2. Delete the Amazon Connect instance.

Conclusion

This post showed you how to create an intelligent conversational AI assistant by integrating Amazon Bedrock, Amazon Lex, and Amazon Connect and deploying it on WhatsApp.

The solution ingests relevant data into a knowledge base on Amazon Bedrock Knowledge Bases, implements a LangChain agent that uses the knowledge base to answer questions, and makes the agent available to users through WhatsApp. This provides an accessible, intelligent AI assistant that can guide users through your company’s products and services.

Possible next steps include customizing the AI assistant for your specific use case, expanding the knowledge base, and analyzing conversation logs using LangSmith to identify issues, improve errors, and break down performance bottlenecks in your FM call sequence.


About the Authors

Kenton Blacutt is an AI Consultant within the GenAI Innovation Center. He works hands-on with customers helping them solve real-world business problems with cutting edge AWS technologies, especially Amazon Q and Bedrock. In his free time, he likes to travel, experiment with new AI techniques, and run an occasional marathon.

Lifeth Álvarez is a Cloud Application Architect at Amazon. She enjoys working closely with others, embracing teamwork and autonomous learning. She likes to develop creative and innovative solutions, applying special emphasis on details. She enjoys spending time with family and friends, reading, playing volleyball, and teaching others.

Mani Khanuja is a Tech Lead – Generative AI Specialist, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors for Women in Manufacturing Education Foundation Board. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.

Linda Ye leads product marketing at LangChain. Previously, she worked at Sentry, Splunk, and Harness, driving product and business value for technical audiences, and studied economics at Stanford. In her free time, Linda enjoys writing half-baked novels, playing tennis, and reading.

Erick Friis, Founding Engineer at LangChain, currently spends most of his time on the open source side of the company. He’s an ex-founder with a passion for language-based applications. He spends his free time outdoors on skis or training for triathlons.

Harrison Chase is the CEO and cofounder of LangChain, an open source framework and toolkit that helps developers build context-aware reasoning applications. Prior to starting LangChain, he led the ML team at Robust Intelligence, led the entity linking team at Kensho, and studied statistics and computer science at Harvard.

Read More