Cisco achieves 50% latency improvement using Amazon SageMaker Inference faster autoscaling feature

This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco.

Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels its innovation, which uses artificial intelligence (AI) and machine learning (ML) to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps, including AWS.

Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, using large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building AI capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing (NLP), and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLMs grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance.

This post highlights how Cisco worked with the SageMaker Inference team to implement and evaluate the new faster autoscaling capability. For more details on Cisco’s use cases, solution, and benefits, see How Cisco accelerated the use of generative AI with Amazon SageMaker Inference.

In this post, we discuss the following:

  1. An overview of Cisco’s use case and architecture
  2. The new faster autoscaling feature:
    1. Single model real-time endpoints
    2. Deployment using Amazon SageMaker inference components
  3. The performance improvements Cisco saw with the faster autoscaling feature for generative AI inference
  4. Next steps

Cisco’s use case: Enhancing contact center experiences

Webex is applying generative AI to its contact center solutions, enabling more natural, human-like conversations between customers and agents. The AI can generate contextual, empathetic responses to customer inquiries, as well as automatically draft personalized emails and chat messages. This helps contact center agents work more efficiently while maintaining a high level of customer service.

Architecture

Initially, WxAI embedded LLM models directly into the application container images running on Amazon Elastic Kubernetes Service (Amazon EKS). However, as the models grew larger and more complex, this approach faced significant scalability and resource utilization challenges. Operating the resource-intensive LLMs through the applications required provisioning substantial compute resources, which slowed down processes like allocating resources and starting applications. This inefficiency hampered WxAI’s ability to rapidly develop, test, and deploy new AI-powered features for the Webex portfolio.

To address these challenges, the WxAI team turned to SageMaker Inference, a fully managed AI inference service that allows seamless deployment and scaling of models independently from the applications that use them. By decoupling the LLM hosting from the Webex applications, WxAI could provision the necessary compute resources for the models without impacting the core collaboration and communication capabilities.

“The applications and the models work and scale fundamentally differently, with entirely different cost considerations; by separating them rather than lumping them together, it’s much simpler to solve issues independently.”

– Travis Mehlinger, Principal Engineer at Cisco. 

This architectural shift has enabled Webex to harness the power of generative AI across its suite of collaboration and customer engagement solutions.

Until now, SageMaker endpoints have used auto scaling based on the invocations-per-instance metric, which can take approximately 6 minutes to detect the need to scale.

Introducing new predefined metric types for faster autoscaling

The Cisco Webex AI team wanted to reduce their inference auto scaling times, so they worked with the Amazon SageMaker team to improve the autoscaling behavior of their inference endpoints.

Amazon SageMaker’s real-time inference endpoint offers a scalable, managed solution for hosting generative AI models. This versatile resource can accommodate multiple instances, serving one or more deployed models for instant predictions. Customers have the flexibility to deploy either a single model or multiple models using SageMaker inference components on the same endpoint. This approach allows for efficient handling of diverse workloads and cost-effective scaling.
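
As an illustration, deploying a model as an inference component on an existing real-time endpoint can be sketched with boto3 as follows. The endpoint, variant, and model names and the resource requirements are placeholder assumptions, not Cisco’s actual configuration.

    import boto3

    sagemaker = boto3.client("sagemaker")

    # Attach a model to an existing real-time endpoint as an inference component.
    # Endpoint, variant, and model names below are placeholders.
    sagemaker.create_inference_component(
        InferenceComponentName="llama3-8b-ic",
        EndpointName="webex-llm-endpoint",
        VariantName="AllTraffic",
        Specification={
            "ModelName": "llama3-8b-model",
            "ComputeResourceRequirements": {
                "NumberOfAcceleratorDevicesRequired": 1,
                "MinMemoryRequiredInMb": 16384,
            },
        },
        RuntimeConfig={"CopyCount": 1},  # initial number of model copies
    )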

To optimize real-time inference workloads, SageMaker employs application automatic scaling (auto scaling). This feature dynamically adjusts both the number of instances in use and the quantity of model copies deployed (when using inference components), responding to real-time changes in demand. When traffic to the endpoint surpasses a predefined threshold, auto scaling increases the available instances and deploys additional model copies to meet the heightened demand. Conversely, as workloads decrease, the system automatically removes unnecessary instances and model copies, effectively reducing costs. This adaptive scaling ensures that resources are optimally utilized, balancing performance needs with cost considerations in real-time.

Working with Cisco, Amazon SageMaker released a new sub-minute, high-resolution predefined metric type, SageMakerVariantConcurrentRequestsPerModelHighResolution, for faster autoscaling and reduced detection time. This new high-resolution metric has been shown to reduce scaling detection times by up to 6x (compared to the existing SageMakerVariantInvocationsPerInstance metric), thereby improving overall end-to-end inference latency by up to 50% on endpoints hosting generative AI models like Llama3-8B.
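
The following is a minimal sketch of how an endpoint variant could be configured to scale on the new metric using Application Auto Scaling. The endpoint name, capacity bounds, target value, and cooldowns are illustrative assumptions.

    import boto3

    autoscaling = boto3.client("application-autoscaling")
    resource_id = "endpoint/webex-llm-endpoint/variant/AllTraffic"  # placeholder endpoint

    # Make the variant's instance count scalable.
    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=8,
    )

    # Target tracking on the new sub-minute, high-resolution concurrency metric.
    autoscaling.put_scaling_policy(
        PolicyName="concurrency-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantConcurrentRequestsPerModelHighResolution",
            },
            "TargetValue": 5.0,     # target concurrent requests per model (assumed)
            "ScaleInCooldown": 300,
            "ScaleOutCooldown": 60,
        },
    )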

With this release, SageMaker real-time endpoints also emit two new Amazon CloudWatch metrics, ConcurrentRequestsPerModel and ConcurrentRequestsPerModelCopy, which are better suited for monitoring and scaling Amazon SageMaker endpoints hosting LLMs and foundation models (FMs).
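
To observe these metrics, you can query CloudWatch directly. The following sketch assumes the metrics are published to the AWS/SageMaker namespace with EndpointName and VariantName dimensions; verify the exact dimensions for your endpoint in the CloudWatch console.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Pull 1-minute averages of the new per-model concurrency metric.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/SageMaker",
        MetricName="ConcurrentRequestsPerModel",
        Dimensions=[
            {"Name": "EndpointName", "Value": "webex-llm-endpoint"},  # placeholder
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        StartTime=datetime.utcnow() - timedelta(minutes=30),
        EndTime=datetime.utcnow(),
        Period=60,
        Statistics=["Average", "Maximum"],
    )
    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"], point["Maximum"])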

Cisco’s evaluation of the faster autoscaling feature for generative AI inference

Cisco evaluated Amazon SageMaker’s new predefined metric types for faster autoscaling on their generative AI workloads. They observed up to a 50% improvement in end-to-end inference latency by using the new SageMakerVariantConcurrentRequestsPerModelHighResolution metric, compared to the existing SageMakerVariantInvocationsPerInstance metric.

The setup involved running their generative AI models on SageMaker real-time inference endpoints. SageMaker’s auto scaling feature dynamically adjusted both the number of instances and the quantity of model copies deployed to meet real-time changes in demand. The new high-resolution SageMakerVariantConcurrentRequestsPerModelHighResolution metric reduced scaling detection times by up to 6x, enabling faster autoscaling and lower latency.

In addition, SageMaker now emits new CloudWatch metrics, including ConcurrentRequestsPerModel and ConcurrentRequestsPerModelCopy, which are better suited for monitoring and scaling endpoints hosting large language models (LLMs) and foundation models (FMs). This enhanced autoscaling capability has been a game-changer for Cisco, helping to improve the performance and efficiency of their critical generative AI applications.

“We are really pleased with the performance improvements we’ve seen from Amazon SageMaker’s new autoscaling metrics. The higher-resolution scaling metrics have significantly reduced latency during initial load and scale-out on our Gen AI workloads. We’re excited to do a broader rollout of this feature across our infrastructure.”

– Travis Mehlinger, Principal Engineer at Cisco.

Cisco plans to continue working with the SageMaker Inference team to drive improvements in the remaining factors that affect autoscaling latency, such as model download and load times.

Conclusion

Cisco’s Webex AI team is continuing to use Amazon SageMaker Inference to power generative AI experiences across its Webex portfolio. Its evaluation of SageMaker’s faster autoscaling feature showed up to 50% latency improvements on its generative AI inference endpoints. As the WxAI team continues to push the boundaries of AI-driven collaboration, its partnership with the Amazon SageMaker team will be crucial in informing upcoming improvements and advanced generative AI inference capabilities. With this new feature, Cisco looks forward to further optimizing its AI inference performance by rolling it out broadly across multiple AWS Regions and delivering even more impactful generative AI features to its customers.


About the Authors

Travis Mehlinger is a Principal Software Engineer in the Webex Collaboration AI group, where he helps teams develop and operate cloud-native AI and ML capabilities to support Webex AI features for customers around the world. In his spare time, Travis enjoys cooking barbecue, playing video games, and traveling around the US and UK to race go-karts.

Karthik Raghunathan is the Senior Director for Speech, Language, and Video AI in the Webex Collaboration AI Group. He leads a multidisciplinary team of software engineers, machine learning engineers, data scientists, computational linguists, and designers who develop advanced AI-driven features for the Webex collaboration portfolio. Prior to Cisco, Karthik held research positions at MindMeld (acquired by Cisco), Microsoft, and Stanford University.

Praveen Chamarthi is a Senior AI/ML Specialist with Amazon Web Services. He is passionate about AI/ML and all things AWS. He helps customers across the Americas to scale, innovate, and operate ML workloads efficiently on AWS. In his spare time, Praveen loves to read and enjoys sci-fi movies.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing AI. He focuses on core challenges related to deploying complex AI applications, multi-tenant models, cost optimizations, and making deployment of Generative AI models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Ravi Thakur is a Sr Solutions Architect Supporting Strategic Industries at AWS, and is based out of Charlotte, NC. His career spans diverse industry verticals, including banking, automotive, telecommunications, insurance, and energy. Ravi’s expertise shines through his dedication to solving intricate business challenges on behalf of customers, utilizing distributed, cloud-native, and well-architected design patterns. His proficiency extends to microservices, containerization, AI/ML, Generative AI, and more. Today, Ravi empowers AWS Strategic Customers on personalized digital transformation journeys, leveraging his proven ability to deliver concrete, bottom-line benefits.

How Cisco accelerated the use of generative AI with Amazon SageMaker Inference

This post is co-authored with Travis Mehlinger and Karthik Raghunathan from Cisco.

Webex by Cisco is a leading provider of cloud-based collaboration solutions, including video meetings, calling, messaging, events, polling, asynchronous video, and customer experience solutions like contact center and purpose-built collaboration devices. Webex’s focus on delivering inclusive collaboration experiences fuels their innovation, which uses artificial intelligence (AI) and machine learning (ML), to remove the barriers of geography, language, personality, and familiarity with technology. Its solutions are underpinned with security and privacy by design. Webex works with the world’s leading business and productivity apps—including AWS.

Cisco’s Webex AI (WxAI) team plays a crucial role in enhancing these products with AI-driven features and functionalities, using large language models (LLMs) to improve user productivity and experiences. In the past year, the team has increasingly focused on building AI capabilities powered by LLMs to improve productivity and experience for users. Notably, the team’s work extends to Webex Contact Center, a cloud-based omni-channel contact center solution that empowers organizations to deliver exceptional customer experiences. By integrating LLMs, the WxAI team enables advanced capabilities such as intelligent virtual assistants, natural language processing (NLP), and sentiment analysis, allowing Webex Contact Center to provide more personalized and efficient customer support. However, as these LLM models grew to contain hundreds of gigabytes of data, the WxAI team faced challenges in efficiently allocating resources and starting applications with the embedded models. To optimize its AI/ML infrastructure, Cisco migrated its LLMs to Amazon SageMaker Inference, improving speed, scalability, and price-performance.

This post highlights how Cisco implemented new functionalities and migrated existing workloads to Amazon SageMaker inference components for their industry-specific contact center use cases. By integrating generative AI, they can now analyze call transcripts to better understand customer pain points and improve agent productivity. Cisco has also implemented conversational AI experiences, including chatbots and virtual agents that can generate human-like responses, to automate personalized communications based on customer context. Additionally, they are using generative AI to extract key call drivers, optimize agent workflows, and gain deeper insights into customer sentiment. Cisco’s adoption of SageMaker Inference has enabled them to streamline their contact center operations and provide more satisfying, personalized interactions that address customer needs.

In this post, we discuss the following:

  • Cisco’s business use cases and outcomes
  • How Cisco accelerated the use of generative AI powered by LLMs for their contact center use cases with the help of SageMaker Inference
  • Cisco’s generative AI inference architecture, which is built as a robust and secure foundation, using various services and features such as SageMaker Inference, Amazon Bedrock, Kubernetes, Prometheus, Grafana, and more
  • How Cisco uses an LLM router and auto scaling to route requests to appropriate LLMs for different tasks while simultaneously scaling their models for resiliency and performance efficiency
  • How the solutions in this post impacted Cisco’s business roadmap and strategic partnership with AWS
  • How Cisco helped SageMaker Inference build new capabilities to deploy generative AI applications at scale

Enhancing collaboration and customer engagement with generative AI: Webex’s AI-powered solutions

In this section, we discuss Cisco’s AI-powered use cases.

Meeting summaries and insights

For Webex Meetings, the platform uses generative AI to automatically summarize meeting recordings and transcripts. This extracts the key takeaways and action items, helping distributed teams stay informed even if they missed a live session. The AI-generated summaries provide a concise overview of important discussions and decisions, allowing employees to quickly get up to speed. Beyond summaries, Webex’s generative AI capabilities also surface intelligent insights from meeting content. This includes identifying action items, highlighting critical decisions, and generating personalized meeting notes and to-do lists for each participant. These insights help make meetings more productive and hold attendees accountable.

Enhancing contact center experiences

Webex is also applying generative AI to its contact center solutions, enabling more natural, human-like conversations between customers and agents. The AI can generate contextual, empathetic responses to customer inquiries, as well as automatically draft personalized emails and chat messages. This helps contact center agents work more efficiently while maintaining a high level of customer service.

Webex customers realize positive outcomes with generative AI

Webex’s adoption of generative AI is driving tangible benefits for customers. Clients using the platform’s AI-powered meeting summaries and insights have reported productivity gains. Webex customers using the platform’s generative AI for contact centers have handled hundreds of thousands of calls with improved customer satisfaction and reduced handle times, enabling more natural, empathetic conversations between agents and clients. Webex’s strategic integration of generative AI is empowering users to work smarter and deliver exceptional experiences.

For more details on how Webex is harnessing generative AI to enhance collaboration and customer engagement, see Webex | Exceptional Experiences for Every Interaction on the Webex blog.

Using SageMaker Inference to optimize resources for Cisco

Cisco’s WxAI team is dedicated to delivering advanced collaboration experiences powered by cutting-edge ML. The team develops a comprehensive suite of AI and ML features for the Webex ecosystem, including audio intelligence capabilities like noise removal and optimizing speaker voices, language intelligence for transcription and translation, and video intelligence features like virtual backgrounds. At the forefront of WxAI’s innovations is the AI-powered Webex Assistant, a virtual assistant that provides voice-activated control and seamless meeting support in multiple languages. To build these sophisticated capabilities, WxAI uses LLMs, which can contain up to hundreds of gigabytes of training data.

Initially, WxAI embedded LLM models directly into the application container images running on Amazon Elastic Kubernetes Service (Amazon EKS). However, as the models grew larger and more complex, this approach faced significant scalability and resource utilization challenges. Operating the resource-intensive LLMs through the applications required provisioning substantial compute resources, which slowed down processes like allocating resources and starting applications. This inefficiency hampered WxAI’s ability to rapidly develop, test, and deploy new AI-powered features for the Webex portfolio. To address these challenges, the WxAI team turned to SageMaker Inference—a fully managed AI inference service that allows seamless deployment and scaling of models independently from the applications that use them. By decoupling the LLM hosting from the Webex applications, WxAI could provision the necessary compute resources for the models without impacting the core collaboration and communication capabilities.

 “The applications and the models work and scale fundamentally differently, with entirely different cost considerations; by separating them rather than lumping them together, it’s much simpler to solve issues independently.”

– Travis Mehlinger, Principal Engineer at Cisco.

This architectural shift has enabled Webex to harness the power of generative AI across its suite of collaboration and customer engagement solutions.

Solution overview: Improving efficiency and reducing costs by migrating to SageMaker Inference

To address the scalability and resource utilization challenges faced with embedding LLMs directly into their applications, the WxAI team migrated to SageMaker Inference. By taking advantage of this fully managed service for deploying LLMs, Cisco unlocked significant performance and cost-optimization opportunities. Key benefits include the ability to deploy multiple LLMs behind a single endpoint for faster scaling and improved response latencies, as well as cost savings. Additionally, the WxAI team implemented an LLM proxy to simplify access to LLMs for Webex teams, enable centralized data collection, and reduce operational overhead. With SageMaker Inference, Cisco can efficiently manage and scale their LLM deployments, harnessing the power of generative AI across the Webex portfolio while maintaining optimal performance, scalability, and cost-effectiveness.

The following diagram illustrates the WxAI architecture on AWS.

The architecture is built on a robust and secure AWS foundation:

  • The architecture uses AWS services like Application Load Balancer, AWS WAF, and EKS clusters for seamless ingress, threat mitigation, and containerized workload management.
  • The LLM proxy (a microservice deployed on an EKS pod as part of the Service VPC) simplifies the integration of LLMs for Webex teams, providing a streamlined interface and reducing operational overhead. The proxy supports LLM deployments on SageMaker Inference, Amazon Bedrock, or other LLM providers (see the invocation sketch after this list).
  • The architecture uses SageMaker Inference for optimized model deployment, auto scaling, and routing mechanisms.
  • The system integrates Loki for logging, Amazon Managed Service for Prometheus for metrics, and Grafana for unified visualization, seamlessly integrated with Cisco SSO.
  • The Data VPC houses the data layer components, including Amazon ElastiCache for caching and Amazon Relational Database Service (Amazon RDS) for database services, providing efficient data access and management.

Use case overview: Contact center topic analytics

A key focus area for the WxAI team is to enhance the capabilities of the Webex Contact Center platform. A typical Webex Contact Center installation has hundreds of agents handling many interactions through various channels like phone calls and digital channels. Webex’s AI-powered Topic Analytics feature extracts the key reasons customers are calling about by analyzing aggregated historical interactions and clustering them into meaningful topic categories, as shown in the following screenshot. The contact center administrator can then use these insights to optimize operations, enhance agent performance, and ultimately deliver a more satisfactory customer experience.

The Topic Analytics feature is powered by a pipeline of three models: a call driver extraction model, a topic clustering model, and a topic labeling model, as illustrated in the following diagram.

The model details are as follows:

  • Call driver extraction – This generative model summarizes the primary reason or intent (referred to as the call driver) behind a customer’s call. Accurate automatic tagging of calls with call drivers helps contact center supervisors and administrators quickly understand the primary reason for any historical call. One of the key considerations when solving this problem was selecting the right model to balance quality and operational costs. The WxAI team chose the FLAN-T5 model on SageMaker Inference and instruction fine-tuned it for extracting call drivers from call transcripts. FLAN-T5 is a powerful text-to-text transfer transformer model that performs various natural language understanding and generation tasks. This workload had a global footprint, deployed in the us-east-2, eu-west-2, eu-central-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, and ca-central-1 AWS Regions.
  • Topic clustering – Although automatically tagging every contact center interaction with its call driver is a useful feature in itself, analyzing these call drivers in an aggregated fashion over a large batch of calls can uncover even more interesting trends and insights. The topic clustering model achieves this by clustering all the individually extracted call drivers from a large batch of calls into different topic clusters. It does this by creating a semantic embedding for each call driver and employing an unsupervised hierarchical clustering technique that operates on the vector embeddings (a minimal sketch of this step follows the list). This results in distinct and coherent topic clusters where semantically similar call drivers are grouped together.
  • Topic labeling – The topic labeling model is a generative model that creates a descriptive name to serve as the label for each topic cluster. Several LLMs were prompt-tuned and evaluated in a few-shot setting to choose the ideal model for the label generation task. Finally, Llama2-13b-chat, with its ability to better capture contextual nuances and semantics of natural language conversation, was used for its accuracy, performance, and cost-effectiveness. Additionally, Llama2-13b-chat was deployed and used on SageMaker inference components, while maintaining relatively low operating costs compared to other LLMs, by using specific hardware like g4dn and g5 instances.
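
The clustering step in the topic clustering model can be approximated with off-the-shelf tooling. The following is a minimal sketch under the assumption that semantic embeddings for the call drivers have already been computed by an embedding model; it is illustrative, not Cisco’s production code.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    # Placeholder: in practice, these would be semantic embeddings of call drivers
    # produced by an embedding model (shape: num_call_drivers x embedding_dim).
    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(500, 384))

    # Unsupervised hierarchical clustering on cosine distances between embeddings.
    distances = pdist(embeddings, metric="cosine")
    tree = linkage(distances, method="average")

    # Cut the hierarchy at a distance threshold to obtain topic clusters;
    # the threshold is a tunable assumption.
    topic_ids = fcluster(tree, t=0.6, criterion="distance")
    print(f"Found {len(set(topic_ids))} topic clusters")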

This solution also used the auto scaling capabilities of SageMaker to dynamically adjust the number of instances, with a desired minimum of 1 and a maximum of 30. This approach provides efficient resource utilization while maintaining high throughput, allowing the WxAI platform to handle batch jobs overnight and scale to hundreds of inferences per minute during peak hours. By deploying the model on SageMaker Inference with auto scaling, the WxAI team was able to deliver reliable and accurate responses to customer interactions for their Topic Analytics use case.

By accurately pinpointing the call driver, the system can suggest appropriate actions, resources, and next steps to the agent, streamlining the customer support process, further leading to personalized and accurate responses to customer questions.

To handle fluctuating demand and optimize resource utilization, the WxAI team implemented auto scaling for their SageMaker Inference endpoints. They configured the endpoints to scale from a minimum to a maximum instance count based on GPU utilization. Additionally, the LLM proxy routed requests between the different LLMs deployed on SageMaker Inference. This proxy abstracts the complexities of communicating with various LLM providers and enables centralized data collection and analysis. This led to enhanced generative AI workflows, optimized latency, and personalized use case implementations.
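
A sketch of this kind of GPU-based scaling policy follows, using a customized CloudWatch metric with Application Auto Scaling. The endpoint name, metric namespace, capacity bounds, and target value are assumptions for illustration.

    import boto3

    autoscaling = boto3.client("application-autoscaling")
    resource_id = "endpoint/webex-llm-endpoint/variant/AllTraffic"  # placeholder

    autoscaling.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=4,
    )

    # Target tracking on a customized CloudWatch metric (GPU utilization).
    autoscaling.put_scaling_policy(
        PolicyName="gpu-utilization-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "CustomizedMetricSpecification": {
                "MetricName": "GPUUtilization",
                "Namespace": "/aws/sagemaker/Endpoints",  # assumed namespace for instance metrics
                "Dimensions": [
                    {"Name": "EndpointName", "Value": "webex-llm-endpoint"},
                    {"Name": "VariantName", "Value": "AllTraffic"},
                ],
                "Statistic": "Average",
            },
            "TargetValue": 60.0,  # scale out when average GPU utilization exceeds ~60%
        },
    )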

Benefits

Through the strategic adoption of AWS AI services, Cisco’s WxAI team has realized significant benefits, enabling them to build cutting-edge, AI-powered collaboration capabilities more rapidly and cost-effectively:

  • Improved development and deployment cycle time – By decoupling models from applications, the team has streamlined processes like bug fixes, integration testing, and feature rollouts across environments, accelerating their overall development velocity.
  • Simplified engineering and delivery – The clear separation of concerns between the lean application layer and resource-intensive model layer has simplified engineering efforts and delivery, allowing the team to focus on innovation rather than infrastructure complexities.
  • Reduced costs – By using fully managed services like SageMaker Inference, the team has offloaded infrastructure management overhead. Additionally, capabilities like asynchronous inference and multi-model endpoints have enabled significant cost optimization without compromising performance or availability.
  • Scalability and performance – Services like SageMaker Inference and Amazon Bedrock, combined with technologies like NVIDIA Triton Inference Server on SageMaker, have empowered the WxAI team to scale their AI/ML workloads reliably and deliver high-performance inference for demanding use cases.
  • Accelerated innovation – The partnership with AWS has given the WxAI team access to cutting-edge AI services and expertise, enabling them to rapidly prototype and deploy innovative capabilities like the AI-powered Webex Assistant and advanced contact center AI features.

Cisco’s contributions to SageMaker Inference: Enhancing generative AI inference capabilities

Building upon the success of their strategic migration to SageMaker Inference, Cisco has been instrumental in partnering with the SageMaker Inference team to build and enhance key generative AI capabilities within the SageMaker platform. Since the early days of generative AI, Cisco has provided the SageMaker Inference team with valuable inputs and expertise, enabling the introduction of several new features and optimizations:

  • Cost and performance optimizations for generative AI inference – Cisco helped the SageMaker Inference team develop innovative techniques to optimize the use of accelerators, enabling SageMaker Inference to reduce foundation model (FM) deployment costs by 50% on average and latency by 20% on average with inference components. This breakthrough delivers significant cost savings and performance improvements for customers running generative AI workloads on SageMaker.
  • Scaling improvements for generative AI inference – Cisco’s expertise in distributed systems and auto scaling has also helped the SageMaker team develop advanced capabilities to better handle the scaling requirements of generative AI models. These improvements reduce auto scaling times by up to 40% and auto scaling detection by 6 times, so customers can rapidly scale their generative AI workloads on SageMaker to meet spikes in demand without compromising performance.
  • Streamlined generative AI model deployment for inference – Recognizing the need for simplified generative AI model deployment, Cisco collaborated with AWS to introduce the ability to deploy open source LLMs and FMs with just a few clicks. This user-friendly functionality removes the complexity traditionally associated with deploying these advanced models, empowering more customers to harness the power of generative AI.
  • Simplified inference deployment for Kubernetes customers – Cisco’s deep expertise in Kubernetes and container technologies helped the SageMaker team develop new Kubernetes Operator-based inference capabilities. These innovations make it straightforward for customers running applications on Kubernetes to deploy and manage generative AI models, reducing LLM deployment costs by 50% on average.
  • Using NVIDIA Triton Inference Server for generative AI – Cisco worked with AWS to integrate the NVIDIA Triton Inference Server, a high-performance model serving container managed by SageMaker, to power generative AI inference on SageMaker Inference. This enabled the WxAI team to scale their AI/ML workloads reliably and deliver high-performance inference for demanding generative AI use cases.
  • Packaging generative AI models more efficiently – To further simplify the generative AI model lifecycle, Cisco worked with AWS to enhance the capabilities in SageMaker for packaging LLMs and FMs for deployment. These improvements make it straightforward to prepare and deploy these generative AI models, accelerating their adoption and integration.
  • Improved documentation for generative AI – Recognizing the importance of comprehensive documentation to support the growing generative AI ecosystem, Cisco collaborated with the AWS team to enhance the SageMaker documentation. This includes detailed guides, best practices, and reference materials tailored specifically for generative AI use cases, helping customers quickly ramp up their generative AI initiatives on the SageMaker platform.

By closely partnering with the SageMaker Inference team, Cisco has played a pivotal role in driving the rapid evolution of generative AI Inference capabilities in SageMaker. The features and optimizations introduced through this collaboration are empowering AWS customers to unlock the transformative potential of generative AI with greater ease, cost-effectiveness, and performance.

“Our partnership with the SageMaker Inference product team goes back to the early days of generative AI, and we believe the features we have built in collaboration, from cost optimizations to high-performance model deployment, will broadly help other enterprises rapidly adopt and scale generative AI workloads on SageMaker, unlocking new frontiers of innovation and business transformation.”

– Travis Mehlinger, Principal Engineer at Cisco.

Conclusion

By using AWS services like SageMaker Inference and Amazon Bedrock for generative AI, Cisco’s WxAI team has been able to optimize their AI/ML infrastructure, enabling them to build and deploy AI-powered features more efficiently, reliably, and cost-effectively. This strategic approach has unlocked significant benefits for Cisco in deploying and scaling its generative AI capabilities for the Webex platform. Cisco’s own journey with generative AI, as showcased in this post, offers valuable lessons and insights for other uses of SageMaker Inference.

Recognizing the impact of generative AI, Cisco has played a crucial role in shaping the future of these capabilities within SageMaker Inference. By providing valuable insights and hands-on collaboration, Cisco has helped AWS develop a range of powerful features that are making generative AI more accessible and scalable for organizations. From optimizing infrastructure costs and performance to streamlining model deployment and scaling, Cisco’s contributions have been instrumental in enhancing the SageMaker Inference service.

Moving forward, the Cisco-AWS partnership aims to drive further advancements in areas like conversational and generative AI inference. As generative AI adoption accelerates across industries, Cisco’s Webex platform is designed to scale and streamline user experiences through various use cases discussed in this post and beyond. You can expect to see ongoing innovation from this collaboration in SageMaker Inference capabilities, as Cisco and SageMaker Inference continue to push the boundaries of what’s possible in the world of AI.

For more information on Webex Contact Center’s Topic Analytics feature and related AI capabilities, refer to The Webex Advantage: Navigating Customer Experience in the Age of AI on the Webex blog.


About the Authors

Travis Mehlinger is a Principal Software Engineer in the Webex Collaboration AI group, where he helps teams develop and operate cloud-centered AI and ML capabilities to support Webex AI features for customers around the world. In his spare time, Travis enjoys cooking barbecue, playing video games, and traveling around the US and UK to race go-karts.

Karthik Raghunathan is the Senior Director for Speech, Language, and Video AI in the Webex Collaboration AI Group. He leads a multidisciplinary team of software engineers, machine learning engineers, data scientists, computational linguists, and designers who develop advanced AI-driven features for the Webex collaboration portfolio. Prior to Cisco, Karthik held research positions at MindMeld (acquired by Cisco), Microsoft, and Stanford University.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch and spending time with his family.

Ravi Thakur is a Senior Solutions Architect at AWS, based in Charlotte, NC. He specializes in solving complex business challenges using distributed, cloud-centered, and well-architected patterns. Ravi’s expertise includes microservices, containerization, AI/ML, and generative AI. He empowers AWS strategic customers on digital transformation journeys, delivering bottom-line benefits. In his spare time, Ravi enjoys motorcycle rides, family time, reading, movies, and traveling.

Amit Arora is an AI and ML Specialist Architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington D.C.

Madhur Prashant is an AI and ML Solutions Architect at Amazon Web Services. He is passionate about the intersection of human thinking and generative AI. His interests lie in generative AI, specifically building solutions that are helpful and harmless, and most of all optimal for customers. Outside of work, he loves doing yoga, hiking, spending time with his twin, and playing the guitar.

Discover insights from Box with the Amazon Q Box connector

Seamless access to content and insights is crucial for delivering exceptional customer experiences and driving successful business outcomes. Box, a leading cloud content management platform, serves as a central repository for diverse digital assets and documents in many organizations. An enterprise Box account typically contains a wealth of materials, including documents, presentations, knowledge articles, and more. However, extracting meaningful information from the vast amount of Box data can be challenging without the right tools and capabilities. Employees in roles such as customer support, project management, and product management require the ability to effortlessly query Box content, uncover relevant insights, and make informed decisions that address customer needs effectively.

Building a generative artificial intelligence (AI)-powered conversational application that is seamlessly integrated with your enterprise’s relevant data sources requires time, money, and people. First, you need to develop connectors to those data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. You also need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take action using the data and expertise found in your company’s information repositories, code, and enterprise systems (such as Box, among others). Amazon Q provides out-of-the-box native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well-written answers. A data source connector is a component of Amazon Q that helps integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including Box Content Cloud, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more, and helps you create your generative AI solution with minimal configuration. For a full list of Amazon Q Business supported data source connectors, see Amazon Q Business connectors.

In this post, we guide you through the process of configuring and integrating Amazon Q Business with your Box Content Cloud. This will enable your support, project management, product management, leadership, and other teams to quickly obtain accurate answers to their questions from the documents stored in your Box account.

Find accurate answers from Box documents using Amazon Q Business

After you integrate Amazon Q Business with Box, you can ask questions based on the documents stored in your Box account. For example:

  • Natural language search – You can search for information within documents located in any folder by using conversational language, simplifying the process of finding desired data without the need to remember specific keywords or filters.
  • Summarization – You can ask Amazon Q Business to summarize contents of documents to meet your needs. This enables you to quickly understand the main points and find relevant information in your documents without having to scan through individual document descriptions manually.

Overview of the Box connector for Amazon Q Business

To crawl and index contents in Box, you can configure the Amazon Q Business Box connector as a data source in your Amazon Q Business application. When you connect Amazon Q Business to a data source and initiate the sync process, Amazon Q Business crawls and indexes documents from the data source into its index.

Types of documents

Let’s look at what is considered a document in the context of the Amazon Q Business Box connector. A document is a collection of information that consists of a title, the content (or the body), metadata (data about the document), and access control list (ACL) information to make sure answers are provided from documents that the user has access to.

The Amazon Q Business Box connector supports crawling of the following entities in Box:

  • Files – Each file is considered a single document
  • Comments – Each comment is considered a single document
  • Tasks – Each task is considered a single document
  • Web links – Each web link is considered a single document

Additionally, Box users can create custom objects and custom metadata fields. Amazon Q supports the crawling and indexing of these custom objects and custom metadata.

The Amazon Q Business Box connector also supports the indexing of a rich set of metadata from the various entities in Box. It further provides the ability to map these source metadata fields to Amazon Q index fields for indexing this metadata. These field mappings allow you to map Box field names to Amazon Q index field names. There are two types of metadata fields that Amazon Q connectors support:

  • Reserved or default fields – These are required with each document, such as the title, creation date, or author
  • Custom metadata fields – These are fields created in the data source in addition to what the data source already provides

Refer to Box data source connector field mappings for more information.

Authentication

Before you index the content from Box, you need to first establish a secure connection between the Amazon Q Business connector for Box and your Box cloud instance. To establish a secure connection, you need to authenticate with the data source. Let’s look at the supported authentication mechanisms for the Box connector.

The Amazon Q Box connector supports Box JSON Web Token (JWT) authentication as its authentication method. This authentication approach requires the configuration of several parameters, including the Box client ID, client secret, public key ID, private key, and passphrase. By implementing this token-based JWT authentication, the Amazon Q Business assistant can securely connect to and interact with data stored within the Box platform on behalf of your organization.

Refer to JWT Auth in the Box Developer documentation for more information on setting up and managing JWT tokens in Box.
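
As an illustration of this authentication flow outside of Amazon Q, the Box Python SDK can authenticate with the JWT app settings JSON that you download when creating the Box custom app later in this post. This is a sketch for verification purposes only; the Amazon Q connector performs this authentication for you, and the settings file name is a placeholder.

    from boxsdk import Client, JWTAuth

    # Authenticate with the JWT app settings JSON downloaded from the
    # Box Developer Console (file name is a placeholder).
    auth = JWTAuth.from_settings_file("amazonq_box_config.json")
    client = Client(auth)

    # Verify the service account connection.
    service_account = client.user().get()
    print(f"Authenticated to Box as: {service_account.name}")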

Supported Box subscriptions

To integrate Amazon Q Business with Box using the Box connector, access to Box Enterprise or Box Enterprise Plus plans is required. Both plans provide the necessary capabilities to create a custom application, download a JWT token as an administrator, and then configure the connector to ingest relevant data from Box.

Secure querying with ACL crawling, identity crawling, and User Store

The success of Amazon Q Business applications hinges on two key factors: making sure end-users only see responses generated from documents they have access to, and maintaining the privacy and security of each user’s conversation history. Amazon Q Business achieves this by validating the user’s identity every time they access the application, and using this to restrict tasks and answers to the user’s authorized documents. This is accomplished through the integration of AWS IAM Identity Center, which serves as the authoritative identity source and validates users. You can configure IAM Identity Center to use your enterprise identity provider (IdP)—such as Okta or Microsoft Entra ID—as the identity source.

ACLs and identity crawling are enabled by default and can’t be disabled. The Box connector automatically retrieves user identities and ACLs from the connected data sources. This allows Amazon Q Business to filter chat responses based on the end-user’s document access level, so they only see the information they are authorized to view. If you need to index documents without ACLs, you must explicitly mark them as public in your data source. For more information on how the Amazon Q Business connector crawls Box ACLs, refer to How Amazon Q Business connector crawls Box ACLs.

In the Box platform, an administrative user can provision additional user accounts and assign varying permission levels, such as viewer, editor, or co-owner, to files or folders. Fine-grained access is further enhanced through the Amazon Q User Store, which is an Amazon Q data source connector feature that streamlines user and group management across all the data sources attached to your application. This granular permission mapping enables Amazon Q Business to efficiently enforce access controls based on the user’s identity and permissions within the Box environment. For more information on the Amazon Q Business User store, refer to Understanding Amazon Q Business User Store.

Solution overview

In this post, we walk through the steps to configure a Box connector for an Amazon Q Business application. We use an existing Amazon Q application and configure the Box connector to sync data from specific Box folders, map relevant Box fields to the Amazon Q index, initiate the data sync, and then query the ingested Box data using the Amazon Q web experience.

As part of querying the Amazon Q Business application, we cover how to ask natural language questions on documents present in your Box folders and get back relevant results and insights using Amazon Q Business.

Prerequisites

For this walkthrough, you need the following:

Create users in IAM Identity Center

For this post, you need to create three sample users in IAM Identity Center. One user will act as the admin user; the other two will serve as department-specific users. This is to simulate the configuration of user-level access control on distinct folders within your Box account. Make sure to use the same email addresses when creating the users in your Box account.

Complete the following steps to create the users in IAM Identity Center:

  1. On the IAM Identity Center console, choose Users in the navigation pane.
  2. Choose Add user.
  3. For Username, enter a user name. For example, john_doe.
  4. For Password, select Send an email to this user with password setup instructions.
  5. For Email address and Confirm email address, enter your email address.
  6. For First name and Last name, enter John and Doe, respectively. You can also provide your preferred first and last names if necessary.
  7. Keep all other fields as default and choose Next.

  1. On the Add user to groups page, keep everything as default and choose Next.
  2. Verify the details on the Review and add user page, then choose Add user.

The user will get an email containing a link to join IAM Identity Center.

  1. Choose Accept Invitation and set up a password for your user. Remember to note it down for testing the Amazon Q Business application later.
  2. If required by your organization, complete the multi-factor authentication (MFA) setup for this user to enhance security during sign-in.
  3. Confirm that you can log in as the first user using the credentials you created in the previous step.
  4. Repeat the previous steps to create your second department-specific user. Use a different email address for this user. For example, set Username as mary_major, First name as Mary, and Last name as Major. Alternatively, you can use your own values if preferred.
  5. Verify that you can log in as the second user using the credentials you created in the previous step.
  6. Repeat the previous steps to create the third user, who will serve as the admin. Use your Box admin user’s email address for this account, and choose your preferred user name, first name, and last name. For this example, saanvi_sarkar will act as the admin user.
  7. Confirm that you can log in as the admin user using the credentials you created in the previous step.

This concludes the setup of all three users in the IAM Identity Center, each with unique email addresses.
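
If you prefer to script user creation instead of using the console, the same users can be created through the Identity Store API. The following is a minimal sketch; the identity store ID and email addresses are placeholders, and users created this way may still need passwords and MFA configured as described above.

    import boto3

    identitystore = boto3.client("identitystore")
    IDENTITY_STORE_ID = "d-1234567890"  # placeholder: find this in IAM Identity Center settings

    def create_user(user_name, first, last, email):
        """Create one IAM Identity Center user via the Identity Store API."""
        return identitystore.create_user(
            IdentityStoreId=IDENTITY_STORE_ID,
            UserName=user_name,
            DisplayName=f"{first} {last}",
            Name={"GivenName": first, "FamilyName": last},
            Emails=[{"Value": email, "Type": "work", "Primary": True}],
        )

    create_user("john_doe", "John", "Doe", "john_doe@example.com")
    create_user("mary_major", "Mary", "Major", "mary_major@example.com")
    create_user("saanvi_sarkar", "Saanvi", "Sarkar", "box-admin@example.com")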

Create two users in your Box account

For this example, you need two demo users in your Box account in addition to the admin user. Complete the following steps to create these two demo users, using the same email addresses you used when setting up these users in IAM Identity Center:

  1. Log in to your Box Enterprise Admin Console as an admin user.
  2. Choose Users & Groups in the navigation pane.

On the Managed Users tab, the admin user is listed by default.

  1. To create your first department-specific user, choose Add Users, then choose Add Users Manually.

  1. Enter the same name and email address that you used while creating this first department-specific user in IAM Identity Center. For example, use John Doe for Name and his email address for Email. You don’t need to specify groups or folders.
  2. Select the acknowledgement check box to agree to the payment method for adding this new user to your Box account.
  3. Choose Next.

  1. On the Add Users page, choose Complete to agree and add this new user to your Box account.
  2. To create your second department-specific user, choose Add Users, then choose Add Users Manually.
  3. Enter the same name and email address that you used while creating this second department-specific user in IAM Identity Center. For example, use Mary Major for Name and her email address for Email. You don’t need to specify groups or folders.

You now have all three users provisioned in your Box account.
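
Alternatively, the two managed users can be provisioned with the Box Python SDK, assuming an admin-level JWT app such as the one configured in the next section with the Manage users scope. The settings file name and email addresses are placeholders.

    from boxsdk import Client, JWTAuth

    # Assumes a JWT app with the "Manage users" scope, as configured in the
    # next section; the settings file name and emails are placeholders.
    auth = JWTAuth.from_settings_file("amazonq_box_config.json")
    client = Client(auth)

    for name, email in [("John Doe", "john_doe@example.com"),
                        ("Mary Major", "mary_major@example.com")]:
        user = client.create_user(name, login=email)
        print(f"Created Box managed user {user.name} ({user.login})")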

Create a custom Box application for Amazon Q

Before you configure the Box data source connector in Amazon Q Business, you create a custom Box application in your Box account.

Complete the following steps to create an application and configure its authentication method:

  1. Log in to your Box Enterprise Developer Console as an admin user.
  2. Choose My Apps in the navigation pane.
  3. Choose Create New App.
  4. Choose Custom App.

  1. For App name, enter a name for your app. For example, AmazonQConnector.
  2. For Purpose, choose Other.
  3. For Please specify, enter Other.
  4. Leave the other options blank and choose Next.

  1. For Authentication Method, select Server Authentication (with JWT).
  2. Choose Create App.

  1. In My Apps, choose your created app and go to the Configuration tab.
  2. In the App Access Level section, choose App + Enterprise Access.

  1. In the Application Scopes section, select the following permissions:
    1. Write all files and folders stored in Box
    2. Manage users
    3. Manage groups
    4. Manage enterprise properties

  1. In the Advanced Features section, select Make API calls using the as-user header.
  2. In the Add and Manage Public Keys section, choose Generate a Public/Private Keypair.

  1. Complete the two-step verification process and choose OK to download the JSON file to your computer.

  1. Choose Save Changes.
  2. On the Authorization tab, choose Review and Submit.

  1. In the Review App Authorization Submission pop-up, for App description, enter AmazonQConnector and choose Submit.

Your Box Enterprise owner needs to approve the application before you can use it. Complete the following steps to complete the authorization:

  1. Log in to your Box Enterprise Admin Console as the admin user.
  2. Choose Apps in the navigation pane and choose the Custom Apps Manager tab to view the apps that need to be authorized.
  3. Choose the AmazonQConnector app that says Pending Authorization.
  4. Choose the options menu (three dots) and choose Authorize App.

  1. Choose Authorize in the Authorize App pop-up.

This will authorize your AmazonQConnector application and change the status to Authorized.

You can review the downloaded JSON file in your computer’s downloads directory. It contains the client ID, client secret, public key ID, private key, passphrase, and enterprise ID, which you’ll need when creating the Box data source in a later step.

Add sample documents to your Box account

In this step, upload sample documents to your Box account. Later, you use the Amazon Q Box data source connector to crawl and index these documents.

  1. Download the zip file to your computer.
  2. Extract the files to a folder called AWS_Whitepapers.

  1. Log in to your Box Enterprise account as an admin user.
  2. Upload the AWS_Whitepapers folder to your Box account.

At the time of writing, this folder contains 6 folders and 60 files within them.

Set user-specific permissions on folders in your Box account

In this step, you set up user-level access control for two users on two separate folders in your Box account.

For this ACL simulation, consider the two department-specific users created earlier. Assume John is part of the machine learning (ML) team, so he needs access only to the Machine_Learning folder contents, whereas Mary belongs to the database team, so she needs access only to the Databases folder contents.

Log in to your Box account as an admin and grant viewer access to each user for their respective folders, as shown in the following screenshots. This restricts them to see only their assigned folder’s contents.

The Machine_Learning folder is accessible to the owner and user John Doe only.

The Databases folder is accessible to the owner and user Mary Major only.

Configure the Box connector for your Amazon Q Business application

Complete the following steps to configure your Box connector for Amazon Q Business:

  1. On the Amazon Q Business console, choose Applications in the navigation pane.
  2. Select the application you want to add the Box connector to.
  3. On the Actions menu, choose Edit.

  1. On the Update application page, leave all values unchanged and choose Update.

  1. On the Update retriever page, leave all values unchanged and choose Next.

  1. On the Connect data sources page, on the All tab, search for Box.
  2. Choose the plus sign next to the Box connector.

  1. On the Add data source page, for Data source name, enter a name, for example, box-data-source.
  2. Open the JSON file you downloaded from the Box Developer Console.

The file contains values for clientID, clientSecret, publicKeyID, privateKey, passphrase, and enterpriseID.

  1. In the Source section, for Box enterprise ID, enter the value of the enterpriseID key from the JSON file.

  1. For Authorization, no change is needed because by default the ACLs are set to ON for the Box data source connector.
  2. In the Authentication section, under AWS Secrets Manager secret, choose Create and add a new secret.
  3. For Secret name, enter a name for the secret, for example, connector. The prefix QBusiness-Box- is automatically added for you.
  4. For the remaining fields, enter the corresponding values from the downloaded JSON file.
  5. Choose Save to add the secret.
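
If you prefer to create this secret programmatically instead of through the console, the following boto3 sketch shows the general approach. The secret key names shown are assumptions for illustration; confirm the exact field names the Box connector expects in the Amazon Q Business documentation.

    import json
    import boto3

    secretsmanager = boto3.client("secretsmanager")

    # Values come from the JSON file downloaded from the Box Developer Console.
    # The key names below are assumptions for illustration.
    secretsmanager.create_secret(
        Name="QBusiness-Box-connector",
        SecretString=json.dumps({
            "clientId": "<BOX_CLIENT_ID>",
            "clientSecret": "<BOX_CLIENT_SECRET>",
            "publicKeyId": "<BOX_PUBLIC_KEY_ID>",
            "privateKey": "<BOX_PRIVATE_KEY>",
            "passphrase": "<BOX_PASSPHRASE>",
        }),
    )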

  1. In the Configure VPC and Security group section, use the default setting (No VPC) for this post.
  2. Identity crawling is enabled by default, so no changes are necessary.

  1. In the IAM role section, choose Create a new role (Recommended) and enter a role name, for example, box-role.

For more information on the required permissions to include in the IAM role, see IAM roles for data sources.

  1. In the Sync scope section, in addition to file contents, you can include Box web links, comments, and tasks to your index. Use the default setting (unchecked) for this post.
  2. In the Additional configuration section, you can choose to include or exclude regular expression (regex) patterns. These regex patterns can be applied based on the file name, file type, or file path. For this demo, we skip the regex patterns configuration.

  1. In the Sync mode section, select New, modified, or deleted content sync.
  2. In the Sync run schedule section, choose Run on demand.

  1. In the Field Mappings section, keep the default settings.

After you complete the retriever creation, you can modify field mappings and add custom field attributes. You can access field mapping by editing the data source.

  1. Choose Add data source and wait for the retriever to get created.

It can take a few seconds for the required roles and the connector to be created.

After the data source is created, you’re redirected to the Connect data sources page to add more data sources as needed.

  1. For this walkthrough, choose Next.
  2. In the Update groups and users section, choose Add groups and users to add the groups and users from IAM Identity Center set up by your administrator.

  1. In the Add or assign users and groups pop-up, select Assign existing users and groups to add existing users configured in your connected IAM Identity Center and choose Next.

Optionally, if you have permissions to add users to connected IAM Identity Center, you can select Add new users.

  1. On the Assign users and groups page, choose Get Started.
  2. In the search box, enter John Doe and choose his user name.

  1. Add the second user, Mary Major, by entering her name in the search box.

  1. Optionally, you can add the admin user to this application.
  2. Choose Assign to add these users to this Amazon Q app.
  3. In the Groups and users section, choose the Users tab, where you will see no subscriptions configured currently.
  4. Choose Manage access and subscriptions to configure the subscription.

  1. On the Manage access and subscriptions page, choose the Users tab.
  2. Select your users.
  3. Choose Change subscription and choose Update subscription tier.

  1. On the Confirm subscription change page, for New subscription, choose Business Pro.
  2. Choose Confirm.

  1. Verify the changed subscription for all three users, then choose Done.

  1. Choose Update application to complete adding and setting up the Box data connector for Amazon Q Business.

Configure Box field mappings

To help you structure data for retrieval and chat filtering, Amazon Q Business crawls data source document attributes or metadata and maps them to fields in your Amazon Q index. Amazon Q has reserved fields that it uses when querying your application. When possible, Amazon Q automatically maps these built-in fields to attributes in your data source.

If a built-in field doesn’t have a default mapping, or if you want to map additional index fields, use the custom field mappings to specify how a data source attribute maps to your Amazon Q application.

  1. On the Amazon Q Business console, choose your application.
  2. Under Data sources, select your data source.
  3. On the Actions menu, choose Edit.

  1. In the Field mappings section, select the available fields you want to crawl under Files and folders, Comments, Tasks, and Web Links, and choose Update.

When you want to select all items, navigate through each page by choosing the page numbers and choose Select All on every page so that every mapped item is included.

Index sample documents from the Box account

The Box connector setup for Amazon Q is now complete. Because you configured the data source sync schedule to run on demand, you need to start it manually.

In the Data sources section, choose the data source box-data-source and choose Sync now.

The Current sync state changes to Syncing – crawling, then to Syncing – indexing.

After a few minutes, the Current sync state changes to Idle, the Last sync status changes to Successful, and the Sync run history section shows more details, including the number of documents added.
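
You can also start and monitor the sync programmatically. The following is a minimal boto3 sketch; the operation names, response fields, and status values are assumptions based on the Amazon Q Business API, so confirm them against the current boto3 documentation and replace the placeholder IDs with the values from your environment.

import time

import boto3

qbusiness = boto3.client("qbusiness")

# Placeholder identifiers; copy the real values from the Amazon Q Business console.
APP_ID = "your-application-id"
INDEX_ID = "your-index-id"
DATA_SOURCE_ID = "your-box-data-source-id"

# Kick off an on-demand sync of the Box data source.
qbusiness.start_data_source_sync_job(
    applicationId=APP_ID, indexId=INDEX_ID, dataSourceId=DATA_SOURCE_ID
)

# Poll the sync history until the most recent job is no longer syncing.
while True:
    history = qbusiness.list_data_source_sync_jobs(
        applicationId=APP_ID, indexId=INDEX_ID, dataSourceId=DATA_SOURCE_ID
    )["history"]
    latest = history[0] if history else None  # ordering may vary; check timestamps if needed
    if latest and latest["status"] not in ("SYNCING", "SYNCING_INDEXING"):
        print("Sync finished with status:", latest["status"])
        break
    time.sleep(30)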

As shown in the following screenshot, Amazon Q has successfully scanned and added all 60 files from the AWS_Whitepapers Box folder.

Query Box data using the Amazon Q web experience

Now that the data synchronization is complete, you can start exploring insights from Amazon Q. In the newly created Amazon Q application, choose Customize web experience to open a new tab with a preview of the UI and options to customize according to your needs.

You can customize the Title, Subtitle, and Welcome message as needed, which will be reflected in the UI.

For this walkthrough, we use the defaults and choose View web experience to be redirected to the login page for the Amazon Q application.

  1. Log in to the application as your first department-specific user, John Doe, using the credentials for the user that were added to the Amazon Q application.

When the login is successful, you’ll be redirected to the Amazon Q assistant UI, where you can start asking questions using natural language and get insights from your Box index.

  1. Enter a prompt in the Amazon Q Business AI assistant at the bottom, such as “What AWS AI/ML service can I use to convert text from one language to another?” Press Enter or choose the arrow icon to generate the response. You can also try your own prompts.

Because John Doe has access to the Machine_Learning folder, Amazon Q Business successfully processed his ML-related query and displayed the response. You can choose Sources to view the source files that contributed to the response and verify where the information came from.

  1. Let’s attempt a different prompt related to the Databases folder, which John doesn’t have access to. Enter the prompt “How to reduce the amount of read traffic and connections to my Amazon RDS database?” or choose your own database-related prompt. Press Enter or choose the arrow icon to generate the response.

As anticipated, the Amazon Q Business application responds that it couldn’t generate an answer from the documents John can access, because he lacks access to the Databases folder.

  1. Go back to the Amazon Q Business Applications page and choose your application again.
  2. This time, open the web experience URL in private mode to initiate a new session, avoiding interference with the previous session.
  3. Log in as Mary Major, the second department-specific user. Use her user name, password, and any MFA you set up initially.
  4. Enter a prompt in the Amazon Q Business AI assistant at the bottom, such as “How to reduce the amount of read traffic and connections to my Amazon RDS database?” Press Enter or choose the arrow icon to generate the response. You can also try your own prompts.

Because Mary has access to the Databases folder, Amazon Q Business successfully processed her database-related query and displayed the response. You can choose Sources to view the source files that contributed to generating the response.

  1. Now, let’s attempt a prompt that requires information from the Machine_Learning folder, which Mary isn’t authorized to access. Enter the prompt “What AWS AI/ML service can I use to convert text from one language to another?” or choose your own ML-related prompt.

As anticipated, the Amazon Q Business application will indicate it couldn’t generate a response because Mary lacks access to the Machine_Learning folder.

The preceding test scenarios illustrate the functionality of the Amazon Q Box connector in crawling and indexing documents along with their associated ACLs. With this mechanism, only users with the relevant permissions can access the respective folders and files within the linked Box account.
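
If you want to validate the same ACL behavior outside the web experience, the following is a minimal sketch using the ChatSync API through boto3. It assumes an application where the end-user identity can be passed directly; applications federated through IAM Identity Center instead require identity-aware credentials for the calling user, and the parameter and response field names should be confirmed against the current API reference.

import boto3

qbusiness = boto3.client("qbusiness")

APP_ID = "your-application-id"  # placeholder

QUESTION = (
    "What AWS AI/ML service can I use to convert text from one language to another?"
)

# Ask the same question as each user and compare the answers. Only the user
# with access to the Machine_Learning folder should get a grounded response.
for user in ("john_doe@example.com", "mary_major@example.com"):
    response = qbusiness.chat_sync(
        applicationId=APP_ID,
        userId=user,  # assumption: only valid when the identity can be passed directly
        userMessage=QUESTION,
    )
    print(user, "->", response["systemMessage"][:200])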

Congratulations! You’ve used Amazon Q to surface answers and insights from the content indexed from your Box account.

Frequently asked questions

In this section, we provide guidance to frequently asked questions.

Amazon Q Business is unable to answer your questions

If you get the response “Sorry, I could not find relevant information to complete your request,” this may be due to a few reasons:

  • No permissions – ACLs applied to your Box account don’t allow you to query certain data sources. If this is the case, reach out to your application administrator to make sure your ACLs are configured to access the data sources.
  • Data connector sync failed – Your data connector may have failed to sync information from the source to the Amazon Q Business application. Verify the data connector’s sync run schedule and sync history to confirm the sync is successful.
  • Incorrect regex pattern – Validate the correct definition of the regex include or exclude pattern when setting up the Box data source.

If none of these reasons apply to your use case, open a support case and work with your technical account manager to get this resolved.

How to generate responses from authoritative data sources

If you want Amazon Q Business to only generate responses from authoritative data sources, the use of guardrails can be highly beneficial. Within the application settings, you can specify the authorized data repositories, such as content management systems and knowledge bases, from which the assistant is permitted to retrieve and synthesize information. By defining these approved data sources as guardrails, you can instruct Amazon Q Business to only use reliable, up-to-date, and trustworthy information, eliminating the risk of incorporating data from non-authoritative or potentially unreliable sources.

Additionally, Amazon Q Business offers the capability to define content filters as part of Guardrails for Amazon Bedrock. These filters can specify the types of content, topics, or keywords deemed appropriate and aligned with your organization’s policies and standards. By incorporating these content-based guardrails, you can further refine the assistant’s responses to make sure they align with your authoritative information and messaging. The integration of Amazon Q Business with IAM Identity Center also serves as a critical guardrail, allowing you to validate user identities and align ACLs to make sure end-users only receive responses based on their authorized data access.

Amazon Q Business responds using old (stale) data even though your data source is updated

If you find that Amazon Q Business is responding with outdated or stale data, you can use the relevance tuning and boosting features to surface the latest documents. The relevance tuning functionality allows you to adjust the weightings assigned to various document attributes, such as recency, to prioritize the most recent information. Boosting can also be used to explicitly elevate the ranking of the latest documents, making sure they are prominently displayed in the assistant’s responses. For more information on relevance tuning, refer to Boosting chat responses using relevance tuning.

Additionally, it’s important to review the sync schedule and status for your data connectors. Verifying the sync frequency and the last successful sync run can help identify any issues with data freshness. Adjusting the sync schedule or running manual syncs, as needed, can help keep the data up to date and improve the relevance of the Amazon Q Business responses. For more information, refer to Sync run schedule.

Clean up

To prevent incurring additional costs, clean up the resources created while implementing this solution. Deleting the Amazon Q application also removes the associated index and data connectors, but the IAM roles and secrets created during setup must be removed separately; leaving them in place can result in ongoing charges.

Complete the following steps to delete the Amazon Q application, secret, and IAM role (a scripted version is sketched after the list):

  1. On the Amazon Q Business console, select the application that you created.
  2. On the Actions menu, choose Delete and confirm the deletion.
  3. On the Secrets Manager console, select the secret that was created for the Box connector.
  4. On the Actions menu, choose Delete.
  5. Select the waiting period as 7 days and choose Schedule deletion.
  6. On the IAM console, select the role that was created during the Amazon Q application creation.
  7. Choose Delete and confirm the deletion.
  8. Delete the AWS_Whitepapers folder and its contents from your Box account.
  9. Delete the two demo users that you created in your Box Enterprise account.
  10. On the IAM Identity Center console, choose Users in the navigation pane.
  11. Select the three demo users that you created and choose Delete users to remove these users.
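
If you scripted any of the earlier steps, you can also script the cleanup. The following is a minimal boto3 sketch using the placeholder names from this post; note that the IAM role’s policies must be removed before the role can be deleted.

import boto3

qbusiness = boto3.client("qbusiness")
secretsmanager = boto3.client("secretsmanager")
iam = boto3.client("iam")

# Placeholder identifiers; replace with the values from your environment.
APP_ID = "your-application-id"
SECRET_NAME = "QBusiness-Box-connector"
ROLE_NAME = "box-role"

# Deleting the application also removes its index and data connectors.
qbusiness.delete_application(applicationId=APP_ID)

# Schedule the connector secret for deletion after a 7-day waiting period.
secretsmanager.delete_secret(SecretId=SECRET_NAME, RecoveryWindowInDays=7)

# Remove attached and inline policies, then delete the connector role.
for policy in iam.list_attached_role_policies(RoleName=ROLE_NAME)["AttachedPolicies"]:
    iam.detach_role_policy(RoleName=ROLE_NAME, PolicyArn=policy["PolicyArn"])
for policy_name in iam.list_role_policies(RoleName=ROLE_NAME)["PolicyNames"]:
    iam.delete_role_policy(RoleName=ROLE_NAME, PolicyName=policy_name)
iam.delete_role(RoleName=ROLE_NAME)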

Conclusion

The Amazon Q Box connector allows organizations to seamlessly integrate their Box files into the powerful generative AI capabilities of Amazon Q. By following the steps outlined in this post, you can quickly configure the Box connector as a data source for Amazon Q and initiate synchronization of your Box information. The native field mapping options enable you to customize exactly which Box data to include in Amazon Q’s index.

Amazon Q can serve as a powerful assistant capable of providing rich insights and summaries about your Box files directly from natural language queries.

The Amazon Q Box integration represents a valuable tool for software teams to gain AI-driven visibility into their organization’s document repository. By bridging Box’s industry-leading content management with Amazon’s cutting-edge generative AI, teams can drive productivity, make better informed decisions, and unlock deeper insights into their organization’s knowledge base. As generative AI continues advancing, integrations like this will become critical for organizations aiming to deliver streamlined, data-driven software development lifecycles.

To learn more about the Amazon Q connector for Box, refer to Connecting Box to Amazon Q.


About the Author

Maran Chandrasekaran is a Senior Solutions Architect at Amazon Web Services, working with our enterprise customers. Outside of work, he loves to travel and ride his motorcycle in Texas Hill Country.

Senthil Kamala Rathinam is a Solutions Architect at Amazon Web Services specializing in data and analytics. He is passionate about helping customers design and build modern data platforms. In his free time, Senthil loves to spend time with his family and play badminton.

Vijai Gandikota is a Principal Product Manager in the Amazon Q and Amazon Kendra organization of Amazon Web Services. He is responsible for the Amazon Q and Amazon Kendra connectors, ingestion, security, and other aspects of the Amazon Q and Amazon Kendra services.

Read More

How Twilio generated SQL using Looker Modeling Language data with Amazon Bedrock

How Twilio generated SQL using Looker Modeling Language data with Amazon Bedrock

This post is co-written with Aishwarya Gupta, Apurva Gawad, and Oliver Cody from Twilio.

Today’s leading companies trust Twilio’s Customer Engagement Platform (CEP) to build direct, personalized relationships with their customers everywhere in the world. Twilio enables companies to use communications and data to add intelligence and security to every step of the customer journey, from sales and marketing to growth, customer service, and many more engagement use cases in a flexible, programmatic way. Across 180 countries, millions of developers and hundreds of thousands of businesses use Twilio to create personalized experiences for their customers. As one of the largest AWS customers, Twilio engages with data, artificial intelligence (AI), and machine learning (ML) services to run their daily workloads.

Data is the foundational layer for all generative AI and ML applications. Managing and retrieving the right information can be complex, especially for data analysts working with large data lakes and complex SQL queries. To address this, Twilio partnered with AWS to develop a virtual assistant that helps their data analysts find and retrieve relevant data from Twilio’s data lake by converting user questions asked in natural language to SQL queries. This virtual assistant tool uses Amazon Bedrock, a fully managed generative AI service that provides access to high-performing foundation models (FMs) and capabilities like Retrieval Augmented Generation (RAG). RAG optimizes language model outputs by extending the models’ capabilities to specific domains or an organization’s internal data for tailored responses.

This post highlights how Twilio enabled natural language-driven data exploration of business intelligence (BI) data with RAG and Amazon Bedrock.

Twilio’s use case

Twilio wanted to provide an AI assistant to help their data analysts find data in their data lake. They used the metadata layer (schema information) over their data lake consisting of views (tables) and models (relationships) from their data reporting tool, Looker, as the source of truth. Looker is an enterprise platform for BI and data applications that helps data analysts explore and share insights in real time.

Twilio implemented RAG using Anthropic Claude 3 on Amazon Bedrock to develop a virtual assistant tool called AskData for their data analysts. This tool converts questions from data analysts asked in natural language (such as “Which table contains customer address information?”) into a SQL query using the schema information available in Looker Modeling Language (LookML) models and views. The analysts can run this generated SQL directly, saving them the time to first identify the tables containing relevant information and then write a SQL query to retrieve the information.

The AskData tool provides ease of use and efficiency to its users:

  • Users need accurate information about the data in a quick and accessible manner to make business decisions. Providing a tool to minimize their time spent finding tables and writing SQL queries allows them to focus more on business outcomes and less on logistical tasks.
  • Users typically reach out to the engineering support channel when they have questions about data that is deeply embedded in the data lake or if they can’t access it using various queries. Having an AI assistant can reduce the engineering time spent in responding to these queries and provide answers more quickly.

Solution overview

In this post, we show you a step-by-step implementation and design of the AskData tool designed to serve as an AI assistant for Twilio’s data analysts. We discuss the following:

  • How to use a RAG approach to retrieve the relevant LookML metadata corresponding to users’ questions with the help of efficient data chunking and indexing and generate SQL queries from natural language
  • How to select the optimal large language model (LLM) for your use case from Amazon Bedrock
  • How analysts can query the data using natural language questions
  • The benefits of using RAG for data analysis, including increased productivity and reduced engineering overhead of finding the data (tables) and writing SQL queries.

This solution uses Amazon Bedrock, Amazon Relational Database Service (Amazon RDS), Amazon DynamoDB, and Amazon Simple Storage Service (Amazon S3). The following diagram illustrates the solution architecture.

The workflow consists of the following steps:

  1. An end-user (data analyst) asks a question in natural language about the data that resides within a data lake.
  2. The question is combined with metadata (schema information) stored in Amazon RDS and the conversation history stored in DynamoDB to personalize retrieval for the user’s questions:
    • The RDS database (PostgreSQL with pgvector) stores the LookML tables and views as embeddings that are retrieved through a vector similarity search.
    • The DynamoDB table stores the previous conversation history with this user.
  3. The context and the natural language question are passed to an FM on Amazon Bedrock (in this case, Anthropic Claude 3 Haiku), which responds with a personalized SQL query that the user can run to retrieve accurate information from the data lake. The following is the prompt template used for generating the SQL query:
Human: The context information below represents the LookML data for Looker views and models. 
Using this context data, please generate a presto SQL query that will return the correct result for the user's question. 
Please provide a SQL query with the correct syntax, table names, and column names based on the provided LookML data.

<instructions>

1. Use the correct underlying SQL table names (table name in sql_table_name) 
and column names (use column names from the dimensions of the view as they are the correct column names). 
Use the following as an example:

{{example redacted}}

2. Join tables as necessary to get the correct result. 
- Avoid unnecessary joins if not explicitly requested by the user.

3. Avoid unnecessary filters if not explicitly requested by the user.

4. If the view has a derived table, use the derived query to answer question 
using table names and column names from derived query. Use the following as an example:

{{example redacted}}

5. The schema name is represented as <schema>.<table_name> within the LookML views. 
Use the existing schema name or "public" as the schema name if no schema is specified.

</instructions>

This is the chat history from previous messages:

<chat_history>

{chat_history}

</chat_history>

<context>

{context}

</context>

This is the user question:

<question>

{question}

</question>

Assistant: Here is a SQL query for the user question:
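
The following is a minimal sketch of how this template might be filled in and sent to Anthropic Claude 3 Haiku on Amazon Bedrock through boto3. The model ID, request shape, and low temperature (discussed later in this post) are assumptions based on the Bedrock Messages API rather than Twilio’s production code, and the template is loaded from an illustrative file path.

import json

import boto3

bedrock = boto3.client("bedrock-runtime")

# Body of the template shown above, with {chat_history}, {context}, and
# {question} placeholders. The "Human:"/"Assistant:" markers are expressed
# as message roles below instead of raw text.
PROMPT_BODY = open("lookml_sql_prompt.txt").read()  # illustrative path


def generate_sql(context: str, chat_history: str, question: str) -> str:
    """Fill the prompt template and ask Claude 3 Haiku for a SQL query."""
    request = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "temperature": 0.1,  # low temperature to limit overly creative SQL
        "messages": [
            {
                "role": "user",
                "content": PROMPT_BODY.format(
                    chat_history=chat_history, context=context, question=question
                ),
            },
            # Prefill the assistant turn so the model continues with the SQL query.
            {"role": "assistant", "content": "Here is a SQL query for the user question:"},
        ],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps(request),
    )
    return json.loads(response["body"].read())["content"][0]["text"]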

The solution comprises four main steps:

  1. Use semantic search on LookML metadata to retrieve the relevant tables and views corresponding to the user questions.
  2. Use FMs on Amazon Bedrock to generate accurate SQL queries based on the retrieved table and view information.
  3. Create a simple web application using LangChain and Streamlit.
  4. Refine the application using methods such as prompt engineering, tuning inference parameters, and cleaning up the LookML content.

Prerequisites

To implement the solution, you should have an AWS account, model access to your choice of FM on Amazon Bedrock, and familiarity with DynamoDB, Amazon RDS, and Amazon S3.

Access to Amazon Bedrock FMs isn’t granted by default. To gain access to an FM, an AWS Identity and Access Management (IAM) user with sufficient permissions needs to request access to it through the Amazon Bedrock console. After access is granted to a model, it is available to all users in the account.

To manage model access, choose Model access in the navigation pane on the Amazon Bedrock console. The model access page lets you view a list of available models, the output modality of the model, whether you have been granted access to it, and the End User License Agreement (EULA). You should review the EULA for terms and conditions of using a model before requesting access to it. For information about model pricing, refer to Amazon Bedrock pricing.

Model access

Structure and index the data

In this solution, we use the RAG approach to retrieve the relevant schema information from LookML metadata corresponding to users’ questions and then generate a SQL query using this information.

This solution uses two separate collections that are created in our vector store: one for Looker views and another for Looker models. We used the sentence-transformers/all-mpnet-base-v2 model for creating vector embeddings and PostgreSQL with pgvector as our vector database. As long as the LookML file doesn’t exceed the context window of the LLM used to generate the final response, we don’t split the file into chunks and instead pass the file in its entirety to the embeddings model. The vector similarity search is able to find the correct files that contain the LookML tables and views relevant to the user’s question. We can pass the entire LookML file contents to the LLM, taking advantage of its large context window, and the LLM is able to pick the schemas for the relevant tables and views to generate the SQL query.
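
The following is a minimal sketch of that ingestion step, assuming a pgvector-enabled PostgreSQL database with one table per collection; the table layout, connection string, and file locations are illustrative rather than Twilio’s actual schema.

import glob

import psycopg2
from sentence_transformers import SentenceTransformer

# all-mpnet-base-v2 produces 768-dimensional embeddings.
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

conn = psycopg2.connect("postgresql://user:password@host:5432/askdata")  # placeholder DSN
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector")

# One collection (table) per LookML subset: views and models.
for table, pattern in [("lookml_views", "lookml/views/*.lkml"),
                       ("lookml_models", "lookml/models/*.lkml")]:
    cur.execute(
        f"CREATE TABLE IF NOT EXISTS {table} "
        "(file_name TEXT PRIMARY KEY, content TEXT, embedding vector(768))"
    )
    for path in glob.glob(pattern):
        content = open(path).read()
        # Each LookML file is embedded whole, without chunking, as long as it
        # fits within the LLM's context window.
        embedding = str(model.encode(content).tolist())
        cur.execute(
            f"INSERT INTO {table} (file_name, content, embedding) "
            "VALUES (%s, %s, %s::vector) ON CONFLICT (file_name) DO NOTHING",
            (path, content, embedding),
        )

conn.commit()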

The two subsets of LookML metadata provide distinct types of information about the data lake. Views represent individual tables, and models define the relationships between those tables. By separating these components, we can first retrieve the relevant views based on the user’s question, and then use those results to identify the associated models that capture the relationships between the retrieved views.

This two-step procedure provides a more comprehensive understanding of the relevant tables and their relationships to the user question. The following diagram shows how both subsets of metadata are chunked and stored as embeddings in different vectors for enhanced retrieval. The LookML view and model information is brought into Amazon S3 through a separate data pipeline (not shown).

Content ingestion into vector db
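
A sketch of that two-step retrieval against the same illustrative tables follows. How the retrieved views are used to look up the related models is an assumption about the approach, and the <=> operator is pgvector’s cosine-distance operator.

def retrieve_context(cur, model, question: str, top_k: int = 5) -> str:
    """Retrieve the most relevant LookML views, then their related models."""
    question_embedding = str(model.encode(question).tolist())

    # Step 1: find the views most similar to the user's question.
    cur.execute(
        "SELECT content FROM lookml_views "
        "ORDER BY embedding <=> %s::vector LIMIT %s",
        (question_embedding, top_k),
    )
    views = [row[0] for row in cur.fetchall()]

    # Step 2: use the retrieved view content to find the models that capture
    # the relationships between those views.
    views_embedding = str(model.encode("\n".join(views)).tolist())
    cur.execute(
        "SELECT content FROM lookml_models "
        "ORDER BY embedding <=> %s::vector LIMIT %s",
        (views_embedding, top_k),
    )
    models = [row[0] for row in cur.fetchall()]

    return "\n\n".join(views + models)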

Select the optimal LLM for your use case

Selecting the right LLM for any use case is essential. Every use case has different requirements for context length, token size, and the ability to handle various tasks like summarization, task completion, chatbot applications, and so on. Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon within a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

This solution is implemented using Anthropic Claude 3, available through Amazon Bedrock. Anthropic Claude 3 is chosen for two main reasons:

  • Increased context window – Anthropic Claude 3 can handle up to 200,000 tokens in its context, allowing for processing larger LookML queries and tables. This expanded capacity is crucial when dealing with complex or extensive data, so the LLM has access to the necessary information for accurate and informed responses to the user.
  • Enhanced reasoning abilities – Anthropic Claude 3 demonstrates enhanced performance when working with larger contexts, enabling it to better understand and respond to user queries that require a deeper comprehension of the views, models, and their relationships. You can gain granular control over the reasoning capabilities using several prompt engineering techniques.

Build a web application

This solution uses LangChain and Streamlit to build a web application and integrate Amazon Bedrock into it. LangChain is a framework specifically designed to simplify the creation of applications using LLMs, and it’s straightforward to use Amazon Bedrock through LangChain using the Amazon Bedrock component available in LangChain. We use Streamlit to develop the frontend for this web application.

So that data analysts can easily interact with the assistant and get SQL queries that extract relevant data from their data lake, this solution implements a chat engine using the ConversationalRetrievalChain mechanism, which lets you pass a custom vector store retriever, prompt, and conversation history to the LLM and generate personalized answers to user questions. To store the chat history, we use DynamoDB with the user session ID as the primary key. DynamoDB is a highly scalable and durable NoSQL database service, enabling you to efficiently store and retrieve chat histories for multiple user sessions concurrently. The following screenshot shows an example of the chat interface developed using Streamlit.

AskData user interface
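
The following is a minimal sketch of that chat engine wiring, assuming LangChain’s Amazon Bedrock, PGVector, and DynamoDB integrations; the class names come from the langchain, langchain_aws, and langchain_community packages, and the connection details, table names, and model ID are placeholders.

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import PGVector


def build_chat_engine(session_id: str) -> ConversationalRetrievalChain:
    # LLM: Anthropic Claude 3 Haiku on Amazon Bedrock with a low temperature.
    llm = ChatBedrock(
        model_id="anthropic.claude-3-haiku-20240307-v1:0",
        model_kwargs={"temperature": 0.1, "max_tokens": 1024},
    )

    # Retriever over the LookML embeddings stored in PostgreSQL with pgvector.
    vectorstore = PGVector(
        connection_string="postgresql+psycopg2://user:password@host:5432/askdata",
        collection_name="lookml_views",
        embedding_function=HuggingFaceEmbeddings(
            model_name="sentence-transformers/all-mpnet-base-v2"
        ),
    )

    # Per-session chat history persisted in DynamoDB, keyed by session ID.
    history = DynamoDBChatMessageHistory(
        table_name="askdata-chat-history", session_id=session_id
    )
    memory = ConversationBufferMemory(
        memory_key="chat_history", chat_memory=history, return_messages=True
    )

    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(search_kwargs={"k": 5}),
        memory=memory,
    )


# Example usage inside the Streamlit frontend:
# chain = build_chat_engine(session_id=st.session_state["session_id"])
# answer = chain.invoke({"question": user_question})["answer"]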

The following is an example scenario to get an idea of the user workflow when interacting with AskData:

User Question - how do I find a customer address?

AskData Response - Based on the provided LookML data, 
you can find the customer address by querying the raw_mailing_addresses table, 
which contains address-related information. 
Here is the SQL query to retrieve the customer address:

SELECT
    account,
    address1,
    address2,
    city,
    province,
    postal_code,
    iso_country
FROM
    raw_mailing_addresses

This query will return the account SID (which can be used to identify the customer), the first line of the address, the second line of the address (if any), the city, the province or state, the postal code, and the ISO country code for all entries in the raw_mailing_addresses table. If you have a specific customer’s account and you want to retrieve the address for that customer, you can add a WHERE clause to the query:

SELECT
    account,
    address1,
    address2,
    city,
    province,
    postal_code,
    iso_country
FROM
    raw_mailing_addresses
WHERE
    account = 'YourCustomerAccount' -- Replace with the actual account

Replace 'YourCustomerAccount' with the actual account of the customer whose address you want to find.

Optimize the application

Although using an LLM to answer user questions about data is efficient, it comes with recognized limitations, such as the tendency of the LLM to generate inaccurate responses, often due to hallucinated information. To enhance the accuracy of our application and reduce hallucinations, we did the following:

  • Set the temperature for the LLM to 0.1 to reduce the LLM’s propensity for overly creative responses.
  • Added instructions in the prompt to only generate the SQL query based on the context (schema, chat history) being provided in the prompt.
  • Meticulously removed duplicate and redundant entries from the LookML data before it was ingested into the vector database.
  • Added a user experience feedback (a rating from 1–5 with an optional text input for comments) as part of the UI of AskData. We used the feedback to improve the quality of our data, prompts, and inference parameter settings.

Based on user feedback, the application achieved a net promoter score (NPS) of 40, surpassing the initial target of 35. We set this target with a few key factors in mind: some user questions lack relevant information in the LookML data, some rules about the structure of SQL queries might still need to be added, and the LLM will occasionally make a mistake despite all the measures we put in place.

Conclusion

In this post, we illustrated how to use generative AI to significantly enhance the efficiency of data analysts. By using LookML as metadata for our data lake, we constructed vector stores for views (tables) and models (relationships). With the RAG framework, we efficiently retrieved pertinent information from these stores and provided it as context to the LLM alongside user queries and any previous chat history. The LLM then seamlessly generated SQL queries in response.

Our development process was streamlined thanks to various AWS services, particularly Amazon Bedrock, which facilitated the integration of LLM for query responses, and Amazon RDS, serving as our vector stores.

To learn more, get started with Amazon Bedrock today, and leave your feedback and questions in the comments section.


About the Authors

Apurva Gawad is a Senior Data Engineer at Twilio specializing in building scalable systems for data ingestion and empowering business teams to derive valuable insights from data. She has a keen interest in AI exploration, blending technical expertise with a passion for innovation. Outside of work, she enjoys traveling to new places, always seeking fresh experiences and perspectives.

Aishwarya Gupta is a Senior Data Engineer at Twilio focused on building data systems to empower business teams to derive insights. She enjoys to travel and explore new places, foods, and culture.

Oliver Cody is a Senior Data Engineering Manager at Twilio with over 28 years of professional experience, leading multidisciplinary teams across EMEA, NAMER, and India. His experience spans all things data across various domains and sectors. He has focused on developing innovative data solutions, significantly optimizing performance and reducing costs.

Amit Arora is an AI and ML specialist architect at Amazon Web Services, helping enterprise customers use cloud-based machine learning services to rapidly scale their innovations. He is also an adjunct lecturer in the MS data science and analytics program at Georgetown University in Washington D.C.

Johnny Chivers is a Senior Solutions Architect working within the Strategic Accounts team at AWS. With over 10 years of experience helping customers adopt new technologies, he guides them through architecting end-to-end solutions spanning infrastructure, big data, and AI.

Read More

Figure Unveils Next-Gen Conversational Humanoid Robot With 3x AI Computing for Fully Autonomous Tasks

Figure Unveils Next-Gen Conversational Humanoid Robot With 3x AI Computing for Fully Autonomous Tasks

Silicon Valley’s Figure has taken the wraps off its next-generation Figure 02 conversational humanoid robot, which taps into NVIDIA Omniverse and NVIDIA GPUs for fully autonomous tasks.

Figure said it recently tested Figure 02 for data collection and use-case training at BMW Group’s Spartanburg, South Carolina, production line.

Figure 02 comes just 10 months after Figure launched the first version of its general-purpose humanoid robot. The company has accelerated its development timeline using NVIDIA Isaac Sim — a reference application built on the NVIDIA Omniverse platform — to design, train and test AI-based robots using synthetic data, as well as NVIDIA GPUs to train generative AI models.

“Our rapid progress, marked by advances in speech, vision, dexterity and computational power, brings us closer to delivering humanoid robots to address labor shortages for many industries,” said Brett Adcock, CEO of Figure.

The company added a second NVIDIA RTX GPU-based module on board Figure 02, which supplies 3x inference gains for handling fully autonomous real-world AI tasks compared with the robot’s first iteration.

Figure aims to commercialize industrial humanoid robots to address labor shortages, and it plans to produce consumer versions.

Founded in 2022, the startup is partnered with OpenAI to develop custom AI models, trained on NVIDIA H100 GPUs, that drive the robots’ conversational AI capabilities. Figure recently raised $675 million in funding from leading technology companies including NVIDIA.

“Developing autonomous humanoid robots requires the fusion of three computers: NVIDIA DGX for AI training, NVIDIA Omniverse for simulation and NVIDIA Jetson in the robot,” said Deepu Talla, vice president of robotics and edge computing at NVIDIA. “Leading companies, including Figure, are tapping into the NVIDIA robotics stack, from edge to cloud, to drive innovation in humanoid robotics.”

Robotic Hands Capable of Handling Real-World Tasks

New human-scale hands, six RGB cameras, and perception AI models trained with synthetic data generated in Isaac Sim enable Figure 02 to perform high-precision pick-and-place tasks required for smart manufacturing applications.

Figure is among the initial members to join the new NVIDIA Humanoid Robot Developer Program, which provides early access to the latest tools and computing technologies for humanoid robot development. This includes the latest releases of NVIDIA Isaac Sim, Isaac Lab, NIM microservices (RoboCasa and MimicGen), OSMO, Jetson Thor and Project GR00T general-purpose humanoid foundation models.

Read More

GeForce NOW Celebrates 2,000 Games in the Cloud

GeForce NOW Celebrates 2,000 Games in the Cloud

This GFN Thursday marks 2,000 games in the GeForce NOW library, with five new games joining this week, alongside a demo for Square Enix’s Visions of Mana and a new reward for members playing Elder Scrolls Online.

From epic role-playing games (RPGs) to heart-pounding shooters, the GeForce NOW library offers a variety of adventures for members to dive into anytime, anywhere.

There’s more to come — the highly anticipated action RPG Black Myth: Wukong from Game Science will soon be available for members to stream when it comes to the cloud at launch on Tuesday, Aug. 20.

Plus, gamers looking to try GeForce NOW can lock in a one- or six-month Priority or Ultimate membership at half price with the limited-time summer sale.

More Choices Than a Buffet

GeForce NOW has achieved the remarkable milestone of over 2,000 games supported in the cloud.

They’re all playable across devices, including PCs, Macs, NVIDIA SHIELD TVs, select Samsung and LG Smart TVs, mobile devices and handheld consoles like the ROG Ally and Steam Deck. Visually stunning games like Cyberpunk 2077 and Alan Wake 2 can be played at max settings — all from the comfort of the couch or while on the go.

2,000 games library on GeForce NOW.
2,000 reasons to game on.

Thanks to collaborations with renowned publishers like Blizzard, Capcom, Epic Games and Square Enix, as well as indie studios including Coffee Stain Studios, Re-Logic and Team 17, GeForce NOW is opening the door for more gamers to enjoy incredible gaming experiences in new ways, streaming across their favorite devices.

Since GeForce NOW supports games that members already own, there’s no need to repurchase titles to enjoy them in the cloud. And features like game library-syncing with Steam and Ubisoft Connect make it seamless to jump into gameplay without the hassle of installation or updates.

2,000 games on GeForce NOW.

The GeForce NOW library continues to expand each week with titles from popular digital stores like Battle.net, Epic Games Store, GOG.com, Steam, Ubisoft Connect and Xbox, and supports over 120 PC Game Pass titles — so gear up and game on.

Khajiit Has Wares … in the Cloud

ESO member reward on GeForce NOW
It’s not just a snack — it’s a statement.

It’s time for a new reward. GeForce NOW members playing Elder Scrolls Online can add a touch of sophistication to their journeys with a free Noble Snack emote, letting their character savor a regal treat in the heart of Tamriel.

Members who’ve opted in to GeForce NOW’s Rewards program can check their email for instructions on how to redeem the reward. Ultimate and Priority members can redeem the reward today, while free members can claim it starting on Friday, Aug. 9. It’s available through Sunday, Sept. 8 — first come, first served.

Here Come the Games

GeForce NOW members can today play a demo of Square Enix’s upcoming action RPG Visions of Mana before it launches in the cloud on Thursday, Aug. 29. Experience snippets of the story, the battles and the game’s focus on elemental powers — hallmarks of the Mana series. Though the title’s bonus items can’t be obtained through the demo in the cloud, members can get the items later by playing the full game when it launches on GeForce NOW.

Members can also look for the following this week:

  • Warhammer 40,000: Speed Freeks (New release on Steam, Aug. 6)
  • Prince of Persia: The Lost Crown (New release on Steam, Aug. 8)
  • Ratten Reich (New release on Steam, Aug. 9)
  • Nine Sols (Steam)
  • Visions of Mana Demo (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Collaborators: AI and the economy with Brendan Lucier and Mert Demirer

Collaborators: AI and the economy with Brendan Lucier and Mert Demirer

Headshots of Brendan Lucier and Mert Demirer for the Microsoft Research Podcast

Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with. 

What can the breakdown of jobs into their specific tasks tell us about the long-term impact of AI on the economy? Microsoft Senior Principal Researcher Brendan Lucier and MIT Assistant Professor Mert Demirer are combining their expertise in micro- and macroeconomics, respectively, to build a framework for answering the question and ultimately helping the world prepare for and responsibly steer the course of disruption accompanying the technology. In this episode, they share how their work fits into the Microsoft research initiative AI, Cognition, and the Economy, or AICE; how the evolution of the internet may indicate the best is yet to come for AI; and their advice for budding AI researchers.

Transcript 

[TEASER] 

[MUSIC PLAYS UNDER DIALOGUE] 

BRENDAN LUCIER: What we’re doing here is a prediction problem. And when we were trying to look into the future this way, one way we do that is we try to get as much information as we can about where we are right now. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now, in this moment. And then the idea is to layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. 

MERT DEMIRER: So this is a prediction problem that we cannot use machine learning, AI. Otherwise, it would have been a very easy problem to solve. So what you need instead is a model or, like, framework that will take, for example, inputs of the productivity gains or for, like, microfoundation as an input and then generate predictions for the entire economy. 


[TEASER ENDS] 

GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.

[MUSIC FADES] 

On today’s episode, I’m talking to Dr. Brendan Lucier, a senior principal researcher in the economics and computation group at Microsoft Research, and Dr. Mert Demirer, an assistant professor of applied economics at the MIT Sloan School of Management. Brendan and Mert are exploring the economic impact of job automation and generative AI as part of Microsoft’s AI, Cognition, and the Economy, or AICE, research initiative. And since they’re part of the AICE Accelerator Pilot collaborations, let’s get to know our collaborators. Brendan, let’s start with you and your “business address,” if you will. Your research lives at the intersection of microeconomic theory and theoretical computer science. So tell us what people—shall we call them theorists?—who live there do and why they do it! 

BRENDAN LUCIER: Thank you so much for having me. Yeah, so this is a very interdisciplinary area of research that really gets at, sort of, this intersection of computation and economics. And what it does is it combines the ideas from algorithm design and computational complexity that we think of when we’re building algorithmic systems with, sort of, the microeconomic theory of how humans will use those systems and how individuals make decisions, right. How their goals inform their actions and how they interact with each other. And where this really comes into play is in the digital economy and platforms that we, sort of, see online that we work with on an everyday basis, right. So we’re increasingly interacting with algorithms as part of our day-to-day life. So we use them to search for information; we use them to find rides and find jobs and have recommendations on what products we purchase. And as we do these things online, you know, some of the algorithms that go into this, like, help them grow into these huge-scale, you know, internet-sized global platforms. But fundamentally, these are still markets, right. So even though there’s a lot of algorithms and a lot of computational ideas that go into these, really what they’re doing is connecting human users to the goods and the services and to each other over the course of what they need to do in their day-to-day life, right. And so this is where this microeconomic view really comes into play. So what we know is that when people are interacting with these platforms to get at what they want, they’re going to be strategic about this, right. So people are always going to use tools in the ways that, sort of, work best for them, right, even if that’s not what the designer has in mind. And so when we’re designing algorithms, in a big way, we’re not necessarily designing solutions; we’re designing the rules of a game that people are going to end up playing with the platform or with each other.

HUIZINGA: Wow. 

LUCIER: And so a big part of, sort of, what we do in this area is that if we’re trying to understand the impact of, like, a technology change or a new platform that we’re going to design, we need to understand what it is that the users want and how they’re going to respond to that change when they interact with it. 

HUIZINGA: Right.

LUCIER: When we think about, sort of, microeconomic theory, a lot of this is, you know, ideas from game theory, ideas about how it is that humans make decisions, either on their own or in interaction with each other, right.

HUIZINGA: Yeah.

LUCIER: So when I’m in the marketplace, maybe I’m thinking not only about what’s best for me, but, sort of, I’m anticipating maybe what other people are going to be doing, as well. And I really need to be thinking about how the algorithms that make up the fundamentals of those marketplaces are going to influence the way people are thinking about not only what they’re doing but what other people are doing. 

HUIZINGA: Yeah, this is so fascinating because even as you started to list the things that we use algorithms—and we don’t even think about it—but we look for a ride, a job, a date. All of these things that are part of our lives have become algorithmic! 

LUCIER: Absolutely. And it’s fascinating that, you know, when we think about, you know, someone might launch a new algorithm, a new advance to these platforms, that looks on paper like it’s going to be a great improvement, assuming that people keep behaving the way they were behaving before. But of course, people will naturally respond, and so there’s always this moving target of trying to anticipate what it is that people actually are really trying to do and how they will adapt. 

HUIZINGA: We’re going to get into that so deep in a few minutes. But first, Mert, you are an assistant professor of economics at MIT’s famous Sloan School of Management, and your homepage tells us your research interests include industrial organization and econometrics. So unpack those interests for our listeners and tell us what you spend most of your time doing at the Sloan School. 

MERT DEMIRER: Thank you so much for having me. My name is Mert Demirer. I am an assistant professor at MIT Sloan, and I spend most of my time doing research and teaching MBAs. And in my research, I’m an economist, so I do research in a field called industrial organization. And the overarching theme of my research is firms and firm productivity. So in my research, I ask questions like, what makes firms more productive? What are the determinants of firm growth, or how do industries evolve over time? So what I do is I typically collect data from firms, and I use some econometric model or sometimes a model of industrial or the firm model and then I answer questions like these. And more recently, my research focused on new emerging technologies and how firms use these emerging technologies and what are the productivity effect of these new technologies. And I, more specifically, I did research on cloud computing, which is a really important technology … 

HUIZINGA: Yeah … 

DEMIRER: … transforming firms and industries. And more recently, my research focuses on AI, both, like, the adoption of AI and the productivity impact of AI. 

HUIZINGA: Right, right. You know, even as you say it, I’m thinking, what’s available data? What’s good data? And how much data do you need to make informed analysis or decisions? 

DEMIRER: So finding good data is a challenge in this research. In general, there are, like, official data sources like census or, like, census of manufacturers, which have been commonly used in productivity research. That data is very comprehensive and very useful. But of course, if you want to get into the details of, like, new technologies and, like, granular firm analysis, that’s not enough. So what I have been trying to do more recently is to find industry partners which have lots of good data on other firms. 

HUIZINGA: Gotcha. 

DEMIRER: So these are typically the main data sources I use. 

HUIZINGA: You know, this episode is part of a little series within a series we’re doing on AI, Cognition, and the Economy, and we started out with Abi Sellen from the Cambridge, UK, lab, who gave us an overview of the big ideas behind the initiative. And you’re going to give us some discussion today on AI and, specifically, the economy. But before we get into your current collaboration, let’s take a minute to “geolocate” ourselves in the world of economics and how your work fits into the larger AICE research framework. So, Brendan, why don’t you situate us with the “micro” view and its importance to this initiative, and then Mert can zoom out and talk about the “macro” view and why we need him, too. 

LUCIER: Yeah, sure. Yeah, I just, I just love this AICE program and the way that it puts all this emphasis on how human users are interacting with AI systems and tools, and this is really, like, a focal point of a lot of this, sort of, micro view, also. So, like, from this econ starting point of microeconomics, one place I think of is imagining how users would want to integrate AI tools into their day-to-day, right—into both their workflow as part of their jobs; in terms of, sort of, what they’re doing in their personal lives. And when we think about how new tools like AI tech, sort of, comes into those workflows, an even earlier question is, how is it that users are organizing what they do into individual tasks and, like, why are they doing them that way in the first place, right? So when we want to think about, you know, how it is that AI might come in and help them with pain points that they’re dealing with, we, sort of, need to understand, like, what it is they’re trying to accomplish and what are the goals that they have in mind. And this is super important when we’re trying to build effective tools because we need to understand how they’ll change their behavior or adjust to incorporate this new technology and trying to zoom into that view. 

HUIZINGA: Yeah. Mert, tell us a little bit more about the macro view and why that’s important in this initiative, as well. 

DEMIRER: Macro view is very complementary to micro view, and it takes a more holistic approach and analyzes the economy with its components rather than focusing on individual components. So instead of focusing on one component and analyze the collectivity effect of AI on a particular, like, occupation or sector, you just analyze this whole economy and you model the interactions between these components. And this holistic view is really essential if you want to understand AI because this is going to allow you to make, like, long-term projections and it’s going to help you understand how AI is going to affect, like, the entire economy. And to make things, like, more concrete—and going back to what Brendan said—that suppose you analyze a particular task or you figured out how AI saw the pain point and it increased the productivity by like x amount, so that impact on that occupation or, let’s say, the industry won’t be limited to that industry, right? The wage is going to change in this industry, but it’s going to affect other industries, potentially, like, labor from one industry which is affected significantly by AI to other industries, and, like, maybe new firms are going to emerge, some firms are going to exit, and so on. So this holistic view, it essentially models all of these components in just one system and also tries to understand the interactions between those. And as I said, this is really helpful because first of all, this helps you to make long-term projections about AI, how AI is going to impact the economy. And second, this is going to let you go beyond the first-order impact. Because you can essentially look at what’s going on and analyze or measure the first-order impact, but if you want to get the second- or third-order impact, then you need a framework or you need a bigger model. And typically, those, like, second- or third-order effects are typically the unintended effects or the hidden effects. 

HUIZINGA: Right. 

DEMIRER: And that’s why this, like, more holistic approach is useful, particularly for AI. 

HUIZINGA: Yeah, I got to just say right now, I feel like I wanted to sit down with you guys for, like, a couple hours—not with a microphone—but just talking because this is so fascinating. And Abi Sellen mentioned this term “line of sight” into future projections, which was sort of an AICE overview goal. Interestingly, Mert, when you mentioned the term productivity, is that the metric? Is productivity the metric that we’re looking to in terms of this economic impact? It seems to be a buzzword, that we need to be “more productive.” Is that, kind of, a framework for your thinking? 

DEMIRER: I think it is an important component. It’s an important component how we should analyze and think about AI because again, like, when you zoom into, like, the micro view of, like, how AI is going to affect my day-to-day work, that is, like, very natural to think that in terms of, like, productivity—oh, I saved, like, half an hour yesterday by using, like, AI. And, OK, that’s the productivity, right. That’s very visible. Like, that’s something you can see, something you can easily measure. But that’s only one component. So you need to understand how that productivity effect is going to change other things. 

HUIZINGA: Right! 

LUCIER: Like how I am going to spend the additional time, whether I’m going to spend that time for leisure or I’m going to do something else. 

HUIZINGA: Right. 

DEMIRER: In that sense, I think productivity is an important component, and maybe it is, like, the initial point to analyze these technologies. But we will definitely go beyond the productivity effect and understand how these, like, potential productivity effects are going to affect, like, the other parts of the economy and how the agents—like firms, people—are going to react to that potential productivity increase. 

HUIZINGA: Yeah, yeah, in a couple questions I’ll ask Brendan specifically about that. But in the meantime, let’s talk about how you two got together on this project. I’m always interested in that story. This question is also known as “how I met your mother.” And the meetup stories are often quite fun and sometimes surprising. In fact, last week, one person told his side of the story, and the other guy said, hey, I didn’t even know that! [LAUGHS] So, Brendan, tell us your side of who called who and how it went down, and then Mert can add his perspective. 

LUCIER: Great. So, yeah, so I’ve known Mert for quite some time! Mert joined our lab as a—the Microsoft Research New England lab—as an intern some years ago and then as a postdoc in, sort of, 2020, 2021. And so over that time, we got to know each other quite well, and I knew a lot about the macroeconomic work that Mert was doing. And so then, fast-forward to more recently, you know, this particular project initially started as discussions between myself and my colleague Nicole Immorlica at Microsoft Research and John Horton, who’s an economist at MIT who was visiting us as a visiting researcher, and we were discussing how the structure of different jobs and how those jobs break down into tasks might have an impact on how they might be affected by AI. And then very early on in that conversation, we, sort of, realized that, you know, this was really a … not just, like, a microeconomic question; it’s not just a market design question. The, sort of, the macroeconomic forces were super important. And then immediately, we knew, OK, like, Mert’s top of our list; we need, [LAUGHTER] we need, you know, to get Mert in here and talking to us about it. And so we reached out to him. 

HUIZINGA: Mert, how did you come to be involved in this from your perspective? 

DEMIRER: As Brendan mentioned, I spent quite a bit of time at Microsoft Research, both as an intern and as a postdoc, and Microsoft Research is a very, like, fun place to be as an economist and a really productive place to be as an economist because it’s very, like, interdisciplinary. It is a lot different from a typical academic department and especially an economics academic department. So my time at Microsoft Research has already led to a bunch of, like, papers and collaborations. And then when Brendan, like, emailed me with the research question, I thought it’s, like, no-brainer. It’s an interesting research question, like part of Microsoft Research. So I said, yeah, let’s do it! 

HUIZINGA: Brendan, let’s get into this current project on the economic impact of automation and generative AI. Such a timely and fascinating line of inquiry. Part of your research involves looking at a lot of current occupational data. So from the vantage point of microeconomic theory and your work, tell us what you’re looking at, how you’re looking at it, and what it can tell us about the AI future. 

LUCIER: Fantastic. Yeah, so in some sense, the idea of this project and the thing that we’re hoping to do is, sort of, get our hands on the long-term economic impact of generative AI. But it’s fundamentally, like, a super-hard problem, right? For a lot of reasons. And one of those reasons is that, you know, some of the effects could be quite far in the future, right. So this is things where the effects themselves but especially, like, the data we might look at to measure them could be years or decades away. And so, fundamentally, what we’re doing here is a prediction problem. And when we were trying to, sort of, look into the future this way, one way we do that is we try to get as much information as we can about where we are right now, right. And so we were lucky to have, like, a ton of information about the current state of the economy and the labor market and some short-term indicators on how generative AI seems to be, sort of, affecting things right now in this moment. And then the idea is to, sort of, layer some theory models on top of that to try to extrapolate forward, right, in terms of what might be happening, sort of get a glimpse of this future point. So in terms of the data we’re looking at right now, there’s this absolutely fantastic dataset that comes from the Department of Labor. It’s the O*NET database. This is the, you know, Occupational Information Network—publicly available, available online—and what it does is basically it breaks down all occupations across the United States, gives a ton of information about them, including—and, sort of, importantly for us—a very detailed breakdown of the individual tasks that make up the day-to-day in terms of those occupations, right. So, for example, if you’re curious to know what, like, a wind energy engineer does day-to-day, you could just go online and look it up, and so it basically gives you the entire breakdown. Which is fantastic. I mean, it’s, you know, I love, sort of, browsing it. It’s an interesting thing to do with an afternoon. [LAUGHTER] But from our perspective, the fact that we have these tasks—and it actually gives really detailed information about what they are—lets us do a lot of analysis on things like how AI tools and generative AI might help with different tasks. There’s a lot of analysis that we and, like, a lot of other papers coming out the last year have done in looking at which tasks do we think generative AI can have a big influence on and which ones less so in the present moment, right. And there’s been work by, you know, OpenAI and LinkedIn and other groups, sort of, really leaning into that. We can actually take that one step further and actually look also at the structure between tasks, right. So we can see not only, like, what fraction of the time I spend are things that can be influenced by generative AI but also how they relate to, like, my actual, sort of, daily goals. Like, when I look at the tasks I have to do, do I have flexibility in when and where I do them, or are things in, sort of, a very rigid structure? Are there groups of interrelated tasks that all happen to be really exposed to generative AI? And, you know, what does that say about how workers might reorganize their work as they integrate AI tools in and how that might change the nature of what it is they’re actually trying to do on a day-to-day basis? 

HUIZINGA: Right. 

LUCIER: So just to give an example, so, like, one of the earliest examples we looked at as we started digging into the data and testing this out was radiology. And so radiology is—you know, this is medical doctors that specialized in using medical imaging technology—and it happens to be an interesting example for this type of work because you know there are lots of tasks that make that up and they have a lot of structure to them. And it turns out when you look at those tasks, there’s interestingly, like, a big group of tasks that all, sort of, are prerequisites for an important, sort of, core part of the job, … 

HUIZINGA: Right … 

LUCIER: … which is, sort of, recommending a plan of which tests to, sort of, perform, right. So these are things like analyzing medical history and analyzing procedure requests, summarizing information, forming reports. And these are all things that we, sort of, expect that generative AI can be quite effective at, sort of, assisting with, right. And so the fact that these are all, sort of, grouped together and feed into something that’s a core part of the job really is suggestive that there’s an opportunity here to delegate some of those, sort of, prerequisite tasks out to, sort of, AI tools so that the radiologist can then focus on the important part, which is the actual recommendations that they can make. 

HUIZINGA: Right. 

LUCIER: And so the takeaway here is that it matters, like, how these tasks are related to each other, right. Sort of, the structure of, you know, what it is that I’m doing and when I’m doing them, right. So this situation would perhaps be very different if, as I was doing these tasks where AI is very helpful, I was going back and forth doing consulting with patients or something like this, where in that, sort of, scenario, I might imagine that, yeah, like an AI tool can help me, like, on a task-by-task basis but maybe I’m less likely to try to, like, organize all those together and automate them away. 

HUIZINGA: Right. Yeah, let me focus a little bit more on this idea of you in the lab with all this data, kind of, parsing out and teasing out the tasks and seeing which ones are targets for AI, which ones are threatened by AI, which ones would be wonderful with AI. Do you have buy-in from these exemplar-type occupations that they say, yes, we would like you to do this to help us? I mean, is there any of that collaboration going on with these kinds of occupations at the task level? 

LUCIER: So the answer is not yet. [LAUGHTER] But this is definitely an important part of the workflow. So I would say that, you know, ultimately, the goal here is that, you know, as we’re looking for these patterns across, like, individual exemplar occupations, that, sort of, what we’re looking for is relationships between tasks that extrapolate out, right. Across lots of different industries, right. So, you know, it’s one thing to be able to say, you know, a lot of very deep things about how AI might influence a particular job or a particular industry. But in some sense, the goal here is to see patterns of tasks that are repeated across lots of different occupations, across lots of different sectors that say, sort of, these are the types of patterns that are really amenable to, sort of, AI being integrated well into the workforce, whereas these are scenarios where it’s much more of an augmenting story as opposed to an automating story. But I think one of the things that’s really interesting about generative AI as a technology here, as opposed to other types of automated technology, is that while there are lots of aspects of a person’s job that can be affected by generative AI, there’s this relationship between the types of work that I might use an AI for versus the types of things that are, sort of, like the core feature of what I’m doing on a day-to-day. 

HUIZINGA: Right. Gotcha … 

LUCIER: And so, maybe it’s, like, at least in the short term, it actually looks quite helpful to say that, you know, there are certain aspects of my work, like going out and summarizing a bunch of heavy data reports, that I’m very happy to have an AI, sort of, do that part of my work. So then I can go and use those things forward in, sort of, the other half of my day. 

HUIZINGA: Yeah. And that’s to Mert’s point: look how much time I just saved! Or I got a half hour back! We’ll get to that in a second. But I really now am eager, Mert, to have you explain your side of this. Brendan just gave us a wonderful task-centric view of AI’s impact on specific jobs. I want you to zoom out and talk about the holistic, as you mentioned before, or macroeconomic view in this collaboration. How are you looking at the impact of AI beyond job tasks, and what role does your work play in helping us understand how these advances in AI might affect job markets and the economy writ large? 

DEMIRER: One thing Brendan mentioned a few minutes ago is this is a prediction task. Like, we need to predict what will be the effect of AI, how AI is going to affect the economy, especially in the long run. So this is a prediction problem that we cannot use machine learning, AI. Otherwise, it would have been a very easy problem to solve. 

HUIZINGA: Right … [LAUGHS] 

DEMIRER: So what you need instead is a model or, like, framework that will take, for example, inputs of, like, the productivity gains, for example, like Brendan talked about, or for, like, microfoundation as an input and then generate predictions for the entire economy. To do that, what I do in my research is I develop and use models of industries and firms. So these models essentially incorporate a bunch of economic agents. Like, this could be labor; this could be firms; this could be [a] policymaker who is trying to regulate the industry. And then you write down the incentives of these, like, different agents in the economy, and then you write down this model, you solve this model with the available data, and then this model gives you predictions. So you can, once you have a model like this, you can ask what would be the effect of a change in the economic environment on like wages, on productivity, on industry concentration, let’s say. So this is what I do in my research. So, like, I briefly mentioned my research on cloud computing. I think this is a very good example. When you think about cloud computing, always … everyone always, like, thinks about it helps you, like, scale very rapidly, which is true, and, like, which is the actual, like, the firm-level effect of cloud computing. But then the question is, like, how that is going to affect the entire industry, whether the industry is going to be more concentrated or less concentrated, it’s going to grow, like, faster, or which industry is going to grow faster, and so on. So essentially, in my research, I develop models like this to answer questions—these, like, high-level questions. And when it comes to AI, we have these, like, very detailed micro-level studies, like these exposure measures Brendan already mentioned, and the framework, the micro framework, we developed is a task view of AI. What you do is, essentially, you take the output of that micro model and then you feed it into a bigger economy-level model, and you develop a higher-level prediction. So, for example, you can apply this, like, task-based model on many different occupations. You can get a number for every occupation, like for occupation A, productivity will be 5 percent; for occupation B, it’s going to be like 10 percent; and so on. You can aggregate them at the industry level—you can get some industry-level numbers—you feed those numbers into a more, like, general equilibrium model and then you solve the model and then you answer questions like, what will be the effect of AI on wage on average? Or, like, what will be the effect of AI on, like, total output in the economy? So my research is, like, more on this answering, like, bigger industry-level or economic-level questions. 

HUIZINGA: Well, Brendan, one of our biggest fears about AI is that it’s going to “steal our jobs.” I just made air quotes on a podcast again. But this isn’t our first disruptive technology rodeo, to use a phrase. So that said, it’s the first of its kind. What sets AI apart from disruptive technologies of the past, and how can looking at the history of technological revolutions help us manage our expectations, both good and bad? 

LUCIER: Fantastic. Such an important question. Yeah, like there’s been, you know, just so much discussion and “negativity versus optimism” debates in the world in the public sphere … 

HUIZINGA: Hope versus hype … 

LUCIER: … and in the academic sphere … yeah, exactly. Hope versus hype. But as you say, yeah, it’s not our first rodeo. And we have a lot of historical examples of these, you know, disruptive, like, so-called general-purpose technologies that have swept through the economy and made a lot of changes and enabled things like electricity and the computer and robotics. Going back further, steam engine and the industrial revolution. You know, these things are revolutions in the sense that, you know, they sort of rearrange work, right. They’re not just changing how we do things. They change what it is that we even do, like just the nature of work that’s being done. And going back to this point of automation versus augmentation, you know, what that looks like can vary quite a bit from revolution to revolution, right. So sometimes this looks like fully automating away certain types of work. But in other cases, it’s just a matter of, sort of, augmenting workers that are still doing, in some terms, what they were doing before but with a new technology that, like, substantially helps them and either takes part of their job and makes it redundant so they can focus on something that’s, you know, more core or just makes them do what they were doing before much, much faster. 

HUIZINGA: Right. 

LUCIER: And either way, you know, this can have a huge impact on the economy and especially, sort of, the labor market. But that impact can be ambiguous, right. So, you know, if I make, you know, a huge segment of workers twice as productive, then companies have a choice. They can keep all the workers and have twice the output, or they can get the same output with half as many workers or something in between, and, you know, which one of those things happens depends not even so much on the technology but on, sort of, the broader economic forces, right. The, you know, the supply and demand and how things are going to come together in equilibrium, which is why this macroeconomic viewpoint is so important to actually give the predictions on, you know, how companies might respond to these changes that are coming through the new technology. Now, you know, where GenAI is, sort of, interesting as an example is the way that, you know, what types of work it impacts, right. So generative AI is particularly notable in that it impacts, you know, high-skill, you know, knowledge-, information-based work directly, right[1]. And it cuts across so many different industries. We think of all the different types of occupations that involve, you know, summarizing data or writing a report or writing emails. There’s so many different types of occupations where this might not be the majority of what they do, but it’s a substantial fraction of what they do. And so in many cases, you know, this technology—as we were saying before—can, sort of, come in and has the potential to automate out or at least really help heavily assist with parts of the job but, in some cases, sort of, leave some other part of the job, which is a core function. And so these are the places where we really expect this human-AI collaboration view to be especially impactful and important, right. Where we’re going to have lots of different workers in lots of different occupations who are going to be making choices on which parts of their work they might delegate to, sort of, AI agents and which parts of the work, you know, they really want to keep their own hands on. 

HUIZINGA: Right, right. Brendan, talk a little more in detail about this idea of low-skill work and high-skill work, maybe physical labor and robotics kind of replacements versus knowledge worker and mental work replacements, and maybe shade it a little bit with the idea of inequalities and how that’s going to play out. I mean, I imagine this project, this collaboration, is looking at some of those issues, as well? 

LUCIER: Absolutely. So, yeah, when we think about, you know, what types of work get affected by some new technology—and especially, sort of, automation technology—a lot of the times in the past, the sorts of work that have been automated out are what we’d call low-skill or, like, at least, sort of, more physical types of labor being replaced or automated by, you know, robotics. We think about the potential of manufacturing and how that displaces, like, large groups of workers who are, sort of, working in the factory manually. And so there’s a sense when this, sort of, happens and a new technology comes through and really disrupts work, there’s this transition period where certain people, you know, even if at the end of the day, the economy will eventually reach sort of new equilibrium which is generally more productive or good overall, there’s a big question of who’s winning and who’s losing both in the long term but especially in that short term, … 

HUIZINGA: Yeah! 

LUCIER: … sort of intermediate, you know, potentially very chaotic and disruptive period. And so very often in these stories of automation historically, it’s largely marginalized low-skill workers who are really getting affected by that transition period. AI—and generative AI in particular—is, sort of, interesting in the potential to be really hitting different types of workers, right. 

HUIZINGA: Right. 

LUCIER: Really this sort of, you know, middle sort of white-collar, information-work class. And so, you know, really a big part of this project and trying to, sort of, get this glimpse into the future is getting, sort of, this—again, as you said—line of sight on which industries we expect to be, sort of, most impacted by this, and is it as we might expect, sort of, those types of work that are most directly affected, or are there second- or third-order effects that might do things that are unanticipated? 

HUIZINGA: Right, and we’ll talk about that in a second. So, Mert, along those same lines, it’s interesting to note how new technologies often start out simply by imitating old technologies. Early movies were stage plays on film. Email was a regular letter sent over a computer. [LAUGHS] Video killed the radio star … But eventually, we realized that these new technologies can do more than we thought. And so when we talked before, you said something really interesting. You said, “If a technology only saves time, it’s boring technology.” What do you mean by that? And if you mean what I think you mean, how does the evolution—not revolution but evolution—of previous technologies serve as a lens for the affordances that we may yet get from AI? 

DEMIRER: Let me say first, technology that saves time is still very useful technology! [LAUGHTER] Who wouldn’t want a technology that will save time? 

HUIZINGA: Sure … 

DEMIRER: But it is less interesting for us, like, to study and maybe it’s, like, less interesting in terms of, like, the broader implications. And so why is that? Because if a technology saves time, then, OK, so I am going to have maybe more time, and the question is, like, how I’m going to spend that time. Maybe I’m going to have more leisure or maybe I’m going to have to produce more. It’s, like, relatively straightforward to analyze and quantify. So however, like, the really impactful technologies could allow us to accomplish new tasks that were previously impossible, and they should open up new opportunities for creativity. And I think here, this knowledge-worker impact of AI is particularly important because I think as a technology, the more it affects knowledge worker, the more likely it’s going to allow us to achieve new things; it’s going to allow us to create more things. So I think in that sense, I think generative AI has a huge potential in terms of making us accomplish new things. And to give you an example from my personal experience, so I’m a knowledge worker, so I do research, I teach, and generative AI is going to help my work, as well. So it’s already affecting … so it’s already saving me time. It’s making me more productive. So suppose that generative AI just, like, makes me 50 percent more productive, let’s say, like five years from now, and that’s it. That’s the only effect. So what’s going to happen to my job? Either I’m going to maybe, like, take more time off or maybe I’m going to write more of the same kind of papers I am writing in economics. But … so imagine, like, generative AI is helping me writing a different kind of paper. How is that possible? So I have a PhD in econ, and if I try really hard, maybe I can do another PhD. But that’s it. Like, I can specialize only one or, like, two topics. But imagine generative AI as an, like, agent or collaborator having PhD in, like, hundreds of different fields, and then you can, like, collaborate and, like, communicate and get information through generative AI on really different fields. That will allow me to do different kinds of research, like more interdisciplinary kinds of research. In that sense, I think the really … the most important part of generative AI is going to be this … what it will allow us to achieve new things, like what creative new things we are going to do. And I can give you a simple example. Like, we were talking about previous technologies. Let’s think of internet. So what was the first application of internet? It’s sending an email. It saves you time. Instead of writing things on a paper and, like, mailing it, you just, like, send it immediately, and it’s a clear time-saving technology. But what are the major implications for internet, like, today? It’s not email. It is like e-commerce, or it is like social media. It allows us to access infinite number of products beyond a few stores in our neighborhood, or it allows us to communicate or connect with people all around the world … 

HUIZINGA: Yeah … 

DEMIRER: … instead of, again, like limiting ourselves to our, like, social circle. So in that sense, I think we are currently in the “email phase” of AI, … 

HUIZINGA: Right … 

DEMIRER: … and we are going to … like, I think AI is going to unlock so many other new capabilities and opportunities, and that is the most exciting part. 

HUIZINGA: Clearly, one of the drivers behind the whole AICE research initiative is the question of what could possibly go wrong if we got everything right, and I want to anchor this question on the common premise that if we get AI right, it will free us from drudgery—we’ve kind of alluded to that—and free us to spend our time on more meaningful or “human”—more air quotes there—pursuits. So, Brendan, have you and your team given any thought to this idea of unintended consequences and what such a society might actually look like? What will we do when AI purportedly gives us back our time? And will we really apply ourselves to making the world better? Or will we end up like those floating people in the movie WALL-E?

LUCIER: [LAUGHS] I love that framing, and I love that movie, so this is great. Yeah. And I think this is one of these questions about, sort of, the possible futures that I think is super important to be tackling. In the past, people, sort of, haven’t stopped working; they’ve shifted to doing different types of work. And as you’re saying, there’s this ideal future in which what’s happening is that people are shifting to doing more meaningful work, right, and the AI is, sort of, taking over parts of the, sort of, the drudgery, you know. These, sort of, annoying tasks that, sort of, I need to do as just, sort of, side effects of my job. I would say that where the economic theory comes in and predicts something that’s slightly different is that I would say that the economic theory predicts that people will do more valuable work in the sense that people will tend to be shifted in equilibrium towards doing things that complement what it is that the AI can do or doing things that the AI systems can’t do as well. And, you know, this is really important in the sense that, like, we’re building these partnerships with these AI systems, right. There’s this human-AI collaboration where human people are doing the things that they’re best at and the AI systems are doing the things that they’re best at. And while we’d love to imagine that, like, that more valuable work will ultimately be more meaningful work in that it’s, sort of, fundamentally more human work, that doesn’t necessarily have to be the case. You know, we can imagine scenarios in which I personally enjoy … there are certain, you know, types of routine work that I happen to personally enjoy and find meaningful. But even in that world, if we get this right and, sort of, the, you know, the economy comes at equilibrium to a place where people are being more productive, they’re doing more valuable work, and we can effectively distribute those gains to everybody, there’s a world in which, you know, this has the potential to be the rising tide that lifts all boats. 

HUIZINGA: Right. 

LUCIER: And so that what we end up with is, you know, we get this extra time, but through this different sort of indirect path of the increased standard of living that comes with an improved economy, right. And so that’s the sort of situation where that source of free time I think really has the potential to be somewhere where we can use it for meaningful pursuits, right. But there are a lot of steps to take to, sort of, get there, and this is why it’s, I think, super important to get this line of sight on what could possibly be happening in terms of these disruptions. 

HUIZINGA: Right. Brendan, something you said reminded me that I’ve been watching a show called Dark Matter, and the premise is that there’s many possible lives we could live, all determined by the choices we make. And you two are looking at possible futures in labor markets and the economy and trying to make models for them. So how do existing hypotheses inform where AI is currently headed, and how might your research help predict them into a more optimal direction? 

LUCIER: Yeah, that’s a really big question. Again, you know, as we’ve said a few times already, there’s this goal here of getting this heads-up on which segments of the economy can be most impacted. And we can envision these better futures as the economy stabilizes, and maybe we can even envision pathways towards getting there by trying to address, sort of, the potential effects of inequality and the distribution of those gains across people. But even in a world where we get all those things right, that transition is necessarily going to be disruptive, right. 

HUIZINGA: Right. 

LUCIER: And so even if we think that things are going to work out well in the long term, in the short term, there’s certainly going to be things that we would hope to invest in to, sort of, improve for everyone. And so even in a world where we believe, sort of, the technology is out there and we really think that people are going to be using it in the ways that make most sense to them, as we get hints about where these impacts can be largest, I think that an important value there is that it lets us anticipate opportunities for responsible stewardship, right. So if we can see where there’s going to be impact, I think we can get a hint as to where we should be focusing our efforts, and that might look like getting ahead of demand for certain use cases or anticipating extra need for, you know, responsible AI guardrails, or even just, like, understanding, you know, [how] labor market impacts can help us inform policy interventions, right. And I think that this is one of the things that gets me really excited about doing this work at Microsoft specifically. Because of how much Microsoft has been investing in responsible AI, and, sort of, the fundamentals that underlie those guardrails and those possible actions means that we, sort of, in this company, we have the ability to actually act on those opportunities, right. And so I think it’s important to really, sort of, try to shine as much light as possible on where we think those will be most effective. 

HUIZINGA: Yeah. Mert, I usually ask my guests on Collaborators where their research is on the spectrum from “lab to life,” but this isn’t that kind of research. We might think of it more in terms of “lab for life” research, where your findings could actually help shape the direction of the product research in this field. So that said, where are you on the timeline of this project, and do you have any learnings yet that you could share with us? 

DEMIRER: I think the first thing I learned about this project is it is difficult to study AI! [LAUGHTER] So we are still in, like, the early stages of the project. So we developed this framework we talked about earlier in the podcast, and now what we are doing is we are applying that framework to a few particular occupations. And the challenge we had is these occupations, when you just describe them, it’s like very simple, but when you go to this, like, task view, it’s actually very complex, the number of tasks. Sometimes we see in the data, like, 20, 30 tasks they do, and the relationship between those tasks. So it turned out to be more difficult than I expected. So what we are currently doing is we are applying the framework to a few specific tasks which help us understand how the model works and whether the model needs any adjustment. And then the goal is once we understand the model on any few specific cases, we’ll scale that up. And then we are going to develop these big predictions on the economy. So we are currently not there yet, but we are hoping to get there pretty soon. 

HUIZINGA: And just to, kind of, follow up on that, what would you say your successful outcome of this research would be? What’s your artifact that you would deliver from this project as collaboration? 

DEMIRER: So ultimately, our goal is to develop predictions that will inform the trajectory the AI is taking, that’s going to inform, like, the policy. That’s our goal, and if we generate that output, and especially if it informs policy of how firms or different agents of the economy adopt AI, I think that will be the ideal output for this project. 

HUIZINGA: Yeah. And what you’ve just differentiated is that there are different end users of your research. Some of them might be governmental. Some of them might be corporate. Some of them might even be individuals or even just layers of management that try to understand how this is working and how they’re working. So wow. Well, I usually close each episode with some future casting. But that basically is what we’ve been talking about this whole episode. So I want to end instead by asking each of you to give some advice to researchers who might be just getting started in AI research, whether that’s the fields that develop the technology itself or the fields that help define its uses and the guardrails we put around it. So what is it important for us to pay attention to right now, and what words of wisdom could you offer to aspiring researchers? I’ll give you each the last word. Mert, why don’t you go first? 

DEMIRER: My first advice will be use AI yourself as much as possible. Because the great thing about AI is that everyone can access this technology even though it’s a very early stage, so there’s a huge opportunity. So I think if you want to study AI, like, you should use it as much as possible. That personally allows me to understand the technology better and also develop research questions. And the second advice would be to stay up to date with what’s happening. This is a very rapidly evolving technology. There is a new product, new use case, new model every day, and it’s hard to keep up. And it is actually important to distinguish between questions that won’t be relevant two months from now versus questions that’s going to be important five years from now. And that requires understanding how the technology is evolving. So I personally find it useful to stay up to date with what’s going on. 

HUIZINGA: Brendan, what would you add to that? 

LUCIER: So definitely fully agree with all of that. And so I guess I would just add something extra for people who are more on the design side, which is that when we build, you know, these systems, these AI tools and guardrails, we oftentimes will have some anticipated, you know, usage or ideas in our head of how this is going to land, and then there’ll always be this moment where it, sort of, meets the real users, you know, the humans who are going to use those things in, you know, possibly unanticipated ways. And, you know, this can be oftentimes a very frustrating moment, but this can be a feature, not a bug, very often, right. So the combined insight and effort of all the users of a product can be this, like, amazing strong force. And so, you know, this is something where we can try to fight against it or we can really try to, sort of, harness it and work with it, and this is why it’s really critical when we’re building especially, sort of, user-facing AI systems, that we design them from the ground up to be, sort of, collaborating, you know, with our users and guiding towards, sort of, good outcomes in the long term, you know, as people jointly, sort of, decide how best to use these products and guide towards, sort of, good usage patterns. 

[MUSIC] 

HUIZINGA: Hmmm. Well, Brendan and Mert, as I said before, this is timely and important research. It’s a wonderful contribution to the AICE research initiative, and I’m thrilled that you came on the podcast today to talk about it. Thanks for joining us. 

LUCIER: Thank you so much. 

DEMIRER: Thank you so much. 

[MUSIC FADES] 


[1] For more information, Lucier notes two resources about the economic impact of GenAI: GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models and Preparing the Workforce for Generative AI.

The post Collaborators: AI and the economy with Brendan Lucier and Mert Demirer appeared first on Microsoft Research.


Improve AI assistant response accuracy using Knowledge Bases for Amazon Bedrock and a reranking model

Improve AI assistant response accuracy using Knowledge Bases for Amazon Bedrock and a reranking model

AI chatbots and virtual assistants have become increasingly popular in recent years thanks to the breakthroughs of large language models (LLMs). Trained on large volumes of data, these models incorporate memory components in their architectural design, allowing them to understand textual context.

The most common use cases for chatbot assistants focus on a few key areas: enhancing customer experiences, boosting employee productivity and creativity, and optimizing business processes. Examples include customer support, troubleshooting, and internal and external knowledge base search.

Despite these capabilities, a key challenge with chatbots is generating high-quality and accurate responses. One way of solving this challenge is to use Retrieval Augmented Generation (RAG). RAG is the process of optimizing the output of an LLM so it references an authoritative knowledge base outside of its training data sources before generating a response. Reranking seeks to improve search relevance by reordering the result set returned by a retriever with a different model. In this post, we explain how two techniques—RAG and reranking—can help improve chatbot responses using Knowledge Bases for Amazon Bedrock.

Solution overview

RAG is a technique that combines the strengths of knowledge base retrieval and generative models for text generation. It works by first retrieving relevant passages from a database, then using those passages as context for the generative model to produce a final output. Using a RAG approach for building a chatbot has many advantages. For example, retrieving relevant context from the knowledge base before generating a response tends to produce more relevant and coherent answers, which helps improve the conversational flow. RAG also scales better with more data compared to pure generative models, and it doesn’t require fine-tuning of the model when new data is added to the knowledge base. Additionally, the retrieval component enables the model to incorporate external knowledge by retrieving relevant background information from its database. This approach helps provide factual, in-depth, and knowledgeable responses.

To find an answer, RAG uses vector search across the documents. The advantages of using vector search are speed and scalability. Rather than scanning every single document to find the answer, with the RAG approach you turn the texts (the knowledge base) into embeddings and store these embeddings in a database. The embeddings are a compressed representation of the documents, expressed as an array of numerical values. After the embeddings are stored, the vector search queries the vector database to find similar documents based on the vectors associated with them. Typically, a vector search returns the top k most relevant documents based on the user question. However, because the similarity algorithm in a vector database works on vectors and not documents, vector search doesn’t always return the most relevant information in the top k results. This directly impacts the accuracy of the response if the most relevant contexts aren’t available to the LLM.
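
To make the retrieval idea concrete, the following standalone sketch (not part of the Knowledge Bases workflow shown later) embeds a question and a handful of documents with an Amazon Bedrock embedding model and returns the top k documents by cosine similarity. The Titan model ID is an assumption and may differ by Region or version.

import json

import boto3
import numpy as np

bedrock_runtime = boto3.client("bedrock-runtime")

def embed(text, model_id="amazon.titan-embed-text-v2:0"):
    # Call a Bedrock embedding model; the model ID here is an assumption
    response = bedrock_runtime.invoke_model(
        modelId=model_id,
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(response["body"].read())["embedding"])

def top_k_documents(question, documents, k=3):
    # Score each document by cosine similarity against the question embedding
    q = embed(question)
    scored = []
    for doc in documents:
        d = embed(doc)
        score = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
        scored.append((score, doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:k]]

In practice, Knowledge Bases for Amazon Bedrock handles the embedding, storage, and top k retrieval for you, which is what the rest of this walkthrough relies on.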

Reranking is a technique that can further improve the responses by reordering the retrieved contexts and selecting the most relevant candidates before they are passed to the LLM. The following architecture illustrates how a reranking solution could work.

Architecture diagram for reranking model integration with Knowledge Bases for Amazon Bedrock
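
To make the reranking step concrete before wiring it into SageMaker, here is a minimal local sketch that scores each retrieved passage against the query with a cross-encoder and keeps the highest-scoring passages. Running bge-reranker-large locally through the sentence-transformers library is an illustrative shortcut; the rest of this post instead deploys the reranker behind a SageMaker endpoint.

from sentence_transformers import CrossEncoder

# Load the reranker locally for illustration; later in this post the model is
# hosted as a SageMaker endpoint instead.
reranker = CrossEncoder("BAAI/bge-reranker-large")

def rerank(query, passages, top_n=3):
    # Score every (query, passage) pair, then keep the top_n passages by score
    scores = reranker.predict([[query, passage] for passage in passages])
    ranked = sorted(zip(scores, passages), key=lambda pair: pair[0], reverse=True)
    return [passage for _, passage in ranked[:top_n]]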

Let’s create a question answering solution where we ingest The Great Gatsby, a 1925 novel by American writer F. Scott Fitzgerald that is publicly available through Project Gutenberg. We use Knowledge Bases for Amazon Bedrock to implement the end-to-end RAG workflow and ingest the embeddings into an Amazon OpenSearch Serverless vector search collection. We retrieve answers using standard RAG and a two-stage RAG that adds a reranking step, and then compare the results from the two methods.

The code sample is available in this GitHub repo.

In the following sections, we walk through the high-level steps:

  1. Prepare the dataset.
  2. Generate questions from the document using an Amazon Bedrock LLM.
  3. Create a knowledge base that contains this book.
  4. Retrieve answers using the knowledge base retrieve API.
  5. Evaluate the response using the RAGAS framework.
  6. Retrieve answers again by running a two-stage RAG, using the knowledge base retrieve API and then applying reranking on the context.
  7. Evaluate the two-stage RAG response using the RAGAS framework.
  8. Compare the results and the performance of each RAG approach.

For efficiency, we provide sample code in a notebook that generates a set of questions and answers. These Q&A pairs are used in the RAG evaluation process. We highly recommend having a human validate each question and answer for accuracy.

The following sections explain the major steps with the help of code blocks.

Prerequisites

To clone the GitHub repository to your local machine, open a terminal window and run the following commands:

git clone https://github.com/aws-samples/amazon-bedrock-samples
cd knowledge-bases/features-examples/03-advanced-concepts/reranking

Prepare the dataset

Download the book from the Project Gutenberg website. For this post, we create 10 large documents from this book and upload them to Amazon Simple Storage Service (Amazon S3):

target_url = "https://www.gutenberg.org/ebooks/64317.txt.utf-8" # the great gatsby
data = urllib.request.urlopen(target_url)
my_texts = []
for line in data:
my_texts.append(line.decode())

doc_size = 700 # size of the document to determine number of batches
batches = math.ceil(len(my_texts) / doc_size)

sagemaker_session = sagemaker.Session()
default_bucket = sagemaker_session.default_bucket()
s3_prefix = "bedrock/knowledgebase/datasource"

start = 0
s3 = boto3.client("s3")
for batch in range(batches):
    batch_text_arr = my_texts[start:start+doc_size]
    batch_text = "".join(batch_text_arr)
    s3.put_object(
        Body=batch_text,
        Bucket=default_bucket,
        Key=f"{s3_prefix}/{start}.txt"
    )
    start += doc_size  

Create a knowledge base for Amazon Bedrock

If you’re new to using Knowledge Bases for Amazon Bedrock, refer to Knowledge Bases for Amazon Bedrock now supports Amazon Aurora PostgreSQL and Cohere embedding models, where we described how Knowledge Bases for Amazon Bedrock manages the end-to-end RAG workflow.

In this step, you create a knowledge base using a Boto3 client. You use Amazon Titan Text Embeddings v2 to convert the documents into embeddings (embeddingModelArn) and point to the S3 bucket you created earlier as the data source (dataSourceConfiguration):

bedrock_agent = boto3.client("bedrock-agent")
response = bedrock_agent.create_knowledge_base(
    name=knowledge_base_name,
    description='Knowledge Base for Bedrock',
    roleArn=role_arn,
    knowledgeBaseConfiguration={
        'type': 'VECTOR',
        'vectorKnowledgeBaseConfiguration': {
            'embeddingModelArn': embedding_model_arn
        }
    },
    storageConfiguration={
        'type': 'OPENSEARCH_SERVERLESS',
        'opensearchServerlessConfiguration': {
            'collectionArn': collection_arn,
            'vectorIndexName': index_name,
            'fieldMapping': {
                'vectorField':  "bedrock-knowledge-base-default-vector",
                'textField': 'AMAZON_BEDROCK_TEXT_CHUNK',
                'metadataField': 'AMAZON_BEDROCK_METADATA'
            }
        }
    }
)
knowledge_base_id = response['knowledgeBase']['knowledgeBaseId']
knowledge_base_name = response['knowledgeBase']['name']

response = bedrock_agent.create_data_source(
    knowledgeBaseId=knowledge_base_id,
    name=f"{knowledge_base_name}-ds",
    dataSourceConfiguration={
        'type': 'S3',
        's3Configuration': {
            'bucketArn': f"arn:aws:s3:::{bucket}",
            'inclusionPrefixes': [
                f"{s3_prefix}/",
            ]
        }
    },
    vectorIngestionConfiguration={
        'chunkingConfiguration': {
            'chunkingStrategy': 'FIXED_SIZE',
            'fixedSizeChunkingConfiguration': {
                'maxTokens': 300,
                'overlapPercentage': 10
            }
        }
    }
)
data_source_id = response['dataSource']['dataSourceId']

response = bedrock_agent.start_ingestion_job(
    knowledgeBaseId=knowledge_base_id,
    dataSourceId=data_source_id,
)

Generate questions from the document

We use Anthropic Claude on Amazon Bedrock to generate a list of 10 questions and the corresponding answers. The Q&A data serves as the foundation for evaluating the RAG approaches that we implement, and we treat the generated answers from this step as ground truth data. See the following code:

prompt_template = """The question should be diverse in nature 
across the document. The question should not contain options, not start with Q1/ Q2. 
Restrict the question to the context information provided.

<document>
{{document}}
</document>

Think step by step and pay attention to the number of question to create.

Your response should follow the format as followed:

Question: question
Answer: answer

"""
system_prompt = """You are a professor. Your task is to setup 1 question for an upcoming 
quiz/examination based on the given document wrapped in <document></document> XML tag."""

prompt = prompt_template.replace("{{document}}", documents)
temperature = 0.9
top_k = 250
messages = [{"role": "user", "content": [{"text": prompt}]}]
# Base inference parameters to use.
inference_config = {"temperature": temperature, "maxTokens": 512, "topP": 1.0}
# Additional inference parameters to use.
additional_model_fields = {"top_k": top_k}

# Send the message.
response = bedrock_runtime.converse(
    modelId=model_id,
    messages=messages,
    system=[{"text": system_prompt}],
    inferenceConfig=inference_config,
    additionalModelRequestFields=additional_model_fields
)
print(response['output']['message']['content'][0]['text'])
result = response['output']['message']['content'][0]['text']
q_pos = [(a.start(), a.end()) for a in list(re.finditer("Question:", result))]
a_pos = [(a.start(), a.end()) for a in list(re.finditer("Answer:", result))]
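
The notebook in the repository uses these marker positions to slice the model output into question-and-answer pairs. The following is only a sketch of what that parsing might look like; it assumes each Question: is immediately followed by its Answer:, matching the response format requested in the prompt, and the variable names (questions, ground_truths) are illustrative.

questions = []
ground_truths = []
for (q_start, q_end), (a_start, a_end) in zip(q_pos, a_pos):
    # Text between the end of "Question:" and the start of "Answer:" is the question
    questions.append(result[q_end:a_start].strip())
    # The answer runs from the end of "Answer:" to the next "Question:" marker (or end of text)
    next_starts = [s for s, _ in q_pos if s > a_end]
    answer_end = next_starts[0] if next_starts else len(result)
    ground_truths.append(result[a_end:answer_end].strip())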

Retrieve answers using the knowledge base APIs

We use the generated questions and retrieve answers from the knowledge base using the retrieve and converse APIs:

contexts = []
answers = []

for question in questions:
    response = agent_runtime.retrieve(
        knowledgeBaseId=knowledge_base_id,
        retrievalQuery={
            'text': question
        },
        retrievalConfiguration={
            'vectorSearchConfiguration': {
                'numberOfResults': topk
            }
        }
    )
    
    retrieval_results = response['retrievalResults']
    local_contexts = []
    for result in retrieval_results:
        local_contexts.append(result['content']['text'])
    contexts.append(local_contexts)
    combined_docs = "n".join(local_contexts)
    prompt = llm_prompt_template.replace("{{documents}}", combined_docs)
    prompt = prompt.replace("{{query}}", question)
    temperature = 0.9
    top_k = 250
    messages = [{"role": "user", "content": [{"text": prompt}]}]
    # Base inference parameters to use.
    inference_config = {"temperature": temperature, "maxTokens": 512, "topP": 1.0}
    # Additional inference parameters to use.
    additional_model_fields = {"top_k": top_k}

    # Send the message.
    response = bedrock_runtime.converse(
        modelId=model_id,
        messages=messages,
        inferenceConfig=inference_config,
        additionalModelRequestFields=additional_model_fields
    )
    answers.append(response['output']['message']['content'][0]['text'])

Evaluate the RAG response using the RAGAS framework

We now evaluate the effectiveness of the RAG approach using a framework called RAGAS. The framework provides a suite of metrics to evaluate different dimensions. In our example, we evaluate responses based on the following dimensions (a sketch of the corresponding evaluation call follows the list):

  • Answer relevancy – This metric focuses on assessing how pertinent the generated answer is to the given prompt. A lower score is assigned to answers that are incomplete or contain redundant information. This metric is computed using the question and the answer, with values ranging between 0–1, where higher scores indicate better relevancy.
  • Answer similarity – This assesses the semantic resemblance between the generated answer and the ground truth. This evaluation is based on the ground truth and the answer, with values falling within the range of 0–1. A higher score signifies a better alignment between the generated answer and the ground truth.
  • Context relevancy – This metric gauges the relevancy of the retrieved context, calculated based on both the question and contexts. The values fall within the range of 0–1, with higher values indicating better relevancy.
  • Answer correctness – The assessment of answer correctness involves gauging the accuracy of the generated answer when compared to the ground truth. This evaluation relies on the ground truth and the answer, with scores ranging from 0–1. A higher score indicates a closer alignment between the generated answer and the ground truth, signifying better correctness.
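
The evaluation itself is driven by the RAGAS library. As an illustration only, a call might look like the following; the exact metric imports, dataset column names, and judge LLM or embedding configuration depend on the RAGAS version used in the repository, and questions, answers, contexts, and ground_truths come from the previous steps.

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    answer_correctness,
    answer_relevancy,
    answer_similarity,
    context_relevancy,
)

# Assemble the evaluation dataset from the lists built in the previous steps
eval_dataset = Dataset.from_dict({
    "question": questions,
    "answer": answers,
    "contexts": contexts,           # list of lists of retrieved passages
    "ground_truth": ground_truths,  # answers generated during question creation
})

# Compute the four metrics discussed above
results = evaluate(
    eval_dataset,
    metrics=[answer_relevancy, answer_similarity, answer_correctness, context_relevancy],
)
print(results)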

The following is a summarized report for the standard RAG approach based on the RAGAS evaluation:

answer_relevancy: 0.9006225160334027

answer_similarity: 0.7400904157096762

answer_correctness: 0.32703043056663855

context_relevancy: 0.024797687553157175

Two-stage RAG: Retrieve and rerank

Now that you have the results from the standard RAG approach, let’s explore two-stage retrieval by extending it to integrate a reranking model. In the context of RAG, reranking models are used after an initial set of contexts is retrieved by the retriever. The reranking model takes the list of results and reranks each one based on the similarity between the context and the user query. In our example, we use a powerful reranking model called bge-reranker-large. The model is available on the Hugging Face Hub and is free for commercial use. In the following code, we use the knowledge base’s retrieve API so we can get a handle on the retrieved contexts, and rerank them using the reranking model deployed as an Amazon SageMaker endpoint. We provide sample code for deploying the reranking model on SageMaker in the GitHub repository. Here’s a code snippet that demonstrates the two-stage retrieval process:

from sagemaker.deserializers import JSONDeserializer
from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer

def generate_two_stage_context_answers(bedrock_runtime, 
                                       agent_runtime, 
                                       model_id, 
                                       knowledge_base_id, 
                                       retrieval_topk, 
                                       reranking_model, 
                                       questions, 
                                       rerank_top_k=3):
    contexts = []
    answers = []
    predictor = Predictor(endpoint_name=reranking_model, serializer=JSONSerializer(), deserializer=JSONDeserializer())
    for question in questions:
        retrieval_results = two_stage_retrieval(agent_runtime, knowledge_base_id, question, retrieval_topk, predictor, rerank_top_k)
        local_contexts = []
        documents = []
        for result in retrieval_results:
            local_contexts.append(result)

        contexts.append(local_contexts)
        combined_docs = "n".join(local_contexts)
        prompt = llm_prompt_template.replace("{{documents}}", combined_docs)
        prompt = prompt.replace("{{query}}", question)
        temperature = 0.9
        top_k = 250
        messages = [{"role": "user", "content": [{"text": prompt}]}]
        inference_config = {"temperature": temperature, "maxTokens": 512, "topP": 1.0}
        additional_model_fields = {"top_k": top_k}
        
        response = bedrock_runtime.converse(
            modelId=model_id,
            messages=messages,
            inferenceConfig=inference_config,
            additionalModelRequestFields=additional_model_fields
        )
        answers.append(response['output']['message']['content'][0]['text'])
    return contexts, answers
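
The two_stage_retrieval helper called above is defined in the accompanying GitHub repository. A rough sketch of what such a helper does follows; the request and response shape of the reranking endpoint ({"query": ..., "texts": [...]} returning a list of scores) is an assumption for illustration and depends on how the model was deployed.

def two_stage_retrieval(agent_runtime, knowledge_base_id, question,
                        retrieval_topk, predictor, rerank_top_k):
    # Stage 1: retrieve a generous candidate set from the knowledge base
    response = agent_runtime.retrieve(
        knowledgeBaseId=knowledge_base_id,
        retrievalQuery={"text": question},
        retrievalConfiguration={
            "vectorSearchConfiguration": {"numberOfResults": retrieval_topk}
        },
    )
    candidates = [r["content"]["text"] for r in response["retrievalResults"]]

    # Stage 2: score each candidate against the question with the reranking endpoint
    # (payload and response shape are assumptions; adjust to your deployment)
    scores = predictor.predict({"query": question, "texts": candidates})
    ranked = sorted(zip(scores, candidates), key=lambda pair: pair[0], reverse=True)
    return [text for _, text in ranked[:rerank_top_k]]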

Evaluate the two-stage RAG response using the RAGAS framework

We evaluate the answers generated by the two-stage retrieval process. The following is a summarized report based on RAGAS evaluation:

answer_relevancy: 0.841581671275458

answer_similarity: 0.7961827348349313

answer_correctness: 0.43361356731293665

context_relevancy: 0.06049484724216884

Compare the results

Let’s compare the results from our tests. As shown in the following figure, the two-stage approach with reranking improves context relevancy, answer correctness, and answer similarity, which are important for improving the accuracy of the RAG process.

RAG vs. two-stage retrieval evaluation metrics

Similarly, we measured the end-to-end latency for both approaches. The results are shown in the following metrics and the corresponding chart:

Standard RAG latency: 76.59s

Two Stage Retrieval latency: 312.12s

Latency comparison for standard RAG and two-stage retrieval

In summary, using a reranking model (bge-reranker-large) hosted on an ml.m5.xlarge instance yields approximately four times the latency compared to the standard RAG approach. We recommend testing with different reranking model variants and instance types to obtain the optimal performance for your use case.

Conclusion

In this post, we demonstrated how to implement a two-stage retrieval process by integrating a reranking model. We explored how integrating a reranking model with Knowledge Bases for Amazon Bedrock can provide better performance. Finally, we used RAGAS, an open source framework, to provide context relevancy, answer relevancy, answer similarity, and answer correctness metrics for both approaches.

Try out this retrieval process today, and share your feedback in the comments.


About the Author

Wei Teh is a Machine Learning Solutions Architect at AWS. He is passionate about helping customers achieve their business objectives using cutting-edge machine learning solutions. Outside of work, he enjoys outdoor activities like camping, fishing, and hiking with his family.

Pallavi Nargund is a Principal Solutions Architect at AWS. In her role as a cloud technology enabler, she works with customers to understand their goals and challenges, and give prescriptive guidance to achieve their objective with AWS offerings. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Outside of work she enjoys volunteering, gardening, cycling and hiking.

Qingwei Li is a Machine Learning Specialist at Amazon Web Services. He received his Ph.D. in Operations Research after he broke his advisor’s research grant account and failed to deliver the Nobel Prize he promised. Currently he helps customers in the financial service and insurance industry build machine learning solutions on AWS. In his spare time, he likes reading and teaching.

Mani Khanuja is a Tech Lead – Generative AI Specialists, author of the book Applied Machine Learning and High Performance Computing on AWS, and a member of the Board of Directors for the Women in Manufacturing Education Foundation. She leads machine learning projects in various domains such as computer vision, natural language processing, and generative AI. She speaks at internal and external conferences such as AWS re:Invent, Women in Manufacturing West, YouTube webinars, and GHC 23. In her free time, she likes to go for long runs along the beach.


Automate the machine learning model approval process with Amazon SageMaker Model Registry and Amazon SageMaker Pipelines

Automate the machine learning model approval process with Amazon SageMaker Model Registry and Amazon SageMaker Pipelines

Innovations in artificial intelligence (AI) and machine learning (ML) are causing organizations to take a fresh look at the possibilities these technologies can offer. As you aim to bring your proofs of concept to production at an enterprise scale, you may experience challenges aligning with the strict security compliance requirements of your organization. In the face of these challenges, MLOps offers an important path to shorten your time to production while increasing confidence in the quality of deployed workloads by automating governance processes.

ML models in production are not static artifacts. They reflect the environment where they are deployed and, therefore, require comprehensive monitoring mechanisms for model quality, bias, and feature importance. Organizations often want to introduce additional compliance checks that validate that the model aligns with their organizational standards before it is deployed. These frequent manual checks can create long lead times to deliver value to customers. Automating these checks allows them to be repeated regularly and consistently rather than organizations having to rely on infrequent manual point-in-time checks.

This post illustrates how to use common architecture principles to transition from a manual monitoring process to one that is automated. You can use these principles and existing AWS services such as Amazon SageMaker Model Registry and Amazon SageMaker Pipelines to deliver innovative solutions to your customers while maintaining compliance for your ML workloads.

Challenge

As AI becomes ubiquitous, it’s increasingly used to process information and interact with customers in sensitive contexts. Suppose a tax agency is interacting with its users through a chatbot. It’s important that this new system aligns with organizational guidelines and that developers have a high degree of confidence that it responds accurately and without bias. At maturity, an organization may have tens or even hundreds of models in production. How can you make sure every model is properly vetted before it’s deployed, and vetted again on each subsequent deployment?

Traditionally, organizations have created manual review processes to keep updated code from becoming available to the public through mechanisms such as an Enterprise Review Committee (ERC), Enterprise Review Board (ERB), or a Change Advisory Board (CAB).

Just as mechanisms have evolved with the rise of continuous integration and continuous delivery (CI/CD), MLOps can reduce the need for manual processes while increasing the frequency and thoroughness of quality checks. Through automation, you can scale in-demand skillsets, such as model and data analysis, introducing and enforcing in-depth analysis of your models at scale across diverse product teams.

In this post, we use SageMaker Pipelines to define the required compliance checks as code. This allows you to introduce analysis of arbitrary complexity without being limited by the busy schedules of highly technical individuals. Because the automation takes care of repetitive analytics tasks, technical resources can focus on relentlessly improving the quality and thoroughness of the MLOps pipeline, strengthening the compliance posture and making sure checks perform as expected.

Deployment of an ML model to production generally requires at least two artifacts to be approved: the model and the endpoint. In our example, the organization is willing to approve a model for deployment if it passes their checks for model quality, bias, and feature importance prior to deployment. Secondly, the endpoint can be approved for production if it performs as expected when deployed into a production-like environment. In a subsequent post, we walk you through how to deploy a model and implement sample compliance checks. In this post, we discuss how you can extend this process to large language models (LLMs), which produce a varied set of outputs and introduce complexities regarding automated quality assurance checks.

Aligning with AWS multi-account best practices

The solution outlined in this post spans several accounts in a given AWS organization. For a deeper look at the various components required for an AWS organization multi-account enterprise ML environment, see MLOps foundation roadmap for enterprises with Amazon SageMaker. In this post, we refer to the advanced analytics governance account as the AI/ML governance account. We focus on the development of the enforcement mechanism for centralized automated model approval within this account.

This account houses centralized components such as a model registry on SageMaker Model Registry, ML project templates on SageMaker Projects, model cards on Amazon SageMaker Model Cards, and container images on Amazon Elastic Container Registry (Amazon ECR).

We use an isolated environment (in this case, a separate AWS environment) to deploy and promote across various environments. You can modify the strategies discussed in this post along the spectrum of centralized vs. decentralized depending on the posture of your organization. For this example, we provide a centralized model. You can also extend this model to align with strict compliance requirements. For example, the AI/ML governance team trusts that the development teams are sending the correct bias and explainability reports for a given model; additional checks could be included to “trust but verify” and further bolster the posture of this organization. Additional complexities such as this are not addressed in this post. To dive further into the topic of secure MLOps implementations, refer to Amazon SageMaker MLOps: from idea to production in six steps.

Solution overview

The following diagram illustrates the solution architecture using SageMaker Pipelines to automate model approval.


The workflow comprises a comprehensive process for model building, training, evaluation, and approval that spans different AWS accounts within an organization and integrates various AWS services. The detailed steps are as follows:

  1. Data scientists from the product team use Amazon SageMaker Studio to create Jupyter notebooks that facilitate data preprocessing and initial model building. The code is committed to AWS CodeCommit, a managed source control service. Optionally, you can commit to third-party version control systems such as GitHub, GitLab, or Enterprise Git.
  2. The commit to CodeCommit invokes the SageMaker pipeline, which runs several steps, including model building and training, and running processing jobs using Amazon SageMaker Clarify to generate bias and explainability reports.
    • SageMaker Clarify processes and stores its outputs, including model artifacts and reports in JSON format, in an Amazon Simple Storage Service (Amazon S3) bucket.
    • A model is registered in the SageMaker model registry with a model version.
  3. The Amazon S3 PUT action invokes an AWS Lambda function.
  4. This Lambda function copies all the artifacts from the S3 bucket in the development account to another S3 bucket in the AI/ML governance account, keeping access restricted and preserving data integrity (see the sketch after this list). This post assumes your accounts and S3 buckets are in the same AWS Region. For cross-Region copying, see Copy data from an S3 bucket to another account and Region by using the AWS CLI.
  5. Registering the model invokes a default Amazon CloudWatch event associated with SageMaker model registry actions.
  6. The CloudWatch event is consumed by Amazon EventBridge, which invokes another Lambda function.
  7. This Lambda function is tasked with starting the SageMaker approval pipeline.
  8. The SageMaker approval pipeline evaluates the artifacts against predefined benchmarks to determine if they meet the approval criteria.
  9. Based on the evaluation, the pipeline updates the model status to approved or rejected accordingly.
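
Steps 3 and 4 can be implemented with a small Lambda function subscribed to the S3 PUT event. The following is a minimal sketch, assuming a hypothetical destination bucket supplied through a DESTINATION_BUCKET environment variable and that the cross-account bucket policy already allows the copy; your implementation may differ.

import os
import boto3

s3 = boto3.client("s3")

# Hypothetical destination bucket in the AI/ML governance account
DESTINATION_BUCKET = os.environ.get("DESTINATION_BUCKET", "governance-artifacts-bucket")

def lambda_handler(event, context):
    # The S3 PUT event carries the source bucket and object key for each record
    for record in event.get("Records", []):
        source_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Copy the artifact to the governance account bucket, preserving the key
        s3.copy_object(
            Bucket=DESTINATION_BUCKET,
            Key=key,
            CopySource={"Bucket": source_bucket, "Key": key},
        )
    return {"status": "copied"}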

This workflow provides a robust, automated process for model approval using AWS’s secure, scalable infrastructure and services. Each step is designed to make sure that only models meeting the set criteria are approved, maintaining high standards for model performance and fairness.

Prerequisites

To implement this solution, you need to first create and register an ML model in the SageMaker model registry with the necessary SageMaker Clarify artifacts. You can create and run the pipeline by following the example provided in the following GitHub repository.

The following sections assume that a model package version has been registered with status Pending Manual Approval. This status allows you to build an approval workflow. You can either have a manual approver or set up an automated approval workflow based on metrics checks in the aforementioned reports.
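
If you want a quick way to exercise the workflow, the following is a minimal sketch of registering a model package version with a PendingManualApproval status using boto3. The group name, image URI, and model data location are placeholders, and in practice the model build pipeline would also attach ModelMetrics pointing to the SageMaker Clarify reports that the approval pipeline reads.

import boto3

sm_client = boto3.client("sagemaker")

# Placeholder values for illustration only
response = sm_client.create_model_package(
    ModelPackageGroupName="my-model-package-group",
    ModelPackageDescription="Candidate model awaiting automated compliance checks",
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [
            {
                "Image": "<inference image URI>",
                "ModelDataUrl": "s3://<bucket>/<prefix>/model.tar.gz",
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
print(response["ModelPackageArn"])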

Build your pipeline

SageMaker Pipelines allows you to define a series of interconnected steps defined as code using the Pipelines SDK. You can extend the pipeline to help meet your organizational needs with both automated and manual approval steps. In this example, we build the pipeline to include two major steps. The first step evaluates artifacts uploaded to the AI/ML governance account by the model build pipeline against threshold values set by model registry administrators for model quality, bias, and feature importance. The second step receives the evaluation and updates the model’s status and metadata based on the values received. The pipeline is represented in SageMaker Pipelines by the following DAG.

A DAG with two nodes: RegisteredModelValidationStep on top, pointing to UpdateModelStatusStep below.

Next, we dive into the code required for the pipeline and its steps. First, we define a pipeline session to help manage AWS service integration as we define our pipeline. This can be done as follows:

from sagemaker.workflow.pipeline_context import PipelineSession

pipeline_session = PipelineSession()

Each step runs as a SageMaker processing job, for which we specify a small instance type due to the minimal compute requirements of our pipeline. Because each step runs a custom Python script passed through the code parameter, we use a ScriptProcessor, which can be defined as follows:

from sagemaker.processing import ScriptProcessor

step_processor = ScriptProcessor(
    command=["python3"],
    image_uri=image_uri,
    role=role,
    instance_type="ml.t3.medium",
    base_job_name=base_job_name,
    instance_count=1,
    sagemaker_session=pipeline_session,
)

We then define the pipeline steps, passing step_processor.run(…) as the step_args parameter so that our custom script runs inside the defined environment.

Validate model package artifacts

The first step takes two arguments: default_bucket and model_package_group_name. It outputs the results of the checks in JSON format stored in Amazon S3. The step is defined as follows:

from sagemaker.processing import ProcessingOutput
from sagemaker.workflow.steps import ProcessingStep

process_step = ProcessingStep(
    name="RegisteredModelValidationStep",
    step_args=step_processor.run(
        code="automated-model-approval/model-approval-checks.py",
        inputs=[],
        outputs=[
            ProcessingOutput(
                output_name="checks",
                destination=f"s3://{default_bucket}/governance-pipeline/processor/",
                source="/opt/ml/processing/output",
            )
        ],
        arguments=[
            "--default_bucket", default_bucket_s3,
            "--model_package_group_name", model_package_group_name,
        ],
    ),
)

This step runs the custom script passed to the code parameter. We now explore this script in more detail.

Values passed to arguments can be parsed using standard methods like argparse and will be used throughout the script. We use these values to retrieve the model package. We then parse the model package’s metadata to find the location of the model quality, bias, and explainability reports. See the following code:

import boto3

client = boto3.client("sagemaker")

# Retrieve the latest model package version and its metrics metadata
model_package_arn = client.list_model_packages(
    ModelPackageGroupName=model_package_group_name
)["ModelPackageSummaryList"][0]["ModelPackageArn"]
model_package_metrics = client.describe_model_package(
    ModelPackageName=model_package_arn
)["ModelMetrics"]

# S3 keys of the model quality, bias, and explainability reports
model_quality_s3_key = model_package_metrics["ModelQuality"]["Statistics"]["S3Uri"].split(f"{default_bucket}/")[1]
model_quality_bias = model_package_metrics["Bias"]
model_quality_pretrain_bias_key = model_quality_bias["PreTrainingReport"]["S3Uri"].split(f"{default_bucket}/")[1]
model_quality_post_train_bias_key = model_quality_bias["PostTrainingReport"]["S3Uri"].split(f"{default_bucket}/")[1]
model_explainability_s3_key = model_package_metrics["Explainability"]["Report"]["S3Uri"].split(f"{default_bucket}/")[1]

The reports retrieved are simple JSON files that we can then parse. In the following example, we retrieve the treatment equity metric and compare it to our threshold in order to return a True or False result. Treatment equity is defined as the difference in the ratio of false negatives to false positives between the advantaged and disadvantaged groups. We arbitrarily set the threshold to 0.8.

import json

s3_client = boto3.client("s3")

s3_obj = s3_client.get_object(Bucket=default_bucket, Key=model_quality_post_train_bias_key)
s3_obj_data = s3_obj["Body"].read().decode("utf-8")
model_quality_post_train_bias_json = json.loads(s3_obj_data)

# Treatment equity is the last metric listed for this facet in the report
treatment_equity = model_quality_post_train_bias_json["post_training_bias_metrics"]["facets"]["column_8"][0]["metrics"][-1]["value"]
treatment_equity_check_threshold = 0.8
treatment_equity_check = treatment_equity < treatment_equity_check_threshold

After running through the measures of interest, we write the True/False results of the checks to a JSON file, which is copied to Amazon S3 according to the outputs configuration of the ProcessingStep.
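
Continuing the same script, the end of it can simply write the collected booleans to the processing output directory; a minimal sketch (the check names shown are illustrative):

import os

# Illustrative check results gathered earlier in the script
checks = {
    "treatment_equity_check": treatment_equity_check,
    # other model quality, bias, and feature importance checks go here
}

output_dir = "/opt/ml/processing/output"
os.makedirs(output_dir, exist_ok=True)
with open(os.path.join(output_dir, "checks.json"), "w") as f:
    json.dump(checks, f)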

Update the model package status in the model registry

When the initial step is complete, we use the JSON file created in Amazon S3 as input to update the model package’s status and metadata. See the following code:

from sagemaker.processing import ProcessingInput

update_model_status_step = ProcessingStep(
    name="UpdateModelStatusStep",
    step_args=step_processor.run(
        code="automated-model-approval/validate-model.py",
        inputs=[
            ProcessingInput(
                source=process_step.properties.ProcessingOutputConfig.Outputs[
                    "checks"
                ].S3Output.S3Uri,
                destination="/opt/ml/processing/input",
            ),
        ],
        outputs=[],
        arguments=[
            "--model_package_group_name", model_package_group_name,
        ],
    ),
)

This step runs the custom script passed to the code parameter. We now explore this script in more detail. First, parse the values in checks.json to evaluate if the model passed all checks or review the reasons for failure:

import json

is_approved = True
reasons = []
with open("/opt/ml/processing/input/checks.json") as checks_file:
    checks = json.load(checks_file)
    print(f"checks: {checks}")
    for key, value in checks.items():
        if not value:
            is_approved = False
            reasons.append(key)

After we know if the model should be approved or rejected, we update the model status and metadata as follows:

# client and model_package_arn are obtained as in the first script, using the
# model_package_group_name argument passed to this step
if is_approved:
    approval_description = "Model package meets organizational guidelines"
else:
    approval_description = "Model values for the following checks do not meet the threshold: "
    for reason in reasons:
        approval_description += f"{reason} "

model_package_update_input_dict = {
    "ModelPackageArn": model_package_arn,
    "ApprovalDescription": approval_description,
    "ModelApprovalStatus": "Approved" if is_approved else "Rejected",
}

model_package_update_response = client.update_model_package(**model_package_update_input_dict)

This step produces a model with a status of Approved or Rejected based on the set of checks specified in the first step.
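
After a pipeline execution completes, you can confirm the outcome by describing the latest model package version; a small sketch (the group name is a placeholder):

import boto3

sm_client = boto3.client("sagemaker")

# Latest version in the model package group
latest = sm_client.list_model_packages(
    ModelPackageGroupName="my-model-package-group",
    SortBy="CreationTime",
    SortOrder="Descending",
)["ModelPackageSummaryList"][0]

details = sm_client.describe_model_package(ModelPackageName=latest["ModelPackageArn"])
print(details["ModelApprovalStatus"])          # Approved or Rejected
print(details.get("ApprovalDescription", ""))  # Reasons recorded by the pipeline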

Orchestrate the steps as a SageMaker pipeline

We orchestrate the previous steps as a SageMaker pipeline with two parameter inputs passed as arguments to the various steps:

from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.parameters import ParameterString

model_package_group_name = ParameterString(
    name="ModelPackageGroupName", default_value="ModelPackageGroupName is a required variable."
)

default_bucket_s3 = ParameterString(
    name="Bucket", default_value="Bucket is a required variable."
)

# This name must match the pipeline name the Lambda function starts later in this post
pipeline_name = "model-governance-pipeline"

pipeline = Pipeline(
    name=pipeline_name,
    parameters=[model_package_group_name, default_bucket_s3],
    steps=[process_step, update_model_status_step],
)

It’s straightforward to extend this pipeline by adding elements into the list passed to the steps parameter. In the next section, we explore how to run this pipeline as new model packages are registered to our model registry.
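
Before the event-driven flow can invoke it, the pipeline definition has to be registered in SageMaker. A minimal sketch, assuming an execution role ARN is available in the role variable and using placeholder parameter values:

# Create or update the pipeline definition in SageMaker
pipeline.upsert(role_arn=role)

# Optionally start a one-off execution to validate the definition
execution = pipeline.start(
    parameters={
        "ModelPackageGroupName": "my-model-package-group",  # placeholder
        "Bucket": "my-governance-bucket",                   # placeholder
    }
)
execution.wait()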

Run the event-driven pipeline

In this section, we outline how to invoke the pipeline using an EventBridge rule and Lambda function.

Create a Lambda function and select the Python 3.9 runtime. The following function retrieves the model package ARN, the model package group name, and the S3 bucket where the artifacts are stored based on the event. It then starts running the pipeline using these values:

import json
import boto3

sagemaker_client = boto3.client('sagemaker')

def lambda_handler(event, context):
    model_arn = event.get('detail', {}).get('ModelPackageArn', 'Unknown')
    model_package_group_name = event.get('detail', {}).get('ModelPackageGroupName', 'Unknown')
    model_package_name = event.get('detail', {}).get('ModelPackageName', 'Unknown')
    model_data_url = event.get('InferenceSpecification', {}).get('ModelDataUrl', 'Unknown')

    # Specify the name of your SageMaker pipeline
    pipeline_name = 'model-governance-pipeline'

    # Define multiple parameters
    pipeline_parameters = [
        {'Name': 'ModelPackageGroupName', 'Value': model_package_group_name},
        {'Name': 'Bucket', 'Value': model_data_url},
    ]

    # Start the pipeline execution
    response = sagemaker_client.start_pipeline_execution(
        PipelineName=pipeline_name,
        PipelineExecutionDisplayName=pipeline_name,
        PipelineParameters=pipeline_parameters
    )

    # Return the response
    return response

After defining the Lambda function, we create the EventBridge rule to automatically invoke the function when a new model package is registered in the model registry with a status of PendingManualApproval. You can use AWS CloudFormation and the following template to create the rule:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "CloudFormation template for EventBridge rule 'invoke-model-approval-checks'",
  "Resources": {
    "EventRule0": {
      "Type": "AWS::Events::Rule",
      "Properties": {
        "EventBusName": "default",
        "EventPattern": {
          "source": ["aws.sagemaker"],
          "detail-type": ["SageMaker Model Package State Change"],
          "detail": {
            "ModelApprovalStatus": ["PendingManualApproval"]
          }
        },
        "Name": "invoke-model-approval-checks",
        "State": "ENABLED",
        "Targets": [{
          "Id": "Id403a084c-2837-4408-940f-b808389653d1",
          "Arn": "<Your Lambda function ARN>"
        }]
      }
    }
  }
}
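
EventBridge also needs permission to invoke the target function. One way to grant it is sketched here with boto3; the function name, Region, and account ID are placeholders, and the rule name matches the template above.

import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="invoke-model-approval-pipeline",  # placeholder function name
    StatementId="AllowEventBridgeInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:<region>:<account-id>:rule/invoke-model-approval-checks",
)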

We now have a SageMaker pipeline consisting of two steps that is invoked when a new model is registered, evaluating model quality, bias, and feature importance metrics and updating the model status accordingly.

Applying this approach to generative AI models

In this section, we explore how the complexities introduced by LLMs change the automated monitoring workflow.

Traditional ML models typically produce concise outputs with obvious ground truths in their training dataset. In contrast, LLMs can generate long, nuanced sequences that may have little to no ground truth due to the autoregressive way this class of model is trained. This strongly influences various components of the governance pipeline we’ve described.

For instance, in traditional ML models, bias is detected by looking at the distributions of labels over different population subsets (for example, male vs. female). The labels (often a single number or a few numbers) are a clear and simple signal used to measure bias. In contrast, generative models produce lengthy and complex answers, which don’t provide an obvious signal to be used for monitoring. HELM (Holistic Evaluation of Language Models) simplifies monitoring by untangling the evaluation process into metrics of concern: accuracy, calibration and uncertainty, robustness, fairness, bias and stereotypes, toxicity, and efficiency. We then apply downstream processes to measure these metrics independently. This is generally done using standardized datasets composed of examples and a variety of accepted responses.

We concretely evaluate four metrics of interest to any governance pipeline for LLMs, as described in HELM: memorization and copyright, disinformation, bias, and toxicity. This is done by collecting inference results from the model pushed to the model registry. The benchmarks include:

  • Memorization and copyright, using books from BooksCorpus, popular books from a bestseller list, and source code from the Linux kernel. This can be quickly extended to include a number of copyrighted works.
  • Disinformation with headlines from the MisinfoReactionFrames dataset, which has false headlines across a number of topics.
  • Bias with Bias Benchmark for Question Answering (BBQ). This QA dataset works to highlight biases affecting various social groups.
  • Toxicity with Bias in Open-ended Language Generation Dataset (BOLD), which benchmarks across profession, gender, race, religion, and political ideology.

Each of these datasets is publicly available. They each allow complex aspects of a generative model’s behavior to be isolated and distilled down to a single number. This flow is described in the following architecture.

A flow in which benchmark datasets are sent to the large language model, the requests and responses are saved, and then sent to a processing job together with ground truth benchmark datasets; the resulting metrics are written to a metrics and results bucket.
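
As a rough illustration of this flow, the following sketch sends benchmark prompts to a deployed model endpoint and stores the request/response pairs in Amazon S3 for a downstream processing job to score. The endpoint name, bucket, prompt contents, and payload format are all placeholders and will differ in your environment.

import json
import boto3

runtime = boto3.client("sagemaker-runtime")
s3 = boto3.client("s3")

ENDPOINT_NAME = "llm-candidate-endpoint"    # placeholder endpoint name
RESULTS_BUCKET = "llm-eval-results-bucket"  # placeholder bucket name

# Illustrative benchmark prompts (in practice, loaded from datasets such as BBQ or BOLD)
benchmark_prompts = [
    {"id": "bbq-0001", "prompt": "..."},
    {"id": "bold-0001", "prompt": "..."},
]

records = []
for item in benchmark_prompts:
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": item["prompt"]}),
    )
    completion = response["Body"].read().decode("utf-8")
    records.append({"id": item["id"], "prompt": item["prompt"], "response": completion})

# Persist requests and responses for the downstream metrics processing job
s3.put_object(
    Bucket=RESULTS_BUCKET,
    Key="llm-eval/requests-and-responses.json",
    Body=json.dumps(records),
)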

For a detailed view of this topic along with important mechanisms to scale in production, refer to Operationalize LLM Evaluation at Scale using Amazon SageMaker Clarify and MLOps services.

Conclusion

In this post, we discussed a sample solution to begin automating your compliance checks for models going into production. As AI/ML becomes increasingly common, organizations require new tools to codify the expertise of their highly skilled employees in the AI/ML space. By embedding your expertise as code and running these automated checks against models using event-driven architectures, you can increase both the speed and quality of models, because you can run the checks as needed rather than relying on the availability of individuals for manual compliance or quality assurance reviews. By using well-known CI/CD techniques from the application development lifecycle and applying them to the ML modeling lifecycle, organizations can scale in the era of generative AI.

If you have any thoughts or questions, please leave them in the comments section.


About the Authors

Jayson Sizer McIntosh is a Senior Solutions Architect at Amazon Web Services (AWS) in the World Wide Public Sector (WWPS), based in Ottawa, Canada, where he primarily works with public sector customers as an IT generalist with a focus on Dev(Sec)Ops/CI/CD. Bringing his experience implementing cloud solutions in high compliance environments, he is passionate about helping customers successfully deliver modern cloud-based services to their users.

Nicolas Bernier is an AI/ML Solutions Architect, part of the Canadian Public Sector team at AWS. He is currently conducting research in Federated Learning and holds five AWS certifications, including the ML Specialty Certification. Nicolas is passionate about helping customers deepen their knowledge of AWS by working with them to translate their business challenges into technical solutions.

Pooja Ayre is a seasoned IT professional with over 9 years of experience in product development, having worn multiple hats throughout her career. For the past two years, she has been with AWS as a Solutions Architect, specializing in AI/ML. Pooja is passionate about technology and dedicated to finding innovative solutions that help customers overcome their roadblocks and achieve their business goals through the strategic use of technology. Her deep expertise and commitment to excellence make her a trusted advisor in the IT industry.

Read More