PyTorch Native Architecture Optimization: torchao

By Team PyTorch

We’re happy to officially launch torchao, a PyTorch-native library that makes models faster and smaller by leveraging low-bit dtypes, quantization, and sparsity. torchao is an accessible toolkit of techniques written (mostly) in easy-to-read PyTorch code, spanning both inference and training. This blog will help you pick which techniques matter for your workloads.

We benchmarked our techniques on popular GenAI models like Llama 3 and diffusion models and saw minimal drops in accuracy. Unless otherwise noted, the baselines are bf16 runs on an A100 80GB GPU.

Our topline metrics for Llama 3 are:

For inference

  • 97% speedup for Llama 3 8B using autoquant with int4 weight only quantization and hqq
  • 73% peak VRAM reduction for Llama 3.1 8B at 128K context length with a quantized KV cache

For training

  • 50% speedup for Llama 3 70B pretraining using float8 training on H100
  • 30% peak VRAM reduction for Llama 3 8B using 4 bit quantized optimizers.

Our topline metrics for diffusion model inference are:

  • 53% speedup using float8 dynamic quantization inference with float8 row-wise scaling on Flux.1-dev on H100
  • 50% reduction in model VRAM for CogVideoX using int8 dynamic quantization

Below we’ll walk through some of the techniques available in torchao you can apply to your models for inference and training.

Inference

Our inference quantization algorithms work over arbitrary PyTorch models that contain nn.Linear layers. Weight-only and dynamic activation quantization for various dtypes and sparse layouts can be chosen using our top-level quantize_ API:

from torchao.quantization import (
    quantize_,
    int4_weight_only,
)
quantize_(model, int4_weight_only())
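To make the mechanics concrete, here is a minimal pure-Python sketch of symmetric int4 weight quantization: each weight row gets one float scale, and weights are rounded onto the signed 4-bit integer grid. The helper names are hypothetical; torchao’s real kernels operate on packed, tiled tensors.

```python
def quantize_int4_symmetric(weights):
    """Symmetric int4: integers in [-8, 7] plus one float scale per row."""
    scale = max(abs(w) for w in weights) / 7 or 1.0  # avoid a zero scale
    return [max(-8, min(7, round(w / scale))) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int4 values."""
    return [v * scale for v in q]

row = [0.12, -0.5, 0.33, 0.7]           # one weight row
q, scale = quantize_int4_symmetric(row)
approx = dequantize(q, scale)           # what the layer computes with
```

The rounding error per weight is bounded by half a scale step, which is why accuracy drops stay small for most layers.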

Sometimes quantizing a layer can make it slower because of overhead, so if you’d rather we just pick how to quantize each layer in a model for you, you can instead run:

model = torchao.autoquant(torch.compile(model, mode="max-autotune"))

The quantize_ API has a few different options depending on whether your model is compute bound or memory bound:

from torchao.quantization import (
    # Memory bound models
    int4_weight_only,
    int8_weight_only,

    # Compute bound models
    int8_dynamic_activation_int8_semi_sparse_weight,
    int8_dynamic_activation_int8_weight,

    # Device capability 8.9+
    float8_weight_only,
    float8_dynamic_activation_float8_weight,
)
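The difference between the two families: weight-only variants dequantize weights and run the matmul in higher precision (good when memory bandwidth dominates), while dynamic-activation variants also quantize activations on the fly so the matmul itself runs in int8 (good when compute dominates). A toy pure-Python illustration of the dynamic step, not the torchao kernels:

```python
def dynamic_quantize_int8(activations):
    """Pick an int8 scale from the live values, then quantize (per-tensor)."""
    scale = max(abs(a) for a in activations) / 127 or 1.0
    return [max(-128, min(127, round(a / scale))) for a in activations], scale

acts = [0.02, -1.27, 0.5]               # activations observed at runtime
q_acts, act_scale = dynamic_quantize_int8(acts)
```

Because the scale is computed from the live batch, no calibration dataset is needed; that is what “dynamic” means here.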

We also have extensive benchmarks on diffusion models in collaboration with the HuggingFace diffusers team in diffusers-torchao, where we demonstrated a 53.88% speedup on Flux.1-Dev and a 27.33% speedup on CogVideoX-5b.

Our APIs are composable: for example, we’ve composed sparsity and quantization to bring a 5% speedup for ViT-H inference.

We can also do things like quantize weights to int4 and the KV cache to int8 to support Llama 3.1 8B at the full 128K context length in under 18.9GB of VRAM.
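To see why the quantized KV cache matters at long context, here is a back-of-envelope sketch using Llama 3.1 8B’s published shapes (32 layers, 8 KV heads via grouped-query attention, head dimension 128); treat the numbers as an estimate, not a measurement:

```python
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=8, head_dim=128,
                   bytes_per_elem=2):
    """Keys and values each store n_layers * n_kv_heads * head_dim per token."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * seq_len

ctx = 128 * 1024                                   # 128K context
bf16_gib = kv_cache_bytes(ctx) / 2**30             # bf16: 2 bytes/element
int8_gib = kv_cache_bytes(ctx, bytes_per_elem=1) / 2**30
```

Halving bytes per element halves the cache, which is where most of the long-context VRAM savings come from.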

QAT

Post-training quantization, especially at fewer than 4 bits, can suffer from serious accuracy degradation. Using Quantization Aware Training (QAT) we’ve managed to recover up to 96% of the accuracy degradation on hellaswag. We’ve integrated this as an end-to-end recipe in torchtune, with a minimal tutorial.
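The core trick in QAT is fake quantization: during training, weights are rounded onto the low-bit grid and immediately dequantized in the forward pass, so the network learns to tolerate the rounding error (gradients flow through the round via a straight-through estimator). A pure-Python sketch of just the forward-pass transform, with hypothetical helper names:

```python
def fake_quantize(w, bits=4):
    """Quantize-dequantize: every output lies exactly on the low-bit grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in w) / qmax or 1.0
    return [max(-qmax - 1, min(qmax, round(v / scale))) * scale for v in w]

w = [0.31, -0.7, 0.05]
w_fq = fake_quantize(w)   # the forward pass uses these rounded values
```

At convert time the same grid is used for real, so there is no train/inference mismatch.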

Training

Low precision compute and communications

torchao provides easy-to-use end-to-end workflows for reducing the precision of training compute and distributed communications, starting with float8 for `torch.nn.Linear` layers. Here is a one-liner to convert the compute gemms of your training run to float8:

from torchao.float8 import convert_to_float8_training
convert_to_float8_training(model)

For an e2e example of how to speed up LLaMa 3 70B pretraining by up to 1.5x with float8, see our README, and torchtitan’s blog and float8 recipe.

Performance and accuracy of float8 pretraining of LLaMa 3 70B, vs bfloat16


(source: https://dev-discuss.pytorch.org/t/enabling-float8-all-gather-in-fsdp2/2359)
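float8 training works by scaling each tensor so its absolute maximum lands near the format’s largest representable value (448 for e4m3) before casting for the gemm. A simplified scaling sketch in pure Python; the real casting and rescaling happen inside the fused kernels:

```python
E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def float8_scale(amax):
    """Scale that maps a tensor's absolute max onto the e4m3 range."""
    return E4M3_MAX / amax if amax > 0 else 1.0

scale = float8_scale(3.5)   # a tensor peaking at 3.5 is scaled up 128x
# cast(x * scale) feeds the float8 gemm; outputs are rescaled back afterwards
```

Per-tensor (or row-wise) scales like this are what keep float8 numerically stable despite its tiny dynamic range.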

We are expanding our training workflows to more dtypes and layouts:

  1. NF4 QLoRA in torchtune
  2. Prototype int8 training support
  3. Accelerated sparse 2:4 training
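The 2:4 pattern behind accelerated sparse training keeps the two largest-magnitude values in every contiguous group of four, which NVIDIA sparse tensor cores can then skip for up to 2x matmul throughput. A pure-Python sketch of the pruning pattern only, not the accelerated kernel:

```python
def prune_2_to_4(weights):
    """Zero out the two smallest-magnitude entries in each group of four."""
    out = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]))[-2:]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out

pruned = prune_2_to_4([0.9, -0.1, 0.4, 0.05, -0.8, 0.2, 0.03, 0.6])
```

The fixed 2-of-4 structure is what makes this hardware-friendly, unlike unstructured sparsity.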

Low bit Optimizers

Inspired by Bits and Bytes, we’ve also added prototype support for 8-bit and 4-bit optimizers as a drop-in replacement for AdamW:

from torchao.prototype.low_bit_optim import AdamW8bit, AdamW4bit
optim = AdamW8bit(model.parameters())
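Low-bit optimizers save memory by storing Adam’s moment estimates in 8 (or 4) bits with a per-block scale, dequantizing only at update time. A simplified blockwise sketch with hypothetical helper names, not the torchao implementation:

```python
def quantize_state_8bit(state, block=4):
    """Store optimizer state as int8 blocks plus one float scale per block."""
    blocks = []
    for i in range(0, len(state), block):
        chunk = state[i:i + block]
        scale = max(abs(v) for v in chunk) / 127 or 1.0
        blocks.append(([max(-128, min(127, round(v / scale))) for v in chunk],
                       scale))
    return blocks

def dequantize_state(blocks):
    """Recover approximate float state for the weight update."""
    return [q * s for qs, s in blocks for q in qs]

state = [0.002, -0.01, 0.004, 0.0, 1.5, -0.2, 0.7, 0.1]
restored = dequantize_state(quantize_state_8bit(state))
```

Blockwise scales keep the quantization error local, so a few large values don’t wash out the small moments elsewhere in the tensor.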

Integrations

We’ve been actively working on making sure torchao works well in some of the most important projects in open source.

  1. HuggingFace transformers as an inference backend
  2. In diffusers-torchao as a reference implementation for accelerating diffusion models
  3. In HQQ for fast 4 bit inference
  4. In torchtune for PyTorch native QLoRA and QAT recipes
  5. In torchchat for post training quantization
  6. In SGLang for int4 and int8 post training quantization

Conclusion

If you’re interested in making your models faster and smaller for training or inference, we hope you’ll find torchao useful and easy to integrate.

pip install torchao

There are a lot of things we’re excited about next, ranging from going below 4 bits to performant kernels for high-throughput inference, expanding to more layers, scaling types and granularities, MX hardware support, and more hardware backends. If any of the above sounds exciting, you can follow our progress at: https://github.com/pytorch/ao

If you’re interested in working on torchao, we’ve created a contributors guide, and if you have any questions we hang out on the #torchao channel on discord.gg/cudamode

Acknowledgements

We are fortunate to stand on the shoulders of giants and collaborate with some of the best people in open source. Thank you!

  1. Bits and Bytes for pioneering work in low bit optimizers and QLoRA
  2. Answer.ai for their engineering work to get FSDP and QLoRA composing
  3. Mobius Labs for the lovely back and forths on quantization algorithms and low bit kernels
  4. HuggingFace transformers for their help in battle testing and integrating our work
  5. HuggingFace diffusers for our collaboration on extensive benchmarks and best practices
  6. torch.compile so we could write our algorithms in pure PyTorch
  7. CUDA MODE for most of our early contributors

Read More

How generative AI is transforming legal tech with AWS

Legal professionals often spend a significant portion of their work searching through and analyzing large documents to draw insights, prepare arguments, create drafts, and compare documents. The rise of generative artificial intelligence (AI) has brought a wave of foundation models (FMs). These FMs, with simple instructions (prompts), can perform various tasks such as drafting emails, extracting key terms from contracts or briefs, summarizing documents, searching through multiple documents, and more. As a result, these models are fit for legal tech. Goldman Sachs estimated that generative AI could automate 44% of legal tasks in the US. A special report published by Thomson Reuters found that generative AI awareness is significantly higher among legal professionals, with 91% of respondents saying they have heard of or read about these tools.

However, such models alone are not sufficient due to legal and ethical concerns around data privacy. Security and confidentiality are of paramount importance in the legal field. Legal tech professionals, like any other business handling sensitive customer information, require robust security and confidentiality practices. Advancements in AI and natural language processing (NLP) show promise to help lawyers with their work, but the legal industry also has valid questions around the accuracy and costs of these new techniques, as well as how customer data will be kept private and secure. AWS AI and machine learning (ML) services help address these concerns within the industry.

In this post, we share how legal tech professionals can build solutions for different use cases with generative AI on AWS.

AI/ML on AWS

AI and ML have been a focus for Amazon for over 25 years, and many of the capabilities customers use with Amazon are driven by ML. Ecommerce recommendation engines, Just Walk Out technology, Alexa devices, and route optimizations are some examples. These capabilities are built using the AWS Cloud. At AWS, we have played a key role in making ML accessible to anyone who wants to use it, including more than 100,000 customers of all sizes and industries. Thomson Reuters, Booking.com, and Merck are some of the customers who are using the generative AI capabilities of AWS services to deliver innovative solutions.

AWS makes it straightforward to build and scale generative AI customized for your data, your use cases, and your customers. AWS gives you the flexibility to choose different FMs that work best for your needs. Your organization can use generative AI for various purposes like chatbots, intelligent document processing, media creation, and product development and design. You can now apply that same technology to the legal field.

When you’re building generative AI applications, FMs are part of the architecture and not the entire solution. There are other components involved, such as knowledge bases, data stores, and document repositories. It’s important to understand how your enterprise data is integrating with different components and the controls that can be put in place.

Security and your data on AWS

Robust security and confidentiality are foundations to the legal tech domain. At AWS, security is our top priority. AWS is architected to be the most secure global cloud infrastructure on which to build, migrate, and manage applications and workloads. This is backed by our deep set of over 300 cloud security tools and the trust of our millions of customers, including the most security sensitive organizations like government, healthcare, and financial services.

Security is a shared responsibility model. Core security disciplines, like identity and access management, data protection, privacy and compliance, application security, and threat modeling, are still critically important for generative AI workloads, just as they are for any other workload. For example, if your generative AI application is accessing a database, you’ll need to know what the data classification of the database is, how to protect that data, how to monitor for threats, and how to manage access. But beyond emphasizing long-standing security practices, it’s crucial to understand the unique risks and additional security considerations that generative AI workloads bring. To learn more, refer to Securing generative AI: An introduction to the Generative AI Security Scoping Matrix.

Sovereignty has been a priority for AWS since the very beginning, when we were the only major cloud provider to allow you to control the location and movement of your customer data and address stricter data residency requirements. The AWS Digital Sovereignty Pledge is our commitment to offering AWS customers the most advanced set of sovereignty controls and features available in the cloud. We are committed to expanding our capabilities to allow you to meet your digital sovereignty needs, without compromising on the performance, innovation, security, or scale of the AWS Cloud.

AWS generative AI approach for legal tech

AWS solutions enable legal professionals to refocus their expertise on high-value tasks. On AWS, generative AI solutions are now within reach for legal teams of all sizes. With virtually unlimited cloud computing capacity, the ability to fine-tune models for specific legal tasks, and services tailored for confidential client data, AWS provides the ideal environment for applying generative AI in legal tech.

In the following sections, we share how we’re working with several legal customers on different use cases that are focused on improving the productivity of various tasks in legal firms.

Boost productivity to allow a search based on context and conversational Q&A

Legal professionals store their information in different ways, such as on premises, in the cloud, or a combination of the two. It can take hours or days to consolidate the documents prior to reviewing them if they are scattered across different locations. The industry relies on tools where searching is limited to each domain, and may not be flexible enough for users to search for information.

To address this issue, AWS used AI/ML and search engines to provide a managed service where users can ask a human-like, open-ended generative AI-powered assistant to answer questions based on data and information. Users can prompt the assistant to extract key attributes that serve as metadata, find relevant documents, and answer legal questions and terms inquiries. What used to take hours can now be done in a matter of minutes, and based on what we have learned with our customers, AWS generative AI has improved resource productivity by up to 15% compared to manual processes during its initial phases.

Boost productivity with legal document summarization

Legal tech workers can benefit from the generation of a first draft that can then be reviewed and revised by the process owner. Multiple use cases are being implemented under this category:

  • Contract summarization for tax approval
  • Approval attachment summarization
  • Case summarization

The summarization of documents can either use existing documents and videos from your document management system or allow users to upload a document and ask questions in real time. Instead of writing the summary, generative AI uses FMs to create the content so the lawyer can review the final content. This approach reduces these laborious tasks to 5–10 minutes instead of 20–60 minutes.

Boost attorney productivity by drafting and reviewing legal documents using generative AI

Generative AI can help boost attorney productivity by automating the creation of legal documents. Tasks like drafting contracts, briefs, and memos can be time-consuming for attorneys. With generative AI, attorneys can describe the key aspects of a document in plain language and instantly generate an initial draft. This new approach uses generative AI to use templates and chatbot interactions to add allowed text to an initial validation prior to legal review.

Another use case is to improve reviewing contracts using generative AI. Attorneys spend valuable time negotiating contracts. Generative AI can streamline this process by reviewing and redlining contracts, and identify potential discrepancies and conflicting provisions. Given a set of documents, this functionality allows attorneys to ask open-ended questions based on the documents along with follow-up questions, enabling human-like conversational experiences with enterprise data.

Start your AWS generative AI journey today

We are at the beginning of a new and exciting foray into generative AI, and we have just scratched the surface of some potential applications in the legal field—from text summarization, drafting legal documents, or searching based on context. The AWS generative AI stack offers you the infrastructure to build and train your own FMs, services to build with existing FMs, or applications that use other FMs. You can start with the following services:

  • Amazon Q Business is a new type of generative AI-powered assistant. It can be tailored to your business to have conversations, solve problems, generate content, and take actions using the data and expertise found in your company’s information repositories, code bases, and enterprise systems. Amazon Q Business provides quick, relevant, and actionable information and advice to help streamline tasks, speed up decision-making and problem-solving, and help spark creativity and innovation.
  • Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. With Amazon Bedrock, you can experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that perform tasks using your enterprise systems and data sources.

In upcoming posts, we will dive deeper into different architectural patterns that describe how to use AWS generative AI services to solve for these different use cases.

Conclusion

Generative AI solutions are empowering legal professionals to reduce the difficulty in finding documents and performing summarization, and allow your business to standardize and modernize contract generation and revisions. These solutions are not intended to replace legal experts, but instead to increase their productivity and the time they can spend practicing law.

We are excited about how legal professionals can build with generative AI on AWS. Start exploring our services and find out where generative AI could benefit your organization. Our mission is to make it possible for developers of all skill levels and for organizations of all sizes to innovate using generative AI in a secure and scalable manner. This is just the beginning of what we believe will be the next wave of generative AI, powering new possibilities in legal tech.


About the Authors

Victor Fiss is a Sr. Solutions Architect Leader at AWS, helping customers in their cloud journey from infrastructure to generative AI solutions at scale. In his free time, he enjoys hiking and playing with his family.

Vineet Kachhawaha is a Sr. Solutions Architect at AWS focusing on AI/ML and generative AI. He co-leads the AWS for Legal Tech team within AWS. He is passionate about working with enterprise customers and partners to design, deploy, and scale AI/ML applications to derive business value.

Pallavi Nargund is a Principal Solutions Architect at AWS. She is a generative AI lead for East – Greenfield. She leads the AWS for Legal Tech team. She is passionate about women in technology and is a core member of Women in AI/ML at Amazon. She speaks at internal and external conferences such as AWS re:Invent, AWS Summits, and webinars. Pallavi holds a Bachelor’s of Engineering from the University of Pune, India. She lives in Edison, New Jersey, with her husband, two girls, and a Labrador pup.

Read More

Deploy generative AI agents in your contact center for voice and chat using Amazon Connect, Amazon Lex, and Amazon Bedrock Knowledge Bases

This post is co-written with Vraj Shah and Chaitanya Hari from DoorDash.

DoorDash connects consumers with their favorite local businesses in more than 30 countries across the globe. Recently, they faced a significant challenge in handling the high volume of calls from its contractor delivery workers, known as Dashers. With a user base of over 37 million active consumers and 2 million monthly active Dashers at the end of 2023, the company recognized the need to reduce the burden on its live agents by providing a more efficient self-service experience for Dashers.

To address this challenge, the contact center team at DoorDash wanted to harness the power of generative AI to deploy a solution quickly, and at scale, while maintaining their high standards for issue resolution and customer satisfaction. Dashers, who generally prefer calling into support rather than texting while they’re on the road, require fast and reliable assistance, with minimal response latency. This low latency requirement became a critical factor in DoorDash’s quest for an effective, voice-enabled self-service solution.

Working with the AWS Generative AI Innovation Center, DoorDash built a solution to provide Dashers with a low-latency self-service voice experience to answer frequently asked questions, reducing the need for live agent assistance, in just 2 months.

The solution uses Amazon Lex, a voice-enabled conversational AI service; Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case; and Amazon Bedrock Knowledge Bases, a fully managed service that connects large language models (LLMs) to your data sources. It’s a fully serverless architecture that uses Amazon OpenSearch Serverless, which can run petabyte-scale workloads, without you having to manage the underlying infrastructure.

The solution fields hundreds of thousands of calls per day, responding to Dashers with answers to their questions in 2.5 seconds or less. It also enables operational capabilities including automated testing, conversation analytics, monitoring and observability, and LLM hallucination prevention and detection.

“We wanted to empower Dashers to get help with their most common questions and issues as quickly and efficiently as possible, saving them time, effort, and increasing their trust in DoorDash’s self-service capabilities,”

– Chaitanya Hari, Contact Center Product Lead at DoorDash.

In this post, we show how you can deploy generative AI agents in your contact center using AWS services.

Solution overview

For this post, we use a fictional hotel chain example called Example Corp Hospitality Group. The following is a sample interaction between a customer and the LLM-powered automated agent:

Customer: Hi, we’re looking for a great place for a family vacation

Agent: Example Corp Family Getaways offers family-friendly hotels and resorts located near top attractions like theme parks, national parks, and historic sites. The resorts feature spacious accommodations, activities for all ages, and diverse dining options to create unforgettable family memories.

Customer: Where are they located?

Agent: Example Corp Family Getaways offers family-friendly resorts in a variety of popular vacation destinations, including Orlando, Florida; Anaheim, California; Williamsburg, Virginia; Pigeon Forge, Tennessee; San Antonio, Texas; Scottsdale, Arizona; Branson, Missouri; Wisconsin Dells, Wisconsin; and Hershey, Pennsylvania.

Customer: Pigeon Forge sounds interesting.

Agent: Our Pigeon Forge resort offers families a chance to experience the great outdoors while enjoying the excitement of Dollywood and other nearby attractions. Guests can enjoy our indoor water park, mini-golf course, and nightly s’mores roasts around the fire pit.

Customer: Do they have kids’ menus?

Agent: Example Corp Family Getaways resorts in Pigeon Forge, Tennessee feature family-friendly dining options, including dedicated kids’ menus with favorites like pizza, chicken nuggets, and mac and cheese.

You can deploy the solution in your own AWS account and try the example solution. The following diagram illustrates the solution architecture.

Solution architecture diagram

We will walk you through deploying and testing these major components of the solution:

  1. An AWS CloudFormation stack to set up an Amazon Bedrock knowledge base, where you store the content used by the solution to answer questions.
  2. A CloudFormation stack to create an Amazon Lex bot and an AWS Lambda fulfillment function, which implement the core Retrieval Augmented Generation (RAG) question answering capability.
  3. An optional CloudFormation stack to deploy a data pipeline to enable a conversation analytics dashboard.
  4. An optional CloudFormation stack to enable an asynchronous LLM hallucination detection feature.
  5. Optional Jupyter notebooks in Amazon SageMaker that provide an automated testing capability that compares generated answers to ground truth answers, providing pass/fail grades with explanations.

Everything you need is also provided as open source in our GitHub repo.

Prerequisites

You need to have an AWS account and an AWS Identity and Access Management (IAM) role and user with permissions to create and manage the necessary resources and components for this application. If you don’t have an AWS account, see How do I create and activate a new Amazon Web Services account?

This solution uses Amazon Bedrock LLMs to find answers to questions from your knowledge base. Before proceeding, if you have not previously done so, request access to at least the following Amazon Bedrock models:

  • Amazon Titan Embeddings G1 – Text
  • Cohere Embed English v3 and Cohere Embed Multilingual v3
  • Anthropic’s Claude 3 Haiku and Anthropic’s Claude 3 Sonnet

If you’ll be integrating with Amazon Connect, make sure you have an instance available in your account. If you don’t already have one, you can create one. If you plan to deploy the conversation analytics stack, you need Amazon QuickSight, so make sure you have enabled it in your AWS account. 

At the time of writing, this solution is available in the following AWS Regions: Asia Pacific (Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, London), US East (N. Virginia), and US West (Oregon).

Deploy the Amazon Bedrock knowledge base

The provided CloudFormation stack creates an Amazon Bedrock knowledge base that uses Amazon Simple Storage Service (Amazon S3) as its data source. Complete the following steps to set up your knowledge base:

  1. Sign in to your AWS account, then choose Launch Stack to deploy the CloudFormation template:

Launch Knowledge Base stack

  2. Provide a stack name, for example contact-center-kb.
  3. Provide the name for an existing S3 bucket, for example contact-center-kb-(your-account-number). This is where the content for the demo solution will be stored. Create this S3 bucket if you don’t already have one.
  4. Do not specify an S3 prefix.
  5. Choose an embedding model, such as amazon.titan-embed-text-v2:0.
  6. Choose the Fixed-size chunking strategy.
  7. For the maximum tokens per chunk entry, use 600 for the Amazon Titan embeddings model. (If you are using the Cohere embeddings model, use 512.) This represents about a full page of text.
  8. For the percentage overlap, use 10%.
  9. Leave the four entries for Index Details at their default values (index name, vector field name, metadata field name, and text field name).
  10. Choose Next.
  11. On the Configure stack options page, choose Next.
  12. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack will take about 10 minutes to deploy.
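The fixed-size chunking parameters above amount to a sliding window: each chunk holds up to 600 tokens and overlaps its neighbor by 10%. A pure-Python sketch of the idea for intuition only; Amazon Bedrock performs the actual chunking server-side:

```python
def fixed_size_chunks(tokens, max_tokens=600, overlap_pct=10):
    """Split a token list into chunks of max_tokens with percentage overlap."""
    step = max_tokens - max_tokens * overlap_pct // 100
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_tokens])
        if start + max_tokens >= len(tokens):
            break
    return chunks

chunks = fixed_size_chunks(list(range(1500)))  # a 1,500-token document
```

The overlap means a sentence near a boundary appears in both neighboring chunks, so retrieval doesn’t lose context at chunk edges.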

Upload the sample content and test your knowledge base

The demonstration sample for the solution includes an LLM-based hotel-bot that can answer questions about the fictional hotel chain Example Corp Hospitality Group. You need to load the content for this hotel chain into the S3 bucket that you specified for the knowledge base stack. You can find the S3 bucket used by the CloudFormation stack on the Outputs tab for the stack.

  1. Either using the AWS Command Line Interface (AWS CLI) or the AWS Management Console, upload the following folders from the content section of the GitHub repo:
    • corporate
    • family-getaways
    • luxury-suites
    • party-times
    • seaside-resorts
    • waypoint-inns

You can choose either the PDF versions or the Word document versions (Word versions recommended). When you’re done, the top level of your S3 bucket should contain six folders, each containing a single Word or PDF document.

  2. On the Amazon Bedrock console, choose Knowledge bases in the navigation pane.
  3. Choose your new knowledge base to open it.

A message appears that says “One or more data sources have not been synced.”

  4. Select the data source and choose Sync.

The sync process should only take a minute or two.

After your data source has been synced, you can try some question answering on the Amazon Bedrock console. Make sure you have enabled all the models approved by your organization on the Amazon Bedrock Model access page.

Select an LLM model, such as Anthropic’s Claude 3 Haiku on Amazon Bedrock, and start asking questions! You might want to peruse the sample documents you uploaded for some ideas about questions to ask.

Knowledge base test example

Deploy the hallucination detection stack (optional)

If you want to use the optional asynchronous hallucination detection feature, deploy this stack. Otherwise, move on to the next section. You can use this CloudFormation stack for any RAG-based solution requiring asynchronous hallucination detection.

  1. Choose Launch Stack:

Launch Hallucination Detection stack

  2. Provide a stack name, for example contact-center-hallucination-detection.
  3. Specify an LLM to perform the hallucination detection. At the time of writing, there are seven LLMs that are recommended for hallucination detection. For the demo solution, choose the default (Claude V3 Sonnet).
  4. Optionally, create an AWS Key Management Service (AWS KMS) customer managed key (CMK) to encrypt the Amazon Simple Queue Service (Amazon SQS) queue and the Amazon CloudWatch Logs log group for the Lambda function (recommended for production).

There are two types of Amazon CloudWatch alarms in this stack:

  • ERROR alarms – For code issues with the Lambda function that does the hallucination detection work
  • WARNING alarms – For when the Lambda function actually detects a hallucination

Both alarm types are optional, but recommended.

  5. Choose yes to enable or no to disable the alarms.
  6. For the alarms that you enable, you can specify an optional email address or distribution list to receive email notifications about the alarms.
  7. Choose Next.
  8. On the Configure stack options page, choose Next.
  9. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack will take about a minute or two to deploy.

When the stack is complete, you can review the resources it creates on the Resources tab for the CloudFormation stack. In particular, review the Lambda function code.

If you entered email addresses for the alarm notifications, you should receive email requests asking you to confirm the subscriptions. Confirm them to receive email notifications about alarms that may occur.

Deploy the RAG solution stack

If you’re integrating with Amazon Connect, make sure you have an instance available in your account. If you don’t already have one, you can create one. Then complete the following steps to deploy the Amazon Lex bot and Lambda fulfillment function:

  1. Choose Launch Stack:

  1. Provide a stack name, for example contact-center-rag-solution.
  2. Provide a name for the Amazon Lex bot, for example hotel-bot.
  3. Specify the number of conversation turns to retain for context. This can be optimized for different use cases and datasets. For the hotel-bot demo, try the default of 4.
  4. Optionally, specify an existing CloudWatch Logs log group ARN for the Amazon Lex conversation logs. You’ll need this if you’re planning to deploy the conversation analytics stack. Create a log group if you don’t already have one.
  5. Optionally, enter a value for Lambda provisioned concurrency units for the Amazon Lex bot handler function. If set to a non-zero number, this will prevent Lambda cold starts and is recommended for production and for internal testing. For development, 0 or 1 is recommended.
  6. Optionally, select the option to create a KMS CMK to encrypt the CloudWatch Logs log groups for the Lambda functions (recommended for production).
  7. If you’re integrating with Amazon Connect, provide the Amazon Connect instance ARN, as well as the name for a new contact flow that the stack will create for you.
  8. Provide the knowledge base ID from the knowledge base stack you just created. You can find this on the Outputs tab of the knowledge base stack.
  9. Provide the S3 bucket used by the knowledge base stack (also referenced on the Outputs tab).
  10. If you created the hallucination detection stack, enter the SQS queue name. You can find this on the Outputs tab of the hallucination detection stack.
  11. If you opted for a KMS key for your hallucination detection stack, enter the KMS key ARN.
  12. Choose Next.
  13. On the Configure stack options page, choose Next.
  14. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.
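The "conversation turns to retain for context" setting above bounds how much recent dialog is carried into each LLM prompt. A minimal sketch of such a fixed-size window (an illustration only, not the solution's actual code):

```python
# Fixed-size window of recent conversation turns; older turns fall off
# automatically once the limit (e.g., the default of 4) is reached.
from collections import deque

def make_history(turns=4):
    """Create a history buffer that keeps only the last `turns` entries."""
    return deque(maxlen=turns)

history = make_history(4)
for n in range(6):
    history.append(f"turn-{n}")

# Only the 4 most recent turns remain available as LLM context.
print(list(history))  # → ['turn-2', 'turn-3', 'turn-4', 'turn-5']
```

Retaining more turns gives the LLM more context at the cost of larger prompts, which is why the setting is worth tuning per use case and dataset.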

The stack will take a few minutes to complete.

To try the RAG solution, navigate to the Amazon Lex console and open the hotel-bot bot. The bot has a single language section for the English language. Choose Intents in the navigation pane to check out the intents for this sample bot. They include the following:

  • Intents related to questions about the hotel chain and its various hotel brands – This includes Accommodations, Amenities, CorporateOverview, Locations, Parking, and more. These intents are routed to the RAG solution by Amazon Lex. Technically, intents like these could be omitted, allowing the FallbackIntent to handle requests of this nature. However, including these intents (and their sample utterances) provides Amazon Lex with information about the “language” of your solution domain, allowing it to better optimize its speech-to-text engine and improve speech transcription accuracy. In addition, including these intents is useful for conversation analytics.
  • SwitchBrand – This intent is designed to improve conversation flow by allowing the user to say things like “What about at your other hotels?” in the middle of a conversation.
  • Booking – This demonstrates an example of routing the caller to a live agent queue.
  • SpeakToAgent – This intent is for when a caller specifically requests a live agent.
  • Welcome, Goodbye, and Help – These conversation support intents are for starting and ending the conversation, or asking what the bot can do.
  • FallbackIntent – This is the standard intent for questions or requests that don’t match other intents. In this example solution, such requests are also routed to the RAG solution to allow the LLM to answer based on the content in the knowledge base.
  • SelectKnowledgeBase and SelectLLM – These allow the user to direct the RAG solution to use a different knowledge base instance (if more than one is available) or a different LLM. These intents are designed for testing purposes, and should normally be included only in non-production deployments. You can test the RAG solution with any of the LLMs available on Amazon Bedrock. You can also switch to a different knowledge base or LLM mid-conversation, if desired.
  • ToggleLLMGuardrails and ToggleLLMContext – These allow the user to turn the prompt-based LLM guardrails off or on, and to disable or enable the retrieval of information from the knowledge base. These intents are designed for testing purposes, and should normally be included only in non-production environments. You can turn these settings off and on mid-conversation, if desired.

You can choose Test on the Amazon Lex console to try the solution.
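Besides the console Test window, you can exercise the bot from a script via the Lex V2 runtime API. The following is a minimal sketch; the bot ID and alias ID are placeholders you would copy from the RAG solution stack's Outputs tab, and `ask` is a hypothetical helper, not part of the published solution:

```python
# Illustrative smoke test of the deployed bot via the Lex V2 runtime API.
# The bot ID and alias ID defaults below are placeholders, not real values.

def build_request(bot_id, bot_alias_id, session_id, text):
    """Assemble keyword arguments for the lexv2-runtime recognize_text call."""
    return {
        "botId": bot_id,
        "botAliasId": bot_alias_id,
        "localeId": "en_US",  # the sample bot has a single English locale
        "sessionId": session_id,
        "text": text,
    }

def ask(text, bot_id="PLACEHOLDER_BOT_ID",
        bot_alias_id="PLACEHOLDER_ALIAS_ID", session_id="smoke-test"):
    """Send one utterance to the bot and return its reply messages."""
    import boto3  # imported here so build_request stays usable offline
    client = boto3.client("lexv2-runtime")
    resp = client.recognize_text(
        **build_request(bot_id, bot_alias_id, session_id, text))
    return [m["content"] for m in resp.get("messages", [])]
```

Reusing the same `session_id` across calls lets Amazon Lex carry conversation context between turns, which is how the SwitchBrand-style follow-up questions work.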

Amazon Lex test example

Try some sample conversations, for example:

  • Say “We’re looking for a nice place for a family vacation” and the bot will respond “Example Corp Family Getaways offers family-friendly accommodations…”
  • Ask “Where are they located?” and the bot will respond “Example Corp Family Getaways has locations in…”
  • Ask “Tell me more about the one in Pigeon Forge” and the bot will respond “The Example Corp Family Getaways resort in Pigeon Forge, Tennessee is…”

You can refer to the sample documents you uploaded for some ideas about questions to ask.

If you deployed the hallucination detection stack, you can look at its assessment of the answers you got when you tested. From the hallucination detection stack details page, on the Resources tab, choose the HallucinationDetectionFunctionLogGroup entry. This opens the CloudWatch Logs log group for the Lambda hallucination detection function. You can inspect the log statements to observe the hallucination detection process in action, as shown in the following screenshot.

Hallucination detection example

If you’re integrating with Amazon Connect, there will be a new contact flow in the Amazon Connect instance you specified, as shown in the following screenshot.

Amazon Connect contact flow example

To test using voice, just claim a phone number, associate it with this contact flow, and give it a call!

Deploy the conversation analytics stack (optional)

This stack uses QuickSight for analytics, so make sure you have already enabled it in your AWS account before deploying this stack.

  1. Choose Launch Stack:

  1. Provide a stack name, for example contact-center-analytics.
  2. Provide the name (not the ARN) of the Amazon Lex conversation logs log group. This is the same CloudWatch Logs log group you used for the RAG solution CloudFormation stack.
  3. Choose an option for purging source log streams from the log group. For testing, choose no.
  4. Choose an option for redacting sensitive data from the conversation logs. For testing, choose no.
  5. Leave the personally identifiable information (PII) entity types and confidence score thresholds at their default values.
  6. Choose an option for allowing unredacted logs for the Lambda function in the data pipeline. For testing, choose yes.
  7. Select an option for creating a KMS CMK.

If you create a CMK, it will be used to encrypt the data in the S3 bucket that this stack creates, where the normalized conversation data is housed. This allows you to control which IAM principals are allowed to decrypt the data and view it. This setting is recommended for production.

  1. Select the options for enabling CloudWatch alarms for ERRORS and WARNINGS in the Amazon Lex data pipeline. It is recommended to enable these alarms.
  2. For the alarms that you enable, you can specify an optional email address or distribution list to receive email notifications about the alarms.
  3. Choose Next.
  4. On the Configure stack options page, choose Next.
  5. On the Review and create page, acknowledge the IAM capabilities message and choose Submit.

The stack should take about 5 minutes to complete.

The following diagram illustrates the architecture of the stack.

As Amazon Lex writes conversation log entries to CloudWatch Logs (1), they are picked up by Amazon Data Firehose and streamed to an S3 bucket (2). Along the way, a Lambda transformation function (3) simplifies the JSON structure of the data to make it more user-friendly for querying purposes. The Lambda function can also redact sensitive data using Amazon Comprehend (4), and optionally purge the entries from the CloudWatch Logs log group as it consumes them.

On a scheduled basis (every 5 minutes), an AWS Glue crawler (5) inspects new data in the S3 bucket, and updates a data schema that is used by Amazon Athena (6) to provide a SQL interface to the data. This allows tools like QuickSight (7) to create near real-time dashboards, analytics, and visualizations of the data.
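A simplified sketch of what a Firehose transformation function like (3) might look like is shown below. This is not the stack's actual code, and the Lex log field names it reads are assumptions; a Firehose transform must echo each `recordId` back and return base64-encoded data:

```python
# Illustrative Firehose transformation Lambda: flatten each Lex
# conversation-log record into a simpler JSON shape for querying.
import base64
import json

def handler(event, context):
    """Transform each Firehose record and mark it "Ok" for delivery."""
    output = []
    for rec in event["records"]:
        entry = json.loads(base64.b64decode(rec["data"]))
        flat = {
            "timestamp": entry.get("timestamp"),
            "session_id": entry.get("sessionId"),
            "utterance": entry.get("inputTranscript"),
            "intent": (entry.get("interpretations") or [{}])[0]
                      .get("intent", {}).get("name"),
        }
        output.append({
            "recordId": rec["recordId"],  # must be echoed back unchanged
            "result": "Ok",
            "data": base64.b64encode(
                (json.dumps(flat) + "\n").encode()).decode(),
        })
    return {"records": output}
```

Appending a newline to each output record keeps the S3 objects in newline-delimited JSON, which the AWS Glue crawler and Athena can parse directly.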

Set up the QuickSight dashboard (optional)

Before you create the QuickSight dashboard, make sure to return to the Amazon Lex console and ask a few questions, in order to generate some data for the dashboard. It will take about 5 minutes for the pipeline to process this new conversation data and make it available to QuickSight.

To set up dashboards and visualizations in QuickSight, complete the following steps:

  1. On the QuickSight console, choose the user profile icon and choose Manage QuickSight.
  2. Under Security & permissions, choose Manage in the QuickSight access to AWS services section.
  3. Under Amazon S3, choose Select S3 buckets.
  4. Enable access to the S3 bucket created by the conversation analytics stack (it will have a name with a 12-character unique identifier prepended to lex-conversation-logs). You don’t need to enable write permissions.
  5. Choose Finish, then choose Save.
  6. Choose the QuickSight menu icon to return to the main page in QuickSight.
  7. In the navigation pane, choose Datasets.
  8. Choose New dataset.
  9. From the list of dataset sources, choose Athena.
  10. Enter a data source name (for example contact-center-analytics).
  11. Choose Create data source.
  12. In the Choose your table window, choose your database, select your lex_conversation_logs table, and choose Edit/Preview data.

Quicksight select database table example

This opens your new QuickSight dataset. You can review the various attributes available, and see some results from your testing.

Quicksight dataset example

For improved speed in displaying the data, you can select the SPICE option for Query mode, but that will mean you need to refresh SPICE (or set up an hourly auto-update schedule) when you want to see data updates based on additional testing.

  1. For now, leave the setting as Direct query.
  2. When you’re ready, choose PUBLISH & VISUALIZE.
  3. In the New sheet window, keep the defaults and choose CREATE.

This opens the analysis page, where you can start creating visualizations.

Quicksight analysis example

Automated testing notebooks (optional)

To try the automated testing capability, you need a SageMaker Jupyter notebook. Alternatively, you can run the notebooks locally in your integrated development environment (IDE) or other environment that supports Jupyter notebooks.

  1. On the SageMaker console, under Notebook in the navigation pane, choose Notebook instances.
  2. Choose Create notebook instance.
  3. Give your notebook a name, such as contact-center-rag-testing.
  4. To enable multi-threaded testing, it’s recommended to select a larger instance, such as ml.m5.2xlarge (which has 8 vCPUs) or ml.m5.4xlarge (which has 16 vCPUs). Don’t forget to stop the instance when it’s not in use.
  5. Keep the default setting for Platform identifier (Amazon Linux 2, Jupyter Lab 3).
  6. Under Additional configuration, increase the Volume size in GB setting to 50 GB.
  7. In the Permissions and encryption section, under IAM role, choose Create a new role in the drop-down list (don’t use the role creation wizard).
  8. In the Create an IAM role window, you can specify any S3 buckets you want to provide access to (none are needed for this solution).
  9. Choose Create role.

Amazon Sagemaker create role example

  1. Choose Create notebook instance.

It will take several minutes for your notebook instance to become available. While it’s being created, you can update the IAM role to add some inline policies you’ll need for accessing Amazon Bedrock and Amazon Lex.

  1. On the Notebook instances page, open your notebook instance (for example, contact-center-rag-testing) and then choose the entry under IAM role ARN to open the role.
  2. Add the following inline policies (available in the notebooks/iam-roles folder in the GitHub repository):

You can revise these roles to limit resource access as needed.

  1. After your notebook instance has started, choose Open Jupyter to open the notebook.
  2. Upload the following to your notebook instance (if desired, you can zip the files locally, upload the zip archive, and then unzip it in SageMaker):
    1. bedrock_helpers.py – This script configures LLM instances for the notebooks.
    2. bedrock_utils – You should make sure to upload all subfolders and files, and confirm that the folder structure is correct.
    3. run_tests.ipynb – This notebook runs a set of test cases.
    4. generate_ground_truths.ipynb – Given a set of questions, this notebook generates potential ground truth answers.
    5. test-runs – This folder should contain Excel workbooks.
  3. Open the run_tests.ipynb notebook.
  4. In the second cell, replace the bot_id and bot_alias_id values with the values for your Amazon Lex bot (you can find these on the Outputs tab of the RAG solution stack).
  5. After you update these values, choose Restart & Run All on the Kernel menu.

If you’re using a ml.m5.2xlarge instance type, it should take about a minute to run the 50 test cases in the test-runs/test-cases-claude-haiku-2024-09-02.xlsx workbook. When it’s complete, you should find a corresponding test-results workbook in the test-runs folder in your notebook.

Sample test results

After a few minutes, you can also see the test results in your conversation analytics dashboard.

Quicksight test run example

Adapt the solution to your use case

You can adapt this solution to your specific use cases with minimal work:

  • Replace the Amazon Bedrock Knowledge Bases sample content with your content – Replace the content in the S3 bucket and organize it into a folder structure that makes sense for your use case. You can create a new knowledge base for your content.
  • Replace the intents in the Amazon Lex bot with intents for your use case – Modify the Amazon Lex bot definition to reflect the interactions you want to enable for your use case.
  • Modify the LLM prompts in the bedrock_utils code – In the Amazon Lex bot fulfillment Lambda function, review the LLM prompt definitions in the bedrock_utils folder. For example, provide a use case-specific definition for the role of the LLM-based agent.
  • Modify the bot handler code if necessary – In the Amazon Lex bot fulfillment Lambda function, review the code in the TopicIntentHandler.py function. For the knowledge base search, this code provides an example that uses the sample hotel brands as topics. You can replace this metadata search query with one appropriate for your use cases.

Clean up

Congratulations! You have completed all the steps for setting up your voice-enabled contact center generative AI agent solution using AWS services.

When you no longer need the solution deployed in your AWS account, you can delete the CloudFormation stacks that you deployed, as well as the SageMaker notebook instance if you created one.

Conclusion

The contact center generative AI agent solution offers a scalable, cost-effective approach to automate Q&A conversations in your contact center, using AWS services like Amazon Bedrock, Amazon Bedrock Knowledge Bases, OpenSearch Serverless, and Amazon Lex.

The solution code is provided as open source—use it as a starting point for your own solution, and help us make it better by contributing back fixes and features through GitHub pull requests. Browse to the GitHub repository to explore the code, and check the CHANGELOG for the latest changes and the README for the latest documentation updates.

For expert assistance, the AWS Generative AI Innovation Center, AWS Professional Services, and our AWS Partners are here to help.


About the Authors

Vraj Shah is a Connect Developer at DoorDash.

Chaitanya Hari is a Voice/Contact Center Product Lead at DoorDash.

Marcelo Silva is a Principal Product Manager at Amazon Web Services, leading strategy and growth for Amazon Bedrock Knowledge Bases and Amazon Lex.

Adam Diesterhaft is a Sr. Pursuit Solutions Architect on the Amazon Connect team.

Brian Yost is a Principal Deep Learning Architect in the AWS Generative AI Innovation Center.

Read More

Migrating to Amazon SageMaker: Karini AI Cut Costs by 23%


This post is co-written with Deepali Rajale from Karini AI.

Karini AI, a leading generative AI foundation platform built on AWS, empowers customers to quickly build secure, high-quality generative AI apps. Generative AI is not just a technology; it’s a transformational tool that is changing how businesses use technology. Adopting it, however, presents a significant challenge for enterprises, depending on where they are in the adoption journey. While pilot projects using generative AI can start effortlessly, most enterprises struggle to progress beyond that phase: according to Everest Research, more than 50% of projects never move past the pilot stage, stalled by the absence of standardized, established generative AI operational practices.

Karini AI offers a robust, user-friendly generative AI foundation platform that empowers enterprises to build, manage, and deploy generative AI applications. It allows both beginners and expert practitioners to develop and deploy applications for use cases well beyond simple chatbots, including agentic, multi-agentic, generative BI, and batch workflows. The no-code platform is ideal for quick experimentation and building proofs of concept, with a rapid transition to production backed by built-in guardrails for safety and observability for troubleshooting. The platform includes an offline and online quality evaluation framework to assess quality during experimentation and to continuously monitor applications after deployment. Karini AI’s intuitive prompt playground supports authoring prompts, comparing models across providers, prompt management, and prompt tuning, with iterative testing of simple, agentic, and multi-agentic prompts. For production deployment, no-code recipes enable easy assembly of the data ingestion pipeline to create a knowledge base and deployment of RAG or agentic chains. Platform owners can monitor costs and performance in real time with detailed observability, and the platform integrates seamlessly with Amazon Bedrock for LLM inference, benefiting from extensive enterprise connectors and data preprocessing techniques.

The following diagram illustrates how Karini AI delivers a comprehensive Generative AI foundational platform encompassing the entire application lifecycle. This platform delivers a holistic solution that speeds up time to market and optimizes resource utilization by providing a unified framework for development, deployment, and management.

In this post, we share how Karini AI’s migration of vector embedding models from Kubernetes to Amazon SageMaker endpoints improved concurrency by 30% and saved over 23% in infrastructure costs.

Karini AI’s Data Ingestion Pipeline for creating vector embeddings

Enriching large language models (LLMs) with new data is crucial to building practical generative AI applications. This is where Retrieval Augmented Generation (RAG) comes into play. RAG enhances LLMs’ capabilities by incorporating external data and producing state-of-the-art performance in knowledge-intensive tasks. Karini AI offers no-code solutions for creating Generative AI applications using RAG. These solutions include two primary components: a data ingestion pipeline for building a knowledge base and a system for knowledge retrieval and summarization. Together, these pipelines simplify the development process, enabling the creation of powerful AI applications with ease.

Data Ingestion Pipeline

Ingesting data from diverse sources is essential for Retrieval Augmented Generation (RAG). Karini AI’s data ingestion pipeline connects to multiple data sources, including Amazon S3, Amazon Redshift, Amazon Relational Database Service (Amazon RDS), websites, and Confluence, handling both structured and unstructured data. This source data is pre-processed, chunked, and transformed into vector embeddings before being stored in a vector database for retrieval. Karini AI’s platform provides flexibility by offering a range of embedding models from its model hub, simplifying the creation of vector embeddings for advanced AI applications.
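The chunk-then-embed flow can be sketched as follows. This is an illustration of the general pattern, not Karini AI's implementation, and `embed()` is a stub standing in for a real embedding model endpoint:

```python
# Illustrative ingestion core: chunk source text, embed each chunk, and
# collect records ready to load into a vector database.

def chunk(text, size=200, overlap=50):
    """Split text into overlapping character chunks."""
    step = size - overlap
    chunks = []
    for i in range(0, len(text), step):
        chunks.append(text[i:i + size])
        if i + size >= len(text):
            break
    return chunks

def embed(piece):
    """Stub: a real pipeline would call an embedding model here."""
    return [float(len(piece))]  # placeholder 1-dimensional "vector"

def ingest(doc_id, text):
    """Produce vector-store records for one source document."""
    return [
        {"id": f"{doc_id}-{n}", "text": c, "vector": embed(c)}
        for n, c in enumerate(chunk(text))
    ]
```

The overlap between neighboring chunks helps retrieval surface passages whose relevant sentences straddle a chunk boundary.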

Here is a screenshot of Karini AI’s no-code data ingestion pipeline.

Karini AI’s model hub streamlines adding models by integrating with leading foundation model providers such as Amazon Bedrock and self-managed serving platforms.

Infrastructure challenges

As customers explore complex use cases and datasets grow in size and complexity, Karini AI must scale the data ingestion process efficiently to provide high concurrency for creating vector embeddings with state-of-the-art embedding models, such as those on the MTEB leaderboard, which evolve rapidly and are often unavailable on managed platforms.

Before migrating to Amazon SageMaker, we deployed our models on self-managed Kubernetes (K8s) on Amazon EC2 instances. Kubernetes offered significant flexibility to deploy models from Hugging Face quickly, but our engineering team soon had to manage many aspects of scaling and deployment. We faced the following challenges with our existing setup that needed to be addressed to improve efficiency and performance:

  • Keeping up with state-of-the-art (SOTA) models: We managed different deployment manifests for each model type (such as classifiers, embeddings, and autocomplete), which was time-consuming and error-prone. We also had to maintain the logic that determined memory allocation for each model type.
  • Managing dynamic concurrency was hard: A significant challenge with models hosted on Kubernetes was achieving the highest dynamic concurrency level. We aimed to maximize endpoint performance to achieve target transactions per second (TPS) while meeting strict latency requirements.
  • Higher costs: While Kubernetes provides robust capabilities, the dynamic nature of our data ingestion pipelines left instances under-utilized, driving infrastructure costs up.

Our search for an inference platform led us to Amazon SageMaker, a solution that efficiently manages our models for higher concurrency, meets customer SLAs, and scales down serving when not needed. The reliability of SageMaker’s performance gave us confidence in its capabilities.

Amazon SageMaker for Model Serving

Choosing Amazon SageMaker was a strategic decision for Karini AI. It balanced the need for higher concurrency at lower cost, providing a cost-effective solution for our needs. SageMaker’s ability to scale and maximize concurrency while ensuring sub-second latency addresses a wide range of generative AI use cases, making it a long-lasting investment for our platform.

Amazon SageMaker is a fully managed service that allows developers and data scientists to quickly build, train, and deploy machine learning (ML) models. With SageMaker, you can deploy your ML models on hosted endpoints and get real-time inference results. You can easily view the performance metrics for your endpoints in Amazon CloudWatch, automatically scale endpoints based on traffic, and update your models in production without losing any availability.
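Real-time inference against such a hosted endpoint reduces to a single runtime call. The sketch below assumes a JSON-in/JSON-out embedding container; the endpoint name and the `{"inputs": ...}` payload shape are assumptions for illustration, not Karini AI's actual configuration:

```python
# Minimal sketch of calling a SageMaker real-time endpoint for embeddings.
import json

def build_payload(texts):
    """JSON body in the {"inputs": [...]} shape many text-embedding
    containers accept (an assumption, not a universal contract)."""
    return json.dumps({"inputs": texts})

def embed(texts, endpoint_name="embedding-endpoint"):  # placeholder name
    """Invoke the endpoint and return the parsed embedding vectors."""
    import boto3  # imported here so build_payload stays usable offline
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=build_payload(texts),
    )
    return json.loads(resp["Body"].read())
```

Because the endpoint is just a named resource behind this call, it can be scaled up for bulk ingestion and scaled down (or deleted) afterward without changing client code.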

The following diagram shows Karini AI’s data ingestion pipeline architecture with an Amazon SageMaker model endpoint.

Advantages of using SageMaker hosting

Amazon SageMaker offered our generative AI ingestion pipeline many direct and indirect benefits.

  1. Technical debt mitigation: As a managed service, Amazon SageMaker freed our ML engineers from the burden of managing inference infrastructure, enabling them to focus on our core platform features. This relief from technical debt is a significant advantage of using SageMaker.
  2. Meet customer SLAs: Knowledgebase creation is a dynamic task that may require higher concurrencies during vector embedding generation and minuscule load during query time. Based on customer SLAs and data volume, we can choose batch inference, real-time hosting with auto-scaling, or serverless hosting. Amazon SageMaker also provides recommendations for instance types suitable for embedding models.
  3. Reduced Infrastructure cost: SageMaker is a pay-as-you-go service that allows you to create batch or real-time endpoints when there is demand and destroy them when work is complete. This approach reduced our infrastructure cost by more than 23% over the Kubernetes (K8s) platform.
  4. SageMaker JumpStart: SageMaker JumpStart provides access to state-of-the-art (SOTA) models and optimized inference containers, making it easy to offer new models to our customers.
  5. Amazon Bedrock compatibility: Karini AI integrates with Amazon Bedrock for LLM (Large Language Model) inference. The custom model import feature allows us to reuse the model weights used in SageMaker model hosting in Amazon Bedrock to maintain a joint code base and interchange serving between Bedrock and SageMaker as per the workload.

Conclusion

By migrating to Amazon SageMaker, Karini AI achieved high performance while reducing model hosting costs. We can deploy custom third-party models to SageMaker and quickly make them available in Karini’s model hub for data ingestion pipelines. We can optimize our infrastructure configuration for model hosting as needed, depending on model size and expected TPS. Using Amazon SageMaker for model inference enabled Karini AI to handle increasing data complexity efficiently and meet concurrency needs while optimizing costs. Moreover, Amazon SageMaker allows easy integration and swapping of new models, ensuring that our customers can continuously leverage the latest advancements in AI technology without compromising performance or incurring unnecessary incremental costs.

Amazon SageMaker and Karini.ai offer a powerful platform to build, train, and deploy machine learning models at scale. By leveraging these tools, you can:

  • Accelerate development: Build and train models faster with pre-built algorithms and frameworks.
  • Enhance accuracy: Benefit from advanced algorithms and techniques for improved model performance.
  • Scale effortlessly: Deploy models to production with ease and handle increasing workloads.
  • Reduce costs: Optimize resource utilization and minimize operational overhead.

Don’t miss out on this opportunity to gain a competitive edge.


About the Authors

Deepali Rajale is the founder of Karini AI, which is on a mission to democratize generative AI across enterprises. She enjoys blogging about Generative AI and coaching customers to optimize Generative AI practice. In her spare time, she enjoys traveling, seeking new experiences, and keeping up with the latest technology trends. You can find her on LinkedIn.

Ravindra Gupta is the Worldwide GTM lead for SageMaker, with a passion for helping customers adopt SageMaker for their machine learning and generative AI workloads. Ravi is fond of learning new technologies and enjoys mentoring startups on their machine learning practice. You can find him on LinkedIn.

Read More

Harnessing the power of AI to drive equitable climate solutions: The AI for Equity Challenge


The climate crisis is one of the greatest challenges facing our world today. Its impacts are far-reaching, affecting every aspect of our lives—from public health and food security to economic stability and social justice. What’s more, the effects of climate change disproportionately burden the world’s most vulnerable populations, exacerbating existing inequities around gender, race, and socioeconomic status.

But we have the power to create change. By harnessing the transformative potential of AI, we can develop innovative solutions to tackle the intersectional challenges at the heart of the climate crisis. That’s why the International Research Centre on Artificial Intelligence (IRCAI), Zindi, and Amazon Web Services (AWS) are proud to announce the launch of the “AI for Equity Challenge: Climate Action, Gender, and Health”—a global virtual competition aimed at empowering organizations to use advanced AI and cloud technologies to drive real-world impact with a focus on benefitting vulnerable populations around the world.

Aligning with the United Nations Sustainable Development Goals (SDGs) 3, 5, and 13—focused on good health and well-being, gender equality, and climate action respectively—this challenge seeks to uncover the most promising AI-powered solutions that address the compounding issues of climate change, gender equity, and public health. By bringing together a diverse global community of innovators, we hope to accelerate the development of equitable, sustainable, and impactful applications of AI for the greater good.

“As artificial intelligence rapidly evolves, it is crucial that we harness its potential to address real-world challenges. At IRCAI, our mission is to guide the ethical development of AI technologies, ensuring they serve the greater good and are inclusive of marginalized AI communities. This challenge, in collaboration with AWS, is an opportunity to discover and support the most innovative minds that are using AI and advanced computing to create impactful solutions for the climate crisis.”

– Davor Orlic, COO at IRCAI.

The challenge will unfold in two phases, welcoming both ideators and solution builders to participate. In the first phase, organizations are invited to submit technical proposals outlining specific challenges at the intersection of climate action, gender equity, and health that they aim to address using AI and cloud technologies. A steering committee convened by IRCAI will evaluate these proposals based on criteria such as innovation, feasibility, and potential for global impact. The competition will be judged and mentored in collaboration with NAIXUS, a network of AI and sustainable development research organizations.

The top two winning proposals from the first phase will then advance to the second round, where they will serve as the foundation for two AI challenges hosted on the Zindi platform. During this phase, developers and data scientists from around the world will compete to build the most successful AI-powered solutions to tackle the real-world problems identified by the first-round winners.

AI for Equity Challenge Timeline

The winning AI solutions from the second-round challenges will belong entirely to the organizations that submitted the original winning proposals, who will also receive $15,000 in AWS credits and technical support from AWS and IRCAI to help implement their solutions. Additionally, the first-place teams in each of the two final Zindi challenges will receive cash prizes of $6,000, $4,000, and $2,500 for first, second, and third place respectively.

But the true reward goes beyond the prizes. By participating in this challenge, organizations and individuals alike will have the opportunity to make a lasting impact on the lives of those most vulnerable to the effects of climate change. Through the power of AI and advanced cloud computing, we can develop groundbreaking solutions that empower women, improve public health outcomes, and drive sustainable progress on the climate action front.

Throughout the hackathon, participants will have access to a wealth of resources, including mentorship from industry experts, training materials, and AWS cloud computing resources. Amazon Sustainability Data Initiative (ASDI), a collaboration between AWS and leading scientific organizations, provides a catalog of over 200 datasets spanning climate projections, satellite imagery, air quality data, and more, enabling participants to build robust and data-driven solutions.

“Climate change is one of the greatest threats of our time, and we believe innovation is key to overcoming it. The AI for Equity Challenge invites innovators to bring forward their most visionary ideas, and we’ll support them with AWS resources — whether that’s computing power or advanced cloud technologies — to turn those ideas into reality. Our goal is to drive cloud innovation, support sustainability solutions, and make a meaningful impact on the climate crisis.”

– Dave Levy, Vice President of Worldwide Public Sector, AWS

This initiative is made possible through the support of ASDI, which provides researchers, scientists, and innovators with access to a wealth of publicly available datasets on AWS to advance their sustainability-focused work. The AI for Equity Challenge: Climate Action, Gender, and Health is open for submissions from September 23 to November 4, 2024. The two winning proposals from the first round will be announced on December 2, 2024, with the final AI challenge winners revealed on February 12, 2025.

Don’t miss your chance to be part of the solution. Visit https://zindi.africa/ai-equity-challenge to learn more and submit your proposal today. Together, we can harness the power of AI to create a more sustainable, equitable, and just world for generations to come.

About the author

Joe Fontaine is the Product marketing lead for AWS AI Builder Programs. He is passionate about making machine learning more accessible to all through hands-on educational experiences. Outside of work he enjoys freeride mountain biking, aerial cinematography, and exploring the wilderness with his family.

Read More

Speculative Streaming: Fast LLM Inference Without Auxiliary Models

Speculative decoding is a prominent technique to speed up the inference of a large target language model based on predictions of an auxiliary draft model. While effective, in application-specific settings, it often involves fine-tuning both draft and target models to achieve high acceptance rates. As the number of downstream tasks grows, these draft models add significant complexity to inference systems. We propose Speculative Streaming, a single-model speculative decoding method that fuses drafting into the target model by changing the fine-tuning objective from next token prediction to…

Apple Machine Learning Research

High-Speed AI: Hitachi Rail Advances Real-Time Railway Analysis Using NVIDIA Technology


Hitachi Rail, a global transportation company powering railway systems in over 50 countries, is integrating NVIDIA AI technology to lower maintenance costs for rail operators, reduce train idling time and improve transit reliability for passengers.

The company is adopting NVIDIA IGX — an industrial-grade, enterprise-level platform that delivers high-bandwidth sensor processing, powerful AI compute, functional safety capabilities and enterprise security — into its new HMAX platform to process sensor and camera data in real time.

By removing the lag time between data collection and analysis, the HMAX platform will enable Hitachi Rail clients to more quickly detect train tracks that need repair, monitor the degradation of overhead power lines and assess the health of trains and signaling equipment.

Hitachi Rail estimates that proactive maintenance costs around 7x less than emergency repairs done after infrastructure fails unexpectedly. Its existing AI monitoring systems are already reducing service delays by up to 20% and train maintenance costs by up to 15% — and are cutting down energy consumption by decreasing fuel costs at train depots by up to 40%.

With real-time analysis using NVIDIA IGX and the NVIDIA Holoscan platform for sensor processing, the company aims to increase these savings further.

“Using previous digital monitoring systems, it would take a few days to process the data and discover issues that need attention,” said Koji Agatsuma, executive director and chief technology officer of rail vehicles at Hitachi Rail. “If we can instead conduct real-time prediction using NVIDIA technology, that enables us to avoid service disruptions and significantly improve safety, reliability and operating costs.”

NVIDIA IGX Powers Real-Time AI Engine

Building on its existing collection of HMAX applications — which are currently running on data from 8,000 train cars on 2,000 trains — Hitachi Rail has used NVIDIA IGX and the NVIDIA AI Enterprise software platform to create new accelerated AI applications to help operators monitor train fleets and infrastructure. NVIDIA AI Enterprise offers tools, pretrained models and application frameworks to streamline the development and deployment of production-grade AI applications.

These applications, available soon through the HMAX platform, can be used by the company’s international customer base to process huge quantities of data streaming from sensors onboard trains, taken from existing systems or imported from third-party software already in use by the customer.

In the U.K., for example, each Hitachi Rail train has sensors that report nearly 50,000 data points as frequently as every fifth of a second. AI infrastructure that keeps pace with this data flow can send train operators timely alerts when a component of a train or rail line needs maintenance. The AI insights can also be accessed through a chatbot interface, helping operators easily identify trends and opportunities to optimize maintenance schedules and more.
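As a back-of-the-envelope illustration of that data rate, combining the per-train figure above with the 2,000-train fleet mentioned earlier (both numbers are taken from this article; the fleet-wide total assumes every train reports at the U.K. rate, which is an assumption, not a claim from Hitachi Rail):

```python
# Rough sensor throughput implied by the figures quoted in the article.
points_per_report = 50_000   # data points per train per report (U.K. example)
reports_per_second = 5       # "as frequently as every fifth of a second"
trains = 2_000               # HMAX fleet size mentioned earlier

per_train = points_per_report * reports_per_second
fleet_wide = per_train * trains

print(per_train)   # data points per second, per train
print(fleet_wide)  # data points per second across the fleet, if all report at this rate
```

That works out to roughly 250,000 data points per second per train, which is why processing at the edge rather than shipping raw data to a central system matters for timely alerts.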

“If a potential issue isn’t identified and fixed promptly, it can result in a service disruption that causes significant economic loss for our customers and impacts the passengers who rely on these transit lines,” Agatsuma said. “NVIDIA AI infrastructure has enabled us to get immediate alerts on thousands of miles of railway for the first time, which we anticipate will reduce delays and disruptions to passenger travel.”

Driving Benefits Down the Track

The opportunities go beyond monitoring trains and tracks.

By mounting cameras atop trains, Hitachi Rail can monitor power lines overhead to identify degrading electric cables and help prevent disruptive failures. Traditionally, it takes up to 10 days to process one day’s worth of video data collected by the train. With NVIDIA-accelerated sensor processing, data can be processed in real time at the edge, sending only relevant information back to operational control centers for analysis and action.

Learn more about the NVIDIA IGX platform. 

Main image courtesy of Hitachi Rail.