Pittsburgh Steels Itself for Innovation With Launch of NVIDIA AI Tech Community

Serving as a bridge for academia, industry and public-sector groups to partner on artificial intelligence innovation, NVIDIA is launching its inaugural AI Tech Community in Pittsburgh, Pennsylvania.

Collaborations with Carnegie Mellon University and the University of Pittsburgh, as well as startups, enterprises and organizations based in the “city of bridges,” are part of the new NVIDIA AI Tech Community initiative, announced today during the NVIDIA AI Summit in Washington, D.C.

The initiative aims to supercharge public-private partnerships across communities rich with potential for enabling technological transformation using AI.

Two NVIDIA joint technology centers will be established in Pittsburgh to tap into expertise in the region.

NVIDIA’s Joint Center with Carnegie Mellon University (CMU) for Robotics, Autonomy and AI will equip higher-education faculty, students and researchers with the latest technologies and boost innovation in the fields of AI and robotics.

NVIDIA’s Joint Center with the University of Pittsburgh for AI and Intelligent Systems will focus on computational opportunities across the health sciences, including applications of AI in clinical medicine and biomanufacturing.

CMU — the nation’s No. 1 AI university according to U.S. News & World Report — has pioneered work in autonomous vehicles and natural language processing.

CMU’s Robotics Institute, the world’s largest university-affiliated robotics research group, brings a diverse group of more than a thousand faculty, staff, students, post-doctoral fellows and visitors together to solve humanity’s toughest challenges through robotics.

The University of Pittsburgh — designated as an R1 research university at the forefront of innovation — is ranked No. 6 among U.S. universities in research funding from the National Institutes of Health, topping more than $1 billion in research expenditures in fiscal year 2022 and ranking No. 14 among U.S. universities granted utility patents.

The university has a long history of learning-technology innovations that are interdisciplinary and conducted within research-practice partnerships. By prioritizing inclusivity and practical experience without technical barriers, Pitt is leading the way in democratizing AI education in healthcare and medicine.

By working with these universities, NVIDIA aims to build a technical community that accelerates the innovation, commercialization and operationalization of physical AI, robotics and autonomous systems across the nation — and the globe.

These centers will tap into NVIDIA’s full-stack AI platform and accelerated computing expertise to gear up tomorrow’s technology leaders for next-generation innovation.

Establishing the Centers for AI Development 

Generative AI and accelerated computing are transforming workflows across use cases. Three key AI platforms comprise the engine behind this transformation: NVIDIA DGX for AI training, NVIDIA Omniverse for simulation and NVIDIA Jetson for edge computing.

Through the new centers and public-sector-sponsored research opportunities, NVIDIA will provide CMU and Pitt with access to these and more of the company’s latest AI software and frameworks — such as NVIDIA Isaac Lab for robot learning, NVIDIA Isaac Sim for designing and testing robots, NVIDIA NeMo for custom generative AI and NVIDIA NIM microservices, available through the NVIDIA AI Enterprise software platform.

Advanced NVIDIA technological support can help accelerate the research groups’ workflows and enhance the scalability and resiliency of their AI applications.

In addition, the universities will have access to certain generative AI, data science and accelerated computing resources through the NVIDIA Deep Learning Institute, which provides training to meet diverse learning needs and upskill students and developers in AI.

“Pairing Carnegie Mellon University’s existing deep expertise and resources in AI and robotics with NVIDIA’s cutting-edge platform, software and tools has tremendous potential to power Pittsburgh’s already vibrant innovation ecosystem,” said Theresa Mayer, vice president for research at CMU. “This unique collaboration will accelerate innovation, commercialization and operationalization of robotics and autonomy, advancing the best impacts of AI on society.”

“Pitt has a long history and extraordinary research strengths in life sciences and learning sciences,” said Rob A. Rutenbar, senior vice chancellor for research at the University of Pittsburgh. “By focusing on computational and AI opportunities across these ‘meds and eds’ areas, we plan to leverage our collaboration with NVIDIA to explore new ways to connect these breakthroughs to improved health and education outcomes for everybody.”

Fostering Cross-Industry Collaboration

As part of the AI Tech Community initiative, NVIDIA is also increasing its engagement with Pittsburgh-based members of the NVIDIA Inception program for cutting-edge AI startups and the NVIDIA Connect program for software development companies and service providers.

For example, Inception member Lovelace AI is developing AI solutions using NVIDIA accelerated computing and CUDA to enhance the analysis of kinetic data, providing predictive analytics and actionable insights for national security customers.

Skild AI, a startup founded by two Carnegie Mellon professors, is developing a scalable robotics foundation model, called Skild Brain, that can easily adapt across hardware and tasks.

Skild AI is exploring NVIDIA Isaac Lab, a unified, modular framework for robot learning built on the NVIDIA Isaac Sim reference application for designing, simulating and training AI-based robots.

NVIDIA is also engaging with Pittsburgh’s broader robotics ecosystem through its collaborations with the Pittsburgh Robotics Network — which speeds the commercialization of robotics, AI and other advanced technologies — and technology accelerators like AlphaLab and the Robotics Factory at Innovation Works, which supports startups based in the city that are focused on AI, robotics and autonomy.

And through its Deep Learning Institute, which has trained more than 650,000 people, NVIDIA is committed to furthering AI workforce development worldwide.

Learn more about how NVIDIA is propelling the next era of computing in higher education and research, including at the NVIDIA AI Summit, running through Oct. 9. NVIDIA Vice President of Developer Programs Greg Estes will discuss scaling AI skills and economic growth through public-private collaboration.

Featured image courtesy of Wikimedia Commons.

Read More

PyTorch Foundation Technical Advisory Council Elects New Leadership

We are pleased to announce the first-ever Chair and Vice Chair of the PyTorch Foundation’s Technical Advisory Council (TAC): Luca Antiga as the Chair and Jiong Gong as Vice Chair. Both leaders bring extensive experience and deep commitment to the PyTorch community, and they are set to guide the TAC in its mission to foster an open, diverse, and innovative PyTorch technical community.

Meet the New Leadership

Luca Antiga

Luca Antiga has been the CTO at Lightning AI since 2022. He is an early contributor to PyTorch core and co-authored “Deep Learning with PyTorch” (published by Manning). He started his journey as a researcher in bioengineering and later co-founded Orobix, a company focused on building and deploying AI in production settings.

“I am looking forward to taking on the role of the chair of the PyTorch TAC,” says Luca. “As the TAC chair, I will ensure effective, timely topic selection and enhance visibility of technical needs from the board members and from the ecosystem at large. I will strive for directional, cohesive messaging throughout the transition of PyTorch from Meta to the Linux Foundation.”

Jiong Gong

Jiong Gong is a Principal Engineer and SW Architect for PyTorch Optimization from Intel. He serves as one of the PyTorch CPU module maintainers and is an active contributor to the TorchInductor CPU backend.

“I plan to further strengthen the collaboration between PyTorch developers and hardware vendors, promoting innovation and performance optimization across various hardware platforms, enhancing PyTorch ecosystem and streamlining the decision-making process,” says Jiong. “I am honored to serve as the vice chair of the TAC.”

What Does the TAC Do?

The PyTorch Foundation’s TAC provides a forum for technical communication, leadership, and collaboration for the PyTorch Foundation. The committee members are members of the PyTorch Foundation. The committee holds open meetings once a month that anyone in the community can attend. The committee provides thought leadership on technical topics, knowledge sharing, and a forum to discuss issues with other technical experts in the community.

New TAC Webpage

Stay connected with the PyTorch Foundation’s Technical Advisory Council (TAC) by visiting our new TAC webpage. There you can find the list of TAC members, view upcoming meeting agendas, access presentations, attend public meetings, watch meeting recordings, and participate in discussions on key technical topics.

Plus stay tuned on our blog for regular updates from the PyTorch Foundation TAC leadership.

Read More

Foxconn to Build Taiwan’s Fastest AI Supercomputer With NVIDIA Blackwell

NVIDIA and Foxconn are building Taiwan’s largest supercomputer, marking a milestone in the island’s AI advancement.

The project, Hon Hai Kaohsiung Super Computing Center, revealed Tuesday at Hon Hai Tech Day, will be built around NVIDIA’s groundbreaking Blackwell architecture and feature the GB200 NVL72 platform, which includes a total of 64 racks and 4,608 Tensor Core GPUs.

With expected AI performance of over 90 exaflops, the machine would easily rank as the fastest in Taiwan.

Foxconn plans to use the supercomputer, once operational, to power breakthroughs in cancer research, large language model development and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.

Foxconn’s “three-platform strategy” focuses on smart manufacturing, smart cities and electric vehicles. The new supercomputer will play a pivotal role in supporting Foxconn’s ongoing efforts in digital twins, robotic automation and smart urban infrastructure, bringing AI-assisted services to urban areas like Kaohsiung.

Construction has started on the new supercomputer housed in Kaohsiung, Taiwan. The first phase is expected to be operational by mid-2025. Full deployment is targeted for 2026.

The project will integrate NVIDIA technologies such as the NVIDIA Omniverse and Isaac robotics platforms for AI and digital twins, helping to transform manufacturing processes.

“Powered by NVIDIA’s Blackwell platform, Foxconn’s new AI supercomputer is one of the most powerful in the world, representing a significant leap forward in AI computing and efficiency,” said Foxconn Vice President and Spokesperson James Wu.

The GB200 NVL72 is a state-of-the-art data center platform optimized for AI and accelerated computing.

Each rack features 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs connected via NVIDIA’s NVLink technology, delivering 130TB/s of bandwidth.

NVIDIA NVLink Switch allows the 72-GPU system to function as a single, unified GPU. This makes it ideal for training large AI models and executing complex inference tasks in real time on trillion-parameter models.

Taiwan-based Foxconn, officially known as Hon Hai Precision Industry Co., is the world’s largest electronics manufacturer, known for producing a wide range of products, from smartphones to servers, for the world’s top technology brands.

With a vast global workforce and manufacturing facilities across the globe, Foxconn plays a key role in supplying the world’s technology infrastructure. It is a leader in smart manufacturing and a pioneer of industrial AI, digitalizing its factories in NVIDIA Omniverse.

Foxconn was also one of the first companies to use NVIDIA NIM microservices in the development of domain-specific large language models, or LLMs, embedded into a variety of internal systems and processes in its AI factories for smart manufacturing, smart electric vehicles and smart cities.

The Hon Hai Kaohsiung Super Computing Center is part of a growing global network of advanced supercomputing facilities powered by NVIDIA. This network includes several notable installations across Europe and Asia.

These supercomputers represent a significant leap forward in computational power, putting NVIDIA’s cutting-edge technology to work to advance research and innovation across various scientific disciplines.

Learn more about Hon Hai Tech Day.

Read More

Build a generative AI Slack chat assistant using Amazon Bedrock and Amazon Kendra

Despite the proliferation of information and data in business environments, employees and stakeholders often find themselves searching for information and struggling to get their questions answered quickly and efficiently. This can lead to productivity losses, frustration, and delays in decision-making.

A generative AI Slack chat assistant can help address these challenges by providing a readily available, intelligent interface for users to interact with and obtain the information they need. By using the natural language processing and generation capabilities of generative AI, the chat assistant can understand user queries, retrieve relevant information from various data sources, and provide tailored, contextual responses.

By harnessing the power of generative AI and Amazon Web Services (AWS) services such as Amazon Bedrock, Amazon Kendra, and Amazon Lex, this solution provides a sample architecture for building an intelligent Slack chat assistant that can streamline information access, enhance user experiences, and drive productivity and efficiency within organizations.

Why use Amazon Kendra for building a RAG application?

Amazon Kendra is a fully managed service that provides out-of-the-box semantic search capabilities for state-of-the-art ranking of documents and passages. You can use Amazon Kendra to quickly build high-accuracy generative AI applications on enterprise data and source the most relevant content and documents to maximize the quality of your Retrieval Augmented Generation (RAG) payload, yielding better large language model (LLM) responses than using conventional or keyword-based search solutions. Amazon Kendra offers simple-to-use deep learning search models that are pre-trained on 14 domains and don’t require machine learning (ML) expertise. Amazon Kendra can index content from a wide range of sources, including databases, content management systems, file shares, and web pages.

Further, the FAQ feature in Amazon Kendra complements the broader retrieval capabilities of the service, allowing the RAG system to seamlessly switch between providing prewritten FAQ responses and dynamically generating responses by querying the larger knowledge base. This makes it well-suited for powering the retrieval component of a RAG system, allowing the model to access a broad knowledge base when generating responses. By integrating the FAQ capabilities of Amazon Kendra into a RAG system, the model can use a curated set of high-quality, authoritative answers for commonly asked questions. This can improve the overall response quality and user experience, while also reducing the burden on the language model to generate these basic responses from scratch.

This solution balances retaining customizations in terms of model selection, prompt engineering, and adding FAQs with not having to deal with word embeddings, document chunking, and other lower-level complexities typically required for RAG implementations.

Solution overview

The chat assistant is designed to assist users by answering their questions and providing information on a variety of topics. The purpose of the chat assistant is to be an internal-facing Slack tool that can help employees and stakeholders find the information they need.

The architecture uses Amazon Lex for intent recognition, AWS Lambda for processing queries, Amazon Kendra for searching through FAQs and web content, and Amazon Bedrock for generating contextual responses powered by LLMs. By combining these services, the chat assistant can understand natural language queries, retrieve relevant information from multiple data sources, and provide humanlike responses tailored to the user’s needs. The solution showcases the power of generative AI in creating intelligent virtual assistants that can streamline workflows and enhance user experiences based on model choices, FAQs, and modifying system prompts and inference parameters.

Architecture diagram

The following diagram illustrates a RAG approach where the user sends a query through the Slack application and receives a generated response based on the data indexed in Amazon Kendra. In this post, we use Amazon Kendra Web Crawler as the data source and include FAQs stored on Amazon Simple Storage Service (Amazon S3). See Data source connectors for a list of supported data source connectors for Amazon Kendra.

The step-by-step workflow for the architecture is the following:

  1. The user sends a query such as What is the AWS Well-Architected Framework? through the Slack app.
  2. The query goes to Amazon Lex, which identifies the intent.
  3. Currently two intents are configured in Amazon Lex (Welcome and FallbackIntent).
  4. The welcome intent is configured to respond with a greeting when a user enters a greeting such as “hi” or “hello.” The assistant responds with “Hello! I can help you with queries based on the documents provided. Ask me a question.”
  5. The fallback intent is fulfilled with a Lambda function.
    1. The Lambda function searches Amazon Kendra FAQs through the search_Kendra_FAQ method by taking the user query and Amazon Kendra index ID as inputs. If there’s a match with a high confidence score, the answer from the FAQ is returned to the user.
      import boto3

      def search_Kendra_FAQ(question, kendra_index_id):
          """
          This function takes in the question from the user, and checks if the question exists in the Kendra FAQs.
          :param question: The question the user is asking that was asked via the frontend input text box.
          :param kendra_index_id: The kendra index containing the documents and FAQs
          :return: If found in the FAQs, returns the answer along with any relevant link. If not, returns (False, False) so the caller can fall back to the kendra_retrieve_document function.
          """
          kendra_client = boto3.client('kendra')
          response = kendra_client.query(IndexId=kendra_index_id, QueryText=question, QueryResultTypeFilter='QUESTION_ANSWER')
          for item in response['ResultItems']:
              score_confidence = item['ScoreAttributes']['ScoreConfidence']
              # Taking answers from FAQs that have a very high confidence score only
              if score_confidence == 'VERY_HIGH' and len(item['AdditionalAttributes']) > 1:
                  text = item['AdditionalAttributes'][1]['Value']['TextWithHighlightsValue']['Text']
                  url = "None"
                  if item['DocumentURI'] != '':
                      url = item['DocumentURI']
                  return (text, url)
          return (False, False)

    2. If there isn’t a match with a high enough confidence score, relevant documents from Amazon Kendra with a high confidence score are retrieved through the kendra_retrieve_document method and sent to Amazon Bedrock to generate a response as the context.
      import boto3

      def kendra_retrieve_document(question, kendra_index_id):
          """
          This function takes in the question from the user, and retrieves relevant passages based on default PageSize of 10.
          :param question: The question the user is asking that was asked via the frontend input text box.
          :param kendra_index_id: The kendra index containing the documents and FAQs
          :return: Returns the context to be sent to the LLM and document URIs to be returned as relevant data sources.
          """
          kendra_client = boto3.client('kendra')
          documents = kendra_client.retrieve(IndexId=kendra_index_id, QueryText=question)
          text = ""
          uris = set()
          if len(documents['ResultItems']) > 0:
              for i in range(len(documents['ResultItems'])):
                  score_confidence = documents['ResultItems'][i]['ScoreAttributes']['ScoreConfidence']
                  if score_confidence == 'VERY_HIGH' or score_confidence == 'HIGH':
                      text += documents['ResultItems'][i]['Content'] + "\n"
                      uris.add(documents['ResultItems'][i]['DocumentURI'])
          return (text, uris)

    3. The response is generated from Amazon Bedrock with the invokeLLM method. The following is a snippet of the invokeLLM method within the fulfillment function. Read more on inference parameters and system prompts to modify parameters that are passed into the Amazon Bedrock invoke model request.
      import json
      import boto3

      def invokeLLM(question, context, modelId):
          """
          This function takes in the question from the user, along with the Kendra responses as context to generate an answer
          for the user on the frontend.
          :param question: The question the user is asking that was asked via the frontend input text box.
          :param context: The response from the Kendra document retrieve query, used as context to generate a better
          answer.
          :param modelId: The ID of the Amazon Bedrock model to invoke.
          :return: Returns the final answer that will be provided to the end-user of the application who asked the original
          question.
          """
          # Setup Bedrock client
          bedrock = boto3.client('bedrock-runtime')
          # modelId selects the specific Anthropic model on Amazon Bedrock (passed in by the caller)
      
          # body of data with parameters that is passed into the bedrock invoke model request
          body = json.dumps({"max_tokens": 350,
                  "system": "You are a truthful AI assistant. Your goal is to provide informative and substantive responses to queries based on the documents provided. If you do not know the answer to a question, you truthfully say you do not know.",
                  "messages": [{"role": "user", "content": "Answer this user query: " + question + " with the following context: " + context}],
                  "anthropic_version": "bedrock-2023-05-31",
                  "temperature": 0,
                  "top_k": 250,
                  "top_p": 0.999})
      
          # Invoking the bedrock model with your specifications
          response = bedrock.invoke_model(body=body,
                                          modelId=modelId)
          # the body of the response that was generated
          response_body = json.loads(response.get('body').read())
          # retrieving the specific completion field, where your answer will be
          answer = response_body.get('content')
          # returning the answer as a final result, which ultimately gets returned to the end user
          return answer

    4. Finally, the response generated from Amazon Bedrock, along with the relevant referenced URLs, is returned to the end user.
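
    The sample ties these helper functions together in the fulfillment Lambda function. The following is a minimal sketch of what such a handler could look like, assuming the three helpers above and the Amazon Lex V2 Lambda event and response format; the environment variable names are illustrative, and the handler in the GitHub repository may differ in detail.

      import os

      # Illustrative environment variables; the actual CloudFormation stack may pass these differently.
      KENDRA_INDEX_ID = os.environ.get("KENDRA_INDEX_ID")
      MODEL_ID = os.environ.get("MODEL_ID", "anthropic.claude-3-sonnet-20240229-v1:0")

      def lambda_handler(event, context):
          """Fulfills the Amazon Lex fallback intent: FAQ lookup first, then retrieval plus generation."""
          question = event["inputTranscript"]

          # 1. Try the curated FAQs first.
          faq_answer, faq_url = search_Kendra_FAQ(question, KENDRA_INDEX_ID)
          if faq_answer:
              message = f"{faq_answer}\nSource: {faq_url}"
          else:
              # 2. Fall back to document retrieval plus generation with Amazon Bedrock.
              doc_context, uris = kendra_retrieve_document(question, KENDRA_INDEX_ID)
              if doc_context:
                  answer = invokeLLM(question, doc_context, MODEL_ID)
                  message = f"{answer}\nSources: {', '.join(uris)}"
              else:
                  message = "No relevant documents found"

          # 3. Close the Lex conversation turn and return the message to the user.
          return {
              "sessionState": {
                  "dialogAction": {"type": "Close"},
                  "intent": {"name": event["sessionState"]["intent"]["name"], "state": "Fulfilled"},
              },
              "messages": [{"contentType": "PlainText", "content": message}],
          }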

    When selecting websites to index, adhere to the AWS Acceptable Use Policy and other AWS terms. Remember that you can only use Amazon Kendra Web Crawler to index your own web pages or web pages that you have authorization to index. Visit the Amazon Kendra Web Crawler data source guide to learn more about using the web crawler as a data source. Using Amazon Kendra Web Crawler to aggressively crawl websites or web pages you don’t own is not considered acceptable use.

    Supported features

    The chat assistant supports the following features:

    1. Support for the following Anthropic models on Amazon Bedrock:
      • claude-v2
      • claude-3-haiku-20240307-v1:0
      • claude-instant-v1
      • claude-3-sonnet-20240229-v1:0
    2. Support for FAQs and the Amazon Kendra Web Crawler data source
    3. Returns FAQ answers only if the confidence score is VERY_HIGH
    4. Retrieves only documents from Amazon Kendra that have a HIGH or VERY_HIGH confidence score
    5. If documents with a high confidence score aren’t found, the chat assistant returns “No relevant documents found”

    Prerequisites

    To implement the solution, you need the following prerequisites:

    • Basic knowledge of AWS
    • An AWS account with access to Amazon S3 and Amazon Kendra
    • An S3 bucket to store your documents. For more information, see Step 1: Create your first S3 bucket and the Amazon S3 User Guide.
    • A Slack workspace to integrate the chat assistant
    • Permission to install Slack apps in your Slack workspace
    • Seed URLs for the Amazon Kendra Web Crawler data source
      • You’ll need authorization to crawl and index any websites provided
    • AWS CloudFormation for deploying the solution resources

    Build a generative AI Slack chat assistant

    To build a Slack application, use the following steps:

    1. Request model access on Amazon Bedrock for all Anthropic models
    2. Create an S3 bucket in the us-east-1 (N. Virginia) AWS Region.
    3. Upload the AIBot-LexJson.zip and SampleFAQ.csv files to the S3 bucket
    4. Launch the CloudFormation stack in the us-east-1 (N. Virginia) AWS Region to create the solution resources
    5. Enter a Stack name of your choice
    6. For S3BucketName, enter the name of the S3 bucket created in Step 2
    7. For S3KendraFAQKey, enter the name of the SampleFAQs uploaded to the S3 bucket in Step 3
    8. For S3LexBotKey, enter the name of the Amazon Lex .zip file uploaded to the S3 bucket in Step 3
    9. For SeedUrls, enter up to 10 URLs for the web crawler as a comma delimited list. In the example in this post, we give the publicly available Amazon Bedrock service page as the seed URL
    10. Leave the rest as defaults. Choose Next, then choose Next again on the Configure stack options page
    11. Acknowledge by selecting the box and choose Submit, as shown in the following screenshot
    12. Wait for the stack creation to complete
    13. Verify all resources are created
    14. Test on the AWS Management Console for Amazon Lex
      1. On the Amazon Lex console, choose your chat assistant ${YourStackName}-AIBot
      2. Choose Intents
      3. Choose Version 1 and choose Test, as shown in the following screenshot
      4. Select the AIBotProdAlias and choose Confirm, as shown in the following screenshot. If you want to make changes to the chat assistant, you can use the draft version, publish a new version, and assign the new version to the AIBotProdAlias. Learn more about Versioning and Aliases.
      5. Test the chat assistant with questions such as, “Which AWS service has 11 nines of durability?” and “What is the AWS Well-Architected Framework?” and verify the responses. The following table shows that there are three FAQs in the sample .csv file.
        Question: Which AWS service has 11 nines of durability?
        Answer: Amazon S3
        Source URI: https://aws.amazon.com/s3/

        Question: What is the AWS Well-Architected Framework?
        Answer: The AWS Well-Architected Framework enables customers and partners to review their architectures using a consistent approach and provides guidance to improve designs over time.
        Source URI: https://aws.amazon.com/architecture/well-architected/

        Question: In what Regions is Amazon Kendra available?
        Answer: Amazon Kendra is currently available in the following AWS Regions: Northern Virginia, Oregon, and Ireland
        Source URI: https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/
      6. The following screenshot shows the question “Which AWS service has 11 nines of durability?” and its response. You can observe that the response is the same as in the FAQ file and includes a link.
      7. Based on the pages you have crawled, ask a question in the chat. For this example, the publicly available Amazon Bedrock page was crawled and indexed. The following screenshot shows the question, “What are agents in Amazon Bedrock?” and a generated response that includes relevant links.
    15. To integrate the Amazon Lex chat assistant with Slack, see Integrating an Amazon Lex V2 bot with Slack. Choose the AIBotProdAlias under Alias in the Channel Integrations section

    Run sample queries to test the solution

    1. In Slack, go to the Apps section. In the dropdown menu, choose Manage and select Browse apps.
    2. Search for ${AIBot} in App Directory and choose the chat assistant. This will add the chat assistant to the Apps section in Slack. You can now start asking questions in the chat. The following screenshot shows the question “Which AWS service has 11 nines of durability?” and its response. You can observe that the response is the same as in the FAQ file and includes a link.
    3. The following screenshot shows the question, “What is the AWS Well-Architected Framework?” and its response.
    4. Based on the pages you have crawled, ask a question in the chat. For this example, the publicly available Amazon Bedrock page was crawled and indexed. The following screenshot shows the question, “What are agents in Amazon Bedrock?” and a generated response that includes relevant links.
    5. The following screenshot shows the question, “What is amazon polly?” Because there is no Amazon Polly documentation indexed, the chat assistant responds with “No relevant documents found,” as expected.

    These examples show how the chat assistant retrieves documents from Amazon Kendra and provides answers based on the documents retrieved. If no relevant documents are found, the chat assistant responds with “No relevant documents found.”

    Clean up

    To clean up the resources created by this solution:

    1. Delete the CloudFormation stack by navigating to the CloudFormation console
    2. Select the stack you created for this solution and choose Delete
    3. Confirm the deletion by entering the stack name in the provided field. This will remove all the resources created by the CloudFormation template, including the Amazon Kendra index, Amazon Lex chat assistant, Lambda function, and other related resources.

    Conclusion

    This post describes the development of a generative AI Slack application powered by Amazon Bedrock and Amazon Kendra. This is designed to be an internal-facing Slack chat assistant that helps answer questions related to the indexed content. The solution architecture includes Amazon Lex for intent identification, a Lambda function for fulfilling the fallback intent, Amazon Kendra for FAQ searches and indexing crawled web pages, and Amazon Bedrock for generating responses. The post walks through the deployment of the solution using a CloudFormation template, provides instructions for running sample queries, and discusses the steps for cleaning up the resources. Overall, this post demonstrates how to use various AWS services to build a powerful generative AI–powered chat assistant application.

    This solution demonstrates the power of generative AI in building intelligent chat assistants and search assistants. Explore the generative AI Slack chat assistant: Invite your teams to a Slack workspace and start getting answers to your indexed content and FAQs. Experiment with different use cases and see how you can harness the capabilities of services like Amazon Bedrock and Amazon Kendra to enhance your business operations. For more information about using Amazon Bedrock with Slack, refer to Deploy a Slack gateway for Amazon Bedrock.


    About the authors

    Kruthi Jayasimha Rao is a Partner Solutions Architect with a focus on AI and ML. She provides technical guidance to AWS Partners in following best practices to build secure, resilient, and highly available solutions in the AWS Cloud.

    Mohamed Mohamud is a Partner Solutions Architect with a focus on Data Analytics. He specializes in streaming analytics, helping partners build real-time data pipelines and analytics solutions on AWS. With expertise in services like Amazon Kinesis, Amazon MSK, and Amazon EMR, Mohamed enables data-driven decision-making through streaming analytics.

Read More

VQAScore: Evaluating and Improving Vision-Language Generative Models

Introduction

Text-to-image/video models like Midjourney, Imagen3, Stable Diffusion, and Sora can generate aesthetic, photo-realistic visuals from natural language prompts. For example, given “Several giant woolly mammoths approach, treading through a snowy meadow…”, Sora generates a corresponding video.

But how do we know if these models generate what we desire? For example, if the prompt is “The brown dog chases the black dog around a tree”, how can we tell if the model shows the dogs “chasing around a tree” rather than “playing in a backyard”? More generally, how should we evaluate these generative models? While humans can easily judge whether a generated image aligns with a prompt, large-scale human evaluation is costly. To address this, we introduce a new evaluation metric (VQAScore) and benchmark dataset (GenAI-Bench) [Lin et al., ECCV 2024] for automated evaluation of text-to-visual generative models. Our evaluation framework was recently employed by Google DeepMind to evaluate their Imagen3 model!

We introduce VQAScore [Lin et al., ECCV 2024] —a simple yet powerful metric to evaluate state-of-the-art generative models such as DALL-E 3, Midjourney, and Stable Diffusion (SD-XL). VQAScore aligns more closely with human judgments and significantly outperforms the popular CLIPScore [Hessel et al., 2021] on challenging compositional prompts collected from professional designers in our GenAI-Bench [Li et al., CVPR 2024].

Background

While state-of-the-art text-to-visual models perform well on simple prompts, they struggle with complex prompts which involve multiple objects and require higher-order reasoning like negation. Recent models like DALL-E 3 [Betker et al., OpenAI 2023] and Stable Diffusion [Esser et al., Stability AI 2024] address this by training on higher-quality image-text pairs (often using language models such as GPT-4 to rewrite captions) or using strong language encoders like T5 [Raffel et al., JMLR 2020].

As text-to-visual models advance, evaluating them has become a challenging task. To measure similarity between two images, perceptual metrics like Learned Perceptual Image Patch Similarity (LPIPS) [Zhang et al., CVPR 2018] use a pre-trained image encoder to embed and compare image features, with higher similarity indicating that the images look alike. For measuring similarity between a text prompt and an image (image-text alignment), the common practice is to rely on OpenAI’s pre-trained CLIP model [Radford et al., OpenAI 2021]. CLIP includes both an image encoder and a text encoder, trained on millions of image-text pairs, to embed images and texts into the same feature space, where higher similarity suggests stronger image-text alignment. This approach is commonly referred to as CLIPScore [Hessel et al., EMNLP 2021].
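
To make the comparison concrete, here is a minimal sketch of how a CLIPScore-style value can be computed with an off-the-shelf CLIP checkpoint from Hugging Face transformers. The checkpoint name and the 2.5 rescaling factor follow the common CLIPScore recipe from Hessel et al., but treat this as an illustration rather than the reference implementation.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    def clip_score(image: Image.Image, text: str) -> float:
        """CLIPScore-style alignment: rescaled cosine similarity of CLIP image and text embeddings."""
        inputs = processor(text=[text], images=image, return_tensors="pt", padding=True, truncation=True)
        with torch.no_grad():
            image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
            text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                               attention_mask=inputs["attention_mask"])
        image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
        text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
        cosine = (image_emb * text_emb).sum(dim=-1).item()
        # Hessel et al. clip negative similarities to zero and rescale by 2.5.
        return 2.5 * max(cosine, 0.0)

    # Example: clip_score(Image.open("generated.png"), "The brown dog chases the black dog around a tree")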

Previous evaluation metrics for generative models: Perceptual metrics like LPIPS use a pre-trained image encoder to embed the original and reconstructed images into two 1D vectors and then compute their distance. As a result, perceptually similar images have a lower LPIPS score (distance). On the other hand, CLIPScore uses the dual encoders of a pre-trained CLIP model to embed images and texts into the same space, where semantically aligned pairs will have a higher CLIPScore.

However, CLIPScore suffers from a notorious “bag-of-words” issue. This means that when embedding texts, CLIP can ignore word order, leading to mistakes like confusing “The moon is over the cow” with “The cow is over the moon”.

Examples from the challenging image-text matching benchmark Winoground [Thrush et al., CVPR 2022], where CLIPScore often assigns higher scores to incorrect image-text pairs. In general, CLIPScore struggles with multiple objects, attribute bindings, object relationships, and complex numerical (counting) and logical reasoning. In contrast, our VQAScore excels in these challenging scenarios.

Why is CLIPScore limited? Our prior work [Lin et al., ICML 2024], along with others [Yuksekgonul et al., ICLR 2023], suggests its bottleneck lies in its discriminative training approach. The structure of CLIP’s loss function causes it to maximize similarity between an image and its caption and minimize similarity between an image and a small set of unrelated captions. However, this structure allows for a shortcut — CLIP often minimizes similarity to negatives by simply recognizing main objects, ignoring finer details. In contrast, we suspect that generative vision-language models trained for image-to-text generation (e.g., image captioning) are more robust because they cannot rely on shortcuts—generating the correct text sequence requires a precise understanding of word order.

VQAScore: A Strong and Simple Text-to-Visual Metric

Based on generative vision-language models trained for visual-question-answering (VQA) tasks that generate an answer from an image and a question, we propose a simple metric, VQAScore. Given an image and a text prompt, we define their alignment as the probability of the model responding “Yes” to the question, “Does this image show ‘{text}’? Please answer yes or no.” For example, given an image and the text prompt “the cow over the moon”, we would compute the following probability:

P(“Yes” | image, “Does this figure show ‘the cow over the moon’? Please answer yes or no.”)

VQAScore is calculated as the probability of a visual-question-answering (VQA) model responding “Yes” to a simple yes-or-no question like, “Does this figure show [prompt]? Please answer yes or no.” VQAScore can be implemented in most VQA models trained with next-token prediction loss, where the model predicts the next token based on the current tokens. This figure illustrates the implementation of VQAScore: on the left, the image and question are tokenized and fed into an image-question encoder; on the right, an answer decoder calculates the probability of the next answer token (i.e., “Yes”) auto-regressively based on the output tokens from the image-question encoder.
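
As a concrete illustration of this computation, the sketch below scores an image-text pair with a generic decoder-only vision-language model loaded through Hugging Face transformers. The checkpoint name is a placeholder, the exact prompt or chat template depends on the chosen model, and taking the next-token probability of “Yes” at the last position assumes a decoder-only VLM; the authors’ released VQAScore code may differ.

    import torch
    from PIL import Image
    from transformers import AutoModelForVision2Seq, AutoProcessor

    MODEL_NAME = "your-generative-vlm-checkpoint"  # placeholder: any decoder-only, VQA-capable VLM
    processor = AutoProcessor.from_pretrained(MODEL_NAME)
    model = AutoModelForVision2Seq.from_pretrained(MODEL_NAME)

    def vqa_score(image: Image.Image, text: str) -> float:
        """VQAScore-style alignment: P('Yes' | image, yes-or-no question built from the text prompt)."""
        # Some models additionally require special image tokens or a chat template in the text input.
        question = f"Does this figure show '{text}'? Please answer yes or no."
        inputs = processor(images=image, text=question, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits        # (1, sequence_length, vocab_size)
        # For a decoder-only VLM, the last position predicts the first token of the answer.
        next_token_probs = logits[0, -1].softmax(dim=-1)
        # Most multimodal processors expose the underlying tokenizer.
        yes_id = processor.tokenizer.encode("Yes", add_special_tokens=False)[0]
        return next_token_probs[yes_id].item()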

Our paper [Lin et al., ECCV 2024] shows that VQAScore outperforms CLIPScore and all other evaluation metrics across benchmarks measuring correlation with human judgements on image-text alignment, including Winoground [Thrush et al., CVPR 2022], TIFA160 [Hu et al., ICCV 2023], Pick-a-pic [Kirstain et al., NeurIPS 2023]. VQAScore even outperforms metrics that use additional fine-tuning data or proprietary models like GPT-4 (Vision). These metrics can be grouped into three types:

(1) Human-feedback approaches, like ImageReward, PickScore, and Human Preference Score, fine-tune CLIP using human ratings of generated images.
(2) LLM-as-a-judge approaches, like VIEScore, use LLMs such as GPT-4 (Vision) to directly output image-text alignment scores, e.g., asking the model to output a score between 0 and 100.
(3) Divide-and-conquer approaches like TIFA, Davidsonian, and Gecko decompose text prompts into simpler question-answer pairs (often using LLMs like GPT-4) and then use VQA models to assess alignment based on answer accuracy.

Compared to these metrics, VQAScore offers several key advantages:

(1) No fine-tuning: VQAScore performs well using off-the-shelf VQA models without the need for fine-tuning on human feedback.
(2) Token probability is more precise than text generation: LLM-as-a-judge methods often assign similar and random scores (like 90) to most image-text pairs, regardless of alignment.
(3) No prompt decomposition: While divide-and-conquer approaches may seem promising, prompt decomposition is error-prone. For example, with the prompt “someone talks on the phone happily while another person sits angrily,” the state-of-the-art method Davidsonian wrongly asks irrelevant questions such as, “Is there another person?”

In addition, our paper also demonstrates VQAScore’s preliminary success in evaluating text-to-video and 3D generation. We are encouraged by recent work like Generative Verifier, which supports a similar approach for evaluating language models. Finally, DeepMind’s Imagen3 suggests that stronger models like Gemini could further enhance VQAScore, indicating that it scales well with future image-to-text models.

GenAI-Bench: A Compositional Text-to-Visual Generation Benchmark

During our studies, we found that previous text-to-visual benchmarks like COCO and PartiPrompt lacked sufficiently challenging prompts. To address this, we collected 1,600 real prompts from graphic designers using tools like Midjourney. This results in GenAI-Bench [Li et al., CVPR 2024], which covers a broader range of compositional reasoning and presents a tougher challenge to text-to-visual models.

GenAI-Bench [Li et al., CVPR 2024] reflects how users seek precise control in text-to-visual generation using compositional prompts. For example, users often add details by specifying compositions of objects, scenes, attributes, and relationships (spatial/action/part). Additionally, user prompts may involve higher-order reasoning, including counting, comparison, differentiation, and logic (negation/universality).

After gathering these diverse, real-world prompts, we collected 1-to-5 Likert-scale ratings on the generated images from state-of-the-art models like Midjourney and Stable Diffusion, with three annotators evaluating each image-text pair. We also discuss in the paper how these human ratings can be used to better evaluate future automated metrics.

We collect prompts from professional designers to ensure GenAI-Bench reflects real-world needs. Designers write prompts on general topics (e.g., food, animals, household objects) without copyrighted characters or celebrities. We carefully tag each prompt with its evaluated skills and hire human annotators to rate images and videos generated by state-of-the-art models.

Importantly, we found that most models still struggle with GenAI-Bench prompts, indicating significant room for improvement:

State-of-the-art models such as DALL-E 3, SD-XL, Pika, and Gen2 still fail to handle compositional prompts of GenAI-Bench!

Improving Text-to-Image Generation with VQAScore

Lastly, we demonstrate how VQAScore can improve text-to-image generation in a black-box manner [Liu et al., CVPR 2024] by selecting the highest-VQAScore images from as few as three generated candidates:

VQAScore can improve DALL-E 3 on challenging GenAI-Bench prompts using its black-box API to rank the three generated candidate images. We encourage readers to refer to our paper for the full experimental setup and human evaluation results!
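
Best-of-n selection is only a few lines once a scoring function is available. The sketch below reuses the vqa_score helper from the earlier snippet and assumes a hypothetical generate_image wrapper around the text-to-image API; it simply keeps the candidate with the highest VQAScore.

    def best_of_n(prompt: str, n: int = 3):
        """Generate n candidates and return the image with the highest VQAScore for the prompt."""
        candidates = [generate_image(prompt) for _ in range(n)]   # generate_image is a hypothetical API wrapper
        scores = [vqa_score(image, prompt) for image in candidates]
        best = max(range(n), key=lambda i: scores[i])
        return candidates[best], scores[best]

    # Example: best_image, score = best_of_n("someone talks on the phone happily while another person sits angrily")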

Conclusion

Metrics and benchmarks play a crucial role in the evolution of science. We hope that VQAScore and GenAI-Bench provide new insights into the evaluation of text-to-visual models and offer a robust, reproducible alternative to costly human evaluations.

References:

  • Lin et al., Evaluating Text-to-Visual Generation with Image-to-Text Generation. ECCV 2024.
  • Li et al., GenAI-Bench: Evaluating and Improving Compositional Text-to-Visual Generation. CVPR SynData 2024 Workshop, Best Short Paper.
  • Lin et al., Revisiting the Role of Language Priors in Vision-Language Models. ICML 2024.
  • Liu et al., Language Models as Black-Box Optimizers for Vision-Language Models. CVPR 2024.
  • Parashar et al., The Neglected Tails in Vision-Language Models. CVPR 2024.
  • Hessel et al., CLIPScore: A Reference-free Evaluation Metric for Image Captioning. EMNLP 2021.
  • Heusel et al., GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. NeurIPS 2017.
  • Betker et al., Improving Image Generation with Better Captions (DALL-E 3). OpenAI 2023.
  • Esser et al., Scaling Rectified Flow Transformers for High-Resolution Image Synthesis. Stability AI 2024.
  • Zhang et al., The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. CVPR 2018.
  • Thrush et al., Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality. CVPR 2022.
  • Yuksekgonul et al., When and why vision-language models behave like bags-of-words, and what to do about it? ICLR 2023.

Read More

Depth Pro: Sharp Monocular Metric Depth in Less Than a Second

We present a foundation model for zero-shot metric monocular depth estimation. Our model, Depth Pro, synthesizes high-resolution depth maps with unparalleled sharpness and high-frequency details. The predictions are metric, with absolute scale, without relying on the availability of metadata such as camera intrinsics. And the model is fast, producing a 2.25-megapixel depth map in 0.3 seconds on a standard GPU. These characteristics are enabled by a number of technical contributions, including an efficient multi-scale vision transformer for dense prediction, a training protocol that combines…

Apple Machine Learning Research

Improving How Machine Translations Handle Grammatical Gender Ambiguity

Machine Translation (MT) enables people to connect with others and engage with content across language barriers. Grammatical gender presents a difficult challenge for these systems, as some languages require specificity for terms that can be ambiguous or neutral in other languages. For example, when translating the English word “nurse” into Spanish, one must decide whether the feminine “enfermera” or the masculine “enfermero” is appropriate. However, particularly when contextual clues are absent, such as in translating a single sentence, a model cannot determine which would be correct. This…

Apple Machine Learning Research

Create your fashion assistant application using Amazon Titan models and Amazon Bedrock Agents

In the generative AI era, agents that simulate human actions and behaviors are emerging as a powerful tool for enterprises to create production-ready applications. Agents can interact with users, perform tasks, and exhibit decision-making abilities, mimicking humanlike intelligence. By combining agents with foundation models (FMs) from the Amazon Titan family in Amazon Bedrock, customers can develop multimodal, complex applications that enable the agent to understand and generate natural language or images.

For example, in the fashion retail industry, an assistant powered by agents and multimodal models can provide customers with a personalized and immersive experience. The assistant can engage in natural language conversations, understanding the customer’s preferences and intents. It can then use the multimodal capabilities to analyze images of clothing items and make recommendations based on the customer’s input. Additionally, the agent can generate visual aids, such as outfit suggestions, enhancing the overall customer experience.

In this post, we implement a fashion assistant agent using Amazon Bedrock Agents and Amazon Titan family models. The fashion assistant provides a personalized, multimodal conversational experience. Among other capabilities, the inpainting and outpainting features of Amazon Titan Image Generator can be used to generate fashion inspirations and edit user photos. Amazon Titan Multimodal Embeddings models can search a style database using either a text prompt or a reference image provided by the user to find similar styles. Anthropic Claude 3 Sonnet orchestrates the agent’s actions, for example, searching for the current weather to provide weather-appropriate outfit recommendations. A simple Streamlit web UI gives the user a convenient way to interact with the agent.

The fashion assistant agent can be smoothly integrated into existing ecommerce platforms or mobile applications, providing customers with a seamless and delightful experience. Customers can upload their own images, describe their desired style, or even provide a reference image, and the agent will generate personalized recommendations and visual inspirations.

The code used in this solution is available in the GitHub repository.

Solution overview

The fashion assistant agent uses the power of Amazon Titan models and Amazon Bedrock Agents to provide users with a comprehensive set of style-related functionalities:

  • Image-to-image or text-to-image search – This tool allows customers to find products similar to styles they like from the catalog, enhancing their user experience. We use the Titan Multimodal Embeddings model to embed each product image and store the embeddings in Amazon OpenSearch Serverless for future retrieval (see the embedding sketch after this list).
  • Text-to-image generation – If the desired style is not available in the database, this tool generates unique, customized images based on the user’s query, enabling the creation of personalized styles.
  • Weather API connection – By fetching weather information for a given location mentioned in the user’s prompt, the agent can suggest appropriate styles for the occasion, making sure the customer is dressed for the weather.
  • Outpainting – Users can upload an image and request to change the background, allowing them to visualize their preferred styles in different settings.
  • Inpainting – This tool enables users to modify specific clothing items in an uploaded image, such as changing the design or color, while keeping the background intact.
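
The image-to-image and text-to-image search in the first bullet relies on generating an embedding for each catalog image. The following is a minimal sketch of calling the Amazon Titan Multimodal Embeddings model through the Amazon Bedrock runtime; the model ID and request fields reflect the publicly documented API as we understand it, but check the current Amazon Bedrock documentation and the sample repository before relying on them.

    import base64
    import json

    import boto3

    bedrock_runtime = boto3.client("bedrock-runtime")

    def embed_image(image_path, description=""):
        """Return a Titan Multimodal embedding for a catalog image, optionally paired with text."""
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")

        body = {"inputImage": image_b64}
        if description:
            body["inputText"] = description  # optional text paired with the image

        response = bedrock_runtime.invoke_model(
            modelId="amazon.titan-embed-image-v1",
            body=json.dumps(body),
        )
        return json.loads(response["body"].read())["embedding"]

The resulting vectors can then be indexed into Amazon OpenSearch Serverless for k-nearest-neighbor search, and the same call with only a text input (or a user-provided reference image) produces the query vector.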

The following flow chart illustrates the decision-making process:

Agent Execution Flowchart

And the corresponding architecture diagram:

Prerequisites

To set up the fashion assistant agent, make sure you have the following:

  • An active AWS account and AWS Identity and Access Management (IAM) role with Amazon Bedrock, AWS Lambda, and Amazon Simple Storage Service (Amazon S3) access
  • Installation of required Python libraries such as Streamlit
  • Anthropic Claude 3 Sonnet, Amazon Titan Image Generator and Amazon Titan Multimodal Embeddings models enabled in Amazon Bedrock. You can confirm these are enabled on the Model access page of the Amazon Bedrock console. If these models are enabled, the access status will show as Access granted, as shown in the following screenshot.

Before executing the notebook provided in the GitHub repo to start building the infrastructure, make sure your AWS account has permission to:

  • Create managed IAM roles and policies
  • Create and invoke Lambda functions
  • Create, read from, and write to S3 buckets
  • Access and manage Amazon Bedrock agents and models

If you want to enable the image-to-image or text-to-image search capabilities, additional permissions for your AWS account are required:

  • Create a security policy, access policy, collection, index, and index mapping on OpenSearch Serverless
  • Call the BatchGetCollection on OpenSearch Serverless

Set up the fashion assistant agent

To set up the fashion assistant agent, follow these steps:

  1. Clone the GitHub repository using the command
    git clone

  2. Complete the prerequisites to grant sufficient permissions
  3. Follow the deployment steps outlined in the README.md
  4. (Optional) If you want to use the image_lookup feature, execute code snippets in opensearch_ingest.ipynb to use Amazon Titan Multimodal Embeddings to embed and store sample images
  5. Run the Streamlit UI to interact with the agent using the command
    streamlit run frontend/app.py

By following these steps, you can create a powerful and engaging fashion assistant agent that combines the capabilities of Amazon Titan models with the automation and decision-making capabilities of Amazon Bedrock Agents.

Test the fashion assistant

After the fashion assistant is set up, you can interact with it through the Streamlit UI. Follow these steps:

  1. Navigate to your Streamlit UI, as shown in the following screenshot

  2. Upload an image or enter a text prompt describing the desired style, according to the desired action, for example, image search, image generation, outpainting, or inpainting. The following screenshot shows an example prompt.

  3. Press enter to send the prompt to the agent. You can view the chain-of-thought (CoT) process of the agent in the UI, as shown in the following screenshot

  4. When the response is ready, you can view the agent’s response in the UI, as shown in the following screenshot. The response may include generated images, similar style recommendations, or modified images based on your request. You can download the generated images directly from the UI or check the image in your S3 bucket.

Clean up

To avoid unnecessary costs, make sure to delete the resources used in this solution. You can do this by running the following command.

cdk destroy

Conclusion

The fashion assistant agent, powered by Amazon Titan models and Amazon Bedrock Agents, is an example of how retailers can create innovative applications that enhance the customer experience and drive business growth. By using this solution, retailers can gain a competitive edge, offering personalized style recommendations, visual inspirations, and interactive fashion advice to their customers.

We encourage you to explore the potential of building more agents like this fashion assistant by checking out the examples available on the aws-samples GitHub repository.


 About the Authors

Akarsha Sehwag is a Data Scientist and ML Engineer in AWS Professional Services with over 5 years of experience building ML-based solutions. Leveraging her expertise in computer vision and deep learning, she empowers customers to harness the power of ML in the AWS Cloud efficiently. With the advent of generative AI, she has worked with numerous customers to identify good use cases and build them into production-ready solutions.

Yanyan Zhang is a Senior Generative AI Data Scientist at Amazon Web Services, where she has been working on cutting-edge AI/ML technologies as a Generative AI Specialist, helping customers leverage GenAI to achieve their desired outcomes. Yanyan graduated from Texas A&M University with a Ph.D. degree in Electrical Engineering. Outside of work, she loves traveling, working out and exploring new things.

Antonia Wiebeler is a Data Scientist at the AWS Generative AI Innovation Center, where she enjoys building proofs of concept for customers. Her passion is exploring how generative AI can solve real-world problems and create value for customers. When she is not coding, she enjoys running and competing in triathlons.

Alex Newton is a Data Scientist at the AWS Generative AI Innovation Center, helping customers solve complex problems with generative AI and machine learning. He enjoys applying state of the art ML solutions to solve real world challenges. In his free time you’ll find Alex playing in a band or watching live music.

Chris Pecora is a Generative AI Data Scientist at Amazon Web Services. He is passionate about building innovative products and solutions while also focused on customer-obsessed science. When not running experiments and keeping up with the latest developments in generative AI, he loves spending time with his kids.

Maira Ladeira Tanke is a Senior Generative AI Data Scientist at AWS. With a background in machine learning, she has over 10 years of experience architecting and building AI applications with customers across industries. As a technical lead, she helps customers accelerate their achievement of business value through generative AI solutions on Amazon Bedrock. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.

Read More

How Aviva built a scalable, secure, and reliable MLOps platform using Amazon SageMaker

This post is co-written with Dean Steel and Simon Gatie from Aviva.

With a presence in 16 countries and serving over 33 million customers, Aviva is a leading insurance company headquartered in London, UK. With a history dating back to 1696, Aviva is one of the oldest and most established financial services organizations in the world. Aviva’s mission is to help people protect what matters most to them—be it their health, home, family, or financial future. To achieve this effectively, Aviva harnesses the power of machine learning (ML) across more than 70 use cases. Previously, ML models at Aviva were developed using a graphical UI-driven tool and deployed manually. This approach led to data scientists spending more than 50% of their time on operational tasks, leaving little room for innovation, and posed challenges in monitoring model performance in production.

In this post, we describe how Aviva built a fully serverless MLOps platform based on the AWS Enterprise MLOps Framework and Amazon SageMaker to integrate DevOps best practices into the ML lifecycle. This solution establishes MLOps practices to standardize model development, streamline ML model deployment, and provide consistent monitoring. We illustrate the entire setup of the MLOps platform using a real-world use case that Aviva has adopted as its first ML use case.

The Challenge: Deploying and operating ML models at scale

Approximately 47% of ML projects never reach production, according to Gartner. Despite the advancements in open source data science frameworks and cloud services, deploying and operating these models remains a significant challenge for organizations. This struggle highlights the importance of establishing consistent processes, integrating effective monitoring, and investing in the necessary technical and cultural foundations for a successful MLOps implementation.

For companies like Aviva, which handles approximately 400,000 insurance claims annually, with expenditures of about £3 billion in settlements, the pressure to deliver a seamless digital experience to customers is immense. To meet this demand amidst rising claim volumes, Aviva recognizes the need for increased automation through AI technology. Therefore, developing and deploying more ML models is crucial to support their growing workload.

To prove the platform can handle the onboarding and industrialization of ML models, Aviva picked their Remedy use case as their first project. This use case concerns a claims management system that employs a data-driven approach to determine whether submitted car insurance claims qualify as total loss or repair cases, as illustrated in the following diagram.

Remedy Use Case

The workflow consists of the following steps:

  1. The workflow begins when a customer experiences a car accident.
  2. The customer contacts Aviva, providing information about the incident and details about the damage.
  3. To determine the estimated cost of repair, 14 ML models and a set of business rules are used to process the request.
  4. The estimated cost is compared with the car’s current market value from external data sources.
  5. Information related to similar cars for sale nearby is included in the analysis.
  6. Based on the processed data, a recommendation is made by the model to either repair or write off the car. This recommendation, along with the supporting data, is provided to the claims handler, and the pipeline reaches its final state.

The successful deployment and evaluation of the Remedy use case on the MLOps platform was intended to serve as a blueprint for future use cases, providing maximum efficiency by using templated solutions.

Solution overview of the MLOps platform

To handle the complexity of operationalizing ML models at scale, AWS provides an MLOps offering called the AWS Enterprise MLOps Framework, which can be used for a wide variety of use cases. The offering encapsulates a best-practices approach to building and managing MLOps platforms, based on the consolidated knowledge gained from a multitude of customer engagements carried out by AWS Professional Services over the last 5 years. The proposed baseline architecture can be logically divided into four building blocks that are sequentially deployed into the provided AWS accounts, as illustrated in the following diagram.

ML Ops Framework

The building blocks are as follows:

  • Networking – A virtual private cloud (VPC), subnets, security groups, and VPC endpoints are deployed across all accounts.
  • Amazon SageMaker Studio – SageMaker Studio offers a fully integrated development environment (IDE) for ML, acting as a data science workbench and control panel for all ML workloads.
  • Amazon SageMaker Projects templates – These ready-made infrastructure sets cover the ML lifecycle, including continuous integration and delivery (CI/CD) pipelines and seed code. You can launch these from SageMaker Studio with a few clicks, either choosing from preexisting templates or creating custom ones.
  • Seed code – This refers to the data science code tailored for a specific use case, divided between two repositories: training (covering processing, training, and model registration) and inference (related to SageMaker endpoints). The majority of time in developing a use case should be dedicated to modifying this code.

The framework implements the infrastructure deployment from a primary governance account to separate development, staging, and production accounts. Developers can use the AWS Cloud Development Kit (AWS CDK) to customize the solution to align with the company’s specific account setup. In adapting the AWS Enterprise MLOps Framework to a three-account structure, Aviva has designated accounts as follows: development, staging, and production. This structure is depicted in the following architecture diagram. The governance components, which facilitate model promotions with consistent processes across accounts, have been integrated into the development account.

Architecture Diagram
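As a rough illustration of this kind of customization, the following AWS CDK (Python) sketch shows how platform stacks might be pinned to separate development, staging, and production accounts. The account IDs, Region, and stack names are placeholders, not Aviva’s actual configuration.

# Hypothetical sketch: account IDs, Region, and stack names are placeholders.
from aws_cdk import App, Environment, Stack
from constructs import Construct


class MlopsPlatformStack(Stack):
    """Placeholder for the MLOps platform infrastructure (networking, SageMaker Studio, and so on)."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Platform resources (VPC, endpoints, Studio domain, project templates) would be defined here.


app = App()

# One stack instance per target account, mirroring the three-account structure.
MlopsPlatformStack(app, "MlopsDev", env=Environment(account="111111111111", region="eu-west-2"))
MlopsPlatformStack(app, "MlopsStaging", env=Environment(account="222222222222", region="eu-west-2"))
MlopsPlatformStack(app, "MlopsProd", env=Environment(account="333333333333", region="eu-west-2"))

app.synth()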

Building reusable ML pipelines

The processing, training, and inference code for the Remedy use case was developed by Aviva’s data science team in SageMaker Studio, a cloud-based environment designed for collaborative work and rapid experimentation. When experimentation is complete, the resulting seed code is pushed to an AWS CodeCommit repository, initiating the CI/CD pipeline for the construction of a SageMaker pipeline. This pipeline comprises a series of interconnected steps for data processing, model training, parameter tuning, model evaluation, and the registration of the generated models in the Amazon SageMaker Model Registry.

SageMaker Pipeline

Amazon SageMaker Automatic Model Tuning enabled Aviva to utilize advanced tuning strategies and overcome the complexities associated with implementing parallelism and distributed computing. The initial step involved a hyperparameter tuning process (Bayesian optimization), during which approximately 100 model variations were trained (5 steps with 20 models trained concurrently in each step). This feature integrates with Amazon SageMaker Experiments to provide data scientists with insights into the tuning process. The optimal model is then evaluated in terms of accuracy, and if it exceeds a use case-specific threshold, it is registered in the SageMaker Model Registry. A custom approval step was constructed, such that only Aviva’s lead data scientist can permit the deployment of a model through a CI/CD pipeline to a SageMaker real-time inference endpoint in the development environment for further testing and subsequent promotion to the staging and production environment.
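The following SageMaker Python SDK sketch illustrates how such a Bayesian tuning job can be configured with roughly 100 candidate models trained 20 at a time. It is a minimal, hypothetical example: the algorithm, hyperparameter ranges, objective metric, role, and S3 paths are placeholders rather than the actual Remedy configuration.

# Hypothetical sketch: algorithm, ranges, metric, role, and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::111111111111:role/SageMakerExecutionRole"  # placeholder execution role

# Built-in XGBoost container used purely for illustration.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/remedy/models/",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)

# Bayesian search over ~100 candidates, trained 20 at a time,
# matching the 5 steps x 20 concurrent models described above.
tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    strategy="Bayesian",
    max_jobs=100,
    max_parallel_jobs=20,
)

tuner.fit({
    "train": "s3://example-bucket/remedy/train/",
    "validation": "s3://example-bucket/remedy/validation/",
})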

Serverless workflow for orchestrating ML model inference

To realize the actual business value of Aviva’s ML model, it was necessary to integrate the inference logic with Aviva’s internal business systems. The inference workflow is responsible for combining the model predictions, external data, and business logic to generate a recommendation for claims handlers. The recommendation is based on three possible outcomes:

  • Write off a vehicle (expected repairs cost exceeds the value of the vehicle)
  • Seek a repair (value of the vehicle exceeds repair cost)
  • Require further investigation when the estimated repair cost and the vehicle’s replacement value are too close to call

The following diagram illustrates the workflow.

Inference Workflow

The workflow starts with a request to an API endpoint hosted on Amazon API Gateway originating from a claims management system, which invokes an AWS Step Functions workflow that uses AWS Lambda to complete the following steps:

  1. The input data of the REST API request is transformed into encoded features, which are consumed by the ML models.
  2. ML model predictions are generated by feeding the input to the SageMaker real-time inference endpoints. Because Aviva processes daily claims at irregular intervals, real-time inference endpoints help overcome the challenge of providing predictions consistently at low latency.
  3. ML model predictions are further processed by custom business logic to derive a final decision (one of the three aforementioned options); a simplified handler sketch follows this list.
  4. The final decision, along with the generated data, is consolidated and transmitted back to the claims management system as a REST API response.
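To make steps 2 and 3 concrete, the following Lambda-style handler is a minimal sketch of how a prediction could be retrieved from a SageMaker real-time endpoint and combined with a simple decision rule. The endpoint name, payload shape, and thresholds are illustrative assumptions; the actual Remedy implementation uses 14 models and richer business rules.

# Hypothetical sketch: endpoint name, payload shape, and thresholds are assumptions,
# not the actual Remedy implementation.
import json
import os

import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = os.environ.get("ENDPOINT_NAME", "remedy-repair-cost-endpoint")  # placeholder
BORDERLINE_MARGIN = 0.10  # illustrative 10% band around the market value


def handler(event, context):
    # Step 1 (simplified): the claim payload is assumed to already contain encoded features.
    features = event["encoded_features"]
    market_value = float(event["market_value"])

    # Step 2: invoke the real-time inference endpoint for a repair-cost prediction.
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    predicted_repair_cost = float(json.loads(response["Body"].read())["predictions"][0])

    # Step 3: simple business rule producing one of the three outcomes.
    if predicted_repair_cost > market_value * (1 + BORDERLINE_MARGIN):
        decision = "write_off"
    elif predicted_repair_cost < market_value * (1 - BORDERLINE_MARGIN):
        decision = "repair"
    else:
        decision = "investigate"

    # Step 4: return the consolidated result to the calling Step Functions workflow.
    return {
        "decision": decision,
        "predicted_repair_cost": predicted_repair_cost,
        "market_value": market_value,
    }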

Monitor ML model decisions to elevate confidence amongst users

The ability to obtain real-time access to detailed data for each state machine run and task is critically important for effective oversight and enhancement of the system. This includes providing claims handlers with comprehensive details behind decision summaries, such as model outputs, external API calls, and applied business logic, to make sure recommendations are based on accurate and complete information. Snowflake is the preferred data platform, and it receives data from Step Functions state machine runs through Amazon CloudWatch Logs. A series of filters screens for data pertinent to the business. This data is then transmitted to an Amazon Data Firehose delivery stream and subsequently relayed to an Amazon Simple Storage Service (Amazon S3) bucket, which is accessed by Snowflake. The data generated by all runs is used by Aviva business analysts to create dashboards and management reports, facilitating insights such as monthly views of total losses by region or average repair costs by vehicle manufacturer and model.
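As a rough illustration of this logging path, the following sketch shows how a CloudWatch Logs subscription filter can forward matching state machine log events to a Firehose delivery stream. The log group, delivery stream ARN, IAM role, and filter pattern are placeholders and would differ from Aviva’s actual configuration.

# Hypothetical sketch: log group, delivery stream ARN, role, and filter pattern are placeholders.
import boto3

logs = boto3.client("logs")

logs.put_subscription_filter(
    logGroupName="/aws/vendedlogs/states/remedy-inference",  # placeholder Step Functions log group
    filterName="remedy-business-events-to-firehose",
    filterPattern='{ $.type = "TaskSucceeded" }',  # keep only events relevant to the business
    destinationArn="arn:aws:firehose:eu-west-2:111111111111:deliverystream/remedy-to-s3",
    roleArn="arn:aws:iam::111111111111:role/CWLtoFirehoseRole",  # allows CloudWatch Logs to write to Firehose
)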

Security

The described solution processes personally identifiable information (PII), making customer data protection the core security focus of the solution. The customer data is protected by networking restrictions: processing runs inside the VPC, where data is logically separated in transit. The data is encrypted in transit between processing steps and encrypted at rest using AWS Key Management Service (AWS KMS). Access to production customer data is restricted on a need-to-know basis, and only authorized parties are allowed to access the production environment where this data resides.

The second security focus of the solution is protecting Aviva’s intellectual property. The code the data scientists and engineers work on is stored securely in the dev AWS account, private to Aviva, in CodeCommit Git repositories. The training data and the artifacts of the trained models are stored securely in S3 buckets in the dev account, protected by AWS KMS encryption at rest, with AWS Identity and Access Management (IAM) policies restricting bucket access to only the authorized SageMaker endpoints. The code pipelines are also private to the account and reside in the customer’s AWS environment.
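For illustration, the following AWS CDK (Python) sketch shows one way such a training-data bucket could be defined with KMS encryption at rest, enforced TLS, and blocked public access. The construct names are placeholders, and Aviva’s actual bucket and IAM policies are more fine-grained than this minimal example.

# Hypothetical sketch: construct IDs are placeholders, not Aviva's actual resources.
from aws_cdk import RemovalPolicy, Stack
from aws_cdk import aws_kms as kms
from aws_cdk import aws_s3 as s3
from constructs import Construct


class TrainingDataStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Customer-managed KMS key for encryption at rest.
        key = kms.Key(self, "TrainingDataKey", enable_key_rotation=True)

        # Training data and model artifact bucket: KMS-encrypted, TLS-only, no public access.
        s3.Bucket(
            self,
            "TrainingDataBucket",
            encryption=s3.BucketEncryption.KMS,
            encryption_key=key,
            enforce_ssl=True,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            versioned=True,
            removal_policy=RemovalPolicy.RETAIN,
        )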

The auditability of the workflows is provided by logging the steps of inference and decision-making in CloudWatch Logs. The logs are also encrypted at rest with AWS KMS and are configured with a lifecycle policy, guaranteeing availability of audit information for the required compliance period. To maintain the security of the project and operate it securely, the accounts are enabled with Amazon GuardDuty and AWS Config, and AWS CloudTrail is used to monitor activity within the accounts. The software that needs to be monitored for security vulnerabilities resides primarily in the Lambda functions implementing the business workflows; the processing code is primarily written in Python using libraries that are periodically updated.

Conclusion

This post provided an overview of the partnership between Aviva and AWS, which resulted in the construction of a scalable MLOps platform. This platform was developed using the open source AWS Enterprise MLOps Framework, which integrated DevOps best practices into the ML lifecycle. Aviva is now capable of replicating consistent processes and deploying hundreds of ML use cases in weeks rather than months. Furthermore, Aviva has transitioned entirely to a pay-as-you-go model, resulting in a 90% reduction in infrastructure costs compared to the company’s previous on-premises ML platform solution.

Explore the AWS Enterprise MLOps Framework on GitHub and learn more about MLOps on Amazon SageMaker to see how it can accelerate your organization’s MLOps journey.


About the Authors

Dean Steel is a Senior MLOps Engineer at Aviva with a background in Data Science and actuarial work. He is passionate about all forms of AI/ML with experience developing and deploying a diverse range of models for insurance-specific applications, from large transformers through to linear models. With an engineering focus, Dean is a strong advocate of combining AI/ML with DevSecOps in the cloud using AWS. In his spare time, Dean enjoys exploring music technology, restaurants and film.

Simon Gatie, Principal Analytics Domain Authority at Aviva in Norwich, brings a diverse background in physics, accountancy, IT, and data science to his role. He leads machine learning projects at Aviva, driving innovation in data science and advanced technologies for financial services.

Gabriel Rodriguez is a Machine Learning Engineer at AWS Professional Services in Zurich. In his current role, he has helped customers achieve their business goals on a variety of ML use cases, ranging from setting up MLOps pipelines to developing a fraud detection application. Whenever he is not working, he enjoys doing physical exercises, listening to podcasts, or traveling.

Marco Geiger is a Machine Learning Engineer at AWS Professional Services based in Zurich. He works with customers from various industries to develop machine learning solutions that use the power of data for achieving business goals and innovate on behalf of the customer. Besides work, Marco is a passionate hiker, mountain biker, football player, and hobby barista.

Andrew Odendaal is a Senior DevOps Consultant at AWS Professional Services based in Dubai. He works across a wide range of customers and industries to bridge the gap between software and operations teams, and he provides guidance and best practices for senior management when he’s not busy automating something. Outside of work, Andrew is a family man who loves nothing more than a binge-watching marathon with some good coffee on tap.
