NVIDIA Honors Partners of the Year in Europe, Middle East, Africa

NVIDIA today recognized 18 partners in Europe, the Middle East and Africa for their achievements and commitment to driving AI adoption.

The recipients were honored at the annual EMEA Partner Day hosted by the NVIDIA Partner Network (NPN). The awards span seven categories that highlight the various ways partners work with NVIDIA to transform the region’s industries with AI.

“This year marks another milestone for NVIDIA and our partners across EMEA as we pioneer technological breakthroughs and unlock new business opportunities using NVIDIA’s full-stack platform,” said Dirk Barfuss, director of EMEA channel at NVIDIA. “These awards celebrate our partners’ dedication and expertise in delivering groundbreaking solutions that drive cost efficiencies, enhance productivity and inspire innovation.”

The 2024 NPN award winners for EMEA are:

Rising Star Awards

  • Vesper Technologies received the Rising Star Northern Europe award for its exceptional revenue growth and broad customer base deploying NVIDIA AI solutions in data centers. The company has demonstrated outstanding growth in recent years, augmenting the success of its existing business.
  • AMBER AI & Data Science Solutions GmbH received the Rising Star Central Europe award for its revenue growth of more than 100% across the complete portfolio of NVIDIA technologies. Through extensive collaboration with NVIDIA, the company has become a cornerstone of the NVIDIA partner landscape in Germany.
  • HIPER Global Enterprise Ltd. received the Rising Star Southern Europe & Middle East award for its excellence in serving its broad customer base with NVIDIA compute technologies. Last year, it supported one of the largest customer projects in the region, further accelerating its growth rate.

Star Performer Awards

  • Boston Limited received the Star Performer Northern Europe award for its consistent success in delivering full-stack implementations of NVIDIA technologies for customers across industries. Over the last year, the company achieved record revenue growth across its business areas.
  • DELTA Computer Products GmbH received the Star Performer Central Europe award for its outstanding sales achievements and strong customer relationships. With a massive technical knowledge base, the company has served as a trusted advisor for customers deploying NVIDIA technologies across industry, higher education and research.
  • COMMit DMCC received the Star Performer Southern Europe & Middle East award for its exceptional execution of strategic and complex solutions built on NVIDIA technologies, which led to record revenues for the United Arab Emirates-based company.

Distributor of the Year

  • PNY received the Distributor of the Year award for the third consecutive year, underscoring its consistent investment in technology training and commitment to providing NVIDIA accelerated computing platforms and software across markets.
  • TD Synnex received the Networking Distributor of the Year award for the second year in a row, highlighting its massive investments in NVIDIA’s portfolio of technologies — especially networking — and dedication to delivering technical expertise to customers.

Go-to-Market Excellence 

  • Bynet Data Communications Ltd. received the Go-to-Market Excellence award for its collaboration with NVIDIA regional leads to devise and execute effective go-to-market strategies for the Israeli market. This included identifying key opportunities and creating localized marketing campaigns. These efforts led to the successful installation of NVIDIA DGX SuperPODs across several new industries in the region.
  • Vesper Technologies was Highly Commended in the Go-to-Market Excellence category for its fully integrated go-to-market strategy around the launch of the NVIDIA GH200 Grace Hopper Superchip. The company successfully deployed a results-driven marketing campaign, demonstrated a commitment to technical training and developed a pre-sales trial and evaluation platform.
  • M Computers s.r.o. was Highly Commended in the Go-to-Market Excellence category for its success and leadership in engaging AI customers in eastern Europe with NVIDIA technologies. The company’s marketing efforts, including speeches at AI events and social media campaigns, helped lead to the first NVIDIA DGX H100 and NVIDIA Grace CPU Superchip projects in the region.

Industry Innovation 

  • WPP received the Industry Innovation award for its innovative applications of AI and NVIDIA technology in the marketing and advertising sector. The company worked with NVIDIA to build a groundbreaking generative AI-powered content engine, built on the NVIDIA Omniverse platform, that enables the creation of brand-consistent content at scale.
  • Ascon Systems was Highly Commended in the Industry Innovation category for its cutting-edge Industrial Metaverse Portal, powered by NVIDIA Omniverse, that helped transform BMW Group’s manufacturing processes with real-time product control and enhanced visualization and interaction.
  • Gcore was Highly Commended in the Industry Innovation category for its creation of the first speech-to-text technology for Luxembourgish, using its fine-tuned LuxemBERT AI model. The technology integrates seamlessly into corporate systems and Luxembourgish messaging platforms, fostering the preservation of the traditionally spoken language, which lacked adequate tools for written communication.

Pioneer

  • Arrow Electronics – Intelligent Business Solutions received the Pioneer Award for its work promoting the NVIDIA IGX Orin platform for healthcare applications and building strategies to drive adoption of the technology. The company’s innovative approach and support led to the first integration of the NVIDIA IGX Orin Developer Kit with an NVIDIA RTX 6000 Ada Generation GPU for a robotic surgery platform.

Consulting Partner of the Year

  • SoftServe received the Consulting Partner of the Year award for its excellence in working with partners to drive the adoption of NVIDIA’s full-stack technologies, helping transform customers’ business with generative AI and NVIDIA Omniverse. Through its SoftServe University corporate learning hub, SoftServe trained its employees, customers and partners to expertly use NVIDIA technology.
  • Deloitte was Highly Commended in the Consulting Partner of the Year category for its focus on building sales and technical skills, efforts to deliver meaningful impact through projects and go-to-market strategy that helped drive enterprise-level AI transformation in the region.
  • Data Monsters was Highly Commended in the Consulting Partner of the Year category for its development of a virtual assistant with lifelike hearing, speech and animation capabilities using NVIDIA Avatar Cloud Engine and large language models.

Learn how to join NPN, or find a local NPN partner.

Seeing Beyond: Living Optics CEO Robin Wang on Democratizing Hyperspectral Imaging

Step into the realm of the unseen with Robin Wang, CEO of Living Optics. The startup cofounder discusses the power of hyperspectral imaging with AI Podcast host Noah Kravitz in an episode recorded live at the NVIDIA GTC global AI conference. Living Optics’ hyperspectral imaging camera, which can capture visual data across 96 colors, reveals details invisible to the human eye. Potential applications range from monitoring plant health to detecting cracks in bridges. The startup aims to empower users across industries to gain new insights from richer, more informative datasets fueled by hyperspectral imaging technology.

Living Optics is a member of the NVIDIA Inception program for cutting-edge startups.

Stay tuned for more episodes recorded live from GTC.

Time Stamps

1:05: What is hyperspectral imaging?

1:45: The Living Optics camera’s ability to capture 96 colors

3:36: Where is hyperspectral imaging being used, and why is it so important?

7:19: How are hyperspectral images represented and accessed by the user?

9:34: Other use cases of hyperspectral imaging

13:07: What’s unique about Living Optics’ hyperspectral imaging camera?

18:36: Breakthroughs, challenges during the technology’s development

23:27: What’s next for Living Optics and hyperspectral imaging?

You Might Also Like…

Dotlumen CEO Cornel Amariei on Assistive Technology for the Visually Impaired – Ep. 217

Dotlumen is illuminating a new technology to help people with visual impairments navigate the world. In this episode of NVIDIA’s AI Podcast, recorded live at the NVIDIA GTC global AI conference, host Noah Kravitz spoke with the Romanian startup’s founder and CEO, Cornel Amariei, about developing its flagship Dotlumen Glasses.

DigitalPath’s Ethan Higgins on Using AI to Fight Wildfires – Ep. 211

DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire-fighting agency CAL FIRE and the University of California, San Diego.

MosaicML’s Naveen Rao on Making Custom LLMs More Accessible – Ep. 199

Startup MosaicML is on a mission to help the AI community enhance prediction accuracy, decrease costs, and save time by providing tools for easy training and deployment of large AI models. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with MosaicML CEO and co-founder Naveen Rao about how the company aims to democratize access to large language models.

Peter Ma on Using AI to Find Promising Signals for Alien Life – Ep. 191

In this episode of the NVIDIA AI Podcast, host Noah Kravitz interviews Peter Ma, an undergraduate student at the University of Toronto, about how he developed an AI algorithm that outperformed traditional methods in the search for extraterrestrial intelligence.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Research Focus: Week of April 15, 2024

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Appropriate reliance on Generative AI: Research synthesis

Appropriate reliance on AI happens when people accept correct AI outputs and reject incorrect ones. It requires users of AI systems to know when to trust the AI and when to trust themselves. But fostering appropriate reliance comes with new complexities when generative AI (genAI) systems are involved. Though their capabilities are advancing, genAI systems, which use generative models to produce content such as text, music, images, and videos, have limitations as well. Inappropriate reliance – either underreliance or overreliance – on genAI can have negative consequences, such as poor task performance and even product abandonment.

In a recent paper: Appropriate reliance on Generative AI: Research synthesis, researchers from Microsoft, who reviewed 50 papers from various disciplines, provide an overview of the factors that affect overreliance on genAI, the effectiveness of different mitigation strategies for overreliance on genAI, and potential design strategies to facilitate appropriate reliance on genAI. 


Characterizing Power Management Opportunities for LLMs in the Cloud

Cloud providers and datacenter operators are grappling with increased demand for graphics processing units (GPUs) due to expanding use of large language models (LLMs). To try to keep up, enterprises are exploring various means to address the challenge, such as power oversubscription and adding more servers. Proper power usage analysis and management could help providers meet demand safely and more efficiently. 

In a recent paper: Characterizing Power Management Opportunities for LLMs in the Cloud, researchers from Microsoft analyze power patterns for several popular, open-source LLMs across commonly used configurations and identify opportunities to improve power management for LLMs in the cloud. They present a new framework called POLCA, which enables power oversubscription in LLM inference clouds. POLCA is robust, reliable, and readily deployable. Using open-source models to replicate the power patterns observed in production, POLCA simulations demonstrate it could deploy 30% more servers in existing clusters while incurring minimal power throttling events. POLCA improves power efficiency, reduces the need for additional energy sources and datacenters, and helps to promptly meet demand for running additional LLM workloads. 

LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression

Various prompting techniques, such as chain-of-thought (CoT), in-context learning (ICL), and retrieval augmented generation (RAG), can empower large language models (LLMs) to handle complex and varied tasks through rich and informative prompts. However, these prompts are lengthy, sometimes exceeding tens of thousands of tokens, which increases computational and financial overhead and degrades the LLMs’ ability to perceive information. Recent efforts to compress prompts in a task-aware manner, without losing essential information, have resulted in shorter prompts tailored to a specific task or query. This typically enhances performance on downstream tasks, particularly in question answering. However, the task-specific features present challenges in efficiency and generalizability. 

In a recent paper: LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression, researchers from Microsoft and Tsinghua University propose a data distillation procedure to derive knowledge from an LLM (GPT-4) and compress the prompts without losing crucial information. They introduce an extractive text compression dataset, containing pairs of original texts from MeetingBank and their compressed versions. Despite its small size, their model shows significant performance gains over strong baselines and demonstrates robust generalization ability across different LLMs. The new model is 3x-6x faster than existing prompt compression methods and reduces end-to-end latency by 1.6x-2.9x at compression ratios of 2x-5x.
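
For readers who want to experiment, the following is a minimal sketch using the open-source llmlingua package (pip install llmlingua); the model name and compress_prompt arguments follow that project's documented usage and are assumptions here, not details taken from the paper summary above.

from llmlingua import PromptCompressor

# Minimal sketch using the open-source llmlingua package (pip install llmlingua).
# Model name and arguments follow the project's documented LLMLingua-2 usage and
# are assumptions here, not details from the paper summary above.
compressor = PromptCompressor(
    model_name="microsoft/llmlingua-2-xlm-roberta-large-meetingbank",
    use_llmlingua2=True,
)

long_prompt = "<your lengthy RAG or chain-of-thought prompt here>"
result = compressor.compress_prompt(long_prompt, rate=0.33)  # keep roughly a third of the tokens
print(result["compressed_prompt"])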


AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages

Despite recent progress in scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging. Evaluation is often performed using n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics like COMET have a higher correlation; however, challenges such as the lack of evaluation data with human ratings for under-resourced languages, the complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and the limited language coverage of multilingual encoders, have hampered their applicability to African languages. 

In a recent paper: AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages, researchers from University College London, University of Maryland, Unbabel, Microsoft and the Masakhane Community address these challenges, creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. They also develop AFRICOMET: COMET evaluation metrics for African languages, by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLMR) to create state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).


Comparing the Agency of Hybrid Meeting Remote Users in 2D and 3D Interfaces of the Hybridge System

Video communication often lacks the inclusiveness and simultaneity enabled by physical presence in a shared space. This is especially apparent during hybrid meetings, where some attendees meet physically in a room while others join remotely. Remote participants are at a disadvantage, unable to navigate the physical space like in-room participants. 

In a Late-Breaking Work paper to be presented at CHI 2024: Comparing the Agency of Hybrid Meeting Remote Users in 2D and 3D Interfaces of the Hybridge System, Microsoft researchers present an experimental system for exploring designs for improving the inclusion of remote attendees in hybrid meetings. In-room users see remote participants on individual displays positioned around a table. Remote participants see video feeds from the room integrated into a digital twin of the meeting room, choosing where they appear in the meeting room and from where they view it. The researchers designed both a 2D and a 3D version of the interface. They found that 3D outperformed 2D in participants’ perceived sense of awareness, sense of agency, and physical presence. A majority of participants also subjectively preferred 3D over 2D. The next step in this research will test the inclusiveness of Hybridge 3D meetings against fully in-room meetings and traditional hybrid meetings.


FeatUp: A Model-Agnostic Framework for Features at Any Resolution

Deep features are a cornerstone of computer vision research, capturing image semantics and enabling the community to solve downstream tasks even in the zero- or few-shot regime. However, these features often lack the spatial resolution to directly perform dense prediction tasks like segmentation and depth prediction. This is because models like transformers and convolutional networks aggressively pool information over large areas. 

In a paper that was published at ICLR 2024: FeatUp: A Model-Agnostic Framework for Features at Any Resolution, researchers from Microsoft and external colleagues introduce a task- and model-agnostic framework to restore lost spatial information in deep features. The paper introduces two variants of FeatUp: one that guides features with high-resolution signal in a single forward pass, and one that fits an implicit model to a single image to reconstruct features at any resolution. Both approaches use a multiview consistency loss with deep analogies to neural radiance fields (NeRFs), a deep learning method of building 3D representations of a scene using sparse 2D images. In the new research, features retain their original semantics and can be swapped into existing applications to yield resolution and performance gains, even without re-training. FeatUp significantly outperforms other feature upsampling and image super-resolution approaches in class activation map generation, transfer learning for segmentation and depth prediction, and end-to-end training for semantic segmentation. 

The post Research Focus: Week of April 15, 2024 appeared first on Microsoft Research.

Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune

In asset management, portfolio managers need to closely monitor companies in their investment universe to identify risks and opportunities, and guide investment decisions. Tracking direct events like earnings reports or credit downgrades is straightforward—you can set up alerts to notify managers of news containing company names. However, detecting second and third-order impacts arising from events at suppliers, customers, partners, or other entities in a company’s ecosystem is challenging.

For example, a supply chain disruption at a key vendor would likely negatively impact downstream manufacturers. Or the loss of a top customer for a major client poses a demand risk for the supplier. Very often, such events fail to make headlines featuring the impacted company directly, but are still important to pay attention to. In this post, we demonstrate an automated solution combining knowledge graphs and generative artificial intelligence (AI) to surface such risks by cross-referencing relationship maps with real-time news.

Broadly, this entails two steps: First, building the intricate relationships between companies (customers, suppliers, directors) into a knowledge graph. Second, using this graph database along with generative AI to detect second and third-order impacts from news events. For instance, this solution can highlight that delays at a parts supplier may disrupt production for downstream auto manufacturers in a portfolio though none are directly referenced.

With AWS, you can deploy this solution in a serverless, scalable, and fully event-driven architecture. This post demonstrates a proof of concept built on two key AWS services well suited for graph knowledge representation and natural language processing: Amazon Neptune and Amazon Bedrock. Neptune is a fast, reliable, fully managed graph database service that makes it straightforward to build and run applications that work with highly connected datasets. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Overall, this prototype demonstrates the art of possible with knowledge graphs and generative AI—deriving signals by connecting disparate dots. The takeaway for investment professionals is the ability to stay on top of developments closer to the signal while avoiding noise.

Build the knowledge graph

The first step in this solution is building a knowledge graph, and a valuable yet often overlooked data source for knowledge graphs is company annual reports. Because official corporate publications undergo scrutiny before release, the information they contain is likely to be accurate and reliable. However, annual reports are written in an unstructured format meant for human reading rather than machine consumption. To unlock their potential, you need a way to systematically extract and structure the wealth of facts and relationships they contain.

With generative AI services like Amazon Bedrock, you now have the capability to automate this process. You can take an annual report and trigger a processing pipeline to ingest the report, break it down into smaller chunks, and apply natural language understanding to pull out salient entities and relationships.

For example, a sentence stating that “[Company A] expanded its European electric delivery fleet with an order for 1,800 electric vans from [Company B]” would allow Amazon Bedrock to identify the following:

  • [Company A] as a customer
  • [Company B] as a supplier
  • A supplier relationship between [Company A] and [Company B]
  • Relationship details of “supplier of electric delivery vans”

Extracting such structured data from unstructured documents requires providing carefully crafted prompts to large language models (LLMs) so they can analyze text to pull out entities like companies and people, as well as relationships such as customers, suppliers, and more. The prompts contain clear instructions on what to look out for and the structure to return the data in. By repeating this process across the entire annual report, you can extract the relevant entities and relationships to construct a rich knowledge graph.
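
As an illustration of this step, the following sketch calls a Claude model on Amazon Bedrock through boto3's InvokeModel API to pull entities and relationships out of a single sentence; the prompt wording, output schema and model ID are illustrative assumptions rather than the exact prompts used in the prototype.

import json

import boto3

# Illustrative sketch: ask a Claude model on Amazon Bedrock to extract entities and
# relationships as JSON. Prompt wording, output schema and model ID are assumptions.
bedrock = boto3.client("bedrock-runtime")

chunk = ("[Company A] expanded its European electric delivery fleet with an "
         "order for 1,800 electric vans from [Company B].")

prompt = (
    "Extract all companies and people mentioned in the text, and the relationship "
    "of each entity to the main company (customer, supplier, partner, competitor or "
    "director). Return only JSON of the form "
    '{"entities": [{"name": "...", "type": "...", "relationship": "...", "details": "..."}]}.'
    "\n\nText: " + chunk
)

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": [{"type": "text", "text": prompt}]}],
    }),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])  # JSON string describing entities and relationships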

However, before committing the extracted information to the knowledge graph, you need to first disambiguate the entities. For instance, there may already be another ‘[Company A]’ entity in the knowledge graph, but it could represent a different organization with the same name. Amazon Bedrock can reason over and compare attributes such as business focus area, industry, and revenue-generating segments, as well as relationships to other entities, to determine whether the two entities are actually distinct. This prevents unrelated companies from being inaccurately merged into a single entity.

After disambiguation is complete, you can reliably add new entities and relationships to your Neptune knowledge graph, enriching it with the facts extracted from annual reports. Over time, the ongoing ingestion of reliable data and the integration of additional trusted data sources will help build a comprehensive knowledge graph that can reveal insights through graph queries and analytics.

This automation enabled by generative AI makes it feasible to process thousands of annual reports and unlocks an invaluable asset for knowledge graph curation that would otherwise go untapped due to the prohibitively high manual effort needed.

The following screenshot shows an example of the visual exploration that’s possible in a Neptune graph database using the Graph Explorer tool.

Process news articles

The next step of the solution is automatically enriching portfolio managers’ news feeds and highlighting articles relevant to their interests and investments. For the news feed, portfolio managers can subscribe to any third-party news provider through AWS Data Exchange or another news API of their choice.

When a news article enters the system, an ingestion pipeline is invoked to process the content. Using techniques similar to the processing of annual reports, Amazon Bedrock is used to extract entities, attributes, and relationships from the news article, which are then used to disambiguate against the knowledge graph to identify the corresponding entity in the knowledge graph.

The knowledge graph contains connections between companies and people, and by linking article entities to existing nodes, you can identify if any subjects are within two hops of the companies that the portfolio manager has invested in or is interested in. Finding such a connection indicates the article may be relevant to the portfolio manager, and because the underlying data is represented in a knowledge graph, it can be visualized to help the portfolio manager understand why and how this context is relevant. In addition to identifying connections to the portfolio, you can also use Amazon Bedrock to perform sentiment analysis on the entities referenced.
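
As a rough illustration, a two-hop connection check like this can be expressed as an openCypher query against Neptune's HTTPS endpoint; the node labels, the INTERESTED property convention and the endpoint URL below are assumptions for the sketch, not the prototype's actual schema, and a production deployment would also sign requests with IAM authentication.

import requests

# Illustrative sketch: look for paths of up to two hops between a company mentioned
# in a news article and companies flagged as interesting in the knowledge graph.
# Labels, property names and the endpoint URL are assumptions; a production
# deployment would also sign requests with IAM (SigV4) authentication.
NEPTUNE_ENDPOINT = "https://your-neptune-cluster:8182/openCypher"

query = """
MATCH p = (mentioned:Company {name: 'Company B'})-[*1..2]-(tracked:Company)
WHERE tracked.INTERESTED = 'YES'
RETURN p
LIMIT 10
"""

response = requests.post(NEPTUNE_ENDPOINT, data={"query": query}, timeout=30)
print(response.json())  # any connection paths back to the tracked portfolio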

The final output is an enriched news feed surfacing articles likely to impact the portfolio manager’s areas of interest and investments.

Solution overview

The overall architecture of the solution looks like the following diagram.

The workflow consists of the following steps:

  1. A user uploads official reports (in PDF format) to an Amazon Simple Storage Service (Amazon S3) bucket. The reports should be officially published reports to minimize the inclusion of inaccurate data into your knowledge graph (as opposed to news and tabloids).
  2. The S3 event notification invokes an AWS Lambda function, which sends the S3 bucket and file name to an Amazon Simple Queue Service (Amazon SQS) queue. The First-In-First-Out (FIFO) queue makes sure that the report ingestion process is performed sequentially to reduce the likelihood of introducing duplicate data into your knowledge graph.
  3. An Amazon EventBridge time-based event runs every minute to start the run of an AWS Step Functions state machine asynchronously.
  4. The Step Functions state machine runs through a series of tasks to process the uploaded document by extracting key information and inserting it into your knowledge graph:
    1. Receive the queue message from Amazon SQS.
    2. Download the PDF report file from Amazon S3, split it into multiple smaller text chunks (approximately 1,000 words) for processing, and store the text chunks in Amazon DynamoDB.
    3. Use Anthropic’s Claude v3 Sonnet on Amazon Bedrock to process the first few text chunks to determine the main entity that the report is referring to, together with relevant attributes (such as industry).
    4. Retrieve the text chunks from DynamoDB and for each text chunk, invoke a Lambda function to extract out entities (such as company or person), and its relationship (customer, supplier, partner, competitor, or director) to the main entity using Amazon Bedrock.
    5. Consolidate all extracted information.
    6. Filter out noise and irrelevant entities (for example, generic terms such as “consumers”) using Amazon Bedrock.
    7. Use Amazon Bedrock to perform disambiguation by reasoning using the extracted information against the list of similar entities from the knowledge graph. If the entity does not exist, insert it. Otherwise, use the entity that already exists in the knowledge graph. Insert all relationships extracted.
    8. Clean up by deleting the SQS queue message and the S3 file.
  5. A user accesses a React-based web application to view the news articles that are supplemented with the entity, sentiment, and connection path information.
  6. Using the web application, the user specifies the number of hops (default N=2) on the connection path to monitor.
  7. Using the web application, the user specifies the list of entities to track.
  8. To generate fictional news, the user chooses Generate Sample News to generate 10 sample financial news articles with random content to be fed into the news ingestion process. Content is generated using Amazon Bedrock and is purely fictional.
  9. To download actual news, the user chooses Download Latest News to download the top news happening today (powered by NewsAPI.org).
  10. The news file (TXT format) is uploaded to an S3 bucket. Steps 8 and 9 upload news to the S3 bucket automatically, but you can also build integrations with your preferred news provider, such as AWS Data Exchange or any third-party news API, to drop news articles as files into the S3 bucket. News data file content should be formatted as <date>{dd mmm yyyy}</date><title>{title}</title><text>{news content}</text> (see the sketch after this list).
  11. The S3 event notification sends the S3 bucket or file name to Amazon SQS (standard), which invokes multiple Lambda functions to process the news data in parallel:
    1. Use Amazon Bedrock to extract entities mentioned in the news together with any related information, relationships, and sentiment of the mentioned entity.
    2. Check against the knowledge graph and use Amazon Bedrock to perform disambiguation by reasoning using the available information from the news and from within the knowledge graph to identify the corresponding entity.
    3. After the entity has been located, search for and return any connection paths connecting to entities marked with INTERESTED=YES in the knowledge graph that are within N=2 hops away.
  12. The web application auto-refreshes every second to pull in the latest set of processed news for display.
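
The following sketch shows one way to format a news item according to the convention in step 10 and drop it into the ingestion bucket with boto3; the bucket name and object key are placeholders.

from datetime import datetime

import boto3

# Illustrative sketch: format a news item per the convention above and upload it
# as a TXT file to the ingestion bucket. Bucket name and key are placeholders.
s3 = boto3.client("s3")

news_item = (
    "<date>" + datetime.now().strftime("%d %b %Y") + "</date>"
    + "<title>Parts supplier reports production delays</title>"
    + "<text>A key components supplier announced a two-week production halt...</text>"
)

s3.put_object(
    Bucket="your-news-ingestion-bucket",   # placeholder bucket name
    Key="news/sample-article-001.txt",
    Body=news_item.encode("utf-8"),
)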

Deploy the prototype

You can deploy the prototype solution and start experimenting yourself. The prototype is available from GitHub and includes details on the following:

  • Deployment prerequisites
  • Deployment steps
  • Cleanup steps

Summary

This post demonstrated a proof of concept solution to help portfolio managers detect second- and third-order risks from news events, without direct references to companies they track. By combining a knowledge graph of intricate company relationships with real-time news analysis using generative AI, downstream impacts can be highlighted, such as production delays from supplier hiccups.

Although it’s only a prototype, this solution shows the promise of knowledge graphs and language models to connect dots and derive signals from noise. These technologies can aid investment professionals by revealing risks faster through relationship mappings and reasoning. Overall, this is a promising application of graph databases and AI that warrants exploration to augment investment analysis and decision-making.

If this example of generative AI in financial services is of interest to your business, or you have a similar idea, reach out to your AWS account manager, and we will be delighted to explore further with you.


About the Author

Xan Huang is a Senior Solutions Architect with AWS and is based in Singapore. He works with major financial institutions to design and build secure, scalable, and highly available solutions in the cloud. Outside of work, Xan spends most of his free time with his family and getting bossed around by his 3-year-old daughter. You can find Xan on LinkedIn.

Open source observability for AWS Inferentia nodes within Amazon EKS clusters

Recent developments in machine learning (ML) have led to increasingly large models, some of which require hundreds of billions of parameters. Although they are more powerful, training and inference on those models require significant computational resources. Despite the availability of advanced distributed training libraries, it’s common for training and inference jobs to need hundreds of accelerators (GPUs or purpose-built ML chips such as AWS Trainium and AWS Inferentia), and therefore tens or hundreds of instances.

In such distributed environments, observability of both instances and ML chips becomes key to model performance fine-tuning and cost optimization. Metrics allow teams to understand workload behavior and optimize resource allocation and utilization, diagnose anomalies, and increase overall infrastructure efficiency. For data scientists, ML chips utilization and saturation are also relevant for capacity planning.

This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips, used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster, with data plane nodes based on Amazon Elastic Compute Cloud (Amazon EC2) instances of type Inf1 and Inf2.

The pattern is part of the AWS CDK Observability Accelerator, a set of opinionated modules to help you set up observability for Amazon EKS clusters. The AWS CDK Observability Accelerator is organized around patterns, which are reusable units for deploying multiple resources. The open source observability set of patterns instruments observability with Amazon Managed Grafana dashboards, an AWS Distro for OpenTelemetry collector to collect metrics, and Amazon Managed Service for Prometheus to store them.

Solution overview

The following diagram illustrates the solution architecture.

This solution deploys an Amazon EKS cluster with a node group that includes Inf1 instances.

The AMI type of the node group is AL2_x86_64_GPU, which uses the Amazon EKS optimized accelerated Amazon Linux AMI. In addition to the standard Amazon EKS-optimized AMI configuration, the accelerated AMI includes the NeuronX runtime.

To access the ML chips from Kubernetes, the pattern deploys the AWS Neuron device plugin.

Metrics are exposed to Amazon Managed Service for Prometheus by the neuron-monitor DaemonSet, which deploys a minimal container, with the Neuron tools installed. Specifically, the neuron-monitor DaemonSet runs the neuron-monitor command piped into the neuron-monitor-prometheus.py companion script (both commands are part of the container):

neuron-monitor | neuron-monitor-prometheus.py --port <port>

The command uses the following components:

  • neuron-monitor collects metrics and stats from the Neuron applications running on the system and streams the collected data to stdout in JSON format
  • neuron-monitor-prometheus.py maps the telemetry data from JSON into a Prometheus-compatible format and exposes it for scraping, as sketched conceptually after this list
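
The following is a conceptual sketch of this JSON-to-Prometheus mapping pattern using the prometheus_client library; it is not the actual neuron-monitor-prometheus.py companion script, and the JSON field names it reads are hypothetical.

import json
import sys

from prometheus_client import Gauge, start_http_server

# Conceptual sketch only: not the actual neuron-monitor-prometheus.py script.
# The JSON field names read below are hypothetical.
NEURONCORE_UTIL = Gauge(
    "neuroncore_utilization_ratio", "Per-core NeuronCore utilization", ["neuroncore"]
)

start_http_server(8000)                      # expose /metrics for Prometheus to scrape
for line in sys.stdin:                       # neuron-monitor streams JSON reports to stdout
    report = json.loads(line)
    for core, util in report.get("neuroncore_utilization", {}).items():
        NEURONCORE_UTIL.labels(neuroncore=str(core)).set(util)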

Data is visualized in Amazon Managed Grafana by the corresponding dashboard.

The rest of the setup to collect and visualize metrics with Amazon Managed Service for Prometheus and Amazon Managed Grafana is similar to that used in other open source based patterns, which are included in the AWS Observability Accelerator for CDK GitHub repository.

Prerequisites

You need the following to complete the steps in this post:

Set up the environment

Complete the following steps to set up your environment:

  1. Open a terminal window and run the following commands:
export AWS_REGION=<YOUR AWS REGION>
export ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text)
  2. Retrieve the workspace IDs of any existing Amazon Managed Grafana workspace:
aws grafana list-workspaces

The following is our sample output:

{
  "workspaces": [
    {
      "authentication": {
        "providers": [
          "AWS_SSO"
        ]
      },
      "created": "2023-06-07T12:23:56.625000-04:00",
      "description": "accelerator-workspace",
      "endpoint": "g-XYZ.grafana-workspace.us-east-2.amazonaws.com",
      "grafanaVersion": "9.4",
      "id": "g-XYZ",
      "modified": "2023-06-07T12:30:09.892000-04:00",
      "name": "accelerator-workspace",
      "notificationDestinations": [
        "SNS"
      ],
      "status": "ACTIVE",
      "tags": {}
    }
  ]
}
  3. Assign the values of id and endpoint to the following environment variables:
export COA_AMG_WORKSPACE_ID="<<YOUR-WORKSPACE-ID, similar to the above g-XYZ, without quotation marks>>"
export COA_AMG_ENDPOINT_URL="<<https://YOUR-WORKSPACE-URL, including protocol (i.e. https://), without quotation marks, similar to the above https://g-XYZ.grafana-workspace.us-east-2.amazonaws.com>>"

COA_AMG_ENDPOINT_URL needs to include https://.

  4. Create a Grafana API key from the Amazon Managed Grafana workspace:
export AMG_API_KEY=$(aws grafana create-workspace-api-key \
--key-name "grafana-operator-key" \
--key-role "ADMIN" \
--seconds-to-live 432000 \
--workspace-id $COA_AMG_WORKSPACE_ID \
--query key \
--output text)
  5. Set up a secret in AWS Systems Manager:
aws ssm put-parameter --name "/cdk-accelerator/grafana-api-key" \
--type "SecureString" \
--value $AMG_API_KEY \
--region $AWS_REGION

The secret will be accessed by the External Secrets add-on and made available as a native Kubernetes secret in the EKS cluster.

Bootstrap the AWS CDK environment

The first step to any AWS CDK deployment is bootstrapping the environment. You use the cdk bootstrap command in the AWS CDK CLI to prepare the environment (a combination of AWS account and AWS Region) with resources required by AWS CDK to perform deployments into that environment. AWS CDK bootstrapping is needed for each account and Region combination, so if you already bootstrapped AWS CDK in a Region, you don’t need to repeat the bootstrapping process.

cdk bootstrap aws://$ACCOUNT_ID/$AWS_REGION

Deploy the solution

Complete the following steps to deploy the solution:

  1. Clone the cdk-aws-observability-accelerator repository and install the dependency packages. This repository contains AWS CDK v2 code written in TypeScript.
git clone https://github.com/aws-observability/cdk-aws-observability-accelerator.git
cd cdk-aws-observability-accelerator

The actual settings for Grafana dashboard JSON files are expected to be specified in the AWS CDK context. You need to update context in the cdk.json file, located in the current directory. The location of the dashboard is specified by the fluxRepository.values.GRAFANA_NEURON_DASH_URL parameter, and neuronNodeGroup is used to set the instance type, number, and Amazon Elastic Block Store (Amazon EBS) size used for the nodes.

  2. Enter the following snippet into cdk.json, replacing context:
"context": {
    "fluxRepository": {
      "name": "grafana-dashboards",
      "namespace": "grafana-operator",
      "repository": {
        "repoUrl": "https://github.com/aws-observability/aws-observability-accelerator",
        "name": "grafana-dashboards",
        "targetRevision": "main",
        "path": "./artifacts/grafana-operator-manifests/eks/infrastructure"
      },
      "values": {
        "GRAFANA_CLUSTER_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/cluster.json",
        "GRAFANA_KUBELET_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/kubelet.json",
        "GRAFANA_NSWRKLDS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/namespace-workloads.json",
        "GRAFANA_NODEEXP_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodeexporter-nodes.json",
        "GRAFANA_NODES_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/nodes.json",
        "GRAFANA_WORKLOADS_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/infrastructure/workloads.json",
        "GRAFANA_NEURON_DASH_URL" : "https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/grafana-dashboards/eks/neuron/neuron-monitor.json"
      },
      "kustomizations": [
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/infrastructure"
        },
        {
          "kustomizationPath": "./artifacts/grafana-operator-manifests/eks/neuron"
        }
      ]
    },
     "neuronNodeGroup": {
      "instanceClass": "inf1",
      "instanceSize": "2xlarge",
      "desiredSize": 1, 
      "minSize": 1, 
      "maxSize": 3,
      "ebsSize": 512
    }
  }

You can replace the Inf1 instance type with Inf2 and change the size as needed. To check availability in your selected Region, run the following command (amend Values as you see fit):

aws ec2 describe-instance-type-offerings \
--filters Name=instance-type,Values="inf1*" \
--query "InstanceTypeOfferings[].InstanceType" \
--region $AWS_REGION
  3. Install the project dependencies:
npm install
  4. Run the following commands to deploy the open source observability pattern:
make build
make pattern single-new-eks-inferentia-opensource-observability deploy

Validate the solution

Complete the following steps to validate the solution:

  1. Run the update-kubeconfig command. You should be able to get the command from the output message of the previous command:
aws eks update-kubeconfig --name single-new-eks-inferentia-opensource... --region <your region> --role-arn arn:aws:iam::xxxxxxxxx:role/single-new-eks-....
  2. Verify the resources you created:
kubectl get pods -A

The following screenshot shows our sample output.

  3. Make sure the neuron-device-plugin-daemonset DaemonSet is running:
kubectl get ds neuron-device-plugin-daemonset --namespace kube-system

The following is our expected output:

NAME                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-device-plugin-daemonset   1         1         1       1            1           <none>          2h
  4. Confirm that the neuron-monitor DaemonSet is running:
kubectl get ds neuron-monitor --namespace kube-system

The following is our expected output:

NAME             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
neuron-monitor   1         1         1       1            1           <none>          2h
  5. To verify that the Neuron devices and cores are visible, run the neuron-ls and neuron-top commands from, for example, your neuron-monitor pod (you can get the pod’s name from the output of kubectl get pods -A):
kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-ls"

The following screenshot shows our expected output.

kubectl exec -it {your neuron-monitor pod} -n kube-system -- /bin/bash -c "neuron-top"

The following screenshot shows our expected output.

Visualize data using the Grafana Neuron dashboard

Log in to your Amazon Managed Grafana workspace and navigate to the Dashboards panel. You should see a dashboard named Neuron / Monitor.

To see some interesting metrics on the Grafana dashboard, we apply the following manifest:

curl https://raw.githubusercontent.com/aws-observability/aws-observability-accelerator/main/artifacts/k8s-deployment-manifest-templates/neuron/pytorch-inference-resnet50.yml | kubectl apply -f -

This is a sample workload that compiles the torchvision ResNet50 model and runs repetitive inference in a loop to generate telemetry data.
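
For a rough idea of what such a workload looks like, the sketch below compiles ResNet50 with the torch-neuron SDK for Inf1 and runs inference in a loop; it is an approximation under those assumptions, not the contents of the container image used by the manifest.

import torch
import torch_neuron  # AWS Neuron SDK for Inf1 instances (pip install torch-neuron)
from torchvision import models

# Rough approximation of the sample workload: compile ResNet50 for Inferentia,
# then run inference in a loop so neuron-monitor has telemetry to report.
model = models.resnet50(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

neuron_model = torch.neuron.trace(model, example_inputs=[example])  # compile for Inf1

while True:
    with torch.no_grad():
        neuron_model(example)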

To verify the pod was successfully deployed, run the following code:

kubectl get pods

You should see a pod named pytorch-inference-resnet50.

After a few minutes, looking into the Neuron / Monitor dashboard, you should see the gathered metrics similar to the following screenshots.

Grafana Operator and Flux always work together to synchronize your dashboards with Git. If you delete your dashboards by accident, they will be re-provisioned automatically.

Clean up

You can delete the whole AWS CDK stack with the following command:

make pattern single-new-eks-inferentia-opensource-observability destroy

Conclusion

In this post, we showed you how to introduce observability, with open source tooling, into an EKS cluster featuring a data plane running EC2 Inf1 instances. We started by selecting the Amazon EKS-optimized accelerated AMI for the data plane nodes, which includes the Neuron container runtime, providing access to AWS Inferentia and Trainium Neuron devices. Then, to expose the Neuron cores and devices to Kubernetes, we deployed the Neuron device plugin. The actual collection and mapping of telemetry data into Prometheus-compatible format was achieved via neuron-monitor and neuron-monitor-prometheus.py. Metrics were sourced from Amazon Managed Service for Prometheus and displayed on the Neuron dashboard of Amazon Managed Grafana.

We recommend that you explore additional observability patterns in the AWS Observability Accelerator for CDK GitHub repo. To learn more about Neuron, refer to the AWS Neuron Documentation.


About the Author

Riccardo Freschi is a Sr. Solutions Architect at AWS, focusing on application modernization. He works closely with partners and customers to help them transform their IT landscapes in their journey to the AWS Cloud by refactoring existing applications and building new ones.

Moving Pictures: Transform Images Into 3D Scenes With NVIDIA Instant NeRF

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

Imagine a gorgeous vista, like a cliff along the water’s edge. Even as a 2D image, the scene would be beautiful and inviting. Now imagine exploring that same view in 3D – without needing to be there.

AI powered by NVIDIA RTX technology helps turn that vision into reality. Using Instant NeRF, creatives are transforming collections of still images into digital 3D scenes in just seconds.

Simply Radiant

A NeRF, or neural radiance field, is an AI model that takes 2D images representing a scene as input and interpolates between them to render a complete 3D scene. The model operates as a neural network — a model that replicates how the brain is organized and is often used for tasks that require pattern recognition.

Using spatial location and volumetric rendering, a NeRF uses the camera pose from the images to render a 3D iteration of the scene. Traditionally, these models are computationally intensive, requiring large amounts of rendering power and time.

A recent NVIDIA AI research project changed that.

Instant NeRF takes NeRFs to the next level, using AI-accelerated inverse rendering to approximate how light behaves in the real world. It enables researchers to construct a 3D scene from 2D images taken at different angles. Scenes can now be generated in seconds, and the longer the NeRF model is trained, the more detailed the resulting 3D renders.

NVIDIA researchers released four neural graphics primitives, or pretrained datasets, as part of the Instant-NGP training toolset at the SIGGRAPH computer graphics conference in 2022. The tools let anyone create NeRFs with their own data. The researchers won a best paper award for the work, and TIME Magazine named Instant NeRF a best invention of 2022.

In addition to speeding rendering for NeRFs, Instant NeRF makes the entire image reconstruction process accessible using NVIDIA RTX and GeForce RTX desktop and laptop GPUs. While the time it takes to render a scene depends on factors like dataset size and the mix of image and video source content, the AI training doesn’t require server-grade or cloud-based hardware.

NVIDIA RTX workstations and GeForce RTX PCs are ideally suited to meet the computational demands of rendering NeRFs. NVIDIA RTX and GeForce RTX GPUs feature Tensor Cores, dedicated AI hardware accelerators that provide the horsepower to run generative AI locally.

Ready, Set, Go

Get started with Instant NeRF to learn about radiance fields and experience imagery in a new way.

Developers and tech enthusiasts can download and compile the source code. Nontechnical users can download the Windows installers for the Instant-NGP software, available on GitHub.

While the installer is available for a wide range of RTX GPUs, the program performs best on the latest-architecture GeForce RTX 40 Series and NVIDIA RTX Ada Generation GPUs.

The “Getting Started With Instant NeRF” guide walks users through the process, including loading one of the primitives, such as “NeRF Fox,” to get a sense of what’s possible. Detailed instructions and video walkthroughs — like the one above — demonstrate how to create NeRFs with custom data, including tips for capturing good input imagery and compiling codebases (if built from source). The guide also covers using the Instant NeRF graphical user interface, optimizing scene parameters and creating an animation from the scene.

The NeRF community also offers many tips and tricks to help users get started. For example, check out the livestream below and this technical blog post.

Show and Tell

Digital artists are composing beautiful scenes and telling fresh stories with NVIDIA Instant NeRF. The Instant NeRF gallery showcases some of the most innovative and thought-provoking examples, viewable as video clips in any web browser.

Here are a few:

  • “Through the Looking Glass” by Karen X. Cheng and James Perlman: A pianist practices her song as part of her daily routine, though there’s nothing mundane about what happens next. The viewer peers into the mirror, a virtual world that can be observed but not traversed; it’s unreachable by normal means. Then, crossing the threshold, it’s revealed that this mirror is in fact a window into an inverted reality that can be explored from within. Which one is real?

  • “Meditation” by Franc Lucent: As soon as they walked into one of the many rooms in Nico Santucci’s estate, Lucent knew they needed to turn it into a NeRF. Playing with the dynamic range and the reflections in the pond presented the artist with a new kind of exploration. They were pleased with the softness of the light and the way the NeRF elevates the room into something out of a dream — the perfect place to meditate. A NeRF can freeze a moment in a way that’s more immersive than a photo or video.

  • “Zeus” by Hugues Bruyère: Bruyère rendered these 3D scenes with Instant NeRF using data he previously captured for traditional photogrammetry with mirrorless digital cameras, smartphones, 360-degree cameras and drones. Instant NeRF gives him a powerful tool to help preserve and share cultural artifacts through online libraries, museums, virtual-reality experiences and heritage-conservation projects. This NeRF was trained on a dataset of photos taken with an iPhone at the Royal Ontario Museum.

From Image to Video to Reality

Transforming images into a 3D scene with AI is cool. Stepping into that 3D creation is next level.

Thanks to a recent Instant NeRF update, users can render their scenes from static images and virtually step inside the environments, moving freely within the 3D space. In virtual-reality (VR) environments, users can experience complete immersion in new worlds, all within their headsets.

The potential benefits are nearly endless.

For example, a realtor can create and share a 3D model of a property, offering virtual tours at new levels. Retailers can showcase products in an online shop, powered by a collection of images and AI running on RTX GPUs. These AI models power creativity and are helping drive the accessibility of 3D immersive experiences across other industries.

Instant NeRF comes with the capability to clean up scenes easily in VR, making the creation of high-quality NeRFs more intuitive than ever. Learn more about navigating Instant NeRF spaces in VR.

Download Instant-NGP to get started, and share your creations on social media with the hashtag #InstantNeRF.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks

Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate data from various sources before using it in their ML training and inference. Previously, data scientists often found themselves juggling multiple tools to support SQL in their workflow, which hindered productivity.

We’re excited to announce that JupyterLab notebooks in SageMaker Studio now come with built-in support for SQL. Data scientists can now:

  • Connect to popular data services including Amazon Athena, Amazon Redshift, Amazon DataZone, and Snowflake directly within the notebooks
  • Browse and search for databases, schemas, tables, and views, and preview data within the notebook interface
  • Mix SQL and Python code in the same notebook for efficient exploration and transformation of data for use in ML projects
  • Use developer productivity features such as SQL command completion, code formatting assistance, and syntax highlighting to help accelerate code development and improve overall developer productivity

In addition, administrators can securely manage connections to these data services, allowing data scientists to access authorized data without the need to manage credentials manually.

In this post, we guide you through setting up this feature in SageMaker Studio, and walk you through various capabilities of this feature. Then we show how you can enhance the in-notebook SQL experience using Text-to-SQL capabilities provided by advanced large language models (LLMs) to write complex SQL queries using natural language text as input. Finally, to enable a broader audience of users to generate SQL queries from natural language input in their notebooks, we show you how to deploy these Text-to-SQL models using Amazon SageMaker endpoints.
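
As a preview of that pattern, the sketch below invokes a hypothetical Text-to-SQL model deployed behind a SageMaker endpoint using boto3; the endpoint name, payload shape and prompt format are assumptions that will vary with the model you deploy.

import json

import boto3

# Hedged sketch: invoke a Text-to-SQL LLM behind a SageMaker endpoint.
# Endpoint name, payload shape and prompt format are assumptions.
runtime = boto3.client("sagemaker-runtime")

question = "What were the top 5 products by total revenue last quarter?"
schema_hint = "Table sales(product_id, product_name, revenue, sale_date)"

payload = {"inputs": "Generate a SQL query.\nSchema: " + schema_hint + "\nQuestion: " + question}

response = runtime.invoke_endpoint(
    EndpointName="text-to-sql-endpoint",   # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))  # generated SQL; format depends on the model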

Solution overview

With SageMaker Studio JupyterLab notebook’s SQL integration, you can now connect to popular data sources like Snowflake, Athena, Amazon Redshift, and Amazon DataZone. This new feature enables you to perform various functions.

For example, you can visually explore data sources like databases, tables, and schemas directly from your JupyterLab ecosystem. If your notebook environments are running on SageMaker Distribution 1.6 or higher, look for a new widget on the left side of your JupyterLab interface. This addition enhances data accessibility and management within your development environment.

If you’re on an older SageMaker Distribution (1.5 or lower) or using a custom environment, refer to the appendix for more information.

After you have set up connections (illustrated in the next section), you can list data connections, browse databases and tables, and inspect schemas.

The SageMaker Studio JupyterLab built-in SQL extension also enables you to run SQL queries directly from a notebook. Jupyter notebooks can differentiate between SQL and Python code using the %%sm_sql magic command, which must be placed at the top of any cell that contains SQL code. This command signals to JupyterLab that the following instructions are SQL commands rather than Python code. The output of a query can be displayed directly within the notebook, facilitating seamless integration of SQL and Python workflows in your data analysis.

The output of a query can be displayed visually as HTML tables, as shown in the following screenshot.

The results can also be written to a pandas DataFrame.
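
For example, a single notebook cell like the following (the connection, schema, and table names here are hypothetical) runs a query against an AWS Glue connection and captures the result as a pandas DataFrame named df; loading the SQL magics extension is covered under Prerequisites:

%%sm_sql --metastore-id my-glue-connection --metastore-type GLUE_CONNECTION --output '{"format": "DATAFRAME", "dataframe_name": "df"}'
SELECT *
FROM demo_schema.demo_table
LIMIT 10

In a subsequent Python cell, df behaves like any other pandas DataFrame.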

Prerequisites

Make sure you have satisfied the following prerequisites in order to use the SageMaker Studio notebook SQL experience:

  • SageMaker Studio V2 – Make sure you’re running the most up-to-date version of your SageMaker Studio domain and user profiles. If you’re currently on SageMaker Studio Classic, refer to Migrating from Amazon SageMaker Studio Classic.
  • IAM role – SageMaker requires an AWS Identity and Access Management (IAM) role to be assigned to a SageMaker Studio domain or user profile to manage permissions effectively. An execution role update may be required to enable the data browsing and SQL run features. The following example policy grants the permissions needed to list and query AWS Glue, Athena, Amazon Simple Storage Service (Amazon S3), AWS Secrets Manager, and Amazon Redshift resources:
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"SQLRelatedS3Permissions",
             "Effect":"Allow",
             "Action":[
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListMultipartUploadParts",
                "s3:AbortMultipartUpload",
                "s3:PutObject"
             ],
             "Resource":[
                "arn:aws:s3:::sagemaker*/*",
                "arn:aws:s3:::sagemaker*"
             ]
          },
          {
             "Sid":"GlueDataAccess",
             "Effect":"Allow",
             "Action":[
                "glue:GetDatabases",
                "glue:GetSchema",
                "glue:GetTables",
                "glue:GetDatabase",
                "glue:GetTable",
                "glue:ListSchemas",
                "glue:GetPartitions",
                "glue:GetConnections",
                "glue:GetConnection",
                "glue:CreateConnection"
             ],
             "Resource":[
                "arn:aws:glue:<region>:<account>:table/sagemaker*/*",
                "arn:aws:glue:<region>:<account>:database/sagemaker*",
                "arn:aws:glue:<region>:<account>:schema/sagemaker*",
                "arn:aws:glue:<region>:<account>:connection/sagemaker*",
                "arn:aws:glue:<region>:<account>:registry/sagemaker*",
                "arn:aws:glue:<region>:<account>:catalog"
             ]
          },
          {
             "Sid":"AthenaQueryExecution",
             "Effect":"Allow",
             "Action":[
                "athena:ListDataCatalogs",
                "athena:ListDatabases",
                "athena:ListTableMetadata",
                "athena:StartQueryExecution",
                "athena:GetQueryExecution",
                "athena:RunQuery",
                "athena:StartSession",
                "athena:GetQueryResults",
                "athena:ListWorkGroups",
                "athena:GetDataCatalog",
                "athena:GetWorkGroup"
             ],
             "Resource":[
                "arn:aws:athena:<region>:<account>:workgroup/sagemaker*",
                "arn:aws:athena:<region>:<account>:datacatalog/sagemaker*"
             ]
          },
          {
             "Sid":"GetSecretsAndCredentials",
             "Effect":"Allow",
             "Action":[
                "secretsmanager:GetSecretValue",
                "redshift:GetClusterCredentials"
             ],
             "Resource":[
                "arn:aws:secretsmanager:<region>:<account>:secret:sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbuser:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbgroup:sagemaker*/sagemaker*",
                "arn:aws:redshift:<region>:<account>:dbname:sagemaker*/sagemaker*"
             ]
          }
       ]
    }

  • JupyterLab Space – You need access to the updated SageMaker Studio and JupyterLab Space with SageMaker Distribution v1.6 or later image versions. If you’re using custom images for JupyterLab Spaces or older versions of SageMaker Distribution (v1.5 or lower), refer to the appendix for instructions to install necessary packages and modules to enable this feature in your environments. To learn more about SageMaker Studio JupyterLab Spaces, refer to Boost productivity on Amazon SageMaker Studio: Introducing JupyterLab Spaces and generative AI tools.
  • Data source access credentials – This SageMaker Studio notebook feature requires user name and password access to data sources such as Snowflake and Amazon Redshift. Create user name and password-based access to these data sources if you do not already have one. OAuth-based access to Snowflake is not a supported feature as of this writing.
  • Load SQL magic – Before you run SQL queries from a Jupyter notebook cell, it’s essential to load the SQL magics extension. Use the command %load_ext amazon_sagemaker_sql_magic to enable this feature. Additionally, you can run the %sm_sql? command to view a comprehensive list of supported options for querying from a SQL cell. These options include setting a default query limit of 1,000, running a full extraction, and injecting query parameters, among others. This setup allows for flexible and efficient SQL data manipulation directly within your notebook environment.

Create database connections

The built-in SQL browsing and execution capabilities of SageMaker Studio are enhanced by AWS Glue connections. An AWS Glue connection is an AWS Glue Data Catalog object that stores essential data such as login credentials, URI strings, and virtual private cloud (VPC) information for specific data stores. These connections are used by AWS Glue crawlers, jobs, and development endpoints to access various types of data stores. You can use these connections for both source and target data, and even reuse the same connection across multiple crawlers or extract, transform, and load (ETL) jobs.

To explore SQL data sources in the left pane of SageMaker Studio, you first need to create AWS Glue connection objects. These connections facilitate access to different data sources and allow you to explore their schematic data elements.

In the following sections, we walk through the process of creating SQL-specific AWS Glue connectors. This will enable you to access, view, and explore datasets across a variety of data stores. For more detailed information about AWS Glue connections, refer to Connecting to data.

Create an AWS Glue connection

The only way to bring data sources into SageMaker Studio is with AWS Glue connections. You need to create AWS Glue connections with specific connection types. As of this writing, the only supported mechanism of creating these connections is using the AWS Command Line Interface (AWS CLI).

Connection definition JSON file

When connecting to different data sources in AWS Glue, you must first create a JSON file that defines the connection properties—referred to as the connection definition file. This file is crucial for establishing an AWS Glue connection and should detail all the necessary configurations for accessing the data source. For security best practices, it’s recommended to use Secrets Manager to securely store sensitive information such as passwords. Meanwhile, other connection properties can be managed directly through AWS Glue connections. This approach makes sure that sensitive credentials are protected while still making the connection configuration accessible and manageable.

The following is an example of a connection definition JSON:

{
    "ConnectionInput": {
        "Name": <GLUE_CONNECTION_NAME>,
        "Description": <GLUE_CONNECTION_DESCRIPTION>,
        "ConnectionType": "REDSHIFT | SNOWFLAKE | ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": <SECRET_ARN>, "database": <...>}"
        }
    }
}

When setting up AWS Glue connections for your data sources, there are a few important guidelines to follow to provide both functionality and security:

  • Stringification of properties – Within the PythonProperties key, make sure all properties are stringified key-value pairs. It’s crucial to properly escape double-quotes by using the backslash (\) character where necessary. This helps maintain the correct format and avoid syntax errors in your JSON. A short Python sketch of this appears after this list.
  • Handling sensitive information – Although it’s possible to include all connection properties within PythonProperties, it is advisable not to include sensitive details like passwords directly in these properties. Instead, use Secrets Manager for handling sensitive information. This approach secures your sensitive data by storing it in a controlled and encrypted environment, away from the main configuration files.
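
One way to produce a correctly escaped PythonProperties string is to build the inner properties as a Python dictionary and let json.dumps handle the quoting. The following is only a sketch; the placeholder names mirror the connection definition shown earlier:

import json

# Inner connection properties (placeholder values); json.dumps escapes the
# double-quotes so the result is a valid stringified JSON object
python_properties = json.dumps({
    "aws_secret_arn": "<SECRET_ARN>",
    "database": "<DATABASE_NAME>",
})

connection_input = {
    "ConnectionInput": {
        "Name": "<GLUE_CONNECTION_NAME>",
        "Description": "<GLUE_CONNECTION_DESCRIPTION>",
        "ConnectionType": "SNOWFLAKE",
        "ConnectionProperties": {"PythonProperties": python_properties},
    }
}

# Write the connection definition file referenced by the AWS CLI command in the next section
with open("connection-definition.json", "w") as f:
    json.dump(connection_input, f, indent=4)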

Create an AWS Glue connection using the AWS CLI

After you include all the necessary fields in your connection definition JSON file, you’re ready to establish an AWS Glue connection for your data source using the AWS CLI and the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/connection/definition/file.json

This command initiates a new AWS Glue connection based on the specifications detailed in your JSON file. The following is a quick breakdown of the command components:

  • --region <REGION> – This specifies the AWS Region where your AWS Glue connection will be created. It is crucial to select the Region where your data sources and other services are located to minimize latency and comply with data residency requirements.
  • --cli-input-json file:///path/to/file/connection/definition/file.json – This parameter directs the AWS CLI to read the input configuration from a local file that contains your connection definition in JSON format.

You should be able to create AWS Glue connections with the preceding AWS CLI command from your Studio JupyterLab terminal. On the File menu, choose New, then choose Terminal.

If the create-connection command runs successfully, you should see your data source listed in the SQL browser pane. If you don’t see your data source listed, choose Refresh to update the cache.
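
If you prefer to verify programmatically, a quick boto3 check such as the following (the Region and connection name are placeholders) confirms that the connection object exists in the AWS Glue Data Catalog:

import boto3

glue = boto3.client("glue", region_name="<REGION>")

# Raises an EntityNotFoundException if the connection was not created
connection = glue.get_connection(Name="<GLUE_CONNECTION_NAME>")
print(connection["Connection"]["ConnectionType"])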

Create a Snowflake connection

In this section, we focus on integrating a Snowflake data source with SageMaker Studio. Creating Snowflake accounts, databases, and warehouses falls outside the scope of this post. To get started with Snowflake, refer to the Snowflake user guide. In this post, we concentrate on creating a Snowflake definition JSON file and establishing a Snowflake data source connection using AWS Glue.

Create a Secrets Manager secret

You can connect to your Snowflake account by either using a user ID and password or using private keys. To connect with a user ID and password, you need to securely store your credentials in Secrets Manager. As mentioned previously, although it’s possible to embed this information under PythonProperties, it is not recommended to store sensitive information in plain text format. Always make sure that sensitive data is handled securely to avoid potential security risks.

To store information in Secrets Manager, complete the following steps (a programmatic alternative is sketched after these steps):

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Other type of secret.
  3. For the key-value pair, choose Plaintext and enter the following:
    {
        "user":"TestUser",
        "password":"MyTestPassword",
        "account":"AWSSAGEMAKERTEST"
    }

  4. Enter a name for your secret, such as sm-sql-snowflake-secret.
  5. Leave the other settings as default or customize if required.
  6. Create the secret.
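
If you prefer to script this step instead of using the console, the following boto3 sketch creates an equivalent secret; the sample credential values match the plaintext shown above and should be replaced with your own:

import json
import boto3

secrets = boto3.client("secretsmanager", region_name="<REGION>")

secrets.create_secret(
    Name="sm-sql-snowflake-secret",
    SecretString=json.dumps({
        "user": "TestUser",
        "password": "MyTestPassword",
        "account": "AWSSAGEMAKERTEST",
    }),
)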

Create an AWS Glue connection for Snowflake

As discussed earlier, AWS Glue connections are essential for accessing any connection from SageMaker Studio. You can find a list of all supported connection properties for Snowflake. The following is a sample connection definition JSON for Snowflake. Replace the placeholder values with the appropriate values before saving it to disk:

{
    "ConnectionInput": {
        "Name": "Snowflake-Airlines-Dataset",
        "Description": "SageMaker-Snowflake Airlines Dataset",
        "ConnectionType": "SNOWFLAKE",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": "arn:aws:secretsmanager:<region>:<account>:secret:sm-sql-snowflake-secret", "database": "SAGEMAKERDEMODATABASE1"}"
        }
    }
}

To create an AWS Glue connection object for the Snowflake data source, use the following command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/snowflake/definition/file.json

This command creates a new Snowflake data source connection that appears in your SQL browser pane, where you can browse it and run SQL queries against it from a JupyterLab notebook cell.

Create an Amazon Redshift connection

Amazon Redshift is a fully managed, petabyte-scale data warehouse service that simplifies and reduces the cost of analyzing all your data using standard SQL. The procedure for creating an Amazon Redshift connection closely mirrors that for a Snowflake connection.

Create a Secrets Manager secret

Similar to the Snowflake setup, to connect to Amazon Redshift using a user ID and password, you need to securely store the secrets information in Secrets Manager. Complete the following steps:

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Secret type, choose Credentials for Amazon Redshift cluster.
  3. Enter the credentials used to log in to access Amazon Redshift as a data source.
  4. Choose the Redshift cluster associated with the secrets.
  5. Enter a name for the secret, such as sm-sql-redshift-secret.
  6. Leave the other settings as default or customize if required.
  7. Create the secret.

By following these steps, you make sure your connection credentials are handled securely, using the robust security features of AWS to manage sensitive data effectively.

Create an AWS Glue connection for Amazon Redshift

To set up a connection with Amazon Redshift using a JSON definition, fill in the necessary fields and save the following JSON configuration to disk:

{
    "ConnectionInput": {
        "Name": "Redshift-US-Housing-Dataset",
        "Description": "sagemaker redshift us housing dataset connection",
        "ConnectionType": "REDSHIFT",
        "ConnectionProperties": {
            "PythonProperties": "{"aws_secret_arn": "arn:aws:secretsmanager:<region>:<account>:sm-sql-redshift-secret", "database": "us-housing-database"}"
        }
    }
}

To create an AWS Glue connection object for the Redshift data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/redshift/definition/file.json

This command creates a connection in AWS Glue linked to your Redshift data source. If the command runs successfully, you will be able to see your Redshift data source within the SageMaker Studio JupyterLab notebook, ready for running SQL queries and performing data analysis.

Create an Athena connection

Athena is a fully managed SQL query service from AWS that enables analysis of data stored in Amazon S3 using standard SQL. To set up an Athena connection as a data source in the JupyterLab notebook’s SQL browser, you need to create an Athena sample connection definition JSON. The following JSON structure configures the necessary details to connect to Athena, specifying the data catalog, the S3 staging directory, and the Region:

{
    "ConnectionInput": {
        "Name": "Athena-Credit-Card-Fraud",
        "Description": "SageMaker-Athena Credit Card Fraud",
        "ConnectionType": "ATHENA",
        "ConnectionProperties": {
            "PythonProperties": "{"catalog_name": "AwsDataCatalog","s3_staging_dir": "s3://sagemaker-us-east-2-123456789/athena-data-source/credit-card-fraud/", "region_name": "us-east-2"}"
        }
    }
}

To create an AWS Glue connection object for the Athena data source, use the following AWS CLI command:

aws --region <REGION> glue create-connection \
--cli-input-json file:///path/to/file/athena/definition/file.json

If the command is successful, you will be able to access the Athena data catalog and tables directly from the SQL browser within your SageMaker Studio JupyterLab notebook.

Query data from multiple sources

If you have multiple data sources integrated into SageMaker Studio through the built-in SQL browser and the notebook SQL feature, you can quickly run queries and effortlessly switch between data source backends in subsequent cells within a notebook. This capability allows for seamless transitions between different databases or data sources during your analysis workflow.

You can run queries against a diverse collection of data source backends and bring the results directly into the Python space for further analysis or visualization. This is facilitated by the %%sm_sql magic command available in SageMaker Studio notebooks. To output the results of your SQL query into a pandas DataFrame, there are two options:

  • From your notebook cell toolbar, choose the output type DataFrame and name your DataFrame variable
  • Append the following parameter to your %%sm_sql command:
    --output '{"format": "DATAFRAME", "dataframe_name": "df"}'

The following diagram illustrates this workflow and showcases how you can effortlessly run queries across various sources in subsequent notebook cells, as well as train a SageMaker model using training jobs or directly within the notebook using local compute. Additionally, the diagram highlights how the built-in SQL integration of SageMaker Studio simplifies the processes of data extraction and model building directly within the familiar environment of a JupyterLab notebook cell.
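
As a brief sketch, the first two cells below query the Snowflake and Amazon Redshift connections created earlier in this post (the table and column names are hypothetical), and the third is a regular Python cell that works with both results; each %%sm_sql block must live in its own notebook cell:

%%sm_sql --metastore-id Snowflake-Airlines-Dataset --metastore-type GLUE_CONNECTION --output '{"format": "DATAFRAME", "dataframe_name": "df_airlines"}'
SELECT carrier, origin, destination FROM flights LIMIT 1000

%%sm_sql --metastore-id Redshift-US-Housing-Dataset --metastore-type GLUE_CONNECTION --output '{"format": "DATAFRAME", "dataframe_name": "df_housing"}'
SELECT state, price, bedrooms FROM us_housing LIMIT 1000

import pandas as pd

# Both query results are now ordinary pandas DataFrames in the Python space
print(df_airlines.shape, df_housing.shape)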

Text to SQL: Using natural language to enhance query authoring

SQL is a complex language that requires an understanding of databases, tables, syntaxes, and metadata. Today, generative artificial intelligence (AI) can enable you to write complex SQL queries without requiring in-depth SQL experience. The advancement of LLMs has significantly impacted natural language processing (NLP)-based SQL generation, allowing for the creation of precise SQL queries from natural language descriptions—a technique referred to as Text-to-SQL. However, it is essential to acknowledge the inherent differences between human language and SQL. Human language can sometimes be ambiguous or imprecise, whereas SQL is structured, explicit, and unambiguous. Bridging this gap and accurately converting natural language into SQL queries can present a formidable challenge. When provided with appropriate prompts, LLMs can help bridge this gap by understanding the intent behind the human language and generating accurate SQL queries accordingly.

With the release of the SageMaker Studio in-notebook SQL query feature, SageMaker Studio makes it straightforward to inspect databases and schemas, and author, run, and debug SQL queries without ever leaving the Jupyter notebook IDE. This section explores how the Text-to-SQL capabilities of advanced LLMs can facilitate the generation of SQL queries using natural language within Jupyter notebooks. We employ the cutting-edge Text-to-SQL model defog/sqlcoder-7b-2 in conjunction with Jupyter AI, a generative AI assistant specifically designed for Jupyter notebooks, to create complex SQL queries from natural language. By using this advanced model, we can effortlessly and efficiently create complex SQL queries using natural language, thereby enhancing our SQL experience within notebooks.

Notebook prototyping using the Hugging Face Hub

To begin prototyping, you need the following:

  • GitHub code – The code presented in this section is available in the following GitHub repo and by referencing the example notebook.
  • JupyterLab Space – Access to a SageMaker Studio JupyterLab Space backed by GPU-based instances is essential. For the defog/sqlcoder-7b-2 model, a 7B parameter model, using an ml.g5.2xlarge instance is recommended. Alternatives such as defog/sqlcoder-70b-alpha or defog/sqlcoder-34b-alpha are also viable for natural language to SQL conversion, but larger instance types may be required for prototyping. Make sure you have the quota to launch a GPU-backed instance by navigating to the Service Quotas console, searching for SageMaker, and searching for Studio JupyterLab Apps running on <instance type>.

Launch a new GPU-backed JupyterLab Space from your SageMaker Studio. It’s recommended to create a new JupyterLab Space with at least 75 GB of Amazon Elastic Block Store (Amazon EBS) storage for a 7B parameter model.

  • Hugging Face Hub – If your SageMaker Studio domain has access to download models from the Hugging Face Hub, you can use the AutoModelForCausalLM class from huggingface/transformers to automatically download models and pin them to your local GPUs. The model weights will be stored in your local machine’s cache. See the following code:
    model_id = "defog/sqlcoder-7b-2" # or use "defog/sqlcoder-34b-alpha", "defog/sqlcoder-70b-alpha
    
    # download model and tokenizer in fp16 and pin model to local notebook GPUs
    model = AutoModelForCausalLM.from_pretrained(
        model_id, 
        device_map="auto",
        torch_dtype=torch.float16
    )
    
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    tokenizer.pad_token = tokenizer.eos_token

After the model has been fully downloaded and loaded into memory, you should observe an increase in GPU utilization on your local machine. This indicates that the model is actively using the GPU resources for computational tasks. You can verify this in your own JupyterLab Space by running nvidia-smi (for a one-time display) or nvidia-smi --loop=1 (to repeat every second) from your JupyterLab terminal.

Text-to-SQL models excel at understanding the intent and context of a user’s request, even when the language used is conversational or ambiguous. The process involves translating natural language inputs into the correct database schema elements, such as table names, column names, and conditions. However, an off-the-shelf Text-to-SQL model will not inherently know the structure of your data warehouse, the specific database schemas, or be able to accurately interpret the content of a table based solely on column names. To effectively use these models to generate practical and efficient SQL queries from natural language, it is necessary to adapt the SQL text-generation model to your specific warehouse database schema. This adaptation is facilitated through the use of LLM prompts. The following is a recommended prompt template for the defog/sqlcoder-7b-2 Text-to-SQL model, divided into four parts:

  • Task – This section should specify a high-level task to be accomplished by the model. It should include the type of database backend (such as Amazon RDS, PostgreSQL, or Amazon Redshift) to make the model aware of any nuanced syntactical differences that may affect the generation of the final SQL query.
  • Instructions – This section should define task boundaries and domain awareness for the model, and may include few-shot examples to guide the model in generating finely tuned SQL queries.
  • Database Schema – This section should detail your warehouse database schemas, outlining the relationships between tables and columns to aid the model in understanding the database structure.
  • Answer – This section is reserved for the model to output the SQL query response to the natural language input.

An example of the database schema and prompt used in this section is available in the GitHub Repo.

### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Instructions
- If you cannot answer the question with the available database schema, return 'I do not know'

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that 
 [QUESTION]
    {user_question}
 [/QUESTION]

[SQL]
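
As a sketch of how this template can be wired up in the notebook, the schema can be substituted once while the {user_question} placeholder is kept for query time. The file name and DDL statement below are hypothetical stand-ins for the schema prompt provided in the GitHub repo:

# Read the prompt template shown above (hypothetical file name)
prompt_template = open("prompt_template.txt").read()

# Hypothetical schema DDL; in practice this comes from the repo's schema file
ddl = "CREATE TABLE flights (carrier VARCHAR, origin VARCHAR, destination VARCHAR, departure_date DATE);"

# Fill in the schema now, but keep the {user_question} placeholder for query time
prompt = prompt_template.format(
    user_question="{user_question}",
    table_metadata_string_DDL_statements=ddl,
)

The resulting prompt string is what the llm_generate_query function defined later in this section formats with the user’s question.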

Prompt engineering is not just about forming questions or statements; it’s a nuanced art and science that significantly impacts the quality of interactions with an AI model. The way you craft a prompt can profoundly influence the nature and usefulness of the AI’s response. This skill is pivotal in maximizing the potential of AI interactions, especially in complex tasks requiring specialized understanding and detailed responses.

It’s important to be able to quickly build and test a model’s response for a given prompt, and to optimize the prompt based on that response. JupyterLab notebooks provide instant feedback from a model running on local compute, so you can iteratively refine the prompt, tune the model’s response, or swap in a different model entirely. In this post, we use a SageMaker Studio JupyterLab notebook backed by the ml.g5.2xlarge instance’s NVIDIA A10G 24 GB GPU to run Text-to-SQL model inference in the notebook and interactively build the model prompt until the response is sufficiently tuned to be directly executable in JupyterLab’s SQL cells. To run model inference and simultaneously stream model responses, we use a combination of model.generate and TextIteratorStreamer, as defined in the following code:

from transformers import TextIteratorStreamer

streamer = TextIteratorStreamer(
    tokenizer=tokenizer,
    timeout=240.0,
    skip_prompt=True,
    skip_special_tokens=True
)


def llm_generate_query(user_question):
    """Generate a Text-to-SQL response for a natural language question."""

    # fill in the remaining {user_question} placeholder of the prompt template
    updated_prompt = prompt.format(user_question=user_question)
    inputs = tokenizer(updated_prompt, return_tensors="pt").to("cuda")

    # greedy decoding; generated tokens are also pushed to the streamer
    return model.generate(
        **inputs,
        num_return_sequences=1,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        max_new_tokens=1024,
        temperature=0.1,
        do_sample=False,
        num_beams=1,
        streamer=streamer,
    )

The model’s output can be decorated with SageMaker SQL magic %%sm_sql ..., which allows the JupyterLab notebook to identify the cell as a SQL cell.
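
A hypothetical invocation might look like the following; the decoded output still contains the prompt, so the generated SQL is taken from after the [SQL] tag before the magic header is prepended:

# Hypothetical question; the referenced tables must exist in the schema prompt
output_ids = llm_generate_query("How many flights did each carrier operate last month?")

generated = tokenizer.decode(output_ids[0], skip_special_tokens=True)
sql_query = generated.split("[SQL]")[-1].strip()

# Prepend the SageMaker SQL magic so the text can be pasted into a SQL cell
print(f"%%sm_sql --metastore-id Airlines_Dataset --metastore-type GLUE_CONNECTION\n\n{sql_query}")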

Host Text-to-SQL models as SageMaker endpoints

At the end of the prototyping stage, we have selected our preferred Text-to-SQL LLM, an effective prompt format, and an appropriate instance type for hosting the model (either single-GPU or multi-GPU). SageMaker facilitates the scalable hosting of custom models through the use of SageMaker endpoints. These endpoints can be defined according to specific criteria, allowing for the deployment of LLMs as endpoints. This capability enables you to scale the solution to a wider audience, allowing users to generate SQL queries from natural language inputs using custom hosted LLMs. The following diagram illustrates this architecture.

To host your LLM as a SageMaker endpoint, you generate several artifacts.

The first artifact is model weights. SageMaker Deep Java Library (DJL) Serving containers allow you to set up configurations through a meta serving.properties file, which enables you to direct how models are sourced: either directly from the Hugging Face Hub or by downloading model artifacts from Amazon S3. If you specify model_id=defog/sqlcoder-7b-2, DJL Serving will attempt to download this model directly from the Hugging Face Hub. However, you may incur networking ingress/egress charges each time the endpoint is deployed or elastically scaled. To avoid these charges and potentially speed up the download of model artifacts, it is recommended to skip model_id in serving.properties, save the model weights as Amazon S3 artifacts, and point to them with s3url=s3://path/to/model/bin.

Saving a model (with its tokenizer) to disk and uploading it to Amazon S3 can be accomplished with just a few lines of code:

import sagemaker

# save model and tokenizer to local disk
model.save_pretrained(local_model_path)
tokenizer.save_pretrained(local_model_path)
...
...
...
# upload files to s3
s3_bucket_name = "<my llm artifact bucket name>"
# s3 prefix to save model weights and tokenizer defs
model_s3_prefix = "sqlcoder-7b-instruct/weights"
# s3 prefix to store the meta-model (serving) artifacts
meta_model_s3_prefix = "sqlcoder-7b-instruct/meta-model"

sagemaker.s3.S3Uploader.upload(local_model_path, f"s3://{s3_bucket_name}/{model_s3_prefix}")

You also use a database prompt file. In this setup, the database prompt is composed of Task, Instructions, Database Schema, and Answer sections. For the current architecture, we allocate a separate prompt file for each database schema. However, there is flexibility to expand this setup to include multiple databases per prompt file, allowing the model to run composite joins across databases on the same server. During our prototyping stage, we save the database prompt as a text file named <Database-Glue-Connection-Name>.prompt, where Database-Glue-Connection-Name corresponds to the connection name visible in your JupyterLab environment. For instance, this post refers to a Snowflake connection named Airlines_Dataset, so the database prompt file is named Airlines_Dataset.prompt. This file is then stored on Amazon S3 and subsequently read and cached by our model serving logic.

Moreover, this architecture permits any authorized users of this endpoint to define, store, and generate natural language to SQL queries without the need for multiple redeployments of the model. We use the following example of a database prompt to demonstrate the Text-to-SQL functionality.

Next, you generate custom model serving logic. In this section, you outline a custom inference script named model.py. This script is designed to optimize the performance and integration of our Text-to-SQL services:

  • Define the database prompt file caching logic – To minimize latency, we implement a custom logic for downloading and caching database prompt files. This mechanism makes sure that prompts are readily available, reducing the overhead associated with frequent downloads.
  • Define custom model inference logic – To enhance inference speed, our text-to-SQL model is loaded in the float16 precision format and then converted into a DeepSpeed model. This step allows for more efficient computation. Additionally, within this logic, you specify which parameters users can adjust during inference calls to tailor the functionality according to their needs.
  • Define custom input and output logic – Establishing clear and customized input/output formats is essential for smooth integration with downstream applications. One such application is JupyterAI, which we discuss in the subsequent section.
%%writefile {meta_model_filename}/model.py
...

predictor = None
prompt_for_db_dict_cache = {}

def download_prompt_from_s3(prompt_filename):

    print(f"downloading prompt file: {prompt_filename}")
    s3 = boto3.resource('s3')
    ...


def get_model(properties):
    
    ...
    print(f"Loading model from {cwd}")
    model = AutoModelForCausalLM.from_pretrained(
        cwd, 
        low_cpu_mem_usage=True, 
        torch_dtype=torch.bfloat16
    )
    model = deepspeed.init_inference(
        model, 
        mp_size=properties["tensor_parallel_degree"]
    )
    
    ...


def handle(inputs: Input) -> None:

    ...

    global predictor
    if not predictor:
        predictor = get_model(inputs.get_properties())

    ...
    result = f"""%%sm_sql --metastore-id {prompt_for_db_key.split('.')[0]} --metastore-type GLUE_CONNECTIONnn{result}n"""
    result = [{'generated_text': result}]
    
    return Output().add(result)

Additionally, we include a serving.properties file, which acts as a global configuration file for models hosted using DJL serving. For more information, refer to Configurations and settings.
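
As a rough sketch, the file can be generated next to model.py as follows, pointing DJL Serving at the Amazon S3 weights uploaded earlier rather than the Hugging Face Hub; the engine and tensor parallel settings are assumptions to adjust for your container version and instance type:

# Minimal serving.properties sketch; verify the option keys against the DJL
# Serving documentation for the container version you deploy
serving_properties = "\n".join([
    "engine=DeepSpeed",
    "option.tensor_parallel_degree=1",
    f"option.s3url=s3://{s3_bucket_name}/{model_s3_prefix}/",
])

with open(f"{meta_model_filename}/serving.properties", "w") as f:
    f.write(serving_properties)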

Lastly, you can also include a requirements.txt file to define additional modules required for inference and package everything into a tarball for deployment.

See the following code:

os.system(f"tar czvf {meta_model_filename}.tar.gz ./{meta_model_filename}/")

>>>./deepspeed-djl-serving-7b/
>>>./deepspeed-djl-serving-7b/serving.properties
>>>./deepspeed-djl-serving-7b/model.py
>>>./deepspeed-djl-serving-7b/requirements.txt
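
Assuming the tarball has been uploaded to the meta-model prefix on Amazon S3, a deployment sketch using the SageMaker Python SDK might look like the following; the container version and endpoint name are assumptions:

import sagemaker
from sagemaker import image_uris
from sagemaker.model import Model

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# DJL DeepSpeed container image; check the versions available in your Region
image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=session.boto_region_name, version="0.27.0"
)

llm_model = Model(
    image_uri=image_uri,
    model_data=f"s3://{s3_bucket_name}/{meta_model_s3_prefix}/{meta_model_filename}.tar.gz",
    role=role,
    sagemaker_session=session,
)

predictor = llm_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
    endpoint_name="sqlcoder-7b-text-to-sql",  # hypothetical endpoint name
)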

Integrate your endpoint with the SageMaker Studio Jupyter AI assistant

Jupyter AI is an open source tool that brings generative AI to Jupyter notebooks, offering a robust and user-friendly platform for exploring generative AI models. It enhances productivity in JupyterLab and Jupyter notebooks by providing features like the %%ai magic for creating a generative AI playground inside notebooks, a native chat UI in JupyterLab for interacting with AI as a conversational assistant, and support for a wide array of LLMs from providers like Amazon Titan, AI21, Anthropic, Cohere, and Hugging Face or managed services like Amazon Bedrock and SageMaker endpoints. For this post, we use Jupyter AI’s out-of-the-box integration with SageMaker endpoints to bring the Text-to-SQL capability into JupyterLab notebooks. The Jupyter AI tool comes pre-installed in all SageMaker Studio JupyterLab Spaces backed by SageMaker Distribution images; end-users are not required to make any additional configurations to start using the Jupyter AI extension to integrate with a SageMaker hosted endpoint. In this section, we discuss the two ways to use the integrated Jupyter AI tool.

Jupyter AI inside a notebook using magics

Jupyter AI’s %%ai magic command allows you to transform your SageMaker Studio JupyterLab notebooks into a reproducible generative AI environment. To begin using AI magics, make sure you have loaded the jupyter_ai_magics extension to use %%ai magic, and additionally load amazon_sagemaker_sql_magic to use %%sm_sql magic:

# load sm_sql magic extension and ai magic extension
%load_ext jupyter_ai_magics
%load_ext amazon_sagemaker_sql_magic

To run a call to your SageMaker endpoint from your notebook using the %%ai magic command, provide the following parameters and structure the command as follows:

  • --region-name – Specify the Region where your endpoint is deployed. This makes sure that the request is routed to the correct geographic location.
  • --request-schema – Include the schema of the input data. This schema outlines the expected format and types of the input data that your model needs to process the request.
  • --response-path – Define the path within the response object where the output of your model is located. This path is used to extract the relevant data from the response returned by your model.
  • -f (optional) – This is an output formatter flag that indicates the type of output returned by the model. In the context of a Jupyter notebook, if the output is code, this flag should be set accordingly to format the output as executable code at the top of a Jupyter notebook cell, followed by a free text input area for user interaction.

For example, the command in a Jupyter notebook cell might look like the following code:

%%ai sagemaker-endpoint:<endpoint-name> --region-name=us-east-1 
--request-schema={
    "inputs":"<prompt>", 
    "parameters":{
        "temperature":0.1,
        "top_p":0.2,
        "max_new_tokens":1024,
        "return_full_text":false
    }, 
    "db_prompt":"Airlines_Dataset.prompt"
  } 
--response-path=[0].generated_text -f code

My natural language query goes here...

Jupyter AI chat window

Alternatively, you can interact with SageMaker endpoints through a built-in user interface, simplifying the process of generating queries or engaging in dialogue. Before beginning to chat with your SageMaker endpoint, configure the relevant settings in Jupyter AI for the SageMaker endpoint, as shown in the following screenshot.

Conclusion

SageMaker Studio now simplifies and streamlines the data scientist workflow by integrating SQL support into JupyterLab notebooks. This allows data scientists to focus on their tasks without the need to manage multiple tools. Furthermore, the new built-in SQL integration in SageMaker Studio enables data personas to effortlessly generate SQL queries using natural language text as input, thereby accelerating their workflow.

We encourage you to explore these features in SageMaker Studio. For more information, refer to Prepare data with SQL in Studio.

Appendix

Enable the SQL browser and notebook SQL cell in custom environments

If you’re not using a SageMaker Distribution image, or you’re using Distribution image version 1.5 or lower, run the following commands to enable the SQL browsing feature inside your JupyterLab environment:

npm install -g vscode-jsonrpc
npm install -g sql-language-server
pip install amazon-sagemaker-sql-execution==0.1.0
pip install amazon-sagemaker-sql-editor
restart-jupyter-server

Relocate the SQL browser widget

JupyterLab widgets can be relocated. Depending on your preference, you can move the SQL browser widget to either side of the JupyterLab sidebar. To move it to the opposite side, right-click the widget icon and choose Switch Sidebar Side.


About the authors

Pranav Murthy is an AI/ML Specialist Solutions Architect at AWS. He focuses on helping customers build, train, deploy and migrate machine learning (ML) workloads to SageMaker. He previously worked in the semiconductor industry developing large computer vision (CV) and natural language processing (NLP) models to improve semiconductor processes using state of the art ML techniques. In his free time, he enjoys playing chess and traveling. You can find Pranav on LinkedIn.

Varun Shah is a Software Engineer working on Amazon SageMaker Studio at Amazon Web Services. He is focused on building interactive ML solutions which simplify data processing and data preparation journeys. In his spare time, Varun enjoys outdoor activities including hiking and skiing, and is always up for discovering new, exciting places.

Sumedha Swamy is a Principal Product Manager at Amazon Web Services, where he leads the SageMaker Studio team in its mission to develop the IDE of choice for data science and machine learning. He has dedicated the past 15 years to building machine learning-based consumer and enterprise products.

Bosco Albuquerque is a Sr. Partner Solutions Architect at AWS and has over 20 years of experience working with database and analytics products from enterprise database vendors and cloud providers. He has helped technology companies design and implement data analytics solutions and products.

Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries

There has been tremendous progress in the field of distributed deep learning for large language models (LLMs), especially after the release of ChatGPT in December 2022. LLMs continue to grow in size with billions or even trillions of parameters, and they often won’t fit into a single accelerator device such as a GPU, or even a single node such as ml.p5.32xlarge, because of memory limitations. Customers training LLMs often must distribute their workload across hundreds or even thousands of GPUs. Enabling training at such scale remains a challenge in distributed training, and training efficiently in such a large system is another equally important problem. Over the past years, the distributed training community has introduced 3D parallelism (data parallelism, pipeline parallelism, and tensor parallelism) and other techniques (such as sequence parallelism and expert parallelism) to address such challenges.

In December 2023, Amazon announced the release of the SageMaker model parallel library 2.0 (SMP), which achieves state-of-the-art efficiency in large model training, together with the SageMaker distributed data parallelism library (SMDDP). This release is a significant update from 1.x: SMP is now integrated with open source PyTorch Fully Sharded Data Parallel (FSDP) APIs, which allows you to use a familiar interface when training large models, and is compatible with Transformer Engine (TE), unlocking tensor parallelism techniques alongside FSDP for the first time. To learn more about the release, refer to Amazon SageMaker model parallel library now accelerates PyTorch FSDP workloads by up to 20%.

In this post, we explore the performance benefits of Amazon SageMaker (including SMP and SMDDP), and how you can use the library to train large models efficiently on SageMaker. We demonstrate the performance of SageMaker with benchmarks on ml.p4d.24xlarge clusters up to 128 instances, and FSDP mixed precision with bfloat16 for the Llama 2 model. We start with a demonstration of near-linear scaling efficiencies for SageMaker, followed by analyzing contributions from each feature for optimal throughput, and end with efficient training with various sequence lengths up to 32,768 through tensor parallelism.

Near-linear scaling with SageMaker

To reduce the overall training time for LLM models, preserving high throughput when scaling to large clusters (thousands of GPUs) is crucial given the inter-node communication overhead. In this post, we demonstrate robust and near-linear scaling (by varying the number of GPUs for a fixed total problem size) efficiencies on p4d instances invoking both SMP and SMDDP.

In this section, we demonstrate SMP’s near-linear scaling performance. Here we train Llama 2 models of various sizes (7B, 13B, and 70B parameters) using a fixed sequence length of 4,096, the SMDDP backend for collective communication, TE enabled, a global batch size of 4 million, with 16 to 128 p4d nodes. The following table summarizes our optimal configuration and training performance (model TFLOPs per second).

Model size  Number of nodes  TFLOPs*  sdp*  tp*  offload*  Scaling efficiency
7B          16               136.76   32    1    N         100.0%
7B          32               132.65   64    1    N         97.0%
7B          64               125.31   64    1    N         91.6%
7B          128              115.01   64    1    N         84.1%
13B         16               141.43   32    1    Y         100.0%
13B         32               139.46   256   1    N         98.6%
13B         64               132.17   128   1    N         93.5%
13B         128              120.75   128   1    N         85.4%
70B         32               154.33   256   1    Y         100.0%
70B         64               149.60   256   1    N         96.9%
70B         128              136.52   64    2    N         88.5%

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configurations after exploring various sdp, tp, and activation offloading combinations.

The preceding table summarizes the optimal throughput numbers subject to sharded data parallel (sdp) degree (typically using FSDP hybrid sharding instead of full sharding, with more details in the next section), tensor parallel (tp) degree, and activation offloading value changes, demonstrating a near-linear scaling for SMP together with SMDDP. For example, given the Llama 2 model size 7B and sequence length 4,096, overall it achieves scaling efficiencies of 97.0%, 91.6%, and 84.1% (relative to 16 nodes) at 32, 64, and 128 nodes, respectively. The scaling efficiencies are stable across different model sizes and increase slightly as the model size gets larger.
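
Each scaling efficiency value is the per-GPU throughput at a given node count divided by the throughput at the smallest node count for that model size, as this quick check against the 7B rows reproduces:

# Reproduce the 7B scaling efficiencies from the preceding table
baseline_tflops = 136.76  # per-GPU model TFLOPs at 16 nodes
for nodes, tflops in [(32, 132.65), (64, 125.31), (128, 115.01)]:
    print(f"{nodes} nodes: {tflops / baseline_tflops:.1%}")
# 32 nodes: 97.0%, 64 nodes: 91.6%, 128 nodes: 84.1%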

SMP and SMDDP also demonstrate similar scaling efficiencies for other sequence lengths such as 2,048 and 8,192.

SageMaker model parallel library 2.0 performance: Llama 2 70B

Model sizes have continued to grow over the past years, along with frequent state-of-the-art performance updates in the LLM community. In this section, we illustrate performance in SageMaker for the Llama 2 model using a fixed model size 70B, sequence length of 4,096, and a global batch size of 4 million. To compare with the previous table’s globally optimal configuration and throughput (with SMDDP backend, typically FSDP hybrid sharding and TE), the following table extends to other optimal throughputs (potentially with tensor parallelism) with extra specifications on the distributed backend (NCCL and SMDDP), FSDP sharding strategies (full sharding and hybrid sharding), and enabling TE or not (default).

All rows are for the 70B model. The TFLOPs columns #0–#3 correspond to NCCL full sharding (#0), SMDDP full sharding (#1), SMDDP hybrid sharding (#2), and SMDDP hybrid sharding with TE (#3); sdp, tp, and offload describe the #3 configuration, and the final columns give the TFLOPs improvement between configurations.

Number of nodes  #0      #1      #2      #3      sdp*  tp*  offload*  #0 → #1  #1 → #2  #2 → #3  #0 → #3
32               150.82  149.90  150.05  154.33  256   1    Y         -0.6%    0.1%     2.9%     2.3%
64               144.38  144.38  145.42  149.60  256   1    N         0.0%     0.7%     2.9%     3.6%
128              68.53   103.06  130.66  136.52  64    2    N         50.4%    26.8%    4.5%     99.2%

*At the given model size, sequence length, and number of nodes, we show the globally optimal throughput and configuration after exploring various sdp, tp, and activation offloading combinations.

The latest release of SMP and SMDDP supports multiple features including native PyTorch FSDP, extended and more flexible hybrid sharding, transformer engine integration, tensor parallelism, and optimized all gather collective operation. To better understand how SageMaker achieves efficient distributed training for LLMs, we explore incremental contributions from SMDDP and the following SMP core features:

  • SMDDP enhancement over NCCL with FSDP full sharding
  • Replacing FSDP full sharding with hybrid sharding, which reduces communication cost to improve throughput
  • A further boost to throughput with TE, even when tensor parallelism is disabled
  • At lower resource settings, activation offloading might be able to enable training that would otherwise be infeasible or very slow due to high memory pressure

FSDP full sharding: SMDDP enhancement over NCCL

As shown in the previous table, when models are fully sharded with FSDP, although NCCL (TFLOPs #0) and SMDDP (TFLOPs #1) throughputs are comparable at 32 or 64 nodes, there is a huge improvement of 50.4% from NCCL to SMDDP at 128 nodes.

At smaller model sizes, we observe consistent and significant improvements with SMDDP over NCCL, starting at smaller cluster sizes, because SMDDP is able to mitigate the communication bottleneck effectively.

FSDP hybrid sharding to reduce communication cost

In SMP 1.0, we launched sharded data parallelism, a distributed training technique powered by Amazon in-house MiCS technology. In SMP 2.0, we introduce SMP hybrid sharding, an extensible and more flexible hybrid sharding technique that allows models to be sharded among a subset of GPUs, instead of all training GPUs, which is the case for FSDP full sharding. It’s useful for medium-sized models that don’t need to be sharded across the entire cluster in order to satisfy per-GPU memory constraints. This leads to clusters having more than one model replica and each GPU communicating with fewer peers at runtime.

SMP’s hybrid sharding enables efficient model sharding over a wider range, from the smallest shard degree with no out of memory issues up to the whole cluster size (which equates to full sharding).

The following figure illustrates the throughput dependence on sdp at tp = 1 for simplicity. Although it’s not necessarily the same as the optimal tp value for NCCL or SMDDP full sharding in the previous table, the numbers are quite close. It clearly validates the value of switching from full sharding to hybrid sharding at a large cluster size of 128 nodes, which is applicable to both NCCL and SMDDP. For smaller model sizes, significant improvements with hybrid sharding start at smaller cluster sizes, and the difference keeps increasing with cluster size.

Improvements with TE

TE is designed to accelerate LLM training on NVIDIA GPUs. Despite not using FP8 because it’s unsupported on p4d instances, we still see significant speedup with TE on p4d.

On top of MiCS trained with the SMDDP backend, TE introduces a consistent boost for throughput across all cluster sizes (the only exception is full sharding at 128 nodes), even when tensor parallelism is disabled (tensor parallel degree is 1).

For smaller model sizes or various sequence lengths, the TE boost is stable and non-trivial, in the range of approximately 3–7.6%.

Activation offloading at low resource settings

At low resource settings (given a small number of nodes), FSDP might experience a high memory pressure (or even out of memory in the worst case) when activation checkpointing is enabled. For such scenarios bottlenecked by memory, turning on activation offloading is potentially an option to improve performance.

For example, as we saw previously, although the Llama 2 at model size 13B and sequence length 4,096 is able to train optimally with at least 32 nodes with activation checkpointing and without activation offloading, it achieves the best throughput with activation offloading when limited to 16 nodes.

Enable training with long sequences: SMP tensor parallelism

Longer sequence lengths are desired for long conversations and context, and are getting more attention in the LLM community. Therefore, we report various long sequence throughputs in the following table. The table shows optimal throughputs for Llama 2 training on SageMaker, with various sequence lengths from 2,048 up to 32,768. At sequence length 32,768, native FSDP training is infeasible with 32 nodes at a global batch size of 4 million.

Model size  Sequence length  Number of nodes  TFLOPs (native FSDP and NCCL)  TFLOPs (SMP and SMDDP)  SMP improvement
7B          2048             32               129.25                         138.17                  6.9%
7B          4096             32               124.38                         132.65                  6.6%
7B          8192             32               115.25                         123.11                  6.8%
7B          16384            32               100.73                         109.11                  8.3%
7B          32768            32               N.A.                           82.87                   N.A.
13B         2048             32               137.75                         144.28                  4.7%
13B         4096             32               133.30                         139.46                  4.6%
13B         8192             32               125.04                         130.08                  4.0%
13B         16384            32               111.58                         117.01                  4.9%
13B         32768            32               N.A.                           92.38                   N.A.
*Maximum SMP improvement: 8.3%; median SMP improvement: 5.8%.

When the cluster size is large and given a fixed global batch size, some model training might be infeasible with native PyTorch FSDP, lacking a built-in pipeline or tensor parallelism support. In the preceding table, given a global batch size of 4 million, 32 nodes, and sequence length 32,768, the effective batch size per GPU is 0.5 (for example, tp = 2 with batch size 1), which would otherwise be infeasible without introducing tensor parallelism.
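
The arithmetic behind that statement is straightforward, assuming 8 GPUs per ml.p4d.24xlarge node:

# Effective per-GPU batch size at sequence length 32,768 on 32 p4d nodes
global_batch_tokens = 4_000_000
sequence_length = 32_768
num_gpus = 32 * 8  # 32 nodes x 8 GPUs per node

sequences_per_step = global_batch_tokens / sequence_length  # ~122 sequences
per_gpu_batch = sequences_per_step / num_gpus               # ~0.48, roughly 0.5
print(round(per_gpu_batch, 2))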

Conclusion

In this post, we demonstrated efficient LLM training with SMP and SMDDP on p4d instances, attributing contributions to multiple key features, such as SMDDP enhancement over NCCL, flexible FSDP hybrid sharding instead of full sharding, TE integration, and enabling tensor parallelism in favor of long sequence lengths. After being tested over a wide range of settings with various models, model sizes, and sequence lengths, it exhibits robust near-linear scaling efficiencies, up to 128 p4d instances on SageMaker. In summary, SageMaker continues to be a powerful tool for LLM researchers and practitioners.

To learn more, refer to SageMaker model parallelism library v2, or contact the SMP team at sm-model-parallel-feedback@amazon.com.

Acknowledgements

We’d like to thank Robert Van Dusen, Ben Snyder, Gautam Kumar, and Luis Quintela for their constructive feedback and discussions.


About the Authors

Xinle Sheila Liu is an SDE in Amazon SageMaker. In her spare time, she enjoys reading and outdoor sports.

Suhit Kodgule is a Software Development Engineer with the AWS Artificial Intelligence group working on deep learning frameworks. In his spare time, he enjoys hiking, traveling, and cooking.

Victor Zhu is a Software Engineer in Distributed Deep Learning at Amazon Web Services. He can be found enjoying hiking and board games around the SF Bay Area.

Derya Cavdar works as a software engineer at AWS. Her interests include deep learning and distributed training optimization.

Teng Xu is a Software Development Engineer in the Distributed Training group in AWS AI. He enjoys reading.

Manage your Amazon Lex bot via AWS CloudFormation templates

Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. It employs advanced deep learning technologies to understand user input, enabling developers to create chatbots, virtual assistants, and other applications that can interact with users in natural language.

Managing your Amazon Lex bots using AWS CloudFormation allows you to create templates defining the bot and all the AWS resources it depends on. AWS CloudFormation provisions and configures those resources on your behalf, removing the risk of human error when deploying bots to new environments. The benefits of using CloudFormation include:

  • Consistency – A CloudFormation template provides a more consistent and automated way to deploy and manage the resources associated with an Amazon Lex bot.
  • Version control – With AWS CloudFormation, you can use version control systems like Git to manage your CloudFormation templates. This allows you to maintain different versions of your bot and roll back to previous versions if needed.
  • Reusability – You can reuse CloudFormation templates across multiple environments, such as development, staging, and production. This saves time and effort in defining the same bot across different environments.
  • Expandability – As your Amazon Lex bot grows in complexity, managing it through the AWS Management Console becomes more challenging. AWS CloudFormation allows for a more streamlined and efficient approach to managing the bot’s definition and resources.
  • Automation – Using a CloudFormation template allows you to automate the deployment process. You can use AWS services like AWS CodePipeline and AWS CodeBuild to build, test, and deploy your Amazon Lex bot automatically.

In this post, we guide you through the steps involved in creating a CloudFormation template for an Amazon Lex V2 bot.

Solution overview

We have chosen the Book Trip bot as our starting point for this exercise. We use a CloudFormation template to create a new bot from scratch, including defining intents, slots, and other required components. Additionally, we explore topics such as version control, aliases, integrating AWS Lambda functions, creating conditional branches, and enabling logging.

Prerequisites

You should have the following prerequisites:

  • An AWS account to create and deploy a CloudFormation template
  • The necessary AWS Identity and Access Management (IAM) permissions to deploy AWS CloudFormation and the resources used in the template
  • Basic knowledge of Amazon Lex, Lambda functions, and associated services
  • Basic knowledge of creating and deploying CloudFormation templates

Create an IAM role

To begin, you need to create an IAM role that the bot will use. You can achieve this by initializing a CloudFormation template and adding the IAM role as a resource. You can use the following template to create the role. If you download the example template and deploy it, you should see that an IAM role has been created. We provide examples of templates as we go through this post and merge them as we get further along.

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: CloudFormation template for book hotel bot.
Resources:
  # 1. IAM role that is used by the bot at runtime
  BotRuntimeRole:    
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"

Configure the Amazon Lex bot

Next, you need to add the bot definition. The following is the YAML template for the Amazon Lex bot definition; you construct the necessary components one by one:

Type: AWS::Lex::Bot
Properties:
  AutoBuildBotLocales: Boolean
  BotFileS3Location: 
    S3Location
  BotLocales: 
    - BotLocale
  BotTags: 
    - Tag
  DataPrivacy: 
    DataPrivacy
  Description: String
  IdleSessionTTLInSeconds: Integer
  Name: String
  RoleArn: String
  TestBotAliasSettings: 
    TestBotAliasSettings
  TestBotAliasTags: 
    - Tag

To create a bot that only includes the bot definition without any intent, you can use the following template. Here, you specify the bot’s name, the ARN of the role that you previously created, data privacy settings, and more:

BookHotelBot:
    DependsOn: BotRuntimeRole # The role created in the previous step
    Type: AWS::Lex::Bot
    Properties:
      Name: "BookHotel"
      Description: "Sample Bot to book a hotel"
      RoleArn: !GetAtt BotRuntimeRole.Arn      
      #For each Amazon Lex bot created with the Amazon Lex Model Building Service, you must specify whether your use of Amazon Lex 
      #is related to a website, program, or other application that is directed or targeted, in whole or in part, to children under 
      #age 13 and subject to the Children's Online Privacy Protection Act (COPPA) by specifying true or false in the 
      #childDirected field.
      DataPrivacy:
        ChildDirected: false
      IdleSessionTTLInSeconds: 300

You can download the updated template. Deploying it creates both the role and the bot definition. Note that you’re updating the stack you created in the previous step.

The final step is to define the BotLocales, which provide most of the bot’s functionality, including intents and slot types. The following is the YAML template:

  CustomVocabulary: 
    CustomVocabulary
  Description: String
  Intents: 
    - Intent
  LocaleId: String
  NluConfidenceThreshold: Number
  SlotTypes: 
    - SlotType
  VoiceSettings: 
    VoiceSettings

In this case, you build the BookHotel intent, which requires a custom slot type for room types. You set the LocaleId and VoiceSettings, then add the SlotTypes and their corresponding values.

The next step is to define the Intents, starting with the first intent, BookHotel, which involves adding sample utterances, slots, and slot priorities. Finally, you add the second intent, the FallbackIntent. See the following code:

BotLocales:
        - LocaleId: "en_US"
          Description: "en US locale"
          NluConfidenceThreshold: 0.40
          VoiceSettings:
            VoiceId: "Matthew"
          SlotTypes:
            - Name: "RoomTypeValues"
              Description: "Type of room"
              SlotTypeValues:
                - SampleValue:
                    Value: queen
                - SampleValue:
                    Value: king
                - SampleValue:
                    Value: deluxe
              ValueSelectionSetting:
                ResolutionStrategy: ORIGINAL_VALUE
          Intents:
            - Name: "BookHotel"
              Description: "Intent to book a hotel room"
              SampleUtterances:
                - Utterance: "Book a hotel"
                - Utterance: "I want a make hotel reservations"
                - Utterance: "Book a {Nights} night stay in {Location}"
              IntentConfirmationSetting:
                PromptSpecification:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have you down for a {Nights} night stay in {Location} starting {CheckInDate}.  Shall I book the reservation?"
                  MaxRetries: 3
                  AllowInterrupt: false
                DeclinationResponse:
                  MessageGroupsList:
                    - Message:
                        PlainTextMessage:
                          Value: "Okay, I have cancelled your reservation in progress."
                  AllowInterrupt: false
              SlotPriorities:
                - Priority: 1
                  SlotName: Location
                - Priority: 2
                  SlotName: CheckInDate
                - Priority: 3
                  SlotName: Nights
                - Priority: 4
                  SlotName: RoomType
              Slots:
                - Name: "Location"
                  Description: "Location of the city in which the hotel is located"
                  SlotTypeName: "AMAZON.City"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What city will you be staying in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "CheckInDate"
                  Description: "Date of check-in"
                  SlotTypeName: "AMAZON.Date"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What day do you want to check in?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "Nights"
                  Description: "something"
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false
                - Name: "RoomType"
                  Description: "Enumeration of types of rooms that are offered by a hotel."
                  SlotTypeName: "RoomTypeValues"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "What type of room would you like, queen, king or deluxe?"
                      MaxRetries: 2
                      AllowInterrupt: false
            - Name: "FallbackIntent"
              Description: "Default intent when no other intent matches"
              ParentIntentSignature: "AMAZON.FallbackIntent"

You can download the CloudFormation template for the work done until now. After you update your stack with this template, a functional bot will be deployed. On the Amazon Lex console, you can confirm that there is a draft version of the bot, and a default alias named TestBotAlias has been created.

Create a new bot version and alias

Amazon Lex supports publishing versions of bots, intents, and slot types so that you can control which implementation your client applications use. A version is a numbered snapshot of your bot definition that you can publish for use in different parts of your workflow, such as development, beta deployment, and production. Amazon Lex bots also support aliases. An alias is a pointer to a specific version of a bot; because client applications call the alias rather than a specific version, you can change which version they use without updating the applications themselves. In practice, bot aliases are used for blue/green deployments and for managing environment-specific configurations, such as development and production environments.

To illustrate, let’s say you point an alias to version 1 of your bot. When it’s time to update the bot, you can publish version 2 and change the alias to point to the new version. Because your applications use the alias instead of a specific version, all clients receive the new functionality without requiring updates.
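
In CloudFormation terms, this is a one-line change on the alias resource. The following is a minimal, hypothetical sketch (MyBot, the alias name, and the version numbers are placeholders rather than resources from this post):

MyBotAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref MyBot
      BotAliasName: "prod"
      # Changing this from "1" to "2" re-points all clients that call the alias
      BotVersion: "2"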

Keep in mind that when you modify the CloudFormation template and initiate deployment, the changes are applied to the draft version, which is primarily meant for testing. After you complete your testing, you can publish a new version to lock in the changes you’ve made so far.

Next, you create a new bot version based on your draft, set up a new alias, and link the version to this alias. The following are the two new resources to add to your template:

BookHotelInitialVersion:
    DependsOn: BookHotelBot
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot initial version

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelInitialVersion.BotVersion

You can download the new version of the template and deploy it by updating your stack. You can see on the Amazon Lex console that a new version is created and associated with a new alias called BookHotelDemoAlias.

When you create a new version of an Amazon Lex bot, Amazon Lex increments the version number sequentially, starting from 1. To identify a specific version, you can refer to its description.
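
If you want the assigned version number to be visible on the stack itself, you can also expose it through an Outputs section. The following is a small sketch that reuses the BookHotelInitialVersion resource defined earlier:

Outputs:
  BookHotelInitialVersionNumber:
    Description: Version number published from the DRAFT version
    Value: !GetAtt BookHotelInitialVersion.BotVersion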

Add a Lambda function

To initialize values or validate user input for your bot, you can add a Lambda function as a code hook to your bot. You can also use a Lambda function for fulfillment, for example to write data to databases or call APIs to save the collected information. For more information, refer to Enabling custom logic with AWS Lambda functions.

Let’s add a new resource for the Lambda function to the CloudFormation template. Although it’s generally not advised to embed code in CloudFormation templates, we do so here solely to keep the demo deployment simple. See the following code:

HotelBotFunction:
    DependsOn: BotRuntimeRole # So that the Lambda function is ready before the bot deployment
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: book_hotel_lambda
      Runtime: python3.11
      Timeout: 15
      Handler: index.lambda_handler
      InlineCode: |
        import os
        import json

        def close(intent_request):
            intent_request['sessionState']['intent']['state'] = 'Fulfilled'

            message = {"contentType": "PlainText",
                      "content": "Your Booking is confirmed"}

            session_attributes = {}
            sessionState = intent_request['sessionState']
            if 'sessionAttributes' in sessionState:
                session_attributes = sessionState['sessionAttributes']

            requestAttributes = None
            if 'requestAttributes' in intent_request:
                requestAttributes = intent_request['requestAttributes']

            return {
                'sessionState': {
                    'sessionAttributes': session_attributes,
                    'dialogAction': {
                        'type': 'Close'
                    },
                    'intent': intent_request['sessionState']['intent'],
                    'originatingRequestId': 'xxxxxxx-xxxx-xxxx-xxxx'
                },
                'messages':  [message],
                'sessionId': intent_request['sessionId'],
                'requestAttributes': requestAttributes
            }

        def router(event):
            intent_name = event['sessionState']['intent']['name']
            slots = event['sessionState']['intent']['slots']
            if (intent_name == 'BookHotel'):
                # fulfill the BookHotel intent and return the Close response
                return close(event)

            raise Exception(
                'The intent is not supported by Lambda: ' + intent_name)

        def lambda_handler(event, context):
            response = router(event)
            return response

To use this Lambda function for the fulfillment, enable the code hook settings in your intent:

Intents:
  - Name: "BookHotel"
    Description: "Intent to book a hotel room"
    FulfillmentCodeHook:
      Enabled: true
    SampleUtterances:
      - Utterance: "Book a hotel"
      - Utterance: "I want a make hotel reservations"
      - Utterance: "Book a {Nights} night stay in {Location}"

Because you made changes to your bot, you can create a new version of the bot by adding a new resource named BookHotelVersionWithLambda in the template:

BookHotelVersionWithLambda:
    DependsOn: BookHotelInitialVersion
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot with a lambda function

The Lambda function is associated with a bot alias. Amazon Lex V2 can use one Lambda function per bot alias per language. Therefore, you must update the alias in the template to reference the Lambda function, which you do in the BotAliasLocaleSettings section. You also need to point the alias to the new version you created. The following code shows the modified alias configuration:

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelVersionWithLambda.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are, you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

So far, you have only linked the Lambda function with the alias. You still need to grant Amazon Lex permission to invoke the function on behalf of that alias. In the following code, you add the Lambda invoke permission for Amazon Lex and specify the alias ARN as the source ARN:

  LexInvokeLambdaPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: "lambda:InvokeFunction"
      FunctionName: !GetAtt HotelBotFunction.Arn
      Principal: "lexv2.amazonaws.com"
      SourceArn: !GetAtt BookHotelDemoAlias.Arn

You can download the latest version of the template. After updating your stack with this version, you will have an Amazon Lex bot integrated with a Lambda function.

Conditional branches

Now let’s explore the conditional branch feature of the Amazon Lex bot and consider a scenario where booking more than five nights in Seattle is not allowed for the next week. As per the business requirement, the conversation should end with an appropriate message if the user attempts to book more than five nights in Seattle. The conditional branch for that is represented in the CloudFormation template under the SlotCaptureSetting:

- Name: "Nights"
                  Description: "Number of nights."
                  SlotTypeName: "AMAZON.Number"
                  ValueElicitationSetting:
                    SlotConstraint: "Required"
                    SlotCaptureSetting:
                      CaptureConditional:
                        DefaultBranch:
                          NextStep:
                            DialogAction:
                              Type: "ElicitSlot"
                              SlotToElicit: "RoomType"
                        ConditionalBranches:
                          - Name: "Branch1"
                            Condition:
                              ExpressionString: '{Nights}>5 AND {Location} = "Seattle"'
                            Response:
                              AllowInterrupt: true
                              MessageGroupsList:
                                - Message:
                                    PlainTextMessage:
                                      Value: "Sorry, we cannot book more than five nights in {Location} right now."
                            NextStep:
                              DialogAction:
                                Type: "EndConversation"
                        IsActive: true

                    PromptSpecification:
                      MessageGroupsList:
                        - Message:
                            PlainTextMessage:
                              Value: "How many nights will you be staying?"
                      MaxRetries: 2
                      AllowInterrupt: false

Because you changed the bot definition, you need to create a new version in the template and link it with the alias. This is a temporary modification because the business plans to allow large bookings in Seattle soon. The following are the two new resources you add to the template:

BookHotelConditionalBranches:
    DependsOn: BookHotelVersionWithLambda
    Type: AWS::Lex::BotVersion
    Properties:
      BotId: !Ref BookHotelBot
      BotVersionLocaleSpecification:
        - LocaleId: en_US
          BotVersionLocaleDetails:
            SourceBotVersion: DRAFT
      Description: Hotel Bot Version with conditional branches

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      # Remove BotAliasLocaleSettings if you aren't concerned with Lambda setup.
      # If you are, you can modify the LambdaArn below to get started.
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn

You can download the updated template. After you update your stack with this template version, the alias points to the version that incorporates the conditional branching feature. To undo this modification, you can update the alias to revert to the previous version, as shown in the following sketch.
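
The following is a minimal sketch of what that revert looks like; only the BotVersion property changes, and the rest of the alias configuration, including the Lambda code hook settings, stays as it was:

  BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      # Point back at the version created before the conditional branches
      BotVersion: !GetAtt BookHotelVersionWithLambda.BotVersion
      # Keep the existing BotAliasLocaleSettings block unchanged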

Logs

You can also enable logs for your Amazon Lex bot. To do so, you must update the bot’s role to grant permissions to write logs to Amazon CloudWatch. The following is an example of adding a CloudWatch policy to the role:

BotRuntimeRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lexv2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      Path: "/"
      Policies:
        - PolicyName: LexRuntimeRolePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "polly:SynthesizeSpeech"
                  - "comprehend:DetectSentiment"
                Resource: "*"
        - PolicyName: CloudWatchPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                Resource: "*"

To ensure consistent and predictable behavior, be as specific as possible when defining resource names and properties in CloudFormation templates. Using the wildcard character (*) can pose security risks and lead to unintended consequences, so it’s recommended to use explicit values wherever possible. For example, you can scope the CloudWatch policy to the specific log group you create in the next step, as shown in the sketch after the log group definition.

Next, you create a CloudWatch log group resource, as shown in the following code, to direct your logs to this group:

  #Log Group
  LexLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /lex/hotel-bot
      RetentionInDays: 5
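
Following the earlier guidance about avoiding wildcards, you could scope the CloudWatch policy shown above to this log group instead of using Resource: "*". The following is a sketch; the Arn attribute of an AWS::Logs::LogGroup already ends with :*, so it also covers the group's log streams:

        - PolicyName: CloudWatchPolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - "logs:CreateLogStream"
                  - "logs:PutLogEvents"
                # Restrict the policy to the bot's log group rather than "*"
                Resource: !GetAtt LexLogGroup.Arn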

Finally, you update your alias to enable conversation log settings:

BookHotelDemoAlias:
    Type: AWS::Lex::BotAlias
    Properties:
      BotId: !Ref BookHotelBot
      BotAliasName: "BookHotelDemoAlias"
      BotVersion: !GetAtt BookHotelConditionalBranches.BotVersion
      BotAliasLocaleSettings:
        - LocaleId: en_US
          BotAliasLocaleSetting:
            Enabled: true
            CodeHookSpecification:
              LambdaCodeHook:
                CodeHookInterfaceVersion: "1.0"
                LambdaArn: !GetAtt HotelBotFunction.Arn
      ConversationLogSettings:
        TextLogSettings:
          - Destination:
              CloudWatch:
                CloudWatchLogGroupArn: !GetAtt LexLogGroup.Arn
                LogPrefix: bookHotel
            Enabled: true

When you update the stack with this template, you enable the conversation logs for your bot. A new version is not created in this step because there are no changes to your bot resource. You can download the latest version of the template.

Clean up

To prevent incurring charges in the future, delete the CloudFormation stack you created.

Conclusion

In this post, we discussed the step-by-step process to create a CloudFormation template for an Amazon Lex V2 bot. Initially, we deployed a basic bot, then we explored the potential of aliases and versions and how to use them efficiently with templates. Next, we learned how to integrate a Lambda function with an Amazon Lex V2 bot and implemented conditional branching in the bot’s conversation flow to accommodate business requirements. Finally, we added logging features by creating a CloudWatch log group resource and updating the bot’s role with the necessary permissions.

The template allows for the straightforward deployment and management of the bot, with the ability to revert changes as necessary. Overall, the CloudFormation template is useful for managing and optimizing an Amazon Lex V2 bot.

As the next step, you can explore sample Amazon Lex bots and apply the techniques discussed in this post to convert them into CloudFormation templates. This hands-on practice will solidify your understanding of managing Amazon Lex V2 bots through infrastructure as code.


About the Authors

Thomas Rindfuss is a Sr. Solutions Architect on the Amazon Lex team. He invents, develops, prototypes, and evangelizes new technical features and solutions for language AI services that improve the customer experience and ease adoption.

Rijeesh Akkambeth Chathoth is a Professional Services Consultant at AWS. He helps customers achieve their desired business outcomes in the contact center space by using Amazon Connect, Amazon Lex, and generative AI features.
