NVIDIA Omniverse Expands Worlds Using Apple Vision Pro

NVIDIA is bringing OpenUSD-based Omniverse enterprise digital twins to the Apple Vision Pro.

Announced today at NVIDIA GTC, a new software framework built on Omniverse Cloud APIs, or application programming interfaces, lets developers easily send their Universal Scene Description (OpenUSD) industrial scenes from their content creation applications to the NVIDIA Graphics Delivery Network (GDN), a global network of graphics-ready data centers that can stream advanced 3D experiences to Apple Vision Pro.

In a demo unveiled at the global AI conference, NVIDIA presented an interactive, physically accurate digital twin of a car streamed in full fidelity to Apple Vision Pro’s high-resolution displays.

The demo featured a designer wearing the Vision Pro, using a car configurator application developed by CGI studio Katana on the Omniverse platform. The designer toggles through paint and trim options and even enters the vehicle — leveraging the power of spatial computing by blending 3D photorealistic environments with the physical world.

Bringing the Power of RTX Enterprise Cloud Rendering to Spatial Computing

Spatial computing has emerged as a powerful technology for delivering immersive experiences and seamless interactions between people, products, processes and physical spaces. Industrial enterprise use cases require incredibly high-resolution displays and powerful sensors operating at high frame rates to make manufacturing experiences true to reality.

This new Omniverse-based workflow combines Apple Vision Pro's groundbreaking high-resolution displays with NVIDIA's powerful RTX cloud rendering to deliver spatial computing experiences with just the device and an internet connection.

This cloud-based approach allows real-time, physically based renderings to be streamed seamlessly to Apple Vision Pro, delivering high-fidelity visuals without compromising the detail of massive, engineering-fidelity datasets.

“The breakthrough ultra-high-resolution displays of Apple Vision Pro, combined with photorealistic rendering of OpenUSD content streamed from NVIDIA accelerated computing, unlocks an incredible opportunity for the advancement of immersive experiences,” said Mike Rockwell, vice president of the Vision Products Group at Apple. “Spatial computing will redefine how designers and developers build captivating digital content, driving a new era of creativity and engagement.”

“Apple Vision Pro is the first untethered device which allows for enterprise customers to realize their work without compromise,” said Rev Lebaredian, vice president of simulation at NVIDIA. “We look forward to our customers having access to these amazing tools.”

The workflow also introduces hybrid rendering, a groundbreaking technique that combines local and remote rendering on the device. Users can render fully interactive experiences in a single application, combining Apple's native SwiftUI and RealityKit with the Omniverse RTX Renderer streamed from GDN.

NVIDIA GDN, available in over 130 countries, taps NVIDIA’s global cloud-to-edge streaming infrastructure to deliver smooth, high-fidelity, interactive experiences. By moving heavy compute tasks to GDN, users can tackle the most demanding rendering use cases, no matter the size or complexity of the dataset.

Enhancing Spatial Computing Workloads Across Use Cases

The Omniverse-based workflow showed potential for a wide range of use cases. For example, designers could use the technology to see their 3D data in full fidelity, with no loss in quality or model decimation. This means designers can interact with trustworthy simulations that look and behave like the real physical product. This also opens new channels and opportunities for e-commerce experiences.

In industrial settings, factory planners can view and interact with their full engineering factory datasets, letting them optimize their workflows and identify potential bottlenecks.

For developers and independent software vendors, NVIDIA is building the capabilities that would allow them to use the native tools on Apple Vision Pro to seamlessly interact with existing data in their applications.

Learn more about NVIDIA Omniverse and GDN.

Read More

NVIDIA and Siemens Bring Immersive Visualization and Generative AI to Industrial Design and Manufacturing

Generative AI and digital twins are changing the way companies in multiple industries design, manufacture and operate their products.

Siemens, a leading technology company for automation, digitalization and sustainability, announced today at NVIDIA GTC that it is expanding its partnership with NVIDIA by adopting new NVIDIA Omniverse Cloud APIs, or application programming interfaces, with its Siemens Xcelerator platform applications, starting with Teamcenter X. Teamcenter X is Siemens’ industry-leading cloud-based product lifecycle management (PLM) software.

NVIDIA Omniverse is a platform of APIs and services based on Universal Scene Description (OpenUSD) that enables developers to build generative AI-powered tools, applications and services for industrial digital twins and automation.

Enterprises of all sizes depend on Teamcenter software, part of the Siemens Xcelerator platform, to develop and deliver products at scale. By connecting NVIDIA Omniverse with Teamcenter X, Siemens will be able to provide engineering teams with the ability to make their physics-based digital twins more immersive and photorealistic, helping eliminate workflow waste and reduce errors.

Through the use of Omniverse APIs, workflows such as applying materials, lighting environments and other supporting scenery assets in physically based renderings will be dramatically accelerated using generative AI.

AI integrations will also allow engineering data to be contextualized as it would appear in the real world, allowing other stakeholders — from sales and marketing teams to decision-makers and customers — to benefit from deeper insight and understanding of real-world product appearance.

Unifying and Visualizing Complex Industrial Datasets

Traditionally, companies have relied heavily on physical prototypes and costly modifications to complete large-scale industrial projects and build complex, connected products. That approach is expensive and error-prone, limits innovation and slows time to market.

By connecting Omniverse Cloud APIs to the Xcelerator platform, Siemens will enable its customers to enhance their digital twins with physically based rendering, helping supercharge industrial-scale design and manufacturing projects. With the ability to connect generative AI APIs or agents, users can effortlessly generate 3D objects or high-dynamic range image backgrounds to view their assets in context.

This means that companies like HD Hyundai, a leader in sustainable ship manufacturing, can unify and visualize complex engineering projects directly within Teamcenter X. At NVIDIA GTC, Siemens and NVIDIA demonstrated how HD Hyundai could use the software to visualize digital twins of liquefied natural gas carriers, which can comprise over 7 million discrete parts, helping validate their product before moving to production.

Interoperable, photoreal and physics-based digital twins like these accelerate engineering collaboration and allow customers to minimize workflow waste, save time and costs, and reduce risk of manufacturing defects.

Combining Digital and Physical Worlds With Omniverse APIs

Omniverse Cloud APIs enable data interoperability and physically based rendering for industrial-scale design and manufacturing projects in Teamcenter X. This starts with a real-time, embedded, photoreal viewport powered by the USD Render and USD Write APIs, which engineers can use to interactively navigate, edit and iterate on a shared model of their live data.

The USD Query API lets Teamcenter X users navigate and interact with physically accurate scenes, while the USD Notify API automatically provides real-time design and scene updates. To facilitate cloud-based collaboration and data exchange, Teamcenter X will leverage the Omniverse Channel API to establish a secure connection between multiple users across devices.

In the future, Siemens plans to bring NVIDIA accelerated computing, generative AI and Omniverse to more of its Siemens Xcelerator portfolio.

Learn more about NVIDIA Omniverse, Siemens Xcelerator and the partnership.

Get started with NVIDIA Omniverse, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and Twitter. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Read More

NVIDIA Supercharges Autonomous System Development with Omniverse Cloud APIs

While simulation is critical for training, testing and deploying autonomy, achieving real-world fidelity is incredibly challenging.

It requires accurate modeling of the physics and behavior of an autonomous system’s sensors and surroundings.

Designed to address this challenge by delivering large-scale, high-fidelity sensor simulation, Omniverse Cloud APIs, announced today at NVIDIA GTC, are poised to accelerate the path to autonomy. They bring together a rich ecosystem of simulation tools, applications and sensors.

The application programming interfaces address the critical need for high-fidelity sensor simulations to safely explore the myriad real-world scenarios autonomous systems will encounter.

In addition, the Omniverse Cloud platform offers application developers access to a range of powerful Universal Scene Description (OpenUSD), RTX and generative AI-enabled service-level cloud APIs to bring interoperability and physically based rendering to next-generation tools.

Simulation Key to Unlocking New Levels of Safety

As demand increases for robots, autonomous vehicles (AVs) and other AI systems, developers are seeking to accelerate their workflows. Sensor data powers these systems' perception capabilities, enabling them to comprehend their environment and make informed decisions in real time.

Traditionally, developers have used real-world data for training, testing and validation.

However, these methods are limited in covering rare scenarios or data that can’t be captured in the real world. Sensor simulation provides a seamless way to effectively test countless “what if” scenarios and diverse environmental conditions.

With Omniverse Cloud APIs, developers can enhance the workflows they’re already using with high-fidelity sensor simulation to tackle the challenge of developing full-stack autonomy.

This not only streamlines the development process but also lowers the barriers to entry for companies of virtually all sizes developing autonomous machines.

The Ecosystem Advantage

By bringing together an expansive ecosystem of simulators, verification and validation (V&V) tools, content and sensor developers, the Omniverse Cloud APIs enable a universal environment for AI system development.

Developers and software vendors such as CARLA, MathWorks, MITRE, Foretellix and Voxel51 underscore the broad appeal of these APIs for autonomous vehicle development.

CARLA is an open-source AV simulator used by more than 100,000 developers. With Omniverse Cloud APIs, CARLA users can enhance their existing workflows with high-fidelity sensor simulation.

Similarly, MITRE, a nonprofit that operates federally funded R&D centers and is dedicated to improving safety in technology, is building a Digital Proving Ground (DPG) for the AV industry to validate self-driving solutions. The DPG will use the Omniverse APIs to enable core sensor simulation capabilities for its developers.

MathWorks and Foretellix provide critical simulation tools for authoring, executing, monitoring and debugging testing scenarios. As the GTC demo showed, combining such simulation and test automation tools with the APIs forms a powerful test environment for AV development. On the show floor, Foretellix is presenting an in-depth look at this solution at Booth 630.

And, by integrating the APIs with Voxel51’s FiftyOne platform, developers can easily visualize and organize ground-truth data generated in simulation for streamlined training and testing.

Leading industrial-sensor solution provider SICK AG is working on integrating these APIs in its sensor development process to reduce the number of physical prototypes, iterate quickly on design modifications and validate the eventual performance. These validated sensor models can eventually be used by autonomous systems developers in their applications.

Developers will also have access to sensor models from a variety of manufacturers, including lidar makers Hesai, Innoviz Technologies, Luminar, MicroVision, Robosense, and Seyond, visual sensor suppliers OMNIVISION, onsemi, and Sony Semiconductor Solutions, and Continental, FORVIA HELLA, and Arbe for radar.

Additionally, AI/ML developers can call on these APIs to generate large and diverse sets of synthetic data — critical input for training and validating perception models that power these autonomous systems.

Empowering Developers and Accelerating Innovation

By reducing the traditional barriers to high-fidelity sensor simulation, NVIDIA Omniverse Cloud APIs empower developers to address complex AI problems without significant infrastructure overhauls.

This democratization of access to advanced simulation tools promises to accelerate innovation, allowing developers to quickly adapt to and integrate the latest technological advancements into their testing and development processes.

Apply here for early access to Omniverse Cloud APIs.

Get started with NVIDIA Omniverse, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and Twitter. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

Read More

Staying in Sync: NVIDIA Combines Digital Twins With Real-Time AI for Industrial Automation

Real-time AI is helping with the heavy lifting in manufacturing, factory logistics and robotics.

In such industries — often involving bulky products, expensive equipment, cobot environments and logistically complex facilities — a simulation-first approach is ushering in the next phase of automation.

NVIDIA founder and CEO Jensen Huang today demonstrated in his GTC keynote how developers can use digital twins to develop, test and refine their large-scale, real-time AIs entirely in simulation before rolling them out in industrial infrastructure, saving significant time and cost.

NVIDIA Omniverse, Metropolis, Isaac and cuOpt interact in AI gyms where developers can train AI agents to help robots and humans navigate unpredictable or complex events.

In the demo, a digital twin of a 100,000-square-foot warehouse — built using the NVIDIA Omniverse platform for developing and connecting OpenUSD applications — operates as a simulation environment for dozens of digital workers and multiple autonomous mobile robots (AMRs), vision AI agents and sensors.

Each AMR, running the NVIDIA Isaac Perceptor multi-sensor stack, processes visual information from six sensors, all simulated in the digital twin.

At the same time, the NVIDIA Metropolis platform for vision AI creates a single centralized map of worker activity across the entire warehouse, fusing together data from 100 simulated ceiling-mounted camera streams with multi-camera tracking. This centralized occupancy map helps inform optimal AMR routes calculated by the NVIDIA cuOpt engine for solving complex routing problems.

cuOpt, a record-breaking optimization AI microservice, solves complex routing problems with multiple constraints using GPU-accelerated evolutionary algorithms.

All of this happens in real time, while Isaac Mission Control coordinates the entire fleet using map data and route graphs from cuOpt to send and execute AMR commands.

An AI Gym for Industrial Digitalization

AI agents can assist in large-scale industrial environments by, for example, managing fleets of robots in a factory or identifying streamlined configurations for human-robot collaboration in supply chain distribution centers. To build these complex agents, developers need digital twins that function as AI gyms — physically accurate environments for AI evaluation, simulation and training.

Such software-in-the-loop AI testing enables AI agents and AMRs to adapt to real-world unpredictability.

In the demo, an incident occurs along an AMR’s planned route, blocking the path and preventing it from picking up a pallet. NVIDIA Metropolis updates an occupancy grid, mapping all humans, robots and objects in a single view. cuOpt then plans an optimal route, and the AMR responds accordingly to minimize downtime.
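Neither Metropolis nor cuOpt is shown here, but the core idea of the demo, replanning a route when the occupancy map gains a new obstacle, can be sketched with a plain breadth-first search over a toy grid. The grid size, coordinates and blocked cell below are hypothetical illustrations only.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route available

# Hypothetical 5x5 warehouse grid; the AMR plans a route, then a pallet falls at (2, 2).
grid = [[0] * 5 for _ in range(5)]
original = shortest_path(grid, (0, 0), (4, 4))
grid[2][2] = 1  # occupancy map updated with the new obstacle
rerouted = shortest_path(grid, (0, 0), (4, 4))
print("original:", original)
print("rerouted:", rerouted)
```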

With Metropolis vision foundation models powering the NVIDIA Visual Insight Agent (VIA) framework, AI agents can be built to help operations teams answer questions like, “What situation occurred in aisle three of the factory?” And the generative AI-powered agent offers immediate insights such as, “Boxes fell from the shelves at 3:30 p.m., blocking the aisle.”

Developers can use the VIA framework to build AI agents capable of processing large amounts of live or archived videos and images with vision-language models — whether deployed at the edge or in the cloud. This new generation of visual AI agents will help nearly every industry summarize, search and extract actionable insights from video using natural language.

All of these AI functions can be enhanced through continuous, simulation-based training and are deployed as modular NVIDIA NIM inference microservices.

Learn more about the latest advancements in generative AI and industrial digitalization at NVIDIA GTC, a global AI conference running through Thursday, March 21, at the San Jose Convention Center and online.

Read More

At Your Microservice: NVIDIA Smooths Businesses’ Journey to Generative AI

NVIDIA’s AI platform is available to any forward-thinking business — and it’s easier to use than ever.

Launched today, NVIDIA AI Enterprise 5.0 includes NVIDIA microservices, downloadable software containers for deploying generative AI applications and accelerated computing. It’s available from leading cloud service providers, system builders and software vendors — and it’s in use at customers such as Uber.

“Our adoption of NVIDIA AI Enterprise inference software is important for meeting the high performance our users expect,” said Albert Greenberg, vice president of platform engineering at Uber. “Uber prides itself on being at the forefront of adopting and using the latest, most advanced AI innovations to deliver a customer service platform that sets the industry standard for effectiveness and excellence.”

Microservices Speed App Development

Developers are turning to microservices as an efficient way to build modern enterprise applications at a global scale. Working from a browser, they use cloud APIs, or application programming interfaces, to compose apps that can run on systems and serve users worldwide.

NVIDIA AI Enterprise 5.0 now includes a wide range of microservices — NVIDIA NIM for deploying AI models in production and the NVIDIA CUDA-X collection of microservices, which includes NVIDIA cuOpt.

NIM microservices optimize inference for dozens of popular AI models from NVIDIA and its partner ecosystem.

Powered by NVIDIA inference software — including Triton Inference Server, TensorRT, and TensorRT-LLM — NIM slashes deployment times from weeks to minutes. It provides security and manageability based on industry standards as well as compatibility with enterprise-grade management tools.

NVIDIA cuOpt is a GPU-accelerated AI microservice that’s set world records for route optimization and can empower dynamic decision-making that reduces cost, time and carbon footprint. It’s one of the CUDA-X microservices that help industries put AI into production.

More capabilities are in the works. For example, NVIDIA RAG LLM operator — now in early access and described in more detail here — will move co-pilots and other generative AI applications that use retrieval-augmented generation from pilot to production without rewriting any code.

NVIDIA microservices are being adopted by leading application and cybersecurity platform providers including CrowdStrike, SAP and ServiceNow.

More Tools and Features

Three other updates in version 5.0 are worth noting.

The platform now packs NVIDIA AI Workbench, a developer toolkit for quickly downloading, customizing, and running generative AI projects. The software is now generally available and supported with an NVIDIA AI Enterprise license.

Version 5.0 also now supports Red Hat OpenStack Platform, the environment most Fortune 500 companies use for creating private and public cloud services. Maintained by Red Hat, it provides developers a familiar option for building virtual computing environments. IBM Consulting will help customers deploy these new capabilities.

In addition, version 5.0 expands support to cover a wide range of the latest NVIDIA GPUs, networking hardware and virtualization software.

Available to Run Anywhere

The enhanced NVIDIA AI platform is easier to access than ever.

NIM and CUDA-X microservices and all the 5.0 features will be available soon on the AWS, Google Cloud, Microsoft Azure and Oracle Cloud marketplaces.

For those who prefer to run code in their own data centers, VMware Private AI Foundation with NVIDIA will support the software, so it can be deployed in the virtualized data centers of Broadcom’s customers.

Companies have the option of running NVIDIA AI Enterprise on Red Hat OpenShift, allowing them to deploy on bare-metal or virtualized environments. It’s also supported on Canonical’s Charmed Kubernetes as well as Ubuntu.

In addition, the AI platform will be part of the software available on HPE ProLiant servers from Hewlett Packard Enterprise (HPE). HPE’s enterprise computing solution for generative AI handles inference and model fine-tuning using NVIDIA AI Enterprise.

In addition, Anyscale, Dataiku and DataRobot — three leading providers of software for managing machine learning operations — will support NIM on their platforms. They join an NVIDIA ecosystem of hundreds of MLOps partners, including Microsoft Azure Machine Learning, Dataloop AI, Domino Data Lab and Weights & Biases.

However they access it, NVIDIA AI Enterprise 5.0 users can benefit from software that’s secure, production-ready and optimized for performance. It can be flexibly deployed for applications in the data center, the cloud, on workstations or at the network’s edge.

NVIDIA AI Enterprise is available through leading system providers, including Cisco, Dell Technologies, HP, HPE, Lenovo and Supermicro.

Hear Success Stories at GTC

Users will share their experiences with the software at NVIDIA GTC, a global AI conference, running March 18-21 at the San Jose Convention Center.

For example, ServiceNow chief digital information officer Chris Bedi will speak on a panel about harnessing generative AI’s potential. In a separate talk, ServiceNow vice president of AI Products Jeremy Barnes will discuss using NVIDIA AI Enterprise to achieve maximum developer productivity.

Executives from BlackRock, Medtronic, SAP and Uber will discuss their work in finance, healthcare, enterprise software, and business operations using the NVIDIA AI platform.

In addition, executives from ControlExpert, a Germany-based global application provider for car insurance companies, will share how they developed an AI-powered claims management solution using NVIDIA AI Enterprise software.

They’re among a growing set of companies that benefit from NVIDIA’s work evaluating hundreds of internal and external generative AI projects — all integrated into a single package that’s been tested for stability and security.

And get the full picture from NVIDIA CEO and founder Jensen Huang in his GTC keynote.

See notice regarding software product information. 

Read More

Optimize price-performance of LLM inference on NVIDIA GPUs using the Amazon SageMaker integration with NVIDIA NIM Microservices

NVIDIA NIM microservices now integrate with Amazon SageMaker, allowing you to deploy industry-leading large language models (LLMs) and optimize model performance and cost. You can deploy state-of-the-art LLMs in minutes instead of days using technologies such as NVIDIA TensorRT, NVIDIA TensorRT-LLM, and NVIDIA Triton Inference Server on NVIDIA accelerated instances hosted by SageMaker.

NIM, part of the NVIDIA AI Enterprise software platform listed on AWS Marketplace, is a set of inference microservices that bring the power of state-of-the-art LLMs to your applications, providing natural language processing (NLP) and understanding capabilities, whether you’re developing chatbots, summarizing documents, or implementing other NLP-powered applications. You can use pre-built NVIDIA containers to host popular LLMs that are optimized for specific NVIDIA GPUs for quick deployment, or use NIM tools to create your own containers.

In this post, we provide a high-level introduction to NIM and show how you can use it with SageMaker.

An introduction to NVIDIA NIM

NIM provides optimized and pre-generated engines for a variety of popular models for inference. These microservices support a variety of LLMs, such as Llama 2 (7B, 13B, and 70B), Mistral-7B-Instruct, Mixtral-8x7B, NVIDIA Nemotron-3 22B Persona, and Code Llama 70B, out of the box using pre-built NVIDIA TensorRT engines tailored for specific NVIDIA GPUs for maximum performance and utilization. These models are curated with the optimal hyperparameters for model-hosting performance for deploying applications with ease.

If your model is not in NVIDIA’s set of curated models, NIM offers essential utilities such as the Model Repo Generator, which facilitates the creation of a TensorRT-LLM-accelerated engine and a NIM-format model directory through a straightforward YAML file. Furthermore, an integrated community backend of vLLM provides support for cutting-edge models and emerging features that may not have been seamlessly integrated into the TensorRT-LLM-optimized stack.

In addition to creating optimized LLMs for inference, NIM provides advanced hosting technologies such as optimized scheduling techniques like in-flight batching, which can break down the overall text generation process for an LLM into multiple iterations on the model. With in-flight batching, rather than waiting for the whole batch to finish before moving on to the next set of requests, the NIM runtime immediately evicts finished sequences from the batch. The runtime then begins running new requests while other requests are still in flight, making the best use of your compute instances and GPUs.
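The NIM runtime itself is not shown in this post, but the scheduling idea can be illustrated with a toy simulation, assuming a hypothetical `generate_one_token` step standing in for the model: finished sequences leave the batch immediately and waiting requests take their slots.

```python
import random
from collections import deque

def generate_one_token(request):
    """Stand-in for one decoding iteration; returns True when the request is finished."""
    request["tokens"] += 1
    return request["tokens"] >= request["target_len"]

def in_flight_batching(requests, max_batch_size=4):
    waiting = deque(requests)
    active, completed = [], []
    while waiting or active:
        # Admit new requests into any free batch slots instead of waiting for the batch to drain.
        while waiting and len(active) < max_batch_size:
            active.append(waiting.popleft())
        # One decoding iteration across the whole batch.
        still_running = []
        for req in active:
            if generate_one_token(req):
                completed.append(req)  # finished sequence is evicted immediately, freeing its slot
            else:
                still_running.append(req)
        active = still_running
    return completed

requests = [{"id": i, "tokens": 0, "target_len": random.randint(3, 10)} for i in range(8)]
print([r["id"] for r in in_flight_batching(requests)])
```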

Deploying NIM on SageMaker

NIM integrates with SageMaker, allowing you to host your LLMs with performance and cost optimization while benefiting from the capabilities of SageMaker. When you use NIM on SageMaker, you can use capabilities such as scaling out the number of instances to host your model, performing blue/green deployments, and evaluating workloads using shadow testing—all with best-in-class observability and monitoring with Amazon CloudWatch.
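As a minimal sketch of that hosting pattern, the snippet below wraps a container image in a SageMaker Model and deploys it to a GPU endpoint. The container URI, environment variable and endpoint name are placeholders rather than published NIM values.

```python
import sagemaker
from sagemaker import Model
from sagemaker.predictor import Predictor

role = sagemaker.get_execution_role()
session = sagemaker.Session()

# Placeholder values -- substitute the NIM container image and settings for your account.
nim_image_uri = "<account>.dkr.ecr.<region>.amazonaws.com/<nim-llm-image>:latest"

model = Model(
    image_uri=nim_image_uri,
    role=role,
    env={"NIM_MODEL_NAME": "llama2-7b"},  # hypothetical environment variable
    sagemaker_session=session,
)

model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",  # NVIDIA GPU instance hosted by SageMaker
    endpoint_name="nim-llm-endpoint",
)

# Invoke the endpoint through a generic Predictor once deployment finishes.
predictor = Predictor(endpoint_name="nim-llm-endpoint", sagemaker_session=session)
```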

Conclusion

Using NIM to deploy optimized LLMs can be a great option for both performance and cost. It also helps make deploying LLMs effortless. In the future, NIM will also allow for Parameter-Efficient Fine-Tuning (PEFT) customization methods like LoRA and P-tuning, and plans to broaden model support through its Triton Inference Server, TensorRT-LLM, and vLLM backends.

We encourage you to learn more about NVIDIA microservices and how to deploy your LLMs using SageMaker and try out the benefits available to you. NIM is available as a paid offering as part of the NVIDIA AI Enterprise software subscription available on AWS Marketplace.

In the near future, we will post an in-depth guide for NIM on SageMaker.


About the authors

James Park is a Solutions Architect at Amazon Web Services. He works with Amazon.com to design, build, and deploy technology solutions on AWS, and has a particular interest in AI and machine learning. In his spare time he enjoys seeking out new cultures, new experiences, and staying up to date with the latest technology trends. You can find him on LinkedIn.

Saurabh Trikande is a Senior Product Manager for Amazon SageMaker Inference. He is passionate about working with customers and is motivated by the goal of democratizing machine learning. He focuses on core challenges related to deploying complex ML applications, multi-tenant ML models, cost optimizations, and making deployment of deep learning models more accessible. In his spare time, Saurabh enjoys hiking, learning about innovative technologies, following TechCrunch, and spending time with his family.

Qing Lan is a Software Development Engineer at AWS. He has worked on several challenging products at Amazon, including high-performance ML inference solutions and a high-performance logging system. Qing’s team successfully launched the first billion-parameter model in Amazon Advertising with very low latency requirements. Qing has in-depth knowledge of infrastructure optimization and deep learning acceleration.

Nikhil Kulkarni is a software developer with AWS Machine Learning, focusing on making machine learning workloads more performant on the cloud, and is a co-creator of AWS Deep Learning Containers for training and inference. He’s passionate about distributed Deep Learning Systems. Outside of work, he enjoys reading books, fiddling with the guitar, and making pizza.

Harish Tummalacherla is a Software Engineer with the Deep Learning Performance team at SageMaker. He works on performance engineering for serving large language models efficiently on SageMaker. In his spare time, he enjoys running, cycling and ski mountaineering.

Eliuth Triana Isaza is a Developer Relations Manager at NVIDIA, empowering Amazon’s AI MLOps and DevOps teams, scientists, and AWS technical experts to master the NVIDIA computing stack for accelerating and optimizing generative AI foundation models, spanning data curation, GPU training, model inference and production deployment on AWS GPU instances. In addition, Eliuth is a passionate mountain biker, skier, tennis and poker player.

Jiahong Liu is a Solution Architect on the Cloud Service Provider team at NVIDIA. He assists clients in adopting machine learning and AI solutions that leverage NVIDIA accelerated computing to address their training and inference challenges. In his leisure time, he enjoys origami, DIY projects, and playing basketball.

Kshitiz Gupta is a Solutions Architect at NVIDIA. He enjoys educating cloud customers about the GPU AI technologies NVIDIA has to offer and assisting them with accelerating their machine learning and deep learning applications. Outside of work, he enjoys running, hiking and wildlife watching.

Read More

MELON: Reconstructing 3D objects from images with unknown poses

A person’s prior experience and understanding of the world generally enable them to easily infer what an object looks like as a whole, even when looking at only a few 2D pictures of it. Yet the capacity for a computer to reconstruct the shape of an object in 3D given only a few images has remained a difficult algorithmic problem for years. This fundamental computer vision task has applications ranging from the creation of e-commerce 3D models to autonomous vehicle navigation.

A key part of the problem is how to determine the exact positions from which images were taken, known as pose inference. If camera poses are known, a range of successful techniques — such as neural radiance fields (NeRF) or 3D Gaussian Splatting — can reconstruct an object in 3D. But if these poses are not available, then we face a difficult “chicken and egg” problem where we could determine the poses if we knew the 3D object, but we can’t reconstruct the 3D object until we know the camera poses. The problem is made harder by pseudo-symmetries — i.e., many objects look similar when viewed from different angles. For example, square objects like a chair tend to look similar after every 90° of rotation. Pseudo-symmetries of an object can be revealed by rendering it on a turntable from various angles and plotting its photometric self-similarity map.
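As a rough sketch of how such a self-similarity map can be computed, assuming a stack of turntable renders is already available (the random images below are placeholders for real renders):

```python
import numpy as np

def self_similarity_map(renders):
    """renders: array of shape (K, H, W, 3), one RGB rendering per azimuthal angle."""
    k = renders.shape[0]
    flat = renders.reshape(k, -1).astype(np.float32)
    sim = np.zeros((k, k))
    for i in range(k):
        # Average L2 distance between the rendering at angle i and every other angle.
        sim[i] = np.sqrt(((flat - flat[i]) ** 2).mean(axis=1))
    return sim

# Hypothetical stack of 36 turntable renders at 10-degree increments.
renders = np.random.rand(36, 64, 64, 3)
sim = self_similarity_map(renders)
print(sim.shape)  # (36, 36); low off-diagonal values reveal pseudo-symmetries
```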

Self-Similarity map of a toy truck model. Left: The model is rendered on a turntable from various azimuthal angles, θ. Right: The average L2 RGB similarity of a rendering from θ with that of θ*. The pseudo-similarities are indicated by the dashed red lines.

The diagram above only visualizes one dimension of rotation. It becomes even more complex (and difficult to visualize) when introducing more degrees of freedom. Pseudo-symmetries make the problem ill-posed, with naïve approaches often converging to local minima. In practice, such an approach might mistake the back view as the front view of an object, because they share a similar silhouette. Previous techniques (such as BARF or SAMURAI) side-step this problem by relying on an initial pose estimate that starts close to the global minimum. But how can we approach this if those aren’t available?

Methods such as GNeRF and VMRF leverage generative adversarial networks (GANs) to overcome the problem. These techniques have the ability to artificially “amplify” a limited number of training views, aiding reconstruction. GAN techniques, however, often have complex, sometimes unstable, training processes, making robust and reliable convergence difficult to achieve in practice. A range of other successful methods, such as SparsePose or RUST, can infer poses from a limited number of views, but require pre-training on a large dataset of posed images, which aren’t always available, and can suffer from “domain-gap” issues when inferring poses for different types of images.

In “MELON: NeRF with Unposed Images in SO(3)”, spotlighted at 3DV 2024, we present a technique that can determine object-centric camera poses entirely from scratch while reconstructing the object in 3D. MELON (Modulo Equivalent Latent Optimization of NeRF) is one of the first techniques that can do this without initial pose camera estimates, complex training schemes or pre-training on labeled data. MELON is a relatively simple technique that can easily be integrated into existing NeRF methods. We demonstrate that MELON can reconstruct a NeRF from unposed images with state-of-the-art accuracy while requiring as few as 4–6 images of an object.

MELON

We leverage two key techniques to aid convergence of this ill-posed problem. The first is a very lightweight, dynamically trained convolutional neural network (CNN) encoder that regresses camera poses from training images. We pass a downscaled training image to a four-layer CNN that infers the camera pose. This CNN is initialized from noise and requires no pre-training. Its capacity is so small that it forces similar-looking images to similar poses, providing an implicit regularization that greatly aids convergence.

The second technique is a modulo loss that simultaneously considers pseudo-symmetries of an object. We render the object from a fixed set of viewpoints for each training image, backpropagating the loss only through the view that best fits the training image. This effectively considers the plausibility of multiple views for each image. In practice, we find N=2 views (viewing an object from the other side) is all that’s required in most cases, but we sometimes get better results with N=4 for square objects.
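A minimal PyTorch-style sketch of that modulo loss follows; the pose duplication and the toy differentiable renderer are illustrative stand-ins, not the paper's implementation.

```python
import torch

def modulo_loss(render_fn, image, base_pose, n_views=2):
    """Render from n_views pseudo-symmetric duplicates of a predicted pose and
    backpropagate only through the view that best explains the training image."""
    losses = []
    for k in range(n_views):
        # Duplicate the predicted pose by rotating its azimuth by 2*pi*k/n_views.
        angle = base_pose[..., 0] + 2 * torch.pi * k / n_views
        pose_k = torch.stack([angle, base_pose[..., 1]], dim=-1)
        rendering = render_fn(pose_k)
        losses.append(((rendering - image) ** 2).mean())
    # min() keeps the gradient path of the best-fitting view only.
    return torch.stack(losses).min()

# Toy usage with a fake differentiable "renderer".
fake_render = lambda pose: pose.sum() * torch.ones(8, 8, 3)
pose = torch.tensor([0.3, 1.2], requires_grad=True)
target = torch.zeros(8, 8, 3)
loss = modulo_loss(fake_render, target, pose)
loss.backward()
print(loss.item(), pose.grad)
```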

These two techniques are integrated into standard NeRF training, except that instead of fixed camera poses, poses are inferred by the CNN and duplicated by the modulo loss. Photometric gradients back-propagate through the best-fitting cameras into the CNN. We observe that cameras generally converge quickly to globally optimal poses (see animation below). After training of the neural field, MELON can synthesize novel views using standard NeRF rendering methods.

We simplify the problem by using the NeRF-Synthetic dataset, a popular benchmark for NeRF research and common in the pose-inference literature. This synthetic dataset has cameras at precisely fixed distances and a consistent “up” orientation, requiring us to infer only the polar coordinates of the camera. This is the same as an object at the center of a globe with a camera always pointing at it, moving along the surface. We then only need the latitude and longitude (2 degrees of freedom) to specify the camera pose.
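A small sketch of that two-degree-of-freedom parameterization, assuming a fixed camera distance and an object centered at the origin (illustrative only, not the MELON code):

```python
import numpy as np

def camera_from_lat_lon(lat, lon, radius=4.0):
    """Place a camera on a sphere of the given radius, looking at the origin."""
    # Spherical-to-Cartesian conversion for the camera position.
    x = radius * np.cos(lat) * np.cos(lon)
    y = radius * np.cos(lat) * np.sin(lon)
    z = radius * np.sin(lat)
    position = np.array([x, y, z])

    # Build a look-at rotation: forward points at the object, world up is +z.
    forward = -position / np.linalg.norm(position)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    return position, np.stack([right, up, -forward], axis=1)

pos, rot = camera_from_lat_lon(np.radians(30.0), np.radians(120.0))
print(pos, rot.shape)  # camera center and a 3x3 camera-to-world rotation
```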

MELON uses a dynamically trained lightweight CNN encoder that predicts a pose for each image. Predicted poses are replicated by the modulo loss, which only penalizes the smallest L2 distance from the ground truth color. At evaluation time, the neural field can be used to generate novel views.

Results

We compute two key metrics to evaluate MELON’s performance on the NeRF Synthetic dataset. The error in orientation between the ground truth and inferred poses can be quantified as a single angular error, averaged across all training images, which we call the pose error. We then test the accuracy of MELON’s rendered objects from novel views by measuring the peak signal-to-noise ratio (PSNR) against held-out test views. We see that MELON quickly converges to the approximate poses of most cameras within the first 1,000 steps of training, and achieves a competitive PSNR of 27.5 dB after 50k steps.
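For reference, both metrics can be computed along the following lines (hypothetical inputs, not the paper's evaluation code); the pose error is this angular error averaged over all training images.

```python
import numpy as np

def angular_error_deg(r_pred, r_gt):
    """Geodesic angle between two 3x3 rotation matrices, in degrees."""
    cos_angle = (np.trace(r_pred.T @ r_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio between a rendered view and a held-out test view."""
    mse = np.mean((rendered - reference) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy usage: identity vs. a 5-degree rotation about z, and two random images.
theta = np.radians(5.0)
r_small = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
print(angular_error_deg(np.eye(3), r_small))            # ~5 degrees
print(psnr(np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)))
```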

Convergence of MELON on a toy truck model during optimization. Left: Rendering of the NeRF. Right: Polar plot of predicted (blue x), and ground truth (red dot) cameras.

MELON achieves similar results for other scenes in the NeRF Synthetic dataset.

Reconstruction quality comparison between ground-truth (GT) and MELON on NeRF-Synthetic scenes after 100k training steps.

Noisy images

MELON also works well when performing novel view synthesis from extremely noisy, unposed images. We add varying amounts, σ, of white Gaussian noise to the training images. For example, the object in σ=1.0 below is impossible to make out, yet MELON can determine the pose and generate novel views of the object.

Novel view synthesis from noisy unposed 128×128 images. Top: Example of noise level present in training views. Bottom: Reconstructed model from noisy training views and mean angular pose error.

This perhaps shouldn’t be too surprising, given that techniques like RawNeRF have demonstrated NeRF’s excellent de-noising capabilities with known camera poses. The fact that MELON works so robustly for noisy images with unknown camera poses was unexpected.

Conclusion

We present MELON, a technique that can determine object-centric camera poses to reconstruct objects in 3D without the need for approximate pose initializations, complex GAN training schemes or pre-training on labeled data. MELON is a relatively simple technique that can easily be integrated into existing NeRF methods. Though we have only demonstrated MELON on synthetic images, we are adapting our technique to work in real-world conditions. See the paper and MELON site to learn more.

Acknowledgements

We would like to thank our paper co-authors Axel Levy, Matan Sela, and Gordon Wetzstein, as well as Florian Schroff and Hartwig Adam for continuous help in building this technology. We also thank Matthew Brown, Ricardo Martin-Brualla and Frederic Poitevin for their helpful feedback on the paper draft. We also acknowledge the use of the computational resources at the SLAC Shared Scientific Data Facility (SDF).

Read More

Fine-tune Code Llama on Amazon SageMaker JumpStart

Today, we are excited to announce the capability to fine-tune Code Llama models by Meta using Amazon SageMaker JumpStart. The Code Llama family of large language models (LLMs) is a collection of pre-trained and fine-tuned code generation models ranging in scale from 7 billion to 70 billion parameters. Fine-tuned Code Llama models provide better accuracy and explainability over the base Code Llama models, as evident from their testing against the HumanEval and MBPP datasets. You can fine-tune and deploy Code Llama models with SageMaker JumpStart using the Amazon SageMaker Studio UI with a few clicks or using the SageMaker Python SDK. Fine-tuning of Llama models is based on the scripts provided in the llama-recipes GitHub repo from Meta, using PyTorch FSDP, PEFT/LoRA, and Int8 quantization techniques.

In this post, we walk through how to fine-tune Code Llama pre-trained models via SageMaker JumpStart through a one-click UI and SDK experience available in the following GitHub repository.

What is SageMaker JumpStart

With SageMaker JumpStart, machine learning (ML) practitioners can choose from a broad selection of publicly available foundation models. ML practitioners can deploy foundation models to dedicated Amazon SageMaker instances from a network isolated environment and customize models using SageMaker for model training and deployment.

What is Code Llama

Code Llama is a code-specialized version of Llama 2 that was created by further training Llama 2 on its code-specific datasets and sampling more data from that same dataset for longer. Code Llama features enhanced coding capabilities. It can generate code and natural language about code, from both code and natural language prompts (for example, “Write me a function that outputs the Fibonacci sequence”). You can also use it for code completion and debugging. It supports many of the most popular programming languages used today, including Python, C++, Java, PHP, TypeScript (JavaScript), C#, Bash, and more.

Why fine-tune Code Llama models

Meta published Code Llama performance benchmarks on HumanEval and MBPP for common coding languages such as Python, Java, and JavaScript. The performance of Code Llama Python models on HumanEval varied across coding languages and tasks, ranging from 38% for the 7B Python model to 57% for the 70B Python model. In addition, Code Llama models fine-tuned on the SQL programming language have shown better results, as evident in SQL evaluation benchmarks. These published benchmarks highlight the potential benefits of fine-tuning Code Llama models, enabling better performance, customization, and adaptation to specific coding domains and tasks.

No-code fine-tuning via the SageMaker Studio UI

To start fine-tuning your Llama models using SageMaker Studio, complete the following steps:

  1. On the SageMaker Studio console, choose JumpStart in the navigation pane.

You will find listings of over 350 models, ranging from open-source to proprietary models.

  2. Search for Code Llama models.

If you don’t see Code Llama models, you can update your SageMaker Studio version by shutting down and restarting. For more information about version updates, refer to Shut down and Update Studio Apps. You can also find other model variants by choosing Explore all Code Generation Models or searching for Code Llama in the search box.

SageMaker JumpStart currently supports instruction fine-tuning for Code Llama models. The following screenshot shows the fine-tuning page for the Code Llama 2 70B model.

  3. For Training dataset location, you can point to the Amazon Simple Storage Service (Amazon S3) bucket containing the training and validation datasets for fine-tuning.
  4. Set your deployment configuration, hyperparameters, and security settings for fine-tuning.
  5. Choose Train to start the fine-tuning job on a SageMaker ML instance.

We discuss the dataset format you need to prepare for instruction fine-tuning in the next section.

  6. After the model is fine-tuned, you can deploy it using the model page on SageMaker JumpStart.

The option to deploy the fine-tuned model will appear when fine-tuning is finished, as shown in the following screenshot.

Fine-tune via the SageMaker Python SDK

In this section, we demonstrate how to fine-tune Code Llama models using the SageMaker Python SDK on an instruction-formatted dataset. Specifically, the model is fine-tuned for a set of natural language processing (NLP) tasks described using instructions. This helps improve the model’s performance for unseen tasks with zero-shot prompts.

Complete the following steps to complete your fine-tuning job. You can get the entire fine-tuning code from the GitHub repository.

First, let’s look at the dataset format required for the instruction fine-tuning. The training data should be formatted in a JSON lines (.jsonl) format, where each line is a dictionary representing a data sample. All training data must be in a single folder. However, it can be saved in multiple .jsonl files. The following is a sample in JSON lines format:

{
    'system_prompt': 'a chat',
    'question': 'Please focus on the efficiency of this problem and provide code in python:\nYou are given two strings `s` and `t` consisting of only lowercase English letters.\n\nReturn _the minimum number of characters that need to be appended to the end of_ `s` _so that_ `t` _becomes a **subsequence** of_ `s`.\n\nA **subsequence** is a string that can be derived from another string by deleting some or no characters without changing the order of the remaining characters.\n\n**Example 1:**\n\n**Input:** s = "coaching ", t = "coding "\n**Output:** 4\n**Explanation:** Append the characters "ding " to the end of s so that s = "coachingding ".\nNow, t is a subsequence of s ( "**co**aching**ding** ").\nIt can be shown that appending any 3 characters to the end of s will never make t a subsequence.\n\n**Example 2:**\n\n**Input:** s = "abcde ", t = "a "\n**Output:** 0\n**Explanation:** t is already a subsequence of s ( "**a**bcde ").\n\n**Example 3:**\n\n**Input:** s = "z ", t = "abcde "\n**Output:** 5\n**Explanation:** Append the characters "abcde " to the end of s so that s = "zabcde ".\nNow, t is a subsequence of s ( "z**abcde** ").\nIt can be shown that appending any 4 characters to the end of s will never make t a subsequence.\n\n**Constraints:**\n\n* `1 <= s.length, t.length <= 105`\n* `s` and `t` consist only of lowercase English letters.',
    'response': '### Explanation\n The algorithm maintains two pointers, `i` and `j`, to iterate through the given strings `s` and `t`. The pointer `i` iterates through the string `s` and the pointer `j` iterates through the string `t`. While iterating through `s`, if we find a character that matches the current character in `t`, we increment the `j` pointer. This is because the matched character can be a part of the subsequence we are looking for. The iteration stops when either `i` reaches the end of `s` or `j` reaches the end of `t`.\n\nAt the end, we subtract the index `j` from the length of `t`. This gives us the minimum number of characters that need to be appended to the end of `s` so that `t` becomes a subsequence of `s`.\n\n### Pseudocode:\n```\nMIN-ADDITION(s, t)\n s_len = LENGTH(s)\n t_len = LENGTH(t)\n i = 1\n j = 1\n WHILE i <= s_len AND j <= t_len\n IF s[i] == t[j]\n j = j + 1\n ENDIF\n i = i + 1\n ENDWHILE\n RETURN t_len - j + 1\nENDMIN-ADDITION\n```\n\n### Code\n```python\ndef min_addition(s: str, t: str) -> int:\n s_len = len(s)\n t_len = len(t)\n i, j = 0, 0\n while i < s_len and j < t_len:\n if s[i] == t[j]:\n j += 1\n i += 1\n return t_len - j\n```\n'
}

The training folder can contain a template.json file describing the input and output formats. The following is an example template:

{
    "prompt": "{system_prompt} ### Input: {question}",
    "completion": "{response}"
}

To match the template, each sample in the JSON lines files must include system_prompt, question, and response fields. In this demonstration, we use the Dolphin Coder dataset from Hugging Face.
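A small sketch of preparing and uploading such a dataset follows; the bucket name, prefix and the single sample record are placeholders.

```python
import json
import boto3

samples = [
    {"system_prompt": "a chat",
     "question": "Write a function that returns the sum of two integers.",
     "response": "def add(a, b):\n    return a + b"},
]

# One or more .jsonl files plus an optional template.json, all under the same prefix.
with open("train.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")

with open("template.json", "w") as f:
    json.dump({"prompt": "{system_prompt} ### Input: {question}",
               "completion": "{response}"}, f)

s3 = boto3.client("s3")
bucket = "your-own-bucket-hosting-training-data"  # placeholder
for name in ("train.jsonl", "template.json"):
    s3.upload_file(name, bucket, f"codellama-finetune/{name}")
```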

After you prepare the dataset and upload it to the S3 bucket, you can start fine-tuning using the following code:

from sagemaker.jumpstart.estimator import JumpStartEstimator

model_id = "meta-textgeneration-llama-codellama-7b" 
model_version = "*"
train_data_location = f"s3://{your_own_bucket_hosting_training_data}/" # training data in s3 bucket

# Illustrative hyperparameters -- see the "Supported hyperparameters for training" section below.
hyperparameters = {"instruction_tuned": "True", "epoch": "5", "max_input_length": "1024"}

estimator = JumpStartEstimator(
    model_id=model_id,
    model_version=model_version,
    hyperparameters=hyperparameters,
    environment={
        "accept_eula": "false"
    },  # please change `accept_eula` to be `true` to accept EULA.
)

estimator.fit({"training": train_data_location})

You can deploy the fine-tuned model directly from the estimator, as shown in the following code. For details, see the notebook in the GitHub repository.

finetuned_predictor = estimator.deploy()

Fine-tuning techniques

Language models such as Llama are more than 10 GB or even 100 GB in size. Fine-tuning such large models requires instances with significantly high CUDA memory. Furthermore, training these models can be very slow due to the size of the model. Therefore, for efficient fine-tuning, we use the following optimizations:

  • Low-Rank Adaptation (LoRA) – This is a type of parameter efficient fine-tuning (PEFT) for efficient fine-tuning of large models. With this method, you freeze the whole model and only add a small set of adjustable parameters or layers into the model. For instance, instead of training all 7 billion parameters for Llama 2 7B, you can fine-tune less than 1% of the parameters. This helps in significant reduction of the memory requirement because you only need to store gradients, optimizer states, and other training-related information for only 1% of the parameters. Furthermore, this helps in reduction of training time as well as the cost. For more details on this method, refer to LoRA: Low-Rank Adaptation of Large Language Models.
  • Int8 quantization – Even with optimizations such as LoRA, models such as Llama 70B are still too big to train. To decrease the memory footprint during training, you can use Int8 quantization during training. Quantization typically reduces the precision of floating point data types. Although this decreases the memory required to store model weights, it degrades the performance due to loss of information. Int8 quantization uses only a quarter of the precision but doesn’t incur degradation of performance because it doesn’t simply drop the bits. It rounds the data from one type to another. To learn about Int8 quantization, refer to LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. A small quantize/dequantize round-trip sketch follows this list.
  • Fully Sharded Data Parallel (FSDP) – This is a type of data-parallel training algorithm that shards the model’s parameters across data parallel workers and can optionally offload part of the training computation to the CPUs. Although the parameters are sharded across different GPUs, computation of each microbatch is local to the GPU worker. It shards parameters more uniformly and achieves optimized performance via communication and computation overlapping during training.
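As a rough illustration of the quantize/dequantize round trip mentioned above, here is a plain absmax Int8 scheme in NumPy (a simplification, not the exact LLM.int8() method used in training):

```python
import numpy as np

def int8_quantize(weights):
    """Absmax quantization: map float weights into the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = int8_quantize(w)
w_hat = int8_dequantize(q, scale)
print("max round-trip error:", np.abs(w - w_hat).max())
print("memory: fp32 =", w.nbytes, "bytes, int8 =", q.nbytes, "bytes")
```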

The following table summarizes the details of each model with different settings.

| Model | Default Setting | LoRA + FSDP | LoRA + No FSDP | Int8 Quantization + LoRA + No FSDP |
|---|---|---|---|---|
| Code Llama 2 7B | LoRA + FSDP | Yes | Yes | Yes |
| Code Llama 2 13B | LoRA + FSDP | Yes | Yes | Yes |
| Code Llama 2 34B | Int8 + LoRA + No FSDP | No | No | Yes |
| Code Llama 2 70B | Int8 + LoRA + No FSDP | No | No | Yes |

Fine-tuning of Llama models is based on scripts provided by the following GitHub repo.

Supported hyperparameters for training

Code Llama 2 fine-tuning supports a number of hyperparameters, each of which can impact the memory requirement, training speed, and performance of the fine-tuned model:

  • epoch – The number of passes that the fine-tuning algorithm takes through the training dataset. Must be an integer greater than 1. Default is 5.
  • learning_rate – The rate at which the model weights are updated after working through each batch of training examples. Must be a positive float greater than 0. Default is 1e-4.
  • instruction_tuned – Whether to instruction-train the model or not. Must be True or False. Default is False.
  • per_device_train_batch_size – The batch size per GPU core/CPU for training. Must be a positive integer. Default is 4.
  • per_device_eval_batch_size – The batch size per GPU core/CPU for evaluation. Must be a positive integer. Default is 1.
  • max_train_samples – For debugging purposes or quicker training, truncate the number of training examples to this value. Value -1 means using all of the training samples. Must be a positive integer or -1. Default is -1.
  • max_val_samples – For debugging purposes or quicker training, truncate the number of validation examples to this value. Value -1 means using all of the validation samples. Must be a positive integer or -1. Default is -1.
  • max_input_length – Maximum total input sequence length after tokenization. Sequences longer than this will be truncated. If -1, max_input_length is set to the minimum of 1024 and the maximum model length defined by the tokenizer. If set to a positive value, max_input_length is set to the minimum of the provided value and the model_max_length defined by the tokenizer. Must be a positive integer or -1. Default is -1.
  • validation_split_ratio – If the validation channel is none, the ratio of the train-validation split from the training data. Must be between 0 and 1. Default is 0.2.
  • train_data_split_seed – If validation data is not present, this fixes the random splitting of the input training data to training and validation data used by the algorithm. Must be an integer. Default is 0.
  • preprocessing_num_workers – The number of processes to use for preprocessing. If None, the main process is used for preprocessing. Default is None.
  • lora_r – Lora R. Must be a positive integer. Default is 8.
  • lora_alpha – Lora Alpha. Must be a positive integer. Default is 32.
  • lora_dropout – Lora Dropout. Must be a positive float between 0 and 1. Default is 0.05.
  • int8_quantization – If True, the model is loaded with 8-bit precision for training. Default for 7B and 13B is False. Default for 70B is True.
  • enable_fsdp – If True, training uses FSDP. Default for 7B and 13B is True. Default for 70B is False. Note that int8_quantization is not supported with FSDP.

When choosing the hyperparameters, consider the following:

  • Setting int8_quantization=True decreases the memory requirement, though it may slow down training.
  • Decreasing per_device_train_batch_size and max_input_length reduces the memory requirement and therefore can be run on smaller instances. However, setting very low values may increase the training time.
  • If you’re not using Int8 quantization (int8_quantization=False), use FSDP (enable_fsdp=True) for faster, more efficient training. Two example configurations follow this list.
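For example, two illustrative hyperparameter configurations reflecting these considerations could be passed to the JumpStartEstimator shown earlier (values are examples only, not recommendations):

```python
# Smaller-memory setup: Int8 quantization, no FSDP (required for the 34B/70B models).
low_memory_hyperparameters = {
    "instruction_tuned": "True",
    "epoch": "3",
    "int8_quantization": "True",
    "enable_fsdp": "False",
    "per_device_train_batch_size": "2",
    "max_input_length": "1024",
}

# Faster setup when memory allows: FSDP without Int8 quantization.
fsdp_hyperparameters = {
    "instruction_tuned": "True",
    "epoch": "3",
    "int8_quantization": "False",
    "enable_fsdp": "True",
    "per_device_train_batch_size": "4",
    "max_input_length": "2048",
}
```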

Supported instance types for training

The following table summarizes the supported instance types for training different models.

| Model | Default Instance Type | Supported Instance Types |
|---|---|---|
| Code Llama 2 7B | ml.g5.12xlarge | ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge, ml.p3dn.24xlarge, ml.g4dn.12xlarge |
| Code Llama 2 13B | ml.g5.12xlarge | ml.g5.24xlarge, ml.g5.48xlarge, ml.p3dn.24xlarge, ml.g4dn.12xlarge |
| Code Llama 2 70B | ml.g5.48xlarge | ml.g5.48xlarge, ml.p4d.24xlarge |

When choosing the instance type, consider the following:

  • G5 instances provide the most efficient training among the instance types supported. Therefore, if you have G5 instances available, you should use them.
  • Training time largely depends on the number of GPUs and the CUDA memory available. Because training on instances with the same number of GPUs (for example, ml.g5.2xlarge and ml.g5.4xlarge) takes roughly the same time, you can use the cheaper instance for training (ml.g5.2xlarge).
  • When using p3 instances, training will be done with 32-bit precision because bfloat16 is not supported on these instances. Therefore, the training job will consume double the amount of CUDA memory when training on p3 instances compared to g5 instances.

To learn about the cost of training per instance, refer to Amazon EC2 G5 Instances.

Evaluation

Evaluation is an important step to assess the performance of fine-tuned models. We present both qualitative and quantitative evaluations to show improvement of fine-tuned models over non-fine-tuned ones. In qualitative evaluation, we show an example response from both fine-tuned and non-fine-tuned models. In quantitative evaluation, we use HumanEval, a test suite developed by OpenAI for testing the ability of models to generate correct and accurate Python code. The HumanEval repository is under MIT license. We fine-tuned Python variants of all Code Llama model sizes (Code Llama Python 7B, 13B, 34B, and 70B) on the Dolphin Coder dataset, and present the evaluation results in the following sections.

Qualitative evaluation

With your fine-tuned model deployed, you can start using the endpoint to generate code. In the following example, we present responses from both the base and fine-tuned Code Llama 34B Python variants on a test sample in the Dolphin Coder dataset:

```python
import json

alpaca_format = """{system_prompt}

### Input:
{question}

### Response:\n"""

system = "You are a coding AI"
prompt = """Please explain the reasoning behind the following solution and 
provide code in python: Given a string `s`, return the longest palindromic 
substring in `s`. **Example 1:** **Input:** s = "babad " **Output:** "bab " 
**Explanation:** "aba " is also a valid answer. **Example 2:** 
**Input:** s = "cbbd " **Output:** "bb " 
**Constraints:** * `1 <= s.length <= 1000` * `s` consist of only digits 
and English letters."""

payload = json.dumps({
    "inputText": alpaca_format.format(system_prompt=system, question=prompt),
    "textGenerationConfig": {
        "maxTokenCount": 2048,
        "temperature": 0.2
    }
})

finetuned_response = finetuned_predictor.predict(payload)
finetuned_response["generated_text"]
```
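
For comparison, the non-fine-tuned response shown below can be obtained by sending the same payload to an endpoint hosting the base model. The pretrained_predictor name here is illustrative; it assumes you deployed the base Code Llama 34B Python model in the same way as the fine-tuned one.

```python
# Illustrative: the same payload, sent to the base (non-fine-tuned) model endpoint.
pretrained_response = pretrained_predictor.predict(payload)
print(pretrained_response["generated_text"])
```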

The fine-tuned Code Llama model, in addition to providing the code for the preceding query, generates a detailed explanation of the approach and pseudocode.

Code Llama 34B Python Non-Fine-Tuned Response:

### Code
```python
def longestPalindrome(s):
    # Fill this in.
    longest = ""
    for i in range(len(s)):
        for j in range(len(s), i, -1):
            if len(longest) >= j-i:
                break
            elif s[i:j] == s[i:j][::-1]:
                longest = s[i:j]
                break
    return longest

print(longestPalindrome("abaxyzzyxf"))
# axyzzyx
```
### Reference:
- [https://www.geeksforgeeks.org/longest-palindrome-substring-set-1/](https://www.geeksforgeeks.org/longest-palindrome-substring-set-1/)

Code Llama 34B Python Fine-Tuned Response

### Explanation
The algorithm uses dynamic programming to find the longest palindromic substring. 
It initializes a 2D boolean table `dp` with the same dimensions as the input string `s`. 
The `dp[i][j]` value is set to `true` if the substring from index `i` to index `j` is a palindrome.

The algorithm then iterates through the string in a nested loop, starting from the first character to the last. 
It checks if the substring from index `i` to index `j` is a palindrome. 
If it is, it sets the `dp[i][j]` value to `true`. 
If the substring is of length 2 or 3, it sets the `dp[i][j]` value to `true` as well.

After the nested loop, the algorithm iterates through the `dp` table to find the longest palindromic substring. 
It keeps track of the maximum length and the starting and ending indices of the substring.

Finally, the algorithm returns the substring from the starting index to the ending index.

### Code
```python
def longestPalindrome(s: str) -> str:
    n = len(s)
    dp = [[False] * n for _ in range(n)]

    for i in range(n):
        dp[i][i] = True
        if i + 1 < n:
            dp[i][i + 1] = s[i] == s[i + 1]

    max_len = 1
    start = 0
    end = 0

    for i in range(n - 1):
        for j in range(i + 1, n):
            if j - i == 1 or dp[i + 1][j - 1]:
                dp[i][j] = s[i] == s[j]
                if j - i + 1 > max_len:
                    max_len = j - i + 1
                    start = i
                    end = j

    return s[start:end + 1]
```

Ground Truth

### Explanation 
1. Initialize `start` and `maxLength` for result substring. 
2. Iterate through the given string `s` using the index `i`. 
3. For each index `i`, create two pointers `l` and `r` starting at `i`. 
4. Check if there's a consecutive sequence of identical characters, increment the right pointer `r` until the end of the sequence is reached. 
5. Update the index `i` to the current value of `r`. 
6. Expand the pointers `l` and `r` outwards to find the longest palindromic substring, checking that characters on both sides are equal. 
7. If the current length of the substring is greater than `maxLength`, update `start` and `maxLength`. 
8. Return the longest palindromic substring using the `start` and `maxLength`. 

### Code 
```python 
def longest_palindromic_substring(s: str) -> str:
    n = len(s)
    if n == 0:
        return ""
    
    start, max_length = 0, 1
    for i in range(n):
        l, r = i, i
        while r < n - 1 and s[r] == s[r + 1]:
            r += 1
            i = r
        while l > 0 and r < n - 1 and s[l - 1] == s[r + 1]:
            l -= 1
            r += 1
        length = r - l + 1
        if length > max_length:
            start, max_length = l, length
    return s[start:start + max_length]
```

Interestingly, our fine-tuned version of Code Llama 34B Python provides a dynamic programming-based solution to the longest palindromic substring, which is different from the solution provided in the ground truth for the selected test example. Our fine-tuned model reasons about and explains the dynamic programming-based solution in detail. On the other hand, the non-fine-tuned model hallucinates a potential output right after the print statement (shown in its response above), because axyzzyx is not the longest palindrome in the given string. In terms of time complexity, the dynamic programming solution is generally better: it runs in O(n^2), where n is the length of the input string, whereas the non-fine-tuned model’s brute-force approach, which reverses and compares every candidate substring, takes up to O(n^3) in the worst case.

This looks promising! Remember, we only fine-tuned the Code Llama Python variant with 10% of the Dolphin Coder dataset. There is a lot more to explore!

Despite the thorough explanation in the response, we still need to examine the correctness of the Python code provided in the solution. Next, we use an evaluation framework called HumanEval to run integration tests on the generated responses from Code Llama to systematically examine their quality.

Quantitative evaluation with HumanEval

HumanEval is an evaluation harness for evaluating an LLM’s problem-solving capabilities on Python-based coding problems, as described in the paper Evaluating Large Language Models Trained on Code. Specifically, it consists of 164 original Python-based programming problems that assess a language model’s ability to generate code based on provided information like function signature, docstring, body, and unit tests.

For each Python-based programming question, we send it to a Code Llama model deployed on a SageMaker endpoint to get k responses. Next, we run each of the k responses on the integration tests in the HumanEval repository. If any of the k responses passes the integration tests, we count that test case as a success; otherwise, it is counted as a failure. We then repeat the process to calculate the ratio of successful cases as the final evaluation score, named pass@k. Following standard practice, we set k to 1 in our evaluation, generating only one response per question and testing whether it passes the integration tests.
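
For reference, the unbiased pass@k estimator from the HumanEval paper can be computed as in the following sketch (generic illustration code, not code from this post). With k = 1 and a single sample per problem, it reduces to the plain fraction of problems whose one generated solution passes its tests.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper.

    n: total number of generated samples for a problem
    c: number of those samples that pass the unit tests
    k: the k in pass@k
    """
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Average pass@k over all problems to get the final score.
scores = [pass_at_k(n=1, c=1, k=1), pass_at_k(n=1, c=0, k=1)]
print(sum(scores) / len(scores))  # 0.5 for one passing and one failing problem
```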

The following is sample code that uses the HumanEval repository. You can access the dataset and generate a single response using a SageMaker endpoint. For details, see the notebook in the GitHub repository.

```python
%pip install human_eval

import json
from human_eval.data import write_jsonl, read_problems
from human_eval.evaluation import evaluate_functional_correctness
from tqdm import tqdm

problems = read_problems()

num_samples_per_task = 1  # value k: number of responses for each question
samples = [
    dict(task_id=task_id, completion=generate_one_completion(problems[task_id]["prompt"]))
    for task_id in tqdm(problems)
    for _ in range(num_samples_per_task)
]
write_jsonl("samples.jsonl", samples)

evaluate_functional_correctness('./samples.jsonl')
```
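
The generate_one_completion helper used above is defined in the accompanying notebook. As a rough sketch, it might look like the following, assuming a SageMaker predictor for the deployed Code Llama endpoint and the inference parameters noted below; the exact payload and response format depend on the model’s serving container.

```python
# Illustrative sketch -- the actual helper is defined in the accompanying notebook.
# It assumes `finetuned_predictor` is the SageMaker predictor for the deployed
# Code Llama endpoint.
def generate_one_completion(prompt: str) -> str:
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 384, "temperature": 0.2},
    }
    response = finetuned_predictor.predict(payload)
    # Depending on the serving container, the response may be a list of dicts
    # or a single dict; extract the generated text accordingly.
    if isinstance(response, list):
        response = response[0]
    return response["generated_text"]
```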

The following table shows the improvements of the fine-tuned Code Llama Python models over the non-fine-tuned models across different model sizes. To ensure correctness, we also deploy the non-fine-tuned Code Llama models in SageMaker endpoints and run them through the HumanEval evaluation. The pass@1 numbers (the first row in the following table) match the reported numbers in the Code Llama research paper. The inference parameters are consistently set as "parameters": {"max_new_tokens": 384, "temperature": 0.2}.

As we can see from the results, all the fine-tuned Code Llama Python variants show a significant improvement over the non-fine-tuned models. In particular, fine-tuned Code Llama Python 70B outperforms the non-fine-tuned model by approximately 12 percentage points.

| Model | 7B Python | 13B Python | 34B | 34B Python | 70B Python |
| --- | --- | --- | --- | --- | --- |
| Pre-trained model performance (pass@1) | 38.4 | 43.3 | 48.8 | 53.7 | 57.3 |
| Fine-tuned model performance (pass@1) | 45.12 | 45.12 | 59.1 | 61.5 | 69.5 |

Now you can try fine-tuning Code Llama models on your own dataset.

Clean up

If you decide that you no longer want to keep the SageMaker endpoint running, you can delete it using AWS SDK for Python (Boto3), AWS Command Line Interface (AWS CLI), or SageMaker console. For more information, see Delete Endpoints and Resources. Additionally, you can shut down the SageMaker Studio resources that are no longer required.
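
For example, with the SageMaker Python SDK predictor objects used earlier in this post, cleanup can be done with a couple of calls (a sketch that assumes the predictors are still in scope):

```python
# Delete the model and endpoint created for the fine-tuned Code Llama model.
finetuned_predictor.delete_model()
finetuned_predictor.delete_endpoint()

# If you also deployed the base model for comparison, clean it up as well.
# pretrained_predictor.delete_model()
# pretrained_predictor.delete_endpoint()
```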

Conclusion

In this post, we discussed fine-tuning Meta’s Code Llama 2 models using SageMaker JumpStart. We showed that you can use the SageMaker JumpStart console in SageMaker Studio or the SageMaker Python SDK to fine-tune and deploy these models. We also discussed the fine-tuning technique, instance types, and supported hyperparameters. In addition, we outlined recommendations for optimized training based on various tests we carried out. As the evaluation results show, fine-tuning the Code Llama Python models on the Dolphin Coder dataset improves code generation performance compared to the non-fine-tuned models. As a next step, you can try fine-tuning these models on your own dataset using the code provided in the GitHub repository to test and benchmark the results for your use cases.


About the Authors

Dr. Xin Huang is a Senior Applied Scientist for Amazon SageMaker JumpStart and Amazon SageMaker built-in algorithms. He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He has published many papers in ACL, ICDM, KDD conferences, and Royal Statistical Society: Series A.

Vishaal Yalamanchali is a Startup Solutions Architect working with early-stage generative AI, robotics, and autonomous vehicle companies. Vishaal works with his customers to deliver cutting-edge ML solutions and is personally interested in reinforcement learning, LLM evaluation, and code generation. Prior to AWS, Vishaal was an undergraduate at UCI, focused on bioinformatics and intelligent systems.

Meenakshisundaram Thandavarayan works for AWS as an AI/ML Specialist. He has a passion to design, create, and promote human-centered data and analytics experiences. Meena focuses on developing sustainable systems that deliver measurable, competitive advantages for strategic customers of AWS. Meena is a connector and design thinker, and strives to drive businesses to new ways of working through innovation, incubation, and democratization.

Dr. Ashish Khetan is a Senior Applied Scientist with Amazon SageMaker built-in algorithms and helps develop machine learning algorithms. He got his PhD from University of Illinois Urbana-Champaign. He is an active researcher in machine learning and statistical inference, and has published many papers in NeurIPS, ICML, ICLR, JMLR, ACL, and EMNLP conferences.

Read More

Exploring how context, culture, and character matter in avatar research

Exploring how context, culture, and character matter in avatar research

This research paper was presented at the IEEE VR Workshop Series on Animation in Virtual and Augmented Environments (ANIVAE 2024), the premier series on 3D content creation for simulated training in extended reality.


Face-to-face communication is changing, moving beyond physical interaction to include video conferencing and AR/VR platforms, where the participants are represented by avatars. Sophisticated avatars, animated through motion tracking, can realistically portray their human counterparts, but they can also suffer from noise, such as jitter and distortion, reducing their realism. Advances in motion-capture technology aim to reduce such issues, but they come with higher development costs and require additional time due to the need for advanced components. While some noise is inevitable, it’s important to determine acceptable types and levels to efficiently develop and introduce AR/VR devices and avatars to the market. Additionally, understanding how noise impacts avatar-based communication is essential for creating more inclusive avatars that accurately represent diverse cultures and abilities, enhancing the user experience.

In our paper, “Ecological Validity and the Evaluation of Avatar Facial Animation Noise,” presented at ANIVAE 2024, we explore the challenge of evaluating avatar noise without a standardized approach. Traditional methods, which present participants with isolated facial animation noise to gauge perception thresholds, fall short of reflecting real-life avatar interactions. Our approach emphasizes ecological validity—the extent to which experiments mimic real-world conditions—as central in assessing avatar noise. We discovered this significantly influences participants’ response to avatars, highlighting the impact of context on noise perception. Our goal is to improve avatar acceptance, inclusivity, and communication by developing noise evaluation methods that better represent actual experiences. 

Seeing the big picture  

To set up our study, we animated two avatars using motion capture, as depicted in Figure 1 (A). We recorded the performance of two professional actors enacting a scene between an architect and a client discussing home renovations and examining a 3D model of the proposed design. We used two proprietary characters for the avatars, whose faces were animated with 91 expression blendshapes. This allowed for a broad range of facial expressions and subtle variations in emotions, contributing to a more realistic animation. To examine different dynamics, we created six variations of the scene, changing the characters’ gender, role, and whether they agreed on the renovation plan.

Figure 1: A. Motion capture of a social interaction scenario for the experiment. B. The motion capture was remapped to stylized avatars. C. Participants experienced the scene wearing a HoloLens 2 and responded to questions on a tablet app. D. The avatars’ facial features were degraded with different types of animation noises of varying severity.

Fifty-six participants engaged in two experiments to evaluate the impact of noise on avatar facial animation. The first experiment had low ecological validity. Participants viewed fragmented clips of dialogue through a Microsoft HoloLens 2 device and used a slider to adjust any noise to an acceptable level. The second experiment featured high ecological validity, showing the scene in its full social context. Here, participants used a HoloLens 2 to judge the noise in facial expressions as either “appropriate” or “inappropriate” for the conversation. In contrast to the first experiment, this method considered the social aspects of context, culture, and character. 

Results indicate that noise was less distracting when participants viewed the scene in its entirety, revealing a greater tolerance for noise in high ecological validity scenarios. Isolated clips, on the other hand, led to greater annoyance with facial animation noise, suggesting the importance of social context over hyper-realistic animation. 

Cultural observations showed that noise perception was influenced by implicit cultural norms, particularly around gender roles and agreement levels. For example, in the second experiment, where participants viewed the conversation within its greater social context (high ecological validity), noise was deemed “appropriate” when the female architect agreed with the male client and “inappropriate” when she disagreed, revealing potential gender biases not observed in reversed gender roles. These findings emphasize the importance of applying high ecological validity in experiments to uncover socio-cultural influences on avatar perception. They also underscore the need to carefully consider context and cultural dynamics in avatar design. 

Finally, we explored the character trait of empathy. Participants with lower empathy scores were more critical of noise in context-rich scenarios. This indicates that experiments focusing solely on low ecological validity might overlook important insights on how empathy influences responses to avatar facial animation noise.


Avatars need to be studied in realistic situations 

When people communicate, they engage in a complex process influenced by environment, cultural background, and the nonverbal cues they perceive and interpret. By prioritizing high ecological validity in studies on avatar perception, researchers can uncover these socio-cultural influences and trust that their findings are relevant and applicable to real-life interactions within digital spaces. 

Our research examines how different combinations of demographic characteristics change the way people react to avatars, and we hope to encourage more inclusivity in avatar design. It’s essential to have an established set of guidelines to achieve this goal, and this work is one step in that direction. While our study’s scope is limited, its methodology can be applied broadly across different devices and settings.

Acknowledgements

We would like to thank Ken Jakubzak, James Clemoes, Cornelia Treptow, Michaela Porubanova, Kerry Read, Daniel McDuff, Marina Kuznetsova and Mathew Lamb for their research collaboration. We would also like to thank Shawn Bruner for providing the characters for the study and Panagiotis Giannakopoulos for leading the animation and motion capture pipelines.


Read More