Run AI on Your PC? GeForce Users Are Ahead of the Curve

Gone are the days when AI was the domain of sprawling data centers or elite researchers.

For GeForce RTX users, AI is now running on your PC. It’s personal, enhancing every keystroke, every frame and every moment.

Gamers are already enjoying the benefits of AI in over 300 RTX games. Meanwhile, content creators have access to over 100 RTX creative and design apps, with AI enhancing everything from video and photo editing to asset generation.

And for GeForce enthusiasts, it’s just the beginning. RTX is the platform for today and the accelerator that will power the AI of tomorrow.

How Did AI and Gaming Converge?

NVIDIA pioneered the integration of AI and gaming with DLSS, an AI technique that automatically generates pixels in video games and has increased frame rates by up to 4x.

And with the recent introduction of DLSS 3.5, NVIDIA has enhanced the visual quality in some of the world’s top titles, setting a new standard for visually richer and more immersive gameplay.

But NVIDIA’s AI integration doesn’t stop there. Tools like RTX Remix empower game modders to remaster classic content using high-quality textures and materials generated by AI.

With NVIDIA ACE for Games, AI-powered avatars come to life on the PC, marking a new era of immersive gaming.

How Are RTX and AI Powering Creators?

Creators use AI to imagine new concepts, automate tedious tasks and create stunning works of art. They rely on RTX because it accelerates top creator applications, including the world’s most popular photo editing, video editing, broadcast and 3D apps.

With over 100 RTX apps now AI-enabled, creators can get more done and deliver incredible results.

The performance metrics are staggering.

RTX GPUs boost AI image generation speeds in tools like Stable Diffusion by 4.5x compared to competing processors. Meanwhile, in 3D rendering, Blender experiences a speed increase of 5.4x.

AI-powered video editing in DaVinci Resolve runs twice as fast, and photo editing tasks in Adobe Photoshop run 3x faster.

In select workflows, NVIDIA RTX AI technology delivers speeds up to 10x faster than competing solutions.

NVIDIA provides various AI tools, apps and software development kits designed specifically for creators. This includes exclusive offerings like NVIDIA Omniverse, OptiX Denoiser, NVIDIA Canvas, NVIDIA Broadcast and NVIDIA DLSS.

How Is AI Changing Our Digital Experience Beyond Chatbots?

Beyond gaming and content creation, RTX GPUs bring AI to all types of users.

Add Microsoft to the equation and 100 million RTX-powered Windows 11 PCs and workstations are already AI-ready.

The complementary technologies behind the Windows platform and NVIDIA’s dynamic AI hardware and software stack are the driving forces that power hundreds of Windows apps and games.

  • Gamers: RTX-accelerated AI has been adopted in more than 300 games, increasing frame rates and enhancing visual fidelity.
  • Creators: More than 100 AI-enabled creative applications benefit from RTX acceleration — including the top apps for image generation, video editing, photo editing and 3D. AI helps artists work faster, automate tedious tasks and expand the boundaries of creative expression.
  • Video Streamers: RTX Video Super Resolution uses AI to increase the resolution and improve the quality of streamed video, elevating the home video experience.
  • Office Workers and Students: Teleconferencing and remote learning get an RTX boost with NVIDIA Broadcast. AI improves video and audio quality and adds unique effects to make virtual interactions smoother and collaboration more efficient.
  • Developers: With NVIDIA’s world-leading AI development platform and CUDA on Windows Subsystem for Linux, a technology developed by Microsoft and NVIDIA, developers can do early AI development and training directly on Windows, then easily migrate to servers for large training runs.

What Are the Emerging AI Applications for RTX PCs?

Generative AI enables users to quickly generate new content based on a variety of inputs — text, images, sounds, animation, 3D models or other types of data — bringing easy-to-use AI to more PCs.

Large language models (LLMs) are at the heart of many of these use cases.

Perhaps the best known is ChatGPT, a cloud-based chatbot and one of the fastest-growing applications in history.

Many of these LLMs now run directly on PC, enabling new end-user applications like automatically drafting documents and emails, summarizing web content, extracting insights from spreadsheet data, planning travel, and powering general-purpose AI assistants.

LLMs are some of the most demanding PC workloads, requiring a powerful AI accelerator — like an RTX GPU.
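To get a feel for why, here’s a rough back-of-the-envelope sketch (the model size and precisions are illustrative assumptions, not measurements of any specific model): just holding the weights of a 7-billion-parameter model in memory takes several gigabytes, before accounting for activations or context.

```python
# Rough sketch of LLM memory demands (illustrative numbers only).

def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold model weights, in GB."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 7B-parameter model stored at FP16 (2 bytes per parameter):
fp16_gb = model_memory_gb(7e9, 2)    # ~14 GB

# The same model quantized to 4 bits (0.5 bytes per parameter):
int4_gb = model_memory_gb(7e9, 0.5)  # ~3.5 GB

print(f"FP16 weights: ~{fp16_gb:.1f} GB, 4-bit weights: ~{int4_gb:.1f} GB")
```

Even aggressively quantized, weights of this size must stream through the processor for every generated token, which is why memory bandwidth and dedicated AI throughput matter so much for local LLMs.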

What Powers the AI Revolution on Our Desktops (and Beyond)?

What’s fueling the PC AI revolution?

Three pillars: lightning-fast graphics processing from GPUs, AI capabilities integral to GeForce and the omnipresent cloud.

Gamers already know all about the parallel processing power of GPUs. But what role did the GPU play in enabling AI in the cloud?

NVIDIA GPUs have transformed cloud services. These advanced systems power everything from voice recognition to autonomous factory operations.

In 2016, NVIDIA hand-delivered to OpenAI the first NVIDIA DGX AI supercomputer — the engine behind the LLM breakthrough powering ChatGPT.

NVIDIA DGX supercomputers, packed with GPUs and used initially as an AI research instrument, are now running 24/7 at businesses worldwide to refine data and process AI. Half of all Fortune 100 companies have installed DGX AI supercomputers.

The cloud, in turn, provides more than just vast quantities of training data for the advanced AI models running on these machines; it also delivers those models’ capabilities to users everywhere.

Why Choose Desktop AI?

But why run AI on your desktop when the cloud seems limitless?

GPU-equipped desktops — where the AI revolution began — are still where the action is.

  • Availability: Whether a gamer or a researcher, everyone needs tools — from games to sophisticated AI models used by wildlife researchers in the field — that can function even when offline.
  • Speed: Some applications need instantaneous results. Cloud latency doesn’t always cut it.
  • Data size: Uploading and downloading large datasets from the cloud can be inefficient and cumbersome.
  • Privacy: Whether you’re a Fortune 500 company or just editing family photos and videos, we all have data we want to keep close to home.

RTX GPUs are based on the same architecture that fuels NVIDIA’s cloud performance. They blend the benefits of running AI locally with access to tools and the performance only NVIDIA can deliver.

NPUs, often called inference accelerators, are now finding their way into modern CPUs, highlighting the growing understanding of AI’s critical role in every application.

While NPUs are designed to offload light AI tasks, NVIDIA GPUs remain unmatched for demanding AI models, delivering 20x to 100x more raw performance.

What’s Next for AI in Our Everyday Lives?

AI isn’t just a trend — it will impact many aspects of our daily lives.

AI functionality will expand as research advances and user expectations will evolve. Keeping up will require GPUs — and a rich software stack built on top of them — that are up to the challenge.

NVIDIA is at the forefront of this transformative era, offering end-to-end optimized development solutions.

NVIDIA provides developers with tools to add more AI features to PCs, enhancing value for users, all powered by RTX.

From gaming innovations with RTX Remix to the NVIDIA NeMo LLM language model for assisting coders, the AI landscape on the PC is rich and expanding.

Whether it’s stunning new gaming content, AI avatars, incredible tools for creators or the next generation of digital assistants, the promise of AI-powered experiences will continuously redefine the standard of personal computing.

Learn more about GeForce’s AI capabilities.

Into the Omniverse: Blender 4.0 Alpha Release Sets Stage for New Era of OpenUSD Artistry

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

For seasoned 3D artists and budding digital creation enthusiasts alike, an alpha version of the popular 3D software Blender is elevating creative journeys.

With the update’s features for intricate shader network creation and enhanced asset-export capabilities, the development community using Blender and the Universal Scene Description framework, aka OpenUSD, is helping to evolve the 3D landscape.

NVIDIA engineers play a key role in enhancing Blender’s OpenUSD capabilities, which also improves how the software works with NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.

A Universal Upgrade for Blender Workflows

With Blender 4.0 Alpha, 3D creators across industries and enterprises can access optimized OpenUSD workflows for various use cases.

For example, Emily Boehmer, a design intern at BMW Group’s Technology Office in Munich, is using the combined power of Omniverse, Blender and Adobe Substance 3D Painter to create realistic, OpenUSD-based assets to train computer vision AI models.

Boehmer worked with her team to create assets for use with SORDI.ai, an AI dataset published by BMW Group that contains over 800,000 photorealistic images.

A clip of an industrial crate virtually “aging.”

USD helped optimize Boehmer’s workflow. “It’s great to see USD support for both Blender and Substance 3D Painter,” she said. “When I create 3D assets using USD, I can be confident that they’ll look and behave as I expect them to in the scenes that they’ll be placed in because I can add physical properties to them.”

Australian animator Marko Matosevic is also harnessing the combined power of Blender, Omniverse and USD in his 3D workflows.

Matosevic began creating tutorials for his YouTube channel, Markom3D, to help artists of all levels. He now shares his vast 3D knowledge with over 77,000 subscribers.

Most recently, Matosevic created a 3D spaceship in Blender that he later enhanced in Omniverse through virtual reality.

Individual creators aren’t the only ones seeing success with Blender and USD. Multimedia entertainment studio Moment Factory creates OpenUSD-based digital twins to simulate its immersive events, including live performances, multimedia shows and interactive installations, in Omniverse before deploying them in the real world.

Moment Factory’s interactive installation at InfoComm 2023.

Team members can work in the digital twin at the same time, including designers using Blender to create and render eye-catching beauty shots to share their creative vision with customers.

See how Moment Factory uses Omniverse, Blender and USD to bring their immersive events to life in their recent livestream.

These 3D workflow enhancements are available to all. Blender users and USD creators, including Boehmer, showcased their unique 3D pipelines on this recent Omniverse community livestream:

New Features Deliver Elevated 3D Experience

The latest USD improvements in Blender are the result of collaboration among many contributors, including AMD, Apple, Unity and NVIDIA, enabled by the Blender Foundation.

For example, hair object support — which improves USD import and export capabilities for digital hair — was added by a Unity software engineer. And a new Python IO callback system — which lets technical artists use Python to access USD application programming interfaces — was developed by a software engineer at NVIDIA, with support from others at Apple and AMD.

NVIDIA engineers are continuing to work on other USD contributions to include in future Blender updates.

Coming soon, the Blender 4.0 Alpha 201.0 Omniverse Connector will offer new features for USD and Omniverse users, including:

  • Universal Material Mapper 2 add-on: This allows for more complex shader networks, or the blending of multiple textures and materials, to be round-tripped between Omniverse apps and Blender through USD.
  • Improved UsdPreviewSurface support and USDZ import/export capabilities: This enables creators to export 3D assets for viewing in AR and VR applications.
  • Generic attribute support: This allows geometry artists to generate vertex colors — red, green or blue values — or other per-vertex (3D point) values and import/export them between Blender and other 3D applications.

Learn more about the Blender updates by watching this tutorial:

Get Plugged Into the Omniverse 

Learn from industry experts on how OpenUSD is enabling custom 3D pipelines, easing 3D tool development and delivering interoperability between 3D applications in sessions from SIGGRAPH 2023, now available on demand.

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Explore the Omniverse ecosystem’s growing catalog of connections, extensions, foundation applications and third-party tools.

Share your Blender and Omniverse work as part of the latest community challenge, #StartToFinish. Use the hashtag to submit a screenshot of a project featuring both its beginning and ending stages for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.

To learn more about how OpenUSD can improve 3D workflows, check out a new video series about the framework. For more resources on OpenUSD, explore the Alliance for OpenUSD forum or visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license for free, or learn how Omniverse Enterprise can connect your team.

Developers can check out these Omniverse resources to begin building on the platform. 

Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram, LinkedIn, Medium, Threads and Twitter.

For more, check out our forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Alex Trevino.

NVIDIA CEO Jensen Huang to Headline AI Summit in Tel Aviv

NVIDIA founder and CEO Jensen Huang will highlight the newest in generative AI and cloud computing at the NVIDIA AI Summit in Tel Aviv from Oct. 15-16.

The two-day summit is set to attract more than 2,500 developers, researchers and decision-makers from across one of the world’s most vibrant technology hubs.

With over 6,000 startups, Israel consistently ranks among the world’s top countries for VC investments per capita. The 2023 Global Startup Ecosystem report places Tel Aviv among the top 5 cities globally for startups.

The summit features more than 60 live sessions led by experts from NVIDIA and the region’s tech leaders, who will dive deep into topics like accelerated computing, robotics, cybersecurity and climate science.

Attendees will be able to network and gain insights from some of NVIDIA’s foremost experts, including Kimberly Powell, vice president and general manager of healthcare; Deepu Talla, vice president and general manager of embedded and edge computing; Gilad Shainer, senior vice president of networking and HPC; and Gal Chechik, senior director and head of the Israel AI Research Center.

Key events and features of the summit include:

  • Livestream: The keynote by Huang will take place Monday, Oct. 16, at 10 a.m. Israel time (11 p.m. Pacific) and will be available for livestreaming, with on-demand access to follow.
  • Ecosystem exhibition: An exhibition space at the Summit will showcase NVIDIA’s tech demos, paired with contributions from partners and emerging startups from the NVIDIA Inception program.
  • Deep dive into AI: The first day is dedicated to intensive learning sessions hosted by the NVIDIA Deep Learning Institute. Workshops encompass topics like “Fundamentals of Deep Learning” and “Building AI-Based Cybersecurity Pipelines,” among a range of other topics. Edge AI & Robotics Developer Day activities will explore innovations in AI and the NVIDIA Jetson Orin platform.
  • Multitrack sessions: The second day will include multiple tracks, covering areas such as generative AI and LLMs, AI in healthcare, networking and developer tools and NVIDIA Omniverse.

Learn more at https://www.nvidia.com/en-il/ai-summit-israel/.

Featured image credit: Gady Munz via the PikiWiki – Israel free image collection project

Cash In: ‘PAYDAY 3’ Streams on GeForce NOW

Time to get the gang back together — PAYDAY 3 streams on GeForce NOW this week.

It’s one of 11 titles joining the cloud this week, including Party Animals.

The Perfect Heist

PAYDAY 3 on GeForce NOW
Not pictured: the crew member in a fuzzy bunny mask. He stayed home.

PAYDAY 3 is the highly anticipated sequel to one of the world’s most popular co-op shooters. Step out of retirement and back into the life of crime in the shoes of the Payday Gang — who bring the envy of their peers and the nightmare of law enforcement wherever they go. Set several years after the end of the crew’s reign of terror over Washington, D.C., the game reassembles the group to deal with the threat that’s roused them out of early retirement.

Upgrade to a GeForce NOW Ultimate membership to pull off every heist at the highest quality. Ultimate members can stream on GeForce RTX 4080 rigs with support for gameplay at up to 4K and 120 frames per second on PCs and Macs, providing a gaming experience so seamless it would be a crime to stream on anything less.

Game On

Party Animals on GeForce NOW
Paw it out with friends on nearly any device.

There’s always more action every GFN Thursday. Here’s the full list of this week’s GeForce NOW library additions:

  • HumanitZ (New release on Steam, Sept. 18)
  • Party Animals (New release on Steam, Sept. 20)
  • PAYDAY 3 (New release on Steam, Epic Games Store, Xbox PC Game Pass, Sept. 21)
  • Warhaven (New release on Steam)
  • 911 Operator (Epic Games Store)
  • Ad Infinitum (Steam)
  • Chained Echoes (Xbox, available on PC Game Pass)
  • Deceit 2 (Steam)
  • The Legend of Tianding (Xbox, available on PC Game Pass)
  • MechWarrior 5: Mercenaries (Xbox, available on PC Game Pass)
  • Sprawl (Steam)

Starting today, the Cyberpunk 2077 2.0 patch will also be supported, adding DLSS 3.5 technology and other new features.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Virtually Incredible: Mercedes-Benz Prepares Its Digital Production System for Next-Gen Platform With NVIDIA Omniverse, MB.OS and Generative AI

Mercedes-Benz is using digital twins for production with help from NVIDIA Omniverse, a platform for developing Universal Scene Description (OpenUSD) applications to design, collaborate, plan and operate manufacturing and assembly facilities.

Mercedes-Benz’s new production techniques will bring its next-generation vehicle portfolio into its manufacturing facilities operating in Rastatt, Germany; Kecskemét, Hungary; and Beijing, China — and offer a blueprint for its more than 30 factories worldwide. This “Digital First” approach enhances efficiency, avoids defects and saves time, marking a step-change in the flexibility, resilience and intelligence of the Mercedes-Benz MO360 production system.

The digital twin in production helps ensure Mercedes-Benz assembly lines can be retooled, configured and optimized in physically accurate simulations first. The new assembly lines in the Kecskemét plant, developed virtually using digital twins in Omniverse, will enable production of vehicles based on the newly launched Mercedes Modular Architecture.

By leveraging Omniverse, Mercedes-Benz can interact directly with its suppliers, reducing coordination processes by 50%. Using a digital twin in production doubles the speed for converting or constructing an assembly hall, while improving the quality of the processes, according to the automaker.

“Using NVIDIA Omniverse and AI, Mercedes-Benz is building a connected, digital-first approach to optimize its manufacturing processes, ultimately reducing construction time and production costs,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, during a digital event held earlier today.

In addition, the introduction of AI opens up new areas of energy and cost savings. The Rastatt plant is being used to pioneer digital production in the paint shop. Mercedes-Benz used AI to monitor relevant sub-processes in the pilot testing, which led to energy savings of 20%.

Supporting State-of-the-Art Software Systems

Next-generation Mercedes-Benz vehicles will feature its new operating system “MB.OS,” which will be standard across its entire vehicle portfolio and deliver premium software capabilities and experiences across all vehicle domains.

Mercedes-Benz has partnered with NVIDIA to develop software-defined vehicles. Its fleets will be built on NVIDIA DRIVE Orin and DRIVE software, with intelligent driving capabilities tested and validated in the NVIDIA DRIVE Sim platform, which is also built on Omniverse.

The automaker’s MO360 production system will enable it to produce electric, hybrid and gas models on the same production lines and to scale the manufacturing of electric vehicles. The implementation of MB.OS in production will allow its cars to roll off assembly lines with the latest versions of vehicle software.

“Mercedes-Benz is initiating a new era of automotive manufacturing thanks to the integration of artificial intelligence, MB.OS and the digital twin based on NVIDIA Omniverse into the MO360 ecosystem,” said Jörg Burzer, member of the board of the Mercedes-Benz Group AG, Production, Quality and Supply Chain Management. “With our new ‘Digital First’ approach, we unlock efficiency potential even before the launch of our MMA models in our global production network and can accelerate the ramp-up significantly.”

Flexible Factories of the Future

Avoiding costly manufacturing production shutdowns is critical. Running simulations in NVIDIA Omniverse enables factory planners to optimize factory floor and production line layouts for supply routes, and production lines can be validated without having to disrupt production.

This virtual approach also enables efficient design of new lines and change management for existing lines while reducing downtime and helping improve product quality. For the world’s automakers, much is at stake across the entire software development stack, from chip to cloud.

Omniverse Collaboration for Efficiencies 

The Kecskemét plant is the first with a full digital twin of the entire factory. This virtual area enables development at the heart of assembly, between its tech and trim lines. Plans call for the new Kecskemét factory hall to move into full production.

Collaboration in Omniverse has enabled plant suppliers and planners to work together in the virtual environment, so layout options and automation changes can be incorporated and validated in real time. This accelerates how quickly new production lines reach maximum capacity and reduces the risk of rework or stoppages.

Virtual collaboration with digital twins can accelerate planning and implementation of projects by weeks, as well as translate to significant cost savings for launching new manufacturing lines.

Learn more about NVIDIA Omniverse and DRIVE Orin.

Oracle Cloud Infrastructure Offers New NVIDIA GPU-Accelerated Compute Instances

With generative AI and large language models (LLMs) driving groundbreaking innovations, the computational demands for training and inference are skyrocketing.

These modern-day generative AI applications demand full-stack accelerated compute, starting with state-of-the-art infrastructure that can handle massive workloads with speed and accuracy. To help meet this need, Oracle Cloud Infrastructure today announced general availability of NVIDIA H100 Tensor Core GPUs on OCI Compute, with NVIDIA L40S GPUs coming soon.

NVIDIA H100 Tensor Core GPU Instance on OCI

The OCI Compute bare-metal instances with NVIDIA H100 GPUs, powered by the NVIDIA Hopper architecture, enable an order-of-magnitude leap for large-scale AI and high-performance computing, with unprecedented performance, scalability and versatility for every workload.

Organizations using NVIDIA H100 GPUs obtain up to a 30x increase in AI inference performance and a 4x boost in AI training compared with the NVIDIA A100 Tensor Core GPU. The H100 GPU is designed for resource-intensive computing tasks, including training LLMs and running inference on them.

The BM.GPU.H100.8 OCI Compute shape includes eight NVIDIA H100 GPUs, each with 80GB of HBM3 GPU memory. Between the eight GPUs, 3.2TB/s of bisection bandwidth enables each GPU to communicate directly with all seven other GPUs via NVIDIA NVSwitch and NVLink 4.0 technology. The shape also includes 16 local NVMe drives with a capacity of 3.84TB each, 4th Gen Intel Xeon processors with 112 cores and 2TB of system memory.
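Taken together, the listed specs work out to sizable per-node totals; a quick sketch using only the numbers above:

```python
# Per-node totals for the BM.GPU.H100.8 shape, from the specs listed above.

gpus = 8
hbm_per_gpu_gb = 80          # GB of GPU memory per H100
nvme_drives = 16
nvme_capacity_tb = 3.84      # TB per local NVMe drive

total_gpu_memory_gb = gpus * hbm_per_gpu_gb      # 640 GB of GPU memory
total_nvme_tb = nvme_drives * nvme_capacity_tb   # 61.44 TB of local NVMe

print(f"{total_gpu_memory_gb} GB GPU memory, {total_nvme_tb:.2f} TB local NVMe")
```

That 640GB pool of GPU memory per node, reachable at full NVLink bandwidth, is what makes the shape practical for models too large to fit on any single GPU.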

In a nutshell, this shape is optimized for organizations’ most challenging workloads.

Depending on timelines and sizes of workloads, OCI Supercluster allows organizations to scale their NVIDIA H100 GPU usage from a single node to up to tens of thousands of H100 GPUs over a high-performance, ultra-low-latency network.

NVIDIA L40S GPU Instance on OCI

The NVIDIA L40S GPU, based on the NVIDIA Ada Lovelace architecture, is a universal GPU for the data center, delivering breakthrough multi-workload acceleration for LLM inference and training, visual computing and video applications. The OCI Compute bare-metal instances with NVIDIA L40S GPUs will be available for early access later this year, with general availability coming early in 2024.

These instances will offer an alternative to the NVIDIA H100 and A100 GPU instances for tackling smaller- to medium-sized AI workloads, as well as for graphics and video compute tasks. The NVIDIA L40S GPU achieves up to a 20% performance boost for generative AI workloads and as much as a 70% improvement in fine-tuning AI models compared with the NVIDIA A100.

The BM.GPU.L40S.4 OCI Compute shape includes four NVIDIA L40S GPUs, along with the latest-generation Intel Xeon CPU with up to 112 cores, 1TB of system memory, 15.36TB of low-latency NVMe local storage for caching data and 400GB/s of cluster network bandwidth. This instance was created to tackle a wide range of use cases, ranging from LLM training, fine-tuning and inference to NVIDIA Omniverse workloads and industrial digitalization, 3D graphics and rendering, video transcoding and FP32 HPC.

NVIDIA and OCI: Enterprise AI

This collaboration between OCI and NVIDIA will enable organizations of all sizes to join the generative AI revolution by providing them with state-of-the-art NVIDIA H100 and L40S GPU-accelerated infrastructure.

Access to NVIDIA GPU-accelerated instances may not be enough, however. Unlocking the maximum potential of NVIDIA GPUs on OCI Compute means having an optimal software layer. NVIDIA AI Enterprise streamlines the development and deployment of enterprise-grade accelerated AI software with open-source containers and frameworks optimized for the underlying NVIDIA GPU infrastructure, all with the help of support services.

To learn more, join NVIDIA at Oracle Cloud World in the AI Pavilion, attend this session on the new OCI instances on Wednesday, Sept. 20, and visit these web pages on Oracle Cloud Infrastructure, OCI Compute, how Oracle approaches AI and the NVIDIA AI Platform.

Meet the Omnivore: Industrial Designer Blends Art and OpenUSD to Create 3D Assets for AI Training

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse and OpenUSD to accelerate their 3D workflows and create virtual worlds.

As a student at the Queensland University of Technology (QUT) in Australia, Emily Boehmer was torn between pursuing the creative arts or science.

And then she discovered industrial design, which allowed her to dive into research and coding while exploring visualization workflows like sketching, animation and 3D modeling.

Now, Boehmer is putting her skills to practice as a design intern at BMW Group’s Technology Office in Munich. The team uses NVIDIA Omniverse, a platform for developing and connecting 3D tools and applications, and Universal Scene Description — aka OpenUSD — to enhance its synthetic data generation pipelines.

Boehmer creates realistic 3D assets that can be used with SORDI.ai, short for Synthetic Object Recognition Dataset for Industries. Published by BMW Group, Microsoft and NVIDIA, SORDI.ai helps developers and researchers streamline and accelerate the training of AI for production. To automate image generation, the team developed an extension based on Omniverse Replicator, a software development kit for creating custom synthetic data generation tools.

As part of the SORDI.ai team, Boehmer uses Blender and Adobe Substance Painter to design 3D assets with high levels of physical accuracy and photorealism, helping ensure that synthetic data can be used to efficiently train AI models.

All the assets Boehmer creates are used to test and simulate autonomous robots on the NVIDIA Isaac Sim platform, which provides developers a suite of synthetic data generation capabilities that can power photorealistic, physically accurate virtual environments.

Creating Realistic 3D Assets for Training AI 

As a design intern, Boehmer’s main tasks are animation and 3D modeling. The process starts with taking photos of target objects. Then, she uses the 2D photos as references by lining them up with the 3D models in Blender.

3D objects can consist of thousands of polygons, so Boehmer creates two versions of each asset: one with a low polygon count and one with a higher polygon count. The details of the high-poly version can be baked onto the low-poly model, preserving a realistic look at a fraction of the geometry cost.

Once the 3D assets are created, Boehmer uses the models to start assembling scenes. Her favorite aspect of the Omniverse platform is the flexibility of USD, because it allows her to easily make changes to 3D models.

USD workflows have enabled the BMW Group’s design teams to create many different scenes using the same components, as they can easily access all the USD files stored on Omniverse Nucleus. When creating portions of a scene, Boehmer pulls from dozens of USD models from SORDI.ai and adds them into scenes that will be used by other designers to assemble larger factory scenes.

Boehmer only has to update the USD file of the original asset to automatically apply changes to every file that references it.

“It’s great to see USD support for both Blender and Substance Painter,” she said. “When I create 3D assets using USD, I can be confident that they’ll look and behave as expected in the scenes they’ll be placed in.”

Emily Boehmer’s creative process starts with photographing the object, then using that image as a reference to build and texture 3D models.

Building Factory Scenes With Synthetic Data

The Isaac Sim platform is a key part of the SORDI.ai team’s workflow. It’s used to develop pipelines that use generative AI and procedural algorithms for 3D scene generation. The team also developed an extension based on Omniverse Replicator that automates randomization within a scene when generating synthetic images.

“The role of design interns like me is to realistically model and texture the assets used for scenes built in Isaac Sim,” Boehmer said. “The more realistic the assets are, the more realistic the synthetic images can be and the more effective they are for training AI models for real scenarios.”

Data annotation — the process of labeling data like images, text, audio or video with relevant tags — makes it easier for AI to understand the data, but the manual process can be incredibly time-consuming, especially for large quantities of content. SORDI.ai addresses these challenges by using synthetic data to train AI.

When importing assets into Omniverse and creating USD versions of the files, Boehmer tags them with the appropriate data label. Once these assets have been put together in a scene, she can use Omniverse Replicator to generate images that are automatically annotated using the original labels.

And using SORDI.ai, designers can set up scenes and generate thousands of annotated images with just one click.
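
The idea behind this automatic annotation can be sketched in a few lines of plain Python: because each asset carries its label from the moment it’s imported, every randomized frame’s annotations come for free. This is a simplified, library-free illustration, not the Omniverse Replicator API; the asset names and annotation schema are hypothetical.

```python
import json
import random

# Hypothetical asset catalog: each 3D asset carries its label once, at import time.
ASSETS = [
    {"name": "pallet", "label": "pallet", "size": (120, 80)},
    {"name": "forklift", "label": "forklift", "size": (200, 150)},
    {"name": "bin", "label": "container", "size": (60, 60)},
]

def generate_annotated_frame(frame_id, image_size=(1920, 1080), seed=None):
    """Randomize asset placement and emit annotations for free: every
    bounding box inherits the label attached to the source asset."""
    rng = random.Random(seed)
    annotations = []
    for asset in rng.sample(ASSETS, k=rng.randint(1, len(ASSETS))):
        w, h = asset["size"]
        x = rng.randint(0, image_size[0] - w)
        y = rng.randint(0, image_size[1] - h)
        annotations.append({"label": asset["label"], "bbox": [x, y, w, h]})
    return {"frame": frame_id, "annotations": annotations}

# One click, thousands of frames: no human labeling in the loop.
dataset = [generate_annotated_frame(i, seed=i) for i in range(1000)]
print(json.dumps(dataset[0], indent=2))
```

The real pipeline renders images alongside the labels, but the key point is the same: the label is attached once to the asset, then reused across every generated frame.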

Boehmer will be a guest on an Omniverse livestream on Wednesday, Sept. 20, where she’ll demonstrate how she uses Blender and Substance Painter in Omniverse for synthetic image generation pipelines.

Join In on the Creation

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Creators and developers can download Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. See how creators are using OpenUSD to accelerate a variety of 3D workflows in the latest OpenUSD All Stars. And connect workflows to Omniverse with software from Adobe, Autodesk, Blender, Epic Games, Reallusion and more.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

Read More

Ray Shines with NVIDIA AI: Anyscale Collaboration to Help Developers Build, Tune, Train and Scale Production LLMs

Large language model development is about to reach supersonic speed thanks to a collaboration between NVIDIA and Anyscale.

At its annual Ray Summit developers conference, Anyscale — the company behind Ray, the fast-growing open-source unified compute framework for scalable computing — announced that it is bringing NVIDIA AI to Ray open source and the Anyscale Platform. NVIDIA AI will also be integrated into Anyscale Endpoints, a new service announced at the conference that makes it easy for application developers to cost-effectively embed LLMs in their applications using the most popular open-source models.

These integrations can dramatically speed generative AI development and efficiency while boosting security for production AI, from proprietary LLMs to open models such as Code Llama, Falcon, Llama 2, SDXL and more.

Developers will have the flexibility to deploy open-source NVIDIA software with Ray or opt for NVIDIA AI Enterprise software running on the Anyscale Platform for a fully supported and secure production deployment.

Ray and the Anyscale Platform are widely used by developers building advanced LLMs for generative AI applications capable of powering intelligent chatbots, coding copilots and powerful search and summarization tools.

NVIDIA and Anyscale Deliver Speed, Savings and Efficiency

Generative AI applications are captivating the attention of businesses around the globe. Fine-tuning, augmenting and running LLMs requires significant investment and expertise. Together, NVIDIA and Anyscale can help reduce costs and complexity for generative AI development and deployment with a number of application integrations.

NVIDIA TensorRT-LLM, new open-source software announced last week, will support Anyscale offerings to supercharge LLM performance and efficiency to deliver cost savings. Also supported in the NVIDIA AI Enterprise software platform, TensorRT-LLM automatically scales inference to run models in parallel over multiple GPUs, which can provide up to 8x higher performance when running on NVIDIA H100 Tensor Core GPUs, compared to prior-generation GPUs.

TensorRT-LLM includes custom GPU kernels and optimizations for a wide range of popular LLM models. It also implements the new FP8 numerical format available in the NVIDIA H100 Tensor Core GPU Transformer Engine and offers an easy-to-use and customizable Python interface.
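
The core idea behind scaling inference across GPUs — tensor parallelism — can be illustrated without any GPU at all: split a layer’s weight matrix column-wise, compute each shard’s output independently, and concatenate the partial results. This is a conceptual sketch in plain Python, not how TensorRT-LLM is actually implemented:

```python
def matmul(a, b):
    """Plain matrix multiply: (n x k) @ (k x m) -> (n x m)."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def split_columns(matrix, parts):
    """Slice a weight matrix column-wise into one shard per 'GPU'.
    Assumes the column count divides evenly by `parts`."""
    step = len(matrix[0]) // parts
    return [[row[p * step:(p + 1) * step] for row in matrix] for p in range(parts)]

def tensor_parallel_matmul(x, weight, num_gpus):
    # Each shard's product could run on a separate GPU; here we run them
    # sequentially and concatenate the partial outputs, which reproduces
    # exactly the result of the unsharded multiply.
    shards = split_columns(weight, num_gpus)
    partials = [matmul(x, shard) for shard in shards]
    return [sum((p[i] for p in partials), []) for i in range(len(x))]

x = [[1.0, 2.0]]
w = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
assert tensor_parallel_matmul(x, w, num_gpus=2) == matmul(x, w)
```

In a real deployment each shard lives on a different GPU and the concatenation is a cross-GPU gather, which is where the speedup over a single device comes from.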

NVIDIA Triton Inference Server software supports inference across cloud, data center, edge and embedded devices on GPUs, CPUs and other processors. Its integration can enable Ray developers to boost efficiency when deploying AI models from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS XGBoost and more.

With the NVIDIA NeMo framework, Ray users will be able to easily fine-tune and customize LLMs with business data, paving the way for LLMs that understand the unique offerings of individual businesses.

NeMo is an end-to-end, cloud-native framework to build, customize and deploy generative AI models anywhere. It features training and inferencing frameworks, guardrailing toolkits, data curation tools and pretrained models, offering enterprises an easy, cost-effective and fast way to adopt generative AI.

Options for Open-Source or Fully Supported Production AI 

Ray open source and the Anyscale Platform enable developers to effortlessly move from open source to deploying production AI at scale in the cloud.

The Anyscale Platform provides fully managed, enterprise-ready unified computing that makes it easy to build, deploy and manage scalable AI and Python applications using Ray, helping customers bring AI products to market faster at significantly lower cost.

Whether developers use Ray open source or the supported Anyscale Platform, Anyscale’s core functionality helps them easily orchestrate LLM workloads. The NVIDIA AI integration can help developers build, train, tune and scale AI with even greater efficiency.

Ray and the Anyscale Platform run on accelerated computing from leading clouds, with the option to run on hybrid or multi-cloud computing. This helps developers easily scale up as they need more computing to power a successful LLM deployment.

The collaboration will also enable developers to begin building models on their workstations through NVIDIA AI Workbench and scale them easily across hybrid or multi-cloud accelerated computing once it’s time to move to production.

NVIDIA AI integrations with Anyscale are in development and expected to be available by the end of the year.

Developers can sign up to get the latest news on this integration as well as a free 90-day evaluation of NVIDIA AI Enterprise.

To learn more, attend the Ray Summit in San Francisco this week or watch the demo video below.

See this notice regarding NVIDIA’s software roadmap.

Read More

Shout at the Devil: Capcom’s ‘Devil May Cry 5’ Joins GeForce NOW

GFN Thursday is downright demonic, as Devil May Cry 5 comes to GeForce NOW.

Capcom’s action-packed third-person brawler leads 15 titles joining the GeForce NOW library this week, including Gears Tactics and The Crew Motorfest.

It’s also the last week to take on the Ultimate KovaaK’s Challenge. Get on the leaderboard today for a chance to win a 240Hz gaming monitor, a gaming Chromebook, GeForce NOW memberships or other prizes. The challenge ends on Thursday, Sept. 21.

The Devil Returns

Devil May Cry 5 on GeForce NOW
Jackpot!

Devil May Cry 5 is the next title from Capcom’s catalog to come to GeForce NOW. Members can stream all of its high-octane, stylish action at GeForce RTX quality to nearly any device, thanks to the power of GeForce NOW cloud gaming servers.

The threat of demonic power has returned to menace the world once again. Take on hordes of enemies as Nero, V or the legendary Dante with the ramped-up sword-and-gun gameplay that the series is known for. Battle epic bosses in adrenaline-fueled fights across the overrun Red Grave City — all to the beat of a truly killer soundtrack.

Take the action on the go thanks to the power of the cloud. GeForce NOW Priority members can take the fight with them across nearly any device at up to 1080p and 60 frames per second.

Kickin’ It Into High Gear

Gears Tactics on GeForce NOW
A squad of survivors is all it takes to stop the Locust threat.

Rise up and fight, members. Gears Tactics is the next PC Game Pass title to arrive in the cloud.

Gears Tactics is a fast-paced, turn-based strategy game from one of the most acclaimed video game franchises — Gears of War. Set a dozen years before the first Gears of War game, the Gears Tactics story opens as cities on the planet Sera begin falling to the monstrous threat rising from underground: the Locust Horde. With the government in disarray, a squad of survivors emerges as humanity’s last hope. Play as the defiant soldier Gabe Diaz to recruit, develop and command squads on a desperate mission to hunt down the relentless and powerful leader of the Locust army, Ukkon, the group’s monster-making mastermind.

Fight for survival and outsmart the enemy with the sharpness of 4K resolution streaming from the cloud with a GeForce NOW Ultimate membership.

Hit the Road, Jack

The Crew Motorfest on GeForce NOW
The best way to see Hawaii is by car, at 100 mph.

The Crew Motorfest also comes to GeForce NOW this week. The latest entry in Ubisoft’s racing franchise drops drivers into the open roads of Oahu, Hawaii. Get behind the wheel of 600+ iconic vehicles from the past, present and future, including sleek sports cars, rugged off-road vehicles and high-performance racing machines. Race alone or with friends through the bustling city of Honolulu, test off-roading skills on the ashy slopes of a volcano or kick back on the sunny beaches behind the wheel of a buggy.

Members can take a test drive from Sept. 14-17 with a five-hour free trial. Explore the vibrant Hawaiian open world, participate in thrilling driving activities and collect prestigious cars, with all progress carrying over to the full game purchase.

Take the pole position with a GeForce NOW Ultimate membership to stream The Crew Motorfest and more than 1,600 other titles at the highest frame rates. Upgrade today.

A New Challenge

Gunbrella on GeForce NOW
Rain, rain, go away. The umbrella is also a gun today.

With GeForce NOW, there’s always something new to play. Here’s what’s hitting the playlist this week:

  • Tavernacle! (New release on Steam, Sept. 11)
  • Gunbrella (New release on Steam, Sept. 13)
  • The Crew Motorfest (New release on Ubisoft Connect, Sept. 14)
  • Amnesia: The Bunker (Xbox, available on PC Game Pass)
  • Descenders (Xbox, available on PC Game Pass)
  • Devil May Cry 5 (Steam)
  • Gears Tactics (Steam and Xbox, available on PC Game Pass)
  • Last Call BBS (Xbox)
  • The Matchless Kungfu (Steam)
  • Mega City Police (Steam)
  • Opus Magnum (Xbox)
  • Remnant II (Epic Games Store)
  • Space Hulk: Deathwing – Enhanced Edition (Xbox)
  • Superhot (Xbox)
  • Vampyr (Xbox)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

Unlocking the Language of Genomes and Climates: Anima Anandkumar on Using Generative AI to Tackle Global Challenges

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research.

Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President’s Council of Advisors on Science and Technology.

At the talk, Anandkumar said, generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.”

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI’s potential to make splashes in the scientific community.

It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research.
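
A first step for such genomic language models is tokenizing DNA into overlapping k-mers — the “words” the model learns from, analogous to subword tokens in natural language. A minimal sketch of that idea in plain Python (the sequence shown is made up):

```python
def kmer_tokenize(sequence, k=6, stride=1):
    """Split a DNA sequence into overlapping k-mers, the 'words' a
    genomic language model is trained on."""
    sequence = sequence.upper()
    assert set(sequence) <= set("ACGT"), "expected a DNA sequence"
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, stride)]

def build_vocab(tokens):
    """Map each distinct k-mer to an integer id, as a model's embedding layer expects."""
    return {tok: i for i, tok in enumerate(sorted(set(tokens)))}

# A made-up DNA fragment, tokenized the way a genomic LM would see it.
seq = "ATGTTTGTTTTTCTTGTT"
tokens = kmer_tokenize(seq, k=6)
vocab = build_vocab(tokens)
ids = [vocab[t] for t in tokens]
print(tokens[:3])  # ['ATGTTT', 'TGTTTG', 'GTTTGT']
print(ids[:3])
```

Once sequences are reduced to token ids like this, the same transformer machinery used for text can be trained to predict mutations or variant fitness.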

Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns.

“Those are the aspects we’re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?’” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?”

Anandkumar added that to ensure AI models are used responsibly and safely, existing laws must be strengthened to prevent dangerous downstream applications.

She also talked about the AI boom, which is transforming the role of humans across industries, and about problems yet to be solved.

“This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More