NVIDIA Scoops Up Wins at COMPUTEX Best Choice Awards

Building on more than a dozen years of wins at the COMPUTEX trade show’s annual Best Choice Awards (BCAs), NVIDIA was today honored for its latest technologies.

The NVIDIA GH200 Grace Hopper Superchip won the Computer and System Category Award; the NVIDIA Spectrum-X AI Ethernet networking platform won the Networking and Communication Category Award; and the NVIDIA AI Enterprise software platform won a Golden Award.

The awards — judged on the functionality, innovation and market potential of products exhibited at the leading computer and technology expo — were announced ahead of the show, which runs June 4-7 in Taipei.

NVIDIA founder and CEO Jensen Huang will deliver a COMPUTEX keynote address on Sunday, June 2, at 7 p.m. Taiwan time, at the NTU Sports Center and online.

NVIDIA AI Enterprise Takes Gold

NVIDIA AI Enterprise — a cloud-native software platform that streamlines the development and deployment of copilots and other generative AI applications — won a Golden Award.

The platform lifts the burden of maintaining and securing complex AI software, so businesses can focus on building and harnessing the technology’s game-changing insights.

Microservices that come with NVIDIA AI Enterprise — including NVIDIA NIM and NVIDIA CUDA-X — optimize model performance and run anywhere with enterprise-grade security, support and stability, offering users a smooth transition from prototype to production.

Plus, the platform’s ability to improve AI performance results in better overall utilization of computing resources. This means companies using NVIDIA AI Enterprise need fewer servers to support the same workloads, greatly reducing their energy costs and data center footprint.

More BCA Wins for NVIDIA Technologies

NVIDIA GH200 and Spectrum-X were named best in their respective categories.

The NVIDIA GH200 Grace Hopper Superchip is the world’s first truly heterogeneous accelerated platform for AI and high-performance computing workloads. It combines the power-efficient NVIDIA Grace CPU with an NVIDIA Hopper architecture-based GPU over a high-bandwidth 900GB/s coherent NVIDIA NVLink chip-to-chip interconnect.

The superchip — shipping worldwide and powering more than 40 AI supercomputers across global research centers, system makers and cloud providers — supercharges scientific innovation with accelerated computing and scale-out solutions for AI inference, large language models, recommenders, vector databases, HPC applications and more.

The Spectrum-X platform, featuring NVIDIA Spectrum SN5600 switches and NVIDIA BlueField-3 SuperNICs, is the world’s first Ethernet fabric built for AI, accelerating generative AI network performance 1.6x over traditional Ethernet fabrics.

It can serve as the backend AI fabric for any AI cloud or large enterprise deployment, and is available from major server manufacturers as part of the full NVIDIA AI stack.

NVIDIA Partners Recognized

Other BCA winners include NVIDIA partners Acer, ASUS, MSI and YUAN, which were given Golden Awards for their respective laptops, gaming motherboards and smart-city applications — all powered by NVIDIA technologies, such as NVIDIA GeForce RTX 4090 GPUs, the NVIDIA Studio platform for creative workflows and the NVIDIA Jetson platform for edge AI and robotics.

ASUS also won a Computer and System Category Award, while MSI won a Gaming and Entertainment Category Award.

Learn more about the latest generative AI, HPC and networking technologies by joining NVIDIA at COMPUTEX.

Into the Omniverse: SoftServe and Continental Drive Digitalization With OpenUSD and Generative AI

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Industrial digitalization is driving automotive innovation.

In response to the industry’s growing demand for seamless, connected driving experiences, SoftServe, a leading IT consulting and digital services provider, worked with Continental, a leading German automotive technology company, to develop Industrial Co-Pilot, a virtual agent powered by generative AI that enables engineers to streamline maintenance workflows.

SoftServe helps manufacturers like Continental further optimize their operations by integrating the Universal Scene Description, or OpenUSD, framework into virtual factory solutions — such as Industrial Co-Pilot — developed on the NVIDIA Omniverse platform.

OpenUSD offers the flexibility and extensibility organizations need to harness the full potential of digital transformation, streamlining operations and driving efficiency. Omniverse is a platform of application programming interfaces, software development kits and services that enable developers to easily integrate OpenUSD and NVIDIA RTX rendering technologies into existing software tools and simulation workflows.

Realizing the Benefits of OpenUSD

SoftServe and Continental’s Industrial Co-Pilot brings together generative AI and immersive 3D visualization to help factory teams increase productivity during equipment and production line maintenance. With the copilot, engineers can oversee production lines and monitor the performance of individual stations or the shop floor.

They can also interact with the copilot to conduct root cause analysis and receive step-by-step work instructions and recommendations, leading to reduced documentation processes and improved maintenance procedures. These advancements are expected to contribute to increased productivity and a 10% reduction in maintenance effort and downtime.

In a recent Omniverse community livestream, Benjamin Huber, who leads advanced automation and digitalization in the user experience business area at Continental, highlighted the significance of the company’s collaboration with SoftServe and its adoption of Omniverse.

The Omniverse platform equips Continental and SoftServe developers with the tools needed to build a new era of AI-enabled industrial applications and services. And by breaking down data silos and fostering multi-platform cooperation with OpenUSD, SoftServe and Continental developers enable engineers to work seamlessly across disciplines and systems, driving efficiency and innovation throughout their processes.

“Any engineer, no matter what tool they’re working with, can transform their data into OpenUSD and then interchange data from one discipline to another, and from one tool to another,” said Huber.

This sentiment was echoed by Vasyl Boliuk, senior lead and test automation engineer at SoftServe, who shared how OpenUSD and Omniverse — along with other NVIDIA technologies like NVIDIA Riva, NVIDIA NeMo and NVIDIA NIM microservices — enabled SoftServe and Continental teams to develop custom large language models and connect them to new 3D workflows.

“OpenUSD allows us to add any attribute or any piece of metadata we want to our applications,” he said.
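
As a minimal sketch of that flexibility, the snippet below uses the OpenUSD Python API to author a custom attribute and a piece of custom metadata on a prim. The stage name, prim path, attribute and keys are hypothetical placeholders for illustration, not details of the Industrial Co-Pilot implementation.

```python
# Minimal OpenUSD sketch: authoring a custom attribute and custom metadata
# on a prim. All names (stage file, prim path, attribute, keys) are
# illustrative placeholders, not from the SoftServe/Continental project.
from pxr import Sdf, Usd, UsdGeom

# Create a new stage and define a prim representing a factory station.
stage = Usd.Stage.CreateNew("station.usda")
station = UsdGeom.Xform.Define(stage, "/Factory/Station_01").GetPrim()

# Author a custom attribute that any OpenUSD-aware tool can read or override.
status = station.CreateAttribute("maintenanceStatus", Sdf.ValueTypeNames.String)
status.Set("inspection_due")

# Attach free-form metadata through the prim's customData dictionary.
station.SetCustomDataByKey("lastServiceDate", "2024-05-01")

stage.GetRootLayer().Save()
```

Because the data lands in an open .usda layer, other tools in the pipeline can read, layer over or extend it without a proprietary round trip.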

Boliuk, Huber and other SoftServe and Continental representatives joined the livestream to share more about the potential unlocked by these OpenUSD-powered solutions. Watch the replay to hear the full discussion.

By embracing cutting-edge technologies and fostering collaboration, SoftServe and Continental are helping reshape automotive manufacturing.

Get Plugged Into the World of OpenUSD

Watch SoftServe and Continental’s on-demand NVIDIA GTC talks to learn more about their virtual factory solutions and their experience developing on NVIDIA Omniverse with OpenUSD.

Learn about the latest technologies driving the next industrial revolution by watching NVIDIA founder and CEO Jensen Huang’s COMPUTEX keynote on Sunday, June 2, at 7 p.m. Taiwan time.

Check out a new video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels. 

Featured image courtesy of SoftServe.

Senua’s Story Continues: GeForce NOW Brings ‘Senua’s Saga: Hellblade II’ to the Cloud

Every week, GFN Thursday brings new games to the cloud, featuring some of the latest and greatest titles for members to play.

Leading the seven games joining GeForce NOW this week is the newest game in Ninja Theory’s Hellblade franchise, Senua’s Saga: Hellblade II. This day-and-date release expands the cloud gaming platform’s extensive library of over 1,900 games.

Members can also look forward to a new reward — a free in-game mount — for The Elder Scrolls Online starting Thursday, May 30. Get ready by opting into GeForce NOW’s Rewards program.

Senua Returns

Head to the cloud to overcome the darkness.

In Senua’s Saga: Hellblade II, the sequel to the award-winning Hellblade: Senua’s Sacrifice, Senua returns in a brutal journey of survival through the myth and torment of Viking Iceland.

Intent on saving those who’ve fallen victim to the horrors of tyranny, Senua battles the forces of darkness within and without. Sink deep into the next chapter of Senua’s story, a crafted experience told through cinematic immersion, beautifully realized visuals and encapsulating sound.

Priority and Ultimate members can fully immerse themselves in Senua’s story with epic cinematic gameplay at higher resolutions and frame rates than free members. Ultimate members can stream at up to 4K and 120 frames per second with exclusive access to GeForce RTX 4080 SuperPODs in the cloud, even on underpowered devices.

Level Up With New Games

Check out the full list of new games this week:

  • Synergy (New release on Steam, May 21)
  • Senua’s Saga: Hellblade II (New release on Steam and Xbox, available on PC Game Pass, May 21)
  • Crown Wars: The Black Prince (New release on Steam, May 23)
  • Serum (New release on Steam, May 23)
  • Ships at Sea (New release on Steam, May 23)
  • Exo One (Steam)
  • Phantom Brigade (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Watt a Win: NVIDIA Sweeps New Ranking of World’s Most Energy-Efficient Supercomputers

In the latest ranking of the world’s most energy-efficient supercomputers, known as the Green500, NVIDIA-powered systems swept the top three spots, and took seven of the top 10.

The strong showing demonstrates how accelerated computing represents the most energy-efficient method for high-performance computing.

The top three systems were all powered by the NVIDIA GH200 Grace Hopper Superchip, showcasing the widespread adoption and efficiency of NVIDIA’s Grace Hopper architecture.

Leading the pack was the JEDI system, at Germany’s Forschungszentrum Jülich, which achieved an impressive 72.73 GFlops per Watt.

More’s coming. The ability to do more work using less power is driving the construction of more Grace Hopper supercomputers around the world.

Accelerating the Green Revolution in Supercomputing

Such achievements underscore NVIDIA’s pivotal role in advancing the global agenda for sustainable high-performance computing over the past decade.

Accelerated computing has proven to be the cornerstone of energy efficiency, with the majority of systems on the Green500 list — including 40 of the top 50 — now featuring this advanced technology.

Pioneered by NVIDIA, accelerated computing uses GPUs that optimize throughput — getting a lot done at once — to perform complex computations faster than systems based on CPUs alone.

And the Grace Hopper architecture is proving to be a game-changer by enhancing computational speed and dramatically increasing energy efficiency across multiple platforms.

For example, the GH200 chip embedded within the Grace Hopper systems offers over 1,000x more energy efficiency on mixed precision and AI tasks than previous generations.

Redefining Efficiency in Supercomputing

This capability is crucial for accelerating tasks that address complex scientific challenges, speeding up the work of researchers across various disciplines.

NVIDIA’s supercomputing technology excels in traditional benchmarks — and it’s set new standards in energy efficiency.

For instance, the Alps system at the Swiss National Supercomputing Centre (CSCS) is equipped with the NVIDIA GH200 Grace Hopper Superchip. The CSCS submission optimized for the Green500, dubbed preAlps, recorded 270 petaflops on the High-Performance Linpack benchmark, which is used for solving complex linear equations.

The Green500 rankings highlight platforms that provide highly efficient FP64 performance, which is crucial for accurate simulations used in scientific computing. This result underscores NVIDIA’s commitment to powering supercomputers for tasks across a full range of capabilities.

That Linpack result reflects substantial system performance, earning the system a high ranking on the TOP500 list of the world’s fastest supercomputers. Its high position on the Green500 list indicates that this scalable performance does not come at the cost of energy efficiency.

Such performance shows how the Grace Hopper architecture introduces a new era in processing technology, merging tightly coupled CPU and GPU functionalities to enhance not only performance but also significantly improve energy efficiency.

This advancement is supported by the incorporation of an optimized high-efficiency link that moves data between the CPU and GPU.

NVIDIA’s upcoming Blackwell platform is set to build on this, offering the computational power of the Titan supercomputer launched 10 years ago — a $100 million system the size of a tennis court — while being efficient enough to be powered by a wall socket, like a typical home appliance.

In short, over the past decade, NVIDIA innovations have enhanced the accessibility and sustainability of high-performance computing, making scientific breakthroughs faster, cheaper and greener.

A Future Defined by Sustainable Innovation

As NVIDIA continues to push the boundaries of what’s possible in high-performance computing, it remains committed to enhancing the energy efficiency of global computing infrastructure.

The success of the Grace Hopper supercomputers in the Green500 rankings highlights NVIDIA’s leadership and its commitment to more sustainable global computing.

Explore how NVIDIA’s pioneering role in green computing is advancing scientific research, as well as shaping a more sustainable future worldwide.

NVIDIA Expands Collaboration With Microsoft to Help Developers Build, Deploy AI Applications Faster

If optimized AI workflows are like a perfectly tuned orchestra — where each component, from hardware infrastructure to software libraries, hits exactly the right note — then the long-standing harmony between NVIDIA and Microsoft is music to developers’ ears.

The latest AI models developed by Microsoft, including the Phi-3 family of small language models, are being optimized to run on NVIDIA GPUs and made available as NVIDIA NIM inference microservices. Other microservices developed by NVIDIA, such as the cuOpt route optimization AI, are regularly added to Microsoft Azure Marketplace as part of the NVIDIA AI Enterprise software platform.

In addition to these AI technologies, NVIDIA and Microsoft are delivering a growing set of optimizations and integrations for developers creating high-performance AI apps for PCs powered by NVIDIA GeForce RTX and NVIDIA RTX GPUs.

Building on the progress shared at NVIDIA GTC, the two companies are furthering this ongoing collaboration at Microsoft Build, an annual developer event, taking place this year in Seattle through May 23.

Accelerating Microsoft’s Phi-3 Models 

Microsoft is expanding its family of Phi-3 open small language models, adding small (7-billion-parameter) and medium (14-billion-parameter) models similar to its Phi-3-mini, which has 3.8 billion parameters. It’s also introducing a new 4.2-billion-parameter multimodal model, Phi-3-vision, that supports images and text.

All of these models are GPU-optimized with NVIDIA TensorRT-LLM and available as NVIDIA NIMs, which are accelerated inference microservices with a standard application programming interface (API) that can be deployed anywhere.

APIs for the NIM-powered Phi-3 models are available at ai.nvidia.com and through NVIDIA AI Enterprise on the Azure Marketplace.
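
NIM large language model endpoints are typically OpenAI-API compatible, so a rough sketch of calling a hosted Phi-3 model might look like the following; the base URL, model identifier and key handling are assumptions to verify against the catalog at ai.nvidia.com.

```python
# Hedged sketch: querying a NIM-hosted Phi-3 model through an
# OpenAI-compatible endpoint. The base URL and model name below are
# assumptions; confirm the exact values on ai.nvidia.com.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM API endpoint
    api_key="YOUR_NVIDIA_API_KEY",
)

response = client.chat.completions.create(
    model="microsoft/phi-3-mini-4k-instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "Explain what a NIM microservice is."}],
    max_tokens=128,
    temperature=0.2,
)

print(response.choices[0].message.content)
```

In principle, the same request shape applies when a NIM is deployed elsewhere, with only the base URL changing.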

NVIDIA cuOpt Now Available on Azure Marketplace

NVIDIA cuOpt, a GPU-accelerated AI microservice for route optimization, is now available in Azure Marketplace via NVIDIA AI Enterprise. cuOpt features massively parallel algorithms that enable real-time logistics management for shipping services, railway systems, warehouses and factories.

The model has set two dozen world records on major routing benchmarks, demonstrating the best accuracy and fastest times. It could save billions of dollars for the logistics and supply chain industries by optimizing vehicle routes, saving travel time and minimizing idle periods.

Through Azure Marketplace, developers can easily integrate the cuOpt microservice with Azure Maps to support real-time logistics management and other cloud-based workflows, backed by enterprise-grade management tools and security.

Optimizing AI Performance on PCs With NVIDIA RTX

The NVIDIA accelerated computing platform is the backbone of modern AI — helping developers build solutions for over 100 million Windows GeForce RTX-powered PCs and NVIDIA RTX-powered workstations worldwide.

NVIDIA and Microsoft are delivering new optimizations and integrations to Windows developers to accelerate AI in next-generation PC and workstation applications. These include:

  • Faster inference performance for large language models via the NVIDIA DirectX driver, the Generative AI ONNX Runtime extension and DirectML. These optimizations, available now in the GeForce Game Ready, NVIDIA Studio and NVIDIA RTX Enterprise Drivers, deliver up to 3x faster performance on NVIDIA and GeForce RTX GPUs.
  • Optimized performance on RTX GPUs for AI models like Stable Diffusion and Whisper via WebNN, an API that enables developers to accelerate AI models in web applications using on-device hardware.
  • With Windows set to support PyTorch through DirectML, thousands of Hugging Face models will run natively on Windows. NVIDIA and Microsoft are collaborating to scale performance on more than 100 million RTX GPUs.

Join NVIDIA at Microsoft Build 

Conference attendees can visit NVIDIA booth FP28 to meet developer experts and experience live demos of NVIDIA NIM, NVIDIA cuOpt, NVIDIA Omniverse and the NVIDIA RTX AI platform. The booth also highlights the NVIDIA MONAI platform for medical imaging workflows and the NVIDIA BioNeMo generative AI platform for drug discovery — both available on Azure as part of NVIDIA AI Enterprise.

Attend sessions with NVIDIA speakers to dive into the capabilities of the NVIDIA RTX AI platform on Windows PCs and discover how to deploy generative AI and digital twin tools on Microsoft Azure.

And sign up for the Developer Showcase, taking place Wednesday, to discover how developers are building innovative generative AI using NVIDIA AI software on Azure.

New Performance Optimizations Supercharge NVIDIA RTX AI PCs for Gamers, Creators and Developers

NVIDIA today announced at Microsoft Build new AI performance optimizations and integrations for Windows that help deliver maximum performance on NVIDIA GeForce RTX AI PCs and NVIDIA RTX workstations.

Large language models (LLMs) power some of the most exciting new use cases in generative AI and now run up to 3x faster with ONNX Runtime (ORT) and DirectML using the new NVIDIA R555 Game Ready Driver. ORT and DirectML are high-performance tools used to run AI models locally on Windows PCs.

WebNN, an application programming interface for web developers to deploy AI models, is now accelerated with RTX via DirectML, enabling web apps to incorporate fast, AI-powered capabilities. And PyTorch will support DirectML execution backends, enabling Windows developers to train and infer complex AI models on Windows natively. NVIDIA and Microsoft are collaborating to scale performance on RTX GPUs.

These advancements build on NVIDIA’s world-leading AI platform, which accelerates more than 500 applications and games on over 100 million RTX AI PCs and workstations worldwide.

RTX AI PCs — Enhanced AI for Gamers, Creators and Developers

NVIDIA introduced the first PC GPUs with dedicated AI acceleration, the GeForce RTX 20 Series with Tensor Cores, along with the first widely adopted AI model to run on Windows, NVIDIA DLSS, in 2018. Its latest GPUs offer up to 1,300 trillion operations per second of dedicated AI performance.

In the coming months, Copilot+ PCs equipped with new power-efficient systems-on-a-chip and RTX GPUs will be released, giving gamers, creators, enthusiasts and developers increased performance to tackle demanding local AI workloads, along with Microsoft’s new Copilot+ features.

For gamers on RTX AI PCs, NVIDIA DLSS boosts frame rates by up to 4x, while NVIDIA ACE brings game characters to life with AI-driven dialogue, animation and speech.

For content creators, RTX powers AI-assisted production workflows in apps like Adobe Premiere, Blackmagic Design DaVinci Resolve and Blender to automate tedious tasks and streamline workflows. From 3D denoising and accelerated rendering to text-to-image and video generation, these tools empower artists to bring their visions to life.

For game modders, NVIDIA RTX Remix, built on the NVIDIA Omniverse platform, provides AI-accelerated tools to create RTX remasters of classic PC games. It makes it easier than ever to capture game assets, enhance materials with generative AI tools and incorporate full ray tracing.

For livestreamers, the NVIDIA Broadcast application delivers high-quality AI-powered background subtraction and noise removal, while NVIDIA RTX Video provides AI-powered upscaling and auto-high-dynamic range to enhance streamed video quality.

Enhancing productivity, LLMs powered by RTX GPUs execute AI assistants and copilots faster, and can process multiple requests simultaneously.

And RTX AI PCs allow developers to build and fine-tune AI models directly on their devices using NVIDIA’s AI developer tools, which include NVIDIA AI Workbench, NVIDIA cuDNN and CUDA on Windows Subsystem for Linux. Developers also have access to RTX-accelerated AI frameworks and software development kits like NVIDIA TensorRT, NVIDIA Maxine and RTX Video.

The combination of AI capabilities and performance delivers enhanced experiences for gamers, creators and developers.

Faster LLMs and New Capabilities for Web Developers

Microsoft recently released the generative AI extension for ORT, a cross-platform library for AI inference. The extension adds support for optimization techniques like quantization for LLMs such as Phi-3, Llama 3, Gemma and Mistral. ORT supports different execution providers for inferencing via various software and hardware stacks, including DirectML.

ORT with the DirectML backend offers Windows AI developers a quick path to develop AI capabilities, with stability and production-grade support for the broad Windows PC ecosystem. NVIDIA optimizations for the generative AI extension for ORT, available now in R555 Game Ready, Studio and NVIDIA RTX Enterprise Drivers, help developers get up to 3x faster performance on RTX compared to previous drivers.

Inference performance for three LLMs using ONNX Runtime and the DirectML execution provider with the latest R555 GeForce driver compared to the previous R550 driver. INSEQ=2000 representative of document summarization workloads. All data captured with GeForce RTX 4090 GPU using batch size 1. The generative AI extension support for int4 quantization, plus the NVIDIA optimizations, result in up to 3x faster performance for LLMs.
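
For orientation, here is a minimal, hedged sketch of pointing ONNX Runtime at the DirectML execution provider — the same backend the figures above refer to. The model path and input shape are placeholders for illustration.

```python
# Minimal sketch: running an ONNX model through ONNX Runtime's DirectML
# execution provider on Windows (install the onnxruntime-directml package).
# The model path and input shape are placeholders for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # GPU via DirectML, CPU fallback
)

# Feed a placeholder tensor shaped to match the model's first input.
input_name = session.get_inputs()[0].name
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```

The generative AI extension for ORT layers its LLM token-generation loop on top of sessions configured like this, which is where the driver-level gains described above show up.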

Developers can unlock the full capabilities of RTX hardware with the new R555 driver, bringing better AI experiences to consumers, faster. It includes:

  • Support for DQ-GEMM metacommand to handle INT4 weight-only quantization for LLMs
  • New RMSNorm normalization methods for Llama 2, Llama 3, Mistral and Phi-3 models
  • Group and multi-query attention mechanisms, and sliding window attention to support Mistral
  • In-place KV updates to improve attention performance
  • Support for GEMM of non-multiple-of-8 tensors to improve context phase performance

Additionally, NVIDIA has optimized AI workflows within WebNN to deliver the powerful performance of RTX GPUs directly within browsers. The WebNN standard helps web app developers accelerate deep learning models with on-device AI accelerators, like Tensor Cores.

Now available in developer preview, WebNN uses DirectML and ORT Web, a JavaScript library for in-browser model execution, to make AI applications more accessible across multiple platforms. With this acceleration, popular models like Stable Diffusion, SD Turbo and Whisper run up to 4x faster on WebNN compared with WebGPU and are now available for developers to use.

Microsoft Build attendees can learn more about developing on RTX in the “Accelerating development on Windows PCs with RTX AI” in-person session on Wednesday, May 22, at 11 a.m. PT.

A Superbloom of Updates in the May Studio Driver Gives Fresh Life to Content Creation

Editor’s note: This post is part of our In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX GPU features, technologies and resources, and how they dramatically accelerate content creation.

A superbloom of creative app updates, included in the May Studio Driver, is ready for download today.

New GPU-accelerated and AI-powered apps and features are now available, backed by the NVIDIA Studio platform.

And this week’s featured In the NVIDIA Studio artist, Yao Chan, created the whimsical, spring-inspired 3D scene By the Window using her NVIDIA RTX GPU.

May’s Creative App Rundown

RTX Video is a collection of AI enhancements that improves the quality of video played on apps like YouTube, Prime Video and Disney+. RTX Video Super Resolution (VSR) upscales video for cleaner, crisper imagery, while RTX Video HDR transforms standard dynamic range video content to high-dynamic range (HDR10), improving its visibility, details and vibrancy.

Mozilla Firefox, the third most popular PC browser, has added support for RTX VSR and HDR, including AI-enhanced upscaling, de-artifacting and HDR effects for most streamed videos.

NVIDIA RTX Remix allows modders to easily capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing. RTX Remix recently added DLSS 3.5 support featuring Ray Reconstruction, an AI model that creates higher-quality images for intensive ray-traced games and apps, to the modding toolkit.

Game developers interested in creating their own ray-traced mod for a classic game can download the RTX Remix Beta and watch tutorial videos to get a head start.

Maxon’s Cinema 4D modeling software empowers 3D video effects artists and motion designers to create complex scenes with ease. The software’s Version 2024.4 integration with C4D’s Unified Simulation systems now enables more precise control of emission fields to modify behaviors.

This integration unlocks the ability to orchestrate object interactions with different simulation types, including Pyro, Cloth, soft bodies and rigid bodies. These simulations run considerably faster depending on the RTX GPU in use.

The NVIDIA Omniverse Audio2Face app for iClone 8 uses AI to produce expressive facial animations solely from audio input. In addition to generating natural lip-sync animations for multilingual dialogue, the latest standalone release supports singing animations, as well as full-spectrum editing with slider controls and a keyframe editor.

Along with accurate lip-sync, facial animations are significantly enhanced by nuanced facial expressions. Pairing Audio2Face with the iClone AccuFACE plug-in, powered by NVIDIA Maxine, Reallusion provides a flexible and multifaceted approach to facial animation, laying the groundwork with audio tracks and adding subtle expressions with webcams.

These latest AI-powered tools and creative app power-ups are available for NVIDIA RTX and GeForce RTX GPU owners.

All Things Small, Bright and Beautiful

China-based 3D visual effects artist Yao Chan finds inspiration and joy in the small things in life.

“As the weather gradually warms up, everything is rejuvenating and flowers are blooming,” said Chan. “I want to create an illustration that captures the warm and bright atmosphere of spring.”

Her 3D scene By the Window closely resembles a corner of her home filled with various succulent plants, pots and neatly arranged gardening tools.

“I think everyone has a place or moment that warms their heart in one way or another, and that’s an emotion I want to share with my audience,” said the artist.

Chan usually first sketches out her ideas in Adobe Photoshop, but with her real-life reference already set, she dove right into blocking out the scene in Blender.

Since she wanted to use a hand-painted texture style for modeling the vases and pots, Chan added Blender’s displace modifier and used a Voronoi texture to give the shapes a handcrafted effect.

Chan used hair from the particle system and played with roughness, kink and hair shape effects to accurately model fluffy plants like Kochia scoparia and moss.
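
As a rough Blender Python sketch of those two techniques — a Voronoi-driven displace modifier for the handcrafted look, and a hair particle system with roughness and kink for fluffy foliage — the snippet below uses placeholder names and values rather than anything from Chan’s actual scene.

```python
# Illustrative Blender (bpy) sketch of the techniques described above.
# Object names and parameter values are placeholders, not from the artist's file.
import bpy

obj = bpy.context.active_object  # e.g., a selected pot or plant mesh

# Handcrafted look: displace modifier driven by a Voronoi texture.
voronoi = bpy.data.textures.new("HandPaintedVoronoi", type='VORONOI')
voronoi.noise_scale = 0.5
displace = obj.modifiers.new(name="Handcrafted", type='DISPLACE')
displace.texture = voronoi
displace.strength = 0.03

# Fluffy plants: hair particle system with roughness and kink adjusted.
psys_mod = obj.modifiers.new(name="Fluff", type='PARTICLE_SYSTEM')
settings = psys_mod.particle_system.settings
settings.type = 'HAIR'
settings.count = 800
settings.hair_length = 0.15
settings.child_type = 'INTERPOLATED'
settings.kink = 'CURL'
settings.roughness_1 = 0.3
```

In practice, values like the displace strength, hair count and roughness would be dialed in interactively in the viewport.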

Blender Cycles’ RTX-accelerated OptiX ray tracing in the viewport, unlocked by Chan’s GeForce RTX GPU, ensured smooth, interactive modeling throughout her creative workflow.

Modeling and mesh work — complete.

For texturing, Chan referred to former In the NVIDIA Studio featured artist SouthernShotty’s tutorial, using the precision of geometry nodes to highlight the structure of objects and gradient nodes to control the color and transparency of plants.

Chan entered the node zone in Blender.

Chan then used the “pointiness” node to simulate the material of ceramic flower pots.

The “pointiness” node helped simulate materials.

Lighting was fairly straightforward, consisting of sunlight, a warm-toned key light, a cool-toned fill light and a small light source to illuminate the area beneath the table.

Several lights added brightness to the scene.

Chan also added a few volume lights in front of the camera.

Lighting from the side.

Finally, to give the image a more vintage look, Chan added noise to the final rendered image in compositing.

Final compositing work.

Chan’s AI-powered simulations and viewport renderings were accelerated by her RTX GPU.

“RTX GPUs accelerate workflows and ensure fluent video editing,” she said.

Check out Chan’s latest work on Instagram.

3D artist Yao Chan.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Every Company to Be an ‘Intelligence Manufacturer,’ Declares NVIDIA CEO Jensen Huang at Dell Technologies World

AI heralds a new era of innovation for every business in every industry, NVIDIA founder and CEO Jensen Huang said Monday during an appearance at Dell Technologies World.

“We now have the ability to manufacture intelligence,” Huang said during an on-stage conversation with Dell CEO Michael Dell. “The last Industrial Revolution was the manufacturing of software; previously, it was manufacturing electricity — now we are manufacturing intelligence.”

Together with Michael Dell, ServiceNow CEO Bill McDermott and Samsung SDS President and CEO Hwang Sung-woo, Huang shared his insights on the transformative impact of generative AI on the global economy and various industries.

“Every company at its foundation is intelligence — fundamentally every company is an intelligence manufacturer,” Huang emphasized, underscoring the potential of AI to create digital intelligence.

During the keynote, Dell and NVIDIA announced a slew of updates to the Dell AI Factory.

This includes the Dell PowerEdge XE9680L server with liquid cooling and eight NVIDIA Blackwell Tensor Core GPUs, the industry’s densest, energy-efficient rack-scale solution for large Blackwell GPU deployments.

The Dell NativeEdge platform will automate the delivery of NVIDIA AI Enterprise software, helping developers and IT operators easily deploy AI applications and solutions at the edge. Advancements also include the ability to simplify AI application development for faster time to value with the integration of NVIDIA NIM inference microservices, deployment automation and more.

Huang discussed the concept of an AI factory, likening it to the factories of the last Industrial Revolution that used water to produce electricity. In the current Industrial Revolution, data centers act as AI factories, transforming data and electricity into valuable data tokens distributed globally.

“What has happened is instead of just producing software, we’re now producing intelligence — that intelligence is formulated in the form of tokens that can then be expressed in any information modality that we’d like it to be,” Huang explained.

Huang underscored the importance of full-stack accelerated computing to enable this, noting NVIDIA’s advancements.

Together, NVIDIA and Dell are providing the world’s industries with a full-stack offering — including computing, networking, storage, services and software — that drives copilots, coding assistants, virtual customer service agents and industrial digital twins.

Michael Dell introduced the latest innovations for the Dell AI Factory with NVIDIA, emphasizing their ability to simplify and accelerate customers’ AI journeys.

“We are unleashing this super genius power. Everyone is going to have access to this technology — and it’s gonna get smarter,” Dell said.

The Dell AI Factory with NVIDIA, announced earlier this year, offers a full stack of AI solutions from data center to edge, enabling organizations to quickly adopt and deploy AI at scale.

This platform integrates Dell’s AI capabilities with NVIDIA’s cutting-edge technologies, providing customers with an expansive AI portfolio and an open ecosystem of technology partners.

The Dell AI Factory, based on the NVIDIA partnership, will help establish AI sovereignty for countries by enabling strong data security and customized AI service development.

Together, Dell and NVIDIA will bring these capabilities to companies, help stand them up and help develop new applications that enterprises can deploy, Huang said.

“Our partnership between us is really about that, literally from the ground up building AI factories and delivering it to the world’s enterprises as a solution,” Huang said.

Fight for Honor in ‘Men of War II’ on GFN Thursday

Whether looking for new adventures, epic storylines or games to play with a friend, GeForce NOW members are covered.

Start off with the much-anticipated sequel to the Men of War franchise or cozy up with some adorable pals in Palworld, both part of the five games GeForce NOW is bringing to the cloud this week.

No Guts, No Glory

For the cloud!

Get transported to the battlefields of World War II with historical accuracy and attention to detail in Men of War II, the newest entry in the real-time strategy series from Fulqrum Publishing.

The game features an extensive roster of units, including tanks, airplanes and infantry. With advanced enemy AI and diverse gameplay modes, Men of War II promises an immersive experience for both history enthusiasts and casual gamers.

Gear up, strategize and prepare to rewrite history. Get an extra fighting chance with a GeForce NOW Ultimate membership, which streams at up to 4K resolution and provides longer gaming sessions and faster access to games over a free membership.

Cloud Pals

Pal around in the cloud.

Step into a world teeming with enigmatic creatures known as “Pals” in the action-adventure survival game Palworld from Pocketpair. Navigate the wilderness, gather resources and construct a base to capture, tame and train Pals, each with distinct abilities. Explore the world, uncover secrets and forge alliances or rivalries with other survivors in online co-op play mode.

Embark on adventure with these trusty Pals through a GeForce NOW membership. With a Priority membership, enjoy up to six hours of uninterrupted gaming sessions, while Ultimate members can extend their playtime to eight hours.

Master New Games

More than a one-hit wonder.

Vanquish foes with a single strike in 1v1 weapon-based fighter Die by the Blade from Grindstone. Dive into a samurai punk world and wield a range of traditional Japanese weapons. Take up arms and crush friends in local or online multiplayer, or take on unknown warriors in online ranked matches. Outwit opponents in intense, tactical battles and master the art of the one-hit kill.

Check out the list of new games this week:

  • Men of War II (New release on Steam, May 15)
  • Die by the Blade (New release on Steam, May 16)
  • Colony Survival (Steam)
  • Palworld (Steam)
  • Tomb Raider: Definitive Edition (Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

NVIDIA, Teradyne and Siemens Gather in the ‘City of Robotics’ to Discuss Autonomous Machines and AI

Senior executives from NVIDIA, Siemens and Teradyne Robotics gathered this week in Odense, Denmark, to mark the launch of Teradyne’s new headquarters and discuss the massive advances coming to the robotics industry.

One of Denmark’s oldest cities and known as the city of robotics, Odense is home to over 160 robotics companies with 3,700 employees and contributes profoundly to the industry’s progress.

Teradyne Robotics’ new hub there, which includes cobot company Universal Robots (UR) and autonomous mobile robot (AMR) company MiR, is set to help employees maximize collaborative efforts, foster innovation and provide an environment to revolutionize advanced robotics and autonomous machines.

The grand opening showcased the latest AI robotic applications and featured a panel discussion on the future of advanced robotics. Speakers included Ujjwal Kumar, group president at Teradyne Robotics; Rainer Brehm, CEO of Siemens Factory Automation; and Deepu Talla, vice president of robotics and edge computing at NVIDIA.

“The advent of generative AI coupled with simulation and digital twins technology is at a tipping point right now, and that combination is going to change the trajectory of robotics,” commented Talla.

The Power of Partnerships

The discussion comes as the global robotics market continues to grow rapidly. The cobots market in Europe was valued at $286 million in 2022 and is projected to reach $6.7 billion by 2032, at a yearly growth rate of more than 37%.

Panelists discussed why teaming up is key to innovation for any company — whether a startup or an enterprise — and how physical AI is being used across businesses and workplaces, stressing the game-changing impact of advanced robotics.

The alliance between NVIDIA and Teradyne Robotics, which includes an AI-based intra-logistics solution alongside Siemens, showcases the strength of collaboration across the ecosystem. NVIDIA’s prominent role as a physical AI hardware provider is boosting the cobot and AMR sectors with accelerated computing, while its collaboration with Siemens is transforming industrial automation.

“NVIDIA provides all the core AI capabilities that get integrated into the hundreds and thousands of companies building robotic platforms and robots, so our approach is 100% collaboration,” Talla said.

“What excites me most about AI and robots is that collaboration is at the core of solving our customers’ problems,” Kumar added. “No one company has all the technologies needed to address these problems, so we must work together to understand and solve them at a very fast pace.”

Accelerating Innovation With AI 

AI has already made huge strides across industries and plays an important role in enhancing advanced robotics. Leveraging machine learning, computer vision and natural language processing, AI gives robots the cognitive capability to understand, learn and make decisions.

“For humans, we have our senses, but it’s not that easy for a robot, so you have to build these AI capabilities for autonomous navigation,” Talla said. “NVIDIA’s Isaac platform is enabling increased autonomy in robotics with rapid advancements in simulation, generative AI, foundation models and optimized edge computing.”

NVIDIA is working closely with the UR team to infuse AI into UR’s robotics software technology. In the case of autonomous mobile robots that move things from point A to B to C, it’s all about operating in unstructured environments and navigating autonomously.

Brehm emphasized the need to scale AI by industrializing it, allowing for automated deployment, inference and monitoring of models. He spoke about empowering customers to utilize AI effortlessly, even without AI expertise. “We want to enhance automation for more skill-based automation systems in the future,” he said.

As a leading robotics company with one of the largest installed bases of collaborative robots and AMRs, Teradyne has identified a long list of industry problems and is working closely with NVIDIA to solve them.

“I use the term ‘physical AI’ as opposed to ‘digital AI’ because we are taking AI to a whole new level by applying it in the physical world,” said Kumar. “We see it helping our customers in three ways: adding new capabilities to our robots, making our robots smarter with advanced path planning and navigation, and further enhancing the safety and reliability of our collaborative robots.”

The Impact of Real-World Robotics

Autonomous machines, or AI robots, are already making a noticeable difference in the real world, from industries to our daily lives. Industries such as manufacturing are using advanced robotics to enhance efficiency, accuracy and productivity.

Companies want to produce goods close to where they are consumed, with sustainability being a key driver. But this often means setting up shop in high-cost countries. The challenge is twofold: producing at competitive prices and dealing with shrinking, aging workforces that are less available for factory jobs.

“The problem for large manufacturers is the same as what small and medium manufacturers have always faced: variability,” Kumar said. “High-volume industrial robots don’t suit applications requiring continuous design tweaks. Collaborative robots combined with AI offer solutions to the pain points that small and medium customers have lived with for years, and to the new challenges now faced by large manufacturers.”

Automation isn’t just about making things faster; it’s also about making the most of the workforce. In manufacturing, automation aids smoother processes, ramps up safety, saves time and relieves pressure on employees.

“Automation is crucial and, to get there, AI is a game-changer for solving problems,” Brehm said.

AI and computing technologies are set to redefine the robotics landscape, transforming robots from mere tools to intelligent partners capable of autonomy and adaptability across industries.

Feature image by Steffen Stamp. Left to right: Fleur Nielsen, head of communications at Universal Robots; Deepu Talla, head of robotics at NVIDIA; Rainer Brehm, CEO of Siemens Factory Automation; and Ujjwal Kumar, president of Teradyne Robotics.
